nginx and cronolog

Since the last century, I’ve been in the habit of piping my web server log files through cronolog and off to automatically selected files in the pattern /var/log/http/2015/10/23/access.log. This works quite well for me because way back when, I wrote a little log processing script called Logmonster.

After all these years, Logmonster still runs a while after midnight (via periodic) and:

  • parses the web server logs by date and vhost
  • feeds them through Awstats
  • compresses them

Back when Logmonster was named Apache::Logmonster, it required installing cronolog and making a few small changes to httpd.conf:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %v" logmonster
CustomLog "| /usr/local/sbin/cronolog /var/log/http/%Y/%m/%d/access.log" logmonster
ErrorLog "| /usr/local/sbin/cronolog /var/log/http/%Y/%m/%d/error.log"

Years later, after I got tired of maintaining Apache, lighttpd was all shiny and new and similarly easy to configure; I made these changes to lighttpd.conf:

accesslog.format = "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %v"
accesslog.filename = "|/usr/local/sbin/cronolog /var/log/http/%Y/%m/%d/access.log"
server.errorlog = "/var/log/http/error.log"

Recently, after spending more time than I wanted determining why lighttpd and haproxy stopped playing nice together (most HTTP POST requests time out, for no good reason; remove haproxy, works fine; put nginx behind haproxy instead of lighttpd, works fine), I replaced lighttpd with nginx. That required figuring out how to get cronolog-style logging to work in nginx.

Nearly all my cronolog+nginx searches returned only instructions for setting up logging to a FIFO, which I thought was a nifty idea. So I created the FIFOs, configured nginx, and upon startup, nginx just hung. No idea why. It also requires setting up the FIFOs before nginx can start, so I didn’t love that idea anyway. Then I found instructions showing how to configure date-based log paths within nginx.conf. That’s exactly what I was looking for.

This is my solution for timestamp based logging with nginx:

log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$server_name"';
if ($time_iso8601 ~ "^(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})") {}
access_log /var/log/http/$year/$month/$day/access.log main;
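One caveat worth noting: nginx opens the log path itself and will not create missing directories, so the dated directories have to exist before midnight rolls the path over. A hypothetical /etc/crontab entry (BSD date syntax, as on FreeBSD; the schedule and path are illustrative) could pre-create tomorrow’s directory:

```
# Pre-create tomorrow's log directory just before midnight.
# BSD date syntax; GNU date would be: date -d tomorrow +\%Y/\%m/\%d
55 23 * * * root mkdir -p /var/log/http/$(date -v+1d +\%Y/\%m/\%d)
```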

Is a change of political climate in the air?

On May 21st, leaders representing 6.5 million companies in 130 countries called on policy makers to shift towards low-carbon economies, including carbon pricing and an end to fossil-fuel subsidies.

Yesterday, June 1st, six oil and gas “Majors” called on the UN Convention on Climate Change to introduce carbon pricing and markets.

If this keeps up, Fox News will admit climate change is real, Rick Perry will admit that government can create jobs, and lions will lie down with lambs.

Apple data centers on 100% renewable power

Apple is spending an eye-popping $850 million to build a ginormous solar farm (280 megawatts) that will power their entire California operations. This new solar farm is not to be confused with the 70MW solar farm they’re building in Arizona, the $55 million “under way” third solar farm (17.5MW) in North Carolina, the two 20MW solar farms they’re building in China, the existing 20MW solar farm near Reno, NV, or the two existing 20MW solar farms in North Carolina.

The backstory is that in 2010, Apple wanted to buy renewable energy from Duke to power their Maiden, N.C. data center. It wasn’t even legal in North Carolina. In 2011 Apple bypassed the state’s coal lobby by purchasing 100 acres of land, and in 2012 they finished building (est. $100 million) the first non-utility 20MW solar farm. At the same time, they also built a 5MW fuel cell farm. In 2013 they doubled their fuel cell farm to 10MW and built another 20MW solar farm. Apple has since been producing 100% of the power they need in N.C.

While I believe that Tim Cook is sincere about reducing Apple’s carbon footprint, I also think it’s likely that spending over a billion dollars on solar panels is a very good investment. Apple is famously cash rich and by spending today and owning the solar farms, Apple fixes their energy prices at today’s rates for the next 30 years. Apple has taken a large and variable cost and turned it into a fixed cost that is no longer subject to price inflation or fluctuation. What Apple is also purchasing is energy stability.

Apple is also becoming an energy supplier. For the first 10 years, PG&E will purchase 150MW of the farm’s production and Apple gets the remaining 130MW. For the final 20 years, Apple gets 100% of production. It’s likely that their operations will have expanded to utilize the power (as has the NC data center), but if not, they’ll have little trouble selling their surplus capacity.

While Apple was first in the “okay then, we’ll build it ourselves” solar game, the even bigger story is that 2014 was the year solar arrived on Main Street USA. In 2014 alone, nearly 70% of the world’s solar power generation came online, with several companies having more installed solar than Apple: Wal-Mart (105MW), Kohl’s (50MW), and Costco (48MW). IKEA is not far behind with 39MW. Apple isn’t even the largest purchaser of solar, as Intel, Kohl’s, Whole Foods, Dell, and Johnson & Johnson all purchase more solar power than Apple. What was so special about solar in 2014?

Swanson’s Law observes that solar modules tend to drop in price by 20% for every doubling of cumulative shipped volume. Apple deployed 60MW between 2012 and 2014, and during that same time, photovoltaic capacity more than doubled. By being out in front and building not just demand but also solar capacity, Apple helped make 2014 the year of solar grid parity in three states: California, Arizona, and Hawaii. It is predicted that grid parity will arrive in “many” US markets in 2015, and Deutsche Bank predicts solar grid parity for all 50 states in 2016. With Apple deploying another 407MW of solar in just 2015-2016, that prediction seems like a slam dunk.
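The arithmetic behind Swanson’s Law is easy to sketch (a toy calculation; the function name and dollar figures here are illustrative, not real module prices):

```python
# Toy illustration of Swanson's Law: module price falls ~20% for
# every doubling of cumulative shipped volume.
def swanson_price(initial_price, doublings, learning_rate=0.20):
    """Module price after `doublings` doublings of cumulative volume."""
    return initial_price * (1 - learning_rate) ** doublings

# After three doublings, a $1.00/W module falls to about $0.51/W.
print(round(swanson_price(1.00, 3), 2))
```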

Bandwidth shaping on Mac OS X

A few years ago I sampled each of the “All My Music In the Cloud” services (iTunes Match, Amazon Cloud, Google Play). For them to stream my music back to all my devices, I first had to upload all my music (82 GB of data) to each service.

The iTunes Match upload was far smaller because Apple has the world’s largest music library and iTunes Match only uploaded the songs that weren’t already in their collection. That should have made the upload process quick, except that something about the upload mechanism Apple uses caused severe network congestion and network stalls of 5 full seconds. I blamed it on iTunes and used the built-in IPFW firewall to plumb a 256Kbps pipe so that iTunes Match uploads would stop erroring out and I could use my internet connection during the long upload process.

ipfw pipe 1 config bw 256KBytes/s  # create pipe 1, capped at 256 KB/s
ipfw add 1 pipe 1 src-port 443     # shape traffic coming from port 443
ipfw add 2 pipe 1 dst-port 443     # shape traffic headed to port 443 (the uploads)

That IPFW solution worked just as well for throttling the other cloud music services.

Fast-forward a couple of years to Mac OS 10.10.3 and the new Photos app that stores all my photos in the cloud. There’s a process named photolibraryd and it seems to have that same nasty behavior. The symptoms are identical, but I can’t use IPFW because Apple removed it in OS X Yosemite. I understand, as I too stopped using IPFW years ago in favor of PF. But Apple doesn’t provide ALTQ, the PF bandwidth shaper, so the PF firewall has no bandwidth shaping abilities. Or so I thought.

After a bit of hunting, I found the Network Link Conditioner within the Hardware IO Tools for Xcode. Even better, it’s a GUI interface for accomplishing my goal. I downloaded it, set up a 256Kbps upload limit, and I could once again let photos upload while I use my internet connection.

By what dark magic has Apple accomplished this task?  Inspecting the network interface didn’t turn up anything special so I checked the firewall rules (sudo pfctl -sa) and found dummynet rules! In the PF ruleset! And increasing dummynet packet counters. Hmmmm.

Dummynet is part of IPFW, so apparently rather than implementing ALTQ, Apple decided to modify PF to support dummynet. The man page for pf.conf doesn’t even contain the term ‘dummy’ but I expect that’ll come eventually. In the meantime, the intarwebs can help you find documentation for how to write rules for it.
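For the curious, the general shape appears to be dnctl plus a pf ‘dummynet’ rule (a sketch pieced together from poking at the system, since none of this is on the man page; the pipe number, bandwidth, and port are illustrative):

```
# Create a dummynet pipe with dnctl (the syntax mirrors ipfw's pipes):
dnctl pipe 1 config bw 256Kbit/s

# Then steer traffic through it with a pf 'dummynet' rule loaded
# via pfctl, e.g.:
#   dummynet out proto tcp from any to any port 443 pipe 1
```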

An auspicious start to DOS programming

In 1992 I was a young geek of 19 years. My programming experience consisted of the BASIC programs in the manual that came with our Commodore 64 and a few others from our school’s Apple II lab. I had also written a few HyperCard and FileMaker apps on the Mac in my bedroom, where I did all the typesetting for my Dad’s print shop. [Thanks so much, Dad, for buying that first Mac Plus instead of a Compugraphic typesetting machine.]

My vocational training in Mechanical Drafting had landed me an entry-level position at Kysor/Cadillac as the blueprint clerk. Before long I rearranged the print room to maximize the efficiency of the engineers and myself, leaving me with hours of spare time each day. Often I would roam the engineering department in search of engineering projects, much to the delight of the engineers, who could often find drudge work to offload.

During one of these lulls, I was chatting with David, a bright young lad who worked in the QA department. David was also quite fond of computers and told me of an escapade in which some students at his school had written a login simulator that captured and stored passwords when users logged into a system infected with their program.

Our engineering files were stored on a Novell Netware server connected by a token ring network. Each DOS computer logged in using a Novell program (login.exe, IIRC). The password capturing program seemed like an interesting challenge so I acquired my first DOS compiler (Qbasic or PowerBasic, I can’t recall which I used for this task) and wrote login.bas. I simulated the login screen perfectly, stored the passwords to a file, and then passed them on to the real login program, logging the user in. It offered the user no indication that foul play was at hand.

Pleased with my results, I showed Rick, our network admin. I explained that I hadn’t inspected the contents of the file, didn’t know what was in it, and turned my back while he inspected it. It turns out that Rick wasn’t terribly fond of being informed that his network security wasn’t all that secure. A few of his heated words I recall were, “that’s not your job!” He immediately escalated the matter to Keith, our VP of Engineering, intent on having me fired.

On that day, it was quite fortunate for me that I had set a precedent of doing a lot of engineering work that was not my job. Unbeknownst to me, the wheels of my first promotion were already set in motion, specifically because of the extra-curricular “not my job” work I had been doing. That day ended with me getting a stern talking-to. Soon thereafter, I was promoted and my new job involved writing software for Kysor.