TLS management

Let’s Encrypt, TLS certificates, and HAproxy

I’m evolving. As always, the change is being driven by the most pernicious of motivators: pain. I’ve sold, installed, and upgraded SSL/TLS certificates for years. It’s always been mildly painful: I maintain an offline CA where I generate all the keys and CSRs (certificate signing requests). Then I submit the CSRs to whichever Certificate Authority / reseller has the best current pricing, get back the new signed certificate from the CA, archive it, and finally install the key, crt, and CA chain file at the destination.

It can be painful and annoying enough that clients regularly hire me to install their certificates for them. To reduce the pain, I’ve encouraged long-duration (3+ year) certs. I also have custom scripts tailored to my private CA to reduce the keystrokes. Even so, managing a few dozen certificates was onerous. It didn’t help that every application / daemon (apache, nginx, lighttpd, haproxy, dovecot, qmail, postfix, haraka, etc.) has its own special syntax, and sometimes format, for configuring TLS certificates.

Two things happened in 2016 that made TLS management not suck:

  1. The Internet Security Research Group released [Let’s Encrypt](https://letsencrypt.org). It’s a free and highly automated Certificate Authority that validates domain ownership (via DNS or HTTP) and issues certificates in seconds.
  2. I’ve moved all my web servers behind HAproxy. Now all TLS certs for web servers get deployed to haproxy and the job is done. No messing with lighttpd, apache, or nginx configs. Configure HAproxy to get an A+ at SSL Labs and it covers all the web servers.

Let’s Encrypt provides free signed certificates in just a few seconds, so long as one is willing to invest the time and energy into automating it. I’ve settled on [acme.sh](https://github.com/Neilpang/acme.sh) as my preferred client and once I’ve generated a certificate, it automatically renews and re-deploys it when needed. Just right.
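
The whole flow looks roughly like this (a sketch rather than my exact setup: the domain, webroot, and certificate paths are placeholders, and the flags are the standard ones from the acme.sh documentation):

# issue a certificate, letting acme.sh answer the HTTP challenge from the site's webroot
acme.sh --issue -d www.example.com -w /usr/local/www/example

# install the key and full chain, then rebuild the single .pem haproxy wants and reload it
acme.sh --install-cert -d www.example.com \
  --key-file       /usr/local/etc/ssl/www.example.com.key \
  --fullchain-file /usr/local/etc/ssl/www.example.com.crt \
  --reloadcmd "cat /usr/local/etc/ssl/www.example.com.crt /usr/local/etc/ssl/www.example.com.key > /usr/local/etc/haproxy/certs/www.example.com.pem && service haproxy reload"

From then on, acme.sh’s cron job renews the certificate as it nears expiry and re-runs the reload command, which is the “automatically renews and re-deploys” part.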

HAproxy now does all the TLS termination, URL routing, scheme upgrades (http -> https), and rewrites. This greatly simplifies the backend web server configs. Need mod_perl? Use Apache. Need CGI support? Use lighttpd. For everything else I use nginx. Now all of them are simpler to deploy and upgrade.
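
The haproxy side, in minimal sketch form (the hostnames, ports, and backend names are placeholders rather than my production config, and it assumes “mode http” is set in the defaults section):

frontend www
    bind *:80
    bind *:443 ssl crt /usr/local/etc/haproxy/certs/
    redirect scheme https code 301 if !{ ssl_fc }
    acl host_cgi  hdr(host) -i cgi.example.com
    acl host_perl hdr(host) -i perl.example.com
    use_backend lighttpd if host_cgi
    use_backend apache   if host_perl
    default_backend nginx

backend nginx
    server web1 127.0.0.1:8080 check

backend apache
    server web2 127.0.0.1:8081 check

backend lighttpd
    server web3 127.0.0.1:8082 check

Pointing “ssl crt” at a directory loads every .pem found in it, which is what makes certificate deployment a drop-a-file-and-reload affair. The SSL Labs grade is mostly tuned in the global section (ssl-default-bind-options and ssl-default-bind-ciphers), which I’ve left out of this sketch.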

US manufacturing

The USA is still one of the largest manufacturers in the world. Our manufacturing sector is producing as much today as it ever has:


[chart: US manufacturing output; source: tradingeconomics.com]

While it’s true that some (a small fraction of) US manufacturing jobs have moved overseas (especially textiles), the vast majority of manufacturing job losses are due to automation. It is machines that have taken those jobs, not foreigners or immigrants.

On balance, NAFTA was a very big win for the USA and our trading partners Canada and Mexico. The primary reason NAFTA hasn’t helped Mexico far more is our ill-conceived and almost entirely ineffective war on drugs.

heat pump water heater

In July I purchased a GE GeoSpring ($700 at Lowes in Seattle) 50 gallon heat pump water heater. I installed it myself in the basement. It’s wired the same as a typical electric water heater, so I just ran a new circuit of 10 gauge wire and hooked it up.

Heat pump water heaters make more noise than traditional water heaters. If I happen to walk by the open door to the basement, I can hear it but I don’t consider it “loud.” It makes a little less noise than a dehumidifier, a lot less noise than an old dishwasher, but a fair bit more noise than my new ultra-quietest-one-available dishwasher. I’d guess in the neighborhood of 65 decibels.

Heat pump water heaters cool the area they’re in. I consider that a feature, as the basement is our “cool dry” storage area. Despite the output of cool air, the basement was about 64° before I put the heat pump water heater in and it’s still usually 64° after. That’s because the concrete floor and walls have lots of thermal mass so it takes a LOT of input to change the temps significantly.

A heat pump also dehumidifies the air; the water it extracts runs off through a condensate drain. Over the course of a week, the condensate measured about a quart for our family of four. Not huge, not “replaces a dehumidifier,” but welcome nevertheless.

The install docs recommend installing it in a garage or basement and I agree. You could put it in a large closet or pantry, but you’d want to have insulated doors if it’s adjacent to a “relaxing” area of the house.

Thus far, I’m very fond of my heat pump water heater.

nginx and cronolog

Since the last century, I’ve been in the habit of piping my web server log files through cronolog and off to automatically selected files in the pattern /var/log/http/2015/10/23/access.log. This works quite well for me because way back when, I wrote a little log processing script called Logmonster.

After all these years, Logmonster still runs a while after midnight (via periodic) and:

  • parses the web server logs by date and vhost
  • feeds them through Awstats
  • compresses them

Back when Logmonster was named Apache::Logmonster, it required installing cronolog and making a few small changes to httpd.conf:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %v" logmonster
CustomLog "| /usr/local/sbin/cronolog /var/log/http/%Y/%m/%d/access.log" logmonster
ErrorLog "| /usr/local/sbin/cronolog /var/log/http/%Y/%m/%d/error.log"

Years later, after I got tired of maintaining Apache, lighttpd was all shiny and new, and it was similarly easy to configure with these changes to lighttpd.conf:

accesslog.format = "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %v"
accesslog.filename = "|/usr/local/sbin/cronolog /var/log/http/%Y/%m/%d/access.log"
server.errorlog = "/var/log/http/error.log"

Now, after spending more time than I wanted determining why lighttpd and haproxy stopped playing nicely together (most HTTP POST requests time out, for no good reason; remove haproxy and it works fine; replace lighttpd with nginx behind haproxy and it works fine), I replaced lighttpd with nginx. That required figuring out how to get cronolog-type logging to work in nginx.

Nearly all my cronolog+nginx searches returned only instructions for setting up logging to a FIFO, which I thought was a nifty idea. So I created the FIFOs, configured nginx, and upon startup, nginx just hung. No idea why. It also requires setting up the FIFOs before nginx can start, so I didn’t love that idea. Then I found instructions showing how to configure log rotation within nginx.conf. That’s exactly what I was looking for.

This is my solution for timestamp based logging with nginx:

log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$server_name"';
if ($time_iso8601 ~ "^(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})") {}
access_log /var/log/http/$year/$month/$day/access.log main;
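
One caveat with variables in the access_log path: nginx opens the log file on demand and will not create missing directories, so the dated directory has to exist before the first request of the day arrives. A sketch of the cron/periodic job I’d pair with it (BSD date syntax; the www:www owner is an assumption, use whatever user your nginx workers run as):

# create today's and tomorrow's log directories ahead of time
today="/var/log/http/$(date +%Y/%m/%d)"
tomorrow="/var/log/http/$(date -v+1d +%Y/%m/%d)"
mkdir -p "$today" "$tomorrow"
chown www:www "$today" "$tomorrow"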

Is a change of political climate in the air?

On May 21st, leaders representing 6.5 million companies in 130 countries called on policy makers to shift towards low-carbon economies, including carbon pricing and an end to fossil-fuel subsidies.

Yesterday, June 1st, six oil and gas “Majors” called on the UN Convention on Climate Change to introduce carbon pricing and markets.

If this keeps up, Fox News will admit climate change is real, Rick Perry will admit that government can create jobs, and lions will lie down with lambs.

Apple data centers on 100% renewable power

Apple is spending an eye-popping $850 million to build a ginormous solar farm (280 megawatts) that will power their entire California operations. This new solar farm is not to be confused with the 70MW solar farm they’re building in Arizona, the $55 million “under way” third solar farm (17.5MW) in North Carolina, the two 20MW solar farms they’re building in China, the existing 20MW solar farm near Reno, NV, or the two existing 20MW solar farms in N. Carolina.

The backstory is that in 2010, Apple wanted to buy renewable energy from Duke to power their Maiden N.C. data center. It wasn’t even legal in N. Carolina. In 2011 Apple bypassed the N.C. coal lobby by purchasing 100 acres of land and in 2012 they finished building (est. $100 million) the first non-utility 20MW solar farm. At the same time, they also built a 5MW fuel cell farm. In 2013 they doubled their fuel cell farm to 10MW and built another 20MW solar farm. Apple has since been producing 100% of the power they need in N.C.

While I believe that Tim Cook is sincere about reducing Apple’s carbon footprint, I also think it’s likely that spending over a billion dollars on solar panels is a very good investment. Apple is famously cash rich and by spending today and owning the solar farms, Apple fixes their energy prices at today’s rates for the next 30 years. Apple has taken a large and variable cost and turned it into a fixed cost that is no longer subject to price inflation or fluctuation. What Apple is also purchasing is energy stability.

Apple is also becoming an energy supplier. For the first 10 years, PG&E will purchase 150MW of production and Apple gets 130MW. For the remaining 20 years, Apple gets 100% of production. It’s likely that their operations will have expanded to utilize the power (as the NC data center has) but if not, they’ll have little trouble selling their surplus capacity.

While Apple was first in the “okay then, we’ll build it ourselves” solar game, the even bigger story is that 2014 was the year solar arrived on Main Street USA. In 2014 alone, nearly 70% of the world’s solar power generation came online, with several companies having more installed solar than Apple: Wal-Mart (105MW), Kohl’s (50MW), and Costco (48MW). IKEA is not far behind with 39MW. Apple isn’t even the largest purchaser of solar, as Intel, Kohl’s, Whole Foods, Dell, and Johnson & Johnson all purchase more solar power than Apple. What was so special about solar in 2014?

Swanson’s Law observes that solar modules tend to drop in price by 20% for every doubling of cumulative shipped volume. Apple deployed 60MW between 2012-2014, and during that same time, photovoltaic capacity more than doubled. By being out in front and building not just demand but also solar capacity, Apple helped make 2014 the year of solar grid parity in 3 NE states, California, Arizona, and Hawaii. It is predicted that grid parity will arrive in “many” US markets in 2015, and Deutsche Bank predicts solar grid parity for all 50 states in 2016. With Apple deploying another 407MW of solar in just 2015-2016, that prediction seems like a slam dunk.

Bandwidth shaping on Mac OS X

A few years ago I sampled each of the “All My Music In the Cloud” services (iTunes Match, Amazon Cloud, Google Play). For them to stream my music back to all my devices, I first had to upload all my music (82 GB of data) to each service.

The iTunes Match upload was far smaller because Apple has the world’s largest music library and iTunes Match only uploaded the songs that weren’t already in their collection. That should have made the upload process quick, except that something about the upload mechanism Apple uses caused severe network congestion and network stalls of 5 full seconds. I blamed it on iTunes and used the built-in IPFW firewall to plumb a 256Kbps pipe so that iTunes Match uploads would stop erroring out and I could use my internet connection during the long upload process.

# create pipe 1, limited to 256 KBytes/s
ipfw pipe 1 config bw 256KBytes/s
# send traffic to and from port 443 (the uploads run over HTTPS) through pipe 1
ipfw add 1 pipe 1 src-port 443
ipfw add 2 pipe 1 dst-port 443

That IPFW solution worked just as well for throttling the other cloud music services.

Fast-forward a couple years to Mac OS 10.10.3 and the new Photos app that stores all my photos in the cloud. There’s a process named photolibraryd and it seems to have that same nasty behavior. The symptoms are identical, but I can’t use IPFW because Apple removed it in OS X Yosemite. I understand, as I too stopped using IPFW years ago in favor of PF. But Apple doesn’t provide ALTQ, the PF bandwidth shaper. So the PF firewall has no bandwidth shaping abilities. Or so I thought.

After a bit of hunting, I found the Network Link Conditioner within the Hardware IO Tools for Xcode. Even better, it’s a GUI interface for accomplishing my goal. I downloaded it, set up a 256Kbps upload limit, and I could once again let Photos upload while I use my internet connection.

By what dark magic has Apple accomplished this task?  Inspecting the network interface didn’t turn up anything special so I checked the firewall rules (sudo pfctl -sa) and found dummynet rules! In the PF ruleset! And increasing dummynet packet counters. Hmmmm.

Dummynet is part of IPFW, so apparently rather than implementing ALTQ, Apple decided to modify PF to support dummynet. The man page for pf.conf doesn’t even contain the term ‘dummy’ but I expect that’ll come eventually. In the meantime, the intarwebs can help you find documentation for how to write rules for it.
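
For reference, the recipe from those writeups looks roughly like this (a sketch pieced together from third-party documentation, not Apple’s; the anchor name is arbitrary and the port and bandwidth are just the values from my iTunes Match throttling):

# create a 256Kbit/s dummynet pipe (dnctl is the dummynet control tool that ships with OS X)
sudo dnctl pipe 1 config bw 256Kbit/s

# load a rule into a private anchor that pushes outbound HTTPS traffic through pipe 1
echo "dummynet out proto tcp from any to any port 443 pipe 1" | sudo pfctl -a throttle -f -

# reload the main ruleset with the anchor declared, then enable pf
(cat /etc/pf.conf; echo 'dummynet-anchor "throttle"'; echo 'anchor "throttle"') | sudo pfctl -f -
sudo pfctl -E

Reloading the stock ruleset (sudo pfctl -f /etc/pf.conf) puts pf back the way it was.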

An auspicious start to DOS programming

In 1992 I was a young geek of 19 years. My programming experience consisted of the BASIC programs in the manual that came with our Commodore 64 and a few others in our school’s Apple II lab. I had also written a few HyperCard and FileMaker apps on the Mac in my bedroom, where I did all the typesetting for my Dad’s print shop. [Thanks so much, Dad, for buying that first Mac Plus instead of a Compugraphic typesetting machine.]

My vocational training in Mechanical Drafting had landed me an entry level position at Kysor/Cadillac as the blueprint clerk. Before long I rearranged the print room to maximize the efficiency of the engineers and myself, leaving me with hours of spare time each day. Often I would roam the engineering department in search of engineering projects, much to the delight of the engineers, who could often find drudge work to offload.

During one of these lulls, I was chatting with David, a bright young lad who worked in the QA department. David was also quite fond of computers and told me of an escapade in which some students at his school had written a login simulator that captured and stored passwords when users logged into a system infected with their program.

Our engineering files were stored on a Novell Netware server connected by a token ring network. Each DOS computer logged in using a Novell program (login.exe, IIRC). The password capturing program seemed like an interesting challenge, so I acquired my first DOS compiler (Qbasic or PowerBasic, I can’t recall which I used for this task) and wrote login.bas. I simulated the login screen perfectly, stored the passwords to a file, and then passed them on to the real login program, logging the user in. It offered the user no indication that foul play was at hand.

Pleased with my results, I showed Rick, our network admin. I explained that I hadn’t inspected the contents of the file and didn’t know what was in it, and I turned my back while he inspected it. It turns out that Rick wasn’t terribly fond of being informed that his network security wasn’t all that secure. A few of his heated words I recall were, “that’s not your job!” He immediately escalated the matter to Keith, our VP of Engineering, intent on having me fired.

On that day, it was quite fortunate for me that I had set a precedent of doing a lot of engineering work that was not my job. Unbeknownst to me, the wheels of my first promotion were already set in motion, specifically because of the extra-curricular “not my job” work I had been doing. That day ended with me getting a stern talking-to. Soon thereafter, I was promoted and my new job involved writing software for Kysor.