Open Source = more secure?

One of the many arguments Open Source advocates make is that OSS is more secure because “anyone and everyone” can review the source. This critical crypto bug in the GnuTLS library takes that idea out back and shoots it. Execution style.

(I’m not being critical of OSS. After all, I’m an OSS author and contribute to quite a few OSS projects. There are plenty of compelling arguments for OSS, but increased security isn’t one of them.)

Pencil Sharpeners

While I was volunteering in my son’s classroom, his teacher asked if I’d sharpen some pencils: “Having a supply of sharp pencils is the bane of my existence!” I grabbed her basket of pencils and headed to the sharpening station in a shared resource room. There I found this lovely little X-Acto XLR 1818 Electric Pencil Sharpener.

X-Acto sharpener

I sharpened about 25 pencils before the unit overheated. After 30 minutes it still refused to work. After 45 minutes I was able to sharpen 20 more pencils before it overheated again. Frustrated, I decided to engineer a better solution.

Design considerations:

• manual sharpeners don’t overheat
• teachers might be upset if I removed the electric sharpener
• pencil shavings should be dealt with
• doesn’t require [much] more space than an 11×17″ box lid
• one-handed operation is desirable

The first step was to acquire some good pencil sharpeners. I read a bunch of Amazon reviews and ultimately found penciltalk.org, where pencil-sharpening nerds hang out and write about their passion for sharpeners. I whittled my list down to these four, which I purchased:

• Classroom Friendly
• Classic Manual (Deli 0620)
• Stanley Bostitch MPS1BLK (Amazon)
• Westcott Axis iPoint Evolution Electric Heavy Duty (15509) Amazon

After the sharpeners arrived, I grabbed a sheet of graph paper and a ruler. I measured how much clearance each sharpener needed to avoid skinned knuckles. Then I produced this sketch.

Pencil Station

With a design in hand, I headed to the garage and found an 8’ piece of 1” thick shelving. Because MDF wouldn’t hold a dado joint, I glued each edge and screwed L-brackets into the 4 back corners (not pictured). Then I added angle brackets to stiffen up the front. The result is a sharpening station that’s very heavy and stable.

Pencil Station

All three manual sharpeners came with a round L bracket designed to mount on the edge of a tabletop. I wanted a more secure attachment and the slippery shelf surface didn’t help. The solution was to add a layer of non-slip padding between the sharpener and shelf. Combined with the included bracket, the sharpeners have remained firmly attached for half a school year.

To keep the electric sharpeners from sliding when a pencil is pressed into them, I applied pads of industrial-strength Velcro hooks to the bottom shelf and the matching loop pads to the sharpeners. Now they too remain firmly in place while sharpening.

I am now experienced in bulk pencil sharpening. Every pencil in that basket is very sharp. I’m a fan of the Westcott and Classroom Friendly sharpeners. The fastest technique I’ve found is to load the Classroom Friendly, which grips the pencil and allows one-handed sharpening. I sharpen that pencil with my right hand while sharpening another in the Westcott with my left. Both are fast and produce a good point. I can settle into a rhythm where I’m cranking out two sharp pencils every 10 seconds.

I can see no evidence of anyone using the X-Acto any more. The Bostitch is a piece of junk: it will only sharpen perfect pencils, it doesn’t produce a great point, and emptying the shavings is much harder than with the Classroom Friendly or the electric sharpeners.

What do the teachers think?

Hi Matt,

When I spoke with our staff this morning about pencil sharpeners, their eyes lit up! They would love to have one station per grade level (two for kindergarten). The total would be ten, if possible.

Mike
—-
Mike VanOrden – Principal

Value of a Hijacked PC

I’ve recently been writing mail server software that detects whether the remote host is a compromised PC sending spam (frequently the case) or a legitimate mail server whose connection should be permitted. The quantity of hijacked PCs is staggering, which made this article all the more interesting:

http://krebsonsecurity.com/2012/10/the-scrap-value-of-a-hacked-pc-revisited/

The value of a hacked PC
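That classification is mostly educated guessing. One common signal (not necessarily what my software relies on) is the connecting IP’s reverse DNS: hijacked home PCs usually sit on dynamic or residential address pools with generic PTR names, while legitimate mail servers have stable, meaningful rDNS. A rough sketch of that heuristic, with illustrative patterns:

#!/bin/sh
# Rough sketch only: classify a connecting IP by its reverse DNS.
# The patterns below are illustrative; real filtering uses many more signals.
IP="$1"
PTR=$(host "$IP" | awk '/domain name pointer/ {print $NF}')

if [ -z "$PTR" ]; then
    echo "$IP: no rDNS - probably not a real mail server"
elif echo "$PTR" | grep -Eiq 'dyn|dhcp|pool|cable|dsl|ppp'; then
    echo "$IP: dynamic-looking rDNS ($PTR) - likely a hijacked PC"
else
    echo "$IP: rDNS $PTR looks like a real mail host"
fi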

Nissan Leaf musings

In June of 2013, we leased a 2013 Nissan Leaf SV. It has since been our daily driver, covering a 40-mile round-trip commute. It’s also the first car to leave the driveway on weekends. We like the car. A lot. It is fun to drive, spacious even with 4 passengers, and fits 6 full shopping bags in the trunk. Our intent is to use all 12,000 miles per year allowed by the lease. After a few months of driving, the fuel results are in:

  • electric bill: increased $30/mo.
  • gasoline: decreased by $100/mo.

Switching to electric resulted in an expense reduction of $70/mo. Our lease payment is $220/mo. If we reduce the lease payment by the fuel savings, our net payment is $150/mo. That’s a low payment for a $32,000 car.

Considering that 90% of our electricity here is renewable (hydro + wind), that electric cars are about 90% efficient, and that internal combustion engines (ICE) are less than 30% efficient, the environmental impact of switching to electric is a big bonus.

The range is occasionally a limit. Lucas and I drove it to Meany Lodge. We’d have made the 75-mile trip except for the climb over Snoqualmie Pass. We had to stop at the pass and ‘juice up’ for half an hour, adding 10 miles of range. Then the Leaf nimbly climbed the forest service roads up to the lodge. We returned home with 25 miles to spare. Thousands of feet of elevation make a meaningful difference in range.

It would be challenging if our only car were electric. We can’t pile 4 of us and luggage into the Leaf and drive to the Redwood Forests. As much as we like driving the Leaf, the mountain passes are just far enough away, uphill, in cold weather, that we’ll be taking the Fusion hybrid (37 mpg) on ski trips. For the 3% of our household driving that the Leaf doesn’t cover, range is the limiting factor.

Authoritative DNS servers, 2 or 3?

If you have some other rationale for [having a third DNS server], please feel free to elaborate.

The most basic reason for a 3rd DNS server is to increase availability. Every DNS primer advises having at least two DNS servers, geographically dispersed, and on different networks. Nearly every DNS operator starts out with two servers in the same rack, in the same subnet. Eventually, a failure will snowball and impact the many services above DNS, teaching the operator the value of isolation.

Even with 2 servers and appropriate geographic and network redundancy, eventually a failure (fiber cut, power failure, server crash, etc.) will take 50% of your authoritative DNS offline for an extended period. During such failures, users will notice and complain. Within a day. In decades of experience, I’ve noticed that when the DNS server count is greater than two, a DNS server can be down for weeks before the first complaint arrives. Weeks.

Unless the operator has excellent monitoring tools (a small percentage do), a DNS server failure can go unnoticed for hours or days. Some failures are subtle, such as zone file corruption that prevents a single zone from being published. A third server also reduces the share of queries that fail during an outage from 50% to 33%.
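Monitoring doesn’t have to be fancy, either. A minimal cron-driven availability check might look like the sketch below (the zone and server names are placeholders); cron will mail any output to the operator.

#!/bin/sh
# Query each authoritative server directly; warn if any fails to answer.
# example.com and the ns* hostnames are placeholders.
ZONE="example.com"
for NS in ns1.example.com ns2.example.com ns3.example.com; do
    ANSWER=$(dig +short +time=3 +tries=1 SOA "$ZONE" "@$NS")
    [ -n "$ANSWER" ] || echo "WARNING: $NS returned no SOA for $ZONE"
done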

For most operators, the more common reason for 3 servers is performance. Locating DNS servers geographically closer to users reduces the round-trip time of DNS lookups. This can also be achieved with 2 DNS servers and shared unicast (anycast) addresses (http://www.ietf.org/rfc/rfc3258.txt). For those not running anycast, having three or more DNS servers accomplishes the same purpose. Three seems to be the “sweet spot.” If you survey the most popular sites, you’ll find they usually have 3 or more NS records.
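That’s easy to verify with dig; counting a zone’s NS RRset shows how many name servers it advertises (the domain below is just an example):

# How many NS records does this zone publish?
dig +short NS example.com | wc -l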

My cadillac.net DNS cluster has one DNS server in Paris, at the request of a French client. When all 3 servers were in the USA, their French web sites “felt” slower. Moving one DNS server to Paris fixed that. Because DNS recursors remember how quickly each authoritative server responds, they tend to favor the nearest ones, resulting in better performance for end users. The difference is measurable with network tools, but more importantly, it’s perceptible to end users.

Another of my European clients has a significant portion of their user base in the USA. They have two DNS servers in Europe and one on each coast of the USA, so DNS responses are fast for everyone. In 2010, they moved a couple of their more popular domains to a premium DNS provider for a week-long trial. Despite the $1,500/mo price of the “Enterprise” DNS service, they were unable to realize the promised increase in DNS performance or web traffic. We believe that’s because we already had DNS servers geographically near the majority of their user base.

My clients in Australia prefer a couple of DNS servers in the USA and one along the Pacific Rim, for the same reason.

For most providers, the majority of their DNS traffic is local, covering less than 1,000km geographically. In those cases, the remainder may not be worth optimizing for. When it is, having a DNS server you can locate nearer your users can deliver substantial performance improvements.

Having three DNS servers, especially when each is in a different data center and on a different network, identifies you as an experienced DNS operator who understands why you’d want number 3.

FreeBSD aliases, done well

While I was consulting on a FreeBSD server, the owner mentioned that some of his IP aliases had to be manually applied after the server rebooted. He was using the ipv4_addrs_[name] syntax, which is deprecated according to the rc.conf man page.

While reading the man page, I noticed that the ifconfig_[name]_alias[N] syntax I’ve been using for years is also deprecated; it suggests ifconfig_[name]_aliases instead, which appears to be a better solution.
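For reference, the recommended form collapses all of the aliases into a single rc.conf variable. A sketch, with a placeholder interface name and addresses:

ifconfig_em0="inet 192.0.2.10 netmask 255.255.255.0"
ifconfig_em0_aliases="inet 192.0.2.11/32 inet 192.0.2.12/32 inet 192.0.2.13/32"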

The man page also mentions the /etc/start_if.[name] method, but provides no examples or explanation. So I searched and found UNIXgod’s guide to sane IP aliases, which shows an example start_if file:

#!/bin/sh
#/sbin/ifconfig $1 alias <public_ip> netmask 0xffffffff
/sbin/ifconfig $1 alias 10.50.50.100 netmask 0xffffffff

…..

It looked like a reasonable approach so I removed my IP alias definitions from /etc/rc.conf and put them into /etc/start_if.[name] files. After booting, I noticed that the first alias on each interface was not defined.

I believe I understand why. Since the start_if.[name] script runs before the interfaces are configured, the first alias entry is treated as the primary IP and not an alias. Later, when the network device is configured by the ifconfig_[name] entry in rc.conf, it overwrites the address defined by that first alias. The easy workaround is to put the primary IP (or a placeholder) as the first entry in each /etc/start_if.[name] file.

I further improved upon UNIXgod’s example by declaring a couple of variables. Doing so reduces duplication and makes each entry clearer. Here’s an example of one of my files:

#!/bin/sh
IFC="/sbin/ifconfig lo0 inet"
AMASK="netmask 0xffffffff"

$IFC 127.0.0.1 netmask 0xff000000
$IFC alias 127.0.0.2 $AMASK
$IFC alias 127.0.0.3 $AMASK

….

I have seen or experienced the problems associated with the deprecated methods. Perhaps this newer technique will prove less troublesome in the coming years.

Friends don’t let friends use GoDaddy

Today I received a phone call from GoDaddy. The representative inquired whether I was still using the three SSL certificates that I purchased in 2007. I confirmed that I was. He then told me that since they were 1024-bit certificates, the U.S. government requires that they be discontinued by Dec. 31, 2013. That was factual error #1: it’s not the government but the Certificate Authority/Browser Forum that is requiring its members to deprecate 1024-bit SSL certificates before Jan. 1, 2014.

Since GoDaddy no longer issues 10-year certs, the solution he offered was to issue new 5-year certificates and ‘refund the difference’ afterwards. I asked if they supported longer key lengths, such as 4096 bits. He didn’t know, apologized several times as he looked it up, and then told me that they did not. Factual error #2: GoDaddy currently supports key lengths from 2048 to 4096 bits.

Realizing that I had a barely trained sales droid that knew very little about SSL, I asked him how much it would cost to issue the new certs. The total for the 3 certs would be $892.37. I informed him that was not going to happen, thanked him for his time, and hung up.

Instead, I logged onto GoDaddy’s cert site, opened each SSL certificate in turn, clicked the “Re-Key” button, and uploaded a new 2048-bit CSR. Moments later I had new GoDaddy CA signed 2048-bit SSL certs issued for free.
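For anyone following along, generating the 2048-bit key and CSR is a one-liner with OpenSSL (the file names and hostname below are placeholders):

# Create a new 2048-bit RSA key and a CSR to paste into the re-key form.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout www.example.com.key -out www.example.com.csr \
    -subj "/CN=www.example.com"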

Thanks for the “help,” GoDaddy. That’s the kind of “service” that ensures I’ll continue to steer business away from your predation.

Long Range Leaf: driving to Meany Lodge

Can our 2013 Nissan Leaf make it from Seattle to Meany Lodge on a single charge? The distance from our house is 77 miles and the nominal Leaf range is 84 miles. The key factor on this trip was Snoqualmie Pass and the 3,022 feet we’d have to climb. I figured that if I had any range left at the top of the pass, the gain from regenerative braking down the back side would just barely suffice.

At 7 AM I pulled up the LEAF app and told the car to warm up the cabin. While still lying in bed. That’s a really nice feature! At 8 AM we left Seattle with a toasty warm car and a full charge. I drove in ECO mode with the climate control off and the cruise set at 60 mph. As we passed North Bend, I had about 37 miles to go and a predicted range of 37 miles. That’s also where the climb begins in earnest.

As we climbed the pass, the charge meter dropped rapidly, yielding low-battery alerts two miles before the summit. As I crested the pass, I guesstimated that the 5 miles left (the Leaf stops reporting below 6) plus the 5 miles I’d gain from regenerative braking would total 10 miles’ worth of charge. Since the lodge is 13 miles from the pass, I opted to stop at the Chevron and juice up.

Lucas and I took a 20 minute ice cream break while our Leaf guzzled electrons until it had 10 miles of predicted range. I figured with 10 in the tank plus 5 from braking, I’d have plenty for the climb up to the lodge. We arrived with 8 miles remaining. Without the pit stop, we’d have likely run out a mile or two shy.

At the weekend work party, I took the head off TomCat’s Chevy 292 straight six engine, cut and ground steel plates and pipes, used a MIG welder for the first time (last weld, 25 years ago!), laid culvert, dug water trenches, and pressed apple cider. Lucas had a great time playing with a pack of 8 youngsters whose parents were getting the lodge ready for the ski season.

On Sunday afternoon we left the lodge with a full charge and a predicted range of 81 miles. After we climbed the 500 feet back over the pass, the range crept higher with each mile driven, peaking at 104 miles. As before, the key factor was elevation, and it’s mostly downhill on the return trip. We arrived home with 25 miles of range.

In summary, the Leaf has more than enough range to get home from Meany Lodge, but not quite enough to get there, due primarily to elevation. If we didn’t have another car, we could surely cover the distance by driving slower. With fair weather in daylight, 55 mph should do. If the weather were cold and headlights and/or climate control were needed, it might be possible at 45 mph. I wouldn’t consider it fun.

Synology for FreeBSD backups

How I configured a Synology NAS as a backup server, using NFS and AMD on FreeBSD.

In February 2013, a client purchased a Synology NAS. They hired me to set up their mail server to back up to it each night. In addition, I wrote them a restore script: given an email address, it presents a list of available snapshots and can restore the mailbox contents. It is now September and the solution has been working flawlessly, so I’m sharing a few details.
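The restore script itself is nothing exotic. A stripped-down sketch of the idea follows; the paths, the rsnapshot layout, and the mailbox location are assumptions for illustration, not the client’s actual script.

#!/bin/sh
# Hypothetical restore helper; paths are illustrative.
# Usage: restore_mailbox.sh user@example.com
ADDR="$1"
[ -n "$ADDR" ] || { echo "usage: $0 email-address" >&2; exit 1; }

SNAPROOT="/mnt/synology/backup"   # automounted snapshot root (assumed)
MAILDIR="/var/vmail/$ADDR"        # live mailbox location (assumed)

echo "Available snapshots:"
ls "$SNAPROOT"                    # daily.0, daily.1, weekly.0, ...

printf "Restore from which snapshot? "
read SNAP

# rsnapshot keeps a full tree under <snapshot>/<backup label>/<original path>
rsync -av "$SNAPROOT/$SNAP/localhost$MAILDIR/" "$MAILDIR/"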

To fully preserve permissions, ownership, and metadata, mounting the Synology via NFS was the best option. A potential issue with NFS is that the mail server would hang if the Synology became unavailable. Because the NAS is only used for backups, I chose to automount the NFS share, which reduces the NFS hang exposure to a window of about 20 minutes a day.

To configure the automounter, I added this entry to /etc/fstab:

10.10.10.10:/volume1/mail /mnt/synology nfs rw,noauto 0 0

and these entries to /etc/rc.conf:

amd_enable="YES"
amd_flags="-a /.amd_mnt -l syslog /mnt/synology /etc/amd.synology"

and this to /etc/amd.synology:

/defaults type:=nfs;opts:=rw,grpid,resvport,vers=3,proto=tcp
* rhost:=synology;rfs:=/volume1/mail/snapshots

Once the automounter (amd) was properly configured, it was a simple matter to install and configure rsnapshot. Each night, a periodic script triggers the backup, which automounts the NAS. Shortly after the backups conclude, amd unmounts it.
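For completeness, the relevant rsnapshot configuration looks roughly like this (the paths and retention counts are illustrative, and rsnapshot.conf requires tabs, not spaces, between fields):

# /usr/local/etc/rsnapshot.conf (excerpt; fields are tab-separated)
snapshot_root   /mnt/synology/backup/
retain          daily   7
retain          weekly  4

# directories to back up on the mail server
backup          /var/vmail/       localhost/
backup          /usr/local/etc/   localhost/

The nightly run then boils down to invoking "rsnapshot daily" from cron or a periodic script.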