Nissan Leaf musings

In June of 2013, we leased a 2013 Nissan Leaf SV. It has since been our daily driver, handling a 40-mile round-trip commute. It’s also the first car to leave the driveway on weekends. We like the car. A lot. It is fun to drive, spacious with 4 passengers, and fits 6 full shopping bags in the trunk. Our intent is to use all 12,000 miles per year allowed by the lease. After a few months of driving, the fuel results are in.

  • electric bill: increased $30/mo.
  • gasoline: decreased by $100/mo.

Switching to electric resulted in an expense reduction of $70/mo. Our lease payment is $220/mo. If we reduce the lease payment by the fuel savings, our net payment is $150/mo. That’s a low payment for a $32,000 car.

Considering that 90% of our electricity here is renewable (hydro + wind), that electric cars are 90% efficient, and that internal combustion engines (ICE) are less than 30% efficient, the environmental benefit of switching to electric is a big bonus.

The range is occasionally a limit. Lucas and I drove it to Meany Lodge. We’d have made the 75 mile trip except for the climb over Snoqualmie Pass. We had to stop at the pass and ‘juice up’ for half an hour, adding 10 miles of range. Then the Leaf nimbly climbed the forest service roads up to the lodge. We returned home with 25 miles to spare. Thousands of feet of elevation make a meaningful difference in range.

It would be challenging if our only car were electric. We can’t pile 4 of us and luggage into the Leaf and drive to the Redwood Forests. Despite the Leaf’s great handling, the passes are just far enough away, uphill, in cold weather, that we’ll be taking the Fusion hybrid (37 mpg) on ski trips. For the 3% of our household driving for which we don’t take the Leaf, range is the limiting factor.

Authoritative DNS servers, 2 or 3?

If you have some other rationale for [having a third DNS server], please feel free to elaborate.

The most basic reason for a 3rd DNS server is to increase availability. Every DNS primer advises having at least two DNS servers, geographically dispersed, and on different networks. Nearly every DNS operator starts out with two servers in the same rack, in the same subnet. Eventually, a failure will snowball and impact the many services above DNS, teaching the operator the value of isolation.

Even with 2 servers and appropriate geographic and network redundancy, eventually, a failure (fiber cut, power failure, server crash, etc.) will have 50% of your authoritative DNS offline for an extended period. During such failures, users will notice and complain. Within a day. In decades of experience, I’ve noticed that when the DNS server count is greater than two, a DNS server can be down for weeks before the first complaint arrives. Weeks.

Unless the operator has excellent monitoring tools (a small percentage do), a DNS server failure can go unnoticed for hours or days. Some failures are subtle, such as zone file corruption that causes a single zone to not get published. A third server also reduces the impact of a single-server outage from 50% of queries to 33%.

For most operators, the more common reason for 3 servers is performance. By locating DNS servers geographically closer to users, the round-trip time of DNS lookups is reduced. This can also be achieved with 2 DNS servers and shared unicast (anycast) addresses (http://www.ietf.org/rfc/rfc3258.txt). For those not running anycast, having three or more DNS servers accomplishes the same purpose. Three seems to be the “sweet spot.” If you survey the most popular sites, you’ll find they usually have 3 or more NS records.
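For instance, a quick way to run that survey yourself is with dig, listing and counting a zone’s NS records (the domain here is just a placeholder):

# list a zone's NS records, then count them
dig +short NS example.com
dig +short NS example.com | wc -l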

My cadillac.net DNS cluster has one DNS server in Paris. That was by request of a French client. When all 3 servers were in the USA, their French web sites “felt” slower. We fixed that by moving one DNS server to Paris. Because DNS recursors remember how fast DNS servers respond, they tend to favor those nearest, resulting in better performance for end users. The difference is measurable with network tools, but more importantly, it’s perceptible to end users.

Another of my European clients has a significant portion of their user base in the USA. They have two DNS servers in Europe and 1 on each coast of the USA, so that DNS responses are fast for everyone. In 2010, they moved a couple of their more popular domains to a premium DNS provider for a week-long trial. They were unable to realize the promised increase in DNS performance or web traffic, despite paying $1,500/mo for the “Enterprise” DNS service. We believe that’s because we already had DNS servers geographically near the majority of their user base.

My clients in Australia prefer a couple of DNS servers in the USA and one along the Pacific Rim, for the same reason.

For most operators, the majority of DNS traffic is local, originating within 1,000 km. In those cases, the remainder may not be worth optimizing for. When it is, having a DNS server you can locate nearer your users can deliver substantial performance improvements.

Having three DNS servers, especially when each is in a different data center and on a different network, also identifies you as an experienced DNS operator who understands why you’d want number three.

FreeBSD aliases, done well

While I was consulting on a FreeBSD server, the owner mentioned that some of his IP aliases had to be manually applied after the server rebooted. He was using the ipv4_addrs_[name] syntax, which is deprecated according to the rc.conf man page.

Reading further, I noticed the man page also deprecates the syntax I’ve been using for years, ifconfig_[name]_alias[N], and suggests ifconfig_[name]_aliases instead, which appears to be a better solution.
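Converting to the newer syntax looks something like this (the interface name and addresses are placeholders; the exact form is documented in rc.conf(5)):

# /etc/rc.conf: primary address plus aliases in the consolidated syntax
ifconfig_em0="inet 192.0.2.10 netmask 255.255.255.0"
ifconfig_em0_aliases="inet 192.0.2.11/32 inet 192.0.2.12/32"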

The man page also mentions the /etc/start_if.[name] method, but provides no examples or explanation. So I searched and found UNIXgod’s guide to sane IP aliases. His guide includes an example start_if file:

#!/bin/sh
# $1 expands to the interface name when rc runs this script
#/sbin/ifconfig $1 alias <public_ip> netmask 0xffffffff
/sbin/ifconfig $1 alias 10.50.50.100 netmask 0xffffffff

…..

It looked like a reasonable approach so I removed my IP alias definitions from /etc/rc.conf and put them into /etc/start_if.[name] files. After booting, I noticed that the first alias on each interface was not defined.

I believe I understand why. Since the start_if.[name] script runs before the interfaces are configured, the first alias entry is treated as the primary IP and not an alias. Later, when the network device is configured by the ifconfig_[name] entry in rc.conf, it overwrites the address defined by that first alias. The easy workaround is to put the primary IP (or a placeholder) as the first entry in each /etc/start_if.[name] file.

I further improved upon UNIXgod’s example by declaring a couple of variables. Doing so reduces duplication and makes each entry clearer. Here’s an example of one of my files:

#!/bin/sh
IFC="/sbin/ifconfig lo0 inet"
AMASK="netmask 0xffffffff"

$IFC 127.0.0.1 netmask 0xff000000
$IFC alias 127.0.0.2 $AMASK
$IFC alias 127.0.0.3 $AMASK

….

I have seen or experienced the problems that come with the deprecated methods. Perhaps this newer technique will run into fewer issues in the coming years.

Friends don’t let friends use GoDaddy

Today I received a phone call from GoDaddy. The representative inquired if I was still using the three SSL certificates that I purchased in 2007. I confirmed that I was. He then told me that since they were 1024-bit certificates, the U.S. government requires that they be discontinued by Dec. 31, 2013. That was factual error #1. It’s not the government but the CA/Browser Forum that requires its members to deprecate 1024-bit SSL certificates before Jan. 1, 2014.

Since GoDaddy no longer issues 10-year certs, the solution he offered was to issue new 5-year certificates, and they’d ‘refund the difference’ afterwards. I asked if they supported longer key lengths, such as 4096-bit. He didn’t know, apologized several times as he looked it up, and then told me that they did not. Factual error #2. GoDaddy currently supports key lengths from 2048 to 4096 bits.

Realizing that I had a barely trained sales droid who knew very little about SSL, I asked him how much it would cost to issue the new certs. The total for the 3 certs would be $892.37. I informed him that was not going to happen, thanked him for his time, and hung up.

Instead, I logged onto GoDaddy’s cert site, opened each SSL certificate in turn, clicked the “Re-Key” button, and uploaded a new 2048-bit CSR. Moments later I had new GoDaddy CA signed 2048-bit SSL certs issued for free.
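For anyone re-keying the same way, generating a fresh 2048-bit key and CSR with OpenSSL looks roughly like this (the file names and subject are placeholders):

# generate a new 2048-bit private key and certificate signing request
openssl req -new -newkey rsa:2048 -nodes \
  -keyout www.example.com.key -out www.example.com.csr \
  -subj "/CN=www.example.com"

The contents of the .csr file are what you paste into the re-key form.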

Thanks for the “help,” GoDaddy. That’s the kind of “service” that ensures I’ll continue to steer business away from your predation.

Long Range Leaf: driving to Meany Lodge

Can our 2013 Nissan Leaf make it from Seattle to Meany Lodge on a single charge? The distance from our house is 77 miles and the nominal Leaf range is 84 miles. The key factor on this trip is Snoqualmie Pass, and the 3,022 feet we’d have to climb. I figured if I had any range left at the top of the pass, the gain from regenerative braking down the back side would just barely suffice.

At 7AM, while still lying in bed, I pulled up the LEAF app and told the car to warm up the cabin. That’s a really nice feature! At 8AM we left Seattle with a toasty warm car and a full charge. I drove in ECO mode with the climate control off and the cruise set at 60 mph. As we passed North Bend, I had about 37 miles to go with a predicted range of 37 miles. That’s also where the climb begins in earnest.

As we climbed the pass, the charge meter dropped rapidly, triggering low-battery alerts two miles before the summit. As I crested the pass, I guesstimated that the 5 miles left (the Leaf stops reporting below 6) plus the 5 miles I’d gain from regenerative braking would total 10 miles’ worth of charge. Since the lodge is 13 miles from the pass, I opted to stop at the Chevron and juice up.

Lucas and I took a 20 minute ice cream break while our Leaf guzzled electrons until it had 10 miles of predicted range. I figured with 10 in the tank plus 5 from braking, I’d have plenty for the climb up to the lodge. We arrived with 8 miles remaining. Without the pit stop, we’d have likely run out a mile or two shy.

At the weekend work party, I took the head off TomCat’s Chevy 292 straight six engine, cut and ground steel plates and pipes, used a MIG welder for the first time (last weld, 25 years ago!), laid culvert, dug water trenches, and pressed apple cider. Lucas had a great time playing with a pack of 8 youngsters whose parents were getting the lodge ready for the ski season.

On Sunday afternoon we left the lodge with a full charge and a predicted range of 81 miles. After we climbed the 500 feet back over the pass, the predicted range crept higher and higher with each mile driven, peaking at 104 miles. As before, the key factor was elevation, and it’s mostly downhill on the return trip. We arrived home with 25 miles of range.

In summary, the Leaf has more than enough range to get home from Meany Lodge, but not quite enough to get there, due primarily to elevation. If we didn’t have another car, we could surely cover the distance by driving slower. With fair weather in daylight, 55 mph should do. If the weather were cold and headlights and/or climate control were needed, it might be possible at 45 mph. I wouldn’t consider it fun.

Synology for FreeBSD backups

How I configured a Synology NAS as a backup server, using NFS and amd (the automounter) on FreeBSD.

In February 2013, a client purchased a Synology NAS. They hired me to set up their mail server to back up to it each night. In addition, I wrote them a restore script: given an email address, it presents a list of available snapshots and restores the selected mailbox contents. It is now September and the solution has been working flawlessly, so I’m sharing a few details.
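The restore script itself is specific to their mail layout, but the general shape of such a helper is roughly this (the snapshot location, mailbox paths, and names below are made-up placeholders, not the client’s actual values):

#!/bin/sh
# hypothetical sketch of a mailbox restore helper; all paths are placeholders
MAILBOX="$1"                        # e.g. user@example.com
SNAPDIR="/mnt/synology/backups"     # automounted snapshot directory on the NAS
MAILROOT="/var/mail"                # root of the per-mailbox directories

if [ -z "$MAILBOX" ]; then
    echo "usage: $0 user@domain" >&2
    exit 1
fi

echo "Available snapshots:"
ls "$SNAPDIR"                       # e.g. daily.0 daily.1 weekly.0 ...

printf "Restore from which snapshot? "
read SNAP

# copy the mailbox out of the chosen snapshot back into place
rsync -av "$SNAPDIR/$SNAP/localhost$MAILROOT/$MAILBOX/" "$MAILROOT/$MAILBOX/"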

To fully preserve permissions, ownership, and metadata, mounting the Synology via NFS was the best option. A potential issue with NFS is that the mail server could hang if the Synology became unavailable. Because the NAS is only used for backups, I chose to automount the NFS share, which reduces the exposure to an NFS hang to about 20 minutes a day.

To configure the automounter, I added this entry to /etc/fstab:

10.10.10.10:/volume1/mail /mnt/synology nfs rw,noauto 0 0

and these entries to /etc/rc.conf:

amd_enable="YES"
amd_flags="-a /.amd_mnt -l syslog /mnt/synology /etc/amd.synology"

and this to /etc/amd.synology:

/defaults type:=nfs;opts:=rw,grpid,resvport,vers=3,proto=tcp
* rhost:=synology;rfs:=/volume1/mail/snapshots

Once the automounter (amd) was properly configured, it was a simple matter to install and configure rsnapshot. Each night, the periodic script triggers the backup, which automounts the NAS. Shortly after the backups conclude, amd disconnects it.
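For reference, a minimal rsnapshot configuration for this kind of setup looks roughly like the following; the paths and retention counts are illustrative rather than the client’s actual values, and rsnapshot requires tabs (not spaces) between fields:

# /usr/local/etc/rsnapshot.conf (fields are tab-separated)
config_version	1.2
snapshot_root	/mnt/synology/backups/
cmd_rsync	/usr/local/bin/rsync
retain	daily	7
retain	weekly	4
backup	/var/mail/	localhost/

With snapshot_root pointing at the automounted path, simply running the nightly backup is what triggers amd to mount the NAS.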

AT&T cramming: Part 2

In 2011, a miscreant abused AT&T’s Wireless Access Protocol payment system and added unauthorized charges to my account. When I noticed the charges, I called AT&T. After getting the initial “we can only refund 3 months” runaround, I escalated the matter until I got a full refund. I also had AT&T add Purchase Blocker to all 4 of my lines.

Today I logged in to my AT&T account and noticed that, quite helpfully, AT&T now highlights these “billed mobile purchases.” Unfortunately, two new recurring charges were added to my account in January. Jesta, one of the companies behind the crammed charges, is a self-professed scammer that was fined $1.2M by the FTC in August. As part of the judgment, they are required to refund all charges when customers request it.

AT&T knowingly allows these scammers to bill AT&T customers, which greatly peeves me. I tallied up the charges, called AT&T, and requested a full refund. After getting the same “we can only refund 60 days” runaround, I again escalated the matter and have a full refund being processed. AT&T has no idea why Purchase Blocker got dropped from two of my lines but it has been re-added.

The most interesting part is the email excerpts from Jesta cited in the FTC complaint. Beyond admitting their business is a scam, they discuss ways to keep their return rate below 17%, the rate at which T-Mobile takes away Jesta’s ability to charge customers. AT&T cares even less about its customers and is willing to let scammers reach an 18.5% return rate. Is there any legitimate business with return rates above 10%?

The ROI on LED

Did you do a break even analysis yet? How long will it take you to recoup the expense?

The way to calculate break-even (or return on investment) is to know roughly how much each bulb costs to use. To determine that, I built a spreadsheet listing all 49 light fixtures in my house, the number of bulbs in each fixture, watts per bulb, lumens, and the estimated hours of monthly use. From that list, I picked the 24 most expensive bulbs to operate and replaced them with LEDs, at a cost of $217.
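The per-bulb arithmetic in that spreadsheet boils down to watts times hours times your electric rate. Here’s a rough sketch; the 50-watt bulb, 4 hours/day, and $0.10/kWh rate are placeholder values, not my actual numbers:

# monthly cost of one bulb = watts * hours/day * 30 days / 1000 * $/kWh
echo "50 4 0.10" | awk '{ printf "$%.2f per month\n", $1 * $2 * 30 / 1000 * $3 }'

A 50-watt bulb burning 4 hours a day works out to about $0.60/month at that rate; swap in your own wattages, hours, and rate to rank your fixtures.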

Conclusions:

  • Halogen track lights are horrifically inefficient. Replace immediately.
  • Old transformers are terribly inefficient. Replace immediately.
  • LED track light bulbs are hard to find locally and horrifically expensive. Instead, buy direct from China.
  • Considering their lumen output, 4′ fluorescent bulbs aren’t that bad.
  • The ROI is usually less than a year for bulbs used more than an hour a day.

For the bulbs in my “top 24” list, the ROI period was less than 12 months, and that was purchasing the bulbs at late 2012 prices. Today I can buy most of those bulbs for about 30% less, so the ROI is even faster. Today at Costco, I purchased 850 lumen dimmable LED bulbs for $8 each.

Also consider that many of the bulbs I replaced were CFL. The savings in going from CFL to LED are much smaller than when switching from incandescent, which lengthens the payback period. But the instant-on, dimming, and improved light quality of LED bulbs make the switch worthwhile.

What LED’s do you recommend?

I recommend whichever LED bulbs cost about $10 for 850 or more lumens, and I would buy them only at a local store with a good return policy. Out of 40 bulbs, I’ve had two fail. At $10 each, they cost just enough that it’s worth taking them back for an exchange.

It’s worth noting that both my bulb failures were on the same power circuit as the 12v track lights, and I suspect the 12v power transformer played a role in their failures.

Did you bypass CFL altogether?

We used many CFL bulbs from 2009-2012. The light quality of the earliest ones was quite awful, so we confined them to areas where that didn’t matter. Price was never an issue, as Seattle City Light subsidizes them: a 6-pack of CFL bulbs has cost $1 for years now. As CFL bulb quality improved, CFL bulbs found their way into more rooms. But unlike LED bulbs, they never became good enough that we liked them.