My two-week review of Optimum Online (October 2015)

So, I went back to Optimum.  My FiOS bill from Verizon had been slowly but steadily increasing over the past couple of years, with new fees and random price increases bringing my last bill up to $192.  This was with FiOS Extreme (a single DVR set-top box), 50/50 Mbps internet, and a home landline.

Moving to Optimum, I got their 50/25 internet, Optimum Silver TV (a single set-top box with their cloud DVR), and home phone for ~$140/month, guaranteed for 12 months.  There were some upfront costs for porting the home phone number and the install itself, but it seemed worth it for a $50/month savings.

After two weeks I can safely say that Optimum in 2015 is still inferior to FiOS and while I don’t regret leaving, I’m definitely ready to go back. What’s wrong?

The set-top box is still slow.

For years, before FiOS, I dealt with Cablevision’s horrifically slow boxes. The most obvious example of this slowness was changing channels: the time between when you pressed the “channel-up” or “channel-down” buttons and when the channel actually changed on screen was nearly 1 second. All menus were slow as well. While the Samsung box I got from Cablevision is definitely faster than that, there’s still a 100ms-200ms lag when changing channels. With the Verizon Motorola box I returned, the latency wasn’t noticeable – maybe 20 milliseconds?

The cloud DVR is terrible

We watch 100% of our TV content over DVR, so I was pretty excited about Cablevision’s cloud DVR and not having to worry about storage or recording conflicts. In practice, the DVR has a terrible UI – the episode name and number aren’t even shown. Verizon’s DVR, by contrast, shows lots of info about the recorded episode (original air date, for example).

The real killer problem with the DVR, though, is that the signal just totally craps out during playback. Last night we watched this week’s episode of Homeland from the DVR and there were lots of visual artifacts (green blobs and such) and a couple of periods where the DVR thought the video was playing but the picture on screen was frozen. There was also a 10-15 second stretch during which the entire screen went black with no sound, as if there were simply a hole in the recorded stream. When I backed it up, the gap was still there on replay, so we missed a 10-15 second chunk of the show. These things NEVER happened with FiOS.

Home phone has to plug into the cablemodem.

This isn’t a huge problem but it’s annoying. We wanted the cablemodem down in the basement, but since there’s no phone jack down there, there’s no way to get a signal into the phone lines upstairs. We use cordless phones mostly, so it’s not a big deal, but we did have a fax machine (still a necessity sometimes!).

Internet isn’t nearly as fast.

All Verizon’s plans offer the same speeds up and down. Cablevision doesn’t. Even on Cablevision’s best plan, Ultra 101, you get 101 Mbit down but only 35 up. Verizon has speeds up to 150/150 reasonably priced now. The download speed seems to be on par with what we were getting with Verizon’s 50/50.

Phone calls require dialing “1”

This is really just aggravating. We’ve gotten used to dialing area codes, but Cablevision also requires dialing “1” in front of every number. Why?

Regional sports fee

This isn’t really a Cablevision thing since Verizon also added this bullshit $4.99 onto our bill, but as someone who doesn’t watch any sports, having to shell out $60/year for it explicitly is infuriating.

Going back to Verizon?

Yesterday I priced out Verizon, and their pricing structure has really changed a lot: for ~$171 I can now get their Ultimate HD package, 150/150 internet, home phone, and a DVR STB. That’s a big upgrade from what I had previously with them, and certainly better service. I’ll probably give Cablevision some more time, but everybody in my house hates that we changed, so it’s probably just a matter of time before we go back. I’m really surprised and disappointed that Cablevision hasn’t gotten very far in the 3+ years since I last tried them.

Moved to

Back in February I moved this site from WordPress to Jekyll. I had gotten tired of WordPress’s endless security updates and running a MySQL db just for a blog (I have a longstanding hatred of MySQL). Jekyll solved those problems, but I essentially lost most of my older posts because the wp->Jekyll converter is kind of … special. But most of all it made posting so tedious that I gave up on it entirely. So I fired the old MySQL back up and exported the content and imported it here. If it works out I’ll move my DNS over to point here (just doing a 301 for now). 

How (the hell) do you set up Splunk Cloud on Linux?

This took me way longer than I would’ve thought, mostly due to horrible documentation. Here’s my TL;DR version:

  1. Sign up for Splunk Cloud
  2. Download and install the forwarder binary from here.
  3. Log in here and note the URL of your Splunk instance:

    For this example, assume the URL of your instance is “https://<instance>.splunkcloud.com” (a placeholder – substitute your own instance’s hostname).

  4. Make sure your instances can connect to port tcp/9997 on your input host. Your input host is the hostname from above with “input-” prepended to it, e.g. “input-<instance>.splunkcloud.com” (again, a placeholder). To ensure you can connect, try “telnet input-<instance>.splunkcloud.com 9997”. If it can’t connect, you may need to adjust your firewall rules / security groups to allow outbound tcp/9997.
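If telnet isn’t handy on your instances, the same connectivity check can be done with a few lines of Python (the hostname in the comment is a placeholder, not a real input host):

```python
import socket

def can_connect(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Placeholder hostname -- substitute your own "input-" host:
# can_connect("input-<instance>.splunkcloud.com", 9997)
```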

Below are the actual commands I used to get data into our Splunk Cloud trial instance:

$ curl -O
$ sudo dpkg -i splunkforwarder-6.2.0-237341-linux-2.6-amd64.deb
$ sudo /opt/splunkforwarder/bin/splunk add forward-server
This appears to be your first time running this version of Splunk.
Added forwarding to:
$ sudo /opt/splunkforwarder/bin/splunk add monitor '/var/log/postgresql/*.log'
Added monitor of '/var/log/postgresql/*.log'.
$ sudo /opt/splunkforwarder/bin/splunk list forward-server
Splunk username: admin
Active forwards:
Configured but inactive forwards:
$ sudo /opt/splunkforwarder/bin/splunk list monitor
Monitored Directories:
		[No directories monitored.]
Monitored Files:
$ sudo /opt/splunkforwarder/bin/splunk restart

Installing a new SSL certificate in your ELB via CLI

For future me:

  1. Create the key and CSR:
    $ openssl req -out -new -newkey rsa:2048 -nodes -keyout
  2. Upload the CSR to your SSL vendor (in this case, DigiCert) and obtain the signed SSL certificate.
  3. Convert the signing key to the traditional RSA format. AWS/IAM certs require the key in PKCS#1 form (strictly speaking, both headers below are PEM-encoded; the one IAM accepts is the RSA/PKCS#1 variant). To check your key, just “head -1 site.key”. If the first line says “-----BEGIN PRIVATE KEY-----”, it’s PKCS#8 and needs converting; the first line should be “-----BEGIN RSA PRIVATE KEY-----”.
    $ openssl rsa -in -outform PEM -out
    writing RSA key
  4. Upload the certificate to the IAM keystore:
    $ aws iam upload-server-certificate --server-certificate-name star_site_20141014 --certificate-body file:///Users/evan/certs_20141014/site/certs/star_site_com.crt --private-key file:///Users/evan/certs_20141014/ --certificate-chain file:///Users/evan/certs_20141014/site/certs/DigiCertCA.crt
    {
        "ServerCertificateMetadata": {
            "ServerCertificateId": "XXXXXXXXXXXXXXX",
            "ServerCertificateName": "star_site_20141014",
            "Expiration": "2017-12-18T12:00:00Z",
            "Path": "/",
            "Arn": "arn:aws:iam::9999999999:server-certificate/star_site_20141014",
            "UploadDate": "2014-10-14T15:29:28.164Z"
        }
    }
Once the above steps are complete, you can go into the web console (EC2 -> Load Balancers), select the ELB whose cert you want to change, click the “Listeners” tab, click the SSL port (443) and select the new cert from the dropdown.
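The header check in step 3 is easy to script. A minimal sketch (the function name and return labels are mine, not from any AWS or OpenSSL tooling):

```python
def key_format(path):
    """Classify a private key file by its first PEM header line."""
    with open(path) as f:
        first = f.readline().strip()
    if first == "-----BEGIN RSA PRIVATE KEY-----":
        return "pkcs1"   # traditional RSA form; this is what IAM wants
    if first == "-----BEGIN PRIVATE KEY-----":
        return "pkcs8"   # needs converting via `openssl rsa ... -outform PEM`
    return "unknown"
```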

Can I create an EC2 MySQL slave to an RDS master?


Here’s what happens if you try:

mysql> grant replication slave on *.* to 'ec2-slave'@'%';
ERROR 1045 (28000): Access denied for user 'rds_root'@'%' (using password: YES)
mysql> update mysql.user set Repl_slave_priv='Y' WHERE user='rds_root' AND host='%';

That fails too: RDS doesn’t give the master user SUPER or direct write access to the mysql schema. So at least on 5.5, the short answer appears to be no.

Note: this is for MySQL 5.5, which is unfortunately what I’m currently stuck with.

World of Warcraft on a 13″ Retina Macbook Pro

I stopped playing WoW in 2008, and since I didn’t need Windows for gaming, I ended up putting Fedora (and ultimately Ubuntu) on my old Core 2 Duo desktop. After years of fighting with slow computers, I recently bit the bullet and bought the 13″ Retina Macbook Pro (MGX82LL/A). Even though I hadn’t played WoW in years – or any other PC games, for that matter – the gamer in me was still reluctant to go with a computer with no dedicated video card. I’d read up extensively on the Intel Iris 5100 chipset in the Macbook but I couldn’t find anything about its performance in WoW, which was the least-taxing game I could think of.

Well, as fate would have it, Blizzard recently announced they’d be purging the names of characters who hadn’t logged in for 5+ years. Since I had a new computer and I didn’t want to lose my beloved Undead Rogue it seemed like a good time to rejoin. After a couple days of playing, I figured I’d write this post as a service to any other would-be Macbook Pro purchasers curious about its performance in WoW.

This isn’t a detailed benchmarking post – I’m not Anandtech. The short version is that the performance of WoW on the MGX82LL/A is very good. I get 30-60 frames per second basically everywhere, though with settings only set to “fair.” The main thing I wanted to report here is heat. The laptop gets HOT when playing WoW. I installed iStat Menus to get the sensor data – see below.

WoW Settings
MGX82LL/A CPU temperature – Baseline
MGX82LL/A temperature in WoW

The CPU sensors show temperature increases of over 100ºF. That’s pretty darn hot. I’ll play with the settings to see if I can get the temperature to something more reasonable.

The m3.medium is terrible

I’ve been doing some testing of various instance types in our staging environment, originally just to see if Amazon’s t2.* line of instances is usable in a real-world scenario. In the end, I found that not only are t2.mediums viable for what I want them to do, they’re far better suited than the m3.medium, which I wouldn’t use for anything you ever expect to handle real load.

Here are the conditions for my test:

  • Rails application (unicorn) fronted by nginx.
  • The number of unicorn processes is controlled by chef, currently set to (CPU count * 2), so a 2 CPU instance has 4 unicorn workers.
  • All instances are running Ubuntu 14.04 LTS (AMI ami-864d84ee for HVM, ami-018c9568 for paravirtual) with kernel 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64.
  • The test simulated 65 concurrent clients hitting the API (adding products to cart) as fast as possible for 600 seconds (10 minutes).
  • The instances were all behind an Elastic Load Balancer, which routes traffic based on its own algorithm (supposedly the instance with the lowest CPU load always gets the next request).
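The post doesn’t name the load generator, but the shape of the test can be sketched like this (a hypothetical harness, not the actual tool used):

```python
import threading
import time

def load_test(make_request, clients=65, duration=600.0):
    """Run make_request() in a tight loop from `clients` threads for
    `duration` seconds; return the total number of completed requests."""
    deadline = time.time() + duration
    counts = [0] * clients

    def worker(i):
        while time.time() < deadline:
            make_request()       # e.g. POST an add-to-cart API call
            counts[i] += 1

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts)
```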

The below charts summarize the findings.

average nginx $request_time

This chart shows each server’s performance as reported by nginx. The values are the average time to service each request and the standard deviation. While I expected the m3.large to outperform the m3.medium, I didn’t expect the difference to be so dramatic. The performance of the t2.medium is the real surprise, however.

#	_sourcehost	_avg	_stddev
1	m3.large	6.30324	3.84421
2	m3.medium	15.88136	9.29829
3	t2.medium	4.80078	2.71403
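The averages and standard deviations above can be pulled straight out of the nginx access logs. A sketch, assuming $request_time is logged as the last field on each line (log formats vary per config):

```python
import math

def request_time_stats(lines):
    """Mean and population standard deviation of nginx $request_time values,
    assuming it's the last whitespace-separated field on each log line."""
    times = [float(line.rsplit(None, 1)[-1]) for line in lines if line.strip()]
    mean = sum(times) / len(times)
    stddev = math.sqrt(sum((t - mean) ** 2 for t in times) / len(times))
    return mean, stddev
```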

These charts show the CPU activity for each instance during the test (data as per CopperEgg).


The m3.medium has a huge amount of CPU steal, which I’m guessing accounts for its horrible performance. Anecdotally, in my own experience the m3.medium is far more prone to CPU steal than other instance types. Moving from m3.medium to c3.large (essentially the same instance with 2 CPUs) eliminates the CPU-steal issue. However, since the t2.medium performs as well as the c3.large or m3.large and costs half as much as the c3.large (or nearly 1/3 of the m3.large), I’m going to try running most of my backend fleet on t2.mediums.

I haven’t mentioned the credits system the t2.* instances use for burstable performance, and that’s because my tests didn’t make much of a dent in the credit balance for these instances. The load test was 100x what I expect to see in normal traffic patterns, so the t2.medium with burstable performance seems like an ideal candidate. I might add a couple of c3.larges to the mix as a backstop in case the credits get depleted, but I don’t think that’s a major risk – especially not in our staging environment.

Edit: I didn’t include the numbers, but performance seemed to be consistent whether on HVM or paravirtual instances.