Joe Innes


The AWS Outage—What Happened and Why Does It Matter?

Yesterday, one of Amazon’s data centres suffered some kind of catastrophic failure, and their S3 service went down. The US-EAST-1 DC is one of the data centres the tech giant has on the Eastern seaboard (it’s in North Virginia, to be precise). A large portion of the internet was affected, with a wide variety of websites broken or degraded by the outage.

What is AWS and S3?

AWS stands for Amazon Web Services, and is a collection of technology platforms which can be used by anyone to run websites. It includes EC2 (virtual server hosting), RDS (hosted databases), SES (email sending and receiving service), and many other services such as S3, which was affected in the outage yesterday.

S3 stands for Simple Storage Service, and is basically an online file storage platform. For a small fee, website owners can upload static assets to Amazon, who will look after them and serve them up on request. As an example, Netflix, Reddit, Dropbox, Tumblr, and Pinterest all use S3 to host critical parts of their websites. Think of it as a kind of cloud-hosted USB hard drive.
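To give you a feel for how simple it is, here's roughly what uploading a file looks like with Amazon's command line tool (the bucket name here is made up):

# Upload a local image to a (hypothetical) bucket, readable by anyone
aws s3 cp logo.png s3://my-example-bucket/images/logo.png --acl public-read

Once that's done, the file is served from a URL like https://my-example-bucket.s3.amazonaws.com/images/logo.png.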

Why Would a Web Developer Use S3?

Often, hosting platforms limit the amount of data a particular website can transfer in a month, or charge based on the amount of data transferred. Using a third-party file storage system can be much cheaper: you only pay for what you actually use, as opposed to what you might use, which is the normal pricing model for web hosts. You don’t have to pay for ‘unused space’ on your server.

If you have a website where users might upload content, then S3 already has all of the infrastructure needed to upload and manage content, and the web developer doesn’t need to write server code to handle file uploads.

S3 has an availability SLA of 99.9% (known as ‘three nines’); 0.1% of a 30-day month works out at around 43 minutes of downtime, after which service credits are offered. This is a far stronger guarantee than most web hosts make.

So What Actually Happened?

As yet, Amazon have not released details of what exactly caused the outage, but it is clear that their US-EAST-1 DC was failing to deliver some portion of the files stored there. Amazon referred to this simply as ‘increased error rates’, but many users were reporting a full outage. You may have seen missing images, found some websites completely unusable, or seen core functionality of websites break.

In a rather funny twist, Amazon themselves use S3 to host their service status page, so they were not immediately able to update this to reflect the fact that the service was unavailable.

What Lessons Can Developers Learn From This?

In the wake of this outage, developers around the world will be under pressure to build more redundant storage solutions. This outage, although relatively short (a few hours), will probably result in an awful lot of lost productivity, and potentially lost sales and revenue too. Business people in the higher echelons of companies will likely be very unhappy with this loss, and will be looking to mitigate the risk going forwards.
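As a back-of-a-napkin sketch of what that redundancy might look like from the client side, a script could fall back to a mirror in another region when the primary store fails (both URLs here are placeholders):

# Try the primary store first; fall back to a mirror in another region
primary="https://assets-us-east.example.com/logo.png"
mirror="https://assets-eu-west.example.com/logo.png"

if ! curl -sf -o logo.png "$primary"; then
  echo "Primary store failed, trying the mirror" >&2
  curl -sf -o logo.png "$mirror"
fi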

What Lessons Can Amazon Learn From This?

In the short term, Amazon are going to remain the leading storage provider due to their cost and reputation. However, they are going to need to work to rebuild confidence in their services if they want to retain their huge market share, and other providers will be starting to offer redundant solutions to compete. To counter this, Amazon will need to consider whether they can profitably offer automatic fail-over, duplicating data across multiple data centres, rather than relying on developers to implement this themselves.

If Amazon get their marketing right, this could even end up turning them a profit in the long run.

What Lessons Can Users Learn From This?

There’s not much you can do about this yourself, but it’s a great opportunity to better understand how much of the internet is heaped up in one place. Amazon Web Services is a huge platform that powers many of your favourite websites, and in the 10 years it’s been operational, it has mostly been working silently in the background.

As of writing this, S3 is back up, and all services in North America are operating normally. I would expect a preliminary root cause analysis to be available by the end of this week.


Top Tips For Getting Hacked

Here is a step-by-step tutorial for anyone who would like to get hacked.

  • When you install OSMC on your Raspberry Pi, be sure to leave the default user as osmc and the password for that account as osmc
  • To make sure that hackers can gain access to your Pi, make sure that you don’t configure key-based logons (and certainly don’t enforce them)
  • To save hackers having to scan all of your ports, be sure to leave SSH running on port 22
  • Remember: unless you set up port forwarding on your router, your Pi will only be accessible from your home network. Configure port forwarding to make sure that anyone trying to access your Pi remotely can do so
  • Wait until you log in via SSH yourself to see whether anyone has accessed your Pi. You’ll know, because your Pi will tell you that the last login was from Italy
  • For bonus points, make sure to keep a bitcoin wallet somewhere on the Pi. Make sure it’s called ‘wallet.dat’, otherwise the hackers might not find it
  • If you are super eager to get started, why don’t you try to find an IRC bot written in Perl. Maybe it could be base64-encoded and wrapped in an eval statement just to obfuscate it.
  • If you can’t find one, you could pop over into a quiet IRC channel where all the members are named Hack-1234 and see if anyone with a real handle can help you.
  • Don’t have any form of login monitoring set up on your servers.

Clearly, I would never be so stupid as to try any of the above, but theoretically, if I had, I would perhaps have changed ports and passwords, enabled (and enforced) key-based SSH sign-in, and maybe set up the following in /etc/ssh/sshrc:

# Grab the IP address the connection is coming from
ip=`echo $SSH_CONNECTION | cut -d " " -f 1`
host=`hostname`
ifttt= # IFTTT Maker key
# POST the host, user, and source IP to the IFTTT Maker channel
curl -i -s \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST --data '{"value1":"'"$host"'","value2":"'"$USER"'","value3":"'"$ip"'"}' \
  https://maker.ifttt.com/trigger/login/with/key/$ifttt > /dev/null
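And for completeness, the hardening itself boils down to a few lines in /etc/ssh/sshd_config (the port number below is just an example), followed by a restart of the SSH service:

# /etc/ssh/sshd_config
Port 2222                  # anything other than 22 avoids the casual scans
PasswordAuthentication no  # keys only
PermitRootLogin no

# then apply it:
sudo service ssh restart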

Migrating from Meteor Hosting (.meteor.com) to my own VPS

Sad news — the free and simple hosting provided by Meteor is coming to an end (as do all things), and so if you want to keep your apps, you need to migrate them to another host.

I followed the steps below with a brand new Digital Ocean droplet, but this should work with any VPS you have access to. If you don’t have access to a VPS, check out this article. You’ll also need to configure SSH access using a key, but that’s not too complicated. Google-fu will help you.
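If you've not done that before, it boils down to something like this (run on your own machine, not the VPS):

# Generate a key pair, if you don't already have one
ssh-keygen -t rsa -b 4096
# Copy the public half to the server
ssh-copy-id root@<your-server-ip>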

Deploying the app

The first step is to get the most recent version of the app itself — I know you’re using source control, so that won’t be a problem. Right?

git clone <your-repo>

Now, you’re going to use a tool called Meteor Up. The most recent version is actually available as mupx. Install it and then you can initiate a new Meteor Up project.

sudo npm install -g mupx  
mupx init

Meteor Up needs to use a settings file for Meteor, so if you have any custom entries in a settings.json file or something like that, you’ll need to migrate your entries into the new settings.json file that mupx has created.

Now, open up mup.json, and modify the file based on the comments. The most basic modifications are:

  • servers.host: your VPS’s IP address
  • servers.username: normally ‘root’ will be fine here, depending on how your server is configured
  • servers.password: per best practice, you should probably comment this out and use the ‘pem’ line instead
  • servers.pem: uncomment this line, and change it to “~/.ssh/id_rsa.decrypted” (note that you will need to add a comma at the end of this line too)
  • appName: a one-word name for your app. This will be used on the VPS as the name of the Docker container, so make it clear. Write this down on a piece of paper!
  • app: the path to the app (on your local machine)
  • env.ROOT_URL: this will be used to set up the web server, so make sure you set it to a domain that you own, and that is pointing at the VPS

If you need some help registering a domain name, check out this article.
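For reference, once you've worked through the comments, a stripped-down mup.json ends up looking something like this (the host, name, and paths below are placeholders):

{
  "servers": [
    {
      "host": "203.0.113.10",
      "username": "root",
      "pem": "~/.ssh/id_rsa.decrypted"
    }
  ],
  "setupMongo": true,
  "appName": "myapp",
  "app": "/path/to/your/app",
  "env": {
    "ROOT_URL": "http://example.com"
  }
}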

You’re almost ready to deploy now — you just need an unencrypted version of your SSH key. Run the following command:

openssl rsa -in ~/.ssh/id_rsa -out ~/.ssh/id_rsa.decrypted

You should be prompted for your passphrase, and then you’re good to go! Obviously, make sure this key never gets out into the wild.

Your next step is to configure the server. While this sounds like a painstaking process, Meteor Up takes care of it all for you — just run the following command.

mupx setup

This shouldn’t take very long, and will install everything on your server, apart from the application itself.

Now for the fun bit — deploying your app. It’s as simple as:

mupx deploy

Now your app will be up and running on the new web host, accessible at the root URL you provided in the mup.json file.

Migrating the data

When you access the app, you might notice that you’ve lost all of the data in it. If this bothers you, the process for migrating the data over is a little more involved, but not too difficult.

First of all, make sure that you have Mongo installed on your computer (if you don’t want a full Mongo installation, all you really need is the mongodump executable).

Next, from your Meteor app’s directory run the following command.

meteor mongo <your-app.meteor.com> --url

You’ll get back a reply that looks like this:

mongodb://client-id:password@server:27017/app_name_meteor_com

You need to extract the information from that URL, and plug it into the command below:

./mongodump -h <server> --port 27017 --username <client-id> --password <password> -d <app_name_meteor_com>

These two commands have to be run within a minute of each other according to the Internet, or they may not work (although I had no problems).

This will create a folder called ‘dump’ in your current working directory. You’ll need to copy this up onto your server. You can choose the location it will be uploaded to on the server yourself, but I just put it in /root/dump. It won’t be staying for long anyway.

scp -r dump root@<yourServer>:dump

Next up, you need to ssh into your server. Now, you’re going to copy the dump into your MongoDB container, and then open a shell inside that container to import the database. It’s all getting a little inception-y.

docker cp dump mongodb:/dump  
docker exec -it mongodb bash

Now that you’re inside the Docker container, you need to remove the database that was automatically created for the app you deployed, and then import the database dump.

First step is to remove the empty database. We’re going to load up Mongo, access the database, and drop the one we don’t want. Follow the commands below:

mongo  
show dbs;

You should see three databases: local, test, and a third one, named after your app. This name is important, so write it on a piece of paper too.

Now, use your database, and drop it.

use <myDb>;  
db.dropDatabase();

Your app will stop working, but we have one step left to go — restoring the database from the .meteor.com hosted app.

If you’ve been paying attention, you should have two pieces of information written down on a piece of paper. The first is the Meteor app name. The second is the dumped database name. We need to restore the dumped database with the name of the Meteor app.

Run the following command:

mongorestore -d <yourAppName> dump/<appName_meteor_com>

If you just restore the database as it is, your Meteor app won’t know where to find it.

Once you’ve done that, you should be good to go. Access your app at the new location, and you should see no difference in comparison to the meteor.com version — except it should be a bit faster, and won’t be subject to spin downs.


Setting Up My Home Server

So, I’ve got an old laptop kicking around, and I decided to spend an afternoon making it into a home server.

Choosing an OS

The Windows product key sticker on the bottom has worn away, so I’ve decided to spin up Xubuntu on it. I want a graphical interface because it will make managing the machine much easier, but I don’t want to sacrifice too much disk space or speed to get it. I set the torrent downloading, and headed over to get UNetbootin to burn the ISO to a USB stick. As I’m running on a Mac, UNetbootin has a few weird quirks: the favourites links in the Finder don’t work, and there’s no retina support, so it’s ugly. It does the job, though; you just have to locate the ISO manually by traversing the whole directory tree.

UNetbootin can’t set active flags or write the MBR, so the next steps are to unmount the disk and then run a few terminal commands:

fdisk -e /dev/rdiskX

Where X is the number of the disk. Ignore the error message here, and then type the following, hitting enter at the end of each line:

f 1  
write  
exit

Then, download the Syslinux binaries from Kernel.org. You need the mbr.bin file, which should be hiding under bios/mbr/mbr.bin. Once you’ve located it, do the following, replacing the X with the disk number again (you may also have to unmount the drive again):

sudo dd conv=notrunc bs=440 count=1 if=bios/mbr/mbr.bin of=/dev/diskX

Then follow the instructions on UNetbootin to burn the ISO file to an external drive. The whole process should only take a few minutes.

Installing the OS

I booted up the old laptop from the USB stick by setting it to the first priority in the BIOS, then chose the option to Install Xubuntu. The laptop booted cleanly into a desktop, and the installer launched immediately. I opted to do my own partition configuration, because I want to keep the OS separate from the data partitions; to do the same, choose ‘Something else’ when the installer prompts for ‘Installation type’. I gave 10GB to the OS itself and 4GB to swap, and the rest I formatted as XFS and mounted at /mnt/data1 (eventually, I will have /mnt/data as pooled storage).

I name all of my computers after ships from Iain M. Banks’s Culture series, and this one is no exception. Given the laptop’s repurposing as a server, I decided on ReformedNiceGuy as the hostname.

I chose to connect the laptop to the internet immediately and allow it to update itself automatically as it installed. The installation took about 30 minutes, including the downloaded updates. I restarted the computer, and hit a snag: Grub loaded, but Xubuntu wouldn’t, and there was just a black screen. I’ve had a few problems with this laptop before, so I tried booting with nomodeset on the grub command line, and it booted like a charm. I added this to the GRUB_CMDLINE_LINUX_DEFAULT line in the /etc/default/grub file, and ran sudo update-grub.
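For anyone hitting the same black screen, the relevant line in /etc/default/grub ends up looking like this (‘quiet splash’ is the Ubuntu default; nomodeset is the addition):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

Remember to run sudo update-grub afterwards, or the change won’t take effect.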

Configuring the OS

Once I logged in, there were (surprisingly, as I thought it would have installed them already) a few updates available. I ran them, and set the OS to automatically install updates.

Next up, I wanted to keep power consumption (and so fan spin-up and noise) low, so I ran sudo apt-get install cpufrequtils to install a CPU governor, and then ran:

sudo sed -i 's/^GOVERNOR=.*/GOVERNOR="powersave"/' /etc/init.d/cpufrequtils 

This command will tell the OS to always use powersave mode. This should help with my overheating problem too.

Next up is to allow file sharing. Because I opted for Xubuntu, it’s not baked in. There are a few ways to fix this, but I decided to just install Nautilus and use that instead. The command you need is:

sudo apt-get install nautilus nautilus-share

Once done, I opened a Nautilus window from the command prompt (sudo nautilus), navigated to the root directory, right-clicked on mnt, and chose ‘Local Network Share’. At that point, I was prompted to install Samba and a few other dependencies, and to restart the session.

I opted to restart the computer completely instead. After the reboot, I was able to read files over the share, but couldn’t create any.

I set the permissions on the /mnt directory and its children to 777, and logged out and back in. Bingo! It works!
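For the record, that was just the following (777 is convenient on a trusted home network, but be aware it leaves the files wide open):

sudo chmod -R 777 /mnt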

Next up, I decided to set up pooled drives.

Pooling drive space with mhddfs

This was pretty simple. I installed mhddfs with sudo apt-get install mhddfs, and then created a directory for the data: /mnt/data.

I ran the following command:

mhddfs /mnt/data1,/mnt/data2 /mnt/data -o allow_other

and shared the data directory. It works beautifully.

I decided to mount my portable USB drive to /mnt/data2, but you can set up any directory there. I can see 721GB of free space on the drive, which is nice, and about what I was expecting. Over wireless, files are taking a couple of seconds longer to load, but nothing dreadful, and I plan on plugging directly into the router once I’m happy with the setup.
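To make the pool come back automatically after a reboot, an /etc/fstab entry along these lines should do the trick (this follows the pattern from the mhddfs documentation, so treat it as a starting point):

mhddfs#/mnt/data1,/mnt/data2 /mnt/data fuse defaults,allow_other 0 0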

Installing a media server

I wanted to run a media streaming server on this machine too, so I downloaded and installed Plex. I ran the following command to start it:

sudo service plexmediaserver start

It was then accessible on my home network at http://reformedniceguy:32400/web. I configured the server according to Plex’s instructions.

When Plex was indexing the files, the laptop ended up overheating and shut itself down (not very gracefully, I might add), so I tried adding acpi_osi=Linux thermal.off=1 to grub, and I installed TLP. I also set the governor to powersave with the following command:

sudo cpufreq-set -g powersave

And that’s it. It’s up and running. I just had to configure Plex a bit. Total time taken was around 4–5 hours, including waiting for reboots, updates, etc.

But I have to start again, because none of my attempts to mitigate the overheating were actually successful, so I’m going to have to use my old Windows 8 Pro product key, and see if I can get that to work.


Choosing a VPS Provider

I use VPS Dime, and have been very pleased with the price and performance, but you could just as easily use AWS, Digital Ocean, or any other VPS provider.

The instructions provided here are for VPS Dime’s cheapest VPS, but once the server spins up, it makes no difference who the provider is. AWS is fairly complex and designed for enterprise clients, so the user interface is not as nice, but they do have a free tier which is more than enough to cater for most small sites.

Choose the type of VPS you would like with the sliders on the home page, and choose Buy Now! when you are happy with the price and specs.

You will then be taken to a configuration page.

Choose an appropriate hostname (it doesn’t have to be anything linked to your website; it’s just for you to identify the server). Make a note of the root password! You will need it later.

You can then choose a geographical location (e.g. Dallas). Ideally, this should be as close as possible to where the majority of your visitors will be, although the connection speed to you might also be a factor.

Choose an operating system. The rest of this series is based on Ubuntu Trusty Tahr (14.04), so select either the 32 or the 64 bit version of this operating system.

We won’t be using any of the other options in this section, so you can skip it. As you become more accustomed to web development, you may decide you want to fiddle with these, but let’s ignore them for the time being.

Choose Checkout >> on the right, and pay for your VPS.

Once your VPS is provisioned (it may take a little time), head to your account details page, and find the IP address. You will use this to log in to your VPS, so make a note of it alongside the root password. If you forgot to write down your root password, most VPS hosts will allow you to retrieve it somehow.


Setting Up Your DNS

You don’t want to have users typing in your IP address to access your server, you want them to be able to access your server via the name you paid for. The exact steps to follow will depend on your registrar.

First though, a little background. When you register your domain, you only have the name reserved. When someone tries to access your website, they will still need to be told where to find it. That’s where a name server comes in, and the system that is used for this is called DNS — the Domain Name System.

Along with your name, the registration for your domain says “if you want to know where this website is, you need to talk to this server”. Your computer will then go to the domain name server and say “excuse me, I’m looking for your-domain-name, could you tell me where it is please?”. The domain name server will then say “of course, it’s over there”.
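You can eavesdrop on this conversation yourself using dig, which prints the IP address a name resolves to:

dig +short your-domain-name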

In general terms, most registrars also run domain name servers, but this may not be the case for yours. If not, you will need to explore how to set up your domain with a different name server; your registrar should have some information on how to do this. On the client page for most registrars, though, you should be able to find a page which allows you to add DNS entries.

Most often, this will be listed under the ‘Advanced’ section, and may say something about DNS zones. You need to add an A record (an ‘address’ record, which maps a name to an IP address). Most providers will give you the option to add A records, MX entries, CNAMEs, and text records, and you may also be able to set NS entries. Your A record should point your-domain-name at your VPS’s IP. You may also wish to add a CNAME pointing *.your-domain-name at your-domain-name.

This will allow you to configure subdomains on your server without having to keep fiddling with your DNS server. We’ll come back to your DNS server in a later tutorial on setting up email.
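In zone-file notation, the two records described above look something like this (the domain and IP are placeholders):

example.com.      IN  A      203.0.113.10
*.example.com.    IN  CNAME  example.com.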


Setting Up Your VPS

So now you have a server in the cloud. Nice! But at the moment, it’s not doing anything. It’s connected to the internet, sure, but it’s not actually ready for you to do anything with yet. In order to do that, you’re going to need to get used to the command line. If you’re on a Mac or Linux, SSH is built into your system, but if you’re on Windows, you’ll need to download an SSH client. I recommend PuTTY, but the choice is up to you. The only real difference is that Windows users will need to launch PuTTY to connect to the server, while Mac and Linux users can simply type ssh in a terminal.

Once you’re ready, type the command below on Mac/Linux, or establish a connection via PuTTY using root as the username. You will be asked for a password: use the root password you wrote down earlier to log in to the server.

ssh root@<server-ip>

Once you’re in, you’ll see a line ending in a $ (or a # when you’re logged in as root), and a place for you to type.

The first thing we need to do is change your password. Type:

passwd 

You will be prompted to change your root password. This helps to secure your server.

Next, make sure everything is up-to-date. Check for updates and install them by typing the two commands below:

apt-get update  
apt-get upgrade

Now your server is up-to-date, it’s time to set up a LAMP stack on it. LAMP stands for Linux, Apache, MySQL, PHP. There are alternative stacks available, but this is the most common configuration. You may not wish to use all of its features at the moment, but if you keep developing and want to try new things out, this is more or less the minimum you will need.

Apache

Apache is an enterprise-grade web server, and you can install it with just a few keystrokes. Type:

apt-get install apache2

You will then have to press Y to confirm you want to install all of its dependencies, and Apache will install itself.

You can test Apache has installed correctly by navigating to your IP address, and you should see a page with the text “It works!”.

MySQL

MySQL is a database system. You can install it by typing:

apt-get install mysql-server 

You will have to press Y again. As part of the installation, you will be asked to set a password for the MySQL root user. This is important, so make a note of it too. Once the installer has finished, it’s time to secure things a bit. Type:

mysql_secure_installation

Then press enter and answer yes to all questions except the first one (you’ve just set the root password, so there’s no need to change it again). This will help tidy up some of the less secure default settings.

PHP

PHP is a server-side language that allows you to run most web apps. To install it, type:

apt-get install php5 php-pear php5-mysql

This will install the MySQL dependencies so that web apps can talk to your databases too. Now to test it, type the following:

echo "<?php phpinfo(); ?>" > /var/www/html/info.php

Then you can check PHP has installed correctly by visiting http://your-server-ip/info.php

Note: you will almost certainly have to install and enable additional PHP modules later if you want to do anything interesting, but this is more or less the bare minimum. You may also find you have a different document root depending on the exact setup of your server. This is the default, though.
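As an example, a few of the modules you’re most likely to want later can be installed in exactly the same way (these package names are for Ubuntu 14.04), followed by an Apache restart to pick them up:

apt-get install php5-curl php5-gd php5-mcrypt  
service apache2 restart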

Security

Next, you should set up a new user without root privileges. You can do this by typing:

adduser <username>

You can choose the user name. Fill in a new password, and as much info as you want to add about the user.

Don’t log out just yet: first, make your life easier by allowing your new user to temporarily switch to root to run commands. Do this by typing

visudo 

and then adding a line at the bottom that says

<username> ALL=(ALL:ALL) ALL

Then hit Ctrl+X, and then press Y and enter to save the file.

Still logged in as root, type:

nano /etc/ssh/sshd_config

Check for the line that starts with PermitRootLogin, and change it to

PermitRootLogin no

Now type:

reload ssh  
exit

You can now log back into your server using your own username and password, and perform any actions requiring root access by typing sudo in front of the command.
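For example, checking for updates as your new user now looks like this:

sudo apt-get update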