Joe Innes

A Few Snaps From My Birthday

Meat Coin

I wrote a PWA to help reduce meat consumption. It's a fairly simple app - you install it, and for every plant-based meal, you get one 'meat coin'. For every meal with meat, you use one 'meat coin'. A meal with dairy costs half a meat coin.

All data is stored locally in your browser*. There's no log in, no cloud server, no syncing, no analytics, no nothing, just nice big buttons for you to press.

Check it out at

Interesting stuff I used:

* this also includes the sandboxed browser PWAs run in.
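The coin arithmetic is simple enough to sketch in a few lines of JavaScript. This is a hypothetical illustration, not the app's actual code; the real app presumably persists the balance in the browser's localStorage rather than a variable.

```javascript
// Hypothetical sketch of the meat coin rules described above: plant-based
// meals earn a coin, meat costs one, dairy costs half. Illustration only.
const MEAL_COST = { plant: -1, dairy: 0.5, meat: 1 };

let balance = 0; // the real app would keep this in localStorage

function logMeal(type) {
  balance -= MEAL_COST[type];
  return balance;
}

logMeal('plant'); // earn a coin: balance is now 1
logMeal('dairy'); // spend half a coin: balance is now 0.5
```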

Most Interesting Christmas Presents

My sister did a great job of Christmas shopping this year. I'm mostly bored of the whole giving gifts that no-one needs thing, but here are a couple that she gave that I thought were awesome.

Goal 04: Quality Education
ABOUT THE BAND Each #TOGETHERBAND embodies the spirit of the Global Goals. Made from Ocean Plastic, for every #TOGETHERBAND sold, 1KG of plastic is removed from marine environments. Remember, each pack contains two bands, one to wear & one to share. The clasp is made from Humanium Metal upcycled fr…

My Mum and I are both trained teachers (although I no longer teach). She gave us a twin pack of these to share. They're designed as a two pack, one which you wear, and the other which you give to someone to raise awareness. Proceeds go towards the UN's Global Goals project. Made of recycled plastic and decommissioned illegal firearms.

Homeless Tartan Scarf
The Big Issue Shop - Shopping with a social echo

Using a tartan specifically designed for the homeless, this scarf raises awareness of homelessness, something it's disgusting we still need to do in the developed world as we move into 2020. It's from the Big Issue shop, so there's a strong charitable element involved as well, although I did struggle to work out from the Big Issue shop itself exactly how much of a direct contribution buying a scarf like this makes towards solving the problem.

Long Distance Friendship Lamp - Wood | Wi-Fi Touch Lights
Light up loved ones’ lives—across town or across the world—with these in-sync lamps.

This one isn't as socially good, but it's a great idea. Often, my sister is too busy or tired to call or write a message to Mum, but if she taps her lamp in Switzerland, my mum's lamp in the UK will also light up. You can press to choose colours, and the lamp slowly fades out over an hour and a half.

We got engaged!

Life without Facebook

I'm almost there, just need to start dotting the I's and crossing the T's with this new site, and I'm golden.

Weather Display

T-Mobile Austria's Customer Service Car Crash

It all started on April 4th when a well-meaning tweet was sent to T-Mobile Austria:

T-Mobile sent the following alarming response:

To understand why this is alarming, I need to take a little trip down the hallways of cybersecurity.

How companies store passwords

It's important to understand that when you log into a secure website, the company that you're sending your password to (should) never actually store that password. That sounds counter-intuitive at first glance: after all, how can you validate the password that the user sent if you don't store it?

Let's start with the simplest version: we have a secret code to 'encode' your password, like a Caesar cipher, where I add one to every letter. Your password is 'hunter2', so when I save it in my database, I save ivoufs3. Next time you try to log in, you'll send me your password, and I'll add one to every letter, then compare it to what's in my database. If you type hunter2, then I'll always end up with ivoufs3, and I'll know you typed in the right password. However, a hacker who breaks into the database only knows the encoded version of the password, not your original password.

This allows us to validate your password without ever actually saving your password. There are some really clever encryption methods which are much stronger than a Caesar cipher, but the principle is still the same.
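A toy sketch of that encode-and-compare flow in JavaScript (illustration only; this 'cipher' offers no real protection):

```javascript
// Toy Caesar-style encoder: shift letters and digits forward by one,
// wrapping z->a and 9->0. For illustration only, not real security.
function caesarPlusOne(password) {
  return [...password].map((ch) => {
    if (ch >= 'a' && ch <= 'z') {
      return String.fromCharCode(((ch.charCodeAt(0) - 97 + 1) % 26) + 97);
    }
    if (ch >= '0' && ch <= '9') {
      return String.fromCharCode(((ch.charCodeAt(0) - 48 + 1) % 10) + 48);
    }
    return ch;
  }).join('');
}

// At signup we store only the encoded form...
const stored = caesarPlusOne('hunter2'); // 'ivoufs3'

// ...and at login we encode the attempt and compare.
const loginOk = caesarPlusOne('hunter2') === stored;  // true
const loginBad = caesarPlusOne('hunter1') === stored; // false
```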

A very clever encryption method might even be one-way. I won't go into the details of these here, but basically, what this means is that if you have the output, you can't 'reverse' it easily to get the input. A very simple example of a one-way encryption method would be to do something like:

  1. Convert using the Caesar cipher (hunter2ivoufs3)
  2. Create a sum using the differences between letters (so, i → v = 13, v → o = -7, giving '13 - 7 + 6 - 15 + 13 + 10', assuming the numbers are listed after the letters)
  3. Complete the sum, giving 20.
  4. Store this number in your database.

Even if a hacker gets the number 20 somehow, they will never be able to tell that your original password was hunter2.
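Those four steps can be written out directly (a toy, assuming digits are numbered after the letters, as described):

```javascript
// Toy one-way 'hash' following the four steps above. Letters are numbered
// a=1 ... z=26, and digits continue after the letters (so '3' counts as 29).
function shiftPlusOne(password) {
  return [...password].map((ch) => {
    if (ch >= 'a' && ch <= 'z') {
      return String.fromCharCode(((ch.charCodeAt(0) - 97 + 1) % 26) + 97);
    }
    if (ch >= '0' && ch <= '9') {
      return String.fromCharCode(((ch.charCodeAt(0) - 48 + 1) % 10) + 48);
    }
    return ch;
  }).join('');
}

function pos(ch) {
  if (ch >= 'a' && ch <= 'z') return ch.charCodeAt(0) - 96; // a=1 ... z=26
  return 26 + Number(ch); // digits listed after the letters
}

function toyHash(password) {
  const encoded = shiftPlusOne(password); // step 1: Caesar cipher
  let sum = 0;
  for (let i = 1; i < encoded.length; i++) {
    sum += pos(encoded[i]) - pos(encoded[i - 1]); // step 2: differences
  }
  return sum; // steps 3 and 4: the number we store
}

console.log(toyHash('hunter2')); // 20
```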

This is a very simple example for illustration, and it has serious problems because it produces a lot of 'collisions' (if I typed in au as a password, it would also give me 20, so I'd be able to log in using this 'wrong' password). There are some very well-known functions which don't have this problem: you can spend all day feeding them different passwords, and they'll produce a different so-called 'hash' for each one, and they're completely one-way.

But hackers are a clever bunch, and they soon realised that if these functions are reliable and quick in converting inputs to outputs, they can just try to create lists of every possible password. That way, they don't need to 'reverse' the maths, they just need to look it up. For example, one of the most popular algorithms is known as SHA-256, which gives us f52fbd32b2b3b86ff88ef6c490628285f482af15ddcb29541f94bcf526a3f6c7 for hunter2. Unfortunately, this is such a commonly used password that F52FBD32B2B3B86FF88EF6C490628285F482AF15DDCB29541F94BCF526A3F6C7 is already listed in what are called 'Rainbow tables'—huge lists of passwords and their hashes, making it very simple for a hacker to look them up.

Determined hackers don't even need rainbow tables any more. For the price of a retail computer, anyone can build a machine that can run through every single possible 8-character password in less than 4 days.

To make it difficult to crack a large number of passwords at once, security experts recommend using a 'salt'. This is a random string added on to each password before hashing it.

For example, hunter2 + 1EF9888BCA gives us 895B71C0196C0246DA4E39048866C630443C29A3F54404513F2BD3FDAF762A61.

This second part (1EF9888BCA) is stored in the database next to my password's hash. Because it's different for every user, it makes it impractical to use a rainbow table to get a massive list of passwords all at once. It doesn't protect single users, so if a hacker gets access to this database, they can use the method described in the link above to get a single user's password in less than four days. It does mean though that it would take more than a year to get 100 users' passwords, and so on.

So, to protect individual users, there's often a secret 'pepper' used, which is stored somewhere away from the database, in a secure location. Then, when you 'hash' the password, you use the secret pepper as well:

hunter2 + 1EF9888BCA + CheeseIsGreat = C3D0DB7D178552362DECD0832615E1B5955FF65785F1D0A3EBDEDB96FE7C358A.

Even if the database is compromised, individual users' passwords are still protected by the 'pepper', so the hacker would also need to find where this is stored, and compromise this too.

To summarise:

| Username | Password | Salt | Pepper | Hash |
| --- | --- | --- | --- | --- |
| joe | hunter2 | | | F52FBD32B2B3B86FF88EF6C490628285F482AF15DDCB29541F94BCF526A3F6C7 |
| joe | hunter2 | 1EF9888BCA | | 895B71C0196C0246DA4E39048866C630443C29A3F54404513F2BD3FDAF762A61 |
| joe | hunter2 | 1EF9888BCA | CheeseIsGreat | C3D0DB7D178552362DECD0832615E1B5955FF65785F1D0A3EBDEDB96FE7C358A |
| john | hunter2 | 2BED984510 | CheeseIsGreat | 635B34FA70CC99B9D67C4C662622AB53D8CFEB08224AA74C9C5CE2AD10EFA705 |

So what did T-Mobile get wrong?

Because these 'hashes' are not reversible, and it's unlikely the T-Mobile customer service representatives are cracking the passwords every time to get the first four characters, we know that one of three things is happening (from least likely to most likely):

  1. T-Mobile are not salting their passwords, and are maintaining an internal rainbow table (maybe they are using a 'pepper')
  2. T-Mobile are not hashing their passwords at all
  3. T-Mobile are storing the first four characters of a user's password separately in plain text.

Every single one of these is a bad idea, security-wise.

'But Joe', you say, 'I thought this article was about customer service?!'. OK, OK, here we go:

Mistake № 1: Handwaving

Once Andrea's response was sent out, the original tweeter replied (politely), asking how it could be fixed.

What followed from T-Mobile was an absolute disaster of a tweet from a different social media manager:

Lesson: don't patronise your customers, or dismiss their concerns.

Mistake № 2: Doubling down

By this point, the story was starting to pick up a bit of momentum, and another tweeter weighed in asking:

At this point, Käthe should probably have checked with her boss before replying, but didn't. Her response was a hubris-filled surprise:

Now, I'm sure T-Mobile take security seriously, but this is Donald Trump-level bluster.

Lesson: rather than making a broad sweeping statement on a topic you clearly don't understand, check with an expert.

Mistake № 3: Making it personal

The following few tweets are some of the most bizarre tweets I've ever seen from a corporate account:

I'm surprised that no-one had relieved Käthe by this point, but she unleashed a passive-aggressive tirade against @Korni22 until her shift ended.

Lesson: when you're in a hole, stop digging, and certainly don't start insulting people

Mistake № 4: Not owning it

Eventually, it seems Käthe disappeared from the picture, and T-Mobile's 'company spokesperson' Helmut weighed in with what is presumably an official statement, no doubt the product of some hasty conference calls.

No apology, no acknowledgement of concerns, and no explanation that passes any muster. I'm not a PR wizard, but at this point, surely a better approach would have been:

"We understand customers are concerned about security processes—customers' passwords are stored in encrypted databases, and we use one-time PINs. Our responses yesterday were overconfident that this was adequate. We're reviewing urgently, and we'll let everyone know the outcome."

Lesson: own your mistakes, don't be afraid to apologise when needed

All in all, a bad day at the office for T-Mobile Austria. Customer service is rarely easy, but it's also pretty hard to get it this wrong.

Now for real: The UK Tax System Explained in Beer

An article posted on LinkedIn has been gaining popularity as a simplified explanation of the UK tax system. At first glance, it seems quite interesting and well thought through, so I did some fact checking. This is what it really looks like.

Suppose that once a week, ten men go out for beer. The bill for all ten comes to £288.

If they paid their bill the way we pay our taxes, it would go something like this:

  • The richest man earns £781 per week.
  • The second richest two men earn about £3.80 each.
  • The next richest three men earn about £2.45 each.
  • The next richest two men earn about £1.75 each.
  • The poorest two men earn about £1.25 each.

The richest man shouts ‘The fairest way is to split it ten ways — everyone pays £28.80!’

He’s outvoted by the other nine men. They decide that the fairest way is to divide the bill proportionately based on income. The combined earnings at the table are about £800. Based on this, they came up with the following:

  • £781/£800 = 98%, so the richest man paid £281.
  • £3.80/£800 ≈ 0.5%, so the next richest two men paid £1.36 each.
  • £2.45/£800 ≈ 0.3%, so the next three richest men paid 88p each.
  • £1.75/£800 = 0.2%, so the next richest two men paid 63p each.
  • £1.25/£800 = 0.15%, so the poorest two men paid 45p each.
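Those proportional figures check out, give or take rounding; a quick script to reproduce them:

```javascript
// Fact-checking the proportional split: each man pays (income / total) of
// the GBP 288 bill. The results match the list above after rounding.
const incomes = [781, 3.8, 3.8, 2.45, 2.45, 2.45, 1.75, 1.75, 1.25, 1.25];
const bill = 288;

const total = incomes.reduce((a, b) => a + b, 0); // about GBP 800
const shares = incomes.map((income) => (income / total) * bill);

console.log(total.toFixed(2));
console.log(shares.map((s) => s.toFixed(2)).join(', '));
```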

So, that’s what they decided to do. At the end of the week, each man looked in his wallet.

  • The rich man found he had £500 in his wallet.
  • The second richest two men had £2.44 each left.
  • The next three had £1.57 left each.
  • The second poorest two men each had £1.12.
  • The poorest two men were left with 80p each.

The next time they go to the pub, the poorest 9 guys say to the richest ‘Look, we could barely afford to eat last week — we need to come up with a fairer way to split it.’

The men worked on the back of napkins, and tried to come up with a system that left everyone with enough money to eat. In the end, they settled on the following:

  • The first and second would pay 4p each
  • The third and fourth would pay 14p each.
  • The fifth, sixth and seventh would pay 27p each.
  • The eighth and ninth would pay 52p each.
  • And the tenth man (the richest) would pay £286.

At the end of the week, each man looked in his wallet again.

  • The rich man found he had £495 in his wallet.
  • The second richest two men had £3.28 each left.
  • The next three had £2.18 left each.
  • The second poorest two men each had £1.61.
  • The poorest two men were left with £1.21 each.

The poorest two men were just about able to afford bread and cheese for their families. The second poorest were able to buy eggs. The next three could buy some ham to have with their bread, cheese, and eggs. The second richest two men could also buy some salad items.

The rich man complained about how unfair the tax system was.

And that, ladies and gentlemen, is how the tax system really works.

The Silver Yaris

The alarm clock rings, but I’m already awake. I know it’s 6:45 now, and I’ve been awake for probably about twenty minutes already. I hit snooze anyway, because the air is cold in the flat. Nine minutes later, I drag one leg after the other over the edge of the bed, and stumble into the bathroom. When I turn the light on, I realise that I can’t see myself in the mirror, but I can’t be bothered to go back and get my glasses. I sigh and sit down to piss.

I brush my teeth, and squirt some anti-perspirant under my armpits, before I go back into the bedroom. I pull on a pair of shorts and a t-shirt, find my glasses on the bedside table, and put my phone in my pocket. I tie my shoelaces, and grab my headphones before plugging them in and hitting ‘play’ on my ‘thirty minute jog’ playlist.

I spent a lot longer than 30 minutes picking the songs to make sure they were just the right tempo to keep me moving before my morning coffee. It had worked, just about, when I went for my first jog on Monday. It’s Thursday now, and my calves have just about recovered enough to go again.

I close the door behind me, and twist the key, before tying it to my waistband. I’m already partway through the second song by the time I leave my front door, and I make a mental note to add another 5 minute song at the beginning of the playlist.

Here goes nothing, I say to myself, as I put one foot in front of the other, and my calves remind me how much I’ve let myself go.

There’s a special smell to the air in the morning, and I quite enjoy it, even though the coldness of it burns the inside of my nose a little. I’m still not thinking very clearly and I almost get hit by a car as I run out across the street towards the green.

We call it the green, but really, it’s just a cultivated bit of moorland at the edge of the town. I run through the flowerbeds, and I get to the end of the green before I have to slow to a walk. I didn’t make it this far on Monday, so I’m quite pleased with myself. I walk until the end of the song, and then I start running again, out onto the moor. It’s mostly footpaths and tracks here, with a couple of access roads, so I’m a bit surprised to see a silver car parked up the hill a bit on the right.

I decide to jog past to see the car up close. As I get closer, I can see it’s a Toyota Yaris, probably about fifteen years old. I pretend I’m a secret agent running for a getaway car, because that kind of thing gets me through a run, even though I’m nearly thirty and it makes me feel a bit childish. I can see a figure in the passenger seat. I can’t think of where the driver would have gone to — it’s a weird place to park to go down into town, but there’s nothing else in any other direction.

I’m nearly there, and I’m imagining a hail of bullets zinging past my head as I approach the car from the passenger side, but my side is starting to hurt, so I slow down to a quick walk. I take a few breaths and carry on walking. As I walk up to the car, I can see some kind of weird cable reaching around the far side from the back of the car to the roof, and I wonder if it’s a tow rope, and the driver has gone to find someone who can give the car a pull.

Suddenly, the pieces click into place. The cable isn’t attached to a tow hook that’s been left on the roof, it’s going in to the driver’s window. And the cable isn’t a cable, it’s a hose. And the hose isn’t attached to the tow eye, it’s duct taped onto the exhaust. And the passenger isn’t moving.


I start to run, and I’ve forgotten all about the spies chasing me. I can’t hear the engine running, and I don’t know if that’s a good thing or not. It takes me about ten seconds to get my hand on the passenger’s door. I tug, but the door won’t budge. The passenger’s eyes are closed, and don’t flutter when I scream.

I sprint to the other side, and pull the hose out of the window, for all the good it’ll do. The driver’s door is locked too, so I punch the glass. It flexes a little but doesn’t give, and I think I break my little finger. I try a couple of times with my elbow, but bursts of pain shoot down into my wrist, so I stop. I look around for a rock. I grab a pointed looking one from next to the wheel, and use the sharp end to smash the window. The window turns into a rain of glass, and I reach in to unlock the door from the inside.

For a second I stop before diving into the car — I’m not sure whether the gases would affect me even if the door was open. I decide that it’s worth taking the risk but I’m not entirely sure what to do, so I slap the passenger across the face. His head falls away to the side, and his eyes still don’t open. I undo his seatbelt, thinking about the absurdity of buckling up as you asphyxiate yourself as I try to drag him across the car.

I’m not strong enough to move him more than a few inches with my arms this outstretched, so instead, I pull up the lock inside the door, and run around to the passenger side. As I open the passenger door, the body slumps out of the car onto the floor, and I realise for the first time that his lips are blue. I drag him onto his back, and I think back to my first aid training, and I remember that CPR is fifteen chest compressions to two breaths. It seems appropriate to start with the breaths, but as soon as my lips touch his, I know there’s no point. His lips are stone cold. I sit back and feel for a pulse on his wrist, and I’m almost excited as I feel a throb, before realising that I’m using my thumb, and it’s my own pulse I’m feeling. I try again with my fingers, and feel nothing. I try with his neck, and still can’t find anything.

I lie on my back for a minute, exhausted and out of ideas.

Then I take out my phone, and dial 999.

Operator, which emergency service do you require?

Which emergency service do you require when you find a dead body? I sure as hell don’t know. I know that I’m probably supposed to say where I am and what the situation is, but all I can manage is “I think he’s dead. I broke the window, he’s cold. It’s a silver Yaris.”

This is a short story based on a writing prompt found here, and isn’t based on any real-life situation.

The AWS Outage—What Happened and Why Does It Matter?

Yesterday, one of Amazon’s data centres suffered some kind of catastrophic failure, and their S3 service went down. US-EAST-1 is one of the data centres the tech giant has on the Eastern seaboard (in Northern Virginia, to be precise). A large portion of the internet was affected, with a wide variety of websites hit by the outage.

What is AWS and S3?

AWS stands for Amazon Web Services, and is a collection of technology platforms which can be used by anyone to run websites. It includes EC2 (virtual server hosting), RDS (hosted databases), SES (email sending and receiving service), and many other services such as S3, which was affected in the outage yesterday.

S3 stands for Simple Storage Service, and is basically an online file storage platform. For a small fee, website owners can upload static assets to Amazon, who will look after them and serve them up on request. As an example, Netflix, Reddit, Dropbox, Tumblr, and Pinterest all use S3 to host critical parts of their websites. Think of it as a kind of cloud-hosted USB hard drive.

Why Would a Web Developer Use S3?

Often, hosting platforms limit the amount of data a particular website can transfer in a month, or charge money based on the amount of data transferred. Using a third party file storage system can be much cheaper, and you only pay for what you use (as opposed to what you might use, which is the normal pricing model used by web hosts — you don’t have to pay for ‘unused space’ on your server).

If you have a website where users might upload content, then S3 already has all of the infrastructure needed to upload and manage content, and the web developer doesn’t need to write server code to handle file uploads.

S3 has an availability SLA of 99.9% (known as ‘three nines’), which works out at around 43 minutes of downtime per month, after which service credits are offered. This is considerably better than most web hosts offer.
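For the curious, the 43-minute figure is just 0.1% of a month:

```javascript
// 'Three nines' availability allows 0.1% downtime. Over a 30-day month:
const minutesPerMonth = 30 * 24 * 60; // 43200 minutes
const allowedDowntime = minutesPerMonth * (1 - 0.999);
console.log(allowedDowntime); // roughly 43.2 minutes
```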

So What Actually Happened?

As yet, Amazon have not released details of what exactly caused the outage, but it is clear that their US-EAST-1 DC was failing to deliver some portion of the files stored there. Amazon referred to this simply as ‘increased error rates’, but many users were reporting a full outage. You may have seen missing images, found some websites completely unusable, or seen core functionality of websites working incorrectly.

In a rather funny twist, Amazon themselves use S3 to host their service status page, so they were not immediately able to update this to reflect the fact that the service was unavailable.

What Lessons Can Developers Learn From This?

In the wake of this outage, developers around the world will be under pressure to build more redundant storage solutions. This outage, although it was relatively short (a few hours), will probably result in an awful lot of lost productivity, and potentially also lost sales and revenue. Business people in the higher echelons of companies will likely be very unhappy with this loss, and will be looking to mitigate it going forward.

What Lessons Can Amazon Learn From This?

In the short term, Amazon are going to remain the leading storage provider due to their cost and reputation. However, Amazon are going to need to work to rebuild confidence in their services going forward to retain their huge market share. Other providers will be starting to offer redundant solutions to compete with Amazon. To counter this, Amazon will need to consider whether they can make a profitable service with automatic fail-overs, duplicating data across multiple data centres, rather than relying on developers to implement this themselves.

If Amazon get their marketing right, this could even end up turning them a profit in the long run.

What Lessons Can Users Learn From This?

There’s not much you can do about this yourself, but it’s a great opportunity to better understand how much of the internet is heaped up in one place. Amazon Web Services is a huge platform that powers many of your favourite websites, and in the 10 years it’s been operational, it has mostly been working silently in the background.

As of writing this, S3 is back up, and all services in North America are operating normally. I would expect a preliminary root cause analysis to be available by the end of this week.

Pomodoro Timer


Top Tips For Getting Hacked

Here is a step-by-step tutorial for anyone who would like to get hacked.

  • When you install OSMC on your Raspberry Pi, be sure to leave the default user as osmc and the password for that account as osmc
  • To make sure that hackers can gain access to your Pi, make sure that you don’t configure password logons (and certainly don’t enforce them)
  • To save hackers having to scan all of your ports, be sure to leave SSH running on port 22
  • Remember: unless you set up port forwarding on your router, your Pi will only be accessible from your home network. Configure port forwarding to make sure that anyone trying to access your Pi remotely can do so
  • Wait until you log in via SSH yourself to see whether anyone has accessed your Pi. You’ll know when you log in and your Pi says that the last log in was from Italy
  • For bonus points, make sure to keep a bitcoin wallet somewhere on the Pi. Make sure it’s called ‘wallet.dat’, otherwise the hackers might not find it
  • If you are super eager to get started, why don’t you try to find an IRC bot written in Perl. Maybe it could be base 64 encoded and wrapped in an eval statement just to obfuscate it.
  • If you can’t find one, you could pop over into a quiet IRC channel where all the members are named Hack-1234 and see if anyone with a real handle can help you.
  • Don’t have any form of login monitoring set up on your servers.

Clearly, I would never be so stupid as to try any of the above, but theoretically, if I had, I would perhaps have changed ports, passwords, enabled (and enforced) key-based SSH sign on, and maybe I’d have set up the following in /etc/ssh/sshrc

ip=`echo $SSH_CONNECTION | cut -d " " -f 1`
ifttt= # IFTTT Maker key
curl -i -s \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST --data '{"value1":"'"$(hostname)"'","value2":"'"$USER"'","value3":"'"$ip"'"}' \
  "$ifttt" > /dev/null
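Since the sshrc trick above only reports logins after the fact, the prevention side lives in sshd_config. Something along these lines (a sketch; the port number is arbitrary):

```
# /etc/ssh/sshd_config (excerpt)
Port 2222                    # move SSH off the default port 22
PasswordAuthentication no    # key-based sign-on only
PermitRootLogin no           # never allow direct root logins
```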

Javascript Testing For Idiots Who Don’t Understand Anything They’ve Read About It So Far

So, I’ve heard a lot about ‘testing’, and why it’s a great thing, and I’m fully on board. I’ve read all the ‘red, green, refactor’, I understand the main principles behind Test Driven Development, and I’ve wanted to start for months.

But, I’ve never been able to wade through the hipster coffee shop menu of tools that will help me to do my tests. I tried to write my own ‘testing’ module, and it looked a bit like this:

export default function(fn, arg, expected) {  
  return fn(arg) === expected;
}

This was great, because I could write the following:

import Test from './Test.js';

function helloWorld(name) {  
  return "Hello " + name;
}

var passing = Test(helloWorld, "Joe", "Hello Joe");  
if (passing) {  
  console.log('All tests passing!');  
} else {  
  console.log('One or more tests failed :(');
}

I was very happy, because I’d written my own test. Then I realised that this would be no good for testing my great random number generator. So I iterated. Instead of passing in an expected value, I’d pass in a second function to check the result of the first! Obviously I’d have to pass in the arguments provided to the first function as well…

export default function(fn, arg, expected) {
  let result = fn(arg);
  return expected(result, arg);
}

Now, I could do the following:

import Test from './Test.js';

function getRandomNumber(max) {  
  return Math.floor((Math.random() * max) + 1);
}

function isMyNumberRandom(number, max) {  
  if (isNaN(number)) {  
    return false;  
  } else if (isNaN(parseFloat(number))) {  
    return false;  
  } else if (!isFinite(number)) {  
    return false;  
  } else if (number > max) {  
    return false;
  }
  return true;
}

var passing = Test(getRandomNumber, 5, isMyNumberRandom);  
if (passing) {  
  console.log('All tests passing!');  
} else {  
  console.log('One or more tests failed :(');
}

Even better! So now, I can write one function to produce a value, and another to check that the value is what I want it to be! Then I wrote a function that took two arguments. Suddenly, I needed to rewrite my “Test” function again, and decided that there must be a better way.

What is out there?

There’s very little out there in terms of documentation or beginner tutorials for someone who wants to get started testing. There are a few tutorials, but they jump from step zero to step ten without really explaining what they’re doing.

The testing frameworks out there are just more advanced versions of what I wrote above.

OK, so they might have exciting names like Mocha, and Chai, and Jasmine, but basically, all of them are just clever implementations of the above, with a lot of the hard work done for you.
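To demystify them a little, here's a stripped-down, hypothetical sketch of what the describe/it style boils down to (the real frameworks add reporting, async support, and much more):

```javascript
// Hypothetical mini-framework: 'it' runs a named test and records pass/fail,
// and 'expect' throws when an assertion does not hold.
const results = [];

function it(name, fn) {
  try {
    fn();
    results.push('PASS ' + name);
  } catch (err) {
    results.push('FAIL ' + name + ': ' + err.message);
  }
}

function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) {
        throw new Error('expected ' + expected + ', got ' + actual);
      }
    },
  };
}

function helloWorld(name) {
  return 'Hello ' + name;
}

it('greets by name', () => expect(helloWorld('Joe')).toBe('Hello Joe'));
it('spots a wrong greeting', () => expect(helloWorld('Joe')).toBe('Hello Jane'));
console.log(results.join('\n'));
// PASS greets by name
// FAIL spots a wrong greeting: expected Hello Jane, got Hello Joe
```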

How do I get started?

That’s up to you, but I’ll let you know how I started: I cheated. I used Facebook’s create-react-app script to generate a client side app with everything pre-configured.

npm install -g create-react-app  
alias crap=create-react-app # This step is not strictly required...  
crap MyApp  
cd MyApp

Now you can run

npm test

and it will show you that you have one passing test. You can open up the App.test.js file to see what your passing test is, and you'll see the following:

import React from 'react';  
import ReactDOM from 'react-dom';  
import App from './App';  
it('renders without crashing', () => {  
  const div = document.createElement('div');  
  ReactDOM.render(<App />, div);
});

So what is this actually doing? The block of imports shows us everything we need to run these tests (in this environment). We need React, obviously. We also need ReactDOM so React can interact with the DOM. Then finally, we need our component (which may have any number of its own imports).

Notice what we’re not importing:

  • special ‘test’ modules
  • ‘assertion libraries’ (whatever they are)
  • mocks, stubs, spies
  • any kind of virtual browser to run the tests.

These are already part of the implementation, all you need to do is write your tests, not configure the testing tools.

Let’s take a look in a bit more detail at the second part:

it('renders without crashing', () => {  
  const div = document.createElement('div');  
  ReactDOM.render(<App />, div);
});

So, in the first line of this, we’re pretending to write real English, rather than code. This is fairly normal for tests, and helps to show what tests are failing later on. We then start an arrow function for your test. The test creates a div, and renders the component into it.

This type of test is known as a smoke test, a term which was originally used in plumbing — you’d light a fire underneath a pipe, and then watch for smoke leaking out of the system. The term was also used later (and it’s not clear whether there’s any direct relationship between the two uses) for testing electronics — plug it in, switch it on, and if smoke comes out, something went wrong. Whatever the root of this term in software development, what it basically means is an incredibly simple test to confirm that everything’s running more or less as expected, without having to do any careful analysis.

Although this is a great first test, I’m not actually a big fan of this as a demonstration, because this is more or less the only ‘test’ you’ll ever write like this. In loose terms, as far as I can work out, this whole thing is wrapped in a ‘try’ block, and if an error is thrown, the test fails, but you can’t really do anything clever with it apart from check for exceptions thrown by your app (which you’ll see in the console anyway).

Test Driven Development — For Real

So, the first step in TDD is to write a failing test. Well, theoretically. The first step really is to decide what you want to achieve. To keep things nice and simple, and to get started, I’m going to write a simple component which renders a card with a title.

So the first thing I’m going to need is a Card component. Remember the TDD mantra — red, green, refactor — so the first thing I need is a failing test. It’s up to you how you organise your tests, but so far I’ve been doing it as one ‘test’ file per component, so I created a new test file: Card.test.js


In it, I imported React and the React DOM, as well as my component.

import React from 'react';  
import ReactDOM from 'react-dom';  
import Card from './Card';

OK, so, now I’ve got everything I need to start testing, so I’ll just write the same test as I had before (except with the ‘Card’ component). I’m actually going to add a wrapper around the whole thing which will show me what I’m testing — this is important when your tests have logical ‘categories’, like components.

describe('a card', () => {  
  it('renders without crashing', () => {  
    const div = document.createElement('div');  
    ReactDOM.render(<Card />, div);  
  });  
});

So, I run the test and sure enough, I get one passing test (for the app component) and one failing test (for the card component).

We can move on now and start writing code.


You’re not reading this article because you want to learn React, so I’m going to assume you know how to write a simple component. The trick to ‘getting to green’ is to write the minimum amount of code possible to pass the test. Don’t get all fancy with state initialisation, or click handlers, or anything — your test only covers the existence of the component — so just write the simplest render function you can imagine:

render() {  
  return <div></div>;  
}

Run your tests again — both are now passing!


Well, there’s not really much that needs refactoring here, this is the simplest possible code, so let’s write a new test.


First, let’s think about what functionality we want — we want a title to be displayed on our card in an h2 tag. Furthermore, we want to be able to pass in this title as part of a ‘data’ prop.

At the moment, we don’t have the libraries we need to actually inspect the rendered component, so we’re going to use Airbnb’s Enzyme utilities. First up, let’s install it:

npm install --save-dev enzyme

Enzyme has LOTS of utilities, and I have no idea what they all do, but the one I want is ‘shallow’, which will allow me to render a single component and check it for stuff. Here, I’m going to import it, and describe the test.

import { shallow } from 'enzyme';

describe('a card', () => {  
  it('renders without crashing', () => { ... });  
  it('displays a title when this is passed to it', () => {  
  });  
});

I haven’t actually written any test code yet, so if I run this test, it’ll report as passing. Although this is an extreme example, it demonstrates why it’s important to make sure you get a ‘red’ before you start coding.

So now I’m going to start writing the code for the test itself. First up, I’m going to define the ‘data’ prop which I’ll pass in to the component.

const data = {  
  title: "Test title"  
};
Then, I need to write up what this is going to look like when it’s rendered correctly (in JSX):

const expectedResult = <h2>Test title</h2>;

Now, rather than rendering the component as I did before, I’m going to use Enzyme’s shallow utility, and I’m going to call this the ‘wrapper’ (because this is the ‘wrapper’ for the h2).

const wrapper = shallow(<Card data={data}/>);

The last thing I’m going to do is to write an assertion, which is less complicated than it sounds. It’s just a clever way of writing the test which is nice and easy to read:

expect(wrapper.contains(expectedResult)).toEqual(true);

This assertion is based on the Jest framework (which comes out of the box with create-react-app), and you can read more about its APIs here — basically, you write ‘expect’, then an expression, then you chain it to one of the methods that will compare the evaluated expression to something. The ‘wrapper.contains’ here comes from Enzyme’s shallow utility — you can read its API here.

So all together, my test looks like this:

it('displays a title when this is passed to it', () => {  
  const data = {  
    title: "Test title"  
  };  
  const expectedResult = <h2>Test title</h2>;  
  const wrapper = shallow(<Card data={data}/>);  
  expect(wrapper.contains(expectedResult)).toEqual(true);  
});

I run this test — and it fails. Great. Now you can write some more code!


Again, write the absolute minimum code you need for the test to pass:

render() {  
  return <h2>Test title</h2>;  
}


OK, that feels bad. The test is passing, but I’m cheating! Get used to it. TDD is about doing the bare minimum. If you can pass the test, then you need to write another one. Don’t delete the old one though! It’ll be useful for making sure you’re not going backwards when you’re writing new code.


I don’t want myself to cheat any more, so I’m going to recalculate the title every time I run the test:

it('displays an arbitrary title when this is passed to it', () => {  
  const randomString = Math.random().toString(36).replace(/[^a-z]+/g, '').substr(0, 5);  
  const data = {  
    title: randomString  
  };  
  const expectedResult = <h2>{randomString}</h2>;  
  const wrapper = shallow(<Card data={data}/>);  
  expect(wrapper.contains(expectedResult)).toEqual(true);  
});


I might as well do it properly now…

render() {  
  return <h2>{}</h2>  
}


This will work, and it fits the requirement of the test. I can’t simplify this code any more, so I’ll move on. While I was rethinking this code though, I realised that if no data is passed to the component, it won’t even render — because it’ll try and access the ‘title’ property of ‘data’, which will be undefined. Time for a new test!


it("renders, but is empty if it doesn't get any data", () => {  
  const expectedResult = <h2></h2>;  
  const wrapper = shallow(<Card />);  
  expect(wrapper.contains(expectedResult)).toEqual(true);  
});

The simplest way I can think of is chaining up an if statement or two to pass this test:

render() {  
  if ( && {  
    let title =;  
    return <h2>{title}</h2>  
  } else {  
    return <h2></h2>;  
  }  
}

The code above works well, but won’t be too useful in future if I want to render anything else (like a description) onto the card. Instead, I’m going to put the whole thing in a try/catch block. That way, I can add additional required fields to the try section without writing lots more ugly if statements.

render() {  
  try {  
    let title =;  
    return <h2>{title}</h2>  
  } catch (err) {  
    return <h2></h2>;  
  }  
}

So that’s more or less it. Lather, rinse and repeat. Keep iterating — think of a feature, add a failing test, and then add code to pass the test. It’s nowhere near as complicated as it was when I first started reading about it. By now, you should have a nice set of passing tests to get you started, and a much better idea of the ‘hows’ of testing.

How To Trick Everyone Into Thinking You're Not A Racist

Top tips for pretending you voted ‘Leave’ because you’re not a racist! Not a single immigrant in sight! (that’s just how you like it, isn’t it?)

You say: “The Common Agricultural Policy is not fair, and is very expensive to the UK”.
Your opponent says: “Do you like Italian wine and French cheese? If the CAP didn’t exist, you’d be stuck drinking Greek wine and eating American cheese.”

You say: “Laws agreed on in Brussels overrule British law”.
Your opponent says: “I know, isn’t it horrible having to pay for everything in Euros! Oh wait, we got to opt out of that, didn’t we? Well, it’s horrible that foreigners don’t have to show their passports when they come to the UK! What, you mean we opted out of that too? Well, it’s terrible that we can’t pardon people-traffickers! Hang on, you can’t possibly say that we opted out of that too?! Britain supports 88% of decisions coming out of Brussels, including stuff like the working time directive and in reality, we get opt-out agreements for big stuff like Schengen, or the Euro.”

You say: “We could abolish the tampon tax.”
Your opponent says: “Because David Cameron and George Osborne seem committed to reducing VAT on feminine hygiene goods, I wholeheartedly agree that we should leave the EU.”

You say: “We could have saved Port Talbot, and we could save companies in similar situations in future!”
Your opponent says: “Port Talbot was costing Tata Steel a million pounds a day. Why not just send the invoice to the British taxpayer instead?”

You say: “We wouldn’t have to worry about renewable energy any more, and could profit from our oilfields and fracking.”
Your opponent says: “Well, that’s a perfectly valid point, Gideon de Pfeffel”.

You say: “We wouldn’t have to depend on the EU to make trade deals with other countries.”
Your opponent says: “Great! We could organise very favourable rates for shipping goods to North Korea!”

You say: “We could take the money that we currently pay into the EU, and use it to save the NHS!”
Your opponent says: “That’s a great idea, and I will certainly put the rest of the money towards the NHS. Just as soon as renovations are complete on my duck pond”.

Why Accessibility Matters, Even When You Think It Doesn't

When designing for the web, accessibility is often forgotten, and this is a bad thing. It’s bad because it means that users with screen readers and the like are not able to use your site.

This is unprofessional, and may even be legally classed as discrimination. There is precedent for companies being sued (and losing) for not making their websites accessible. The Wall Street Journal published a story in the past few weeks in which a judge ruled that Colorado Bag’n Baggage needed to change their website, and pay $4,000 in damages to a blind man, for failing to make their website accessible. In addition, the company has been instructed to pay the plaintiff’s legal fees, expected to be more than $100,000.

If that’s not reason enough for you, there’s another, off-label (pun intended) use for ARIA attributes you may not have thought of.

I work for a company that contracts for another company, who pay for a service delivered by a third party. Their website is not under my control, nor under the control of the company I work for directly, but I have to use it every day. If we want to make usability enhancements, or change the default behaviour, there’s a significant cost implication, and no-one wants to be left with the bill at the end of the day.

That said, the website leaves a lot to be desired. One of the functionalities the website offers is a monitoring screen for incoming chats in a variety of languages. These languages are displayed in small ‘windows’.

Whenever the page is refreshed, the windows (actually divs) all tile on top of each other in the top right hand corner. There is also a small memory leak which means that the page needs to be refreshed around once every four to six hours.

This shouldn’t be a problem, because this view is not really meant to be a dashboard. However in my office, we have this view on a large screen, displaying all of the incoming chats to ensure we pick them up in time.

Our chat managers organise the windows in a specific order, so that the language of the incoming chat is visible instantly. So, rather than incurring a large bill for making the modifications on the site itself, and running the risk of the vendor coming back and saying ‘wont-fix’, I decided to write a small Chrome extension that will move each window to the desired location.

At first, I thought this would be easy, as each window has a unique ID. I tried simply typing the following into the browser console.

document.getElementById('divId').style.cssText = "position: absolute; left: 0; top: 0;";

It worked beautifully. The targeted div (the incoming English chats) flew up to the top left hand corner of the screen.

So I refreshed, and tried the code again. Nothing happened. It appears this ID is not just unique per language queue, it is unique per session. So that idea went down the drain.

I hunted around, looking for anything I could use to target the chat windows. All of them had the same class applied, so I tried to iterate through each element, and apply the styles based on that.

This worked, but I couldn’t choose the order of the windows. We could have lived with this, but I wanted the extension to respect the order our chat managers had agreed on, so this wasn’t an acceptable solution. To make matters worse, the order of the elements seemed to change every time the page refreshed.

Eventually, I spotted the following attached to the div:

aria-label="Chat with the Service Desk"

Each language was labelled with the prompt a user would receive when they wanted to chat to us. I could work with that.

I quickly wrote up a couple of objects to index the styles and the languages, and then wrote this:

document.querySelector("[aria-label='" + langIds[lang] + "']").style.cssText = lookupStyles[lang];

After iterating through the languages, all of the windows snapped into place exactly as desired.

So when you’re thinking about making your site accessible, don’t just think about the straightforward stuff. Adding accessibility to your site gives others the opportunity to hook in to your software in ways you might never have considered.

Delegating my Job to VBA

In my work, one of my major embarrassments is falling behind my colleagues. I work with one other person on our Knowledge Base, and one of our goals is to have every document reviewed once every 6 months. I’m responsible for making sure that 250 or so articles are up to date, and I’m always falling behind, struggling to keep on top of my follow-ups with knowledge owners.

I’ve had enough of copying and pasting into Outlook, so I decided to write a macro that does the work for me.

Now, I’m no Excel expert, but I can write a little macro. So I started by exporting a list of the articles I’m responsible for from our ticketing tool. The tool allows me to export the HTML body of the article, and the ‘next review date’, as well as the article owner. Unfortunately, there’s no way to pull the article owner’s email address automatically out of the system, so I realised I would need to come up with some additional fields.

I settled on 5 columns at the beginning of the data:

A: Name of reviewer
B: Email address of reviewer
C: To Send?
D: Review due?
E: In progress

The ‘Review due’ field has a simple formula, checking whether the next review date has passed:

=H2<=TODAY()

I’m working on row 2 to begin with, and column H contains the ‘Next review date’.

Next up, the formula for the ‘To Send?’ field:

=IF(AND(D2, J2<>"", NOT(E2)), TRUE, FALSE)

This checks whether the article is due for a review, whether there is text in the article, and whether this is in progress. If the article exists, is not currently in progress, and is pending a review, then this column will be TRUE. The reason for the text check is that we currently have a few legacy articles which have their content in a different field. We are working through to fix this, but for the time being, we need to avoid empty emails.

Next, it’s fairly simple — in pseudocode, it looks like this

for each non-empty row  
  create a new email  
    set the 'To' field to the article owner's email address  
    set the 'Subject' to 'Knowledge Review: ' + the article title  
    set the 'Message' to a template email + the article content  
    display the email ready to send  

Turns out, Excel lets you create the HTML body of an email really easily. In the end, the entire macro looks like this:

Sub SendReviewRequest()

Dim OutApp As Object  
Dim OutMail As Object  
Dim rng As Range

Set OutApp = CreateObject("Outlook.Application")  
Dim x As Integer  
' Set NumRows = number of rows of data.  
NumRows = Range("A3", Range("A3").End(xlDown)).Rows.Count  
For x = 2 To NumRows + 2  
    If Range("C" & x + 1).Value = True Then  
        Set OutMail = OutApp.CreateItem(0)  
        Sheets("Raw Data").Activate  
        msg1 = "Dear " & Range("A" & x + 1) & ","  
        msg1 = msg1 & "<p>As part of our ongoing knowledge review processes, we have identified that the article below is up for review. Could you please take a look at it, and confirm that the information is correct, or make any necessary modifications and corrections?</p>"  
        msg1 = msg1 & "<p>Thank you,</p>"  
        msg1 = msg1 & "<p>Joe Innes</p><hr>" & Range("L" & x + 1).Value  
        On Error Resume Next  
        With OutMail  
            .To = Range("B" & x + 1)  
            .CC = ""  
            .BCC = ""  
            .Subject = "Knowledge Review " & Range("H" & x + 1).Value  
            .HTMLBody = msg1  
            .Display ' Show the email for manual review before sending  
        End With  
        ' SendKeys "^{ENTER}"  
        On Error GoTo 0  
        Set OutMail = Nothing  
    End If  
Next x  
Set OutApp = Nothing

End Sub

I disabled autosending (the commented line) because I want to manually review each email, and stuck a button on the main worksheet.

Bingo, it works. Once each email is sent, I switch the ‘In progress’ field to TRUE to avoid sending the same email twice. I’m planning on implementing an auto-incrementing ‘Follow-up’ column that subtly changes the body of the email every time the macro runs, so I can simply run the macro once a week to perform follow-ups as well as reach out to new article owners. Even as it stands, though, this will save me hours of work.

My Four F's of Effective Email Management

I work for a multinational company, and I’ve tried every to-do list manager going; I’ve even tried to write a few myself. I inevitably stick with each one for a week or two, and then give up. There’s a very simple reason for this.

My mailbox is my to-do list.

I rarely have action items that are not generated directly from an email, and I hate duplicating my efforts.

What I’ve tried

I’ve tried every productivity system going. GTD, Pomodoro, Don’t Break The Chain, Trusted Trio, and many more, and none of them really fit well with the way my work goes. I can rarely dedicate 25 uninterrupted minutes to a task, I don’t have daily tasks, and GTD means I spend more time filing and logging my daily tasks than I do working on them.

I’ve always loved the Inbox Zero ideology, but the key principles are challenging, and don’t work well with the way I need to manage my inbox.

Instead, I have developed my own system, based on Microsoft’s PIFEM with only a few tweaks.

PIFEM In One Minute

  • PIFEM uses a series of Outlook Categories and Due Date Flags for all items in your Inbox.
  • 4 search folders are configured to allow you to filter on items that are only due on certain days, grouped by Category/Priority.
  • This allows you to very quickly pivot on different groups of items to see things such as:
    • Items due today
    • Highest priority items
    • Low priority items to be read on “your bookshelf”
  • Most importantly, it allows you to prioritise each item, and to define when you want to work on an item.

How I adapted it

First of all, I analysed my current ways of working, and determined ways to streamline the process. I based my steps on the 4 D’s:

  • Delete
  • Delay
  • Delegate
  • Do

This was a great starting point, but there were some tweaks I needed to make.


I try never to delete emails. This is partly for evidence reasons (CYA in case someone accuses me of not doing my job properly), and partly for reference reasons. A lot of useful knowledge is stored in my mailbox, and I want that to be available and searchable further down the line. I played around for a while with calling this step ‘Archive’, before settling on ‘File’ (File fits my 4 F’s mnemonic).


There’s never any reason simply to delay acting on an email. The only kind of email that needs ‘delaying’ is one that requires action, but not immediately. As a result, I renamed this step ‘Follow-up’.


I rarely need to delegate something — I’m not yet high enough in the food chain. If it’s not something I can act on, I need to Forward it to someone who can act on it.


At the end of the day, this is the whole point of any productivity system — doing stuff. And what’s the point of doing stuff? To finish it. Yes, I might get involved in a longer email thread with multiple action points, but each email is likely to only have one or two action items, and I should be able to Finish with the email.

The Four F’s

Most time management systems insist on spending no longer than two minutes on each email. My system is to spend as long as I need until I’m ready to classify the message based on my four F’s. I work through the F’s in this way for each new email:

  1. Finish — can I finish this now? Do I have time to get this done right now? If so, do it. It might take two minutes, it might take twenty, but if I don’t have time or I can’t finish it for whatever reason, move on to the next step. Note that ‘Finishing’ an email simply means completing all action items. There may be times when I classify an email as ‘Finished’ after scheduling a meeting to discuss in more detail. This is fine. You need to read between the lines and determine each actionable task from the email. Once there are no more actionable tasks, the email is finished, even if you expect further replies in the email thread with more action items.
  2. Forward — given all the time in the world, would I be able to finish this? If not, forward it to someone who can. If I could finish this task, and I have decided not to, would it be appropriate to pass it to someone else? If yes, forward it, otherwise, move to the next step.
  3. Follow-up — I’ve decided not to Finish this, and not to Forward it for someone else to Finish. Is this a task I need to do at some point? If it is, schedule a Follow Up. This may be today, tomorrow, at some point this week or at some point next week. If I don’t need to do anything, then move on to the final step.
  4. File — move this email to an archive, as it contains no actionable tasks.

Practical Setup

You’ll need to set up a few quick steps and a few folders in order for the system to work effectively.


You should create a single, ‘Filed’ folder. You can call this what you like, but this will be the holding ground for all of your emails once they’re cleared out of your inbox.

Next, you’ll need to set up a few custom search folders, searching in your Inbox. The criteria are as follows:

  1. Inbox (for triage) — Due Date — does not exist
  2. Follow Up Today — Flag Status — not equal to — Completed and Due Date — on or before — [Today]
  3. Follow Up Later — Flag Status — not equal to — Completed and Due Date — on or after — [Tomorrow]
  4. Finished — Only items which: — are marked completed — note that this will also need to include your ‘Filed’ folder.

You’re all set with your folders, now to set up the Quick Steps

Quick Steps

The following quick steps should be set up. You can fiddle with these a little if you want to set up any additional preferences or classifications.

1. Finish — Flag Message — Mark Complete and Move to folder — Filed. I also mark the email as read, but this is optional.

2. Forward — Forward — to — blank and Mark complete. I also mark the email as read, but this is optional.

3. Follow Up — Flag Message — This week.

4. File — Flag Message — Mark Complete and Move to folder — Filed. I also mark the email as read, but this is optional.

You’ll notice that quick steps 1 and 4 do the exact same thing. It is intentional that these are separated — the idea is to force me to decide how to handle the email. The ‘Finish’ quick step is only for once I have completed any action items, the ‘File’ step means no action is required. You could consolidate these quick steps into one if you like, but I would recommend against it to avoid confusion.

Usage Examples

Now you’re all set up, you just need to practice using the system. I’ve put a few examples below; feel free to think about your own.

  • I receive an email from my boss, asking me to dig into a particular issue. I think it will take approximately half an hour to do, but it’s not a critical priority. I flag this for Follow up — Today
  • I receive an email from my colleague requesting urgent assistance with a case. It will likely take about 10 minutes to reply, but it’s very urgent, so I write and send my reply, and then flag this as Finished
  • I am copied in on an email requesting further information about something. I don’t need to act on this at all as it is already directed to the right person, so I flag this to be Filed
  • I am copied in on another email asking for some clarification. It’s not addressed to the right person, and I can’t see the right person in the recipients. I can’t answer the query myself, so I Forward the email to the right person.
  • My manager has asked me for a status update on a current project by the end of the week. It’s neither urgent, nor is it something I could easily do now given my current workload. I flag this for Follow up — This week.
  • I have now reached the end of my inbox, so I open my Follow Up — Today folder, and begin Finishing the tasks. Once my motivation starts to fade, I switch back to my inbox, and start from the top again.
  • As soon as both my inbox and my Follow Up — Today folders are empty, I move on to the Follow Up Later folder, and begin working through this folder. I check each email against the four F’s, which allows me to identify any low hanging fruit, or action items I no longer need to complete.
  • Once my Inbox, and my Follow Up folders are all empty, Solitaire beckons. Note that I still haven’t managed to get to this stage yet…

Migrating from Meteor Hosting ( to my own VPS

Sad news — the free and simple hosting provided by Meteor is coming to an end (as do all things), and so if you want to keep your apps, you need to migrate them to another host.

I followed the steps below with a brand new Digital Ocean droplet, but this should work with any VPS you have access to. If you don’t have access to a VPS, check out this article. You’ll also need to configure SSH access using a key, but that’s not too complicated. Google-fu will help you.

Deploying the app

The first step is to get the most recent version of the app itself — I know you’re using source control, so that won’t be a problem. Right?

git clone <your-repo>

Now, you’re going to use a tool called Meteor Up. The most recent version is actually available as mupx. Install it and then you can initiate a new Meteor Up project.

sudo npm install -g mupx  
mupx init

Meteor Up needs to use a settings file for Meteor, so if you have any custom entries in a settings.json file or something like that, you’ll need to migrate your entries into the new settings.json file that mupx has created.

Now, open up mup.json, and modify the file based on the comments. The most basic modifications are:

  • — enter your VPS’s IP address
  • servers.username — normally ‘root’ will be fine here, depending on how your server is configured.
  • servers.password — per best practice, you should probably comment this out and use the ‘pem’ line instead
  • servers.pem — uncomment this line, and change it to “~/.ssh/id_rsa.decrypted”. Note that you will need to add a comma at the end of this line too.
  • appName — enter a one-word name for your app. This will be used on the VPS as the name of the docker container, so make it clear. Write this down on a piece of paper!
  • app — the path to the app (on your local machine)
  • env.ROOT_URL — this will be used to set up the web server; make sure you set this to a domain that you own, and that is pointing at the VPS.
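Putting those fields together, a minimal mup.json might look something like this (every value below is a placeholder; adjust it to your own server and app):

```json
{
  "servers": [
    {
      "host": "",
      "username": "root",
      "pem": "~/.ssh/id_rsa.decrypted"
    }
  ],
  "appName": "myapp",
  "app": "/path/to/your/app",
  "env": {
    "ROOT_URL": ""
  }
}
```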

If you need some help registering a domain name, check out this article.

You’re almost ready to deploy now — you just need an unencrypted version of your SSH key. Run the following command:

openssl rsa -in ~/.ssh/id_rsa -out ~/.ssh/id_rsa.decrypted

You should be prompted for your passphrase, and then you’re good to go! Obviously, make sure this key never gets out into the wild.

Your next step is to configure the server. While this sounds like a painstaking process, Meteor Up takes care of it all for you — just run the following command.

mupx setup

This shouldn’t take very long, and will install everything on your server, apart from the application itself.

Now for the fun bit — deploying your app. It’s as simple as:

mupx deploy

Now your app will be up and running on the new web host, accessible at the root URL you provided in the mup.json file.

Migrating the data

When you access the app, you might notice that you’ve lost all of the data in it. If this bothers you, the process for migrating the data over is a little more involved, but not too difficult.

First of all, make sure that you have Mongo installed on your computer (if you don’t want the full Mongo install, all you really need is the mongodump executable).

Next, from your Meteor app’s directory run the following command.

meteor mongo <> --url

You’ll get back a reply containing a Mongo URL, which includes a username (a client ID), a password, the server address, and the database name.

You need to extract that information, and plug it into the command below:

./mongodump -h <server> --port 27017 --username <client-id> --password <password> -d <app_name_meteor_com>

These two commands have to be run within a minute of each other according to the Internet, or they may not work (although I had no problems).

This will create a folder called ‘dump’ in your current working directory. You’ll need to copy this up onto your server. You can choose the location it will be uploaded to on the server yourself, but I just put it in /root/dump. It won’t be staying for long anyway.

scp -r dump root@<yourServer>:dump

Next up, you need to ssh into your server. Now, you’re going to copy the dump into your MongoDB container, and then open a shell inside that container to import the database. It’s all getting a little inception-y.

docker cp dump mongodb:/dump  
docker exec -it mongodb bash

Now that you’re inside the Docker container, we need to remove the database that was automatically created for the app you deployed, and then import the database dump.

First step is to remove the empty database. We’re going to load up Mongo, access the database, and drop the one we don’t want. Follow the commands below:

mongo  
show dbs;

You should see three databases — local, test, and the third one, named after your app. This name is important, write it on a piece of paper.

Now, use your database, and drop it.

use <myDb>;  
db.dropDatabase();
Your app will stop working, but we have one step left to go — restoring the database from the hosted app.

If you’ve been paying attention, you should have two pieces of information written down on a piece of paper. The first is the Meteor app name. The second is the dumped database name. We need to restore the dumped database with the name of the Meteor app.

Run the following command:

mongorestore -d <yourAppName> dump/<appName_meteor_com>

If you just restore the database as it is, your Meteor app won’t know where to find it.

Once you’ve done that, you should be good to go. Access your app at the new location, and you should see no difference in comparison to the version — except it should be a bit faster, and won’t be subject to spin-downs.

Setting Up My Home Server

So, I’ve got an old laptop kicking around, and I decided to spend an afternoon making it into a home server.

Choosing an OS

The Windows key on the bottom has run dry, so I’ve decided to spin up Xubuntu on it. I want a graphical interface because it will make managing it much easier, but I don’t want to sacrifice too much disk space or speed to it. I set the torrent to download, and headed over to get UNetbootin to burn the ISO to a USB stick. As I’m running on a Mac, UNetbootin has a few weird quirks — the favourites links in the Finder don’t work, and there’s no retina support, so it’s ugly, but it does the job; you just have to locate the file manually by traversing the whole directory tree.

UNetbootin can’t set active flags or write the MBR, so the next steps are to unmount the disk and then run a few terminal commands:

fdisk -e /dev/rdiskX

Where X is the number of the disk. Ignore the error message here, and then type the following, hitting enter at the end of each line:

f 1  
write  
exit
Then, download the Syslinux binaries. You need the mbr.bin file, which should be hiding under bios/mbr/mbr.bin. Once you’ve located it, do the following, replacing the X with the disk number again (you may also have to unmount the drive again):

sudo dd conv=notrunc bs=440 count=1 if=bios/mbr/mbr.bin of=/dev/diskX

Then follow the instructions on UNetbootin to burn the ISO file to an external drive. The whole process should only take a few minutes.

Installing the OS

I booted up the old laptop from the USB stick by setting it to the first priority in the BIOS, then chose the option to install Xubuntu. The laptop booted cleanly into a desktop, and the installer launched immediately. I opted to do my own partition configuration, because I want the OS separated from the data partitions: when prompted for ‘Installation type’, choose ‘Something else’. I gave 10GB to the OS itself, 4GB to swap, and the rest I formatted as XFS and mounted at /mnt/data1 (eventually, I will have /mnt/data as pooled storage).

I name all of my computers after ships from Iain M. Banks’s Culture series, and this one is no exception. Given the repurposing of the laptop as a server, I decided on ReformedNiceGuy as the hostname.

I chose to connect the laptop to the internet immediately and allow it to update itself automatically as it installed. The installation took about 30 minutes, including the downloaded updates. I restarted the computer, and hit a snag: Grub loaded, but Xubuntu wouldn’t — there was just a black screen. I’ve had a few problems with this laptop before, so I tried booting with nomodeset added to the grub command line, and it booted like a charm. I added it to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, and ran sudo update-grub.
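For reference, the relevant line in /etc/default/grub ends up looking something like this (quiet splash are Xubuntu’s defaults; your existing values may differ):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
```

Remember that the change only takes effect after running sudo update-grub.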

Configuring the OS

Once I logged in, there were (surprisingly, as I thought it would have installed them already) a few updates available. I ran them, and set the OS to automatically install updates.

Next up, I wanted to keep power consumption (and so fan spin-up and noise) low, so I ran sudo apt-get install cpufrequtils to install a CPU governor, and then ran:

sudo sed -i 's/^GOVERNOR=.*/GOVERNOR="powersave"/' /etc/init.d/cpufrequtils 

This command will tell the OS to always use powersave mode. This should help with my overheating problem too.

Next up is to allow file sharing. Because I opted for Xubuntu, it’s not baked in. There are a few ways to fix this, but I decided to just install Nautilus and use that instead. The command you need is:

sudo apt-get install nautilus nautilus-share

Once done, I opened a Nautilus window from the command prompt (sudo nautilus), navigated to the root directory, and right clicked on mnt and chose ‘Local Network Share’, at which point, I was prompted to install Samba and a few other dependencies, and restart the session.

I opted to restart the computer completely instead. Afterwards, I was able to read files, but couldn’t create new ones.

I set the permissions on the /mnt directory and its children to 777, and logged out and back in. Bingo! It works!
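Under the hood, the ‘Local Network Share’ option just writes a Samba share definition. If you’d rather configure it by hand, a minimal entry in /etc/samba/smb.conf looks something like this (the share name and guest access here are my assumptions, chosen to match a permissive home setup):

```
[data]
   path = /mnt
   read only = no
   guest ok = yes
```

Restart Samba (sudo service smbd restart) after editing the file.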

Next up, I decided to set up pooled drives.

Pooling drive space with mhddfs

This was pretty simple. I installed mhddfs with sudo apt-get install mhddfs, and then created a directory for the data: /mnt/data.

I ran the following command:

mhddfs /mnt/data1,/mnt/data2 /mnt/data -o allow_other

and shared the data directory. It works beautifully.

I decided to mount my portable USB drive to /mnt/data2, but you can set up any directory there. I can see 721GB of free space on the drive, which is nice, and about what I was expecting. Over wireless, files are taking a couple of seconds longer to load, but nothing dreadful, and I plan on plugging directly into the router once I’m happy with the setup.
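One caveat: a mount made on the command line won’t survive a reboot. mhddfs supports a FUSE-style /etc/fstab entry to make the pool permanent; a sketch, assuming the same two source directories as above:

```
mhddfs#/mnt/data1,/mnt/data2 /mnt/data fuse defaults,allow_other 0 0
```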

Installing a media server

I wanted to run a media streaming server on this machine too, so I downloaded and installed Plex. I ran the following command to start it:

sudo service plexmediaserver start

It was then accessible on my home network at http://reformedniceguy:32400/web. I configured the server according to Plex’s instructions.

When Plex was indexing the files, the laptop ended up overheating and shut itself down (not very gracefully, I might add), so I tried adding acpi_osi=Linux to the grub command line, and I installed TLP. I also set the governor to powersave with the following command:

sudo cpufreq-set -g powersave

And that’s it. It’s up and running. I just had to configure Plex a bit. Total time taken was around 4–5 hours, including waiting for reboots, updates, etc.

But I have to start again, because none of my attempts to mitigate the overheating were actually successful, so I’m going to have to use my old Windows 8 Pro product key, and see if I can get that to work.

A Punch in the Teeth for UKIP and the Greens

If you voted for UKIP, I feel sorry for you. Politically, I’m so far away from you, you’d need a telescope to see me. Still, you deserve better representation.

I wrote a little calculator to show how powerful your vote was. Check it out below. Play with it. Go ahead, I’ll wait.


Something is fundamentally wrong when one vote is worth 169 times more than another. But, if you voted for the Democratic Unionist Party, and your neighbour voted UKIP, your vote was that much stronger.

Over the coming weeks and months, you will see a whole host of articles with titles like ‘Who would have won under PR?’. They will recalculate vote share to show you a ‘proportional parliament’. I’m not about to do that, because the fact of the matter is that people would not have voted the same way. What I’m hoping to do is illustrate the unfairness in the system we use in the UK.

Who elects the government in the UK?

The UK is a representative democracy. That means that people influence the running of the country indirectly. This is a compromise, but generally speaking a fair one. If every single person had to vote on every single issue, then nothing would ever get done. So we elect people to vote for us, and a single person represents a large number of other people. This representative will vote on their behalf in parliament. There’s nothing particularly bad about this. It makes sense and means that the country can run effectively.

Each of these representatives can choose to give their allegiance to a political party. A party is a group of like-minded people who feel the same way about things. Party members are sometimes asked to vote in a different way to how they feel. They are paid back when others do the same on their behalf. For example, let’s imagine the following two proposals we have to vote on:

  • To remove January from the calendar, because it’s the most depressing month
  • To force the BBC to run a Breaking Bad marathon weekend three times a year

I love Breaking Bad. I think that the marathon idea is great, but I know that not everyone loves it as much as I do. But my birthday is in January, so I don’t find it depressing at all, so would vote to keep it. My friend John thinks Breaking Bad is kind of all right but would probably prefer people buy box sets. He hates January though. He thinks the idea of removing January from the calendar is great. John and I agree that we will both vote yes on both proposals. That way, we both get what we want in exchange for helping someone else get what they want.

A political party is this, on a larger scale. When a political party gets more than half of the representatives in parliament, they can form a government. If a party thinks they can, they can try to form a government with fewer than half of the representatives in parliament. This is called a minority government. It is uncommon, because other parties will often group together to form a majority of their own. This is known as a coalition, and involves bigger compromises than a majority government would need to make.

All this seems pretty fair. So what’s good about the way the UK runs its elections?

Benefits of First Past The Post

As part of a fair commentary on the system, it is only reasonable to present its benefits. I mean, why would anyone have chosen the First Past The Post system in the first place? It’s in use in most places throughout the former British Empire. It does have some attractive features, a few of which are below.

First, majority governments are much more likely. This is potentially a good thing; coalition governments are often slow in making decisions. It’s also likely to produce a strong opposition, acting as a balancing force in parliament.

FPTP is also easy to understand, explain, and count. A child can understand the system, and results can be delivered quickly. Voters are unlikely to be confused.

Finally, under First Past The Post, extreme parties are much less likely to be able to build momentum.

So given all these positives, what’s the problem?

Local representation

The first problem is that the representatives are chosen based on geographical regions. Each representative (seat) is chosen to represent a group of people from a particular area of the country. This all but guarantees some people will not be represented. For example, Richmond in North Yorkshire was the safest Conservative seat at the 2010 election. They got the most votes by far, and have never lost this seat. At this election, the Tories lost 10% of the vote there to UKIP. They were still able to hold the seat with over 50% of the vote. There are clearly a lot of centre-right voters in Richmond. But, if I am a centre-left voter living in Richmond, who wants someone with similar views to me in parliament, I’m stuck. I can vote Labour in every election my entire life, and my voice will never be heard in parliament.

Additionally, any votes from Tory supporters above the minimum needed to win count for nothing.

Election methodology

The way that the local representatives are chosen is unfair too. To win, you only have to have one more vote than the person who came second. While this seems fair, when you dig deeper, it becomes clear that it is not. For example, let’s imagine we have 13 candidates. Candidate 1 was born in January. To celebrate, they want to give everybody January off as paid holiday. Candidate 2 was born in February, and wants to do the same thing, except with February. So on through until December. Candidate 13, however, is a nasty person, and wants to increase the working week to a minimum of 80 hours.

It’s a constituency of 14 people, so when the election results come in, it’s easy to count. Each of the first twelve candidates gets one vote. Candidate 13 gets two votes (probably the town’s two factory owners — it was a secret ballot, so we’ll never know for sure). Candidate 13 beat all the other 12 candidates, and so is the winner.

But wait, only 2 people voted to have the working week increased to 80 hours! The other 12 all voted to get a month off work! According to the way elections are run in the UK, tough.

The Inevitable Two Party State

Although First Past The Post encourages majorities and discourages extremists, it simplifies all elections into a left vs right debate. Voters have no way to control the drift of the country, and smaller parties get wiped out.

Lowest Common Denominator

First Past The Post is strongly biased towards the lowest common denominator. Almost no-one gets what they want; instead, they get the most acceptable compromise, also known as the most broadly acceptable candidate. In practice, this is biased in favour of white men.

So how can we improve the current system? Isn’t this something that we could have fixed five years ago?

Electoral Reform Referendum

Five years ago, we had a referendum on electoral reform, and we voted against it. Many Conservative and Labour supporters will highlight this whenever electoral reform is discussed. But AV only went a tiny way towards fixing the problems. Under AV, everyone in the example above would have had a month off work. This is clearly an improvement. However, imagine one more candidate standing who wants two months off work. Most people realise this is unsustainable, but would rather have it than an 80 hour work week. As a result, they put him down as their last acceptable choice. Because he ends up collecting all the ‘last acceptable choice’ votes, he can end up winning. AV is strongly biased towards ‘compromise candidates’, and still encourages tactical voting.

Single Transferable Vote

Many people who voted no in the AV referendum would have voted yes for Single Transferable Vote. STV is a proportional system, which would almost completely end tactical voting. Under STV, the number of representatives would increase, but almost no votes would be wasted. STV is basically the same as AV, except you have more than one winning candidate per constituency.

The way it works is simple: you rank the candidates you want to win. You can also choose not to rank candidates you don’t want to win. If your first choice candidate already has more than enough votes to win, your surplus vote is reallocated to your second choice candidate. If your candidate has the fewest votes, the same happens. This process is repeated until all the seats are filled. This still results in a few wasted votes (if you don’t vote for any of the winning candidates), but it is roughly proportional. It also keeps most of the key benefits of FPTP. For example, while majority governments are less likely, it encourages coalitions based on policies, which are more likely to reflect what the population generally thinks. In a vote on staying in Europe, for instance, the Conservatives, Labour, and the Lib Dems would all want to stay, and would likely agree to work together to stop UKIP from pushing their agenda through.


The real question that no-one is talking about, though, is why we are not moving with the times. In today’s world, I can take a quiz that tells me which Disney Princess I am, but I can’t express my opinion on a particular party policy. Given today’s technology, it would be simple to build a platform to allow voters to express their opinions. Sensible limits could be set: for example, if a million netizens express their disagreement with a Commons vote, it needs to be re-debated; if five million express disagreement, there needs to be a referendum. Parties could use the platform to understand how their voters feel about certain issues.

There are a small number of difficulties to overcome. For example, how can we ensure that each person gets only one vote? Maybe some form of two-factor authentication? A secret code sent via post to the voter’s registered address could work. How can we decide which of parliament’s hundreds of bills are opened up to netizen participation? Maybe bills where the percentage of voting MPs was higher than the most recent election’s voter turnout? This would indicate a heavily whipped vote. Or those on which the winning margin was small? That would indicate a particularly controversial topic.

Regardless, it seems sensible to bring society into the information age. Voters should have a real say on issues that affect them.

Whatever you think is the best way to reform the electoral system, none of the major parties are likely to care. FPTP benefits them, and so they are resistant to change.

If you think the scandal of wasted votes and huge disparity in voter power is a shameful reflection on our society, visit the Electoral Reform Society’s homepage. You can learn more about what alternatives there are and what you can do.

Registering a Domain Name

The first step in becoming a webmaster is choosing a domain name. Some domains are easier to register than others. .com TLDs are the easiest. My own TLD (.es) is supposed to be used for Spanish websites, and so I chose a Spanish company to buy the domain from. Now, I have to do all of my domain management in Spanish, which is great for my language skills, but not ideal if you struggle with other languages.

If you want to follow the steps below to get your site up and running, then choose domain name registration without hosting. This is normally pretty cheap, and generally shouldn’t cost more than $20 per year unless you want a domain name that is in high demand.

As a side-note, there are some unscrupulous services out there that let you search to see whether a domain name is available. If you use one of these, buy your domain as quickly as possible, as some of these services will register the name themselves and try to extort a higher fee out of you.

The best way to check whether a domain name is available is to look it up yourself. You’re looking for the following response:



** server can’t find yourdomain.tld: NXDOMAIN
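That output comes from an ordinary DNS lookup. One way to run the check from a terminal (this may not be the exact tool the original link pointed to) is with nslookup; an unregistered domain resolves to nothing:

```
nslookup yourdomain.tld
```

If the name is taken, you’ll get back an IP address instead of the NXDOMAIN error.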

Once you’ve found an available domain name, and you’re ready to buy, you’ll need to choose a registrar. A registrar is simply a company that files the paperwork to say that you are the person who owns the domain you have chosen.

Be aware that if you want a rarer TLD, you may be restricted in the number of registrars that will cater for you. You will have to choose the registrar that best fits your needs. For example .es domains are known for being difficult.

Before you sign up for a domain name, check it carefully, and try to avoid the mistakes made by the following sites:


Once you’re happy you’ve got the perfect domain, you can buy most of them from any one of hundreds of registrars, but here are a few recommendations below (none of these are affiliate links):

  • Namecheap
  • Hover
  • GoDaddy — while GoDaddy’s hosting services tend to perform pretty poorly, they have good promotions on domain names. There are some horror stories about them, though.

Featured image credit: Grey Hargreaves

Designing My Web App

I’ve found a gap in the market. This is going to earn me billions. I’ve already chosen my Ferrari. All I have to do now is actually make the damn thing.

What is it?

It’s an employee training tracker. Yes, we can use Excel spreadsheets, Access databases, and pen and paper, but they don’t do all the fun stuff I want.

What fun stuff?

Glad you asked.

  • It’ll list trainings, summaries, time taken, prerequisites, whether they’re part of on-boarding or in-service training regimes (or both), and links to any resources needed to complete the training.
  • It’ll list trainees, the trainings they’ve completed, the trainings they haven’t yet completed, trainings they haven’t completed but they have completed all the prerequisites for, whether they’re a full employee or being on-boarded, how long it will take for them to enter service.
  • It’ll list trainers, and how many trainings have been completed in a particular week, along with what the training was and how long it took.
  • It’ll have fancy interfaces for setting new trainings, adding trainers and trainees, and dashboard views.

Who’s going to use it?

The application will be used by three distinct groups, but realistically, it is mostly a compliance and reporting tool. Trainees are unlikely to ever actually bother checking. The three groups are:

  • Trainees themselves — to check their progress.
  • Trainers — to access resources and log trainings delivered
  • Compliance officers and on-boarding managers — to check status dashboards and provide lists of trainings delivered for ISO9000/ISO9001 compliance

Why am I going to do this?

Well, put simply, it fills a gap. I am currently training new joiners at work, and find that our programme is not logically constructed, resources are all over the place, and reporting that a training has been completed is done via email. Hardly ideal, reliable, or thorough.

How am I going to do this?

Now for the fun question. The How. It will be a web application that interfaces with a backend API. The backend API will be simple; ArrestDB looks fine for my needs to begin with. The database it sits in front of will have three tables:

  1. Trainees
  2. List of trainings
  3. Completed trainings

It will have to provide lists and individual objects in JSON format. Each of these tables will have specific columns, to be pinned down properly later.
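As a sketch of what those three tables might look like (the column names here are my own placeholders, not a final design; ArrestDB can sit in front of SQLite as well as MySQL, so SQLite is used for illustration):

```shell
sqlite3 training.db <<'SQL'
-- trainees and trainers
CREATE TABLE trainees (
  id INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT NOT NULL,
  hire_date TEXT
);
-- the list of trainings
CREATE TABLE trainings (
  id INTEGER PRIMARY KEY,
  title TEXT NOT NULL,
  summary TEXT,
  duration_minutes INTEGER
);
-- one row per completed training per trainee
CREATE TABLE completed_trainings (
  id INTEGER PRIMARY KEY,
  trainee_id INTEGER NOT NULL REFERENCES trainees(id),
  training_id INTEGER NOT NULL REFERENCES trainings(id),
  completed_on TEXT
);
SQL
```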

The front-end will handle putting data into the databases. There will be forms to add a new trainee/trainer, a new training, and to record a training that was delivered. The form to add a new training will allow you to set any of the existing trainings as prerequisites. The form to record a training will ask whether this is an in-service training or on-boarding training, and allow the trainer to select from a list of trainees appropriately. The form to add a new trainee/trainer will capture the email address, name, and hire date.

There will also be a few reporting views. One will list all available trainings, and be filterable. One will list all registered users, also filterable. One will list all recently completed trainings in two modes — sensible mode (i.e.: one row per training session, regardless of the number of attendees) and stupid mode (i.e.: one row per training session per attendee). One will be an employee overview, showing what trainings they have completed, and what they are eligible for.

I have decided to use React to implement the front-end JavaScript, although I’m more familiar with Angular. I will slap a Bootstrap front-end on it to begin with, and worry about theming later.

What about version 2.0?

Version 2.0 will feature passwordless logins, securing the front-end and the API using authentication tokens sent to the user’s registered email and then stored locally in the browser. As hotdesking is common in my environment, 10 tokens will be valid at the same time, with the user able to revoke any token, or all of them, at any time. Exporting to CSV will be implemented, as will theming and branding.

There will be a “training planner”, where multiple employees can be selected, and a list of trainings for which they are all eligible calculated. There may be some exciting calendar integration, but no promises.

Wow, that sounds awesome, can I help?

Sure! I will publish the GitHub repository alongside a “first commit” post in the coming days. If you steal my idea and run with it on your own though, I’ll hunt you down.

Footnote: I know that “trainings” is not the correct English term. I’m from England, and spent two years teaching English. But I don’t live in the UK, and here in Budapest, “trainings” is perfectly fine — and it beats typing “training courses” every time.

Choosing a VPS Provider

I use VPS Dime, and have been very pleased with the price and performance, but you could just as easily use AWS, Digital Ocean, or any other VPS provider.

The instructions provided here are for VPS Dime’s cheapest VPS, but once the server spins up, it makes no difference who the provider is. AWS is fairly complex and designed for enterprise clients, so the user interface is not as nice, but they do have a free tier which is more than enough to cater for most small sites.

Choose the type of VPS you would like with the sliders on the home page, and choose Buy Now! when you are happy with the price and specs.

You will see the following page:

Choose an appropriate hostname (it doesn’t have to be anything linked to your website, it doesn’t matter really here, it’s just for you to identify the server). Make a note of the root password! You will need this later.

You can then choose a geographical location (eg: Dallas). Ideally, this should be as close to where the majority of your visitors will be from as possible, although the connection speed might be a factor for you.

Choose an operating system. The rest of this series is based on Ubuntu Trusty Tahr (14.04), so select either the 32 or the 64 bit version of this operating system.

We won’t be using any of the other options in this section, so you can skip it. As you become more accustomed to web development, you may decide you want to fiddle with these, but let’s ignore them for the time being.

Choose Checkout >> on the right, and pay for your VPS.

Once your VPS is provisioned (it may take a little time), head to your account details page, and find the IP address. You will use this to log in to your VPS, so make a note of it alongside the root password. If you forgot to write down your root password, most VPS hosts will allow you to retrieve it somehow.

Setting Up Your DNS

You don’t want to have users typing in your IP address to access your server, you want them to be able to access your server via the name you paid for. The exact steps to follow will depend on your registrar.

First though, a little background. When you register your domain, you only have the name reserved. When someone tries to access your website, they will still need to be told where to find it. That’s where a name server comes in, and the system that is used for this is called DNS — the Domain Name System.

Along with your name, the registration for your domain will say “if you want to know where this website is, you need to talk to this server”. Your computer will then go to the domain name server and say “excuse me, I’m looking for your-domain-name, could you tell me where it is please?”. The domain name server will then say “of course, it’s over there”.

In general terms, most registrars also run domain name servers, but this may not be the case for yours. If not, you will need to explore how to set up your domain with a different name server; your registrar should have some information on how to do this. On the client page for most registrars, though, you should be able to find a page which allows you to add DNS entries.

Most often, this will be listed under the ‘Advanced’ section, and may say something about DNS zones. You need to add an A record (an address record). Most providers will give you the option to add A records, MX entries, CNAMEs, and text records. You may also be able to set NS entries. Your A record should point your-domain-name at your VPS’s IP. You may also wish to add a CNAME mapping *.your-domain-name to your-domain-name.
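Put together, the two entries look something like this in zone-file notation (203.0.113.10 is a placeholder documentation address; substitute your VPS’s IP, and note that most registrars’ web forms ask for the same fields without the trailing dots):

```
yourdomain.tld.    IN  A      203.0.113.10
*.yourdomain.tld.  IN  CNAME  yourdomain.tld.
```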

This will allow you to configure subdomains on your server without having to keep fiddling with your DNS server. We’ll come back to your DNS server in a later tutorial on setting up email.