Tracking down stray open connections in RethinkDB

Over the past few days RethinkDB has been giving me errors about handshake timeouts due to too many open connections. If you’ve had similar ‘handshake timeout’ errors you’ve probably got the same problem.

Somewhere in the Tower Storm codebase, connections were being opened and not closed properly. Unfortunately the codebase is huge and database calls are made in many places, and when RethinkDB errors out it doesn’t give a stack trace or any indication of where connections are being tied up.

But I figured out a way to find connections that were not being closed properly.

Here’s my original connection code. This code is based on the example on the RethinkDB site.

### rethinkdb-client.coffee ###

r = require('rethinkdb')
netconfig = require('../config/netconfig')

db = r
db.onConnect = (callback) ->
  r.connect {host: netconfig.db.host, port: netconfig.db.port, db: 'towerstorm'}, (err, conn) ->
    if err then throw new Error(err)
    if !conn then throw new Error("No RethinkDB connection returned")
    callback(err, conn)

module.exports = db

And here’s how I modified the onConnect function to find the connections that were not being closed:

db.onConnect = (callback) ->
  stack = new Error().stack
  r.connect {host: netconfig.db.host, port: netconfig.db.port, db: 'towerstorm'}, (err, conn) ->
    if err then throw new Error(err)
    if !conn then throw new Error("No RethinkDB connection returned")
    setTimeout ->
      if conn && conn.open
        console.log("Connection created was not closed in 5 seconds. Stack: ", stack)
    , 5000
    callback(err, conn)

First, the line:

stack = new Error().stack

gets a stack trace of how we reached this db.onConnect function.

Then, just before returning the connection, I set up a callback to check the connection in 5 seconds. If the connection is still open it logs a stack trace showing exactly where it was opened, and I can add a conn.close() in the appropriate spot.
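For readers not using CoffeeScript, the same idea can be packaged as a generic wrapper in plain JavaScript. This is a sketch, assuming a callback-style connect function and a conn.open flag like the one the RethinkDB driver exposes:

```javascript
// Wraps a callback-style connect function so that any connection still open
// after timeoutMs gets its creation stack trace logged.
function leakCheckedConnect(connect, timeoutMs) {
  return function (callback) {
    var stack = new Error().stack; // capture the caller's stack synchronously
    connect(function (err, conn) {
      if (err) return callback(err);
      setTimeout(function () {
        if (conn && conn.open) {
          console.log("Connection not closed after " + timeoutMs + "ms. Stack:", stack);
        }
      }, timeoutMs);
      callback(null, conn);
    });
  };
}
```

Wrap your real connect function once and every caller gets leak detection for free.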

And just like that you can find and close all your stray RethinkDB connections.

 

If you’re unit testing Javascript use Sinon.js, it’s more useful than you expect

For a long time I didn’t use any mocking libraries with JavaScript. After all, unlike Java, you can do anything with your objects: if you want your car to be a cat that can walk, you simply modify the object directly. So what’s the point of a mocking library?

I discovered sinon.js a few months ago and immediately fell in love. It made me realize how much useless boilerplate code I had in my unit tests and immediately helped me write cleaner, more elegant code.

Here’s a very basic example of how I used to mock functions before and after sinon:

/** Before sinon.js **/

getAnimationSheetArgs = null;
impactMock.game.cache.getAnimationSheet = function() {
  getAnimationSheetArgs = arguments;
};
bullet.loadAnimations();
assert(getAnimationSheetArgs != null);
assert.equal(getAnimationSheetArgs[0], "img/bullets/awesome.png");
assert.equal(getAnimationSheetArgs[1], 5);
assert.equal(getAnimationSheetArgs[2], 15);

/** After sinon.js **/

impactMock.game.cache.getAnimationSheet = sinon.spy();
bullet.loadAnimations();
assert(impactMock.game.cache.getAnimationSheet.calledWith("img/bullets/awesome.png", 5, 15));

Before Sinon I had to declare a variable to hold the arguments passed to each function I wanted to mock. With Sinon, creating the mock is one line, and verifying the correct arguments were passed is a single function call.

It may not look like much, but when you have over 1000 unit tests (as Tower Storm now has) it adds up to a lot of time saved.

Sinon also provides tons of functionality for stubbing functions. You can make a method automatically return certain values, or even call a callback with specific arguments (for testing async code).
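To see why spies feel like magic, here’s a toy re-implementation of the core of sinon.spy. This is purely illustrative, not Sinon’s actual source, and it only compares arguments with ===, whereas Sinon also deep-compares objects:

```javascript
// A minimal spy: records every call's arguments and answers calledWith queries.
function makeSpy() {
  function spy() {
    spy.calls.push(Array.prototype.slice.call(arguments));
  }
  spy.calls = [];
  spy.calledWith = function () {
    var expected = Array.prototype.slice.call(arguments);
    return spy.calls.some(function (args) {
      return expected.every(function (value, i) { return args[i] === value; });
    });
  };
  return spy;
}

var spy = makeSpy();
spy("img/bullets/awesome.png", 5, 15);
spy.calledWith("img/bullets/awesome.png", 5, 15); // → true
```

All a spy really does is stash arguments per call and compare them later; the library just does it far more thoroughly than this.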

Let’s say you want to test a render function to ensure user details are displayed. It looks like this:

var AdminController = {
  userInfo: function (req, res) {
    var userId = req.param('id');
    User.findById(userId, function (err, user) {
      if (err) return res.send(500);
      res.jsonp(200, user.data);
    });
  }
};

Now we only want to test that user.data is being sent to the browser. We don’t want to actually hit the database and find a user with findById so we need to mock it out.

it("Should send user.data to the browser", function () {
  var mockUser = {data: {name: 'test'}};
  sinon.stub(User, 'findById').callsArgWith(1, null, mockUser);
  var req = {param: sinon.stub().returns(123)};
  var res = {jsonp: sinon.spy()};
  AdminController.userInfo(req, res);
  assert(res.jsonp.calledWith(200, {name: 'test'}));
  User.findById.restore();
});

On line 1 we create a mock user which we want to display. Then we stub the User.findById method to instantly call argument 1 (the callback) with the two arguments null and mockUser (for its err and user parameters).
On line 3 we create req as an object with just the param method. We set param to a Sinon stub and make it instantly return 123 (the user’s id, although it could return anything as the stubbed User.findById doesn’t even use it).
On line 4 we create res as an object that only has a jsonp method. We make this method a Sinon spy as it doesn’t need to return anything, it only needs to record what it was called with.
On lines 5 and 6 we call the method and check that res.jsonp was called with the user’s data using Sinon’s handy calledWith function.
Finally on line 7 we call restore on User.findById to remove the stub and restore its original functionality, so any future tests that want the real function won’t break unexpectedly.
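Forgetting that final restore() is the classic way tests start polluting each other. Sinon’s sandboxes address this; purely for illustration, here’s a tiny stand-alone helper (my own sketch, not part of Sinon) that guarantees the original method comes back even if the test throws:

```javascript
// Temporarily replaces obj[name] with fake, runs fn, and always restores the
// original method, even if fn throws.
function withStub(obj, name, fake, fn) {
  var original = obj[name];
  obj[name] = fake;
  try {
    return fn();
  } finally {
    obj[name] = original;
  }
}
```

Usage looks like withStub(User, 'findById', fakeFindById, runTheTest) and there is no restore call to forget.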

This is by far the easiest way I’ve found to mock and unit test JavaScript, though if you know of a better way let me know. I’m always trying to be as efficient as possible.

 

Hundreds of robots in every home

I used to have hundreds of dreams and ideas I wanted to pursue. So many that I jumped from task to task like mad trying to make something happen, and only ended up scratching the surface of a few. Recently I’ve realized there are only 2 major ideas I keep coming back to that I wish to pursue more than anything else:

  • Building Tower Storm into a big successful game that millions of people play and enjoy and I earn enough off to never have to worry about money again.
  • Building a robotics company that makes producing your own food automatically a possibility for everyone on the planet.

Tower Storm is already in progress, so in this post I want to talk about my vision for the future of robots.

I don’t believe there will be human-like robots in every home, bionic-man style, but I do think there will be hundreds of machines that do most menial chores automatically. They’re already being made, in a crude, unrecognisable form, via 3D printers and open source circuit boards.

This reminds me of computers in the ’70s: they were toys you had to solder and program yourself, and there was no way everyone was going to have one, let alone use it all the time. I feel robots are at a similar stage.

What is needed now is a standard platform that all robots can build upon to work together on larger tasks. The robots would be modular, each accomplishing one function in a simple way on its own, and they could then be combined in thousands of ways or sold in pre-packaged sets to the non-geeks out there.

Let’s take cooking as an example. Making spaghetti is somewhat complicated, but each step and microstep is pretty simple. Let’s break down the steps:

  • Cook Sauce
    • Put pot on hotplate
    • Turn hotplate on
    • Cook Onions
      • Peel onion
      • Slice onion
      • Transfer onion to pot
      • Stir for 2 minutes
    • Add Mince
      • Defrost mince
      • Transfer to pot
      • Stir for 2 minutes
    • Add jar of sauce
      • Open jar
      • Pour jar into pot
    • Stir for 10 Minutes
  • Cook Pasta
    • Open pasta packet
  • Put sauce on pasta
  • Add cheese on sauce

When you lay it out like that you can see there are a lot of small steps, but none are too complicated for a robot to do. So why don’t they do it? Well, I believe they will 10 to 20 years from now; we simply need to build a system for robots to work together in a neat way (like Unix pipes) so they can collaborate and make tasks like preparing dinner completely automated.
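The Unix-pipe idea can be sketched in a few lines: each step is a small single-purpose function from state to state, and a pipeline just chains them. This is illustrative JavaScript only, obviously not robot firmware:

```javascript
// Each "robot" is a function from state to state; pipeline chains them,
// like shell commands connected by |.
function pipeline(steps) {
  return function (state) {
    return steps.reduce(function (current, step) {
      return step(current);
    }, state);
  };
}

// Tiny single-purpose steps for the "Cook Sauce" sub-task:
function putPotOnHotplate(s) { s.potOnHotplate = true; return s; }
function turnHotplateOn(s) { s.hotplateOn = true; return s; }
function addOnion(s) { s.ingredients.push("onion"); return s; }

var cookSauce = pipeline([putPotOnHotplate, turnHotplateOn, addOnion]);
cookSauce({ ingredients: [] });
```

Pre-packaged sets for non-geeks would then just be pre-built pipelines of steps the community has already designed.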

We’ve already got the infrastructure, with 3D printers and Arduinos, to make this happen. If the community created open source designs for robots that can do each of these tasks, then anyone, anywhere could build robots that automate many of their cooking tasks, eventually removing the need to cook at all.

What has me even more excited than cooking is producing food automatically. It seems even more complicated than cooking, but if we break it down into separate components we could build robots that create and maintain tiny at-home vegetable farms or fruit trees.

Then, if these designs are open sourced, anyone anywhere in the world could live effortlessly off the grid with all their food produced for them by a team of robots. Once we’ve automated much of the first world’s food supply we can even help out those less fortunate, giving everyone their own completely automated farm.

This is what excites me most about 3D printing and open source circuit boards. Not gadgets and toys, but machines that can bring complete automation to everything we don’t want to do in our lives, so everyone in the world is free to focus on what they enjoy: creating amazing things, learning, and giving back to society.

Sure it’s insanely complicated, but it’s doable.

 

How to mock and unit test moment.js

I’ve been using moment.js in some client work for a rather complex booking form. Unfortunately date-related functionality is almost impossible to test manually, so we implemented unit tests to ensure everything worked as it should. Code like this is perfect for unit testing because it’s completely rules-based.

Mocking moment.js seemed hard at first but wasn’t too complicated after a bit of thought. Here’s an example of how to mock it using mocha.

First we have the booking manager’s validate function that needs to be tested. This function ensures that customers cannot make next-day bookings after 3pm. It also ensures customers cannot make bookings for Monday on the weekend, or after 3pm on Friday.

/** booking-manager.js **/

var BookingManager = {
  getCurrentTime: function () {
    return moment().tz(config.TIMEZONE);
  },
  validate: function (formData) {
    var bookingDay, currentTime;
    currentTime = this.getCurrentTime();
    bookingDay = moment(formData.dateAndTime.date, "YYYYMMDD").tz(config.TIMEZONE);
    if (bookingDay.isBefore(currentTime, 'day') || bookingDay.isSame(currentTime, 'day')) {
      return false;
    }
    if (currentTime.format('HH') >= config.NEXT_DAY_CUTOFF_TIME) {
      if (bookingDay.isSame(moment(currentTime).add('days', 1), 'day')) {
        return false;
      }
      if (currentTime.day() === 5 && bookingDay.day() === 1) {
        return false;
      }
    }
    if ((currentTime.day() === 6 || currentTime.day() === 0) && bookingDay.day() === 1) {
      return false;
    }
    return true;
  }
};

Now we need to test this function to ensure it works. The problem is we have to test that the booking restrictions work on every day of the week, so we need to mock out the getCurrentTime function to pretend we’re submitting the form at different times.

/** booking-manager-unit-test.js **/

describe("BookingManager", function() {
  var currentDayOfWeek, currentHour, originalGetCurrentTime, formData;

  beforeEach(function() {
    currentHour = 12;
    currentDayOfWeek = 1;
    formData = { dateAndTime: { date: null } };
    originalGetCurrentTime = bookingManager.getCurrentTime;
    bookingManager.getCurrentTime = function() {
      var currentTime = originalGetCurrentTime.call(bookingManager);
      currentTime.day(currentDayOfWeek);
      currentTime.hour(currentHour);
      return currentTime;
    };
  });

  afterEach(function() {
    bookingManager.getCurrentTime = originalGetCurrentTime;
  });
});



This is the meat of the mocking. We set up two variables, currentHour and currentDayOfWeek, and use them in our mock getCurrentTime function to fake the current time. One gotcha: these variables must be declared once in the outer describe and only assigned (never re-declared with var) inside beforeEach or the individual tests, otherwise the mock’s closure will keep seeing the outer, unchanged values.

describe("validate", function() {
  it("Should return false if the booking date is the current day or before", function() {
    var today = bookingManager.getCurrentTime();
    formData.dateAndTime.date = today.format('YYYYMMDD');
    assert.equal(bookingManager.validate(formData), false);
  });

  it("Should return false if the booking is a next day booking and the time is past 3pm", function() {
    currentHour = 16;
    var tomorrow = bookingManager.getCurrentTime().add('days', 1);
    formData.dateAndTime.date = tomorrow.format('YYYYMMDD');
    assert.equal(bookingManager.validate(formData), false);
  });

  it("Should return false if the booking is a next day booking and the time hour is 3pm", function() {
    currentHour = 15;
    var tomorrow = bookingManager.getCurrentTime().add('days', 1);
    formData.dateAndTime.date = tomorrow.format('YYYYMMDD');
    assert.equal(bookingManager.validate(formData), false);
  });

  it("Should return false if the booking is made on saturday for monday", function() {
    currentDayOfWeek = 6;
    var monday = bookingManager.getCurrentTime().add('days', 2);
    formData.dateAndTime.date = monday.format('YYYYMMDD');
    assert.equal(bookingManager.validate(formData), false);
  });

  it("Should return false if the booking is made on sunday for monday", function() {
    currentDayOfWeek = 0;
    var monday = bookingManager.getCurrentTime().add('days', 1);
    formData.dateAndTime.date = monday.format('YYYYMMDD');
    assert.equal(bookingManager.validate(formData), false);
  });

  it("Should return false if the booking is made on friday after 3pm for monday", function() {
    currentDayOfWeek = 5;
    currentHour = 15;
    var monday = bookingManager.getCurrentTime().add('days', 3);
    formData.dateAndTime.date = monday.format('YYYYMMDD');
    assert.equal(bookingManager.validate(formData), false);
  });

  it("Should return true if the booking is made on friday before 3pm for monday", function() {
    currentDayOfWeek = 5;
    currentHour = 13;
    var monday = bookingManager.getCurrentTime().add('days', 3);
    formData.dateAndTime.date = monday.format('YYYYMMDD');
    assert.equal(bookingManager.validate(formData), true);
  });

  it("Should return true if the booking is a next day booking and the time is before 3pm", function() {
    currentHour = 12;
    var tomorrow = bookingManager.getCurrentTime().add('days', 1);
    formData.dateAndTime.date = tomorrow.format('YYYYMMDD');
    assert.equal(bookingManager.validate(formData), true);
  });

  it("Should return true if it's past 3pm but the booking is not a next day booking", function() {
    currentHour = 16;
    var future = bookingManager.getCurrentTime().add('days', 2);
    formData.dateAndTime.date = future.format('YYYYMMDD');
    assert.equal(bookingManager.validate(formData), true);
  });
});


As you can see, using this mock is easy: you simply change the currentHour or currentDayOfWeek variables and that’s what the mocked time becomes.
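The pattern here is more general than moment.js: production code gets the time from a seam (getCurrentTime), and tests overwrite that seam with a fake. A moment-free sketch of the same idea (the names are my own):

```javascript
// Production code never calls new Date() directly; it asks the clock seam.
var clock = {
  getCurrentTime: function () { return new Date(); }
};

function isPastCutoff(cutoffHour) {
  return clock.getCurrentTime().getHours() >= cutoffHour;
}

// In a test, freeze the clock at 4pm to hit the after-cutoff branch:
clock.getCurrentTime = function () { return new Date(2020, 0, 6, 16, 0, 0); };
isPastCutoff(15); // → true
```

Any time-dependent rule becomes testable on any day, at any hour, just by swapping the seam.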

Have fun coding and let me know if you find any issues with the above code or feel it can be improved.

 

How to structure your NodeJS Models

In MVC your models are objects that save out to the data store and do basic low-level manipulations of your data. Over the past few years I’ve worked with many different ways of structuring models and have finally come to a solution I feel is elegant, simple and easy to understand. It’s the exact structure I use for Tower Storm, and it’s based mainly on the way the mongoose npm package works.

Here’s an example of the most common model of all time: User. The database functions in these examples are all pseudocode, but it should be pretty self-explanatory what they’re doing. You can get a copy of the source code at https://github.com/timjrobinson/nodejsmodels

The trouble with writing models is that sometimes you want an instance of an object you can change, like changing a specific user’s username, and other times you want to get all users of a specific type. But you don’t want the method for fetching users to be part of each user instance, because that doesn’t make any sense.

/** user.js **/

var User = function (data) {
  this.data = data;
};

User.prototype.data = {};

User.prototype.changeName = function (name) {
  this.data.name = name;
};

User.findById = function (id, callback) {
  db.get('users', {id: id}).run(function (err, data) {
    if (err) return callback(err);
    callback(null, new User(data));
  });
};

module.exports = User;

So what we’ve done is create a User class that can both be an instance of a user and has a static method to find users. This means you can use the same User model for anything you need to do with users. Each user also has a data object; this is where any data that should be saved back to the database is stored. Here’s an example of how you’d use it:

/** app.js **/

var User = require("./user.js");

var toby = new User({name: "Toby"});

User.findById(42, function (err, user) {
  user.changeName("Brian");
});

We first created a user called Toby. Then we found the user with id 42 in the database, and once it was returned we changed that user’s name to Brian.

Now a model like this is still missing quite a few important features. Firstly, it can’t save out to the database; let’s fix that with a save function:

/** user.js **/

User.prototype.save = function (callback) {
  var self = this;
  db.get('users', {id: this.data.id}).update(JSON.stringify(this.data)).run(function (err, result) {
    if (err) return callback(err);
    callback(null, self);
  });
};

This is why we stored all the user’s data in this.data rather than directly on the user object: it makes saving and loading the user super easy. You just stringify this.data to save, and call new User(data) when loading.
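Here’s a quick illustration of that round trip, with JSON.stringify/JSON.parse standing in for the real database calls (User is the minimal constructor from above):

```javascript
// Minimal User from earlier in the post.
var User = function (data) {
  this.data = data;
};

var original = new User({ id: 42, name: "Toby" });

// "Save": the whole persistent state is one stringify away.
var stored = JSON.stringify(original.data);

// "Load": rebuild a full User instance from the stored row.
var loaded = new User(JSON.parse(stored));

console.log(loaded.data.name); // → "Toby"
```

Because everything persistent lives in one plain object, serialization never has to know anything about the model’s methods.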

Now, it’s bad practice to access all the data by changing user.data.something, and sometimes you may forget and wonder why user.name is always undefined when the user clearly has a name. So let’s add getters and setters to the model.

/** user.js **/

User.prototype.get = function (name) {
  return this.data[name];
};

User.prototype.set = function (name, value) {
  this.data[name] = value;
};

I also like to enforce some sort of database schema. Otherwise, with many developers on a team, it’ll be hard to keep track of all the different variables a user could have.

/** schemas.js **/

var schemas = {
  user: {
    id: null,
    name: null,
    password: null
  }
};

module.exports = schemas;

/** user.js **/

var schemas = require("./schemas.js");
var _ = require("lodash");

User.prototype.sanitize = function (data) {
  data = data || {};
  var schema = schemas.user;
  return _.pick(_.defaults(data, schema), _.keys(schema));
};

That last line in sanitize uses a couple of lodash functions to keep only the variables that are in the user schema. _.defaults merges into data any variables from schemas.user that don’t already exist. _.keys gets all the keys from the schema. _.pick keeps only the variables whose names were returned by _.keys. Basically it ensures this.data matches our schema exactly.
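If lodash feels too magic, the same sanitize can be written out in plain JavaScript (my own equivalent; note that _.defaults also mutates data in place, which this version avoids):

```javascript
// Keeps exactly the keys in schema: known keys take their value from data,
// missing keys fall back to the schema's default.
function sanitize(data, schema) {
  data = data || {};
  var clean = {};
  Object.keys(schema).forEach(function (key) {
    clean[key] = data.hasOwnProperty(key) ? data[key] : schema[key];
  });
  return clean;
}

sanitize({ name: "Toby", isAdmin: true }, { id: null, name: null, password: null });
// → { id: null, name: "Toby", password: null }  (the stray isAdmin is dropped)
```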

Now we can use this sanitize function both in the constructor of User and before saving. That way every user will look exactly the same in the database, with no stray data. Here’s what those functions look like now:

/** user.js **/

var User = function (data) {
  this.data = this.sanitize(data);
};

User.prototype.save = function (callback) {
  var self = this;
  this.data = this.sanitize(this.data);
  db.get('users', {id: this.data.id}).update(JSON.stringify(this.data)).run(function (err, result) {
    if (err) return callback(err);
    callback(null, self);
  });
};

We now have an easy to use User model that can retrieve users from the database, change their properties and save them back with all data being sanitized and checked automatically.

Now many of our models will use similar functions for get/set/save/sanitize. So instead of copying and pasting code, you should create a generic model that all your other models extend from. Inheritance is kind of tricky to implement in JavaScript, so unless you’re an amazing coder I’d recommend looking into ES6’s classes or using CoffeeScript, which is what I use for Tower Storm.
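As a sketch of where that generic model could go with ES6 classes (illustrative only; the class and method names are my own, not from any library):

```javascript
// Generic base model: each subclass supplies its schema and inherits
// sanitize/get/set for free.
class Model {
  constructor(data, schema) {
    this.schema = schema;
    this.data = this.sanitize(data);
  }
  sanitize(data) {
    data = data || {};
    const clean = {};
    Object.keys(this.schema).forEach((key) => {
      clean[key] = data.hasOwnProperty(key) ? data[key] : this.schema[key];
    });
    return clean;
  }
  get(name) { return this.data[name]; }
  set(name, value) { this.data[name] = value; }
}

class User extends Model {
  constructor(data) {
    super(data, { id: null, name: null, password: null });
  }
}

new User({ name: "Toby", isAdmin: true }).get("name"); // → "Toby"
```

Each new model is then just a schema plus whatever custom methods it needs; the plumbing lives in one place.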

You can download and play with all the source code (including a little mock database for getting and saving data) for the tutorial at: https://github.com/timjrobinson/nodejsmodels

Let me know if you have any questions or feel this model structure could be improved.

 

RethinkDB – Too many open files

After dealing with RethinkDB “too many open files” errors for the past few weeks I finally submitted a github issue and discovered what the problem was – I wasn’t closing my connections.

I thought there was some kind of connection pooling in RethinkDB, so that when you called r.connect it would either reuse an existing connection or create a new one. Unfortunately this isn’t the case. It turns out that when you’re done with a connection you have to close it, or it will tie up resources until the connection times out. Do this enough times and you’ll run into the ‘too many open files’ error.

Here’s an example of how to code your Node.js models using RethinkDB best practices. This is based on the example on the RethinkDB site.

### rethinkdb-client.coffee ###

r = require('rethinkdb')
netconfig = require('../config/netconfig')

db = r
db.onConnect = (callback) ->
  r.connect {host: netconfig.db.host, port: netconfig.db.port, db: 'towerstorm'}, (err, conn) ->
    if err then throw new Error(err)
    if !conn then throw new Error("No RethinkDB connection returned")
    callback(err, conn)

module.exports = db

### user.coffee ###

db = require("./rethinkdb-client")

class User
  data: null

  constructor: (data) ->
    @data = data

User.findById = (id, callback) ->
  if !id then return callback(new Error("Invalid ID passed to User.findById"))
  db.onConnect (err, conn) ->
    if err then return callback(err)
    db.table('users').get(id).run conn, (err, userInfo) ->
      conn.close() # This is what I forgot, remember to do this
      if err then return callback(err)
      return callback(null, new User(userInfo))

User.findByUsername = (username, callback) ->
  db.onConnect (err, conn) ->
    if err then return callback(err)
    db.table('users').getAll(username, {index: 'username'}).run conn, (err, cursor) ->
      if err then return callback(err)
      cursor.toArray (err, results) ->
        conn.close() # Do this after cursor.toArray, not before
        if err then return callback(err)
        users = results.map((userInfo) -> new User(userInfo))
        return callback(null, users)

If you’re only getting one item, close the connection as soon as the run() callback fires. If you’re getting multiple items, your callback receives a cursor instead of a single item, so call conn.close() only after cursor.toArray() has returned.
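To avoid sprinkling conn.close() through every model method, the pattern can be centralized in a small helper. This is my own sketch, not part of the rethinkdb driver:

```javascript
// Runs work(conn, done) and guarantees the connection is closed before the
// final callback fires, whether the work succeeded or failed.
function withConnection(onConnect, work, callback) {
  onConnect(function (err, conn) {
    if (err) return callback(err);
    work(conn, function (err, result) {
      conn.close();
      callback(err, result);
    });
  });
}
```

A findById would then pass the query as the work function and never touch close() itself, so there’s only one place left to get it wrong.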

 

Daily coding lessons

I’ve always wanted to write more about coding. The problem with programming is that so much builds on top of prior knowledge, and whenever I begin writing an article I start teaching the prior knowledge required. Then I teach the prior knowledge required for that knowledge, and so on, until I get bored of writing and never hit publish (seriously, I have about 20 articles like this in my drafts right now).

So I’m going to do an experiment: raw daily coding lessons. Each day, after I’ve worked for a few hours, I’m going to write a post about what I learnt that day, to hopefully help others who get stuck in similar situations. It may lack context and prior information. It may be raw and somewhat useless for most people. But I’m hoping over time I can get better at explaining things and teaching, so my mistakes can help others, not just myself.

If you want more information on anything I write about just let me know and I’ll try and help you out 🙂

This experiment begins today, wish me luck.

 

My first NPM module – Selenium IDE to Nightwatch converter

This year my two New Year’s resolutions were to contribute to open source more and to teach coding to others, because in the 10 years I’ve been coding I’ve done very little to give back to the community I’ve learnt so much from. I was always afraid of not giving the best advice or of being seen as a bad programmer, but I now finally feel confident enough to contribute and teach others.

I started off slow, sending pull requests and issues to modules I use, and after 2 years of working with Node I’ve finally published my first NPM module.

It’s called seleniumide2nightwatch (side2nw on npm) and it converts Selenium IDE files you’ve exported into Nightwatch format.

On a client project we had been manually running our Selenium tests in Firefox before each prod push. We’d been wanting to move to Nightwatch so this could be automated, but never found the time to convert everything over by hand. So I created this program to do it for us and to help out others with the same issue.

It only handles the most basic commands and selectors so far, but I hope to make it fully featured over time, and pull requests are much appreciated!

 

Switching to Ghost

A few months ago my web host was hacked, and the villains that broke in began using it to send spam email. I didn’t even notice until I began getting auto-replies from people I’d never emailed, quoting the spam messages my server was sending.

Tracking down a break-in and finding any backdoors installed on your server is a painful experience when running just one site. This server was running over 50 sites.

I’m trying to cut my life to the bare minimum in all areas and say no to everything that isn’t exactly what I want to do. Soon after this hack took place I realized I hadn’t cleaned up my online life. Back when I was doing Internet Marketing full time I purchased hundreds of domains with full intentions of turning them into niche sites, each making a few dollars a day, so I could achieve completely passive income (the dream.. right?). Unfortunately, as happens to most people who travel down this route, it didn’t go quite as planned, and 3 years later I’m left with hundreds of domains not pulling in much traffic and costing a boatload of money to maintain.

There’s little use in keeping these old sites around when they’re simply money sinks and the rest of my life has moved on to bigger and better things. Even though I haven’t looked at them in years, they still add mental and financial baggage when I’m doing tasks such as backing up my sites, switching hosts or navigating cPanel.

So I’ve gone ahead and deleted them all. I’m down to about 30 domains most if not all related to ZenTester and Tower Storm. If I don’t begin working or selling these remaining domains in the next 6 months I’ll be unregistering them too.

After this great purge I discovered there’s only one domain on this web host that I wanted to keep running, this very blog you’re reading now.

Being that I’m moving web hosts anyway, I thought why not upgrade my blog to advance with my programming career. I made the switch from PHP to Node.js over 2 years ago now, so if I want a blog I can hack on without wanting to tear my hair out (OK, WordPress isn’t that bad, but I am sick of PHP) I might as well use Ghost.

So here I am, typing in Ghost. It’s kind of nice, in an elegant zen kind of way. It doesn’t have the features of WordPress, but it doesn’t have the bloat either, and so far it’s going ok.

Update 26/02/2017 – I’m back on WordPress

 

Tower Storm and my 30-second, 2-button deployment process

2 years ago I read an amazing book, The Lean Startup, which made me completely rethink how I developed applications and websites. In the past I’d make a change to an application, spend 30 minutes going through the main features, and release. Inevitably a few days later customers would come back saying x, y and z were all broken. So I’d fix them and re-release, and now 3 other completely different features would be broken. This happened consistently, but with limited time and money I thought it was impossible to improve this process.

Today I can make a change to Tower Storm and within 30 seconds have this change live online with very little (soon to be none) manual testing and no old bugs are coming back to bite me. In this post I want to show you how I’ve done it.

Automated Testing

The first step is to eliminate the 30+ minutes of manual testing I had to do after every change. There is absolutely no way you can quickly release and iterate on your app without either:

a. An army of testers who can detect any bug and will happily retest your entire application on every single change, or
b. Automated tests that find regressions and issues in your application for you

Being that you’re running a lean startup, I don’t think you’ve got thousands of dollars to burn on dedicated testers, so let’s explore the world of automated testing.

Automated testing is where you write code that tests the functions of your application to determine whether they do what they should. For example, if you have a function that should remove every comma from a string, and you want to ensure it works for a variety of strings, you might create a test like so:

function testRemoveComma() {
  var sentence = "this,is,a,sentence";
  var expectedSentence = "thisisasentence";
  var actualSentence = removeComma(sentence);
  assert.equal(actualSentence, expectedSentence);
}

In this JavaScript example we first create a sentence with commas in it, then we specify what we expect back from our function. Then we call the function and ensure that what we got back matches what we expected.
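For completeness, here’s a removeComma that would make the test pass (one reasonable implementation among several):

```javascript
// Removes every comma by splitting on "," and re-joining the pieces.
function removeComma(sentence) {
  return sentence.split(",").join("");
}

removeComma("this,is,a,sentence"); // → "thisisasentence"
```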

This example is what is known as a “unit test”: a test that checks one function by giving it inputs and checking its outputs, without doing other tasks such as connecting to your database or reading files. It should check one function only. If that function calls other functions you need to use a technique called ‘mocking’ so that they don’t really get called. I’ll go into more detail on how to create unit tests and mock objects in a variety of languages in a later post.

To start unit testing you’ll need a library to run these tests. Generally there are one or two good testing libraries for most languages. For JavaScript I’d recommend Mocha for Node.js testing or Jasmine for client-side testing; for Java, JUnit with Mockito is awesome; and for PHP, PHPUnit works well.

Unit tests are the simplest, fastest and often most fragile tests. They aren’t the best at proving your app is bug free, but they are perfect for pinning down existing bugs and ensuring they never occur again.

The thing I love about unit tests is that because they are so fast and easy to write, you can do a process known as test-driven development. This is where you write unit tests for your code before you write a single line of code. So in the removeComma example above, we could write an empty removeComma function, then write the above test and run it only to see it fail. After it has failed we implement removeComma, run the test again, and when it passes we know our code is working.

When you do test-driven development constantly you can save hours by not needing to run your app after each code change. You simply test, then code; test, then code; and at the end you run your app, and because every function works as it should, your app should (in theory) work perfectly first go. It’s amazing when you get into this flow, because if you’re building a large app you can continue to code for hours and really get into the zone without having to recompile everything and boot up your app to see if it’s working as it should.

Better testing with Integration and Acceptance tests

After you’ve mastered the art of the unit test there are even more powerful tests that you can use that will allow you to deploy your application without even running it and know that all functionality is working.

You do this by creating integration and acceptance tests. Unlike unit tests, integration and acceptance tests exercise your app in a real environment, with database and network calls. Integration tests are similar to unit tests in that they run one function or a group of functions in order and check that they all work as they should. The difference is that integration tests run the code as if a real user were calling it: if the function creates records in the database, the integration test really does that, and if your function calls an external service, the integration test does that too.

Here’s an example of a PHP integration test in ZenTester:

/**
 * @test
 */
function Checklogin() {
    $this->ci->load->library('userManager');

    $random_key = self::$random_key;
    $reg_data = ut_get_registration_data($random_key);

    // Log out first (the check_login function in the controller does this too).
    $this->assertEquals($this->ci->user->logout(), true, "logging out before logging in");
    $this->assertEquals(is_array($this->ci->usermanager->register_user($reg_data, 0)), true, "Registering user with all information");
    $this->assertEquals($this->ci->user->login($reg_data['email'], $reg_data['password']), true, "logging in to this user");
    $user_id = $this->ci->user->get_user_id();
    $this->assertEquals($this->ci->user->is_valid_user(), true, "Checking that after login we are a valid user.");
    $this->assertEquals($this->ci->user->logout(), true, "Testing logging out");
    $this->assertEquals($this->ci->user->is_valid_user(), false, "Checking that we are not a valid user after logging out.");

    ut_erase_user($user_id);
}

In this integration test we first build registration data with the helper function ut_get_registration_data, then register and log in as that user. After logging in we check that the user is valid, then log out and check that that also worked. Finally the user is deleted at the end.

In this case we create and clean up all our data so the database isn’t littered with test data. The downside of always deleting your data at the end of a test is that it can be hard to track down why an integration test is failing, because you can’t see what was created and what wasn’t. At Wotif we don’t clean up our data at the end of each test; instead we re-use the test data on every run and delete old data at the beginning of each test. This way you don’t add much test data to the database while still being able to figure out what went wrong when a test fails.

Acceptance tests are another level of abstraction: they use your app from a user’s perspective, loading pages, clicking on links etc. and ensuring that what is shown to the user after performing specific actions is correct. They are often driven by tools such as Selenium or curl. At Wotif we’ve been using Cucumber-JVM to run Selenium on a remote box which loads up our app, tests that all the main features work from a user’s perspective and reports if anything is broken. These are run automatically by TeamCity every time we push a change.

Using GruntJS to build your assets 

Grunt is the second most amazing part of the deployment process. It takes the application and builds it so it’s ready to upload. It currently does all of the following (the grunt plugin used for each item is in brackets):

  • Bumps the game version number (grunt-bump)
  • Checks the javascript for any errors (lint)
  • Cleans the build directory (grunt-contrib-clean)
  • Copies all the correct files to the build directory (grunt-contrib-copy)
  • Turns all the CoffeeScript into JavaScript (grunt-contrib-coffee)
  • Builds the core game javascript into a single js file (grunt-exec which runs impact.js’s build tool which is coded in php)
  • Uglifies (minifies) the javascript along with all external libraries and config files into a single game.min.js file (grunt-contrib-uglify)
  • Compiles all the less css files (grunt-contrib-less)
  • Minifies the css and puts it into one file (grunt-contrib-cssmin)
  • Compiles the jade files into html (grunt-contrib-jade)
  • Uploads all the compressed and compiled assets to a new folder on Amazon S3, named after the current game version number (grunt-s3)

It’s a complicated process yet grunt.js handles most of these tasks with very little configuration needed and can do all of this in under 30 seconds.
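In the Gruntfile, the steps above end up chained into a single task. A wiring along these lines would do it; the task names follow the plugins listed, but the exact names and registration are assumptions, not the actual Tower Storm Gruntfile.

```javascript
// Illustrative wiring of the build steps above into one grunt task.
grunt.registerTask('build', [
  'bump',    // bump the game version number
  'lint',    // check the javascript for errors
  'clean',   // clean the build directory
  'copy',    // copy files to the build directory
  'coffee',  // compile coffeescript to javascript
  'exec',    // run impact.js's build tool
  'uglify',  // minify everything into game.min.js
  'less',    // compile the less css files
  'cssmin',  // minify the css into one file
  'jade',    // compile the jade files to html
  's3'       // upload assets to the versioned s3 folder
]);
```

Running `grunt build` then executes each task in order and stops at the first failure, so a lint error or failed upload never produces a half-built release.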

The assets are uploaded to a new Amazon S3 folder named after the build’s version number, so assets are never overwritten and users who are still playing the game aren’t interrupted. You can do this by setting the variable pkg to your package.json file and then using <%= pkg.version %> in your s3 upload task. My s3 task looks like this:

grunt.initConfig({
  bump: {},
  pkg: grunt.file.readJSON('package.json'),
  s3: {
    bucket: 'towerstorm',
    access: 'public-read',

    // Files to be uploaded.
    upload: [
      {
        src: 'build/public/js/lobby.min.js',
        dest: 'game-server/<%= pkg.version %>/js/lobby.min.js',
        gzip: true
      },
      {
        src: 'build/public/css/lobby.min.css',
        dest: 'game-server/<%= pkg.version %>/css/lobby.min.css',
        gzip: true
      }
    ]
  }
});

If you’re using grunt-bump to automatically bump the version number with every build, you’ll also need to modify grunt-bump/tasks/bump.js and add the following line at the bottom of the grunt.registerTask function, so that after the version is bumped the pkg variable points at the latest version:

grunt.config.set("pkg", grunt.file.readJSON("package.json"));

The game code simply loads the assets for its current version number, so even if people start games after this build process is done they’ll load the old game assets; only when the new version is deployed and Node.js is restarted will the new assets be loaded. This way the server code and game client code are always in sync. Lastly, versioning the assets ensures that users’ browsers don’t cache old assets, which could cause errors if gameplay changes are introduced while clients load an old cached version of the game.

All the Tower Storm servers are hosted on Amazon EC2, and in the future I’m looking to implement a system where each new version spawns a bunch of new servers with the new game client and assets. Whenever players start new games they’ll start on the new servers only, and the old servers will stay alive just until their last game finishes, then be powered down. This will allow us to continually release new versions of Tower Storm without ever having ‘patch downtime’.

Continuous Integration

The third step is to automate this unit testing and asset building with a dedicated server that runs everything in a server-like environment. That way, if you have a team of developers, they don’t each have to set up grunt, acceptance tests and a full build environment on their machines. Instead, every time they commit a change the continuous integration server downloads the new code from git, compiles it using grunt and runs all the unit tests, either on a custom private server setup or on its own machine using its own browser or a headless browser like PhantomJS.

I haven’t yet set up a continuous integration server for Tower Storm, as I’m currently the only developer and it was easier to set everything up locally (especially in these very early stages), but I’ll definitely be setting one up soon. At Wotif we’ve tried Jenkins, Bamboo and TeamCity, and all were good in some ways and bad in others. I prefer the layout and feel of Bamboo the most, though this is often personal preference, as other members of our team prefer TeamCity’s layout. Jenkins is probably the least liked in usability and layout, but it’s completely free and comes with tons of plugins for doing almost every task you can think of, so if that’s what you’re looking for it’ll work well for you.

Automated cmd files and the 2 button deploy process

To tie all these testing, running and deploying scripts together I’ve created a few command files (yes, I run Windows 8, although I use Ubuntu at Wotif and the Tower Storm servers run Linux) that make things even easier. Here’s what they do:

commitAndPush.cmd – Runs the tortoisegit (my favourite git gui by far) commit screen, then pushes the code after you’ve committed your changes. It looks like so:

call E:\apps\scripts\tgit.cmd commit C:\coding\node\towerstorm\GameServer
call git push --all --progress BitBucket
pause

The tgit.cmd file it references is a hook that makes TortoiseGit run any command from the command line. Its contents are:

"C:\Program Files\TortoiseGit\bin\TortoiseGitProc.exe" /command:%1 /path:%2

devenv.cmd – Runs the game locally using node-dev, which auto-restarts the server whenever a change is made, and also runs test.cmd, explained next:

set NODE_ENV=development
start cmd.exe /K "cd .. && call ./node_modules/.bin/node-dev server.coffee"
start cmd.exe /K "cd .. && call scripts/test unit"

test.cmd – Loads a cmd prompt that automatically runs all the unit tests using mocha and re-runs them whenever a change is made. It scans the test directory for all CoffeeScript files and runs them:

setlocal EnableDelayedExpansion
IF "%1" == "" (
    SET files=C:\cygwin\bin\find test -name "*.coffee"
) ELSE (
    SET files=C:\cygwin\bin\find test\%1 -name "*.coffee"
)

FOR /F %%i IN (' %files% ') DO SET tests=!tests! %%i
.\node_modules\.bin\mocha --watch --reporter min --slow 10 --globals $ --compilers coffee:coffee-script --require coffee-script test\_helper %tests%
pause

I run these scripts by binding them to the macro keys on my Logitech G15 keyboard (which I bought mainly because it had these keys). I have the dev environment setup bound to one key, grunt bound to another and commit and push bound to a third. This way I can develop in one key press and deploy a new version of Tower Storm using just 2 buttons 🙂

Hope this was informative enough and if you have any questions or are confused about any steps let me know.