HAProxy – How to run http and https on the same port

Want to have your app run on just the one port but work in both http and https mode? It’s easily done. You’ll first need a normal frontend for ports 80 and 443, similar to the following:

frontend unsecured
    bind 0.0.0.0:80
    timeout client 1d
    maxconn 20000
    default_backend default

frontend secured
    maxconn 20000
    bind 0.0.0.0:443 ssl crt /etc/haproxy/proxycert.cert
    default_backend default

You probably already have this set up if you’re running HAProxy; no need to change it if you do.

Now to make another port (9000 in this example) work with both http and https just do the following:

frontend newport
    maxconn 20000
    bind 0.0.0.0:9000
    mode tcp
    option tcplog

    # wait up to 100ms for the first bytes so we can sniff the protocol
    tcp-request inspect-delay 100ms
    tcp-request content accept if HTTP
    tcp-request content accept if { req.ssl_hello_type 1 }

    # plain http goes to port 80, anything else (a TLS ClientHello) to port 443
    use_backend forward_http if HTTP
    default_backend forward_https

backend forward_http
    mode tcp
    server serverhttp 127.0.0.1:80

backend forward_https
    mode tcp
    server serverhttps 127.0.0.1:443

HAProxy simply waits up to 100ms (this could be lowered, but I didn’t want things to accidentally break) to detect which mode the connection is in. If it’s plain HTTP it forwards the request to itself on port 80, and if not it forwards to itself on port 443.
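To sanity-check it once HAProxy reloads, hit the new port in both modes (the hostname is illustrative); both requests should reach your backend:

curl -v http://yourserver:9000/
# -k because the cert above may well be self-signed
curl -vk https://yourserver:9000/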


How to uninstall / delete OSX mail app

Unfortunately Apple loves forcing its software on users and providing no way to uninstall it. If you’re sick of Apple Mail popping up every time you click an email link or press cmd + shift + i (as I often do when trying to get into the web dev tools in Chrome), do the following in your terminal to remove it:

sudo -i 
mkdir /dump
mv /Applications/Mail.app /dump
mv /usr/bin/mail /dump
chmod 000 /dump

You could also rm -rf the files, but I like to keep them around just in case OSX breaks somehow. If you don’t run the chmod command, OSX will actually detect that the files have moved and ask you to set Mail up again.
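And if you ever want Mail back, just reverse the process:

sudo -i
chmod 755 /dump
mv /dump/Mail.app /Applications
mv /dump/mail /usr/bin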


Docker / lxc set memory limit invalid argument

When trying to resize a docker or LXC container by changing the value in /sys/fs/cgroup/memory/docker/<container id>/memory.limit_in_bytes, you may come across the error

write error: Invalid argument

when going above a certain amount (for me it was 1024MB).

It’s most likely due to your memory limit being higher than your memory + swap limit. You need to edit the file memory.memsw.limit_in_bytes and increase its value.
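For example, to raise a container’s limit to 2GB (the container id and size here are illustrative), write the memory+swap ceiling first, since memory.limit_in_bytes must never exceed it:

echo 2147483648 > /sys/fs/cgroup/memory/docker/<container id>/memory.memsw.limit_in_bytes
echo 2147483648 > /sys/fs/cgroup/memory/docker/<container id>/memory.limit_in_bytes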


How to test memory limits of a docker container

I’ve been playing around with docker containers recently, including setting memory limits, but Google wasn’t much help. So I’m posting a simple method I discovered here. You do need php installed in your container.

truncate -s 1G /tmp/1G
php -r 'file("/tmp/1G");'

All it does is create a 1 gigabyte file in the /tmp directory, then attempt to read it with PHP. PHP being PHP tries to load the whole thing at once, and if your memory limits are working correctly it should die with a fatal error.
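A minimal end-to-end sketch, assuming an image that already has php installed (the image name is made up). Passing -d memory_limit=-1 disables PHP’s own limit, so it’s definitely the container cap doing the killing:

# start a container capped at 512MB
docker run -m 512m -it my-php-image bash

# then, inside the container:
truncate -s 1G /tmp/1G
php -d memory_limit=-1 -r 'file("/tmp/1G");'
# the php process should be killed once it crosses the 512MB cap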


TeamCity stop double building the same project

It is annoying when TeamCity builds the same project twice at the same time, especially when some of the steps are to decommission old servers and deploy new ones and the two builds end up trampling all over each other.

The option to disable this is tricky to find. First go to your project’s build configuration, then click General Settings on the left hand side, then click ‘show advanced options’ just above the save button in the main column. A new option appears: “Limit the number of simultaneously running builds (0 — unlimited)”. Enter 1 in this box, hit save, and you’ll no longer have 2 builds of the same project running at the same time.

This is with TeamCity version 8.1.3. It may be in a different spot in future versions.


Ubuntu logstash default install directory

When you install logstash via apt-get it installs to /opt/logstash. This is valid as of logstash v1.4 on ubuntu v14.04.
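A quick way to confirm where it landed on your own box:

ls /opt/logstash
# or list everything the package installed
dpkg -L logstash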


Puppet how to force apt-get update

It seems the puppet apt module doesn’t run apt-get update more than once, even when you explicitly define it, if you haven’t changed your sources.list, because it thinks nothing has changed.

This is an issue we encountered with Tower Storm where we released new versions of private packages, but our servers didn’t run apt-get update before trying to install them, so they’d try to fetch an old version and get a 404 error.

To fix it you simply need to add these lines to your manifest:

class { 'apt':
  always_apt_update => true,
}

Then install your packages like so:

exec { 'apt-get-update':
  command => '/usr/bin/apt-get update',
  # note: no refreshonly here; require alone never triggers a refresh,
  # and we want the update to run on every puppet run
}

$packages = ["nodejs", "npm"]

package { $packages:
  ensure  => installed,
  require => Exec['apt-get-update'],
}

When your packages install they will first call apt-get update every time just to make sure they have the latest version.
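If you want to double-check the ordering, apply the manifest and you should see the exec run before either package installs (the manifest file name here is illustrative):

sudo puppet apply packages.pp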


Tracking down stray open connections in rethinkdb

Over the past few days rethinkdb has been giving me errors about handshake timeouts due to too many open connections. If you’ve had similar ‘handshake timeout’ errors you’ve probably got the same problem.

Somewhere in the Tower Storm codebase connections were being made and not closed properly. Unfortunately the code base is huge and database calls are made in many places. Also when rethinkdb errors out it doesn’t give a stack trace or any indication of where connections are being tied up.
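You can at least confirm connections are piling up from the shell (28015 is RethinkDB’s default client driver port):

netstat -tn | grep :28015 | grep ESTABLISHED | wc -l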

But I figured out a way to find connections that were not being closed properly.

Here’s my original connection code. This code is based on the example on the rethinkdb site.

### rethinkdb-client.coffee ###

r = require('rethinkdb')
netconfig = require('../config/netconfig')

db = r
db.onConnect = (callback) ->
  r.connect {host: netconfig.db.host, port: netconfig.db.port, db: 'towerstorm'}, (err, conn) ->
    if err then throw new Error(err)
    if !conn then throw new Error("No RethinkDB connection returned")
    callback(err, conn)

module.exports = db

And here’s how I modified the onConnect function to find the connections that were not being closed:

db.onConnect = (callback) ->
  stack = new Error().stack
  r.connect {host: netconfig.db.host, port: netconfig.db.port, db: 'towerstorm'}, (err, conn) ->
    if err then throw new Error(err)
    if !conn then throw new Error("No RethinkDB connection returned")
    setTimeout ->
      if conn && conn.open
        console.log("Connection created was not closed in 5 seconds. Stack: ", stack)
    , 5000
    callback(err, conn)

Firstly, the line

stack = new Error().stack

captures a stack trace showing how we reached this db.onConnect function.

Then, just before returning the connection, I set up a callback that checks the connection after 5 seconds. If it detects the connection is still open, it logs a stack trace showing exactly where it was opened, and I can add a conn.close() in the appropriate spot.

And as easy as that, you can find and kill all your stray rethinkdb connections.


If you’re unit testing Javascript use Sinon.js, it’s more useful than you expect

For a long time I didn’t use any unit testing libraries with Javascript. After all, unlike Java, you can do anything with your objects. If you want your car to be a cat that can walk, you simply modify the object directly. So what’s the point of having a mocking library?

I discovered sinon.js a few months ago and immediately fell in love. It made me realize how much useless boilerplate code I had in my unit tests and immediately helped me write cleaner, more elegant code.

Here’s a very basic example of how I used to mock functions before and after sinon:

/** Before sinon.js **/

var getAnimationSheetArgs = null;
impactMock.game.cache.getAnimationSheet = function() {
  getAnimationSheetArgs = arguments;
};
bullet.loadAnimations();
assert(getAnimationSheetArgs != null);
assert.equal(getAnimationSheetArgs[0], "img/bullets/awesome.png");
assert.equal(getAnimationSheetArgs[1], 5);
assert.equal(getAnimationSheetArgs[2], 15);

/** After sinon.js **/

impactMock.game.cache.getAnimationSheet = sinon.spy();
bullet.loadAnimations();
assert(impactMock.game.cache.getAnimationSheet.calledWith("img/bullets/awesome.png", 5, 15));

Before Sinon I had to declare a variable to hold the arguments passed to each function I wanted to mock. With sinon this became one line, and verifying that the correct arguments were passed can be done in a single function call.

It may not look like much, but when you have over 1000 unit tests (as Tower Storm now has) it adds up to a lot of time saved.

Sinon also provides tons of functionality to stub out functions. You can make a method automatically return certain values or even call a callback with specific arguments (for testing async code).

Let’s say you want to test a render function to ensure user details are displayed. It looks like this:

var AdminController = {
  userInfo: function (req, res) {
    var userId = req.param('id');
    User.findById(userId, function (err, user) {
      if (err) return res.send(500);
      res.jsonp(200, user.data);
    });
  }
};

Now we only want to test that user.data is being sent to the browser. We don’t want to actually hit the database and find a user with findById, so we need to mock it out.

it("Should send user.data to the browser", function () {
var mockUser = {data: {name: 'test'}};
sinon.stub(User, 'findById').callsArgWith(1, null, mockUser);
req = {param: sinon.stub().returns(123)};
res = {jsonp: sinon.spy()};
AdminController.userInfo(req, res);
assert(res.jsonp.calledWith(200, {name: 'test'}));
User.findById.restore()
});

On line 1 we create a mock user which we want to display. Then we stub the User.findById method to instantly call argument 1 (the callback) with the 2 arguments null and mockUser (for its err and user arguments).
On line 3 we create req as an object with just the param method. We set param to a sinon stub and make it instantly return 123 (the user’s id, although it could return anything, as our stubbed User.findById doesn’t even use it).
On line 4 we create res as an object that only has a jsonp method. We set this method to be a sinon spy as it doesn’t need to return anything, it only needs to record what it was called with.
On lines 5 and 6 we call the method and check that res.jsonp was successfully called with the user’s data, using Sinon’s handy calledWith function.
Finally, on line 7 we call restore on User.findById to remove the stub and restore its original functionality. This way, if future tests want to use the original function, they won’t break unexpectedly.

This is by far the easiest way I’ve found to mock and unit test javascript, though if you know of a better way let me know. I’m always trying to be as efficient as possible.


Hundreds of robots in every home

I used to have hundreds of dreams and ideas I wanted to pursue. So many that I jumped from task to task like mad trying to make something happen, and only ended up scratching the surface of a few. Recently I’ve realized there are only 2 major ideas I keep coming back to that I wish to pursue more than anything else:

  • Building Tower Storm into a big successful game that millions of people play and enjoy and I earn enough off to never have to worry about money again.
  • Building a robotics company that makes producing your own food automatically a possibility for everyone on the planet.

Tower Storm is already in progress, so in this post I want to talk about my vision for the future of robots.

I don’t believe there will be human-like robots in every home as in the bionic man, but I do think there will be hundreds of machines that will do most menial chores automatically. They’re already being made, in a crude, unrecognisable form, via 3D Printers and open source circuitboards.

This reminds me of computers in the 70’s: they were toys that you had to solder and program yourself, and there was no way everyone was going to have one, let alone use it all the time. I feel that robots are at a similar stage.

What is needed now is a standard platform that all robots can build upon to work together to accomplish larger tasks. The robots would be modular, each accomplishing one function in a simple way on its own, and they could then be combined in thousands of ways or sold in pre-packaged sets to the non-geeks out there.

Let’s take cooking as an example. Making spaghetti is somewhat complicated, but each step and microstep is pretty simple. Let’s break down the steps:

  • Cook Sauce
    • Put pot on hotplate
    • Turn hotplate on
    • Cook Onions
      • Peel onion
      • Slice onion
      • Transfer onion to pot
      • Stir for 2 minutes
    • Add Mince
      • Defrost mince
      • Transfer to pot
      • Stir for 2 minutes
    • Add jar of sauce
      • Open jar
      • Pour jar into pot
    • Stir for 10 Minutes
  • Cook Pasta
    • Open pasta packet
  • Put sauce on pasta
  • Add cheese on sauce

When you lay it out like that you can see there are a lot of small steps, but none are too complicated for a robot to do. So why don’t they do it? Well, I believe they will 10 – 20 years from now; we simply need to build a system for robots to work together in a neat way (like unix pipes) so they can collaborate and make tasks like preparing dinner completely automated.

We’ve already got the infrastructure with 3D Printers and Arduinos to make this happen. If the community created open source designs for robots that can do each of these tasks, then anyone, anywhere could build robots that automate many of their cooking tasks for them, eventually not needing to cook at all.

What has me even more excited than cooking is producing food automatically. It seems even more complicated than cooking, but if we break it down into separate components we could build robots that create and maintain tiny at-home vegetable farms or fruit trees.

Then, if these designs are open sourced, anyone anywhere in the world can live effortlessly off the grid with all their food made for them by a team of robots. Once we’ve automated much of the first world’s food supply we can even help out those less fortunate. Everyone in the world could use these robots to have their own completely automated, unlimited-production farm.

This is what excites me most about 3D Printing and open source circuitboards. Not gadgets and toys, but machines that can bring complete automation to everything we don’t want to do in our lives, so everyone in the world is free to focus on what they enjoy: creating amazing things, learning, and giving back to society.

Sure it’s insanely complicated, but it’s doable.