Git: create new branches based off master

For the longest time I’d had the frustration of working on one branch, needing to create another branch for a bug fix, then creating a PR with that bug fix only to discover that all the commits from the original branch were now in the PR.

Today I discovered you can actually specify the base branch in the git checkout command. Simply do this:

git checkout -b newbranch master

It will create and switch to the new branch, basing it off master. I’ve created a script called ‘gnb’ (for ‘git new branch’) to do this for me all the time now. You can see it here.

Also, if you’ve already submitted the PR and now want to remove those additional commits, you can easily do so by rebasing your new branch onto master.

git rebase --onto master originalbranch newbranch
git push origin +newbranch # Force push as we're re-writing history here

This rebases all the commits you made in newbranch onto master, skipping any commits that are also in originalbranch.
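The post links out to the actual ‘gnb’ script rather than inlining it, so here’s a minimal sketch of what such a helper could look like as a shell function (the function body is an assumption, not the author’s script):

```shell
# Hypothetical 'gnb' (git new branch) helper: create and switch to a new
# branch based off master (or an explicitly named base branch), regardless
# of which branch you're currently on.
# Usage: gnb <new-branch> [base-branch]
gnb() {
  git checkout -b "$1" "${2:-master}"
}
```

Dropping something like this into your shell profile means a bug-fix branch always starts clean from master instead of inheriting your current branch’s commits.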

 

Simple script to use / run a fake process on a port

While testing some shell scripts I needed a simple way to tie up a port on the server to cause the shell scripts to error out.

I did some googling and couldn’t find any way to do this in pure bash, but if you have Node.js installed on your server it’s as simple as this:

node -e 'require("http").createServer(function(){}).listen(PORT);'

Simply replace PORT with the port number you wish to tie up. If anyone knows a way to do this in pure bash, please let me know.
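Node isn’t the only interpreter that makes this easy; if Python 3 happens to be installed instead, a similar one-liner works. A sketch (port 8123 is an arbitrary choice here):

```shell
# Occupy port 8123 with a do-nothing TCP listener (assumes Python 3 is
# installed; the 30s sleep just keeps the process, and the port, alive).
python3 -c 'import socket, time
s = socket.socket()
s.bind(("127.0.0.1", 8123))
s.listen(1)
time.sleep(30)' &
BLOCKER_PID=$!
sleep 1   # give the listener a moment to bind
```

Anything else that then tries to bind the same port gets an "address already in use" error, which is exactly the failure mode you want to provoke when testing.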

 

HAProxy – How to run HTTP and HTTPS on the same port

Want your app to run on just the one port but work in both HTTP and HTTPS mode? It’s easily done. You’ll first need normal frontends for ports 80 and 443, similar to the following:

frontend unsecured *:80
    timeout client 1d
    maxconn 20000
    default_backend default

frontend secured
    maxconn 20000
    bind 0.0.0.0:443 ssl crt /etc/haproxy/proxycert.cert
    default_backend default

You probably already have this set up if you’re running HAProxy; no need to change it if you do.

Now to make another port (9000 in this example) work with both http and https just do the following:

frontend newport
    maxconn 20000
    bind 0.0.0.0:9000
    mode tcp
    option tcplog

    tcp-request inspect-delay 100ms
    tcp-request content accept if HTTP
    tcp-request content accept if { req.ssl_hello_type 1 }

    use_backend forward_http if HTTP
    default_backend forward_https

backend forward_http
    mode tcp
    server serverhttp 127.0.0.1:80

backend forward_https
    mode tcp
    server serverhttps 127.0.0.1:443

The inspect-delay gives HAProxy up to 100ms (this could be lowered, but I didn’t want things to accidentally break) to detect which mode the connection is in. If it’s HTTP it forwards the request to itself on port 80, and if not it forwards to itself on port 443.
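What those two ACLs are actually keying on: a TLS connection’s very first byte is the handshake record type 0x16 (with handshake type 1, the ClientHello, inside it, which is what req.ssl_hello_type 1 matches), whereas plain HTTP starts with an ASCII method name like "GET ". A toy shell classifier of the same idea (illustrative only, not HAProxy code):

```shell
# Classify the first bytes of a connection the way the frontend does:
# TLS starts with record type 0x16; HTTP starts with a method name.
TLS_BYTE=$(printf '\026')   # 0x16, the TLS handshake record type

classify() {
  case "$1" in
    "$TLS_BYTE"*) echo tls ;;
    GET*|POST*|PUT*|DELETE*|HEAD*|OPTIONS*) echo http ;;
    *) echo unknown ;;
  esac
}
```

This is why the trick needs mode tcp: HAProxy has to peek at raw bytes before deciding, rather than parsing the connection as HTTP up front.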

 

How to uninstall / delete OSX mail app

Unfortunately Apple loves forcing its software on users and provides no way to uninstall it. If you’re sick of Apple Mail popping up every time you click an email link or press cmd + shift + i (as I often do when trying to get into the web dev tools in Chrome), run the following in your terminal to remove it:

sudo -i 
mkdir /dump
mv /Applications/Mail.app /dump
mv /usr/bin/mail /dump
chmod 000 /dump

You could also rm -rf the files, but I like to keep them around just in case OSX breaks somehow. If you don’t run the chmod command, OSX will actually detect that the files have moved and ask you to set Mail up again.

 

Docker / lxc set memory limit invalid argument

When trying to resize a Docker or LXC container by changing the value in /sys/fs/cgroup/memory/docker/&lt;container-id&gt;/memory.limit_in_bytes, you may come across the error

write error: Invalid argument

when going above a certain amount (for me it was 1024MB).

It’s most likely due to your new memory limit being higher than your memory + swap limit. You need to edit the file memory.memsw.limit_in_bytes and increase its value first.
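In other words, the kernel rejects any memory.limit_in_bytes value above the current memory.memsw.limit_in_bytes, so raise the memsw file first. A sketch of that invariant as a pre-flight check (the function name is made up; values are in bytes):

```shell
# Mirror the kernel's cgroup v1 check: writing a plain memory limit larger
# than the memory+swap limit fails with EINVAL ("Invalid argument").
can_set_mem_limit() {
  new_limit=$1
  memsw_limit=$2
  if [ "$new_limit" -le "$memsw_limit" ]; then
    echo ok
  else
    echo "raise memory.memsw.limit_in_bytes first"
  fi
}
```

So the fix is always two writes in order: bump memory.memsw.limit_in_bytes to at least the new value, then bump memory.limit_in_bytes.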

 

How to test memory limits of a docker container

I’ve been playing around with Docker containers recently, including setting memory limits, but Google wasn’t much help, so I’m posting the simple method I discovered here. You do need PHP installed in your container.

truncate -s 1G /tmp/1G
php -r 'file("/tmp/1G");'

All it does is create a 1 gigabyte sparse file in the /tmp directory, then attempt to read it in using PHP. PHP being PHP tries to load it all at once, and if your memory limits are working correctly it should give a Fatal error.
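If the container image doesn’t have PHP, the same idea works with any interpreter that slurps a whole file into memory; here’s a variant using Python 3 (assumed to be available), scaled down to 100MB for illustration:

```shell
# Create a sparse file (takes no real disk space) and read it into memory
# in one go; under a working memory limit below ~100MB this read would be
# killed or error out instead of printing the byte count.
truncate -s 100M /tmp/bigfile
python3 -c 'print(len(open("/tmp/bigfile", "rb").read()))'   # prints 104857600
```

If the process survives and prints the full size, the container was allowed to allocate that much memory.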

 

TeamCity: stop double building the same project

It is annoying when TeamCity builds the same project twice at the same time, especially when some of the steps decommission old servers and deploy new ones, and the two builds end up trampling all over each other.

The option to disable this is tricky to find. First go to your project’s build configuration, then click ‘General Settings’ on the left-hand side, then click ‘Show advanced options’ just above the save button in the main column. This reveals an option to “Limit the number of simultaneously running builds (0 — unlimited)”. Enter 1 in this box, hit save, and you’ll no longer have two builds of the same project running at the same time.

This is with TeamCity version 8.1.3; it may be in a different spot in future versions.

 

Puppet how to force apt-get update

It seems the Puppet apt module doesn’t run apt-get update more than once, even when you explicitly tell it to, if you haven’t changed your sources.list, because it thinks nothing has changed.

This is an issue we encountered with Tower Storm: we released new versions of private packages, but our servers didn’t run apt-get update before trying to install them, so they’d try to fetch an old version and get a 404 error.

To fix it you simply need to add these lines to your manifest:

class { 'apt':
  always_apt_update => true,
}

Then install your packages like so:

exec { 'apt-get-update':
  # no refreshonly here: a refreshonly exec only runs when notified, so it
  # would never fire from a plain require edge
  command => '/usr/bin/apt-get update',
}

$packages = ['nodejs', 'npm']

package { $packages:
  ensure  => installed,
  require => Exec['apt-get-update'],
}

Now every Puppet run calls apt-get update before your packages install, making sure they always get the latest version.

 

Tracking down stray open connections in rethinkdb

Over the past few days RethinkDB has been giving me errors about handshake timeouts due to too many open connections. If you’ve had similar ‘handshake timeout’ errors, you’ve probably got the same problem.

Somewhere in the Tower Storm codebase, connections were being made and not closed properly. Unfortunately the codebase is huge and database calls are made in many places, and when RethinkDB errors out it doesn’t give a stack trace or any indication of where connections are being tied up.

But I figured out a way to find connections that were not being closed properly.

Here’s my original connection code, based off the example on the rethinkdb site.

### rethinkdb-client.coffee ###

r = require('rethinkdb')
netconfig = require('../config/netconfig')

db = r
db.onConnect = (callback) ->
  r.connect {host: netconfig.db.host, port: netconfig.db.port, db: 'towerstorm'}, (err, conn) ->
    if err then throw new Error(err)
    if !conn then throw new Error("No RethinkDB connection returned")
    callback(err, conn)

module.exports = db

And here’s how I modified the onConnect function to find the connections that were not being closed:

db.onConnect = (callback) ->
  stack = new Error().stack
  r.connect {host: netconfig.db.host, port: netconfig.db.port, db: 'towerstorm'}, (err, conn) ->
    if err then throw new Error(err)
    if !conn then throw new Error("No RethinkDB connection returned")
    setTimeout ->
      if conn && conn.open
        console.log("Connection created was not closed in 5 seconds. Stack: ", stack)
    , 5000
    callback(err, conn)

First, the line:

stack = new Error().stack

gets a stack trace of how we reached this db.onConnect function.

Then, just before returning the connection, I set up a callback to check it in 5 seconds. If the connection is still open, it logs a stack trace showing exactly where it was opened, and I can add a conn.close() in the appropriate spot.

And just like that, you can find and kill all your stray RethinkDB connections.