Hibri Marzook Musings on technology and photography

Continuous Delivery - Going from rags to riches

My last year and a half has been spent guiding a team, working on a legacy codebase, to a level of maturity where they can deliver reliably.

We took a series of small steps, working within the constraints we had, and slowly worked our way to a higher degree of continuous delivery maturity.

My aim is to show the first few small steps we took, and to show that applying CD practices can be done in an iterative manner, with each iteration building on what was done before.


The team was in a chaotic stage. They were responsible for delivering an API for the main mobile app, building on the Microsoft .Net stack. They had just migrated their code away from TFS, to Git, and were starting to use JIRA for rudimentary tracking.

There was no CI or automation. One of the things that I observed during the early days was the chaos around deployments. Every other deployment was rolled back. The testing team would take a couple of days to test a release. The definition of done was when a release was thrown over the wall to the testing team.

Introduce a basic level of monitoring

Introducing monitoring early on helps the team quantify the response to outages. Not every outage needs the whole team to drop what they are doing and work on it. The team becomes proactive in dealing with outages, which gives them a bit more breathing room.

We introduced NewRelic, which gave us out of the box performance and error monitoring and alerting. We hooked it up to PagerDuty to get alerted as soon as issues occurred.

Before the introduction of NewRelic we had no visibility of how the API was performing in production, and didn’t even know if it was working properly. The only visibility we had of production issues was when customer support calls increased above the usual level.

This gave us a tiny start on CD. The team knew what was going on in production, and were eager to take ownership, without waiting for someone to assign a task to them. We were able to quantify what we were dealing with. We added more monitoring tools later on.

Visibility and limiting work in progress

Get the team to work on only one thing at a time. A team in a chaotic state needs to get into the rhythm of finishing work in progress and delivering it. It’s important to stress a “Ship it” mindset early on, and to increase the WIP limits only when the team can deliver reliably.

When a team is in a chaotic stage, they are already juggling multiple priorities, and don’t have the time to focus on doing one thing well. Limiting work in progress helps the team to focus on what they are doing. I still recommend keeping WIP limits very low even when the team is doing well.

Visualise the work the team is doing. Put up the classic card wall, and use electronic tools such as JIRA purely for record keeping.

A physical Kanban board in the team area empowers the team to take ownership of what they do. They can show what they are working on, and the act of simply working generates tangible artefacts.

Electronic task boards are a hindrance at this stage of a team’s maturity. Usually the electronic task boards are owned by someone outside the team, so the team doesn’t have a sense of ownership of their own process.

Visibility and limiting work in progress allows the team to be clear to everyone else involved on what they are dealing with.

Continuous Integration

Introduce CI as soon as possible. It’s not necessary to build the whole pipeline, and an automated pipeline can co-exist with manual steps. The aim should be to convert existing manual steps into automated steps run by a CI server, whilst pulling code from version control.

The first step in our CI pipeline was only a compile step. Getting to this stage was tough. It involved chasing down dependencies that were not checked in or documented. We then used the artefacts generated by the build server to do manual deploys.

We did this first because, even though the team was using version control, deployment artefacts were built on a developer’s machine. Moreover, the artefacts could only be built on a couple of key developers’ machines. Builds had to wait for someone to generate them.

When you don’t have anything else, put everyone together

Communication is key during the early stages of helping a team. Good communication builds trust between the different members of the team, so focus on building that trust. A team in a dysfunctional state has low-trust communication between team members.

Encourage pairing between testers and developers, as this creates a tight feedback loop and helps test code even before it is committed. Keep this tight feedback loop until an automated suite of tests is in place; I recommend keeping the close collaboration even after.

What helped us in the early days was sitting together in the same area. We had a tester on the team, who had deep knowledge about the product, and knew all the quirks to test for when doing regression tests. We didn’t have the communication overhead of waiting for someone outside of the team to do a task. I encouraged the team to talk to each other, and move away from using JIRA tickets as the primary communication channel.

Amplify the good things

Even when a team is in a chaotic state there are good practices. Learn to leverage these good practices as building blocks for better things. They may not be perfect, but they are a foundation to build on.

We were lucky to have a meticulous tester, who had built his own suite of tests, even when the developers did not have any reliable tests. The tester used to run his suite of tests after every release. We used these tests as the basis for our first automated regression test suite.

We converted the rudimentary tests into a simple suite of BDD style tests. The tests weren’t perfect, but these were the tests that gave us a little bit more confidence that our system wasn’t broken after a release.

Focus on learning

The practices above should serve one purpose: to give a team enough slack time to learn. This is where the real change towards Continuous Delivery maturity happens. Encourage pair programming early. Talk about books, and show examples of how things can be done better.

It’s easy to fall into the trap of focussing on the code, but keep in mind that the code is an expression of the thought process of how the individuals on a team think.

Give the team space to experiment and support them even when experiments fail.


Starting with these small steps, almost a year later, we were able to deliver reliably. We didn’t fix all the problems in the code. It was still gnarly in places.

We had an automated regression test suite that covered all our key scenarios and ran on every commit. We were able to commit to master and have the change deployed to production within the hour, and we rarely had a broken build.

Think of all the CD practices as a toolbox. You can’t have a CD pipeline from day one, nor should the team’s focus be on building the perfect CD pipeline. Focus on educating the team to have a quality and delivery focussed mindset.

Iterate on what you already have. The small initial steps can be force multipliers.

Here is a selection of books that have helped me.

Fearless Change : Patterns for introducing new ideas - Linda Rising and Mary Lynn Manns

The Five Dysfunctions of a Team - Patrick Lencioni

Creativity Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration - Ed Catmull

The Nature of Software Development: Keep It Simple, Make It Valuable, Build It Piece by Piece - Ron Jeffries

The Goal: A Process of Ongoing Improvement - Eliyahu M. Goldratt

Getting started with Windows Containers

Earlier this year Microsoft announced Windows Server 2016 TP5 with support for Windows Containers.

Windows Containers allow a server to act as a container host for containers that can be managed with tools like Docker. However, a Windows container host can run only Windows containers, not Linux containers.

I work on a Mac, and I want to use the Docker client on OSX to build Windows Containers. Here is what I went through to set up my environment to start playing with it.

Step 1

Build a virtual machine with the latest Windows 2016 Technical Preview (TP5 at the time of writing).

The usual way is to download the ISO, mount it in VirtualBox or VMware Fusion, and install. After the installation, follow the quick start instructions to configure Windows Containers.

The quick start guide is at https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_configure_host

My preferred method is to create a Vagrant box, so that I can have a repeatable base build to experiment on.

Packer is the tool for building Vagrant boxes, and there is a Packer Windows project with templates for doing so. The main Packer Windows project doesn’t contain templates for Windows Server 2016 yet, but Stefan Scherer has been working on supporting it, and has built templates to provision container hosts as well. Clone the Packer templates from https://github.com/StefanScherer/packer-windows, and run

packer build windows_2016_docker.json

This tells Packer to build a Vagrant box with Windows 2016 TP5 as a container host.

Once the box has been built, copy it to a place where it can be reused. My preferred place is a private Dropbox. You’ve now got a Vagrant box acting as a Windows Container Host, ready to experiment with.

Step 2

Spin up the Windows Container Host Vagrant box by creating a Vagrantfile

vagrant init -mf  windows_2016_docker <url to your vagrant box>

Start the Vagrant machine by running

vagrant up

Wait till the Vagrant machine starts up.

Step 3

Connect the Docker client to the Vagrant machine running the Windows Container Host.

When the Vagrant machine starts up, it will display its IP address. Use this IP address to set the DOCKER_HOST environment variable to tcp://<ip-of-the-vagrant-machine>:2375

In my environment it’s done by running,

export DOCKER_HOST=tcp://<ip-of-the-vagrant-machine>:2375
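As a concrete sketch (the address below is a placeholder, not a real host; substitute the IP your Vagrant machine prints when it boots):

```shell
# Placeholder IP -- use the address shown in the `vagrant up` output.
VAGRANT_IP=192.168.56.100

# Port 2375 is the unencrypted Docker daemon port the container host listens on.
export DOCKER_HOST=tcp://${VAGRANT_IP}:2375
```

With DOCKER_HOST set, every subsequent docker command in that shell talks to the Windows container host instead of a local daemon.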
Then run docker version and look at the output. It should be something like the following.

Client:
 Version: 1.11.1
 API version: 1.23
 Go version: go1.5.4
 Git commit: 5604cbe
 Built: Wed Apr 27 00:34:20 2016
 OS/Arch: darwin/amd64

Server:
 Version: 1.12.0-dev
 API version: 1.24
 Go version: go1.5.3
 Git commit: 2b97201
 Built: Tue Apr 26 23:51:36 2016
 OS/Arch: windows/amd64


The OS/Arch values should tell you that the Docker client on OSX (darwin/amd64) is connected to a Windows host (windows/amd64).

Step 4

Create a Dockerfile, with the following content.

FROM windowsservercore
RUN powershell.exe Install-WindowsFeature web-server
EXPOSE 80

This creates a Windows Docker container image that uses Windows Server Core as the base image, installs IIS, and exposes port 80.

Build the Dockerfile.

docker build -t iis .

And run the image with

docker run --name iisdemo -d -p 80:80 iis

That’s it. You now have a container running IIS. Visit http://<ip-of-the-vagrant-machine> to see the familiar IIS start page.


You’ve now got an environment to start experimenting with Windows Containers and Docker. You can start writing Docker files for your Windows only applications, and even start porting .Net services to run on Windows Containers.


Coney Island – Dead and Alive

View of Coney Island

It was my first trip across the Atlantic to the States and it was exciting to be in New York. The storms had passed, and the autumnal sunshine was at its peak. I wanted to capture the palette of New York, just as Saul Leiter would have. Yet, 3 days later on a quiet Halloween morning, I find myself on Coney Island. I see a different palette of colours: bright yellows and eye-catching reds. Not the fast, fleeting yellows of the NYC taxi cabs, but a more sauntering yellow, of a slower pace of life.


Coney Island is an anachronism. It seems to want to go back to the post-World War II era of its first decline, resisting any attempts to bring it into the modern age. These tensions between dying and living again are seen whilst sauntering along the boardwalk.


The old folks from the neighbouring senior homes greeting each other during their morning walks; the old man showing off on his classic Schwinn bike. Coney Island felt like a place that resisted youth, even when a gang of six-year-olds, dressed in Halloween outfits, stormed Nathan’s Hotdogs.




All photos taken with a Fuji X100T

On Donuts

Donuts. The Krispy Kreme assorted box of donuts. They’ve been a recurring theme during the last decade of my career. Once or twice a month I’ll bring two or more boxes to the office on a Friday morning. I’ll put them close to the area where my team sits. I’ll go to my desk and compose an email to the staff mailing list, with one word on the subject line.


Many would ask, while munching on a vanilla cream donut, if it was my birthday, and I’d say no. My reason for bringing in donuts? “Just because I wanted to.”

However, there is a reason. The tyre-shaped treats were another way of building my people skills and getting to collaborate with others outside of my team, mostly the rest of the company. There is an official name for this pattern of providing donuts: coined by Linda Rising in her book Fearless Change, it’s the “Do Food” pattern.

Sharing food made it easier for me to discuss difficult topics, and to approach people to get things done. I didn’t have to appeal to their rational mind. I didn’t have to make convincing arguments, or use data. Minds are already made up.

“I’ll listen to him, he’s the guy who brings me donuts.”

This post is inspired by a former colleague’s post on Emotional Intelligence for teams. It made me think of my own path to improving my people skills. I’m the archetypical techie. People don’t compute. Humans aren’t programmable. Yet, this thing called “Collaboration” is pretty damn important if you want to build the right thing, and change people into adopting ways of working that make us happier.

It’s pretty damn important if you want to enjoy building software. I’ve seen it too many times. Frustrated developers, unsatisfied customers, stressed managers and flaky software. I had to learn more tricks to make changes. Here are a few.

Have a bag of tasty treats on your desk.

Like the donuts, I normally have a box of chocolates or some other candy on my desk (I also have healthier options like cashew nuts and fruit).

Why? It’s easier to have a conversation whilst eating something nice. People see the treats and come over to ask what you are working on. You can then pitch your idea and start evangelising your vision. People listen while munching.

Even more, take said bag of treats over when you need to get something done, especially when working with sysadmins/ops folk.

No candy? “Did you raise a ticket for that?”

With candy, “Let me get that done for you after this”.

It’s most likely to do with the brain experiencing pleasure, which makes difficult decisions more enjoyable. There is some science to this, but let someone else figure it out. It works.

This is your only option, since buying someone an alcoholic drink during working hours, in the office, is frowned upon in certain uptight cultures.

Lead your own retrospectives.

Officially, a retrospective is supposed to be led by a neutral facilitator. I get quizzical looks when I say I’m leading the retrospective for my own team, because that is not the format. Why do I do it?

It’s a good way to observe your team in a relaxed setting. In the early stages of forming a team, badly run retrospectives can have a negative impact, and it’s important to have effective retrospectives early on. When introducing a collaborative culture, it’s not always easy to find good retrospective facilitators. This is sometimes where agile adoption fails: ineffective retrospectives.

Leading a retrospective, as a team/tech lead allows you to show leadership in non-technical areas. The team will feel like they can talk to you about the issues they are facing, instead of an external facilitator. You can also try innovative retrospective ideas that are tuned to your team’s current situation.

Read Maverick by Ricardo Semler.

This is the book that convinced me there is a better way of working than what is experienced in most places. Trust, transparency, diversity and giving people the space to do what they want bring out the best in everyone. The book is full of ideas on how to improve working practices, and how to deal with the complex issues that arise with a workplace democracy.

Observe first, reflect and then talk.

Take all the time you need to observe and reflect. I’ve learned to let the conversation continue without having to say anything. To be a catalyst, you’ll have to observe the whole system and figure out where to start nudging people towards the change you want to see.

Observation enables you to develop empathy, since you are not reacting immediately to what people say. Have them explain till you understand. Cluelessness is OK. Ask questions till you understand. Harness your inner introvert.

It’s hard to avoid office politics.

At its core, an organisation is a complex system, which behaves the way it does because of the emergent behaviour of all the chaotic interactions of the people inside it (influenced by what they had for breakfast, or which side of the bed they woke up on) and the entities outside of it. The goal of making money somehow guides this katamari-damacy-esque blob along.

Everyone around you has their own agenda, goals and desired rewards. To achieve your goal, you’ll have to channel your inner Henry Kissinger. People will be amenable to changes as long as they get what they need out of it, be it a position of power, a sympathetic ear or a reward. You’ll have to navigate these complex networks of people through Machiavellian manoeuvring and seduction.

Lately I’ve been reading Henry Kissinger’s World Order and some of Robert Greene’s writings. I recommend reading the Art of Seduction and the 48 Laws of Power.


Next?

All of this is pretty new to me and extremely interesting. I love reading about how techies move out of their “cubicles”, and the tools they use to address human problems.

Thoughts?

My NodeJS development workflow

I’m building a NodeJS + Angular application, and there are a few things I want from a smooth dev workflow to improve developer happiness. Yay!

  1. Run the node server process and restart automatically when I make changes
  2. Concatenate my client side JS code, and merge libraries such as angular, jquery and bootstrap.
  3. I don’t want to check in concatenated files.
  4. I don’t want to check in modules or libraries, even for client side scripts
  5. I want to catch typos and style violations.
  6. I want tests to run in the background.

Grunt is the task runner I’m using to automate the tasks. Let’s look at each of these tasks.

Automate the NodeJS server process with nodemon.

npm install --save-dev nodemon

Now, run nodemon, pointing it at the server script (‘bin/www’ for an express app):

nodemon bin/www
Eventually, you’ll want to trigger nodemon through Grunt. Get the grunt-nodemon plugin, load it with grunt.loadNpmTasks('grunt-nodemon'), and add the Grunt config below.


nodemon: {
    dev: {
        script: 'bin/www',
        options: {
            ignore: ['node_modules/**', 'bower_components/**', 'public/**']
        }
    }
}

I’ve configured nodemon to ignore javascript that is not my code, and ignore changes to client side javascript. The path to the node server script is set in the script property. Since I’ve used express to create an app, this is ‘bin/www’.

Great, now executing grunt nodemon will watch for changes and restart the server.

Next, I want to avoid all those pesky typos, misplaced semi-colons and get into the habit of writing good Javascript. I’ll use JSHint.

npm install --save-dev grunt-contrib-jshint

Configure JSHint as below,


jshint: {
    all: ['Gruntfile.js', 'lib/**/*.js', 'test/**/*.js', 'public/javascripts/app/**/*.js']
}

This configures JSHint to run on my tests, client side javascript and server code.
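If you want stricter checking than the defaults, grunt-contrib-jshint also accepts an options object alongside the file lists. The specific flags below are illustrative choices (standard JSHint option names), not the original project's settings:

```javascript
// Illustrative JSHint configuration; the flag names are standard JSHint options.
var jshintConfig = {
    options: {
        curly: true,   // require braces around all blocks
        eqeqeq: true,  // disallow == and != in favour of === and !==
        undef: true,   // warn on use of undeclared variables
        node: true     // recognise Node globals such as require and module
    },
    all: ['Gruntfile.js', 'lib/**/*.js', 'test/**/*.js', 'public/javascripts/app/**/*.js']
};

module.exports = jshintConfig; // can be required into the Gruntfile if preferred
```

Keeping the options in one object (or a shared .jshintrc file) means your editor and the Grunt task enforce the same rules.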

As with nodemon, I want JSHint running in the background and checking my code as I type and save files, which gives me quick feedback. Because JSHint doesn’t do this by itself, I’ll introduce another plugin to the Gruntfile.

This is the grunt watch plugin, which will watch for changes in my source directory and run other Grunt tasks.

npm install --save-dev grunt-contrib-watch

Configure grunt-watch as below,

watch: {
    scripts: {
        files: ['**/*.js', '!node_modules/**', '!bower_components/**', '!public/javascripts/dist/**'],
        tasks: ['jshint']
    }
}

This sets up grunt-watch to watch over all of my code. Ignore directories by placing an exclamation mark before the path spec.

Great, now if I execute grunt watch on the command line, it will start watching for changes and run jshint when a change is detected. Sweet.

There is a catch though. I want to run nodemon and watch at the same time. I could run both in separate shells, but that would mean switching back and forth.

Let’s introduce grunt-concurrent. This executes grunt tasks concurrently. Now I can run nodemon and watch at the same time.

npm install --save-dev grunt-concurrent

Configure as below:


concurrent: {
    dev: {
        tasks: ['nodemon', 'watch'],
        options: {
            logConcurrentOutput: true
        }
    }
}

The config above, tells grunt-concurrent to run nodemon and watch. Both tasks are grouped logically under a “dev” subtask.

If I execute grunt concurrent on the command line, I’ll have my server restarting automatically and JSHint watching my code. To reduce what I need to type even more, I’ll set up Grunt to run this task as the default.

grunt.registerTask('default', ['concurrent']);

All I need to do is type “grunt” and have all the work done for me.

This gives me a basic framework to add other tasks. Grunt allows me to group tasks. For example, I want to group all of my build time tasks as one. My build time tasks are to run jshint, run tests, merge libraries like angular, and concatenate client side javascript.

grunt.registerTask('build', ['jshint','karma','concat','bower_concat']);

Instead of watch running JSHint only, let’s change it to run the build task.

watch: {
    scripts: {
        files: ['**/*.js', '!node_modules/**', '!bower_components/**', '!public/javascripts/dist/**'],
        tasks: ['build']
    }
}

Here is the complete Gruntfile https://gist.github.com/hibri/6ccd0aef805353dc8260

In addition to the tasks described above, I’ve also used the plugins listed below

  • grunt-bower-concat: merges all the dependencies installed via bower.
  • grunt-contrib-concat: merges all the client side javascript I’ve written, including all my angularjs code.
  • grunt-karma: runs the tests.
  • grunt-bower-task: runs bower install through Grunt.

You’ll also notice I’ve got the following task configured.

grunt.registerTask('heroku:production', ['bower:install','concat', 'bower_concat']);

The deployment to Heroku uses this task. I want the concatenation to run when I push to production, so I don’t have to check any build artefacts into source control.

Heroku doesn’t run Grunt natively during a deploy, but they do support build packs. I’ve configured Heroku to use a nodejs buildpack with grunt support.

See https://github.com/mbuchetics/heroku-buildpack-nodejs-grunt for more.

This is the build framework I’m starting with, and it will most certainly grow with the project. A way of sharing ignored file lists between tasks would be nice; there is some duplication already.

What’s your workflow ?