Hibri Marzook

The software philosopher

March 29, 2015

On Donuts

Donuts. The Krispy Kreme assorted box of donuts. They’ve been a recurring theme during the last decade of my career. Once or twice a month I’ll bring two or more boxes to the office on a Friday morning. I’ll put them close to the area where my team sits. I’ll go to my desk and compose an email to the staff mailing list, with one word on the subject line.

Donuts.

Many would ask, while munching on a vanilla cream donut, if it was my birthday, and I’d say no. My reason for bringing in donuts? “Just because I wanted to.”

However, there is a reason. The tyre-shaped treats were another way of building my people skills, and of getting to collaborate with people outside my team, mostly the rest of the company. There is an official name for this pattern of providing donuts: the Do Food pattern, coined by Linda Rising in her book Fearless Change.

Sharing food made it easier for me to discuss difficult topics, and to approach people to get things done. I didn’t have to appeal to their rational mind. I didn’t have to make convincing arguments. I didn’t have to use data. Minds are already made up.

“I’ll listen to him. He’s the guy who brings me donuts.”

This post is inspired by a former colleague’s post on Emotional Intelligence for teams. It made me think of my own path to improving my people skills. I’m the archetypal techie. People don’t compute. Humans aren’t programmable. Yet, this thing called “collaboration” is pretty damn important if you want to build the right thing, and nudge people into adopting ways of working that make us happier.

It’s pretty damn important if you want to enjoy building software. I’ve seen it too many times: frustrated developers, unsatisfied customers, stressed managers and flaky software. I had to learn more tricks to make changes. Here are a few.

Have a bag of tasty treats on your desk.

Like the donuts, I normally have a box of chocolates or some other candy on my desk (I also have healthier options like cashew nuts and fruit).

Why? It’s easier to have a conversation whilst eating something nice. People see the treats and come over to ask what you are working on. You can then pitch your idea and start evangelising your vision. People listen while munching.

Better yet, take said bag of treats over when you need to get something done, especially when working with sysadmins/ops folk.

No candy? “Did you raise a ticket for that?”

With candy: “Let me get that done for you after this.”

Most likely it’s to do with the brain experiencing pleasure, making difficult decisions more enjoyable. There is some science to this, but let someone else figure it out. It works.

This is your only option, since buying someone an alcoholic drink during working hours in the office is frowned upon in certain uptight cultures.

Lead your own retrospectives.

Officially, a retrospective is supposed to be led by a neutral facilitator. I get quizzical looks when I say I’m leading the retrospective for my own team, because that is not the format. Why do I do it?

It’s a good way to observe your team in a relaxed setting. In the early stages of forming a team, badly run retrospectives can have a negative impact, so it’s important to have effective retrospectives early on. When introducing a collaborative culture, it’s not always easy to find good retrospective facilitators, and this is sometimes where agile adoption fails: ineffective retrospectives.

Leading a retrospective as a team/tech lead allows you to show leadership in non-technical areas. The team will feel like they can talk to you about the issues they are facing, rather than to an external facilitator. You can also try innovative retrospective ideas that are tuned to your team’s current situation.

Read Maverick by Ricardo Semler.

This is the book that convinced me that there is a better way of working than what is experienced in most places. Trust, transparency, diversity and giving people the space to do what they want bring out the best in everyone. The book is full of ideas on how to improve working practices, and how to deal with the complex issues that arise with a workplace democracy.

Observe first, reflect and then talk.

Take all the time you need to observe and reflect. I’ve learned to let the conversation continue without having to say anything. To be a catalyst, you’ll have to observe the whole system and figure out where to start nudging people towards the change you want to see.

Observation enables you to develop empathy, since you are not reacting immediately to what people say. Have them explain, and ask questions, till you understand. Cluelessness is OK. Harness your inner introvert.

It’s hard to avoid office politics.

At its core, an organisation is a complex system. It behaves the way it does because of the emergent behaviour of all the chaotic interactions of the people inside it (influenced by what they had for breakfast, or what side of the bed they woke up on) and the entities outside of it. The goal of making money somehow guides this Katamari Damacy-esque blob along.

Everyone around you has their own agendas, goals and desired rewards. To achieve your goal, you’ll have to channel your inner “Henry Kissinger”. People will be amenable to change as long as they get what they need out of it, be it a position of power, a sympathetic ear or a reward. You’ll have to navigate these complex networks of people through Machiavellian manoeuvring and seduction.

Lately I’ve been reading Henry Kissinger’s World Order and some of Robert Greene’s writings. I recommend The Art of Seduction and The 48 Laws of Power.


Next?

All of this is pretty new to me and extremely interesting. I love reading about how techies move out of their “cubicles” and the tools they use to address human problems.

Thoughts?

December 10, 2014

My NodeJS development workflow

I’m building a NodeJS + Angular application, and there are a few things I want in order to keep the dev workflow smooth and improve developer happiness. Yay!

  1. Run the node server process and restart automatically when I make changes.
  2. Concatenate my client side JS code, and merge libraries such as Angular, jQuery and Bootstrap.
  3. I don’t want to check in concatenated files.
  4. I don’t want to check in modules or libraries, even for client side scripts.
  5. I want to catch typos and style violations.
  6. I want tests to run in the background.

Grunt is the task runner I’m using to automate these tasks. Let’s look at each one.
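
For context, here’s a rough sketch of the Gruntfile shape that the config snippets below slot into (the complete, real Gruntfile is linked at the end of this post):

module.exports = function (grunt) {
    grunt.initConfig({
        // the nodemon, jshint, watch and concurrent configs shown below go here
    });
    // plus one grunt.loadNpmTasks('...') call per plugin used
};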

Automate the NodeJS server process with nodemon.

npm install --save-dev nodemon

Now, run it against the server entry point. With an Express-generated app this is bin/www (as configured below), and since nodemon was installed with --save-dev, it lives under node_modules/.bin:

./node_modules/.bin/nodemon bin/www

Eventually, you’ll want to trigger nodemon through grunt. Get the grunt-nodemon plugin. The plugin registration and the Grunt config are below.


grunt.loadNpmTasks('grunt-nodemon');

nodemon: {
    dev: {
        script: 'bin/www',
        ignore: ['node_modules/**', 'bower_components/**', 'public/**']
    }
}

I’ve configured nodemon to ignore javascript that is not my code, and to ignore changes to client side javascript. The path to the node server script is set in the script property. Since I’ve used Express to create the app, this is ‘bin/www’.

Great, now executing grunt nodemon will watch for changes and restart the server.

Next, I want to avoid all those pesky typos, misplaced semi-colons and get into the habit of writing good Javascript. I’ll use JSHint.

npm install --save-dev grunt-contrib-jshint

Configure JSHint as below,


jshint: {
    all: ['Gruntfile.js', 'lib/**/*.js', 'test/**/*.js', 'public/javascripts/app/**/*.js']
}

This configures JSHint to run on my tests, client side javascript and server code.

As with nodemon, I want JSHint running in the background and checking my code as I type and save files. This gives me quick feedback. Because JSHint doesn’t do this, I’ll introduce another plugin to the Gruntfile.

This is the grunt watch plugin, which will watch for changes in my source directory and run other Grunt tasks.

npm install --save-dev grunt-contrib-watch

Configure grunt-watch as below,

watch: {
    scripts: {
        files: ['**/*.js', '!node_modules/**', '!bower_components/**', '!public/javascripts/dist/**'],
        tasks: ['jshint']
    }
}

This sets up grunt-watch to watch over all of my code. Ignore directories by placing an exclamation mark before the path spec.

Great, now if I execute grunt watch on the command line, it will start watching for changes and run jshint when a change is detected. Sweet.

There is a catch though. I want to run nodemon and watch at the same time. I could run both in separate shells, but that would mean switching back and forth.

Let’s introduce grunt-concurrent. This executes grunt tasks concurrently. Now I can run nodemon and watch at the same time.

npm install --save-dev grunt-concurrent

Configure as below,


concurrent: {
    dev: {
        tasks: ['nodemon', 'watch'],
        options: {
            logConcurrentOutput: true
        }
    }
}

The config above tells grunt-concurrent to run nodemon and watch. Both tasks are grouped logically under a “dev” subtask.

If I execute grunt concurrent on the command line, I’ll have my server restarting automatically and JSHint watching my code. To reduce what I need to type even further, I’ll set up grunt to run this task as the default:

grunt.registerTask('default', ['concurrent']);

All I need to do is type “grunt”, and it does all the work for me.

This gives me a basic framework to add other tasks to. Grunt allows me to group tasks. For example, I want to group all of my build time tasks as one: run jshint, run tests, merge libraries like Angular, and concatenate client side javascript.

grunt.registerTask('build', ['jshint','karma','concat','bower_concat']);

Instead of watch running JSHint only, let’s change it to run the build task.

watch: {
    scripts: {
        files: ['**/*.js', '!node_modules/**', '!bower_components/**', '!public/javascripts/dist/**'],
        tasks: ['build']
    }
}

Here is the complete Gruntfile: https://gist.github.com/hibri/6ccd0aef805353dc8260

In addition to the tasks described above, I’ve also used the plugins listed below:

  • grunt-bower-concat: merges all the dependencies installed via bower.
  • grunt-contrib-concat: merges all the client side javascript I’ve written, including all my angularjs code.
  • grunt-karma: to run tests.
  • grunt-bower-task: to run bower install through grunt.

You’ll also notice I’ve got the following task configured.

grunt.registerTask('heroku:production', ['bower:install','concat', 'bower_concat']);

The deployment to Heroku uses this task. I want the concatenation to run when I push to production, so I don’t have to check any build artefacts into source control.

Heroku doesn’t run Grunt natively during a deploy, but it does support buildpacks. I’ve configured Heroku to use a nodejs buildpack with grunt support.

See https://github.com/mbuchetics/heroku-buildpack-nodejs-grunt for more.

This is the build framework I’m starting with, and it will most certainly grow with the project. A way of sharing ignored files between tasks would be nice; there is some duplication already.
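
One possible way to remove that duplication (a sketch, not from the linked Gruntfile): define the ignore globs once as a variable and concat them into each task’s file list.

var ignored = ['!node_modules/**', '!bower_components/**', '!public/javascripts/dist/**'];

grunt.initConfig({
    watch: {
        scripts: {
            // everything except the shared ignore list
            files: ['**/*.js'].concat(ignored),
            tasks: ['build']
        }
    }
});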

What’s your workflow?

December 7, 2014

The Fuji X100T – First Impressions

I love gadgets. Cameras even more. Over the past decade, I’ve gone through quite a collection of photography devices, starting with a Canon EOS 350 film camera (which I still have), up to a 5D, and even an old Polaroid SX-70.

However, none of these cameras created an emotional bond, or felt like they were made for me. Heck, the 5D is a brute that can make a great picture of anything I throw at it.

That changed when I got the Fuji X100. I bought it as a lightweight travel camera. It changed how I approached photography. I wasn’t worried about the perfect settings anymore. The tactile controls of the X100 gave a physical element to creating photographs.

The X100 was a difficult camera though. It was moody and needed coaxing to make the best of the available light. The auto focus was slow, and the manual focus was unusable. The camera did deliver when I had the patience.

In Arles, France, earlier this summer, I learned the art of shooting from the hip, inspired by the excellent book of the same name by Johnny Stiletto. I needed a camera that is as responsive as a natural reflex, to capture the scene. A camera that encourages a minimalistic style of shooting.

The Fuji X100T shows the maturity of the X100 line. The X100 and the X100S were brave steps towards making cameras beautiful again. Naturally, I placed a pre-order for the X100T, and when it arrived I took it out for a little spin this weekend.

The auto focus is fast. The start up is fast. The camera is ready by the time I lift it up to my eye. The optical viewfinder (OVF) has wider coverage compared to the X100. The electronic viewfinder (EVF) is responsive, with a crisper view. It doesn’t lag. Previously, the tiny bit of lag was enough to mess up my composition. The viewfinder has a tidy layout, with all the indicators outside of the frame. There is not much that gets in the way of composing a scene. In addition, I’ve been turning off all but the essential indicators.

The manual focus is usable. The focus ring turns smoothly, but still not as smoothly as on an EF lens. It still has a little bit of a tactile disconnect between what I do and what I see. The 1/3 aperture step change on the control ring is a welcome addition. Previously, I didn’t like using the command dial to change to f3.5. The buttons on the back give the same solid feedback as the rest of the control dials. Control consistency is great. On the X100, the dial was a tad hypersensitive, and I accidentally switched to the wrong shooting mode on many occasions.

I love the Wifi connectivity and remote shooting capabilities. I’m using an iPad for my editing workflow, and this will be excellent while travelling. I can make do with fewer SD cards now.

I’m not entirely sold on the film simulations, and have rarely used them. I’m impressed with how little post processing I’ve had to do with what I’ve shot on the X100, even in RAW mode. The X100T can shoot in a square aspect ratio, in camera. This is great for photos that are posted to Instagram later.

I doubt the reliability of the battery level indicator on the X100T. It showed the battery as almost empty, but after turning the camera off and on again it went back to showing 75% full. I expect the battery to last at least a whole day’s worth of shooting. I also have to question why Fuji doesn’t include an adaptor ring and a lens hood with the camera, considering it’s a premium compact.

Overall, I love how the X100T responds to my instinct and doesn’t stand in the way of snapping up what’s happening around me. I’ve never been this excited to use a camera. Well, at least till the next one comes out.



August 3, 2014

Accepting Uncertainty


This is one of those things that should be filed under continuous delivery patterns. One of the anti-patterns I see in continuous deployment is the need to make sure that the software is flawless before it’s released. This anti-pattern manifests itself in the need to have several stakeholders test and approve a release. The release dance takes days, even when something is already good enough to be on a production system.

A large suite of long-running tests must be run before a release is approved. A release management dance must be performed before each release. Cue the constant conversations around the office about the impending release.

Instead, let’s accept the inherent uncertainty in building software. Any non-trivial system is complex. We can’t predict its behaviour to any accuracy. We can only observe and react to any unexpected behaviour.

This is the key tenet in building a continuous deployment pipeline. The ability to react to uncertainty.

Chasing certainty with more automated tests will only give diminishing returns. There should be enough tests to increase the confidence level. That’s it. Nothing more.

The rest comes from observing how the software behaves, by monitoring and gathering data. Use that to react. Add a few more tests. Rinse, repeat. Iterate.

The ability to iterate and react will give better quality software in the long term, than a stranglehold with tests and testers.


April 9, 2014

Continuous Delivery culture, not tools – Notes from an Open Space session


I facilitated an open space session at the Pipeline Conference in London, to discuss focussing on a culture of Continuous Delivery (CD) rather than the tools. We listed a few of the anti-patterns seen in the wild:

Culture not tools 

  • The CD guy.
  • The Devops person.
  • The team not owning CD.
  • Too much standardisation: one person writes the rulebook and it is forced on every team.
  • No culture of change. The delivery pipeline and code are fixed; no one touches them.
  • No culture of learning.
  • Too much focus on the technology.
  • Cherry-picking practices and missing the connections between them.

We then discussed how to communicate CD without technical terminology, using language that the rest of the organisation understands, especially terms that senior management can relate to.

Techies do a bad job of communicating the process of building software. Don’t mention TDD, CD or Devops when talking to management. Learn to talk in terms of business goals. Adaptive thinking, storytelling and impact mapping are good tools to have.


Anthony Green described a pyramid of skills/terms to use when talking about CD.

Pyramid of skills

Techies start at the apex of the pyramid when talking about CD and doing a CD transformation. Instead, we should take ideas from the human sciences to involve non-technical people in the process.

Individuals learn, but organisations are bad at learning. How do we build a culture of learning across the organisation? How does the organisation learn? In most organisations, failure is penalised.


There were many suggestions to improve organisational learning.

  • Empower people; remove the shackles.
  • No job titles. Titles restrict employees. However, titles are useful for HR. Is HR useful?
  • Culture interviews.
  • Get better at recruitment. Pair programming interviews. Grow people.

We discussed a few techniques to learn agile practices without the tools and technology.  Agile games such as the Lean Lego Game and the Kanban Pizza Game help introduce the CD thinking without getting mired in technical discussions. Matthew Skelton is doing interesting work in this area, with a workshop to experience devops and a collaborative culture at http://web.experiencedevops.org/ .

Everyone should read Maverick by Ricardo Semler.

Matthew also highlighted how we are good at spotting bad software architecture, but don’t spot queues and bottlenecks in organisational culture. The sketch below would be recognised as having a bottleneck if it were a software system, but can we spot the bottleneck if it were an org chart?



At the end, there was consensus that it all comes down to having good people.

Thanks to everyone who attended the open space session. Most of all to the conference organisers for putting together a well organised, and very thought-provoking event.

April 19, 2013

Richard Feynman’s observations on the reliability of the Space Shuttle.

I’ve been reading a lot about Richard Feynman lately. I find his character and his unique approach to learning appealing.

In the book “What Do You Care What Other People Think?”, he reminisces about his time on the Rogers Commission investigation into the Space Shuttle Challenger disaster. The book contains his appendix to the report. These are Feynman’s personal observations: Appendix F – Personal Observations on the Reliability of the Shuttle.

A few key points stood out to me that are relevant to how we build software:

  1. Becoming immune to small failures. NASA ignored minor errors, and modified their quality certification to account for them, without investigating the systemic failures behind the errors.
  2. It didn’t fail in the past, therefore it will keep on working.
  3. Difference in culture. During the Apollo program, there was shared responsibility. If there was a problem with an astronaut’s suit, everyone was involved until the problem was solved. In the Space Shuttle era, someone else designed the engines, another contractor built them, and someone else was responsible for installing them. They lost sight of the common goal. It was someone else’s problem.
  4. The Space Shuttle was built in a top-down manner (big design up front?), instead of bottom-up, using parts that were known and proven to work. There was constant firefighting to keep it all going, and the engines were rebuilt each time.
  5. He does appreciate the efforts of the software engineering team, though. Their testing was rigorous, and he wonders why the other teams were not as good.

March 4, 2013

What do tests tell us?

Back in September 2012, I gave a talk at Devs in the Ditch on what tests tell us. The talk resonated with a post by Matt Wynne today on fixing slow builds.

Optimising a slow build? You are solving the wrong problem.

Slow builds and test suites with a large number of tests are an architecture smell: an indicator that there is something wrong with how you have built the application. I covered a few other things that tests, or the lack of them, tell us in my talk.

April 28, 2012

How we do deployments at 7digital.

At 7digital, deployments to a production environment are a non-event. On a given day, there can be at least 10 releases to production during working hours. On some days even more, especially on Thursdays before the 4pm cut-off, as we don’t deploy to production after 4pm or on Fridays.

Deployments to our internal testing environments happen constantly, on every commit. I’m able to deploy a quick fix, or patch something in production, without having to make changes on a live server. We rarely roll back; instead we roll forward by fixing the issue. This is made possible by our investment in build and deployment tools, which we attempt to treat with the same care as our production code.

This post is about how it works.

A little bit about our stack.

Our services run on the .Net stack, mostly with SQL Server back ends. We use IIS 7 and IIS 6, load balanced behind HAProxy.

We use Teamcity to trigger Rake scripts, which do our deployments. The Albacore gem is used for the majority of tasks. We use code derived from Dolphin deploy to configure IIS.


In the beginning.

We used MSBuild for building our solutions and deploying software. However, this was very painful, and led me on a personal crusade to get rid of MSBuild for deployments. XML-based build frameworks limit what you can do to what is defined in the particular framework’s vocabulary. A big pain was having to store configuration and code in the same MSBuild XML files. It wasn’t possible to separate the two without writing custom tasks.

A build framework in a programming language allows you to be much more fluent and to write readable scripts. You have all the libraries at your disposal to write deployment code, instead of being limited to an XML schema definition. In addition to Ruby, we have a couple of projects using PowerShell and psake.


The current setup.




The diagram above shows the major parts of our deployment pipeline.

We keep the build and deployment code along with the project code, to maintain a self-contained package with minimal dependencies on anything else.

A project has a master rake file, named rakefile.rb in the root directory of the project. This rake file references all the other shared rake scripts and ruby libraries needed for build and deployment.

These libraries and scripts are kept in a subdirectory named build. A typical project structure looks something like this (a sketch; only rakefile.rb, build and conf are described in this post, the rest is illustrative):
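
    project/
      rakefile.rb     # master rake file, references the shared build scripts
      build/          # shared rake scripts and ruby libraries
      conf/           # per-environment settings (YAML), including IIS config
      src/            # the .Net solution and code (illustrative)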


The conf directory contains the configuration settings for IIS, including the host headers, app pool settings and the .Net framework version.

The Albacore build gem has everything that is needed to build a .Net solution. We use it to compile our code on Teamcity and to run our tests.

When something is checked into VCS (git), Teamcity triggers a build and compiles the code. This build process packages the deployment scripts and the website, which are used for deployment. Teamcity stores these as artefacts, which allows us to reuse them without building again.

To deploy a website, a Teamcity build agent retrieves all the necessary zipped packages and uncompresses them into the current working directory.

The build agent calls a rake task with the parameters:

   rake deploy[environment, version_to_deploy, list_of_servers, build_number]

An example:

   rake deploy["live","1.2","server1;server2;server3","123"]

The environment parameter specifies which deployment settings to use. Deployment settings are stored in YAML files that the rake scripts read. A YAML file for IIS settings looks like this:

 # uat environment
 site_name: xxx.dns.uat
 host_header: 80:xxx.dns.uat:*
 dot_net_version: v4.0

 # live environment
 site_name: xxx.dns.com
 host_header: 80:xxx.dns.com:*
 dot_net_version: v4.0


We can add a new environment, or change settings for an existing one, by changing a configuration .yml file, without having to change the deployment scripts.

The version_to_deploy parameter loosely translates to a virtual directory; it is ignored for websites that deploy to the root. The list of servers is arbitrary, which allows us to deploy to a single server or a cluster.

The rake deploy task calls two other rake tasks for each server in the list. The first copies all deployment scripts and the web package to the target server. The second triggers a remote shell command to do the actual installation.

In pseudo code:


       servers.each do |server|
         copy_scripts_and_packages_to server
         trigger_remote_installation_on server
       end

The actual installation process does not happen from the build agent, but on the target server, as the build agent does not have the necessary network access and admin rights. Our servers expose only SSH.

The deployment sequence is controlled by chaining rake tasks. This allows us to run any of the tasks individually from the command line to do a manual deployment or to test.

The remote installation task copies all the website binaries to the correct locations under IIS, and configures IIS. Application pools under IIS are stopped while this happens, the virtual directory (and, if needed, the website) is rebuilt, and the application pools are restarted afterwards.

The deployment then repeats the process on the next server in the list.


The future.

What we have now helps us a lot, and has allowed us to scale up to this point. However, to grow even more, there are a few things that hold us back.

For example, a lot of infrastructure detail creeps into our configuration and scripts, and is stored in source control, which is mostly used by devs. This means that when our operations folk make a change to the infrastructure, the devs have to change the configuration settings to reflect it. I would like to have all configuration settings stored somewhere central, with the scripts calling out to a service to get all the settings for a particular environment and application. This service would be maintained by devops, and would be synchronised with changes made to the infrastructure.

The same can be done for the list of servers. Instead of a developer having to know which servers comprise an environment, the script could ask the same service for the list of servers in a given environment. This would allow us to scale transparently, by adding a new server to the list and doing a fresh deploy.



I’ve tried to capture an overview of how we deploy our software at 7digital. There is a lot of detail I haven’t gone into, especially the nitty-gritty of setting up IIS host headers, ports and app pool settings. A build and deployment framework is something we set up from day one of any new project. We make sure that we have a skeleton application deployed all the way to production before any new code is written.

Feel free to get in touch if you have any specific questions.




February 17, 2012

Getting started with web applications on Mono


I’ve started to explore Mono, with a view to moving some of our web applications to Linux. I used MonoDevelop on OSX to spike a simple HttpHandler that returns a response. I was more interested in how the hosting and deployment story works with Mono.

This is a little list of things I discovered as I went along.

http://www.mono-project.com/ASP.NET has a list of the hosting options available. I went with the Nginx option. Mono comes with xsp, which is useful for local testing.

Running a simple web application

To run xsp: /usr/bin/xsp --port 9090 --root <path to your application>. The application will then be available on http://localhost:9090


To install Nginx on OSX, get Homebrew, and then simply run brew install nginx

Follow the instructions at http://www.mono-project.com/FastCGI_Nginx to configure Nginx to work with Mono’s FastCGI server.

On OSX, the Nginx config can be found at /usr/local/etc/nginx/nginx.conf

This is the configuration I tried for my testing,

In /usr/local/etc/nginx/nginx.conf

   server {
       listen 80;
       server_name localhost;
       access_log /var/log/nginx/localhost_mono_access.log;
       location / {
           root /Users/hibri/Projects/WebApp/;
           index default.aspx index.html;
           fastcgi_index default.aspx;
           # pass requests to the Mono FastCGI server (address assumed; see below)
           fastcgi_pass 127.0.0.1:9000;
           include /usr/local/etc/nginx/fastcgi_params;
       }
   }

Add the following lines to /usr/local/etc/nginx/fastcgi_params

 fastcgi_param  PATH_INFO          "";
 fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;


Start Nginx.

Start the Mono FastCGI server (fastcgi-mono-server4, which ships with Mono); the socket must match the fastcgi_pass address in the Nginx config above:


     fastcgi-mono-server4 /applications=localhost:/:/Users/hibri/Projects/WebApp/ /socket=tcp:127.0.0.1:9000

And the application is available on http://localhost

Web Frameworks

We use OpenRasta for the services I want to run on Linux. OpenRasta didn’t work out of the box; this is something I’ll be exploring in the next few days.

I tried ServiceStack too, and was able to get one of our projects (https://github.com/gregsochanik/basic-servicestack-catalogue) working on Mono as is. Nancy is next on the list.
