Hibri Marzook

The software philosopher

April 9, 2014

Continuous Delivery culture, not tools – Notes from an Open Space session


I facilitated an open space session at the Pipeline Conference in London, to discuss focussing on a culture of Continuous Delivery (CD) rather than the tools. We listed a few of the anti-patterns seen in the wild.

Culture not tools 

  • The CD guy.
  • The Devops person.
  • The team not owning CD.
  • Too much standardisation. One person writes the rulebook and it is forced on every team.
  • No culture of change. The delivery pipeline and code are fixed. No one touches it.
  • No culture of learning.
  • Too much focus on the technology.
  • Cherry-picking practices and missing the connection between practices.

We then discussed how to communicate CD without using technical terminology, using language that the rest of the organisation understands, especially terms that senior management can relate to.

Techies do a bad job of communicating the process of building software. Don't mention TDD, CD or Devops when talking to management. Learn to talk in terms of business goals. Adaptive thinking, storytelling and impact mapping are good tools to have.

Communication

Anthony Green described a pyramid of skills/terms to use when talking about CD.

Pyramid of skills

Techies start at the apex of the pyramid when talking about CD and doing a CD transformation. Instead we should take ideas from the human sciences to involve non-technical people in the process.

Individuals learn, but organisations are bad at learning. How do we build a culture of learning across the organisation? How does the organisation learn? In most organisations, failure is penalised.

Learning

There were many suggestions to improve organisational learning.

  • Empower people and remove the shackles.
  • No job titles. Titles restrict employees. However, titles are useful for HR. Is HR useful?
  • Culture interviews.
  • Get better at recruitment. Pair programming interviews. Grow people.

We discussed a few techniques to learn agile practices without the tools and technology. Agile games such as the Lean Lego Game and the Kanban Pizza Game help introduce CD thinking without getting mired in technical discussions. Matthew Skelton is doing interesting work in this area, with a workshop to experience devops and a collaborative culture at http://web.experiencedevops.org/.

Everyone should read Maverick by Ricardo Semler.

Matthew also highlighted how we are good at spotting bad software architecture, but don't spot queues and bottlenecks in organisational culture. The sketch below would be recognised as having a bottleneck if it were a software system, but can we spot the bottleneck if it were an org chart?

Queues


At the end, there was consensus that it all comes down to having good people.

Thanks to everyone who attended the open space session. Most of all to the conference organisers for putting together a well organised, and very thought-provoking event.

April 19, 2013

Richard Feynman’s observations on the reliability of the Space Shuttle.

I've been reading a lot about Richard Feynman lately. I find his character and his unique approach to learning appealing.

In the book “What Do You Care What Other People Think”, he reminisces about his time on the Rogers Commission investigation into the Space Shuttle Challenger disaster. The book contains his appendix to the report. These are Feynman's personal observations: Appendix F – Personal observations on the reliability of the Shuttle

A few key points stood out to me that are relevant to how we build software.
  1. Becoming immune to small failures. NASA ignored minor errors, and modified their quality certification to account for these errors. NASA did this without investigating the systemic failures behind the errors.
  2. It didn’t fail in the past, therefore it will keep on working.
  3. Difference in culture. During the Apollo program, there was shared responsibility. If there was a problem with an astronaut’s suit, everyone was involved till the problem was solved. In the Space Shuttle era, someone else designed the engines, another contractor built the engines and someone else was responsible for installing the engines. They lost sight of the common goal. It was someone else’s problem.
  4. The Space Shuttle was built in a top-down manner (big design up front?). There was constant firefighting to keep it all working, and the engines were rebuilt each time, instead of being built bottom-up from parts that were known and proven to work.
  5. His observations do, however, appreciate the efforts of the software engineering team. Their testing was rigorous, and he wonders why the other teams were not as good.

March 4, 2013

What do tests tell us ?

Back in September 2012, I gave a talk at Devs in the Ditch on what tests tell us. The talk resonated with a post by Matt Wynne today on fixing slow builds.

Optimising a slow build? You are solving the wrong problem.

Slow builds and test suites with a large number of tests are an architecture smell, an indicator that there is something wrong with how you have built the application. I covered a few other things that tests, or the lack of them, tell us in my talk.

April 28, 2012

How we do deployments at 7digital.

At 7digital, deployments to a production environment are a non-event. On a given day there can be at least 10 releases to production during working hours, and on some days even more, especially on Thursdays before the 4pm cut-off, as we don't deploy to production after 4pm or on Fridays.

Deployments to our internal testing environments happen constantly, on every commit. I'm able to deploy a quick fix, or patch something in production, without having to make changes on a live server. We rarely roll back; instead we roll forward by fixing the issue. This is made possible by our investment in build and deployment tools. We attempt to treat these with the same care as our production code.

This post is about how it works.

A little bit about our stack.

Our services run on the .Net stack, mostly with SQL Server back ends. We use IIS 7 and IIS 6, load balanced behind HAProxy.

We use Teamcity to trigger Rake scripts that do our deployments. The Albacore gem is used for the majority of tasks. We use code derived from Dolphin Deploy to configure IIS.


In the beginning.

We used MSBuild for building our solutions and deploying software. However, this was very painful, and led me on a personal crusade to get rid of MSBuild for deployments. XML-based build frameworks limit what you can do to what is defined in the particular framework's vocabulary. A big pain was having to store configuration and code in the same MSBuild XML files. It wasn't possible to separate the two without writing custom tasks.

A build framework in a programming language allows you to be much more fluent and to write readable scripts. You have the power to use all the libraries at your disposal to write deployment code, instead of being limited to an XML schema definition. In addition to Ruby, we have a couple of projects using PowerShell and psake.


The current setup.

deployment

The diagram above shows the major parts of our deployment pipeline.

We keep the build and deployment code along with the project code, to maintain a self-contained package with minimal dependencies on anything else.

A project has a master rake file, named rakefile.rb in the root directory of the project. This rake file references all the other shared rake scripts and ruby libraries needed for build and deployment.
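As a rough sketch (the file names under build are hypothetical, not our actual scripts), a master rakefile.rb might look something like this:

# rakefile.rb - a hypothetical sketch of a project's master rake file
require 'albacore'

# Pull in the shared ruby libraries and rake scripts kept under build/
Dir.glob('build/lib/*.rb').each { |lib| require_relative lib }
Dir.glob('build/*.rake').each { |tasks| import tasks }

task :default => [:compile, :test]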

These libraries and scripts are kept in a sub-directory named build. A typical project structure looks like this:

root
    build
        conf
        lib
    src
        XX.Unit.Tests
        XX.Integration.Tests
        XX.Web

The conf directory contains the configuration settings for IIS, including the host headers, app pool settings and .net framework version settings.

The Albacore build gem has everything that is needed to build a .Net solution. We use it to compile our code on Teamcity and to run our tests.
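For illustration only, the compile and test tasks might look roughly like this with the Albacore of that era; the solution and assembly names are made up, and the exact task attributes vary between Albacore versions:

require 'albacore'

msbuild :compile do |msb|
  msb.solution = 'src/XX.Web.sln'            # hypothetical solution name
  msb.targets :clean, :build
  msb.properties :configuration => :release
end

nunit :test do |nunit|
  nunit.command = 'tools/nunit/nunit-console.exe'   # hypothetical path to the runner
  nunit.assemblies 'src/XX.Unit.Tests/bin/Release/XX.Unit.Tests.dll'
end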

When something is checked into VCS (git), Teamcity triggers a build and compiles the code. This build packages the deployment scripts and the web site package, which will be used for deployment. Teamcity stores these as artefacts, which allows us to reuse them without building again.

To deploy a website, a Teamcity build agent retrieves the necessary zipped packages and uncompresses them into the current working directory.

The build agent calls a rake task with the following parameters:

   rake deploy[environment, version_to_deploy, list_of_servers, build_number]

An example:

   rake deploy["live","1.2","server1;server2;server3","123"]

The environment parameter specifies which deployment settings to use. Deployment settings are stored in YAML files that the rake scripts read. A YAML file for IIS settings looks like this:

uat:
  site_name: xxx.dns.uat
  host_header: 80:xxx.dns.uat:*
               443:xxx.dns.uat:*
  dot_net_version: v4.0

live:
  site_name: xxx.dns.com
  host_header: 80:xxx.dns.com:*
               443:xxx.dns.com:*
  dot_net_version: v4.0


We can add a new environment, or change the settings for an existing one, by editing a configuration .yml file, without having to change the deployment scripts.
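As a sketch of how a rake task might pick those settings up (the file path is illustrative; the real file sits under build/conf):

require 'yaml'

# Load the IIS settings for the requested environment, e.g. "uat" or "live".
def iis_settings(environment)
  YAML.load_file('build/conf/iis.yml')[environment]
end

settings = iis_settings('uat')
puts settings['site_name']        # => xxx.dns.uat
puts settings['dot_net_version']  # => v4.0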

The version_to_deploy parameter loosely translates to a virtual directory; it is ignored for websites that deploy to the root. The list of servers is an arbitrary list of servers to deploy to, which allows us to deploy to a single server or to a cluster.

The rake deploy task calls two other rake tasks for each server in the list. The first copies all deployment scripts and the web package to the target server. The second triggers a remote shell command to do the actual installation.

In pseudo code:

  deploy
      foreach(server in servers)
          copy scripts and packages to server
          trigger remote installation on server

The actual installation process does not happen from the build agent, but on the target server. The build agent does not have the necessary network access and admin rights. Our servers expose only SSH.

The deployment sequence is controlled by chaining rake tasks. This allows us to run any of the tasks individually from the command line to do a manual deployment or to test.
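A rough sketch of that chaining, with made-up task and helper names (the real scripts differ; the scp/ssh commands here are assumptions to show the shape of it):

desc 'Deploy a version to a list of servers'
task :deploy, [:environment, :version, :servers, :build_number] do |t, args|
  args[:servers].split(';').each do |server|
    Rake::Task['copy_package'].execute(:server => server)
    Rake::Task['remote_install'].execute(:server => server, :environment => args[:environment])
  end
end

task :copy_package, [:server] do |t, args|
  # Hypothetical: copy the deployment scripts and web package to the target server
  sh "scp -r deploy_package #{args[:server]}:/tmp/deploy"
end

task :remote_install, [:server, :environment] do |t, args|
  # Hypothetical: trigger the installation rake task on the target server over SSH
  sh "ssh #{args[:server]} 'cd /tmp/deploy && rake install[#{args[:environment]}]'"
end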

The remote installation task copies all the web site binaries to the correct locations under IIS, and configures IIS. Application pools are stopped while this happens, and the virtual directory and, if needed, the web site are rebuilt. The application pools are restarted afterwards.

The deployment repeats the process on the next server in the list.


The future.

What we have now helps us a lot, and has allowed us to scale up to this point. However, to grow even more, there are a few things that hold us back.

For example, a lot of infrastructure detail creeps into our configuration and scripts and is stored in source control, which is mostly used by devs. This means that when our operations folk make a change to the infrastructure, the devs have to change our configuration settings to reflect it. I would like all configuration settings to be stored somewhere central, with the scripts calling out to a service to get the settings for a particular environment and application. This service would be maintained by devops, and would be synchronised with changes made to the infrastructure.

The same can be done for the list of servers. Instead of a developer having to know which servers comprise an environment, the script could ask the same service for a list of servers in a given environment. This would allow us to scale transparently, by adding a new server to the list and doing a fresh deploy.


Summary.

I've tried to capture an overview of how we deploy our software at 7digital. There is a lot of detail I haven't gone into, especially the nitty-gritty of setting up IIS host headers, ports and app pool settings. A build and deployment framework is something we put in place from day one of any new project. We make sure that we have a skeleton application deployed all the way to production before any new code is written.

Feel free to get in touch if you have any specific questions.

Resources:

http://codebetter.com/benhall/2010/10/22/dolphin-deploy-deploying-asp-net-applications-using-ironruby/

http://albacorebuild.net/

February 17, 2012

Getting started with web applications on Mono


I've started to explore Mono, with a view to moving some of our web applications to Linux. I used MonoDevelop on OSX to spike a simple HttpHandler that returns a response. I was more interested in how the hosting and deployment story worked with Mono.

This is a little list of things I discovered as I went along.

http://www.mono-project.com/ASP.NET has a list of the hosting options available. I went with the Nginx option. Mono comes with xsp, which is useful for local testing.

Running a simple web application

To run xsp: /usr/bin/xsp --port 9090 --root <path to your application>, and the application will be available on http://localhost:9090


To install Nginx on OSX, get Homebrew, and then simply sudo brew install nginx

Follow the instructions here http://www.mono-project.com/FastCGI_Nginx to configure Nginx to work with Mono's FastCGI server.

On OSX, the Nginx config can be found at /usr/local/etc/nginx/nginx.conf

This is the configuration I tried for my testing,

In /usr/local/etc/nginx/nginx.conf

server{
   listen 80;
   server_name localhost;
   access_log /var/log/nginx/localhost_mono_access.log;
   location / {
        root /Users/hibri/Projects/WebApp/;
        index default.aspx index.html;
        fastcgi_index default.aspx;
        fastcgi_pass 127.0.0.1:9000;
        include /usr/local/etc/nginx/fastcgi_params;
   }
}

Add the following lines to /usr/local/etc/nginx/fastcgi_params

 fastcgi_param  PATH_INFO          "";
 fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;


Start Nginx.

Start the Mono FastCGI server:

fastcgi-mono-server2 /applications=localhost:/:/Users/hibri/Projects/WebApp/ /socket=tcp:127.0.0.1:9000

And the application is available on http://localhost

Web Frameworks

We use OpenRasta for the services I want to run on Linux. OR didn’t work out of the box. This is something I’ll be exploring in the next few days.

I tried ServiceStack too, and was able to get one of our projects (https://github.com/gregsochanik/basic-servicestack-catalogue) working on Mono as is. Nancy is next on the list.

June 16, 2011

SPA 2011 Roundup

A summary of my SPA (Software Practice Advancement) Conference experience.

Node.js

The session on node.js on Sunday was my first serious introduction to server-side js and node.js.

Start by downloading the source at http://nodejs.org/#download, extract the source, then run ./configure and make install in the source directory. It takes a few minutes to build, and works painlessly on OSX and Linux. If you are on OSX, you can install it via brew:

http://shapeshed.com/journal/setting-up-nodejs-and-npm-on-mac-osx/

The documentation is at http://nodejs.org/docs/v0.4.8/api/

Node.js is a good way to get into event driven non-blocking programming. It’s easy to do this when you think about doing things (sending responses, rendering) only when things happen.

For example, when data arrives on a socket the server is listening on, an event is triggered, and code is executed in response to that event, instead of polling and waiting for stuff to happen, which can be very inefficient.

This got me thinking about tiny programs running in a system and only doing things as a result of something being triggered. This could lead to us writing only the code that the system needs. Code that is not used by the system (i.e. not triggered) is culled.

Treating JavaScript as a programming language.

Going on with the js theme, the guys from Caplin Systems showed how to build applications with js while still testing the full stack. We were shown how to use Eclipse and JSTestDriver, and were taken through building the full application stack using domain and view models in js, while using a Model-View-ViewModel pattern with knockout.js to bind the domain to client HTML.

Master of Puppets

I had mixed feelings about attending this session, but in the end it was worth it. Puppet is an open source platform for managing systems, similar to Chef. Puppet and Chef use recipes to build and configure machines. It seems to work smoothly with Ubuntu, using apt-get to install and configure the software as specified in a recipe. Still no good Windows support though, which is going to make it hard to use at work.

It is also possible to use Puppet to control and build virtual machines using Vagrant. There is also a VMware API and a Ruby gem for it. For further reading on this, please follow the links below.

http://www.jedi.be/blog/2009/11/17/controlling-virtual-machines-with-an-API/

http://rubyvmware.rubyforge.org/

Non-Technical Sessions

I enjoyed the non-techy sessions very much. To start it off, there was Rachel Davies' session on building trust within teams. The slides from this are now up at http://www.agilexp.com/presentations/SPA-ImprovingTrustInTeams.pdf

Benjamin Mitchell's session on double loop learning was insightful. It made me think about how much my perception of things doesn't necessarily reflect reality. It is better to seek knowledge than to take actions based on assumptions, and it became clear how easily we can fall into this trap. There is more reading on double loop learning and the work of Chris Argyris here: http://bit.ly/Argyris

Developer anarchy at Forward clearly illustrated that to go faster, you need great devs and you need to ditch technology that slows down feedback loops. It's not just about building feedback loops; how fast you can react to those loops is what matters. At Internet scale, we would have to respond in seconds or minutes, and at worst in hours. Definitely not in days, sprints or months. In the conversation afterwards, I learnt that they spent about 6 months rebuilding the tools and infrastructure that let them deliver at the speed they do now.

Comic Communication and Collaboration completed the SPA experience with much hilarity and fun. I think I could try my hand at xkcd-style comics. More importantly, the insight learned was: communicate by talking with peers, or communicate by producing something (i.e. readable code, working software). If you have to communicate via email, or worse through a third party (i.e. a project manager), don't bother. It's not as effective as you think.

All in all, a very well organised conference, including the invited rants. Looking forward to next year.

Many thanks to those who organised it.

June 11, 2011

Mocking HTTP Servers

The problem

There are tests (mostly what we call acceptance tests). The system under test (SUT) works with a couple of web services to do its work. The problem I'm faced with is that when I write these tests, the external web services have to be arranged with data for the test, or the test has to rely on existing data. Writing these tests is extremely painful. Knowledge of the magic existing data is required, and in the end what we are really writing are integration tests. But we don't want integration tests.

At 7digital, we are exposing more of our domain via HTTP APIs, in a “NotSOA” manner. To test each service by itself it becomes necessary to mock the dependencies.

Solutions.

There are a couple of solutions to this.

Set up a stub HTTP web service somewhere, and let it return canned responses. It behaves like the real web service, but only returns responses that have been arranged. The disadvantage of this approach is that I have to know what canned responses have already been set up.

To change the response for a particular test I have to make changes to the stub server and deploy it, as it is a separate application. It takes the focus away from writing the test I’m concerned with.

Another way is to insert some sort of “switch” in production code that will return canned responses when under test. I don't like this approach because it requires changing production code just for tests.

My solution.

What I want to do is something similar to setting up mocks/stubs in unit tests, but with an actual HTTP server: set up the stubbed responses in the test code itself, without making any change to production code other than a configuration change.

So this is what I came up with:

[Test]
public void SUT_should_return_stubbed_response() {
    IStubHttp stubHttp = HttpMockRepository
        .At("http://localhost:8080/someapp");

    const string expected = "<xml><response>Hello World</response></xml>";
    stubHttp.Stub(x => x.Get("/someendpoint"))
        .Return(expected)
        .OK();

    string result = new SystemUnderTest().GetData();

    Assert.That(result, Is.EqualTo(expected));
}

HttpMockRepository.At creates an HTTP server listening on port 8080, behaving as if it is processing requests under the /someapp path. This is the web service that the SUT will get its data from.

Using the object returned, it is possible to set up stubbed responses using a fluent syntax. The stub server can return text and files. I've posted a few more examples on github: http://github.com/hibri/HttpMock/blob/master/src/HttpMock.Integration.Tests/HttpEndPointTests.cs


Kayak.

I'm using Kayak, a lightweight, asynchronous HTTP server written in C#. Kayak can accept request processing delegates and post them to the HTTP server listening on the given socket. This allows me to add stub responses at runtime.

Current status.

This is very much a work in progress. HTTP GET works, and there is support for stubbing content types and HTTP return codes. I'll be able to add to this while changing a very large test suite to not rely on real web services. I've created a repository on github at http://github.com/hibri/HttpMock

There are no unit tests yet, but I'll be adding them soon, as I wanted to prove the concept first.

Describing this as mocking is not entirely correct, but I couldn’t find a term to describe the concept. It is possible to do the same in Ruby using Sinatra.
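For example, a minimal Sinatra equivalent of the stub above might look like this (the endpoint is whatever the SUT is configured to call; 4567 is just Sinatra's default port):

require 'sinatra'

# Canned response for the endpoint the SUT calls.
# Point the SUT at http://localhost:4567/someendpoint for the test.
get '/someendpoint' do
  content_type 'application/xml'
  '<xml><response>Hello World</response></xml>'
end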

May 28, 2011

On Learning Objective-C

Learning Objective-C has been an interesting experience, and this is how I went about it.
My motivation in learning Obj-C was, most of all, to add another language to my toolkit. I wanted to get behind the mysteries of developing for iOS.

I found a fairly good set of coursework to get started at http://courses.csail.mit.edu/iphonedev/ . This is a very basic introductory course, and the set of presentations guides you through developing a complete iPhone application. Before this I had no clue how to use XCode. This helped me grasp the basic language concepts. Going through the whole set is highly recommended.

There is a very handy Obj-C tutorial at http://cocoadevcentral.com/d/learn_objectivec/

Setting up tests was frustrating in XCode 3. Although XCode 4 has improved on this, it is nowhere near Eclipse or Visual Studio. Skip the built-in test framework (STAssert) in favour of OCHamcrest and you'll be in familiar territory. There was a bit of hair-pulling in figuring out how to get XCode 4 to use it.

Now that I've figured out XCode 4, I'm going through the iOS development videos at http://developer.apple.com/videos/iphone/ .


May 10, 2011

Why I don’t like web service wrappers

Martin Fowler’s post http://martinfowler.com/bliki/TolerantReader.html mirrors my thoughts on consuming web services.

What is a web service wrapper?

A wrapper for a web service is a library that helps you deal with said service in the programming language of your choice. It hides the details of the web service, and saves you the trouble of having to know how to parse XML or JSON. The wrapper gives you first class objects to work with.

Many web service providers provide a wrapper for their services in most programming languages.

Why I don’t like wrappers.

I strongly believe that web services should be simple to use. If you expose a web service via HTTP, your consumers should be able to use any HTTP client to consume the service.

You should be able to use a web service by simply typing the URL for the web service method into the web browser's address bar and seeing the result in the browser itself. You should be able to use a command line tool such as curl to call the web service. Using a web service should not be more complicated than this. If you were hardcore, even telnet would suffice.

To consume the service in code, the bare minimum a developer should need is a decent HTTP client library and a standard XML/JSON parser. Even a decent string library should suffice to make sense of the HTTP responses. These are pretty much available out of the box in the frameworks that ship with the major programming languages. Of course there are situations where you'll need more, but that should be the exception and not the norm.
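To illustrate, consuming an HTTP API with nothing but the Ruby standard library might look like this; the URL and response fields are invented for the example:

require 'net/http'
require 'json'

# A plain HTTP GET against a hypothetical endpoint; no wrapper library involved.
uri = URI('http://api.example.com/artists/123')
response = Net::HTTP.get_response(uri)

artist = JSON.parse(response.body)
puts artist['name']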

From the point of view of a web service provider, this simplicity increases adoption of your web service. Consumers don't have to wait for you to publish a new version of the wrapper library in order to start using a new service endpoint. Maintenance of the wrapper library is a non-issue, as you can focus on fixing issues with the service itself, not the wrapper library.

Avoid using wrappers internally.

When building a web service, avoid the temptation to use wrappers in your acceptance and integration tests. Strongly typed wrappers are a bad idea. I've seen this first hand when writing tests while building the 7digital API. Don't even parse responses into strongly typed objects. I've forbidden the use of wrappers and strongly typed objects for testing the 7digital API within my team.

The reason for this is, as a provider, you have to use the service as a consumer would. Wrappers hide the complexity of your own service, and you won’t know how complex the service has become.  When you work with bare HTTP response strings, you will see potential usability issues that consumers will face.

Publish sample code, but not wrappers.

If you are providing a web service, my recommendation is to publish sample code, and not wrappers around your service. Show developers how to consume the service in their favorite programming languages. A good idea is to give them tests as Martin Fowler recommends. The tests can serve as sample code. They can run those tests against your service and see where the problem lies.

Thoughts

In my experience, using a strongly typed language such as C# for this has been a bad idea. Dynamic languages like Ruby can be used to write more tolerant wrappers, because with Ruby you can evaluate the API responses at run time rather than having to use an object that requires the response to be in a certain format.
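A sketch of that tolerant, run-time approach (the field names are invented): read only the values this consumer needs and ignore everything else in the response, rather than binding it to a fixed object.

require 'json'

# Tolerant reading: pick out only the fields we care about.
# Unknown or newly added fields in the response are simply ignored,
# so the service can evolve without breaking this code.
def track_summary(response_body)
  data = JSON.parse(response_body)
  { :title => data['title'], :artist => data['artist'] }
end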
