Continuous Everything

Earlier this week, a really smart architect and I were evaluating various methods for managing software code changes, bug fixes, releases and major features. We agreed on the primary direction, a popular one in nimble companies.

  1. Have a primary “trunk” or “master” branch;
  2. Any commits to “master” automatically get built, tested and made ready for production (and possibly deployed);
  3. Any changes occur on “feature branches”, temporary parallel streams of development that eventually – hopefully sooner rather than later – merge into “master”, and from thence into production.

However, my colleague raised a more radical possibility: commit everything directly onto “master”.

At first, I was somewhat surprised. One mistake, one error, one complication, and all changes to production are blocked!

This may be acceptable in an old-school, “deploy every six months” world, or even in a slower but somewhat better “deploy every two to four weeks” one. But Internet-speed companies, especially SaaS, should be deploying every day, or even multiple times a day.

If a customer finds a bug in my cloud service, and we find a way to fix it in 2 hours, it should be fixed in production in… 2 hours! It is unacceptable for that fix to wait because some other feature or release is in process.

That, indeed, is the very rationale behind the “feature branch”. If you need to do something longer, work on it on the side with your team, and then merge it into the primary codebase later.

And yet, the master of Continuous Delivery, Jez Humble, advocates precisely this everything-on-master workflow.

In a Twitter discussion, Jez pointed me to this blog post, where he describes the process of “branching by abstraction”, i.e. how to do major changes without blocking everyone else.


I won’t rehash the entire post here; for those in the business of building, delivering, operating or selling software, it is a very worthwhile read. The gist of it is:

  1. Any major product, no matter how big, can be broken down into smaller and more manageable parts.
  2. Any major change, no matter how complex, can be “fenced off”, isolated or “abstracted”, so that the changes can go on in your main code without affecting anyone.
  3. Any change can be enabled/disabled with “feature flags” so that they can go into live systems without affecting anyone until you are ready.

Solid unraveling + clean abstraction + responsible enablement = ability to work right on the mainline without affecting anyone else.
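To make the recipe concrete, here is a minimal sketch of steps 2 and 3, branching by abstraction plus a feature flag. All names here (the flag, the billing functions) are my own illustration, not from Jez’s post: the old and new implementations live side by side on master behind one abstraction seam, and a flag decides which path runs, so the rewrite merges continuously without affecting anyone until you are ready.

```python
# Illustrative branch-by-abstraction sketch; names are hypothetical.
# In real systems the flag would come from config or a flag service,
# not a module-level dict.

FEATURE_FLAGS = {"new_billing_engine": False}  # flipped per environment

def charge_old(amount_cents: int) -> str:
    # Existing, battle-tested implementation.
    return f"charged {amount_cents} via legacy gateway"

def charge_new(amount_cents: int) -> str:
    # In-progress rewrite, merged to master but dormant.
    return f"charged {amount_cents} via new gateway"

def charge(amount_cents: int) -> str:
    """Abstraction seam: callers never know which path is live."""
    if FEATURE_FLAGS["new_billing_engine"]:
        return charge_new(amount_cents)
    return charge_old(amount_cents)
```

Because every caller goes through `charge()`, flipping the flag in one environment (say, staging) exercises the new path while production keeps the old one, and rolling back is a flag flip rather than a revert.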

While trying to understand why Jez advocates so strongly for it, I came to a realization. The same driving force behind Continuous Delivery (CD) is also behind Continuous Merge (CM) or, if you prefer, Continuous Commit (CC).

The very reason why continuous deployment reduces risk, despite many more deployments to live running systems in the middle of the business day, is because it breaks those deployments down into tiny manageable chunks.

Smaller chunks = exponentially smaller risk.

As I have written before, combining 3 changes into a single deployment does not create three times the risk, it creates at least 3² = 9 times the risk!

  1. It is nearly impossible to know how the various changes will interact with each other.
  2. It takes longer to recognize that the post-change system is misbehaving.
  3. It takes much longer to discover which part or parts, alone or in combination, are causing the misbehaviour.

On the other hand, when a deployment is a single change, you immediately know if that change is misbehaving (or causing other parts to misbehave), and you immediately know what part to address to fix it.
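One way to make that combinatorics concrete (my own back-of-envelope illustration, not a formal model): when a deployment misbehaves, any non-empty subset of its changes, alone or in combination, could be the culprit, so the search space is 2ⁿ − 1 for n changes.

```python
from itertools import combinations

def candidate_culprits(n_changes: int) -> int:
    """Count the non-empty subsets of changes that could, alone or
    in combination, be causing a post-deployment misbehaviour."""
    return sum(
        len(list(combinations(range(n_changes), k)))
        for k in range(1, n_changes + 1)
    )  # equivalently 2**n_changes - 1

# One change deployed alone: 1 candidate culprit.
# Three changes deployed together: 7 candidates to investigate.
print(candidate_culprits(1), candidate_culprits(3))
```

A single-change deployment leaves exactly one suspect; a three-change deployment leaves seven, which is why diagnosis time grows so much faster than deployment size.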

In an article over a year ago, I quoted another colleague who coined the term “release spiral of death” for companies that are shaken by a painful release and so wait longer between releases, thus increasing risk, leading to more painful deployments, leading to still longer waits, spiraling out of control.

The same risk-reducing, speed-inducing idea applies to the pre-deployment software changes themselves: small, rapid changes are far easier to manage and reason about, and therefore far less risky, even cumulatively, than one larger change, even with “all hands on deck”.

Merges between feature branches and master are painful. They take time and effort, often among people who finished working on the relevant areas days, weeks or even months ago.

By continuously committing or merging into master on at least a daily basis, preferably more often, CC/CM gives you smaller changes to manage and a much easier time addressing smaller issues when they arise.

Summary

The right development, testing and deployment processes can:

  • Reduce your risk of service disruption;
  • Lower the impact of that disruption and time to repair;
  • Diminish the stress of deploys;
  • Increase productivity of your engineering, operations and support staff;
  • Improve customer satisfaction;
  • Raise your top and bottom lines.

How quickly do you iterate? How quickly would you like to? Ask us to evaluate your current world and bring you to a new one.

About Avi Deitcher

Avi Deitcher is a technology business consultant who lives to dramatically improve fast-moving and fast-growing companies. He writes regularly on this blog, and can be reached via Facebook, Twitter and avi@atomicinc.com.