DevOps Metric: Mean time to recovery (MTTR) Definition and reasoning

Important to: VP Operations, CTO
Definition: When a production failure occurs, how long does it take to recover from the issue?
How to measure: This differs between systems. A common approach is to average production downtime over the last ten outages.
 

Expected outcome: MTTR should become lower and lower as DevOps maturity grows.
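To make the measurement concrete, here is a minimal sketch of how MTTR could be computed from incident records; the record format and timestamps are hypothetical, not taken from any particular tool:

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (outage started, service recovered), oldest first.
incidents = [
    (datetime(2023, 5, 2, 14, 0),  datetime(2023, 5, 2, 14, 25)),
    (datetime(2023, 6, 11, 9, 30), datetime(2023, 6, 11, 9, 42)),
    (datetime(2023, 7, 20, 22, 5), datetime(2023, 7, 20, 23, 50)),
]

def mttr(incidents, window=10):
    """Average recovery time over the last `window` production incidents."""
    recent = incidents[-window:]
    total = sum((recovered - started for started, recovered in recent), timedelta())
    return total / len(recent)

print("MTTR over the last incidents:", mttr(incidents))
```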

MTTR vs Mean time to Failure
To understand MTTR, we have to first understand its evil brother: MTTF or "Mean time to failure" which is used by many organizations' operational IT departments today.

MTTF means "How much time passes between failures of my system in production?".

Why is MTTR more valuable?

There are two arguments in favor of giving MTTR a higher priority than MTTF (we don't want to ignore how often we fail, but it's not as important as MTTR).

Perception.
If Amazon.com were down once every three years, but it took them a whole day to recover, consumers wouldn't care that the issue hadn't happened for three years. All anyone would talk about is the long recovery time. But if Amazon.com were down three times a day for less than one second each time, it would barely be noticeable.

Wrong Incentives.
Let's consider the incentive MTTF creates for an operations department: the less often failures happen in the first place, the more stable your system is, and the bigger the bonus in your paycheck at the end of the year.

What's the best way to keep a system stable? Don't touch it!

Avoid any changes to the system, release as little as possible, and do it in a very controlled, waterfall-ish manner, so that releasing is so painful that only someone who really, really wants to release will go through all the trouble of doing so.

Sound familiar?

This "Stability above all else" behavior goes exactly against the common theme of DevOps: release continuously and seize opportunities as soon as you can.

The main culprit for breeding this anti-agile behavior is a systemic influence problem: what we measure influences people into the very behavior that hurts the organization. (You can read more about this in my book "Elastic Leadership", in the chapter about "Influence Forces"; there is also a blog post that discusses them in more detail.)

Developers can't rest on their laurels and claim that operations are the only ones to blame for slowing down the continuous delivery train. Developers have their own version of "stability above all else" behavior, which can often be seen in their reluctance to merge their changes into the main branch of source control (where the build pipelines get their main input).

Ask a developer if they'd like to code directly on the main branch, and many in enterprise situations will tell you they'd be afraid to do so for fear of "breaking the build". Developers are trying to keep the main branch as "stable" as possible so that the version going off to release (itself a long and arduous process, as we saw before) has no reason to come back with quality issues.

"Breaking the build" is the developer's version of "mean time to failure", and again, here the incentives from management are usually the culprit. If middle managers tell their developers it's wrong to break a build, then they are driving exactly the same fear as operations have. The realization that "builds are made to be broken" is a bit tough to swallow for developers who fear that they will hold up then entire pipeline and all other teams that depend on them.

Again, the same thinking applies: failure is going to happen, so focus on the recovery aspect: how long does it take to create a fix for something that stops the build? If you have automated unit tests, acceptance tests, and environments, and you're doing test-driven development, fixing an issue that stops the pipeline can usually be a matter of minutes: code the fix along with tests, verify you didn't break anything, and check it into the main branch.

Both operations and development share the same fear: don't rock the boat. How does that fit in with seizing opportunities as quickly as possible? How does it support making "mean time to change" as fast as possible?


My answer is that measuring and rewarding MTTF above all else absolutely does not support a more agile organization. The fear of "breaking the build" and "keeping the system stable" is one of the reasons many organizations fail at adopting agile processes. They try to force "release often" down the throat of an organization that is measured and rewarded when everything stays stable, instead of being measured and rewarded on how often it can change and how fast it can recover when a failure occurs.


MTTR is a very interesting case of DevOps culture taking something well established and turning it on its head. If DevOps is about build, measure, learn, and fast feedback cycles, then it should become an undeniable truth that "whatever can go wrong, will go wrong". If you embrace that mantra, you can start measuring "mean time to recovery" instead.

The MTTR incentive can drive the following behaviors:

  • Build resilience into the operations system as well as the code.
  • Build faster feedback mechanisms into the pipeline so that a fix can go through the pipeline faster
  • Create a system and code architecture that minimizes dependencies between teams, systems, and products, so failures do not propagate easily and deployment can be faster and partial
  • Add and start using better logging and monitoring systems
  • Create a pipeline that drives a deployment of an application fix into production as quickly as possible.
  • Make it just as easy and fast to deploy a fix as it is to roll back a version

With mean time to recovery, you're incentivizing people based on how much they contribute to the "concept to cash" value stream, or the "mean time to change" number.


DevOps Metric Definition: Mean time to change (MTTC) (vs change lead time)

Also called "time from concept to cash".

Important to: Everyone, especially the CEO, CIO and CTO.

Definition
How long does it take an average new feature, idea, fix, or any other kind of change to get into a paying customer's hands in production, from the moment of its inception in someone's mind? MTTC measures the time from the moment you see an opportunity until you can actually utilize it. The faster MTTC is, the faster you can react to market changes.

How to measure:
We start counting from the moment of the change's inception in someone's head (imagine a marketing person coming up with an idea to compete with a rival's product, or a bug being reported by a customer).

One way to capture and measure mean time to change is by doing a value stream mapping exercise, as we will touch on in a later chapter of this book.

Expected Outcome:  MTTC should become shorter and shorter as DevOps maturity grows. 


Common Misunderstandings: 
MTTC is not the same as the often cited "Change lead time" as proposed in multiple online publications.

Change lead time (at least as far as I could tell) only counts the time from the start of development of a feature, when real coding begins.

MTTC measures everything that leads up to the coding as well, which might include design reviews, change committees, budgeting meetings, resource scheduling, and everything else that stands in the way of an idea as it makes its way to the development team and all the way through to production and the customer.
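As a minimal sketch of the difference, assume you capture three hypothetical milestones per change (idea logged, coding started, running in production); averaging these durations over many changes gives the "mean":

```python
from datetime import datetime

# Hypothetical milestones for a single change, from idea to production.
idea_logged   = datetime(2023, 1, 10)  # marketing idea or customer bug report captured
coding_starts = datetime(2023, 3, 1)   # development actually begins
in_production = datetime(2023, 3, 20)  # the change reaches paying customers

change_lead_time = in_production - coding_starts   # what "change lead time" usually counts
time_to_change   = in_production - idea_logged     # MTTC counts everything before coding too

print("Change lead time:", change_lead_time.days, "days")
print("Time to change:  ", time_to_change.days, "days")
```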

From a CEO, CIO, and CTO point of view, MTTC is one of the most important key metrics to capture. Unfortunately, many organizations today do not measure it.

In many companies I've worked with, MTTC was anywhere from one month to twelve months.


What is Enterprise DevOps?

To understand Enterprise DevOps, we have to first define what "regular" DevOps is. Here is my current definition of DevOps. Let's call it "DevOps 0.1"

What is DevOps? (Version 0.1)

DevOps is a culture that drives and enables the following behaviors:

  • Treating Infrastructure and code as the same thing: Software
  • That Software is built and deployed to achieve fast feedback cycles
  • Those feedback cycles are achieved through continuous delivery pipelines
  • Those pipelines contain the automated coded policy of the organization 
  • Those pipelines are as human free as possible
  • Humans share knowledge and work creatively, with their work centered around pipelines 
  • Knowledge silos are broken in favor of supporting a continuous pipeline flow

DevOps is the Agile operating model of modern IT, and is centered around the idea of a continuous delivery pipeline as the main building block of an Agile organization.

DevOps Metrics

DevOps is based on lean development concepts as well as the lean startup movement's ideas. As such, the "build, measure, learn" mantra of the scientific method is very relevant to DevOps.

That's why metrics play a very important role in a DevOps culture. They help determine gradual success over time in DevOps transformations. Here are the main metrics being used in DevOps based organizations today:

  1. Mean time to change

  2. Mean time to recovery

  3. Frequency of Release

  4. Defect Rates


What is Enterprise DevOps?

Enterprise DevOps is the application of DevOps values in an environment that contains any of the following:

  • Many interdependent and related systems, subsystems, software, and teams that rely on each other
  • Monolithic systems and/or static software/hardware environments
  • Lengthy Approval Gates or change control processes
  • Security, financial, medical, or other compliance requirements
  • Lengthy waterfall-based processes spanning multiple teams and stakeholders
  • Workflows that are mostly manual and error prone across multiple teams or stakeholders

The goal is to create a pipeline-centered organization that is able to seize opportunities in the marketplace quickly.



BBBCTC - Branching, Binaries, Builds, Coding and Testing Culture

As part of a major drive to push for a faster "time to main", you might need to change things in the following five areas (which end up affecting how long people take to get their code checked into "trunk"):

 

Branching Culture

Branches mean merges, and merges take time. Branching culture in many places also means "only merge when you feel you won't break things for anyone else", which effectively means many places use branches to "promote" code to the next stage of stability. This in turn means people might spend a lot of time "stabilizing" code for others before moving it to the next branch (really, system testing the code locally).

Fewer branches, less time to main!

Binaries Sharing Culture

When working with multiple teams that depend on each other's outputs, moving binaries between the teams becomes a time consuming challenge. I've seen teams use shared folders, emails and even Sharepoint pages to share their binaries across development groups.

Find a simple way for teams to share and consume binaries easily with each other, and you've saved a world of hurt and time for everyone involved (for example: dependency stash).
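Here is a minimal sketch of that idea: a single, predictable, versioned drop location that every team publishes to and consumes from. The share path and folder layout are hypothetical:

```python
import shutil
from pathlib import Path

# Hypothetical shared drop location that every team can read from and write to.
SHARED_DROP = Path(r"\\buildshare\binaries")

def publish(team: str, version: str, output_dir: Path) -> Path:
    """Copy a team's build output into a predictable, versioned location."""
    target = SHARED_DROP / team / version
    shutil.copytree(output_dir, target, dirs_exist_ok=True)
    return target

def consume(team: str, version: str, workspace: Path) -> Path:
    """Pull another team's published binaries into the local workspace."""
    source = SHARED_DROP / team / version
    local = workspace / "deps" / team
    shutil.copytree(source, local, dirs_exist_ok=True)
    return local
```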

 

Builds

If you're going to reduce the number of branches, you will probably need gated builds to make sure people don't check in code that breaks compilation on the main branch (trunk). This would be the first level of a build (a gated build per module). The second level would possibly be an hourly (or faster) integration build that also runs full system smoke tests and other types of tests.

Also, if an integrated build does break, you will need to make fixing the red build a high priority for those who broke it, or those who have to adapt, before they move on to new features.

One more: you will probably need a cross-group build triage team that notices red builds as they occur and has the ability to find out what is causing a red build, and where to redirect the request-to-fix.

Testing Culture

You will need to automate much of the manual testing many groups rely on, so it can run in a gated build, and as the smoke and system tests at the integrated build level.

Coding culture

Developers will need to learn and apply feature toggles (code switches), or learn how to truly work incrementally, to support new features on the main trunk that take longer than a day or so to build, without breaking everyone's code.

There are two types of breakages that can occur on the main trunk (assuming gated builds took care of compile and module level unit and smoke tests):

  • Unintended breakages: these can usually be fixed easily directly on the trunk by the group that broke them, or by another group that needs to adapt to the new code.
  • Intended breakages. These can be broken up into:
  • Short-lived intended breakages (for example, group A has a new API change; they check it into main and expect group B to get it from main, adapt to it with a simple code change, and check their stuff back into main. This should usually be about an hour's work).
  • Long-lived intended breakages: you're working on a feature for the next month or so. In that case you either work fully incrementally on that new feature, or you enable feature toggles (a.k.a. 'branch by abstraction').

Either way we are talking about teaching developers a new culture of coding.

Without this change in coding culture, developers will always be afraid of working directly on main, and you'll be back to private branches with everything they entail (a.k.a. 'long time to main').
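For illustration, here is a minimal feature toggle sketch: the half-finished code path lives on main but stays dark until the toggle is flipped. The toggle name and functions are hypothetical:

```python
import os

def legacy_checkout(cart):
    return f"legacy checkout of {len(cart)} items"

def new_checkout(cart):
    return f"new checkout of {len(cart)} items"  # still under construction, ships dark

# Toggle read from the environment here; a config file or toggle service works too.
FEATURE_TOGGLES = {
    "new_checkout_flow": os.getenv("NEW_CHECKOUT_FLOW", "off") == "on",
}

def checkout(cart):
    # The new code path lives on main, but nobody hits it until the toggle is flipped.
    if FEATURE_TOGGLES["new_checkout_flow"]:
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["book", "pen"]))
```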

 


Time to Main - Measuring Continuous Integration

A.k.a "Time to Trunk".

How do you measure your progress on the way to continuous integration (we're not even discussing delivery yet)? How do you know you are progressing toward the goal incrementally?

Assuming you're in an enterprise situation, showing a KPI over time can help drive a goal forward.

One of the KPIs I've used on a project was "Time to Main". 

Let's start with the basic situation we'd like to fix, assuming multiple teams with bad architecture that causes many dependencies between the teams. See the chart below:

In the chart above we have four teams and one main branch. Each team also has its own private branch (or branches) structure. A usual cycle of creating an application version is:

 - New NEO binaries are distributed to the other teams. HAMMER is adapted and then sent to the ORCA team. The ORCA team produces binaries for the DRAGON team, which then produces ORCA+HAMMER+DRAGON components for the ADAM team.

Finally, they all merge to the main branch, in specific order.

Obviously, not the best situation in the world. There are many things we can do to make things better, but first, we need a way to measure that we ARE making things better. 

One measure I like to use is "Time to Main". How long does it take for a FULL version of all components that work with each other to get from developers' hands to MAIN?

In our case the answer starts with:

1 day HAMMER work + 3 days ORCA work + 3 days DRAGON work + 5 days ADAM work = 12 days to Main.

But wait, there's more.

Many of the back-and-forth lines are actually branch merging actions. For large code bases, add a few hours per merge, which can add up to 2-3 days if not more. The total could be 14-16 days to Main, starting with team NEO publishing a new feature (in this case they are not even in the same source control, as they are in a different company).

Note:
We are assuming that the timings mentioned here are "friction time" only: time that is not used for code development, but the absolute minimum time developers need to feel confident enough to merge to MAIN. This might mean they run a bunch of manual or automated tests, debugging, installs - anything. It is time that will be spent whether or not major functionality is being introduced, because the integration with the other components has a cost.
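A minimal sketch of how "Time to Main" could be tracked, assuming you record for each change when it was finished on a private branch and when it reached MAIN (the timestamps are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical record per change: (finished on a private branch, merged into MAIN).
changes = [
    (datetime(2023, 4, 3, 10, 0), datetime(2023, 4, 17, 16, 0)),
    (datetime(2023, 4, 5, 9, 0),  datetime(2023, 4, 20, 11, 0)),
]

def time_to_main(changes):
    """Average time a change waits between 'done on a branch' and 'visible on MAIN'."""
    waits = [merged - done for done, merged in changes]
    return sum(waits, timedelta()) / len(waits)

print("Time to Main:", time_to_main(changes))
```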
 

There are many things we can choose to do here:

  • Ask all teams to work directly on main
  • Create a "Dependency cache" from the continuous build on "Main" so everyone can access the binaries.
  • Reduce manual testing with automation
  • Etc..

No matter what we do, we can always ask ourselves, "How does this affect time to main?" We can always measure that time, and see if we are actually helping the system, or just implementing local efficiencies while Time to Main remains the same, or even increases.

From the Theory of Constraints, we can borrow the term "constraint". Time to Main is our constraint on actually having a shippable product increment we can demo, install, or test at the system integration level. The more teams choose to use internal branches, the more time code has to wait until it sees MAIN. Now we have a number that tells us whether we are getting better or worse in our quest for continuous integration.


Pipeline Disintegration (Post Build Pipeline)

Other Names:

Post Build Hand off, Time Capsule, Strangler Application

Symptoms:

It is very complicated to add features to the current build process, because of current tooling deficiencies or process bureaucracy, or because the responsible team is reluctant to add those features for their own reasons.

Problem:

The current build process is blocking you from adding features required for the business to succeed, such as increasing feedback, adding more information, or adding more actions to ease manual work.

Continuing to bang on that specific door might be a Sisyphean task, and spending too much time on it would be very unproductive and ineffective.

Forces:

You'd like to get more feedback sources into the build, but that might interfere with the team's current processes, area of responsibility, or, in regulated environments, even traceability, documentation, and workload.

You and other stakeholders believe that adding those features will increase feedback, quality, or other benefits, but tacking new features onto the current build seems to be stepping on everyone's toes.

Solution:

Create a new pipeline that receives a Time Capsule from the current build process. This Time Capsule will contain the end artifacts from the build process (usually the "release" binaries and other support files, and possibly all the source code).

Now that the time capsule is in your possession, you can put it through a new, separate pipeline that adds the steps that you might care about. Examples might include:

  • Static code analysis
  • Running unit tests (if those couldn't be run in the original build)
  • Deployments

A more explicit example:

  1. Make sure the "old" build copies or "drops" all related artifacts into a shared folder that will be accessible by the new pipeline. This should happen automatically every time the old build runs, which will hopefully be once or more a day.
  2. Install an instance of a CI server of some sort (let's say it's TeamCity for the sake of argument), and create a new project for the new pipeline. This will hopefully be on a separate server, so as not to interfere with the old build's required hardware and software resources.
  3. The first action of the new pipeline will be, instead of pulling from source control, copying the time capsule binaries onto the local build agent invoked by the new CI tool during the run (see the sketch after this list).
  4. Next steps might then include running unit tests, static code analysis or anything else your heart desires, without interfering with the old build structure.
  5. Notify stakeholders if the new pipeline fails.
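Here is a minimal sketch of the time capsule pickup step referenced above; the paths are hypothetical, and the analysis and test commands are placeholders for whatever tools you actually use:

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical locations: the old build "drops" its artifacts here...
TIME_CAPSULE = Path(r"\\oldbuild\drops\latest")
# ...and the new pipeline's build agent works here.
WORKSPACE = Path(r"C:\agent\work\post-build-pipeline")

def run_post_build_pipeline():
    # Copy the time capsule instead of pulling from source control.
    if WORKSPACE.exists():
        shutil.rmtree(WORKSPACE)
    shutil.copytree(TIME_CAPSULE, WORKSPACE)

    # Later steps run against the copied artifacts; these commands are
    # placeholders for whatever analysis and test tools you actually use.
    subprocess.run(["run-static-analysis", str(WORKSPACE)], check=True)
    subprocess.run(["run-unit-tests", str(WORKSPACE / "tests")], check=True)

if __name__ == "__main__":
    run_post_build_pipeline()
```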

Split and Parallelize

Other Names:

  • Split Work

Symptoms:

  • The build is taking too long

Problem:

  • It could be that one or more tasks or build steps composed of many actions are taking too much time. For example, running 1000 regression tests (each taking 1-4 minutes) takes 24 hours, which slows down the feedback rate from the build.

Solution:

Split the step into several steps that run in parallel, each running a part of the task. Theoretically, the step will then take no longer than the slowest of the split parts.

The solution requires:

  • Multiple build agents, or workers, to be able to run things in parallel
  • Split parts that are somewhat similar in size, to gain a speed improvement
  • At least as many split parts as there are agents available to run them

Example

Say we have 1000 regression tests and one build step called "Run Regression Tests".

Step 1: Split the tests to runnable parts

We split the regression tests into separate runnable "ranges" or categories that can be executed separately from the command line. There are many ways to do this: split the tests into multiple assemblies with names ending in running numbers, put separate "categories" on different tests so you can tell the test runner to run only a specific category, and more.

It is important that the sizes of the "chunks" you split into are somewhat similar, or the build speed gains will be less optimal. In our case, we have 1000 tests and 10 agents the test chunks can run on, so we create 10 test categories named "chunk1", "chunk2", and so on.

An even better scheme would be if our tests could be split across 10 different logical ideas. Say we are testing our product in 10 different languages, with approximately 100 tests per language. It would be perfect to split them based on language and name each chunk after that language. Later on, this also makes things more readable at the build server level. If you can only find 2 or 3 logical "categories" for the tests to split on (say "db tests", "ui tests", and "perf tests"), you might want to go ahead and split each category into chunks to make things faster if you have more than 3 agents. For example, "ui-chunk1", "ui-chunk2", until you reach at least the number of agents you have.

Step 2: Create a parallel test run hierarchy in the CI server

If you're using TeamCity, you would now create the following: a new sub-project called "run regression tests", with 10 steps (assuming you have 10 build agents that can run them), one for each test chunk:

  • Run UI Tests Chunk 1 (A command line step calling the test runner command with the name of the category to be run)
  • Run UI Tests Chunk 2 (same command line, different category)
  • Run UI Tests Chunk 3
  • Run DB Tests Chunk 1
  • ...
  • Trigger all Regression Tests

Note the last build step. It is merely an empty build step that has a snapshot dependency on all the listed "chunk" steps. When triggered, it should (assuming you have multiple agents enabled and ready to run them) run all the chunks in parallel, which will finish as quickly as the longest-running chunk.
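To make the parallelism concrete, here is a minimal sketch that runs ten chunks concurrently; the test runner command is a placeholder, and in a real CI setup the "workers" would be build agents rather than local threads:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Chunk names matching the test categories created in step 1.
CHUNKS = [f"chunk{i}" for i in range(1, 11)]

def run_chunk(chunk: str) -> int:
    # Placeholder command: your real test runner, filtered by category.
    return subprocess.run(["run-tests", "--category", chunk]).returncode

# Ten workers stand in for ten build agents; the total time is roughly the
# duration of the slowest chunk rather than the sum of all chunks.
with ThreadPoolExecutor(max_workers=len(CHUNKS)) as pool:
    exit_codes = list(pool.map(run_chunk, CHUNKS))

print("all chunks passed" if all(code == 0 for code in exit_codes) else "some chunks failed")
```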


Irrelevant Build

Symptoms:

  • The team doesn't seem to care that one of the builds is failing
  • A build is red, but might not be failing. Only one person from a different team or role can tell whether it actually failed or not.

Problem:

One possible cause might be that the failing build is not really relevant to the current team, or that the build result is not understandable (see "Binary Result" in that case).

Solution:

Remove the failing build from the visualization screen visible to the whole team. By keeping the build there, the team will eventually stop caring about "seeing red" on the screen. The red color has to mean something.

Example:

In one of the projects I was involved with, there was a big screen up on the wall that showed the status of all the builds. Some of the builds were green, but a couple of builds related to QA regressions and UI testing were practically always red. Because there was a lack of good communication between the QA and dev teams, none of the developers really understood what was wrong with that build, or even cared.

What's more, the QA lead would later tell me that a red build in QA did not necessarily mean a failed build. Since UI tests were very fragile, a statistical analysis of sorts was used, so that if X out of a certain number of builds passed, it was considered a success.

The problem was that the developers were too used to seeing red, and when they saw red they were very used to thinking "not my problem". The first thing I did was remove the always failing builds from the dev wall, so that all the visualized builds would be relevant to the people watching them. Suddenly there was a green hue in the room as all the builds were passing.

Developers would go by and ask me in amusement, "What is that weird color?" and "What's wrong with the build? It's green!" Then, when builds failed due to dev issues, developers had more reason to care.

I would only show the QA builds again if they were more stable or if there was better cross team communication so that devs could help QA fix the builds. As long as there was nothing they could do about it, they should only see red when it pertains to work they can control.


Parallel Firehose


Other Names:

  • Build Environment Per Stakeholder

Symptoms:

You can't deploy a new version of the application to your test or staging environment because it's in use by other people in the organization such as the QA team, Product owners, or customers. So you can only deploy during the night time, or some other constraint.

Problem:

Developers are sharing the same environment (machines, physical or virtual) with other stakeholders.

Forces:

  • You want to do continuous delivery, without needing to wait for others.
  • Other stakeholders want to use or test the application without needing to stop, or be annoyed by your deployments making the application slower or crashing.

Solution:

Create a separate environment for your other stakeholders, one they can use without you interfering with it, and into which they can pull a new version at will.

Example:

Assuming you have the following environment:

  • A Build Machine that runs TeamCity or Jenkins
  • A Build Agent Machine (Build1) that runs a TeamCity agent, or a Jenkins Worker
  • A Test Machine (Test1) that BUILD1 deploys your application to, for demo purposes and for use by QA
  • A developer machine (DEV1) that keeps checking in code to the main branch

You want to do continuous testing by running unit and integration tests on check in, as well as smoke tests and acceptance tests by deploying the application automatically on every check in.

You can't, because QA is using the machine during work hours. You can only deploy and run those tests at night, and they might break whatever nightly tests the QA folks are running.

Here is one way to solve this:

Set up the environment with new machines so you end up with an environment like this:

  • A Build Machine (BUILDMASTER) that runs TeamCity or Jenkins
  • A Build Agent Machine (Build1) that runs a TeamCity agent, or a Jenkins Worker
  • A Build Agent Machine (Build2) that runs a TeamCity agent, or a Jenkins Worker
  • A Test Machine (Test1) that an agent deploys your application to for dev use
  • A Test Machine (Test2) that an agent deploys your application to for QA use
  • A developer machine (DEV1) that keeps checking in code to the main branch

Then, in your CI server, set up the following two workflows:

workflow 1: Fully automated Pipeline

  • Triggered on Check in to main branch
  • compile on a free agent
  • Run unit tests on a free agent
  • Run Integration tests on a free agent
  • Deploy to TEST1 on a free agent
  • Run acceptance tests only on TEST1

workflow 2: Pull Based QA Pipeline

  • Triggered by QA clicking on a button, or on a set schedule, or both
  • compile using a free agent
  • Run unit tests using a free agent
  • Run Integration tests using a free agent
  • Deploy to TEST2 using a free agent
  • run QA tests on TEST2

Summary

With this approach nobody is stepping on anyone's toes:

  • Developers can do continuous tests on TEST1 dozens of times a day (yes, some teams actually achieve this). It's so fast it's like "drinking from a fire hose" (hence the pattern name)
  • QA gets a nice, stable environment they can control and pull versions into.

You can repeat this process for new stakeholders. If you're using cloud services, it's even easier to setup such new environments.


Build Pattern: Deploy by Proxy - Added to the book

It's been a couple of months of other work, but there's finally new material in the book, and more coming soon.

 

  • First: I've now grouped the chapters under "parts" that start telling a story, organizing the patterns under the following parts: Introduction, Separation of Concerns, Productivity, Maintainability, Team Collaboration, Stakeholders, and Branching.

 

  • I've also added a new pattern under stakeholders called "Deploy by Proxy" about dealing with Security Folks who won't let you deploy to production.

 

  • Also made slight adjustments to "Fill in the Blanks" to make it clearer in relation to the separation of concerns topic.

 

  • On a related note, I've published a video course on Beautiful Builds, which contains this information and more that will be added to the book, plus demos with TeamCity. You can grab it at http://courses.osherove.com . It costs $25 until the end of July.

Use Version Aware Build Scripts

I am thinking of writing a small booklet about better builds. This blog post might end up as a part of it. Let me know if you’re interested by taking a look here.

While I do love to use continuous integration servers such as TeamCity, Jenkins, and more, I try to use them as much as possible as “dumb triggers” for my build actions. They do one thing and one thing only: trigger the correct build script with the correct resources (source version, artifacts from other builds, etc.), and nothing more.

I don’t add any custom build actions such as compilation, source checking, etc., because I believe the build actions themselves have a very strong connection to the correct version of the source.

For example, in an earlier version of the source you might need to compile a specific set of projects, while in future source versions new projects may have been added, and others removed or renamed. If you keep your build scripts in source control, versioned along with the source, they will always match the current source version.

So what do those processes trigger? A script that is inside source control, tied directly to the current project’s committed source version. It might be an XML file (ugh), a rake file, or a FinalBuilder script file (my favorite).

Then it becomes easier to revert to an older build version to recreate it, or to test differences between versions, or to build multiple different branches of the same applications in parallel for different clients.

See if you can take your current CI build and have it run on a version of the source from 6 months ago. How many changes would you have to make to get it to work? If your build is tied to the source version, things should feel much simpler.
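A minimal sketch of a version-aware build script that lives in source control; the project list and compile/test commands are hypothetical placeholders. The CI server's only job is to check out a version and run it:

```python
# build.py -- lives in source control next to the code it builds.
# The CI server only checks out a version and runs: python build.py ci
import subprocess
import sys
from pathlib import Path

ROOT = Path(__file__).resolve().parent

# The list of projects to compile belongs to THIS version of the source.
# Older and newer versions of the repo carry their own list in their own build.py.
PROJECTS = ["Core", "Web", "Tests"]

def compile_all():
    for project in PROJECTS:
        # Placeholder compile command; substitute whatever your toolchain uses.
        subprocess.run(["compile-project", str(ROOT / project)], check=True)

def run_tests():
    subprocess.run(["run-tests", str(ROOT / "Tests")], check=True)

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "ci"
    compile_all()
    if target == "ci":
        run_tests()
```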

Single Responsibility

To me it is also about the Single Responsibility Principle. A continuous integration server should do one thing well: manage builds, their resources, and their scheduling. It should not also need to worry about what is inside each build. In Uncle Bob’s newspaper metaphor, the builds would be like different sub-headings. Each does one thing, and the caller of those functions only calls them, without caring what they do.

 

Thinking about it the other way around: if you had your whole build script running inside one large set of TeamCity custom actions, all managed in the browser, and then you refactored that script into separate script files that are part of the source code, you’d be closer to what I am trying to do.

 

So when I see companies boasting about how many custom tools their CI server has, and how you can manage it all easily in the browser, I remind myself that the more I use those custom build features without the context of the current source version they are attached to, the more work I will have to do to change them every time I need to go back and forth between versions.

 Version Aware Artifact Paths

Following that line of thinking, I might argue that version-aware build script “cubes” in source control should also decide internally which artifacts the encompassing CI process that runs them will be exposed to and allowed to share with other builds. Today such things are managed solely at the CI tool level, but it would be good if the tools enabled artifact paths to be tied to the current version of the source.


Avoid XML Facing Build Tools

Having your build scripts in XML is one of the worst things you can do for maintainability and readability.

Treat your build scripts like you treat your source code (or how developers should treat their source code): with respect for readability and maintainability, not just for performance.

To that end, XML is a bad candidate. If you have more than 15 or 20 lines of XML in a build document:

  • It is hard to tell where things start, and what they depend on
  • It is hard to debug the build
  • It is hard to add or change things in the build logic
  • XML case sensitivity sucks
  • Creating, Adding and using  custom actions is a chore

To avoid XML, you can start using one of the many build tools out there that are either visual or support a domain-specific language for builds that is more readable, maintainable, and debuggable than XML:

  • Rake – Ruby Based DSL (Domain Specific Language) for builds, that is very robust and quite readable
  • FinalBuilder and VisualBuild are two visual build scripting tools that give great readability into your build script
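Rake and the visual tools above are what the text recommends; purely for illustration, here is the same idea sketched as plain Python functions, where dependencies are ordinary calls you can read and debug (the commands are placeholders):

```python
# A build described as plain functions: easy to read, debug, and extend.
import subprocess

def clean():
    subprocess.run(["remove-artifacts", "build/"], check=True)   # placeholder command

def compile_source():
    clean()
    subprocess.run(["compile-project", "src/"], check=True)      # placeholder command

def test():
    compile_source()
    subprocess.run(["run-tests", "tests/"], check=True)          # placeholder command

if __name__ == "__main__":
    test()  # dependencies are ordinary function calls you can step through in a debugger
```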

Start Products with an Empty Build and Deploy Pipeline

Builds can be nasty beasts when it’s finally time to organize and do them.

There are so many things you did not think about that it can take days or weeks to come up with a build for an existing system. To avoid the nastiness, start with an empty build on the first day of your project. Then grow it gradually.

Before you start your first feature on a product that you know needs to exist for a long time, start with creating the pipeline for deploying it.

 

  • Create an empty source project. Something that is a stand-in for the real project you are about to develop. If it is a web application, it could be a single page with “hello world” on it. This project will be what you throw through your empty pipeline to test it.
  • Create a simple CI build script that will live with the code. For now it might only compile your empty project. Make sure it is relative to the code location so you can run it anywhere (a minimal sketch of these scripts follows this list).
  • Create a simple “Deploy to production” build script that lives with your code. For now this will only copy your project or put some file somewhere on a production server.
  • Oh, and you should have a production server to deploy to, so you can deploy your product. That is step #1 of the first iteration!
  • Create a new project in your CI system (I use TeamCity, but go ahead and use Jenkins or anything else you desire). In that project, create a CI build configuration that triggers the CI build script, and a Deploy to Prod configuration that triggers the deployment build script you wrote.
  • Connect the CI server to your source code repository, and make it trigger the CI on commit.
  • Make the CI artifacts available to the Deployment configuration.
  • Make the Deployment Configuration trigger automatically on a successful CI build.
  • Run the whole thing and see a “hello world” in production
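As promised above, here is a minimal sketch of what those first scripts might look like on day one; the project name, output path, and compile command are hypothetical placeholders:

```python
# ci_build.py / deploy.py start out almost empty and grow with the product.
# Project name, paths, and commands below are hypothetical placeholders.
import shutil
import subprocess
from pathlib import Path

ROOT = Path(__file__).resolve().parent  # relative to the code, so it runs anywhere

def ci_build():
    # For now the "build" only compiles the empty hello-world project.
    subprocess.run(["compile-project", str(ROOT / "HelloWorldApp")], check=True)

def deploy_to_production():
    # For now "deploy" only copies the output to wherever the web server serves from.
    shutil.copytree(ROOT / "HelloWorldApp" / "output",
                    Path("/var/www/helloworld"), dirs_exist_ok=True)

if __name__ == "__main__":
    ci_build()
    deploy_to_production()
```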

Now you are ready to nit pick:

  • You might want to create a “deploy to test” configuration and script that triggers instead of production.
  • You might want to do the same with a staging environment

Now you have an empty build and deploy pipeline.

You can start writing real code, and as real code gets created, you only need to make small modifications to your build scripts or CI process variables to keep things working nice and smoothly.

Now the build grows and flourishes alongside the source code, instead of being an afterthought. And it is remarkably easier to handle, because the changes are much smaller.


Rolling Builds and the Plane of Confidence

When I check in code I have been working on, I feel:

1.    I want to wait for the build result
2.    I don’t want to waste time waiting for the build results, doing nothing
3.    But I fear doing something in the code until I’m sure I didn’t break anything

Supposedly, this is where a CI build configuration comes into play. CI builds are supposed to be as fast as possible, so we can get feedback about what we checked in as quickly as possible and get back to work.
But the tradeoff is that CI builds also do fewer things so they can be faster, leaving you with a sense of some risk even if the build passed.

This is why I like to have “Rolling Builds”. I like to have builds trigger each other in the Continuous Integration Server:


•    A Check-in triggers the CI build
•    A successful CI build triggers a nightly build.
•    A successful nightly build triggers a deploy to test build.

I think of the builds now as single waves crashing on my shoreline. Each build is a slightly bigger wave. Each wave crashing on the shore brings with it a layer of confidence in the code. And because they happen serially, I can choose to just relax and watch the waves of increasing confidence crash on the shore, or I can choose to continue coding right after the first wave of confidence.


As I program, the next waves of build results hit the shoreline, and my notification tray tells me what that wave brought with it. Another “green” result wave tells me to go on about my business. A “red” wave tells me to stop and see what happened.

But I always wait for at least the first wave to come ashore and tell me what’s going on. I need that little piece of information that tells me “seems legit so far” so I can feel reasonably good about going back to coding. If my changes were big, I might wait until the next wave to see what to do.

So my confidence after check-in is not a black-or-white result. It is a continuous plane of increasing confidence in the code that I am writing, topped off when the code is deployed to production.

 

Note: This text is part of a “Beautiful Builds” Booklet I am working on.


How to make your builds FAST using DRY and SRP Principles

(this will become part of my upcoming booklet on beautiful builds)

One of the things that kept frustrating me when I was working on various types of build configurations in our CI server, was that each build in a rolling wave would take longer than the one before it.

The CI build was the fastest; the nightly build was slower, because it did all the work of the CI (compile, run tests) plus all the other work a nightly needed (run slow tests, create an installer, etc.). The deployment-to-test build would maybe just do the work of the CI and then deploy, or, in other projects, would do the work of the nightly and then deploy, because wouldn’t you want to deploy something only after it has been tested?

And lastly, production deploy builds took the longest, because they had to be “just perfect”, so they ran everything before doing anything.

That sucked, because at the time I had not discovered one of the most important features my CI tool had: the ability to share artifacts between builds. Artifacts are the output of build actions. These could be compiled binary files, configuration files, maybe just a log file, or just the source files that were used in the build, straight from source control.

Realization

Once I realized that build configurations can “publish” artifacts when they finish successfully, and that other build configurations can then “use” those artifacts in their own work, things started falling into place.

I no longer needed to redo all the things in all the builds. Instead, I can use the DRY (Don’t Repeat Yourself) principle in my build scripts (remember that build scripts are kept in source control, and are simply executed by a CI build configuration that provides them with context, such as environment parameters or the artifacts from a previous build).

I can make each rolling-wave build do only one small thing (the Single Responsibility Principle), added on top of the artifacts shared by the build before it.

For example:

  • The CI build only gets the latest version from source control, compiles in debug mode, and runs the fastest tests. Then it publishes the source and compiled binaries for other builds to use later on. It takes the same amount of time as the previously mentioned CI build.
  • The nightly build gets the artifacts from the latest successful CI build, and only compiles in RELEASE mode, runs the slow tests, creates installers, and publishes the installer, the source, and the binaries for later builds. Notice how it does not even need to get anything from source control, since those files are already part of the artifacts (depending on the amount of source, publishing source artifacts might not be a good idea due to slowness, but that depends on the size of the project). It takes half the time of the previously mentioned nightly build.
  • The deploy (to test) build gets the installer from the last successful nightly build and deploys it to a machine somewhere. It does not compile or run any tests. It publishes the installer it used. Takes 30 seconds.
  • The Deploy (to Staging) will just get the latest successful installer build artifacts from (deploy to test) builds, and deploy them to staging, and also publish the installer it used. Takes 30 seconds.
  • The Deploy (to Production) will just get the latest successful installer build artifacts from (deploy to staging) builds, and deploy them to production, and also publish the installer it used. Takes 30 seconds.

Notice how with artifact reuse, I am able to reverse the time trend with builds. The more “advanced” a build is along the deployment pipeline, the faster it can become.

And Because each build in the deploy pipeline is only getting artifacts of successful builds, we can be sure that if we got all the way to this stage, then all needed steps have been taken to be able to arrive at this situation (we ran all the tests, compiled all the source…)
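A minimal sketch of the shape this gives each stage; the commands are placeholders, and in practice the artifact hand-off between stages is configured in the CI server rather than in the scripts themselves:

```python
import subprocess

# Each stage does only its one new thing, on top of the artifacts published by
# the stage before it. The commands are placeholders; artifact hand-off between
# stages is configured in the CI server, not in these scripts.
def ci(source_dir):
    subprocess.run(["compile-debug", source_dir], check=True)
    subprocess.run(["run-fast-tests", source_dir], check=True)

def nightly(ci_artifacts_dir):
    subprocess.run(["compile-release", ci_artifacts_dir], check=True)
    subprocess.run(["run-slow-tests", ci_artifacts_dir], check=True)
    subprocess.run(["create-installer", ci_artifacts_dir], check=True)

def deploy(installer_path, environment):
    # No compiling, no tests: just push the already-validated installer.
    subprocess.run(["install", installer_path, "--target", environment], check=True)
```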


Pattern: Script Injection

I am slowly realizing that perhaps my concepts for the build book would serve better as reusable patterns for creating builds, each solving a specific problem.

This is a test to see how well this idea holds up in reality. If you like it, and especially if you do NOT like it, please let me know in the comments why, and how you would change it.

I think this is a topic that can be of great use to many people. If we frame it right, it can be more easily distributed and understood.

I am not even sure if the sections below are what I would “need” for a pattern of sorts.

Pattern: Script Injection

Problem:

You have a continuous integration process running nicely, but sometimes you need to be able to build older versions of your product. Unfortunately, in older versions of your product in source control, the file structure is different from the one your build actions are set up to use. For example, a set of directories that exists in the latest version in source control and is used in deployment did not exist in the source control version from 3 months ago. So your build fails, because it expects certain files or directories that did not exist in that source control version.

Forces:

You want the set of automated actions in your build to match exactly the current version and structure of your product files in source control.

Solution:

Separate your build script actions into two parts:

  • The CI side script, which lives in the continuous integration system.
  • The source control side script.

 

Source control side script

One or more script files that live inside the file structure of your product in source control. These scripts change based on the current product version. It is important that this is the same branch used by the CI system to build the product, so that the CI side scripts have access to the source control side scripts.

Developers should have full access to the source control side scripts.

Source control side scripts contain the knowledge about the current structure of the files and which actions are relevant for the current product version, so they change with every product version. They will usually use relative paths, because they will be executed by CI side scripts on a remote build server.

CI Side Scripts/Actions

These actions act as very simple “dumb” agents. They get the latest version of the source control scripts (and possibly all other product files if needed), and trigger the build scripts as a command line action.
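A minimal sketch of the CI-side half; the checkout command, script path, and parameters are hypothetical. All version-specific knowledge lives in `build/source_side_build.py`, which is checked in next to the source it builds:

```python
# ci_side.py -- lives in the CI server's build configuration (the "dumb" trigger).
# It only knows how to fetch the requested source version and hand off control.
import subprocess
import sys

def trigger(branch: str, version: str):
    # Placeholder checkout command for your source control tool.
    subprocess.run(["checkout-source", branch, version], check=True)
    # Hand off to the script that ships WITH that source version; it alone knows
    # the file structure and build steps that are valid for that version.
    subprocess.run([sys.executable, "build/source_side_build.py",
                    "--configuration", "Release"], check=True)

if __name__ == "__main__":
    trigger(branch=sys.argv[1], version=sys.argv[2])
```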

 

Summary

By separating the version-aware knowledge into a script inside source control, and triggering it via a CI side script that contains only parameters and other “context” data needed to invoke the source control side scripts correctly, we “inject” the version-aware knowledge of what the build script should do into a higher-level CI process trigger that does not care about product file structure, but still gives us the advantages we had before with a CI process.

 

Thoughts? Comments? Does this make sense to you? Is it the stupidest thing ever?

Does the fact that I now wear glasses help in any way?


Pattern: Shipping Skeleton


 


Other Names: “Hello World”, Walking Skeleton (based on XP), Tracer Bullet Build

Problem:

Remember when you had a working product, but you could not ship it, or shipping and deploying it took a long time or could not be estimated? By that time, your product was too big and your free time too short to start creating a working automated build and deploy process. As time went on, shipping became more and more of a nightmarish manual task, and automating it became more and more of a problem.

Now you're at the start of a new project, and you want to avoid all that pain in your new project.

Forces:

You want to avoid the pain of automating the build and deploy cycle. But you also don’t want to spend too much time working on it.

Solution:

Before starting development on the new product, start with a shipping skeleton: an empty solution with nothing but a “hello world” output that has the basic automated build and deploy cycle working. You should be able to ship this empty hello world project with the click of a button, in a matter of minutes.

Once the shipping skeleton is in place, you can start filling out the product with features, growing the build scripts in small increments alongside the product.

Basic Shipping Skeleton:

1) A Build script (in source control) for compiling the current source

2) A continuous integration server that has a “CI” build configuration, triggered by code check-in, that invokes the build script from the previous bullet (also see the Build Script Injection pattern for more on this)

3) Another build configuration on the CI server for “Deploy”. This can be either deploy to test or deploy to production. Usually you want at least a “deploy to test” succeeding before having a “deploy to production” CI build configuration. This “deploy to test” gets invoked automatically when the “CI” configuration passes. Later on, as the build process matures, you can change this, but for now, knowing as early as possible that the product is deployable, while it changes so much, is important for quick feedback cycles.

4) If there isn’t a “Deploy to production” configuration, create it. This deploys the product to a production machine. If it is a web site, it deploys the website to a web server. If it is an installer, it deploys and runs the installer on a machine that will act as a “production” machine, either mimicking a user machine or an enterprise machine where the product will be installed. For web servers, make them as real as possible, all the way, even by making them public (though possibly password protected). This gives you the chance to turn the web server into real production with the switch of a DNS setting.

 

Summary

By starting with a shipping skeleton, you give yourself several benefits:

  • You can ship at will
  • Adding features to the build and deploy cycle is a continuous action that takes a few minutes each day, if at all.
  • You can receive very quick feedback on features or mockup-features you are building into your product.

Thoughts? Comments? Does this make sense to you? Is it the stupidest thing ever?


Build Pattern: Location Agnostic Script

Symptoms:

  • Running the build script on a machine where it has never run before requires extra pre-work to make the machine ready, such as mapping drives, creating a special build script folder, and so on.

Problem:

  • The build script uses hardcoded file and directory paths to find its dependent source files. For example, it searches for a solution file in Z:/SolutionFileName. That means the build script has specific requirements of its environment before it can run, which creates lots of menial, boring work before you can run it.

Solution:

  • Have the script use only relative paths to find files (see the sketch below).
  • Make the script part of source control, or deploy it into the root of the source control branch being built, so that it has access to all the files it needs
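A minimal sketch of the idea, with a hypothetical repository layout and placeholder commands; every path is resolved relative to the script's own location, so no drive mapping or machine preparation is needed:

```python
# build.py -- checked into the root of the branch it builds.
# Every path is resolved relative to this file, so the script runs the same on
# any machine and from any checkout location, with no mapped drives or pre-work.
import subprocess
from pathlib import Path

ROOT = Path(__file__).resolve().parent

SOLUTION = ROOT / "src" / "MyProduct.sln"   # hypothetical layout
TEST_DIR = ROOT / "tests"

if __name__ == "__main__":
    subprocess.run(["compile-solution", str(SOLUTION)], check=True)  # placeholder command
    subprocess.run(["run-tests", str(TEST_DIR)], check=True)         # placeholder command
```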