In a previous post I discussed the notion of having two separate pipelines. I want to revisit this in a bit more detail.
When we're talking about Continuous Delivery, we're very interested in the speed of the feedback loop. When you expect to merge into the master branch many times a day, every 5-minute delay in a pipeline run matters for keeping things flowing productively instead of waiting around.
People tend to create two types of pipelines to deal with this issue: fast pipelines and slow pipelines. They'll use the slow pipeline to run long-running e2e tests, for example.
That really only slows down the rate of delivery. If those e2e tests are so important that you won't deliver until you know their results, you'll have to wait 24 hours to deliver, regardless of how small the changes are.
We can use the term Delivery Stoppers to create a logical bucket where we can put all the things we feel we cannot deliver unless they are "green". e2e tests usually fit that bucket. So do unit tests, security tests, compilation steps, etc.
What doesn't fit that bucket? Code complexity is useful to know, but it usually doesn't prevent us from delivering a new version when we know we need to fix some complicated code. Performance (under a specific threshold) is a useful metric, but should it break our build to tell us that our performance has degraded by 1%? Or should it tell us that we need to add a backlog item to take care of that issue?
If those things aren’t delivery stoppers, let’s call them discoveries. We want to know about them, but they should not be a build breaker.
So now let's consider our pipelines from that point of view:

We can create delivery pipelines that:

- Run everything that can prevent delivery if it fails
- End with the deployment of a deliverable product
- Give us confidence we didn't screw up badly from a functional standpoint
- Have to run fast (even if they contain slow tests - we'll have to deal with making them run fast enough)
We can create discovery pipelines that:

- Run all the tasks that result in interesting discoveries and KPIs
- Give us new things to consider as technical debt or non-functional requirements
- Can result in new backlog items
- Do not result in a deployment
- Feed a dashboard of KPIs
- Can take a long time (that's why they're separate)
Here’s an example of such pipelines:
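As a minimal sketch of the two pipelines, here they are as shell functions. Every step name below is a hypothetical placeholder (`echo` stands in for the real build tools); the point is the structure, not the commands:

```shell
#!/bin/sh
# Sketch of the two pipeline types. All step commands are hypothetical
# placeholders -- echo stands in for real compilers, test runners, etc.

delivery_pipeline() {
  set -e                          # any delivery stopper failing aborts everything
  echo "compile"                  # delivery stoppers: compilation...
  echo "unit tests"               # ...unit tests...
  echo "security tests"           # ...security tests...
  echo "e2e tests"                # ...and even the slow e2e tests
  echo "deploy"                   # the pipeline ends in a deployment
}

discovery_pipeline() {
  # no 'set -e' here: a failing check becomes a backlog item,
  # it never blocks a delivery
  echo "code complexity report" || true
  echo "performance benchmarks" || true
  echo "publish KPI dashboard"    # feeds the KPI dashboard
}

delivery_pipeline                 # runs on every commit - must stay fast
discovery_pipeline                # runs nightly - may take hours
```

The design choice to notice: failure handling. The delivery pipeline aborts on the first red step; the discovery pipeline records findings and keeps going.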
I know I said that every 5 minutes in a delivery pipeline makes a huge difference - so how come I'm willing to put long-running e2e tests there?
Because they are delivery stoppers. We have to run them on every commit.
What do we do about the fact that they run slowly?

- We can run the tests in parallel on multiple environments/build agents.
- We can split the tests into multiple test suites and run them in even more parallelized processes on multiple environments.
- We can optimize the tests themselves.
- We can remove unneeded/duplicated tests (if they already exist at a lower level, such as unit tests or API tests).
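The parallelization idea can be sketched in a few lines of shell. The suite names and the `run_suite` runner are made up for illustration - here it just sleeps to simulate a slow suite:

```shell
#!/bin/sh
# Sketch: split a slow e2e run across parallel workers instead of moving it
# to the nightly pipeline. 'run_suite' is a hypothetical stand-in for a real
# test runner; sleep simulates a slow suite.
run_suite() {
  sleep 1                        # pretend each suite takes a while
  echo "suite $1 passed"
}

# run four suites in parallel: wall time ~1s instead of ~4s sequentially
for s in login checkout search profile; do
  run_suite "$s" &
done
wait                             # the pipeline still gates on all of them
```

Real CI servers offer the same idea natively (parallel jobs/agents); the `wait` at the end is what keeps these tests delivery stoppers - the pipeline doesn't proceed until every suite is green.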
But we do not move them into the nightly pipeline. We deal with the fact that our delivery pipeline takes a long time, and we don't allow that time to grow any further.
Delivery pipelines and discovery pipelines - I think I can live with that.