Like most engineers, I do a lot of optimizing, often just for fun. When walking to work I seek the shortest route. When folding my laundry I minimize the number of moves. And at work, of course, I optimize all day long alongside all my engineering colleagues.
Optimization, by definition, requires an objective function as the basis for measuring improvement or regression. How short is the route? How many moves does it take to fold a shirt? But what is the objective function at work around which my team and I should optimize?
I’ve worked in many software engineering organizations where the objective function is an unstated confusion that evolved on its own over time. It’s often a bit of “don’t break things” mixed with a dose of “conform to the standards”. Sometimes more destructive objectives find their way into the culture: “get yourself seen,” or worse, “don’t get noticed.” And my least favorite, “hoard your knowledge.”
Recently, while working with a client, I had to state my views on a good objective function for a software engineering team. It’s this: to predictably deliver a continuous flow of quality results while minimizing dark time — the time between when a feature is locked down for development and when a customer starts using it.
Predictable: Your process has to be stable and sustainable. It’s not about sprinting to collapse; nor is it about quick wins followed by a slog through technical debt. It’s about a steady pace over a long time. Hence the volatility measure in Pivotal Tracker; a good team has low volatility, and therefore their rate of delivery is highly predictable.
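One simple way to put a number on predictability is to express the spread of iteration velocity as a percentage of average velocity, which is roughly the idea behind Tracker’s volatility measure. A minimal sketch, with illustrative story-point totals (the exact formula Tracker uses may differ):

```python
from statistics import mean, pstdev

def volatility(velocities):
    """Standard deviation of iteration velocity, expressed as a
    percentage of mean velocity. Lower means more predictable.
    (A rough sketch; Tracker's exact formula may differ.)"""
    avg = mean(velocities)
    return round(100 * pstdev(velocities) / avg)

# A steady team versus an erratic one (made-up numbers):
volatility([10, 11, 9, 10])  # low volatility: delivery is predictable
volatility([3, 18, 6, 15])   # high volatility: hard to plan around
```

A team averaging ten points per week either way, the first schedule can be trusted; the second can’t.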
Delivery: Code is not worth anything until it is in use by customers. Calling delivery anything else often leads to spinning wheels and wasted effort.
Continuous flow: Activities that could disrupt the flow are better dealt with in the moment, in the normal run of things. For example, I find mandatory code reviews disruptive and demoralizing. Gatekeeping steps like these, by definition, stop the normal flow and send things back for rework. In contrast, pair programming often achieves the same quality and consistency objectives in real time, without disrupting the flow.
Quality: This is a relative measure. The work needs to be done sufficiently to avoid rework (i.e. bugs) and to prevent the accumulation of technical debt. Spending more time trying to achieve “quality” beyond these measures is just waste.
Results: What it’s all about.
Minimizing dark time: Many software engineering organizations miss this one because it’s driven by the business rather than the needs and craftsmanship of the engineers themselves. And yet, minimizing dark time is perhaps the most critical contribution that an engineering team can make to a business.
Dark time is what the business experiences between when the engineers remove the business’s ability to re-spec a bit of work and when they hand back a working result. During this dark time the business can no longer refine its decision, nor observe and learn from the results. They’ve ordered (and paid for) the new car, but are waiting for the keys. It’s dark because during this stage there is nothing for the business to do, with respect to that feature, but wait.
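Because dark time has two crisp endpoints — spec lock-down and first customer use — it’s easy to measure once those timestamps are recorded. A minimal sketch, with hypothetical feature names and dates purely for illustration:

```python
from datetime import datetime

def dark_days(locked_down, first_use, fmt="%Y-%m-%d"):
    """Days between spec lock-down and first customer use."""
    return (datetime.strptime(first_use, fmt)
            - datetime.strptime(locked_down, fmt)).days

# Illustrative records: (feature, spec locked, customer first used it)
features = [
    ("export-csv", "2024-03-01", "2024-03-04"),
    ("sso-login",  "2024-03-01", "2024-03-22"),
]

for name, locked, in_use in features:
    print(f"{name}: {dark_days(locked, in_use)} days of dark time")
```

Tracking this per feature makes the trend visible: is the team’s dark time measured in days or in weeks?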
While coding I experience the same dark time when working on a slow code base or, worse yet, working with a slow test suite. My TDD cycle grinds to a crawl as I wait a minute or more (the horror!) between when I hand my code to the compiler/interpreter and when it spits back the red-green results.
If you hate twiddling your thumbs waiting for slow tests to run, think how frustrating it is for the business folks when their engineering team throws them into the dark for days, perhaps even weeks. Of course they pipeline their work and find ways to be useful, but the dark time still sucks.
When a software engineering team chops dark time down from a month to a week, the business folks cheer. When the engineers chop it down to a day or less, the business folks do what we coders do when working with a fast test suite… we fly.