The best way to measure the impact of delays is captured in Don Reinertsen’s mantra: “If you quantify one thing, quantify the cost of delay.” When deciding on a workflow improvement experiment, we can gauge its potential effectiveness by anticipating what will happen to our cost of delay. We can then observe what actually happens to determine whether we achieved an improvement.
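One way to turn cost-of-delay estimates into a concrete sequencing decision is Reinertsen’s CD3 metric (Cost of Delay Divided by Duration): do the work with the highest cost of delay per unit of duration first. A minimal sketch in Python; the backlog items and dollar figures below are hypothetical:

```python
# Sketch: sequencing work by CD3 (Cost of Delay Divided by Duration).
# Items with a high weekly cost of delay and a short duration go first.
# All item names and numbers below are hypothetical illustrations.

backlog = [
    # (name, cost of delay per week ($), estimated duration (weeks))
    ("Checkout redesign", 30_000, 6),
    ("Compliance report", 50_000, 10),
    ("Search fix",        12_000, 1),
]

def cd3(item):
    _, cost_of_delay, duration = item
    return cost_of_delay / duration

# Working in order of highest CD3 minimizes the total cost of delay
# incurred across the whole backlog.
for name, cod, weeks in sorted(backlog, key=cd3, reverse=True):
    print(f"{name}: CD3 = {cod / weeks:,.0f}")
```

Here the short, urgent “Search fix” jumps ahead of larger items even though its absolute cost of delay is the smallest, which is the point of dividing by duration.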
A side note on predictability: it has been in vogue to talk about unpredictability in the Agile space. It is true that there is often a large degree of unpredictability in what is needed; this is the value of using an empirical process to determine what will provide value and then delivering it. But it is a misconception to believe that our process cannot be founded on a well-defined and explicit theory. Flow thinking is such a theory. Throughout this book we have described how large batches, lack of workload management, delays in workflow, delays in feedback, poor team organization, and other factors contribute to delays, which create additional work and increase our cost of delay. Being aware of these relationships gives us a basis for making decisions that are usually effective. Of course, there is a large degree of unpredictability in the organization itself, so these decisions should always be treated as hypotheses to be validated.
Using flow theory as a guide enables us to focus directly on our challenges. Following the established practices of a framework hopefully reduces delays, at least on average across the companies that use it. Unfortunately, what works for another organization may not work for yours, so fixed sets of practices are often not effective.
What to look for
While many things can contribute to delays, the following are the primary causes:
- large batches of work
- having too much work in process
- not having a well-defined and managed intake process
- seeing work going back and forth between people or teams
- lack of visibility as to why something is important
- “ghost work” (i.e., work that’s not seen by anyone other than those doing it)
- many handoffs taking place
- requirements being defined mostly by product folks and then handed off to developers
- developers writing code on their own and then handing it off to testers
- lack of automated testing
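Several of these causes are connected by Little’s Law, a basic result of flow theory: average lead time equals average work in process divided by average throughput. A minimal sketch with hypothetical numbers:

```python
# Sketch: Little's Law relates work in process (WIP) to delay.
#   average lead time = average WIP / average throughput
# More work in process at the same throughput directly means longer
# delays. The numbers below are hypothetical.

def average_lead_time(wip, throughput_per_week):
    """Average time an item spends in the system, in weeks."""
    return wip / throughput_per_week

# A team finishing 5 items/week with 30 items in flight:
print(average_lead_time(30, 5))   # 6-week average lead time

# Halving WIP halves the average lead time at the same throughput:
print(average_lead_time(15, 5))   # 3-week average lead time
```

This is why “having too much work in process” appears so high on the list: cutting WIP is often the cheapest way to cut delay.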
Looking for delays and what we can do about them
Let’s now look through the ideal value stream for delays and what’s causing them in each of the main areas. Figure 1 provides a map for identifying where delays are commonly present.
Figure 1. The DA FLEX lifecycle for value streams.
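A simple way to make delays in a value stream visible is to compute flow efficiency: the fraction of an item’s total elapsed time it spends actively being worked on rather than waiting in a queue. A minimal sketch, assuming hypothetical state-transition timestamps for one work item:

```python
from datetime import datetime

# Sketch: flow efficiency = active time / total elapsed time.
# A low value means the item spent most of its life waiting in queues.
# The state names and timestamps below are hypothetical.

ACTIVE_STATES = {"in development", "in test"}

# (state entered, timestamp) transitions for one work item
transitions = [
    ("ready",          datetime(2024, 3, 1)),
    ("in development", datetime(2024, 3, 8)),
    ("waiting for QA", datetime(2024, 3, 10)),
    ("in test",        datetime(2024, 3, 17)),
    ("done",           datetime(2024, 3, 18)),
]

def flow_efficiency(transitions):
    active = total = 0.0
    for (state, start), (_, end) in zip(transitions, transitions[1:]):
        days = (end - start).total_seconds() / 86400
        total += days
        if state in ACTIVE_STATES:
            active += days
    return active / total

print(f"flow efficiency: {flow_efficiency(transitions):.0%}")
```

In this example the item was actively worked on for only 3 of its 17 days; the “ready” and “waiting for QA” queues are where to look for the delays discussed below.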
Strategic Planning & Lean Portfolio Management
- Too long a backlog of things to work on. If it would take more than a year to complete it all, there’s a good chance that half of it will be obsolete by the time you get to it
- Is there visibility of how things are identified, selected and sequenced throughout the organization?
- Are MBIs, MVPs and MVRs being used?
- Is the corporate planning and budgeting cycle more than three months long?
Lean Product Management and Intake Process
- Are MBIs, MVPs and MVRs being used?
- Is there a well-defined intake process?
- Are there acceptance criteria for the backlog items?
- How are teams coordinating with each other?
- Is there a focus on building MBIs, MVPs and MVRs?
- Are all four types of dependencies (technical, business, architectural, communication) being identified, made visible and tracked?
Implementation and Integration
- How many handoffs are present?
- How many interruptions are present and what/who causes them?
- Is work being pushed to the teams?
- Are teams being directed by more than one product owner?
- Are teams working on too many things?
- Is upstream work visible to the teams?
- Is upstream work visible to the shared services?
- Are test-first methods being used?
- Is there automated testing?
- Is ops being blindsided?
- When things get released are they ready to provide value?