The current orthodoxy does not focus on understanding deeper economic relationships. Instead, it is, at best, based on observing correlations between pairs of proxy variables. For example, it observes that late design changes have higher costs than early design changes and prescribes front-loading problem solving. This ignores the fact that late changes can also create enormous economic value. The economic effect of a late change can only be evaluated by considering its complete economic impact.
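As a sketch of what "complete economic impact" means, the toy calculation below weighs a late change's implementation cost and schedule impact against the life-cycle value it creates. All figures, including the cost-of-delay rate, are invented for illustration.

```python
# Toy evaluation of a late design change by its complete economic impact.
# All figures (value created, engineering cost, cost of delay) are invented.

def late_change_net_value(value_created, engineering_cost,
                          delay_weeks, cost_of_delay_per_week):
    """Net life-cycle profit impact of making the change."""
    return value_created - engineering_cost - delay_weeks * cost_of_delay_per_week

# A late change that costs $200,000 to engineer and slips launch by 4 weeks
# at a $50,000/week cost of delay, but adds $1,000,000 of life-cycle value.
net = late_change_net_value(
    value_created=1_000_000,
    engineering_cost=200_000,
    delay_weeks=4,
    cost_of_delay_per_week=50_000,
)
print(f"Net economic impact: ${net:,.0f}")   # +$600,000: the "expensive" late change pays off
```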
Reducing risk, which is the primary mission of testing, clearly creates economic value for product developers. In fact, reducing risk is so central to product development that quantifying its economic impact is indispensable.
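One hedged way to put a number on this is to treat risk as an expected failure cost and to value a test by how much it reduces that expectation, net of the test's direct cost and its delay cost. The probabilities and dollar figures below are purely illustrative.

```python
# Hypothetical sketch: quantifying the value of risk reduction from a test.
# The value of the test is the reduction in expected failure cost, net of the
# cost (and delay) of running it. All numbers are invented for illustration.

failure_probability_before = 0.30        # chance of a field failure without the test
failure_probability_after  = 0.05        # chance after the test catches most defects
cost_of_failure            = 2_000_000   # economic damage if the failure occurs
test_cost                  = 100_000     # direct cost of the test
test_delay_weeks           = 2
cost_of_delay_per_week     = 30_000

risk_reduction_value = (failure_probability_before - failure_probability_after) * cost_of_failure
total_test_cost = test_cost + test_delay_weeks * cost_of_delay_per_week

print(f"Value of risk reduction: ${risk_reduction_value:,.0f}")   # $500,000
print(f"Cost of the test:        ${total_test_cost:,.0f}")        # $160,000
print(f"Net value of testing:    ${risk_reduction_value - total_test_cost:,.0f}")
```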
This is a blind spot for many modern managers who are heavily influenced by the Pareto Principle, which observes that 80 percent of the leverage lies in 20 percent of the problems. The dark side of the Pareto Principle is that we tend to focus excessively on the high-payoff 20 percent. We overmanage this 20 percent and undermanage the other 80 percent. This leads to what we might call the Pareto Paradox: There is usually more actual opportunity in the undermanaged 80 percent than in the overmanaged 20 percent.
Unhappy with late deliveries, a project manager decides he can reduce variability by inserting a safety margin, or buffer, in his schedule. He reduces uncertainty in the schedule by committing to an 80 percent confidence schedule. But what is the cost of this buffer? The project manager is actually trading cycle time for reduced variability. We can only know if this is a good trade-off if we quantify both the value of cycle time and the economic benefit of reduced variability.
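The following sketch makes that trade-off explicit under invented numbers: the buffer's cost is its extra weeks multiplied by the cost of delay, and its benefit is modeled, as an assumption, as the reduced probability of missing the committed date times the economic cost of a miss.

```python
# Hypothetical sketch of the buffer trade-off: the buffer buys schedule
# confidence, but it is paid for in cycle time. Whether that is a good trade
# depends on the cost of delay and the economic cost of missing the date.
# All numbers are invented for illustration.

mean_duration_weeks    = 20
p80_duration_weeks     = 24           # 80 percent confidence commitment
buffer_weeks           = p80_duration_weeks - mean_duration_weeks
cost_of_delay_per_week = 75_000       # profit lost per week of later delivery
miss_probability_drop  = 0.50 - 0.20  # committing at P80 instead of P50
cost_of_a_missed_date  = 400_000      # expediting, penalties, broken promises

cost_of_buffer     = buffer_weeks * cost_of_delay_per_week
value_of_certainty = miss_probability_drop * cost_of_a_missed_date

print(f"Cost of the 4-week buffer:     ${cost_of_buffer:,.0f}")      # $300,000
print(f"Value of the added confidence: ${value_of_certainty:,.0f}")  # $120,000
# Here the buffer destroys value; with a lower cost of delay it might not.
```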
We need the cost of delay (COD) to evaluate the cost of queues, the value of excess capacity, the benefit of smaller batch sizes, and the value of variability reduction. Cost of delay is the golden key that unlocks many doors.
For simple single-variable decisions, we only need to know the direction of the change. For multivariable decisions, we also need to know the magnitude of the change, and, most importantly, we need a method to express all changes, in all variables, in the same unit of measure. This is the only way we can evaluate the overall economic consequences of changing multiple proxy variables simultaneously.
We commonly use five key economic objectives as measures of performance for a project. We vary each measure independently and assess its influence on life-cycle profits. In effect, we are trying to determine the transfer function between each measure of performance and life-cycle profitability. This method is known as sensitivity analysis.
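A minimal sensitivity-analysis sketch might look like the following, assuming a deliberately simplified life-cycle profit model with hypothetical volumes, prices, and a cost-of-delay rate. Each proxy variable is perturbed independently and its effect is expressed in the common unit of measure, life-cycle profit.

```python
# A simple life-cycle profit model used only to illustrate the method;
# every parameter value below is hypothetical.
def life_cycle_profit(unit_sales=100_000, unit_price=50.0, unit_cost=30.0,
                      development_cost=1_500_000, delay_weeks=0.0,
                      cost_of_delay_per_week=40_000):
    return (unit_sales * (unit_price - unit_cost)
            - development_cost
            - delay_weeks * cost_of_delay_per_week)

baseline = life_cycle_profit()

# Perturb each proxy variable independently and express every result in the
# same unit of measure: life-cycle profit.
scenarios = {
    "10% development cost overrun": life_cycle_profit(development_cost=1_650_000),
    "5% unit cost increase":        life_cycle_profit(unit_cost=31.5),
    "10% sales shortfall":          life_cycle_profit(unit_sales=90_000),
    "8-week schedule slip":         life_cycle_profit(delay_weeks=8),
}

for name, profit in scenarios.items():
    print(f"{name:30s} impact on life-cycle profit: {profit - baseline:+,.0f}")
```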
WIP constraints are a powerful way to gain control over cycle time in the presence of variability. This is particularly important for systems where variability accumulates, such as in product development. WIP constraints exploit the direct relationship between cycle time and inventory, which is known as Little’s Formula.
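Little's Formula states that average cycle time equals average work in process divided by average throughput, which is why constraining WIP caps cycle time. A small sketch with hypothetical numbers:

```python
# Little's Formula:  average cycle time = average WIP / average throughput.
# Hypothetical numbers showing how a WIP constraint caps cycle time.

average_throughput = 5        # items completed per week
current_wip        = 60       # items in process today
wip_limit          = 30       # proposed WIP constraint

cycle_time_now    = current_wip / average_throughput   # 12 weeks
cycle_time_capped = wip_limit / average_throughput      #  6 weeks

print(f"Cycle time at current WIP:  {cycle_time_now:.0f} weeks")
print(f"Cycle time under WIP limit: {cycle_time_capped:.0f} weeks")
```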
Other companies prioritize on the basis of project profitability measures like return on investment (ROI). On the surface, this appears to be an economic approach, but this is just an illusion. By prioritizing, we choose to service one project before another. In general, it is best to delay the project with a low cost of delay. This suggests that we should not prioritize on the basis of project profitability, but rather on how this profitability is affected by delay. Of course, this can only be done when we know the cost of delay, information that 85 percent of developers do not have.
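A small illustration, with invented projects, of why the sequencing decision should turn on cost of delay rather than on profitability alone: the total delay cost depends on which project is made to wait.

```python
# Hypothetical sketch: sequencing two projects by cost of delay rather than by
# profitability. All durations and cost-of-delay rates are invented.

projects = {
    "A": {"duration_weeks": 10, "cod_per_week": 20_000},   # high ROI, low cost of delay
    "B": {"duration_weeks": 4,  "cod_per_week": 100_000},  # modest ROI, high cost of delay
}

def total_delay_cost(order):
    """Each project accrues its cost of delay until the week it ships."""
    elapsed, cost = 0, 0
    for name in order:
        p = projects[name]
        elapsed += p["duration_weeks"]
        cost += elapsed * p["cod_per_week"]
    return cost

print(f"A then B: ${total_delay_cost(['A', 'B']):,.0f}")   # 10*20k + 14*100k = $1,600,000
print(f"B then A: ${total_delay_cost(['B', 'A']):,.0f}")   #  4*100k + 14*20k =   $680,000
```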
Since high capacity utilization simultaneously raises efficiency and increases delay cost, we need to look at the combined impact of these two factors. We can only do so if we express both factors in the same unit of measure, life-cycle profits. If we do this, we will always conclude that operating a product development process near full utilization is an economic disaster.
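One way to see the combined effect is a simple single-server (M/M/1) queueing approximation, in which queue time grows roughly as u/(1-u) with utilization u. The capacity cost and cost-of-delay figures below are invented.

```python
# Single-server (M/M/1) approximation: queue time grows as u / (1 - u).
# Capacity cost and cost of delay are invented for illustration.

service_time_weeks     = 1.0       # average time to process one job
cost_of_delay_per_week = 50_000    # delay cost per waiting job
weekly_capacity_cost   = 80_000    # fully loaded cost of the resource

for u in (0.70, 0.80, 0.90, 0.95, 0.985):
    queue_time = service_time_weeks * u / (1 - u)        # expected weeks in queue
    idle_cost  = (1 - u) * weekly_capacity_cost          # "wasted" capacity per week
    delay_cost = queue_time * cost_of_delay_per_week     # delay cost per job
    print(f"utilization {u:.3f}: idle capacity ${idle_cost:>7,.0f}/week, "
          f"queue time {queue_time:5.1f} weeks, delay cost ${delay_cost:>9,.0f}/job")
```

The linear savings from squeezing out idle capacity are dwarfed by the nonlinear growth in queue time and delay cost as utilization approaches 100 percent.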
A granular timeline subdivides time intervals into very small buckets. When we do this, the coefficient of variation for each of these buckets becomes very high. This makes variance very high and conformance unlikely. Even worse, if we incentivize conformance, people will insert contingency reserves to prevent their tasks from missing the schedule. The more granular the schedule, the larger the schedule reserves. And these reserves aggregate into even longer timelines. The more we increase planning detail and the harder we try to incentivize performance, the worse our problem becomes.
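A sketch of the underlying statistics, under the simplifying assumption that the work completed each day is independent and identically distributed: small buckets have a high coefficient of variation, and aggregating days into larger buckets reduces it by the square root of the bucket size.

```python
# Why granular time buckets have high coefficients of variation.
# Assumption: daily work completed is independent with the same mean and
# standard deviation, so variances add across days. Numbers are invented.

import math

daily_mean = 6.0     # hours of planned work completed per day
daily_std  = 3.0     # day-to-day standard deviation

for bucket_days in (1, 5, 20, 60):
    mean = daily_mean * bucket_days
    std  = daily_std * math.sqrt(bucket_days)   # independent days: variances add
    cv   = std / mean
    print(f"{bucket_days:3d}-day bucket: coefficient of variation = {cv:.2f}")
```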
When we emphasize flow, we focus on queues rather than timelines. Queues are a far better control variable than cycle time because, as you shall see, queues are leading indicators of future cycle-time problems. By controlling queue size, we automatically achieve control over timelines.
There can be strong diseconomies associated with large batches. Furthermore, the modest reduction in variability due to the pooling of variances will be completely overwhelmed by the geometric increase in uncertainty caused by the longer planning horizons associated with large batches.
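The two opposing effects can be sketched as follows, under the invented assumptions that pooling n independent tasks reduces the batch's relative variation by the square root of n, while each added task extends the planning horizon by a week and forecast error compounds geometrically with that horizon.

```python
# Illustrative sketch of the two opposing effects of batch size. Pooling n
# independent tasks reduces the relative variation of the batch by sqrt(n),
# but a larger batch also pushes out the planning horizon, and (assumed here)
# forecast uncertainty compounds geometrically with that horizon.
# All rates are invented for the example.

import math

task_cv            = 0.50    # relative variation of a single task
weeks_per_task     = 1.0     # each task added extends the horizon (assumption)
uncertainty_growth = 1.10    # forecast error multiplier per extra week of horizon

for batch_size in (1, 5, 20, 50):
    pooled_cv   = task_cv / math.sqrt(batch_size)                # variance pooling
    horizon     = batch_size * weeks_per_task
    forecast_cv = task_cv * uncertainty_growth ** (horizon - 1)  # compounding error
    print(f"batch of {batch_size:2d}: pooled execution CV {pooled_cv:.2f}, "
          f"forecast CV at {horizon:4.0f}-week horizon {forecast_cv:.2f}")
```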
How do we prevent all these small review meetings from driving up overhead? We conduct these review meetings on a regular time-based cadence. Every Wednesday afternoon at 1:00 pm, we review all the drawings completed in the last week. There is no need for a meeting announcement and no need to coordinate schedules. Meetings that are synchronized to a regular and predictable cadence have very low set-up costs. They contribute very little excess overhead.
We must recognize that our original plan was based on noisy data, viewed from a long time horizon. For example, we may have started development believing a feature would take 1 week of effort and be valued by 50 percent of our customers. As we progressed through development, we may have discovered that this feature will require 10 weeks of effort and will only be valued by 5 percent of our customers. This is a factor of 100 change in its cost-to-benefit ratio. This emergent information completely changes the economics of our original choice. In such cases, blindly insisting on conformance to the original plan destroys economic value.
To manage product development effectively, we must recognize that valuable new information is constantly arriving throughout the development cycle. Rather than remaining frozen in time, locked to our original plan, we must learn to make good economic choices using this emerging information.
This leads them to load their processes to dangerously high levels of utilization. How high? Executives coming to my product development classes report operating at 98.5 percent utilization in the precourse surveys. What will this do? Chapter 3 will explain why large queues form when processes with variability are operated at high levels of capacity utilization. In reality, the misguided pursuit of efficiency creates enormous costs in the unmeasured, invisible portion of the product development process, its queues.