Created on 2017-12-27 19:34
Published on 2017-12-28 16:10
When we began our Continuous Improvement journey in 2000, it was all about solving tactical problems to improve quality and ultimately save money. We spent a great deal to select our best people and train them in sophisticated problem-solving software and methodologies. I led a group of 15 Black Belts and 1 Master Black Belt charged with improving the engineering department of 1,500 people. Unfortunately, our biggest problem was a lack of clear problems. Without good performance metrics, we did not know where to find the biggest problems. We certainly had no clue where to find the biggest opportunities. The bottom of the iceberg was completely unknown.
Every Six Sigma project started with guessing the opportunity and creating a new metric. Our old Red/Yellow/Green scorecards were statistically useless. When metrics definitions "evolved" over time, projects were much more difficult. There was never enough historical data to see if we really made a difference. After two years, the company gave up on the whole idea of applying Six Sigma to transactional functions, but kept it alive in manufacturing. In 2006, we started a more strategic process called Best Business Practices that drove more engineering savings and improvements than we ever imagined. And it all started with painstakingly defining and implementing true Key Performance Indicators. Before going into true KPIs, we will review some of the common problems with "metrics". My experience and examples are mostly in product development, but the process has been (and can be) applied to all major functions.
Companies use metrics all the time in lots of places. Some metrics are temporary for reaching a short term goal. Others may be around for a long time. When new leaders arrive on the scene, often their priorities are 1) rearrange the org chart and 2) revise the metrics. This is normal in many companies, but it can cause several problems.
For example, if we choose our own metrics, then we may want to look good instead of exposing "improvement opportunities". Most people get really quiet when you ask about their biggest "improvement opportunities". In other words, "Can I cut your budget, please?"
It’s easy to lose focus on your customers’ needs. Did we ask them what is important for us to deliver to them? Did we set challenging metrics in place to reveal the raw truth about our performance in our customers’ eyes?
It’s tempting to include metrics that don’t really affect the bottom line. Just because the data is readily available does not make it important. For example, “Days to Process Change Requests” may be easy to get, but it ignores the size and value of different changes and is disconnected from pleasing our customers who just want high value products quickly and don’t care about our internal processes.
It’s easy to focus strictly on Cost Reduction at the expense of other metrics like Quality or Timing. One company slashed its Advanced Manufacturing and Advanced Quality departments because they “cost too much” and their value was not understood by management at the time. New product timelines started slipping because of manufacturing issues. As these products launched, the Cost of Scrap and Containment skyrocketed. True Story.
It’s easy to use categories or measure against comfortable standards instead of exposing the raw data and revealing an opportunity. Let’s take Up-Time as an example. Some might like to view the data in colors for visual effect.
The data is hidden behind the 85% specification. Why is 85% the right answer? If someone reports Up-time of 92%, this can lead to great discussions about how they achieved it and how much higher it could go, instead of just reporting Green. Also, 20% Up-time could be the highest among the other reds.
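To make the point concrete, here is a small sketch (with hypothetical line names, up-time numbers, and an assumed Yellow threshold 10 points below the 85% spec) showing how Red/Yellow/Green bucketing erases information that the raw data preserves:

```python
# Hypothetical sketch: the same up-time data reported as Red/Yellow/Green
# buckets vs. raw percentages, against an assumed 85% specification.

uptimes = {"Line 1": 92.0, "Line 2": 20.0, "Line 3": 45.0, "Line 4": 80.0}

def ryg(pct, spec=85.0):
    """Classic scorecard bucketing: hides everything but the bucket."""
    if pct >= spec:
        return "Green"
    if pct >= spec - 10:  # Yellow band is an assumption for illustration
        return "Yellow"
    return "Red"

buckets = {line: ryg(pct) for line, pct in uptimes.items()}
print(buckets)  # Line 2 and Line 3 are both "Red", yet 45% up-time is more
                # than twice 20% -- a difference the bucket erases entirely.

# The raw data keeps the full ranking and the size of every gap.
print(sorted(uptimes.items(), key=lambda kv: kv[1]))
```

The bucketed view ends every conversation at the color; the raw view starts conversations about why one "red" line runs at twice the up-time of another.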
If our metrics change all the time, then how do we know if we’re getting better year-over-year? Will we be able to see if our improvement initiatives impacted the right metrics? Will we see the impact of new leaders? Do they even want us to know? Even tweaking definitions of metrics over time blurs our historical view. Some metrics experts encourage constantly changing metrics "as business needs change". How often do our fundamental needs and long-term strategies really change that much? The same consultants are often paid to help design new metrics, so listen to them with caution.
Have you heard the phrase, "what gets measured gets done"? Just by declaring our metrics, we communicate our priorities and how performance will be viewed. Most metrics will improve simply because we start to report them.
On the other hand, we must always be wary of getting our wishes. For example, we want to reduce Days to Process Change Requests. Some smart people could improve the result by: breaking large changes into smaller, faster ones; pre-approving changes before the clock starts ticking; or even cancelling slow changes and re-entering them to re-start the clock (yes, this all really happened). None of this made us more responsive to Change Requests in our customers' eyes, but the numbers got better! Good metrics must be carefully constructed to encourage good behaviors and discourage bad ones.
I gave examples of how "metrics" can cause big problems. Now the latest buzzword is Key Performance Indicators, but many people use "KPIs" and "metrics" interchangeably. This causes problems when we really need true Key Performance Indicators, not just more metrics. The following are ways to separate "True" Key Performance Indicators from other metrics.
True KPIs are aligned to the company's long-term objectives and what Customers Care About. If it does not pass this test, then it's just a metric.
Having just a critical few helps us keep our focus. If a metric is not mission critical, it's just a metric. Our ability to process data is limited, so we need just a few KPIs. Psychological studies have shown that our short-term memory can hold 7 things, but I still can't find my car keys! Less is more for KPIs.
True KPIs should have the same definitions and terms at all levels of the organization. If we’re all singing the same tune, it’s much easier to join in. KPI data is meant to be shared: employees at all levels should be able to engage with our CEO in conversations about the same metrics, and a CEO should be comfortable reading the same metrics as their employees.
True KPIs should focus on outcomes and show us if we are effective and efficient. We can always work to find and improve the most critical upstream factors and measure those once we know what gives the best outcomes. Some prefer to talk about leading indicators of success vs lagging indicators. Lagging just sounds unattractive, doesn’t it? Instead of Lagging, I prefer to call them (Key) Performance Indicators.
Below are some typical product development Inputs, Processes, and Outputs in an IPO diagram. Looking at Outputs first will ground us in results. We need to measure them to ensure our customers are happy. Then, we can measure or at least monitor the Processes and Inputs.
A balanced scorecard will show performance in Cost, Quality, Timing, People and Customers. Obviously, we care about financials, but looking only at financials can drive the wrong long-term decisions, as shown earlier. Focusing on only one main factor can affect the others.
All non-financial metrics should still relate to the bottom line as much as possible. For example, if we measure timing against internal due dates, then the financial connection is not clear unless we pay a contractual penalty for being late. We could improve Days by stretching out our timelines or breaking large projects into smaller, faster ones, but it will not increase profits. However, if we measure timing in terms of how fast we can deliver more new products or features, then this relates directly to being first-to-market and the associated financial rewards.
True KPIs should be long term (5+ years), not the current fiscal year/quarter/month. There are usually plenty of monthly financial reports to keep everyone busy, but our best opportunities for big improvements are in the long-term strategy. Should we invest in more technical centers in developing countries? Is our new product development process delivering better results than before? What improvements would affect results in the most positive way?
True KPIs should be based on objective, raw data with a meaningful denominator. No surveys. No Red-Yellow-Green. No categories A-B-C-D. No Pass/Fail against specifications or standards. If everyone is at or near 100% of a standard, then how do we find the very best? Maybe the standard is not a big challenge? It’s only human nature to want to know when we are “doing good enough”, so we can rest. But "continuous improvement" means there are always more opportunities and we never stop pursuing them.
A good metric is based on direct measurement, with results like 1.24, where 2.48 is exactly 2 times 1.24. This is called a Ratio Scale of Measurement in statistics, and it provides the most useful data for analysis, enabling much more meaningful conclusions from smaller data sets than the no-no's above.
A meaningful denominator provides context to the measurement. Measuring Days to deliver new products leads to discussion about how bigger projects need more Days. It’s more meaningful to measure Days per EQU (equivalent units) for throughput. This can be balanced with (work) Hours per EQU. We could flood a project with resources to improve Days per EQU, but it would show badly on Hours per EQU.
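The balance between the two denominators can be sketched with hypothetical numbers (the EQU size, staffing levels, and day counts below are all assumptions for illustration, not data from the article):

```python
# Illustrative sketch: flooding a project with resources improves
# Days per EQU (throughput) but worsens Hours per EQU (efficiency).

def days_per_equ(calendar_days, equ):
    """Throughput: elapsed calendar days per equivalent unit of work."""
    return calendar_days / equ

def hours_per_equ(work_hours, equ):
    """Efficiency: total labor hours per equivalent unit of work."""
    return work_hours / equ

equ = 10.0  # project size in equivalent units (assumed)

# Normal staffing: 100 calendar days, 2 people x 8 h/day x 100 days = 1600 h
normal = (days_per_equ(100, equ), hours_per_equ(1600, equ))

# Flooded with resources: 60 calendar days, 6 people x 8 h/day x 60 days = 2880 h
flooded = (days_per_equ(60, equ), hours_per_equ(2880, equ))

print(normal)   # (10.0, 160.0)
print(flooded)  # (6.0, 288.0) -- faster per EQU, but far less efficient
```

Reporting the pair together keeps either gaming strategy visible: a team cannot quietly trade one for the other.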
True KPIs allow us to establish benchmarks across the company. We need to clearly see where the best performance is happening, so we can go there and share best practices with each other. For example, we can't compare Labor Cost per EQU, because labor rates and exchange rates are driven by factors we do not control. However, if we compare Hours per EQU, then it's much easier to compare directly and talk about best practices.
Continuous Improvement works so much better when we can clearly see our starting point, our progress, and the remaining opportunity. Using true KPIs over the long term enables a Continuous Improvement process to start AND continue over time. The diagram below shows how the process starts and ends with KPIs (aka BBP Metrics). True KPIs enable Gap Analysis where we compare best-in-class performers with our overall average. Benchmarking events are held to share best practices. Continuous Improvement projects track the implementation through management reviews and the executive change process. The benchmark is reset every year and the process starts over again.
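The Gap Analysis step described above can be sketched in a few lines. All site names, Hours-per-EQU values, and the 25% gap-closure rate below are hypothetical assumptions for illustration:

```python
# Hypothetical sketch of a simple Gap Analysis: compare each site's
# Hours per EQU against the best-in-class performer and the group average,
# then set next year's target by closing part of each site's gap.

sites = {  # site -> Hours per EQU (assumed illustrative data)
    "Plant A": 120.0,
    "Plant B": 95.0,
    "Plant C": 150.0,
}

best = min(sites.values())                  # best-in-class benchmark (lower is better)
average = sum(sites.values()) / len(sites)  # overall average for the gap discussion

# Gap Closure Target: close, say, 25% of each site's gap to the benchmark
closure = 0.25
targets = {site: value - closure * (value - best) for site, value in sites.items()}

print(f"benchmark={best}, average={average:.1f}")
print(targets)  # the benchmark site's target stays at the benchmark itself
```

When the benchmark is reset each year, re-running the same calculation on fresh data restarts the cycle, which is exactly why stable KPI definitions matter.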
The chart below shows how the same KPI can show continuous improvement that occurred over several cycles of the process above. This chart would be impossible if the metrics changed constantly.
A database is clearly the best way to collect and report on KPIs for a large organization, but in the early stages it's ok to model and test them in a spreadsheet. Below is an example KPI form. The input cells are color-coded. In this KPI, the only real data entry is the number of First Time Requirements tested and First Time Passes by month. First Time Quality % is calculated and compared to Target and Prior Year. Results are shown by Month and Year-to-Date. Targets are set based on Benchmark and Gap Closure Targets (far right). Detailed definitions are at the bottom.
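The First Time Quality calculation on that worksheet can be modeled in a few lines. The monthly counts below are hypothetical; the one real subtlety worth showing is that Year-to-Date must be computed from summed counts, not by averaging the monthly percentages:

```python
# Minimal sketch of the First Time Quality KPI described above, using
# hypothetical monthly counts.
# FTQ% = First Time Passes / First Time Requirements Tested.

tested = [40, 55, 30]   # First Time Requirements tested, Jan-Mar (assumed)
passed = [32, 50, 27]   # First Time Passes, Jan-Mar (assumed)

monthly_ftq = [100.0 * p / t for p, t in zip(passed, tested)]

# Year-to-date uses the summed raw counts, NOT the average of the monthly
# percentages, so bigger months carry proportionally more weight.
ytd_ftq = 100.0 * sum(passed) / sum(tested)

print([round(m, 1) for m in monthly_ftq])  # [80.0, 90.9, 90.0]
print(round(ytd_ftq, 1))                   # 87.2
```

Note that averaging the monthly percentages here would give about 86.97%, close to but not equal to the true YTD figure; with more unequal month sizes the distortion grows, which is one reason the worksheet stores raw counts rather than percentages.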
The example Enterprise Engineering KPI Scorecard below summarizes results from the individual KPI worksheets. You can see the connection in the First Time Quality row. Yes, I used red-yellow-green to highlight performance, but the raw data is always there.
Developing true Key Performance Indicators and using them to drive continuous improvement is best done with help from professionals with the right experience and background. This article shares as much as possible in a short article format, but there is much more to executing this properly in the real world.
Please connect with me if you'd like more information at: email@example.com
This is part of a series of articles about applying continuous improvement processes to measure and improve the performance of product development projects. Here is the complete list in recommended reading order.
Copyright 2018 Richard Crayne