We have looked at many lean assessment tools over the past 10 years or more – some for our parent and some for external clients. Many of these were “off the shelf” products available in the wider marketplace; others were more tailored to specific areas like manufacturing, supply chain and logistics, and even to customer-facing roles like sales and marketing.

The one thing that struck us most was that although they covered most or all of the areas we wanted to assess, they often covered lots of other areas as well. This meant we were either buying something over-specified for our needs, or worse, that we would be tempted to assess areas that didn’t need assessing because the tool gave us the capability to do so.

This might seem a small detail, but it helped us to identify a larger problem – unless you apply lean principles to your lean assessment activity, you can end up doing lots of non-value adding activity. This seemed a little counter-productive and led us to step back and think about what we really wanted our assessment tool to do.

A significant recurring criterion that emerged was “flexible to meet the needs of various stakeholders and applications”, which proved rather interesting. We help our parent, their supply chain and lots of non-automotive organisations with their lean journeys. Each of these parties is looking for different things, both across different organisations and within the different areas/divisions that make up each individual one. For example, we have one user of our lean assessment tool who wants to improve the lean maturity of their customer support operations. They have only recently started on this journey, and therefore want to know what level of maturity their operations are at. In this instance, a one-to-five scale works best, as not everything needs to be at four or five to achieve their mid-term goals. Conversely, in their manufacturing operations, which started their lean journey many years ago, they apply “yes/no” criteria to the assessment rather than a scaled approach.

You might argue that “yes/no” is simply the scaled version with everything rated as a one or a five. But when you are analysing the outcomes on a radar chart, a “three” that is really the average of passed and failed binary criteria sits alongside a “three” that is a genuine mid-scale maturity rating: two sets of “threes” side by side that mean totally different things.
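To make that concrete, here is a minimal sketch (the answers and numbers are invented for illustration, not taken from our tool) of how a set of yes/no answers, naively mapped onto the same one-to-five axis for plotting, can land on the same “three” as a genuine maturity rating:

```python
# Hypothetical scores for one assessment area, e.g. "standard work".
# A mature operation assessed with yes/no criteria: half the criteria pass.
yes_no_answers = [True, False, True, False, True, False]

# A newer operation assessed on a one-to-five maturity scale.
maturity_rating = 3

# Naively mapping yes/no onto the same axis (yes = 5, no = 1) for a radar chart
# produces the same "3"...
mapped = sum(5 if answer else 1 for answer in yes_no_answers) / len(yes_no_answers)

print(mapped)           # 3.0 -> "half the binary criteria are failing"
print(maturity_rating)  # 3   -> "genuinely mid-way through the maturity journey"
# ...but side by side on the chart the two threes tell completely different
# stories, which is why the two response formats are kept separate.
```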

In the end, we decided to develop our own assessment tool. Ironically, we have actually developed two versions of it – one with a “yes/no” response and one with a one-to-five maturity rating. We have also subdivided it into the different modules/question sets that suit manufacturing, service and support environments (although the lean principles are the same, the terminology and tasks being assessed often vary significantly, and if you want valuable data on which to make robust decisions, we have found there isn’t one specific approach that works everywhere).
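As a rough illustration of how such a subdivision could be organised (the module names, questions and structure below are hypothetical, not our actual question sets), each module carries its own question set and response format:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Module:
    """One assessment module: a question set plus the response format it uses."""
    name: str
    environment: Literal["manufacturing", "service", "support"]
    response_format: Literal["yes_no", "scale_1_to_5"]
    questions: list[str]

# Illustrative content only; real question sets are tailored to each organisation.
modules = [
    Module(
        name="Standard work",
        environment="manufacturing",
        response_format="yes_no",
        questions=[
            "Is standard work documented at each workstation?",
            "Is adherence to standard work audited on a defined cycle?",
        ],
    ),
    Module(
        name="Customer support flow",
        environment="support",
        response_format="scale_1_to_5",
        questions=[
            "How consistently are incoming requests triaged against agreed criteria?",
            "How visible is work in progress to the whole team?",
        ],
    ),
]
```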

A key element of all these modules, however, is the alignment with the lean strategy and goals of the organisation/area being assessed. You have to tailor the assessment criteria to the current and desired levels of lean maturity. This can then be evolved as the maturity of the area/organisation being assessed improves (after all, today’s current state is tomorrow’s baseline for measuring how much you’ve improved).
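One simple way to picture that tailoring, sketched below with invented figures, is to score each area against its desired maturity level rather than an absolute five, and to roll today’s results forward as the next baseline:

```python
# Illustrative current and desired maturity levels per area (one-to-five scale).
current = {"visual management": 2, "problem solving": 3, "standard work": 4}
desired = {"visual management": 3, "problem solving": 5, "standard work": 4}

# Focus effort where the gap to the desired level is largest;
# not every area needs to reach five to meet the mid-term goals.
gaps = {area: desired[area] - current[area] for area in current}
priorities = sorted(gaps, key=gaps.get, reverse=True)
print(priorities)  # ['problem solving', 'visual management', 'standard work']

# After the next round of improvement, today's current state becomes
# tomorrow's baseline, and the desired levels can be raised again.
baseline = dict(current)
```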

Once we had the tool’s requirements nailed, we started to look at how to apply it. This is where the value delivered by a lean assessment tool often lives or dies. It is important to remember that these assessment tools are applied by people, and the ratings they give for each area being assessed will depend not only on the assessment criteria but on the ability of the person to accurately rate it.

The tool therefore needs to be applied consistently, by people who know what they’re doing (if two people assess differently you will get two different outcomes, so we applied some “Gauge R&R” logic to this problem). What we found is that we had to mistake-proof the assessment (poka-yoke), otherwise assessments, whether of two areas at the same time or of the same area some months later, would be meaningless. One of the classic lean fundamentals came to the rescue here: standard work. By standardising the way the tool is used, coupled with error-proofed assessment questions and a skills matrix for the assessors, we found the consistency of assessment greatly increased. It is still not perfect, but it is a lot closer to genuine “like for like” than if we left the scores to a subjective view.
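A very lightweight version of that “Gauge R&R” thinking, sketched here with made-up ratings, is to have two assessors score the same area independently and flag any question where their ratings diverge by more than an agreed tolerance:

```python
# Hypothetical independent ratings of the same area by two trained assessors.
assessor_a = {"Q1": 4, "2": 3, "Q3": 5, "Q4": 2}
assessor_a = {"Q1": 4, "Q2": 3, "Q3": 5, "Q4": 2}
assessor_b = {"Q1": 4, "Q2": 5, "Q3": 5, "Q4": 3}

TOLERANCE = 1  # maximum acceptable difference on the one-to-five scale

disagreements = {
    q: (assessor_a[q], assessor_b[q])
    for q in assessor_a
    if abs(assessor_a[q] - assessor_b[q]) > TOLERANCE
}

# Any flagged question is a prompt to tighten the standard work and the
# error-proofed wording, rather than to argue about who is right.
print(disagreements)  # {'Q2': (3, 5)}
```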

Which brings me to my last point – your assessment tool must align with your strategic goals. In reality, you are assessing your organisation’s capability to deliver those goals through your lean assessment activity. The outcomes should tell you where to focus effort to deliver value more efficiently and effectively. You can then deploy your valuable, and often limited, improvement resources on the areas that will accelerate the achievement of your goals.

So to summarise, your lean assessment tool needs to assess the right elements of your business, do this in a consistent and high-quality way, and enable you to focus improvement on increasing the delivery of value, aligned to your strategic intent. That does not reduce the number of tools that are available, but it does significantly narrow the field to those that are tailored to meet your organisation’s needs. And that, in turn, reduces the non-value adding activity needed to turn your assessment findings into meaningful action plans. Which, after all, is why you’re doing it.