Max Cardew, business productivity improvement manager at the Commonwealth Bank of Australia and lean consultant to financial services, analyses the incorporation of lean into the world of professionals.
In contrast to the successful process improvements gained in back-office environments, lean six sigma (LSS) continues to struggle to deliver substantial results within professional departments. This isn’t the result of a failed methodology; rather, it shows that LSS practitioners are ill-equipped to take on these processes because of a shift in emphasis needed when developing and delivering LSS training curricula – the art of mastering process thinking and the ability to measure uncertainty.
It’s an encouraging position when an LSS deployment program in the financial industry is mature enough to shift its focus from the structured, transactional back-office process world and begin to infiltrate some of the last bastions within the professional space.
This is not a sign that institutions such as legal, credit and risk are intentionally laggard or purposely recalcitrant; it’s because we have failed to demonstrate value by creatively adapting and adopting our tools and techniques into something these professions can relate to.
We often attempt to streamline some of the peripheral processes and claim we have cracked this departmental nut, when we have only used some of our basic skills to project-manage a solution – typically, one the business owners had already identified. This exercise alone has the potential to undermine all expectations and benefits of a group-wide efficiency drive by confirming that process improvement adds little value. It also supports the premise that any significant improvement can only come from within the respective doctrines, i.e. legal will identify a legal solution, risk will identify a risk solution. Until we bridge this gap, LSS will struggle to make headway in liberating these processes.
This division stems from LSS practitioners building their business case around a measurement system and helping businesses learn to see where process improvements are possible, by identifying how the process widget is performing with respect to how a customer or stakeholder may see it. However, in the professionals’ world, this widget can remain elusive.
Therefore, the first challenge is to help professionals see their world from a different perspective – moving away from perspectives arising from past models. This exercise is provocative as it forces them to reassess their role.
To process practitioners, it’s simple: we agree that within a process there is a widget that gets worked on, resulting in a more valuable item for the intended recipient. This could be something material, as in a manufacturing product line, or immaterial, such as a mortgage being processed from front to back office.
However, identifying widgets within the professionals’ world proves difficult, and the widget is not obviously a single concept. Professionals see their world as full of probabilities and complexities beyond simplification and codification. They have spent years perfecting a vocabulary that qualitatively describes these complexities in a manner that minimises reference to any standard. Therefore, any methodology that expresses itself as a tool to identify process improvement opportunities by measuring performance against customer expectations has little chance of gaining traction within these age-old institutions.
Smoking out our widgets
Apart from the ambiguity of being able to describe what they do and perhaps some reluctance from professionals, why are these widgets so hard to identify?
Two reasons. Firstly, these latent widgets may appear to be unquantifiable. Certainly, they are incorporated within the language of our professionals and employed every day, although only as principles, with multiple interpretations depending on how they are used. This subjectivity presents the first challenge to our practitioner – especially as the techniques required are generally not included in the standard LSS training kit.
The second reason for this elusiveness is that these latent widgets are a means to an end and not the end itself. To demonstrate: the goal in collections is to minimise Net Bad Debt. In order to achieve this, the department needs to foster and deliver a Kept Promise. In the world of operational risk, the goal of the department is to minimise risk exposure and thereby minimise losses. This is delivered by developing risk management plans that provide reasonable assurance back to the business generating the risks.
To the professionals, surfacing these widgets can feel like entering the unknown. Teasing them out will require creativity, as there is no prescribed formula – only high-level guides:
- You must stay true to your process beliefs. Professionals are convincing when it comes to describing how their world is not process driven. Take note of behaviours that contravene LSS principles. While navigating through uncertainties, throwing out radical ideas such as zero defects will not win you friends.
- Do your research. Crystallise what clients/stakeholders expect of the profession. Be careful when broaching concepts such as the voice of the customer (VOC), and be prepared to debate its requirement/value to the overall project objectives.
- Resist getting involved in the details for as long as possible. Defining their job is a complex discussion, and you will become immersed in too much detail to process it. Get them to describe the work in layman’s terms.
- Map the blocks of work chronologically. Make sure you represent the blocks of activity in the sequence that builds value, rather than those that look and feel like rework, i.e. wasted work practices from a process purist’s perspective. This is where to expect opposition, as clients may not be in the right frame of mind and will defend the practice as necessary.
- Identify the point on the map that polarises the work stream into two distinct characteristics, such as proactive and reactive. Craft a definition of this point of interest using the language of the business. (Hint: sometimes it’s easier to describe what that point in the process is not, e.g. reasonable assurance does not imply internal control systems will frequently fail.)
Developing process eyes
These maps prove a process does exist and provide a view of the all-goes-well scenario. This helps to position the wasted-practices argument.
Measuring: the right tools for the right job
If something can be observed, it can be measured. The LSS training syllabus needs to be commensurate with the type of project opportunities on offer. Over the years of productivity improvement, we have evolved training programs to accommodate businesses’ time pressures and to minimise the down-time practitioners experience in training. As projects have focused on the low-hanging fruit, toolkits have been stripped back to their bare minimum.
The maturity of LSS deployments has shifted considerably over the last decade, and the blocks of activity described above are hard to pin down with the standard tools and techniques available today. Yes, we can compare how many accounts successfully progressed versus failed to progress to the next phase in collections, but how do we quantify assurance? This is where we need the LSS statistics courses of the 1990s, as probability has dropped right off the contemporary training radar.
We dabble with probabilities when we deal in relative numbers such as percentages, or maintain a p-chart – a control chart tracking the proportion of defective items – but the tools I refer to review probabilistic data from multiple angles rather than taking a one-dimensional view of a relationship. These tools lift understanding of an event by looking at the data through different lenses, such as observed as opposed to expected values, and conditional probabilities – providing a pragmatic, holistic perspective.
Two tools modern process practitioners are denied are the kappa assessment and Bayes’ theorem. As with many of our borrowed LSS tools and techniques, the philosophical underpinnings of both kappa and Bayes are rich and their mathematics stunningly simple. In their most basic form, they are both algebraic expressions that can fashion subjective or unknown information into valuable insights into decision quality and reasonable predictability.
Kappa is a ratio tool, measuring the relationship between how one subject matter expert (SME) actually interprets a situation (the observed judgment response) and how they should have – or how much variation there is across the whole team – relative to the agreement that would be expected by chance (the expected value).
For example, the assessment and rating of risks is purely at the discretion of our risk practitioner, i.e. they have freedom to exercise judgement and to make decisions without referring to a specific rule. There are guides practitioners must refer to, but this discretion challenges any process improvement approach wishing to describe the performance of a process by comparing outputs against a standard.
Kappa provides a measure of the degree to which judges concur in their respective assessments of a scenario – a certain risk, which controls are best, how to test them. The tool can go one step further, identifying and helping to prioritise which field requires more attention when developing a training plan aimed at achieving closer alignment – in engineering terms, recalibrating and providing more consistency in ratings and assessments.
The computation of a kappa study is not dependent on access to statistical software packages such as Minitab; a basic spreadsheet will provide the analysis.
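To show just how little machinery is needed, here is a minimal sketch of a Cohen’s kappa calculation for two assessors, written in plain Python. The assessor names and the ten hypothetical risk ratings are invented purely for illustration:

```python
# A sketch of Cohen's kappa for two assessors rating the same scenarios.
# All ratings below are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    # Observed agreement: proportion of scenarios rated identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: chance both pick the same category,
    # based on each rater's own rating frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

a = ["High", "High", "Low", "Medium", "Low", "High", "Medium", "Low", "Low", "Medium"]
b = ["High", "Medium", "Low", "Medium", "Low", "High", "High", "Low", "Medium", "Medium"]
print(round(cohens_kappa(a, b), 3))  # prints 0.552
```

A kappa near 1 means the assessors agree far more often than chance would predict; a value near 0 suggests their agreement is largely coincidental – exactly the recalibration signal described above.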
Bayes’ theorem is concerned with conditional probability. It tells us the probability that you may experience an outcome (wanted or unwanted), given that some other event (directly or indirectly related) has happened. Unlike a static prediction that relies only on an event having happened in the past, this is a learning tool which requires us to think through problems more carefully, even employing gut-level approximations which on their own would be too crude to use. It provides the risk practitioner with a way to factor subjective judgements into objective equations.
To compute a Bayesian prediction, all you require are three simple pieces of information. First, we rely on historical information (our prior probability), which is accessible, i.e. there are known risk drivers within the industry and these are standard within the risk community.
Secondly, there is the information employed in any risk evaluation: given your understanding of the way the business manages this risk, what is the probability of an incident materialising?
The last piece forces predictors to rationalise their thought process by estimating the probability of observing the same circumstances when the event does not happen – the other side of the equation. This is not simply the inverse of the original prediction; it is more about giving the benefit of the doubt.
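Putting the three pieces together is a single line of algebra. The sketch below uses purely hypothetical figures – a 5% prior chance of an incident, warning signs visible 80% of the time when an incident occurs, and the same signs still appearing 20% of the time when it does not:

```python
# A minimal sketch of a Bayesian update from the three pieces of
# information described above. All figures are hypothetical.

def bayes_update(prior, p_signs_given_event, p_signs_given_no_event):
    """P(event | warning signs observed), via Bayes' theorem."""
    # Total probability of seeing the signs at all, from both sides.
    p_signs = (p_signs_given_event * prior
               + p_signs_given_no_event * (1 - prior))
    return p_signs_given_event * prior / p_signs

# 1. Prior: industry history suggests ~5% chance of an incident.
# 2. Given an incident, these warning signs appear 80% of the time.
# 3. The other side: absent an incident, the signs still appear 20% of the time.
posterior = bayes_update(0.05, 0.80, 0.20)
print(round(posterior, 3))  # prints 0.174
```

The signs lift the estimate from 5% to roughly 17% – well above the prior, yet nowhere near the certainty a naive reading of an 80% likelihood might suggest, which is the counterintuitive behaviour discussed next.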
Depending on the probabilities involved, the resulting prediction can appear counterintuitive with respect to the methods normally employed; however, what we end up with is a closer approximation of the truth, as we begin to remove biases and can update the formula as circumstances change.
Employing the probabilistic approach has the potential to provide insights by rolling up your probable findings from each of the blocks of work to the point that polarised your earlier mapping exercise, and to represent quantitatively what has only been speculation in the past. Measuring your latent widget using either of the above tools is well suited to serving two masters at the same time – the knowledge and the process worlds.
LSS practitioners shouldn’t need to re-enrol in statistics programs and immerse themselves in academic papers to employ these tools; the LSS community has only just started to make gains in assimilating with the business, and adopting a theoretical approach will drive the program backwards when trying to translate statistical findings into something the business needs to understand.
The introduction and removal of tools and techniques in the LSS curriculum needs consideration. Take the removal of design of experiments (DOE) from the transactional world; this has left the black belt (BB) less informed and deprived the practitioner of the edge of being someone who can offer alternative practical approaches. The transactional world is more dependent on discretion and on long lead times before any observations can be made, and therefore doesn’t lend itself to a crisp, decisive tool such as DOE. But it was the lesson within the concept of dealing with multiple factors at a time, and the ability to identify optimal solutions within factor interactions, that armed our productivity practitioners – providing them with more depth when considering the optimal solution.
This same approach can be folded into BB training with respect to kappa and Bayes’ theorem, i.e. teasing out not only what the immediate data is telling us but also what it is not. Constructing a set of basic questions and applying a simple structure to both the objective and subjective information available will provide a more indicative view, not only of how the latent widget is behaving from a process perspective, but also of the complexities through which our professionals view their world – and presenting it in a succinct way that gets all the heads nodding. Learning to see!
Being products of two merging worlds, latent widgets can be difficult to pin down. The professional world will tend to describe task outcomes along a process, whereas the process practitioner will try to construct an end-to-end process from fragmented components. The identification of latent widgets is not covered in any LSS training program, nor do any of the multitude of LSS reference manuals offer a guide. It requires the process practitioner to enter the professionals’ world, listen attentively, and translate those professional milestones into process milestones using the vocabulary of the business.
It is one thing to gain common agreement on the widget going through the process; it is another to measure it. The professional world deals in uncertainty, and the only way of measuring this uncertainty is through probabilities. By broadening our appreciation of this topic we will equip our process practitioners with new tools and techniques, and delight our clients with insights into where improvement can be made.