Malcolm Jones, of SMMT, has been involved in lean manufacturing research and development for over 20 years and is co-author of Learning from World Class Manufacturers. In this article he explores Mike Rother’s idea of value stream mapping and what we can learn.

Value stream mapping is ubiquitous in current lean implementation, despite not being part of the original Toyota production system as developed by Taiichi Ohno and colleagues. Instead, the technique was invented by Mike Rother, a US researcher who studied Toyota and in particular the practices of its supply chain development engineers. Rother noted that these engineers drew maps of two types of flow, material flow and information flow, and also distinguished between value adding process time and non-value adding time in production. His rather elegant solution for integrating and explaining these concepts is what is now known as a value stream map, as described in his book Learning to See.

In order to calculate the value adding percentage and leadtime on his value stream map, Rother used Little's law of queues. Little's law is a result from queueing theory; a classic application is in retail banking, where it is used to calculate the length of queues and the number of counters that need to be opened to keep customer waiting time at an acceptable level. One early, consistent application of Little's law to manufacturing was what Ford Motor called the dock to dock calculation, DTD (receiving dock to shipping dock). Before the application of DTD we would simply use an average empirical leadtime to generate a value adding percentage. The DTD calculation is a method to convert inventory to time, and specifically to leadtime. This can then be compared to the value adding processing time for an item to generate a value adding percentage. As a lean calculation it is very instructive, as it focuses on leadtime and on the relationship between inventory and leadtime, both key lean concepts.

This is where Little's law of queues comes in. Little's law states that leadtime is equal to work in process divided by the end of line rate – the rate at which the queue diminishes. The end of line rate is averaged over time, so if we ship 5,000 assemblies per month, that is 250 per day (assuming 20 production days). If we have 4,000 pieces of inventory in our production system, then the leadtime is 4,000/250 = 16 days. This definition of leadtime assumes FIFO and basically says that if new raw material is introduced and queues behind all the other material in process, it will take 16 days to appear in finished goods inventory – regardless of any processing time. If all our current WIP is black and a customer orders a white one, then the white material will have to queue for 16 days behind all the black ones until it can leave the line, unless expedited.
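The calculation above can be sketched in a few lines of Python (the function name is my own, chosen for illustration):

```python
def little_leadtime_days(wip_units, end_of_line_rate_per_day):
    """Little's law: leadtime = work in process / end-of-line rate."""
    return wip_units / end_of_line_rate_per_day

# 5,000 assemblies per month over 20 production days = 250 per day
rate = 5000 / 20
print(little_leadtime_days(4000, rate))  # 16.0 days
```

Note that no cycle or processing times appear anywhere: the leadtime falls out of the inventory level and the shipping rate alone.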

Ford used this calculation to define the dock to dock time for a particular control part, such as an engine block. The procedure is to count all the engine blocks in the factory, in whatever configuration they appear, and then use the rate at which engines leave the plant to calculate the length of the queue of engine block inventory in the factory. If, as in the example above, we have 4,000 engine blocks and the end of line rate is 250 per day, we have a 16 day leadtime. Comparing this to the processing time for an engine, say eight hours, we can then derive a value adding percentage of 8/(16 days x 16 hours) = 8/256 = 3.125%. This assumes two eight hour shifts per day, i.e. a day is equal to 16 hours of opportunity to add value.
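A minimal sketch of the DTD value adding percentage, using the engine block figures above (function and parameter names are hypothetical):

```python
def value_adding_percentage(process_hours, leadtime_days, shift_hours_per_day):
    """Compare value adding processing time with total dock-to-dock leadtime."""
    leadtime_hours = leadtime_days * shift_hours_per_day
    return 100 * process_hours / leadtime_hours

# 16 day leadtime, two 8-hour shifts = 16 hours of opportunity per day
print(value_adding_percentage(8, 16, 16))  # 3.125
```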

This is a perfectly reasonable calculation and shows us the amount of waste in the production system based on the amount of inventory, but problems arise when this total leadtime is divided up on the timeline of a value stream map. The leadtime is totally independent of the cycle times or process times; it is derived solely from Little's law. The confusion arises because in his mapping process Rother identifies piles of inventory and then uses Little's law and a single end of line rate to convert these piles of inventory into time, listing these on the stepped timeline at the bottom of the map. Again, this is useful in illustrating that inventory is a source of extended leadtimes, but the calculation itself is invalid: the leadtime for each pile of inventory cannot be derived in this way, because it is a stock and flow calculation, like the time taken to empty a bath at a given rate, not an additive calculation at each stage of the process.

This has led people to introduce an alternative calculation, as illustrated in the example below, taken from commonly available lean training material. Realising that one single end of line rate cannot be allocated to different inventories, it has become practice in some circles to calculate the leadtime for each inventory pile based on the processing time of the subsequent process. In this example, the 430 items before mixing are converted into 4,300 minutes of leadtime because the mixing process has a cycle time of ten minutes.
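The training-material convention, which this article goes on to argue is invalid, can be reproduced in Python like this (names are my own):

```python
# Training-material convention: each inventory pile is converted to
# "leadtime" using the cycle time of the process that consumes it.
# This is the calculation the article criticises, not an endorsement.
def pile_leadtime_minutes(pile_units, next_process_cycle_time_min):
    return pile_units * next_process_cycle_time_min

print(pile_leadtime_minutes(430, 10))  # 4300 minutes before mixing
```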

Factory data

A similar calculation is performed at each stage (there is an arithmetic or typographical error before the finishing stage, but no matter) and these are then added to give a total leadtime of 31,677 minutes. The shipping calculation is based on 35 units shipped every day, converted using 1,440 minutes in a day, so one unit is shipped every 41 minutes. The problem is that this leadtime is meaningless. The total time generated is simply the sum of the times it will take to process the current inventory piles to the next stage in the process, i.e. to move each pile of inventory one stage to the right; it is not a total elapsed time or leadtime through the process. This can be seen because the times run in parallel: the inventory at each process is depleted simultaneously, not in sequence.

If we use Little's law as recommended by Rother, then the calculation is simply the total inventory divided by the end of line rate, in this case 35 units shipped per day. This gives 2,130 units divided by 35, approximately 61 days. If this is converted into minutes we get either 61 x 1,440 = 87,840 or 61 x 460 = 28,060. The former is based on a 24 hour day, the latter on the 460 minutes of shift time available. The latter is preferable when calculating a value adding percentage, as it relates to the 460 minutes of opportunity to add value each day, so we are comparing actual value adding time with the opportunity to add value. This figure is in the same ballpark as the 31,677 minutes given in the example as the total leadtime, but that total was calculated using a 24 hour working day for the shipping process, which is dubious in itself, so the proximity is more by luck than judgement, and it is still out by about 13%.
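As a Python sketch, using the figures quoted above (the leadtime is rounded to whole days, as in the text):

```python
# Little's law applied to the training-material example
total_inventory = 2130   # units, summed across all inventory piles
shipping_rate = 35       # units shipped per day (end of line rate)

leadtime_days = round(total_inventory / shipping_rate)   # 61 days
minutes_24h = leadtime_days * 1440    # 87,840 min, using a 24-hour day
minutes_shift = leadtime_days * 460   # 28,060 min, using the 460-minute shift

# The additive timeline in the training material gives 31,677 minutes,
# about 13% higher than the 28,060-minute Little's law figure.
print(minutes_24h, minutes_shift)
```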

It should be emphasised that the error is not Rother’s. Rother uses the DTD calculation in a perfectly reasonable way, except he causes confusion by allocating leadtime to individual inventory piles on the VSM, which is a theoretical construct, and then draws a timeline which unwitting readers assume to be additive.

If we are to generate a valid value adding percentage, the only proper calculation is to apply Little's law using the available value adding time – the available time used in an overall equipment effectiveness (OEE) calculation – and to compare this to the actual value adding processing time. In the example above this would give 35 minutes divided by 28,060 minutes, approximately 0.125%.
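This final figure can be checked with a one-line calculation, again using the numbers from the example:

```python
value_adding_min = 35    # total value adding processing time (minutes)
leadtime_min = 28060     # Little's law leadtime on 460-minute days

pct = 100 * value_adding_min / leadtime_min
print(f"{pct:.3f}%")  # prints 0.125%
```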

What, then, is the solution to this confusion? The easiest solution is to be very explicit about the application of Little's law in deriving the VSM leadtime, and not to divide it up on a timeline. The map will still indicate the location and quantity of inventories and provide focus for inventory, and hence leadtime, reduction, but will no longer introduce potential for error. Similarly, we can add the value adding processing times (which are additive) without the need for a timeline on the map. The leadtime and value adding percentage can still be indicated on the map as they are now, and current and future states can be compared. All we have lost by ditching the stepped timeline is a source of confusion and some dubious mathematics.

Learning to See is an appropriate title for Rother's book, and he has done the lean community a great service in developing the value stream mapping tool. Unfortunately, the artificial and unnecessary stepped timeline he introduced at the bottom of the map obscures our sight and diminishes the meaning and value of the technique. Our value stream maps would be better off without it.