It is All Taylor’s Fault

The Industrial Revolution brought about mass production of parts and products. The concept behind mass production is: break the job into a series of well-defined components (interchangeable parts), and set up to produce those parts in large quantities to get economies of scale. Millions of identical parts can bring down the price of a completed product. The cost of setting up a factory is high, but it is recouped through small savings multiplied across many instances.

Frederick Winslow Taylor applied these mass production ideas to work and called it “Scientific Management”. He performed time and motion studies to determine exactly the most efficient way to perform a particular piece of work, and had workers do the same thing over and over in exactly the same way. Again, economy from large runs of identical work.

Workflow and BPM (and software applications in general) come from this school of Scientific Management. The idea is to use process discovery (the equivalent of time and motion studies) to determine the best and most efficient way to do one specific process, and then to implement a program that enforces that process, recouping the cost over a number of instances.

Scientific management can only be applied to processes which are very repeatable and predictable. A process or job that is done only once, or is unpredictable, would never see any benefit. Clearly the up-front analysis cost is high, and can only be recovered if the resulting process is repeated a number of times. This is the same as setting up a factory to produce a single unit: it would be more effective to simply custom-build the single unit, because you would not incur the overhead of setting up the factory.
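
To make the economics concrete, here is a rough back-of-the-envelope sketch in Python. The numbers are made up purely for illustration; the point is only the break-even logic: the up-front analysis and setup cost pays off only once enough identical instances run.

```python
# Break-even sketch with illustrative, made-up numbers.
# setup_cost:     one-time cost of analyzing and automating the process
# cost_ad_hoc:    cost of handling one instance by hand, with no automation
# cost_automated: cost of handling one instance with the automated process

def break_even_instances(setup_cost, cost_ad_hoc, cost_automated):
    """Number of instances needed before automation becomes cheaper than ad hoc work."""
    savings_per_instance = cost_ad_hoc - cost_automated
    if savings_per_instance <= 0:
        return None  # automation never pays off
    return setup_cost / savings_per_instance

n = break_even_instances(setup_cost=50_000, cost_ad_hoc=100, cost_automated=20)
print(f"Automation pays off after about {n:.0f} instances")
# 50,000 / (100 - 20) = 625 instances; a one-off job never recovers the setup cost.
```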

This leads to an interesting “blindness”: as people analyze their workplace with scientific management, they look for predictable, repeatable processes to improve. Non-repeated or unpredictable activities are often ignored, for the same reason that you would never consider building a factory for a one-off custom job. Human activities that cannot be analyzed with scientific management are often called “overhead” or “putting out fires”.

These kinds of activities are not even considered “work” by some people. Does an executive do “work”? Many would say that the executive spends all their time making decisions, not doing work. It is not that people think making decisions takes no effort; it is just that this kind of effort does not fall into the category of “work”. OK, I have stretched the idea a bit far. We do call this “Knowledge Work”, and that is a category of work, but I have found that there is a bias among people who specialize in “work processes”: they are blind to knowledge work as a work process, and so it falls outside their category of work.

Since mass production, we have seen a lot of movement toward “mass customization” and “lean production”. Toyota has shown that small lot sizes and the ability to change production quickly and often can be very effective. The “Just In Time” movement is a rejection of mass delivery and mass production in favor of producing or delivering just what you need for the immediate time period. For this to work, the set-up costs have to be suitably modest, so that you can pay for the setup with a smaller run.

I believe it is fair to say that current process technology (Workflow and BPM) is based on mass-production, Taylorist Scientific Management principles. This is not bad. It works well for processes which are predictable and repeatable, and companies today are seeing significant payback from these investments.

However, there is a lot of work which is not predictable and not repeated. For example: the doctor’s job of diagnosing a rare disease; the negotiation of a treaty or a corporate merger; the investigation of a crime; or the prosecution of a court case. These are jobs that cannot be predicted at the time the job is started. It is not simply that we have not gone to the trouble of mapping the process; the process is not knowable, because the details that affect its course have not yet been discovered. A “super process” which encompasses all possible outcomes, and branches at the moments the details become clear, is not possible because of the “butterfly effect”: the number of different possible contributing factors is so huge that it would never be economically feasible to track them all. In the doctor’s example, the number of treatments is always expanding, and information about the success of treatments is always expanding, so the doctor’s own experience and intuition become critically important and could never be externalized as a process. We call this “Knowledge Work”, and it is distinct from “Routine Work”, which is predictable and repeatable.

All hope is not lost. There are techniques which organizations can use to support knowledge work. Those techniques are a radical departure from process technology today. This non-repeated, non-predictable work cannot be effectively analyzed with Scientific Management, and it will not be effectively supported by technology based on Scientific Management. Knowledge work requires an approach that does not assume there is one optimal process, but instead assumes that every case will have a different process. Instead of “building a factory” for identical processes, the user is empowered to extend and adapt the process as a normal part of work, in small lots. Processes are designed “just in time” when they are needed, not beforehand.
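
To make the contrast concrete, here is a minimal sketch (plain Python, not any particular product’s API) of the adaptive-case idea: instead of executing a process graph fixed up front, a case starts with only a goal, and the knowledge worker adds and adapts tasks just in time as the situation unfolds.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    done: bool = False

@dataclass
class Case:
    """A single case: no predefined process, just a goal and a growing task list."""
    goal: str
    tasks: list = field(default_factory=list)

    def add_task(self, name):
        # The knowledge worker extends the "process" at any time, for this case only.
        task = Task(name)
        self.tasks.append(task)
        return task

    def complete(self, name):
        for task in self.tasks:
            if task.name == name:
                task.done = True

# Each case gets its own emergent process, planned just in time.
case = Case(goal="Diagnose and treat the patient")
case.add_task("Initial examination")
case.complete("Initial examination")
# A new finding prompts a task that nobody could have modeled beforehand:
case.add_task("Consult specialist about newly published treatment")
```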

This is the concept behind Adaptive Case Management.  It is not simply a new kind of BPM or an extension of BPM.  It comes from a completely different theory of supporting work.  It is a “Non-Taylorist” process technology.

28 thoughts on “It is All Taylor’s Fault”

  1. Pingback: Tweets that mention It is All Taylor’s Fault « Thoughts on Collaborative Planning -- Topsy.com

  2. On another blog an interesting comment was posted that relates to this, saying that unpredictable processes do not exist.

    http://www.bp-3.com/blogs/2009/12/process-trends-from-keith-swenson/

    I put a response there that included this:

    When I started studying Physics there was a belief that if you could measure the preconditions well enough, you could predict the weather arbitrarily far into the future. It would simply be a matter of measuring the initial conditions to sufficient precision. This belief is based on the assumption that small errors in measurement will become less significant over time.

    Chaos theory introduces the concept of “sensitive dependence upon initial conditions”. Small errors in measurement “blow up” to make an overwhelming effect over time. The idea is that a measurement error as small as the flapping of a butterfly wing might build in importance to the point that the weather a few weeks into the future becomes unpredictable.

    For processes, let’s call this “sensitive dependence on EXTERNAL conditions”. Some processes are sensitive in this way.

    The process for a patient being admitted to a hospital is not predictable. The patient, for instance, could have a condition for which a treatment has just been discovered, and could never have been anticipated by a process. More likely, the doctor just found out about it, or maybe was persuaded based on other cases he personally has seen. No matter how good the doctor’s diagnosis is, it can’t be known perfectly, and it is not unusual for treatment to be cancelled or changed mid-course due to “complicating factors” which were not predictable. Even when the diagnosis is perfect, such as infection by a particular germ, the microbes/viruses themselves mutate and produce unpredictable effects.

  3. I really liked your post on Taylorism. You may want to revisit these ideas with a Make-to-Stock versus a Make-to-Order analogy. Standardized repeatable processes are Make-to-Stock while Knowledge Based Processes are Make-to-Order. I still believe they are all processes, just viewed from a different perspective: from the Control perspective versus from the Goal perspective. Knowledge based processes tend to be captured and managed from a desired goal state point of view. The participants simply apply actions and procedures to the “case” to achieve that desired end state. The current state (context) is always changing, influenced and disrupted by external events. We have studied these equivalent problems for many years in AI. The procedural vs declarative approach debate was all the rage when I was studying. Various Goal Oriented and/or Context Sensitive Planning solutions were proposed… anyway, just some food for thought.
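
    To illustrate the goal-oriented (declarative) view described in this comment, here is a minimal sketch in Python; the case data and actions are invented for illustration only. The goal is a condition over the case data, and completion is judged by whether that condition holds, not by which sequence of steps was followed.

    ```python
    # Declarative, goal-oriented view: the goal is a condition on the case data;
    # participants may apply actions in any order until the condition holds.

    case_data = {"diagnosis": None, "treatment_plan": None, "patient_stable": False}

    def goal_reached(case):
        # The desired end state, independent of how it was reached.
        return case["diagnosis"] is not None and case["patient_stable"]

    # Actions chosen by participants as the context changes, not by a fixed control flow.
    chosen_actions = [
        lambda c: c.update(diagnosis="rare infection"),
        lambda c: c.update(treatment_plan="antibiotic course"),
        lambda c: c.update(patient_stable=True),
    ]

    for act in chosen_actions:
        if goal_reached(case_data):
            break
        act(case_data)

    print("Goal reached:", goal_reached(case_data))  # True once the end state holds
    ```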

  4. Keith:

    Yes, you pinpoint the problem of BPM, but in reality most (ad hoc) projects are actually managed, and the goal of the project manager is precisely to scout the states that need to be reached and the activities that will transition the project to those states.

    Companies have even developed SDLCs/PMLCs (agile or otherwise) to help define repeatable states that need to be reached with the corresponding activities, roles…. A goal is simply a particular kind of state.

    JJ-

  5. I wrote about the Tayloristic aspect of SOA/BPM in 2007 in http://isismjpucher.wordpress.com/about/why-soa-does-not-deliver-1/ and therefore I do agree wholeheartedly on Taylor.

    Yes, one can group processes like manufacturing into A) structured make-to-stock and B) unstructured make-to-order, but I propose that this is a fallacy. Why? Because manufacturing physical goods or dealing with classical physics is statistically predictable, while thermically chaotic or complex adaptive systems are not. The problem with this grouping is that A) is considered solved with BPM and B) with Case or Project Management and Enterprise 2.0 tools. So nothing new is needed …

    Businesses above a certain size are, however, social systems of self-organizing entities with nonlinear feedback and thus inherently unpredictable. They are not controllable or foreseeable. Making a complex system do what you want it to do can at best be achieved temporarily, until things change. Business processes destroy the natural resilience that a business has to environmental changes. The problem cannot be solved by tuning parameters but only by adapting processes. You can drive a system crazy by imposing artificial parameter information streams. Real-time information flow will put any system into wild over-control gyrations. Less is more.

    The control freaks mess with assumed leverage points and encounter policy resistance due to goal conflicts and much more. The key is goals, but if it is no more than a checkmark set by the process owner then it is worthless. A goal is defined by one or more rules related to multiple entities in the case. To reach a goal one has to consider that the process is a timeline series of events that can and will be executed arbitrarily.

    What we work for is more than simple BPM and more than simple CM or PM. Advanced Case Management focuses on providing information transparency between process owners, employees and customers and on extracting reusable process knowledge during execution.

  6. Pingback: BPM HOJE » BPM e Processos Flexíveis

  7. Pingback: Chasing Rabbits with BPM « Thoughts on Collaborative Planning

  8. You make a good point in separating processes into “Knowledge Work” and “Routine Work” and matching these with appropriate BP approaches. I wonder if there are other categories – for instance, to use the phrase coined by James Taylor and Neil Raden, ‘smart work’. The difference I am proposing is inserting ‘decision making’ intelligence as ‘content’ into any one of the process models you propose. By inserting decision making as content into a process, whether a more rigid and long lived process as implied by routine work, or a short-lived, adaptive process as implied by knowledge work, we can allow the process to service a greater universe of requirements. This is especially important when the process crosses organizational boundaries, and so must meet the individual needs of more than one party concurrently.

    By relegating decision making to a content layer we allow the factory to service a greater variety of participants and needs. Counter-intuitively, the factory is also usually cheaper to build, because the inherent separation of interests between the decision making ‘knowledge providers’ and the process designers removes a tension found in many projects that increases project complexity, cost, and risk.

    Before microchips were invented, my mother had a sewing machine that did completely different stitching depending on what plastic bobbin was inserted. The machine was a multi-use machine by virtue of the fact that my mother could change the bobbin at will – in effect re-programming the factory.

    Smart processes do not fit the predictable routine work, nor the unknowable knowledge process patterns. What they do is execute process steps from a known palette of possibilities that can be made available by the factory, with the user driving the actual process – the sequence and exact configuration of each step – by decisions that are loaded as content and replaceable at will.

    If we build factories today using this principle, we could consider them to be ‘smart’ factories, aka smart processes.
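
    One way to sketch the “decisions loaded as content” idea from this comment (all step names and rules below are hypothetical, not any specific rules product): the process steps form a fixed palette, while the decision logic that selects among them is plain data that can be swapped out, like changing the bobbin, without rebuilding the process.

    ```python
    # Process steps are a fixed palette; the decision logic that chooses among
    # them is loaded as replaceable content (re-programming the "factory").

    palette = {
        "fast_track_approval": lambda case: case.update(status="approved"),
        "manual_review":       lambda case: case.update(status="under review"),
    }

    # Decision content expressed as data: (condition, step to run).
    # Swapping this table changes the behaviour without touching the steps.
    decision_content = [
        (lambda case: case.get("claim_amount", 0) < 1_000, "fast_track_approval"),
        (lambda case: True,                                "manual_review"),
    ]

    def run_next_step(case):
        for condition, step_name in decision_content:
            if condition(case):
                palette[step_name](case)
                return step_name

    case = {"claim_amount": 500}
    print(run_next_step(case), case)
    # fast_track_approval {'claim_amount': 500, 'status': 'approved'}
    ```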

  9. I am not happy with trying to fragment what a business does this way. The sewing machine is truly a bad example, but it is commonly made. There is a great danger in comparing mechanical production work with human interaction. Production work is an animated system where the parts have no inner purpose but follow the purpose of the designer (to build a certain thing), while human business interaction inside and outside the organisation is a social system of independently acting agents who have their own purpose and choose to collaborate towards a common one. Such a system is complex adaptive.
    Let me put it this way: In a social system ALL processes are unknown from the outset, regardless whether they are later fragmented into routine, structured, smart, knowledge driven or adaptive! The choices to make are
    a) how we discover or design the processes that have emerged,
    b) how we execute them in terms of control,
    c) how we learn from them,
    d) how we monitor goal fulfilment, and
    e) how we motivate the independent agent to work towards the common goal.

    Proposing to industrialize human business interaction with the goal to optimize it for lower cost and higher quality is the biggest fallacy of BPM!

    • If I read Max’s response correctly, he divides the world of processes into ‘production work’ and ‘human business interaction’. Then human business interactions are clarified to mean ‘complex adaptive’ social systems. From my perspective this neatly excludes virtually all processes that are optimal candidates for decision automation – how are these to be classified? For instance this URL bit.ly/97iKlv describes a policy for managing participants in a cancer screening program – mechanical or social? Similarly, national guidelines for selecting a recipient for a given donor kidney – mechanical or social? Or even a simple health insurance policy – mechanical or social? I would have described automating these example processes more or less as “proposing to industrialize human business interaction with the goal to optimize it for lower cost and higher quality”. Having achieved it I would now describe it as ‘industrializing human business interaction for lower cost and higher quality’. Where is the fallacy in that?

      My point was to propose an intermediate, third classification – that is, stable, known processes that require decision making across a range of conditions and participants, and possibly involving multiple decision makers. It may or may not be a ‘mechanical’ process or a ‘social’ process. A mechanical process for which the decision making is managed independently from the process itself might qualify as the proposed smart process (hence the sewing machine analogy). Similarly, a social interaction for which a range of outcomes were prescribed to facilitate automated decision making might also qualify.

      While I applaud those who are trying to automate the ‘complex adaptive social systems’ most business managers that I communicate with would be happy to have simple automation of their known policies and procedures. When Information Systems as an industry can reliably deliver this very simple goal, we will contribute hugely to global wealth and happiness – successful automation of simple rules in health administration, insurance and finance will deliver a multi-hundred billion benefit and better outcomes. But at present, simple decision automation is bypassing whole industries.

  10. Mark, to be more clear, I don’t divide the work at all. It is all social; we just make up silly rules from nowhere. I was making amends, trying to find some sense in the BPM reasoning. I see automation – with or without decisions – that has to be created by lengthy analysis, design, modelling and simulation, and then verified by monitoring and optimized, as a fallacy of how our work/life can be improved. Yes, there may be some small percentage of rarely changing processes/decisions where using a mechanistic sewing machine approach makes sense. Fine.

    I really don’t get how you can pull a hundred billion benefit claim from nowhere. If it would be so easy and so beneficial it would simply happen one way or the other. But the reality of automating these things is simply a different one.

    In terms of cancer screening, donor selection and other healthcare issues: the mechanistic selection of humans by completely senseless rules is one of the worst human disasters I know of. A close friend of mine died because of it. That is the problem when you put money over humanity. To claim that running our life by rigid processes will improve wealth and happiness (when the two are certainly not connected) is simply ignorant of how this world and nature really works. Not that I know how, but I know for certain it’s not process managed …

    I propose some reading from John Holland or Stuart Kauffman. That might help.

  11. Max, you are right – I did pull a big number without references, so to deal with the billions first – Todd Elyer, a Forrester analyst, claimed in a 2002 report (which I only have in hardcopy, sorry) that increased claims process automation would save 3% of US insurance claim costs. Add to this the 3-10% of unnecessary claims losses from ‘leakage’ (leakage is a side effect of broken claims processes) – this number comes first-hand from my time working within insurance companies – and apply it to the world insurance market of $4.5 trillion (http://www.swissre.com/pws/media%20centre/news/news_releases_2009/sigma%203,%202009%20press%20release.html) and you have identified the first multi-hundred billion pool that could be harvested from better process from just one part of the insurance cycle. Health is also a candidate for significant savings – an estimated 15% of health costs are wound up in ‘labor-intensive, paper-based processing of healthcare claims’ (http://www.insurance-canada.ca/claims/announce/IBMCL200206.php) – providing a $300 billion plus pool of funds from the $2 trillion US health spend (http://www.globalenvision.org/2008/07/02/costs-health-care) to be improved upon.
    Then there are the benefits of improvement in market efficiency to consider. One of our partners in the last 12 months has implemented a new process linking insurers and doctors – this new process reduces the time to approve health insurance from a maximum of 6 weeks to a maximum of 2 days. This process improvement has been so successful that in 12 months it has been taken up by 100% of the national market. More people insured, faster and more cost effectively.
    More importantly, there are also positive human outcomes to consider. For instance, the economic malaise of the world’s poor is aggravated because they can’t easily insure against risk – there are capital providers available, but a dearth of available insurance distribution processes. That is why Idiom provided a free Idiom license to MicroEnsure – to help build processes to bring insurance to the poor. For comment on the economic and human value that easier access to risk management can bring visit http://www.microensure.com.
    Moving into health, the examples I gave were real and do positively affect people’s lives. In fact, two projects that we have helped to deliver were government funded because people died as a result of broken processes, namely cancer screening and automated referrals.
    These are first-hand examples where basic process automation is improving financial and human outcomes.
    As for your comment that ‘If it would be so easy and so beneficial it would simply happen one way or the other’ – I can only say that this is the tragedy of IT – it could deliver so much, but continues to fail so spectacularly. Roger Sessions has some interesting comments on this subject (http://www.objectwatch.com/white_papers.htm).
    You say the ‘reality of automating these things is simply a different one’. We don’t see it that way – our reality is that we apply proven technology using innovative approaches to real-world problems, and that the process improvements that we deliver do improve financial and human outcomes.

  12. Thanks for elaborating on the number. I am not doubting that supporting all those processes you mention can reduce elapsed time, time spent and errors and thus save money.

    I doubt that it is possible, efficient, or effective in the long term to hardcode these processes. Also, claims handling is something that is different for each customer, not a step-by-step or rule-by-rule process. Doing it with ad-hoc processes or case management is a little better, but it does not provide the learning feedback.

    The tragedy of IT (and often business) is the kind of thinking that does not take the adaptive complexity of social interaction into account. You apply proven (only in the sense that the engineering works) technology to a problem that can’t be solved sensibly with the technology being used. And that’s why it fails so spectacularly. I absolutely doubt the soundness of improvements that current BPM solutions are providing. I have been saying so for over ten years, but with the marketing budgets that IT vendors use, who am I to be heard.

    There is no single independent study that proves the long-term benefits of BPM for anything. It is all short-term cost reduction by firing people, and in reality reducing the quality of service.

    I would really like to know what the true ‘innovation’ is supposed to be; it is just a buzzword, like agile. Rigidizing processes that can only be changed with more bureaucracy than before does not make a business agile but FRAGILE.

  13. Gentlemen,

    I thank you all for the elaborate discussion, very informative indeed. I cover BPM (as an industry analyst with Ovum IT), and here is what I feel:

    1. Max, I don’t think embedding decision making capabilities into a process compromises anything.

    2. Regarding your point about efficiency and cost savings for the organisation through automation, I believe that it is not the only angle. By saving time, the insurance organisation can also process more such applications, and extend their reach to a greater number of people. There is no denying the fact that it is good for business; however, it is also good for the masses at large.

    3. Returning to the decision making aspect: let’s take for example an organisation where the decision making is manual and the process is paper intensive. Considering that the person responsible for taking the decision is not clinically insane, and his decisions aren’t random, it is safe to say that there is a method to it. And the factors that contribute to the decision can be identified; again, I don’t mean to say that they will not vary from case to case. As long as the decision isn’t based on compassion, I don’t see how it can’t be automated. If you have a rules-based decision support framework in place, it will work more often than not.

  14. Chandranshu Singh, thanks for the response.

    Yup, that is exactly the fallacy of BPM and rule systems analysis and related process automation. Humans NEVER decide rationally. It is always emotional. It does not matter how much you believe you decide rationally; it is an illusion. It is quite certain (Damasio, 1994) that humans without an emotional center can’t make decisions. Gigerenzer/Selten (1999) make it clear in ‘Bounded Rationality’ that decision making is not rational but always intuitive, and that this is not a drawback, but it enables humans to make decisions effectively (which also means efficiently in terms of time needed) with limited information. We never have all the information, and actually less information is usually better than more. There is a long list of research, starting with Kahneman/Tversky on decision biases through Deci and Ryan on motivation, that shows (to me at least) that the human mind is far superior to rule and process automation in its ability to decide positively in areas of expertise.

    Let’s not forget that BPM also does not properly consider the huge cost of analysis and the substantial reduction in adaptability and resilience in relation to outside change. Once the automation cost reduction is consumed in terms of people reduction, the ability of the business to adapt is substantially reduced. Who accounts for that in terms of value? Processes are not a business asset; people are.

    The rule decision engine will work well according to the model thesis of what the decisions are about. It will be outdated the minute it is switched on, because the world has already changed. Who adapts the model? The process takes so long that once the model is updated, the world has changed again. And once the project is finished, no one will be able to maintain a complex decision framework, especially if it is INSIDE a complex rule set linked to a complex process set wired together by a complex set of canonical event-driven data interfaces. NO ONE understands these complex interrelationships. You just KILLED the last bit of agility the business had! It is a tested-to-death-frozen-process-rules-data iceberg. No agility, and no amount of methodology can bring it back.

    A process perspective for a business is good as long as it is meant to create a common understanding of capabilities and outcomes for customers. Any step beyond that hurts the business in the long run. If that is not so, please point me to the independent long-term studies of a large number of large businesses that are BPM modelled and executed. That would be great!

    Despite my long search I have found neither the businesses nor the studies that would cover them! You are promoting a concept (BPM) that replaces proven results with theoretical intent.

  15. Pingback: Launching “Mastering the Unpredictable” « Thoughts on Collaborative Planning

  16. Pingback: Can BPM meet Enterprise 2.0 over Adaptive Case Management? | Content Perspective

  17. Pingback: ACM Links for 8-4-2009 « Thoughts on Collaborative Planning

  18. Pingback: Links « Fujitsu Interstage Blog

  19. Pingback: Structure is in the Eye of the Beholder | On Collaborative Planning

  20. Pingback: It’s all Newton’s Fault | Collaborative Planning & Social Business

  21. Pingback: Root Cause Analysis and Adaptive Case Management « Jacob Ukelson's Blog

  22. Pingback: Agility through Business Process Automation? « Efficient technology for the enterprise

  23. Pingback: Business Process and Adaptive Case Management News and Information » Agility through Business Process Automation? « Efficient technology …

  24. Pingback: Two Languages Divide but don’t Conquer | Collaborative Planning & Social Business

  25. Pingback: Wirearchy – a pattern for an adaptive organization? | Collaborative Planning & Social Business

  26. Pingback: Sociocracy | Thinking Matters
