Tag: Decision Making

  • Designing for humans: Why most enterprise adoptions of AI fail

    Originally published at https://www.cio.com/article/4028051/designing-for-humans-why-most-enterprise-adoptions-of-ai-fail.html

    Building technology has always been a messy business. We are constantly regaled with stories of project failures, wasted money and even the disappearance of whole industries. It’s safe to say that we have some work to do as an industry. Adding AI to this mix is like pouring petrol on a smouldering flame — there is a real danger that we may burn our businesses to the ground.

    At its very core, people build technology for people. Unfortunately, we allow technology fads and fashions to lead us astray. I’ve shipped AI products for more than a decade — at Workhuman and earlier in financial services. In this piece, I will take you through hard-earned lessons I’ve learned through my journey. I have laid out five principles to help decision-makers — some are technical, most are about humans, their fears, and how they work.

    5 principles to help decision makers

    The path to excellence follows this maturity sequence: Trust → Federated innovation → Concrete tasks → Implementation metrics → Build for change.

    1. Trust over performance

    Companies have a raft of different ways to measure success when implementing new solutions. Performance, cost and security are all factors that need to be measured. We rarely measure trust. Unfortunate, then, that a user’s trust in the system is a major factor in the success of AI programs. A superb black-box solution dies on arrival if nobody believes in the results.

    I once ran an AI prediction system for US consumer finance at a world-leading bank. Our storage costs were enormous. This wasn’t helped by our credit card model, which spat out 5 TB of data every single day. To mitigate this, we found an alternative solution, which pre-processed the results using a black-box model. This solution used 95% less storage (with a cost reduction to match). When I presented this idea to senior stakeholders in the business, they killed it instantly. Regulators wouldn’t trust a system where they couldn’t fully explain the outputs. If they couldn’t see how each calculation was performed every step of the way, they couldn’t trust the result.

    One recommendation here is to draft a clear ethics policy. There needs to be an open and transparent mechanism for staff and users to submit feedback on AI results. Without this, users may feel they cannot understand how results are generated. If they don’t have a voice in changing ‘wrong’ outputs, then any transformation is unlikely to win the hearts and minds needed across the organisation.

    2. Federated innovation over central control

    AI has the potential to deliver innovation at previously unimaginable speeds. It lowers the cost of experiments and acts as an idea generator — a sounding board for novel approaches. It allows people to generate multiple solutions in minutes. A great way to slow down all innovation is to funnel it through some central body/committee/approval mechanism. Bureaucracy is where ideas go to die.

    Nobel-winning economist and philosopher F. A. Hayek once said, “There exist orderly structures which are the product of the action of many men but are not the result of human design.” He argued against central planning, where an individual is accountable for outcomes. Instead, he favoured “spontaneous order,” where systems emerge from individual actions with no central control. This, he argues, is where innovations such as language, the law and economic markets emerge.

    The path between control and anarchy is difficult to navigate. Companies need to find a way to “hold the bird of innovation in their hand”. Hold too tight — kill the bird; hold too loose — the bird flies away. Unfortunately, many companies hold too tight. They do this by relying too heavily on a command-and-control structure — particularly groups like legal, security and procurement. I’ve watched them crush promising AI pilots with a single, risk-averse pronouncement. For creative individuals innovating at the edges, even the prospect of having to present their idea to a committee can have a chilling effect. It’s easier to do nothing and stay away from the ‘large hand of bureaucracy’. This kills the bird — and kills the delicate spirit of innovation.

    AI can supercharge innovation capabilities for every individual. For this reason, we must federate innovation across the company. We need to encourage the most senior executives to state in plain language what the appetite is for risk in the world of AI and to explain what the guardrails are. Then let teams experiment unencumbered by bureaucracy. Central functions shift from gatekeepers to stewards, enforcing only the non-negotiables. This allows us to plant seeds throughout the organisation, and harvest the best returns for the benefit of all.

    3. Concrete tasks over abstract work

    Early AI pioneer Herbert Simon was the father of behavioural science and a winner of both the Nobel Prize and the Turing Award. He also invented the idea of bounded rationality, which explains that humans settle for “good enough” when options grow beyond a certain number. Generative AI follows this approach (possibly because, being trained on human data, it mimics human behaviour). Generative AI is stochastic — give it the same input and we can get a different output each time — a “good enough” answer. This is very different from the classical model we are used to: given the same input, we get the same output every time.

    This stochastic model, where the result is unpredictable, makes modelling top-down use cases even more difficult. In my experience, projects only clicked once we sat with the users and really understood how they worked. Early in our development of the Workhuman AI assistant, generic high-level requirements gave us odd, unpredictable behaviour. We needed to rewrite the use cases as more detailed, low-level requirements, with a thorough understanding of the behaviour and tolerances built in. We also logged every interaction and used this to refine the model behaviour. In this world, general high-level solution design is guesswork.

    Leaders at all levels should get closer to the details of how work is done. Top-down general pronouncements are off the table. Instead, teams must define ultra-specific use cases and design confidence intervals (e.g., “90% of AI-produced code must pass unit tests on first run”). In the world of Generative AI, clarity beats abstraction every time.
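
    To make this concrete, here is a minimal sketch (in Python) of what such a threshold might look like as an automated gate. The function name and the sample outcomes are illustrative assumptions, not a real harness:

    ```python
    # Hypothetical gate for the "90% of AI-produced code must pass unit
    # tests on first run" requirement. `first_run_outcomes` would come
    # from your own test runner; the values below are illustrative.
    def passes_gate(results: list[bool], threshold: float = 0.90) -> bool:
        """True if the share of first-run passes meets the agreed bar."""
        return sum(results) / len(results) >= threshold

    first_run_outcomes = [True, True, False, True, True,
                          True, True, True, True, True]
    print(passes_gate(first_run_outcomes))  # 9/10 = 0.9 -> True
    ```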

    4. Adoption over implementation

    Buying a tool is easy; changing behaviour is brutal. A top-down edict can help people take the first step. But measuring adoption is the wrong way to drive change – it produces box-ticked “adoption” but shallow, half-implemented usage.

    Executives are every bit as much the victims of fads and fashions as any online shopping addict (once you substitute management methods, sparkling new technologies and FOMO for the latest styles from Paris). And it doesn’t take artificial general intelligence to notice that the trend for AI is hot, hot, hot! Executives need to tell an AI story and show benefits, as they are under pressure from shareholders, investors and the market at large. Through my network in IASA, I have seen this result broadly in edicts to measure “AI adoption”. Unfortunately, these have had very mixed results so far.

    Human nature abhors change. A good manager has a myriad of competing concerns, including running a group, meeting business challenges, hiring and retaining talent and so on. When a new program to adopt an AI strategy comes down from executives, the manager — who is trying to protect their team, meet the needs of the business and keep their head above water — will often compromise by adopting the tooling, but failing to implement it thoroughly.
    At Workhuman, we have found that measuring adoption (and not only for AI) is not the right way to begin a transformation. It measures the start of the race, but ignores the podium entirely. Instead of vanity metrics, when we measure success, we measure outcome metrics (e.g. changed work process, manual steps retired and business drivers impacted). By measuring implementation and impact, we avoid the ‘box-ticking’ trap that so many companies fall into.

    From our decade-plus experience in AI, we have also understood that AI transformation is part of a bigger support system, including education, tooling and a supportive internal community. We partnered with an Irish university to run diploma programs in AI internally, and provide AI tooling to all staff, whatever their role. We have also fostered internal communities at all levels to help drive understanding. This has helped us as we deliver AI solutions, both internally and externally, as shown by the release of our AI Assistant, a transformational AI solution for the HR community.

    5. Change over choice

    The AI landscape shifts monthly, with a continual flow of new models and vendors locked in a constant race. A choice that locks you into a single technology stack could have your company resembling a horse and buggy clip-clopping through the centre of a modern city in the near future.

    When we began looking at models for our new AI assistant, we faced several challenges. First off, what can each model do? There were few useful benchmarks, and those that existed offered little in the way of business capability insights. We also struggled to weigh one model’s strengths against another’s weaknesses, and vice versa.

    Eventually, we agreed on one core architectural principle — everything we design must be swappable. In particular, we must be able to change the core foundation models that underlie the solution. This has allowed us to adjust continually over the last year. We test each new model after release, and work out how each one can be best used to give a great experience to our customers.

    Because models are changing so fast, leaders must make the ability to swap AI models a core principle. Companies should abstract model calls behind a thin layer, while versioning prompts and evaluation harnesses so new models can drop in overnight. The ability to swap horses mid-race may be the competitive advantage needed to win in today’s market.
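
    As a rough illustration of that principle, the sketch below (in Python) shows one way a thin abstraction layer plus versioned prompts might look. The protocol, the stub provider classes and the prompt store are assumptions for the example, not Workhuman’s actual implementation:

    ```python
    from typing import Protocol

    class ChatModel(Protocol):
        """The only interface application code is allowed to depend on."""
        def complete(self, prompt: str) -> str: ...

    class StubVendorA:
        def complete(self, prompt: str) -> str:
            return f"vendor-a answer to: {prompt}"  # real SDK call goes here

    class StubVendorB:
        def complete(self, prompt: str) -> str:
            return f"vendor-b answer to: {prompt}"  # real SDK call goes here

    # Prompts are versioned data, kept alongside an evaluation harness,
    # so swapping models means re-running the same evals, not a rewrite.
    PROMPTS = {("summarise", "v2"): "Summarise for an HR audience: {text}"}

    def run_task(model: ChatModel, task: str, version: str, **kw) -> str:
        return model.complete(PROMPTS[(task, version)].format(**kw))

    # The model is configuration; nothing else changes when it swaps.
    print(run_task(StubVendorA(), "summarise", "v2", text="Quarterly awards"))
    print(run_task(StubVendorB(), "summarise", "v2", text="Quarterly awards"))
    ```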

    AI for leaders

    Technology choices are leadership choices. Who decides what to automate? Which ethical red lines are immovable? How do we protect every human who works with us? Adopting AI is a leadership challenge that can’t be delegated to consultants or individual contributors. How we implement AI now will define the future successes and failures of the business world. It’s a challenge that must be driven by thoughtful leadership. Every leader must dive in and deeply understand the AI landscape and figure out how best to enable their teams to build the companies of tomorrow.

  • Technical Debt Is Not Debt; It’s Not Even Technical

    Co-authored with Dr Paidi O’Raghallaigh and Dr Stephen McCarthy at Cork University Business School as part of my PhD studies, and originally published by Cutter Consortium’s Business Agility & Software Engineering Excellence practice on 22nd of July 2021

    Take a minute and write an answer to the question, “What is technical debt?” Then read this article and reread your answer — and see if it still makes sense.

    Technical debt is a common term in technology departments at every company where we’ve worked. Nobody explained technical debt; we assumed it was a fundamental property of the work. We never questioned our understanding of it until we discovered a paper by Edith Tom et al. entitled “An Exploration of Technical Debt.” Turns out, we didn’t understand it at all.

    One major concern in academia is rigor. Academics like to get deep into a topic, examine the nuances, and bring clarity. After thoroughly reviewing over 100 seminal papers on technical debt, we saw it as an amorphous ghost, with enormous differences and inconsistencies in its use. Next, we began looking at it in practice, asking colleagues, ex-colleagues, and working technologists, but couldn’t find a satisfactory explanation for it there either. Ultimately, we went back to the original source to figure out the history — and get a sense of its evolution.

    One thing that is agreed on: the term technical debt came from Ward Cunningham. Cunningham is the inventor of the wiki and a tech legend. In the early 1990s, his team was building a piece of financial software, and he used a metaphor from the world of finance to explain to his manager how the team was working. As he later explained in a paper at the OOPSLA conference in 1992:

    A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

    The metaphor quickly became part of standard technology discourse. Because the conference focused on object-oriented development, it took hold in that community. Popular tech authors such as Martin Fowler and Steve McConnell soon took it on, helping it become part of the broader language in software development. Today, the use of “technical debt” has become commonplace, from a mention in a single paper in 1992 to over 320 million results from a Google search as of July 2021.

    Over time, Cunningham saw the term shift to signify taking a shortcut to achieve a goal more quickly, while intending to do a better job in the future. In 2009, dissatisfied with how the metaphor had mutated, he clarified the use of technical debt in a YouTube video. Cunningham disliked the notion that technical debt signified “doing a poor job now and a better one later.” This was never his intention. He stated:

    I’m never in favor of writing code poorly, but I am in favor of writing code to reflect your current understanding of a problem, even if that understanding is partial.

    But it was too late. By that time, the metaphor had outgrown his initial intent. It was out in the wild, excusing terrible decisions all over the globe. Technical debt now represented both debt taken on intentionally and the more insidious form, hidden or unintentional debt — debt taken on without the knowledge of the team. It had also moved past code and spread to areas as varied as technology architecture, infrastructure, documentation, testing, versioning, build, and usability.

    Technical debt allows practitioners to look at tech delivery through the lens of debt. Is this an appropriate lens? Debt repayment has one vital characteristic: it is easy to understand. Debt repayment has three properties that are straightforward to grasp — principal amount, interest rate, and term (i.e., length of time to repay). But when comparing technical debt, there is no agreement on the principal, no agreement on the sum owed. There is no concept of an interest rate for technical debt because technologists individually evaluate each project as a unique artifact. Finally, term length isn’t a fixed concept in technical debt — in fact, Klaus Schmid even argues that future development should be part of the evaluation of technical debt.
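
    To see how tractable financial debt is by comparison, consider a standard loan amortisation calculation; a minimal sketch in Python, with illustrative figures:

    ```python
    # Given the three agreed properties of a loan -- principal, interest
    # rate and term -- the repayment is fully determined. Technical debt
    # has no agreed equivalent for any of these three inputs.
    def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
        r = annual_rate / 12              # monthly interest rate
        n = years * 12                    # number of payments
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    # 10,000 borrowed at 5% over 3 years -> about 299.71 per month
    print(round(monthly_payment(10_000, 0.05, 3), 2))
    ```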

    Enormous effort and energy have gone into trying to calculate an accurate number for technical debt across many technology and academic departments. Unfortunately, trying to glue a direct mathematical representation to a metaphor seems to have failed. The idea of technical debt as a type of debt doesn’t hold up well in this light.

    So is it technical? This depends on whether we consider only the originating action, or the consequences that follow. If an aggressor punches a bystander in the face, we consider not only the action of the aggressor (the originating action) but also the injury to the bystander (the impact of that originating action). Through this lens, technical debt can only be technical if we consider where it originates, as opposed to where it has an impact. Technologists take on the originating action; the business suffers the impacts of those decisions. Technical debt affects:

    • Competitiveness by slowing/speeding up new product development
    • Costs (short-term decrease/long-term increases in development cycles)
    • Customer satisfaction
    • Whether a company can survive

    Once we realize that technical debt is a company-wide concern, we can no longer consider it technical. This label is too narrow and doesn’t communicate its significance. In fact, our ongoing research shows that technical debt may even have an impact beyond the company, and we need to take an even broader view (its effect on society being one example).

    The most important takeaway: we must broaden our awareness of technical debt. In the same way that company executives examine financial cash flows and sales pipelines, we must communicate the consequences of taking on technical debt to this audience. Our most important challenge is to find a shared language to help business stakeholders understand the importance of unknown decisions made in technology departments.

    Finally, look back at how you defined technical debt at the beginning of this article. Do you communicate the action or the impact? Is it suitable for a business audience? What is?

  • To maximise or to satisfice, that is the question: the 3 lies beneath rational decision making

    “Still a man hears what he wants to hear

    And disregards the rest”

    Paul Simon — The Boxer

    I’ve yet to meet a person who claims to be irrational. Everyone is convinced that they make rational decisions — this idea is at the core of theories in economics, organisations, and technology. When faced with a decision, a rational decision maker:

    1. Defines the opportunity/problem
    2. Lists all the constraints (time, budget, resources etc)
    3. Searches for all solutions
    4. Chooses the solution that gives the maximum benefit

    According to Stanford Professor James G. March’s A Primer on Decision Making, this model of decision making is called maximisation.

    The idea of maximisation, the concept of a rational decision maker, is based on 3 lies.

    The first lie is that we can predict the future, that we can know every possible solution in advance. This is absurd — no-one can see into the future.

    The second lie — we can predict how we will feel in the future about a benefit or consequence. The feeling we get after an event is rarely the feeling we had expected beforehand. To quote tennis great Andre Agassi from his autobiography:

    ‘Now that I’ve won a slam, I know something very few people on earth are permitted to know. A win doesn’t feel as good as a loss feels bad, and the good feeling doesn’t last as long as the bad. Not even close.’

    The last lie is the biggest one of all — that we have the time and brainpower to search for every potential solution that exists. The first problem here is a lack of time. If we fully worked out each decision we had to make in our lives, we would have no time for anything else. The courses of action are infinite. A minor change at one level can unleash a butterfly effect of consequences for every level below. The second problem concerns our brainpower. We are incapable of comparing complex outcomes because we suffer from problems of:

    1. attention — too much noise
    2. memory — our limited capacity to store information
    3. comprehension — difficulties in organising, summarising and using information
    4. communication — different people (cultures/generations/professions) communicate information in different ways.

    Because of these limitations, we simplify decisions by:

    1. replacing the problem we face with a simpler one
    2. decomposing problems into their component parts and solving these, hoping to solve the full problem by doing so
    3. seeking patterns and then following rules we have previously established instead of looking for new possibilities
    4. narrowing the problem to a ‘frame’ — narrowing the decision choices available for selection. Frames come from early individual experience; more recent frames come from friends, consultants, and writers.

    The legendary Herbert Simon tells us that instead of using a rational strategy, most decision makers ‘satisfice’. This means we compare alternatives until a ‘good enough’ solution is found, then we choose that option. If there is a better alternative, we rarely choose it, because we stop thinking about that decision and move on with life. We often fool ourselves into thinking we are maximisers — finding the best solution after an exhaustive search. In reality, we are more likely to satisfice and move on to the next item on our agenda.
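
    The difference between the two strategies is easy to see in code. A minimal sketch, with illustrative scores: the maximiser scores everything and takes the best; the satisficer stops at the first option that clears an aspiration level:

    ```python
    from typing import Callable, Iterable, Optional

    def maximise(options: Iterable[str], score: Callable[[str], float]) -> str:
        return max(options, key=score)            # exhaustive search

    def satisfice(options: Iterable[str], score: Callable[[str], float],
                  aspiration: float) -> Optional[str]:
        for option in options:                    # stop at 'good enough'
            if score(option) >= aspiration:
                return option
        return None                               # nothing met the bar

    scores = {"A": 6.0, "B": 7.5, "C": 9.0}
    print(maximise(scores, scores.get))       # 'C' -- best after full search
    print(satisfice(scores, scores.get, 7.0)) # 'B' -- first good-enough option
    ```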

    In organisations, situations become more complex. A decision may involve a group of people. The process may continue for a predetermined time, rather than stop when a satisfactory outcome is reached. There may be situations (usually simpler decisions) where the organisation maximises.

    To quote March:

    “Decision makers look for information, but they see what they expect to see and overlook unexpected things. Their memories are less recollections of history than constructions based on what they thought might happen and reconstructions based on what they now think must have happened, given their present beliefs.”

    We think we make sound decisions, but in reality our ability to be rational is bounded by the constraints of time and cognition. We are not rational creatures.


    This piece has also been published by the Cutter Journal here under the title ‘The 3 Lies of Maximization’

  • Are you foolish enough to innovate?

    “Stay hungry, stay foolish” — Steve Jobs

    How can you make a great breakthrough? How can you start the next era-defining business, write the next great song, create the next political movement? To do any of these, you must be more innovative. So where do you start?

    Legendary Finnish architect Eero Saarinen sat one morning staring at a broken grapefruit peel, the remains of his breakfast. He was in the middle of an enormous project, designing a new airline terminal for one of the world’s great cities — New York. As he stared at the fruit peel, a vision suddenly grabbed him. He would design the terminal in the shape of a split grapefruit peel. One of the most groundbreaking pieces of architecture — the TWA terminal at JFK airport — was the result.

    [Image: the TWA terminal. Photo by Roland Arhelger — own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=46423333]

    So, how did Saarinen do it? He took on an idea that would have seemed ludicrous to most — something outside the ordinary. How do we follow Saarinen’s lead?

    In the seminal book ‘A Primer on Decision Making’, Stanford Professor James G. March tells us that there are three ingredients a company needs for innovation: slack, luck and foolishness.

    Slack is the difference between the outcome chosen and the best possible outcome (say a successful product sold 1 million units but could have sold 10 million with a different decision). In an organisation, more slack means more innovation. How do you know if there is slack in your company? If you continually exceed targets, you are in the right place. Beware, slack can change. When performance in an organisation exceeds aspirations, slack accumulates; when performance decreases, slack decreases.

    Luck is necessary for successful innovation. Luck can come in many guises: the right timing, a breakthrough in a related industry, new staff creating the ‘right’ chemistry in a team. Because innovations that provide breakthroughs are difficult to identify in advance — a very promising idea can fail miserably — some level of luck is necessary for an innovation to take hold.

    Foolishness produces major innovations. It is the most important ingredient. Ordinary ideas don’t make great breakthroughs, an ordinary idea preserves the existing state of affairs. Organisations need to support foolish ideas even if they have a high probability of failure. This requires a high tolerance for risk and a culture that promotes innovation over conformity.

    As someone in an organisation, what can you do to be more innovative? There are three things — you must:

    1. favour high-quality information over social validity
    2. understand your risk tolerance
    3. be foolish.

    People in an organisation seek social validity over high-quality knowledge, according to March. Organisations are social systems, and all social systems require a shared understanding of the world. In any organisation, ambiguities in meaning exist between people. Different people have different interpretations of reality. These ambiguities and interpretations threaten the social system. To combat this threat, mechanisms emerge to create a shared understanding among all participants. We

    • edit our experience to remove contradictions. 
    • seek information to confirm decisions.
    • forget disconfirming data.
    • remember previous experience as more consistent with present beliefs than was the case.
    • change beliefs to become consistent with actions.

    A preference for vivid stories emerges, with lots of detail (which is often irrelevant). This allows people to process lots of extra information. The amount of information processed increases confidence; it does not increase accuracy. Beware of detailed stories — they give decision makers what they want: to see the world with more confidence rather than more accuracy.

    Every innovator takes risks. To get more innovative results, you must take some level of risk. It’s important to understand your risk appetite when making any decision — it is driven by:

    1. Personality — your natural trait towards risk taking
    2. Reaction — your ability to take variable levels of risk depending on the situation. Decision makers are more risk averse when the outcome involves gains, and more open to risk when it involves losses.
    3. Reasoned choices — you may make a reasoned choice depending on the situation. For example, needing to finish first in a contest requires a very different approach than creating an internal partnership with another department.
    4. Reliability — risks taken are affected unconsciously because of unreliability. The situation may suffer from a lack of knowledge, breakdown in communication, trust, or structure. The greater the ignorance, the greater the variability of outcome, the greater the risk.

    Finally, for innovation to occur, you need to be foolish. So, what does foolish mean in this context? Innovative ideas take a concept that flies in the face of ‘common knowledge’ and transform everything around it. However, there is great social pressure in organisations to create a feeling of safety for all and proceed with ideas that give a ‘comfort level’ across the whole group. Anything outside this is considered foolish. A simple check on your idea: if it doesn’t seem foolish to others, chances are it isn’t a bold enough vision. A true innovator is treated as a fool when they propose a breakthrough — as the following famous examples show:

    • “Drill for oil? You mean drill into the ground to try and find oil? You’re crazy.” — workers whom Edwin L. Drake tried to enlist in his project to drill for oil in 1859
    • “The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?” — David Sarnoff’s associates’ response to his urgings for investment in the 1920s
    • “I think there is a world market for maybe five computers.” — Thomas Watson, chairman of IBM, 1943.
    • “There’s no chance that the iPhone is going to get any significant market share.” — Steve Ballmer, Microsoft CEO

    It is easy to be a cynic and costs nothing to criticise. It is hard to be an innovator. First off — you need to be conscious of how you make decisions. You must be in a supportive organisation. You must take risks and appear foolish to some people. Pray for some luck.

    But what is life for, if not to try? Life is to be lived, and every brilliant innovation comes from a person just like you. How many innovations have we lost because it was easier not to rock the boat, easier to listen to the crowd, easier to do what we have always done before? We must grasp the nettle and fight the urge to be safe.

    As Irish playwright Samuel Beckett famously said — “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.”


  • How Technology Architects make decisions

    Or why you might spend a fortune on a red car

    Chinese translation available here

    The remarkable Herbert Simon won the Nobel Prize in Economics in 1978 and the Turing Award in 1975. Reading about his life gives me a panic attack when I consider how little I have achieved in comparison. He published ‘Administrative Behaviour’ in 1947, and I started reading it in 2021. I began by treating it as a relic of World War II-era business, a history book. It quickly filled me with horror as Simon explained business, thinking and decision making in ways that seemed obvious after reading them, but that I had never even thought of. I immediately felt weak. I felt like a total imposter. How had I never read Herbert Simon before? Why had nobody told me? It panicked me for days. I dropped a reference to the book into every meeting for weeks. That practice soon calmed me down. It turns out almost no-one I know had read it either.

    Early in the book, Simon talks about how each department in an organisation has one job. They take in information and turn it into decisions which are executed (either by them or another department). He introduces the concept of Bounded Rationality – how it is impossible to evaluate an almost infinite set of possibilities when making a decision. Instead, we must choose a smaller ‘bounded’ set of assumptions to work within. 

    Back in the actual world of architecture, I have always boiled the job down to either a) making decisions or b) providing information to help others make decisions. I’ve only ever had a vague sense of how architects make decisions, even though it’s been my job for the majority of my career.

    In a fantastic paper published in 2014, “EA Anamnesis: An Approach for Decision Making Analysis in Enterprise Architecture”, Georgios Plataniotis, Sybren de Kinderen and Henderik Proper explain the importance of capturing decisions made about architecture. They go further, arguing that capturing the reasons for a decision and alternatives considered is just as important. Documenting the rationale when a decision is made gives it context, explains the environment at the time, and helps inform future decisions. 

    The paper describes four strategies used to make Enterprise Architecture decisions. Each is an attempt to choose the best alternative among competing options. They split decision types into the following buckets:

    • Compensatory. This type of decision considers every alternative, analysing all criteria in low-level detail. Criteria with different scores can compensate for each other, hence the name. There are two types here:
      • Compensatory Equal Weight – criteria are scored and totalled for each potential option; the option with the highest total signifies the best decision.
      • Compensatory Weighted Additive (WADD) – here a weighting is given to each criterion to reflect its significance (the higher the weighting, the higher the significance). The weighting is multiplied by the score for each criterion, each alternative is then summed, and the highest total wins.
    • Non-Compensatory. This method uses fewer criteria. The two types are:
      • Non-Compensatory Conjunctive – alternatives that cannot meet a criterion are immediately dismissed, the winner is chosen among the survivors. 
      • Non-Compensatory Disjunctive – an alternative is chosen if it complies with a criterion, irrespective of other criteria. 

    Say you were buying a car, and you had the following criteria: fuel efficiency, colour, cost, and ease of parking (as scored below). 

    Car   | Fuel | Colour | Cost | Parking | Total | Fuel x2 | Weighted Total
    Car A |  9   | Black  |  6   |    4    |  19   |   18    |      28
    Car B |  6   | White  | 10   |    5    |  21   |   12    |      27
    Car C |  4   | Grey   |  4   |   10    |  18   |    8    |      22
    Car D |  1   | Red    |  1   |    8    |  10   |    2    |      11

    The four strategies might look like this: 

    1. Compensatory Equal Weight – in this case you pick the highest unweighted total – Car B
    2. Compensatory Weighted Additive – because you drive long distances, you apply a double weighting for fuel mileage and pick the highest weighted total – Car A 
    3. Non-Compensatory Conjunctive – because you live in a city, you discard any car that isn’t easy to park (at least 7/10). This leaves a choice between C and D; you choose the higher total of the two – Car C
    4. Non-Compensatory Disjunctive – you fall in love with a red car – ignore everything else – Car D
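
    For the curious, here is a minimal sketch in Python of the four strategies applied to the table above. The scores and weights come straight from the example; everything else is illustrative:

    ```python
    from typing import Optional

    cars = {
        "Car A": {"fuel": 9, "colour": "black", "cost": 6, "parking": 4},
        "Car B": {"fuel": 6, "colour": "white", "cost": 10, "parking": 5},
        "Car C": {"fuel": 4, "colour": "grey", "cost": 4, "parking": 10},
        "Car D": {"fuel": 1, "colour": "red", "cost": 1, "parking": 8},
    }

    def total(car: dict, weights: Optional[dict] = None) -> int:
        """Sum the numeric criteria, applying any weighting supplied."""
        w = weights or {}
        return sum(v * w.get(k, 1) for k, v in car.items() if isinstance(v, int))

    # 1. Compensatory Equal Weight: highest raw total -> Car B (21)
    print(max(cars, key=lambda n: total(cars[n])))
    # 2. Compensatory Weighted Additive: fuel x2 -> Car A (28)
    print(max(cars, key=lambda n: total(cars[n], {"fuel": 2})))
    # 3. Non-Compensatory Conjunctive: parking >= 7, best survivor -> Car C
    survivors = {n: c for n, c in cars.items() if c["parking"] >= 7}
    print(max(survivors, key=lambda n: total(survivors[n])))
    # 4. Non-Compensatory Disjunctive: one criterion trumps all -> Car D
    print(next(n for n, c in cars.items() if c["colour"] == "red"))
    ```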

    Compensatory decisions are suitable when time and resources are available to   

    • gather the right set of alternatives, 
    • evaluate each alternative in detail
    • score each with consistency and precision. 

    Non-Compensatory decisions are necessary when

    • there is time stress
    • the problem is not well structured  
    • the information surrounding the problem is incomplete
    • criteria can’t be expressed numerically
    • there are competing goals
    • the stakes are high
    • there are multiple parties negotiating in the decision. 

    A level of pragmatism is important when choosing a decision strategy. Using Simon’s concept of bounded rationality, compensatory decisions can never be fully worked out. Some level of assumption is necessary; otherwise, the work needed to make every business decision is almost infinite. However, within a set of ‘givens’ (given we need to decide by Friday, and given the budget is x, and given the resources available are y, and so on) the weighted additive method (WADD above) has proven effective in my experience. The framework forces decision makers to consider each alternative clearly, as opposed to a clump of criteria mashed together. It also forces all parties to agree a set of weights, helping the group agree on the hierarchy of importance. These processes improve communication between parties, even when they disagree on the choices of criteria and weights.

    A strange magic happens during the negotiation of the scoring, as parties try to force their choice. The mental mathematics going on inside heads is dizzying. I have witnessed all types of behaviour: from people determined that nothing should change, to an exec wanting an inflight magazine article to form the basis for all future strategy, to the head of a business unit wanting us to use his nephew’s school project as part of the solution, all the way to one mid-40s executive who got so annoyed with the debate that he started jumping up and down and stamping his feet because he wanted his decision, and “that’s what I’ll get”.

    Start now


    Since it was first posted, this article has also been published by Architecture and Governance magazine here