Technical Debt Is Not Debt; It’s Not Even Technical

Co-authored with Dr Paidi O’Raghallaigh and Dr Stephen McCarthy at Cork University Business School as part of my PhD studies, and originally published by Cutter Consortium’s Business Agility & Software Engineering Excellence practice on 22nd of July 2021

Take a minute and write an answer to the question, “What is technical debt?” Then read this article and reread your answer — and see if it still makes sense.

Technical debt is a common term in technology departments at every company where we’ve worked. Nobody explained technical debt; we assumed it was a fundamental property of the work. We never questioned our understanding of it until we discovered a paper by Edith Tom et al. entitled “An Exploration of Technical Debt.” Turns out, we didn’t understand it at all.

Now, one major concern in academia is rigor. Academics like to get deep into a topic, examine the nuances, and bring clarity. After thoroughly reviewing more than 100 seminal papers on technical debt, we saw it as an amorphous ghost, with enormous differences and inconsistencies in its use. Next, we began looking at it in practice, asking colleagues, ex-colleagues, and working technologists, but couldn’t find a satisfactory explanation for it there either. Ultimately, we went back to the original source to figure out the history — and get a sense of its evolution.

One thing that is agreed on: the term technical debt came from Ward Cunningham. Cunningham is the inventor of the wiki and a tech legend. In the early 1990s, his team was building a piece of financial software, and he used a metaphor from the world of finance to explain to his manager how the team was working. As he later explained in a paper at the OOPSLA conference in 1992:

A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

The metaphor quickly became part of standard technology discourse. Because the conference focused on object-oriented development, it took hold in that community. Popular tech authors such as Martin Fowler and Steve McConnell soon took it on, helping it become part of the broader language in software development. Today, the use of “technical debt” has become commonplace, from a mention in a single paper in 1992 to over 320 million results from a Google search as of July 2021.

Over time, Cunningham saw the term shift to signify taking a shortcut to achieve a goal more quickly, while intending to do a better job in the future. In 2009, dissatisfied with how the metaphor had mutated, he clarified the use of technical debt in a YouTube video. Cunningham disliked the notion that technical debt signified “doing a poor job now and a better one later.” This was never his intention. He stated:

I’m never in favor of writing code poorly, but I am in favor of writing code to reflect your current understanding of a problem, even if that understanding is partial.

But it was too late. By that time, the metaphor had outgrown his initial intent. It was out in the wild, excusing terrible decisions all over the globe. Technical debt now represented both debt taken on intentionally and the more insidious form, hidden or unintentional debt — debt taken on without the knowledge of the team. It had also moved past code and spread to areas as varied as technology architecture, infrastructure, documentation, testing, versioning, build, and usability.

Technical debt allows practitioners to look at tech delivery through the lens of debt. Is this an appropriate lens? Debt repayment has one vital characteristic: it is easy to understand. It rests on three straightforward properties — principal amount, interest rate, and term (i.e., length of time to repay). Technical debt has none of these. There is no agreement on the principal, no agreement on the sum owed. There is no concept of an interest rate, because technologists evaluate each project individually as a unique artifact. Finally, term length isn’t a fixed concept either; in fact, Klaus Schmid even argues that future development should be part of the evaluation of technical debt.
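For contrast, consider how mechanical real debt repayment is: the monthly payment follows directly from the three properties above. A minimal sketch using the standard amortisation formula (the loan figures are purely illustrative):

```python
def monthly_payment(principal, annual_rate, years):
    """Standard loan amortisation: the payment is fully determined by
    principal, interest rate, and term."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    if r == 0:
        return principal / n      # interest-free edge case
    return principal * r / (1 - (1 + r) ** -n)

# A 10,000 loan at 6% annual interest over 5 years.
# Every input is agreed and knowable up front; technical debt offers no such inputs.
payment = monthly_payment(10_000, 0.06, 5)   # roughly 193.33 per month
```

No such formula exists for technical debt, because none of its three inputs can be agreed.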

Enormous effort and energy have gone into trying to calculate an accurate number for technical debt across many technology and academic departments. Unfortunately, trying to glue a direct mathematical representation to a metaphor seems to have failed. The idea of technical debt as a type of debt doesn’t hold up well in this light.

So is it technical? This depends on whether we consider only the originating action, or the consequences that follow. If an aggressor punches a bystander in the face, we consider not only the action of the aggressor (the originating action) but also the injury to the bystander (the impact of that originating action). Through this lens, technical debt can only be technical if we consider where it originates, as opposed to where it has an impact. Technologists take on the originating action; the business suffers the impacts of those decisions. Technical debt affects:

  • Competitiveness by slowing/speeding up new product development
  • Costs (short-term decrease/long-term increases in development cycles)
  • Customer satisfaction
  • Whether a company can survive

Once we realize that technical debt is a company-wide concern, we can no longer consider it technical; this label is too narrow and doesn’t communicate its significance. In fact, our ongoing research shows that technical debt may even have an impact beyond the company, requiring an even broader view (its effect on society, for example).

The most important takeaway: we must broaden our awareness of technical debt. In the same way that company executives examine financial cash flows and sales pipelines, we must communicate the consequences of taking on technical debt to this audience. Our most important challenge is to find a shared language to help business stakeholders understand the importance of unknown decisions made in technology departments.

Finally, look back at how you defined technical debt at the beginning of this article. Does your definition communicate the action or the impact? Is it suitable for a business audience? If not, what would be?

If you enjoyed this article, please share it with three people who would appreciate it.

You may also enjoy this article on how technologists make decisions.

If you would like to work at Workhuman with talented people on some of the most interesting challenges in technology and society, please contact me, or browse open roles here.

Thank you so much, Mark.


To maximise or to satisfice, that is the question: the 3 lies beneath rational decision making

“Still a man hears what he wants to hear

And disregards the rest”

Paul Simon — The Boxer

I’ve yet to meet a person who claims to be irrational. Everyone is convinced that they make rational decisions — this idea is at the core of theories in economics, organisations, and technology. When faced with a decision, a rational decision maker:

  1. Defines the opportunity/problem
  2. Lists all the constraints (time, budget, resources etc)
  3. Searches for all solutions
  4. Chooses the solution that gives the maximum benefit

According to Stanford Professor James G. March’s A Primer on Decision Making, this model of decision making is called maximisation.

The idea of maximisation, the concept of a rational decision maker, is based on three lies.

The first lie is that we can predict the future, that we can know every possible solution in advance. This is absurd — no-one can see into the future.

The second lie — we can predict how we will feel in the future about a benefit or consequence. The feeling we get after an event is rarely the feeling we had expected beforehand. To quote tennis great Andre Agassi from his autobiography:

‘Now that I’ve won a slam, I know something very few people on earth are permitted to know. A win doesn’t feel as good as a loss feels bad, and the good feeling doesn’t last as long as the bad. Not even close.’

The last lie is the biggest one of all — that we have the time and brainpower to search for every potential solution that exists. The first problem here is a lack of time. If we fully worked out each decision we had to make in our lives, we would have no time for anything else. The courses of action are infinite. A minor change at one level can unleash a butterfly effect of consequences for every level below. The second problem concerns our brainpower. We are incapable of comparing complex outcomes because we suffer from problems of:

  1. attention — too much noise
  2. memory — our limited capacity to store information
  3. comprehension — difficulties in organising, summarising and using information
  4. communication — different people (cultures/generations/professions) communicate information in different ways.

Because of these limitations, we simplify decisions by:

  1. replacing the problem we face with a simpler one
  2. decomposing problems into their component parts and solving these, hoping to solve the full problem by doing so
  3. seeking patterns and then following rules we have previously established instead of looking for new possibilities
  4. narrowing the problem to a ‘frame’, narrowing the decision choices available for selection. Frames come from early individual experience; more recent frames come from friends, consultants, and writers.

The legendary Herbert Simon tells us that instead of using a rational strategy, most decision makers ‘satisfice’. This means we compare alternatives until a ‘good enough’ solution is found, then we choose that option. If there is a better alternative, we rarely choose it, because we stop thinking about that decision and move on with life. We often fool ourselves into thinking we are maximisers — finding the best solution after an exhaustive search. In reality, we are more likely to satisfice, and move on to the next item on our agenda.
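The contrast between the two strategies can be sketched in a few lines of code (the options and scores below are hypothetical placeholders):

```python
def maximise(options, score):
    """Examine every alternative and return the single best one."""
    return max(options, key=score)

def satisfice(options, score, good_enough):
    """Return the first alternative that clears the threshold, then stop looking."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # nothing acceptable found

# Hypothetical example: utility scores we assign to each option.
options = ["A", "B", "C", "D"]
utility = {"A": 3, "B": 7, "C": 9, "D": 5}.get

best = maximise(options, utility)            # scans all four: "C"
acceptable = satisfice(options, utility, 6)  # stops at the first good-enough option: "B"
```

The satisficer walks away with “B” and never discovers that “C” was better — exactly Simon’s point.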

In organisations, situations become more complex. A decision may involve a group of people. The process may continue for a predetermined time, rather than stop when a satisfactory outcome is reached. There may be situations (usually simpler decisions) where the organisation maximises.

To quote March:

“Decision makers look for information, but they see what they expect to see and overlook unexpected things. Their memories are less recollections of history than constructions based on what they thought might happen and reconstructions based on what they now think must have happened, given their present beliefs.”

We think we make sound decisions, but in reality our ability to be rational is bounded by the constraints of time and cognition. We are not rational creatures.


If you enjoyed this article, please share it with three people who would appreciate it. Thank you so much.

This piece has also been published by the Cutter Journal here under the title ‘The 3 Lies of Maximization’


Are you foolish enough to innovate?

“Stay hungry, stay foolish” — Steve Jobs

How can you make a great breakthrough? How can you start the next era-defining business, write the next great song, create the next political movement? To do any of these, you must be more innovative. So where do you start?

Legendary Finnish architect Eero Saarinen sat one morning staring at a broken grapefruit peel, the remains of his breakfast. He was in the middle of an enormous project, designing a new airport for one of the world’s great cities — New York. Staring at the fruit peel, a vision suddenly grabbed him. He would design the airport shaped like a split grapefruit peel. One of the most groundbreaking pieces of architecture — the TWA terminal at JFK airport — was the result.

By Roland Arhelger — Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=46423333

So, how did Saarinen do it? He took on an idea that would have seemed ludicrous to most — something outside the ordinary. How do we follow Saarinen’s lead?

In the seminal book ‘A Primer on Decision Making’, Stanford Professor James G. March tells us that there are three ingredients needed in a company for innovation: slack, luck, and foolishness.

Slack is the difference between the outcome chosen and the best possible outcome (say a successful product sold 1 million units but could have sold 10 million with a different decision). In an organisation, more slack means more innovation. How do you know if there is slack in your company? If you continually exceed targets, you are in the right place. Beware: slack can change. When performance in an organisation exceeds aspirations, slack accumulates; when performance decreases, slack decreases.

Luck is necessary for successful innovation. Luck can come in many guises, the right timing, a breakthrough in a related industry, new staff creating the ‘right’ chemistry in a team. Because innovations which provide breakthroughs are difficult to identify in advance — a very promising idea can fail miserably — some level of luck is necessary for an innovation to take hold.

Foolishness produces major innovations. It is the most important ingredient. Ordinary ideas don’t make great breakthroughs, an ordinary idea preserves the existing state of affairs. Organisations need to support foolish ideas even if they have a high probability of failure. This requires a high tolerance for risk and a culture that promotes innovation over conformity.

As someone in an organisation, what can you do to be more innovative? There are three things — you must:

  1. favour high-quality information over social validity
  2. understand your risk tolerance
  3. be foolish.

People in an organisation seek social validity over high-quality knowledge, according to March. Organisations are social systems, and all social systems require a shared understanding of the world. In any organisation, ambiguities in meaning exist between people. Different people have different interpretations of reality. These ambiguities and interpretations threaten the social system. To combat this threat, mechanisms emerge to create a shared understanding among all participants. We:

  • edit our experience to remove contradictions. 
  • seek information to confirm decisions.
  • forget disconfirming data.
  • remember previous experience as more consistent with present beliefs than was the case.
  • change beliefs to become consistent with actions.

A preference emerges for vivid stories with lots of detail (which is often irrelevant). This allows people to process lots of extra information. The amount of information processed increases confidence; it does not increase accuracy. Beware of detailed stories — they give decision makers what they want: to see the world with more confidence rather than more accuracy.

Every innovator takes risks. To get more innovative results, you must take some level of risk. It’s important to understand your risk appetite when making any decision — it is driven by:

  1. Personality — your natural trait towards risk taking
  2. Reaction — your ability to take variable levels of risk depending on the situation. Decision makers are more risk averse when the outcome involves gains, and more open to risk when it involves losses.
  3. Reasoned choices — you may make a reasoned choice depending on the situation. For example, finishing first in a contest requires a very different approach than creating an internal partnership with another department.
  4. Reliability — risks taken are affected unconsciously because of unreliability. The situation may suffer from a lack of knowledge, breakdown in communication, trust, or structure. The greater the ignorance, the greater the variability of outcome, the greater the risk.

Finally, for innovation to occur, you need to be foolish. So, what does foolish mean in this context? Innovative ideas take a concept that flies in the face of ‘common knowledge’ and transform everything around it. However, there is great social pressure in organisations to create a feeling of safety for all and to proceed with ideas that give a ‘comfort level’ across the whole group. Anything outside this is considered foolish. A simple check on your idea: if it doesn’t seem foolish to others, chances are it’s not a bold enough vision. A true innovator is treated as a fool when they propose a breakthrough — as the following famous examples show:

  • “Drill for oil? You mean drill into the ground to try and find oil? You’re crazy.” — workers whom Edwin L. Drake tried to enlist in his project to drill for oil in 1859
  • “The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?” — David Sarnoff’s associates’ response to his urgings for investment in the 1920s
  • “I think there is a world market for maybe five computers.” — Thomas Watson, chairman of IBM, 1943.
  • “There’s no chance that the iPhone is going to get any significant market share.” — Steve Ballmer, Microsoft CEO

It is easy to be a cynic and costs nothing to criticise. It is hard to be an innovator. First off — you need to be conscious of how you make decisions. You must be in a supportive organisation. You must take risks and appear foolish to some people. Pray for some luck.

But what is life for, if not to try? Life is to be lived, and every brilliant innovation comes from a person just like you. How many innovations have we lost because it was easier not to rock the boat, easier to listen to the crowd, easier to do what we have always done before? We must grasp the nettle and fight the urge to be safe.

As Irish playwright Samuel Beckett famously said — “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.”


If you enjoyed this article, please share it with three people who would appreciate it. Thank you so much.


How Technology Architects make decisions

Or why you might spend a fortune on a red car

Chinese translation available here

The remarkable Herbert Simon won the Nobel Prize in Economics in 1978 and the Turing Award in 1975. Reading about his life gives me a panic attack when I consider how little I have achieved in comparison. He published ‘Administrative Behaviour’ in 1947, and I started reading it in 2021. I started by treating it as a relic of World War II era business, a history book. I was quickly filled with horror as Simon explained business, thinking, and decision making in ways which seemed obvious after reading them, but which I had never even considered. I immediately felt weak. I felt like a total imposter. How had I never read Herbert Simon before? Why had nobody told me? It panicked me for days. I managed to drop a reference to the book into every meeting for weeks. That practice soon calmed me down; it turns out almost no-one I know had read it either.

Early in the book, Simon talks about how each department in an organisation has one job: they take in information and turn it into decisions which are executed (either by them or by another department). He introduces the concept of Bounded Rationality — how it is impossible to evaluate an almost infinite set of possibilities when making a decision. Instead, we must choose a smaller ‘bounded’ set of assumptions to work within.

Back in the actual world of architecture, I have always boiled the job down to either a) making decisions or b) providing information to help others make decisions. I’ve only ever had a vague sense of how architects make decisions, even though it’s been my job for the majority of my career.

In a fantastic paper published in 2014, “EA Anamnesis: An Approach for Decision Making Analysis in Enterprise Architecture”, Georgios Plataniotis, Sybren de Kinderen and Henderik Proper explain the importance of capturing decisions made about architecture. They go further, arguing that capturing the reasons for a decision and alternatives considered is just as important. Documenting the rationale when a decision is made gives it context, explains the environment at the time, and helps inform future decisions. 

The paper describes four strategies used to make Enterprise Architecture decisions. Each decision is an attempt to decide on the best alternative among competing choices. They split decision types into the following buckets:

  • Compensatory. This type of decision considers every alternative, analysing all criteria in low-level detail. Criteria with different scores can compensate for each other, hence the name. There are two types here:
    • Compensatory Equal Weight – criteria are scored and totalled for each potential option, the option with the highest total signifies the best decision. 
    • Compensatory Weighted Additive (WADD) – here a weighting is given for a criterion to reflect significance (the higher the weighting, the higher the significance). The weighting is multiplied by the score for each criterion, then each alternative is summed, the highest total winning. 
  • Non-Compensatory. This method uses fewer criteria. The two types are:
    • Non-Compensatory Conjunctive – alternatives that cannot meet a criterion are immediately dismissed, the winner is chosen among the survivors. 
    • Non-Compensatory Disjunctive – an alternative is chosen if it complies with a criterion, irrespective of other criteria. 

Say you were buying a car, and you had the following criteria: fuel efficiency, colour, cost, and ease of parking (as scored below). 

Car    | Fuel | Colour | Cost | Parking | Total | Weighted Total (Fuel x2)
Car A  | 9    | Black  | 6    | 4       | 19    | 28
Car B  | 6    | White  | 10   | 5       | 21    | 27
Car C  | 4    | Grey   | 4    | 10      | 18    | 22
Car D  | 1    | Red    | 1    | 8       | 10    | 11

The four strategies might look like this: 

  1. Compensatory Equal Weight – in this case you pick the highest unweighted total – Car B
  2. Compensatory Weighted Additive – because you drive long distances, you apply a double weighting for fuel mileage and pick the highest weighted total – Car A 
  3. Non-Compensatory Conjunctive – because you live in a city, you discard any car that isn’t easy to park (at least 7/10). This leaves a choice between C and D; you choose the highest score between them – Car C
  4. Non-Compensatory Disjunctive – you fall in love with a red car – ignore everything else – Car D
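The four strategies can be sketched in code; this is a minimal illustration using the car scores from the table (colour is only consulted in the disjunctive case):

```python
# Car data from the table above; colour is excluded from the numeric totals.
cars = {
    "Car A": {"fuel": 9, "cost": 6, "parking": 4, "colour": "Black"},
    "Car B": {"fuel": 6, "cost": 10, "parking": 5, "colour": "White"},
    "Car C": {"fuel": 4, "cost": 4, "parking": 10, "colour": "Grey"},
    "Car D": {"fuel": 1, "cost": 1, "parking": 8, "colour": "Red"},
}

def equal_weight(cars):
    # Compensatory Equal Weight: highest unweighted total wins.
    return max(cars, key=lambda c: cars[c]["fuel"] + cars[c]["cost"] + cars[c]["parking"])

def wadd(cars, weights):
    # Compensatory Weighted Additive: multiply each criterion by its weight, sum, take the max.
    return max(cars, key=lambda c: sum(cars[c][k] * w for k, w in weights.items()))

def conjunctive(cars, criterion, minimum):
    # Non-Compensatory Conjunctive: dismiss anything below the cut-off, then pick the best survivor.
    survivors = {c: v for c, v in cars.items() if v[criterion] >= minimum}
    return equal_weight(survivors)

def disjunctive(cars, criterion, value):
    # Non-Compensatory Disjunctive: take the first alternative matching one criterion, ignore the rest.
    return next(c for c, v in cars.items() if v[criterion] == value)

print(equal_weight(cars))                                # Car B (total 21)
print(wadd(cars, {"fuel": 2, "cost": 1, "parking": 1}))  # Car A (weighted total 28)
print(conjunctive(cars, "parking", 7))                   # Car C
print(disjunctive(cars, "colour", "Red"))                # Car D
```

Each function reproduces one of the four outcomes listed above; only the criteria and weights you feed in change the winner.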

Compensatory decisions are suitable when time and resources are available to:

  • gather the right set of alternatives, 
  • evaluate each alternative in detail
  • score each with consistency and precision. 

Non-Compensatory decisions are necessary when:

  • there is time stress
  • the problem is not well structured  
  • the information surrounding the problem is incomplete
  • criteria can’t be expressed numerically
  • there are competing goals
  • the stakes are high
  • there are multiple parties negotiating in the decision. 

A level of pragmatism is important when choosing a decision strategy. Using Simon’s concept of bounded rationality, compensatory decisions can never be fully worked out. Some level of assumption is necessary; otherwise, the work needed to make every business decision is almost infinite. However, within a set of ‘givens’ (given we need to decide by Friday, given the budget is x, given the resources available are y, and so on), the weighted additive method (WADD above) has proven effective in my experience. The framework forces decision makers to consider each alternative clearly, as opposed to a clump of criteria mashed together. It also forces all parties to agree on a set of weights, helping the group agree on the hierarchy of importance. These processes improve communication between parties, even when they disagree on the choices of criteria and weights.

A strange magic happens during negotiation of the scoring, as parties try to force their choice. The mental mathematics going on inside heads is dizzying. I have witnessed all types of behaviour: from people determined that nothing change, to an exec wanting an inflight magazine article to form the basis of all future strategy, to the head of a business unit wanting us to use his nephew’s school project as part of the solution, all the way to one mid-40s executive who got so annoyed with the debate that he started jumping up and down and stamping his feet because he wanted his decision, and “that’s what I’ll get”.

Start now


If you are the type of person who enjoys this type of article, please subscribe for one post per month, and please share it with three people who would enjoy it. Thank you so much for reading.

Since it was first posted, this article has also been published by Architecture and Governance magazine here under the same title.