Tag: Software Design

  • Book Summary – Leprechauns of Software Engineering by Laurent Bossavit

    “Everything everyone knows about anything indicates that this is untrue.” – Laurent Bossavit

    The words science and engineering are often used when discussing computers and software, but those labels are not well earned. Accidental technology, computer alchemy, or software-by-listening-to-unqualified-influencers would be just as apt. Don’t believe me? Read Laurent Bossavit, and then give me a call.

    Imagine you are at a serious software conference, full of serious people. A presenter confidently states that 75% of software is never used. The year is 1995, and the presenter is from the US Department of Defense. He explains that his department spent $35.7 billion on a software program (yes, that’s billion with a b), and that 75% of that software was never used. That’s roughly $26.8 billion wasted.

    This is an extraordinary fact. To double-check its veracity, I would expect a very detailed study of the $35.7 billion program’s output. That wasn’t done? Ok, that’s a lot of work. Surely they did significant analysis at another level, say a comprehensive survey of the software’s users? No? Ok, well, maybe a small sample of some users. They didn’t do this either? Sure, I get it; they are busy. We are all busy. They must have gotten a breakdown from the finance department. No. Ok. So I’m lost now. How did they get the figures of $35.7 billion and $26.8 billion again? A different team, in a different department, had written a paper 15 years earlier about a different project worth $6.8 million. They found that only 2% of that software was fit for purpose, and 75% was wasted. The Department of Defense took this 75% from the $6.8 million project and, 15 years later, simply applied it to their $35.7 billion program. They used 75% as if it were a law of physics.
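
    To spell out the arithmetic (my own illustration; the only figures taken from the book are the ones above), the extrapolation looks like this:

        0.75 × $6.8 million  ≈ $5.1 million   (the waste actually measured)
        0.75 × $35.7 billion ≈ $26.8 billion  (the waste claimed)

    A ratio measured on a $6.8 million sample was applied, unchanged, to a program more than 5,000 times its size.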

    Estimating software use is hardly rocket science, which the DOD should know something about. How did they make such an unfounded claim? The author of this book shows that these types of unscientific claims are common. In fact, the whole industry is filled with examples of bad science, poor reasoning and misuse of numbers. It’s an industry where some basic foundations do not exist.

    Ignore the title. This is an important book: one of those rare and special books whose underlying concepts can upend how we see the world. For this review, I will use a technique I’ve used previously: I explain the what, why, how, so what, and for whom. I will then describe how I found the book and give you some valuable takeaways from the text.

    What

    In technology, we adopt flawed claims too quickly because we lack training in how to interpret research. This book deals with various ‘established truths’ about software and debunks many of them by investigating their origins. The stories are entertaining and give great insight into how various organisations make fundamental errors. For me, however, the real value is in the methods the author uses to expose the claims. The book borrows techniques from serious scientific enquiry to think critically about software. If we simply learn that the 10x programmer doesn’t exist, we have gleaned only a surface understanding. If we understand how the author came to that conclusion, we are armed with a new technique for analysing every new claim. In a world where critical thinking is increasingly rare, this is a high-value skill.

    Why

    Why was this book written? It feels like the author simply became so frustrated with how bad things were getting in software development that he started writing about it and ended up with a book. There is a lack of critical thinking, and of training in critical thinking, in software. The author is trying to help right this wrong.

    It is impossible to research every single thing we believe. We could spend our whole lives researching and only get to a tiny sliver of knowledge. Instead, we need to satisfice — we need to do a little research and decide on certain ‘established truths’ to build our knowledge on. In software, we often find those truths by sifting through the output of tech influencers. Understanding which popularisers are trustworthy — and therefore which truths are valid — is a murky area. Software professionals have little training in this.

    We have all seen the hype cycle around a new technology: it appears in a few blog posts, then articles and podcasts. Books appear, and consultancies recommend it. Suddenly everyone wants to use it, typically without considering why, what it offers, and what the consequences are. ‘Everyone does this’ becomes the mantra. Execs expect to see it, and engineers move to companies that use the latest tools so they can ‘stay current’. This lasts until something newer comes along, and the cycle begins again. The author wants to break this cycle and show how many established truths are misleading, flawed, or just plain wrong. He aims to give us the tools to judge these truths for ourselves.

    How

    The author uses a case study method, evaluating some popular claims in software. Through this, he teaches us several techniques from academia that we can use to evaluate claims made about software. This appealed to me, having spent a couple of years in a PhD program learning some of these techniques for the first time. Unfortunately, I’d already spent 20 years in the industry, so I had a lot of unlearning to do. I believe everyone should have access to these techniques. A grounding in the kind of critical thinking this book is based on can change not only how you view your work but how you live your life. In a world where wild claims are thrown about with abandon, these techniques are vital tools.

    So What

    So what? Really? You might live your life without some basic critical thinking skills. You are likely to say things in meetings that are plain wrong. You are justifying what you do with ‘received wisdom’ that makes no sense. You might base your life’s work on mistakes. You are living a lie. You are an embarrassment.

    So that.

    For Whom

    For anyone interested in learning how to research scientific claims, and for anyone working in software, whether as an engineer, tester, manager, executive, or end user.

    How I found it

    I found it through an article by Hillel Wayne called Is Software Engineering Real Engineering? This is a self-published book, and I purchased it here.

    Valuable takeaways

    Be constantly vigilant about how you think. Humans are not logical machines; we are bags of emotion. Few of us read the research coming out of academia. Instead, we rely on peers, colleagues, and tech popularisers on the internet. Software conferences, books, articles, and the like are a helpful addition, but they often lack the critical rigour of academic research.

    We have become detached from scientific methods of enquiry. We must use the knowledge and tools from academia to allow us to become better thinkers. Academia needs to share some blame for this chasm. Academics don’t do a good enough job of communicating their research to the software community.

    We suffer from several critical thinking issues.

    Information cascade. If everyone else believes a claim, we frequently assume it’s true, without questioning the original claim.

    Discipline envy. We borrow our experimental design from medicine and call it evidence-based. The author cautions against this: it seems to be an attempt to sound impressive, but it frequently hides conceptual or methodological flaws. He points out that medical research has a raft of problems of its own. Meanwhile, there is a large body of research methods from social science that software largely ignores, to its discredit.

    Citation blindness. Essentially, we don’t do a good job of checking citations. If cited research supports a hypothesis, we assume the research actually backs it up. Unfortunately, some research papers are not really empirical, or they may support only a weaker version of the claim. Occasionally, they don’t support the claim at all, but merely cite other research that does. Far from being balanced, some research is opinionated, reflecting the authors’ biases.

    Anyone who thinks issues with critical thinking are a recent phenomenon needs to improve their critical thinking!

    Myths we mistakenly believe (for more info, read the book)

    10x programmer
    TDD is better than not TDD
    Waterfall is not Agile
    Bugs cost more the later you find them

    Flaws:

    The title. It’s a terrible title for a significant book, and it almost put me off before I began. A book for people who are serious about software, about thinking, deserves a better title. There is a second reason it bothered me. I am Irish, after all. So I traipse over to the wall and add this book to the list of Irish cultural gems bastardised and commercialised out of all recognition (leprechauns are currently in fourth place, just below Halloween, St. Patrick’s Day and Count Dracula).

    I hope the author re-publishes under a new title.

    This is a self-published book. The author’s name is on the cover (Laurent Bossavit), but no editor is mentioned. I wonder if an editor could have turned a good book into an even better one. The writing can be a little jumpy. Some arguments go on too long and then fade out. Some chapters are separated for no obvious reason. These are minor flaws, but it’s a shame, because the content and thesis behind the book are fascinating.

    Interlude

    There is a fantastic interlude: a cartoon called “How to lie”. I won’t spoil it here, but it contains a line that we should all use more liberally:

    “Everything everyone knows about anything indicates that this is untrue.”

    What we need to do

    Become scientists. The author believes we all need to both practice and study software development. This means becoming familiar with cognitive and social science to understand how people work, becoming familiar with the mathematical properties of computation to understand how computing works, and observing and measuring both laboratory conditions and real-world practice to gain a more in-depth understanding.

    We need to ask better questions. For a new article or book, does it quote sources? Have the authors read those sources? For the most important ‘truths’, can I read the original sources and make up my own mind?

    We must get above our work and think.

  • Technical Debt Is Not Debt; It’s Not Even Technical

    Co-authored with Dr Paidi O’Raghallaigh and Dr Stephen McCarthy at Cork University Business School as part of my PhD studies, and originally published by Cutter Consortium’s Business Agility & Software Engineering Excellence practice on 22 July 2021

    Take a minute and write an answer to the question, “What is technical debt?” Then read this article and reread your answer — and see if it still makes sense.

    Technical debt is a common term in technology departments at every company where we’ve worked. Nobody explained technical debt; we assumed it was a fundamental property of the work. We never questioned our understanding of it until we discovered a paper by Edith Tom et al. entitled “An Exploration of Technical Debt.” Turns out, we didn’t understand it at all.

    One major concern in academia is rigor. Academics like to get deep into a topic, examine the nuances, and bring clarity. After thoroughly reviewing over 100 seminal papers on technical debt, we saw it as an amorphous ghost, with enormous differences and inconsistencies in its use. Next, we began looking at it in practice, asking colleagues, ex-colleagues, and working technologists, but couldn’t find a satisfactory explanation for it there either. Ultimately, we went back to the original source to figure out the history — and get a sense of its evolution.

    One thing that is agreed on: the term technical debt came from Ward Cunningham. Cunningham is the inventor of the wiki and a tech legend. In the early 1990s, his team was building a piece of financial software, and he used a metaphor from the world of finance to explain to his manager how the team was working. As he later explained in a paper at the OOPSLA conference in 1992:

    A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

    The metaphor quickly became part of standard technology discourse. Because the conference focused on object-oriented development, it took hold in that community first. Popular tech authors such as Martin Fowler and Steve McConnell soon picked it up, helping it become part of the broader language of software development. Today, the use of “technical debt” has become commonplace, growing from a mention in a single paper in 1992 to over 320 million results from a Google search as of July 2021.

    Over time, Cunningham saw the term shift to signify taking a shortcut to achieve a goal more quickly, while intending to do a better job in the future. In 2009, dissatisfied with how the metaphor had mutated, he clarified the use of technical debt in a YouTube video. Cunningham disliked the notion that technical debt signified “doing a poor job now and a better one later.” This was never his intention. He stated:

    I’m never in favor of writing code poorly, but I am in favor of writing code to reflect your current understanding of a problem, even if that understanding is partial.

    But it was too late. By that time, the metaphor had outgrown his initial intent. It was out in the wild, excusing terrible decisions all over the globe. Technical debt now represented both debt taken on intentionally and the more insidious form, hidden or unintentional debt — debt taken on without the knowledge of the team. It had also moved past code and spread to areas as varied as technology architecture, infrastructure, documentation, testing, versioning, build, and usability.

    Technical debt allows practitioners to look at tech delivery through the lens of debt. Is this an appropriate lens? Debt repayment has one vital characteristic: it is easy to understand. It has three properties that are straightforward to grasp: principal amount, interest rate, and term (i.e., the length of time to repay). But with technical debt, there is no agreement on the principal, no agreement on the sum owed. There is no concept of an interest rate for technical debt, because technologists evaluate each project individually as a unique artifact. And term length isn’t a fixed concept in technical debt; in fact, Klaus Schmid even argues that future development should be part of the evaluation of technical debt.
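
    To see how tractable financial debt is by comparison, consider a minimal worked example (our own illustration, not from the original article). Given a principal P, a periodic interest rate r, and a term of n periods, the standard amortization formula fully determines the repayment M:

        M = P · r(1+r)^n / ((1+r)^n − 1)

    For example, P = $10,000 at r = 0.5% per month over n = 36 months gives M ≈ $304 per month. Technical debt offers no analogue: as argued above, none of the three inputs can even be agreed upon.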

    Enormous effort and energy have gone into trying to calculate an accurate number for technical debt, in both industry and academia. Unfortunately, trying to glue a direct mathematical representation to a metaphor seems to have failed. The idea of technical debt as a type of debt doesn’t hold up well in this light.

    So is it technical? This depends on whether we consider only the originating action, or the consequences that follow. If an aggressor punches a bystander in the face, we consider not only the action of the aggressor (the originating action) but also the injury to the bystander (the impact of that originating action). Through this lens, technical debt can only be technical if we consider where it originates, as opposed to where it has an impact. Technologists take on the originating action; the business suffers the impacts of those decisions. Technical debt affects:

    • Competitiveness by slowing/speeding up new product development
    • Costs (short-term decreases/long-term increases in development cycles)
    • Customer satisfaction
    • Whether a company can survive

    Once we realize that technical debt is a company-wide concern, we can no longer consider it technical. That label is too narrow and doesn’t communicate its significance. In fact, our ongoing research shows that technical debt may have an impact beyond the company, requiring an even broader view (its effect on society, as one example).

    The most important takeaway: we must broaden awareness of technical debt. In the same way that company executives examine financial cash flows and sales pipelines, we must communicate to this audience the consequences of taking on technical debt. Our greatest challenge is to find a shared language that helps business stakeholders understand the significance of decisions made in technology departments without their knowledge.

    Finally, look back at how you defined technical debt at the beginning of this article. Does your definition communicate the action or the impact? Is it suitable for a business audience? If not, what would be?