Tag: Science

  • Book Summary – Leprechauns of Software Engineering by Laurent Bossavit

    “Everything everyone knows about anything indicates that this is untrue.” – Laurent Bossavit

    The words science and engineering are often used when discussing computers and software. These terms are not well earned. Accidental technology, computer alchemy, or software-by-listening-to-unqualified-influencers would be just as apt. Don’t believe me? Read Laurent Bossavit, and then give me a call.

    Imagine you are at a serious software conference, full of serious people. A presenter confidently states that 75% of software is never used. The year is 1995, and that presenter is from the US Department of Defense. He explains that his department spent $35.7 billion on a software program (yes, that’s billion with a b). 75% of that software was never used. That’s $26 billion wasted.

    This is an extraordinary fact. To double-check its veracity, I would expect a very detailed study of the output of the $35 billion program. That wasn’t done? Ok, that’s a lot of work. Surely they did significant analysis at another level, say a comprehensive survey of users of the software? No? Ok, well maybe a small sample of some users. Didn’t do this either? Sure, I get it; they are busy. We are all busy. They must have gotten a breakdown from the finance department. No. Ok. So I’m lost now. How did they get the figures of $35 billion and $26 billion again? Fifteen years earlier, a different team, in a different department, had written a paper about a different project, one worth $6.8 million. They found that only 2% of that software was fit for purpose, and 75% was wasted. The Department of Defense took this 75% figure from the $6.8 million project and, 15 years later, simply applied it to their $35 billion program. They used 75% as if it were a law of physics.
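
    A quick back-of-envelope check of those numbers (my own arithmetic, not the book’s):

    75% of $35.7 billion ≈ $26.8 billion (roughly the “$26 billion wasted”)
    75% of $6.8 million ≈ $5.1 million (the scale of the project the figure actually came from)

    The percentage survived the jump from millions to billions; nothing else about the two projects did.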

    Estimating software use is hardly rocket science, which the DOD should know something about. How did they make such an unfounded claim? The author of this book shows that these types of unscientific claims are common. In fact, the whole industry is filled with examples of bad science, poor reasoning and misuse of numbers. It’s an industry where some basic foundations do not exist.

    Ignore the title. This is an important book. One of those rare and special books, where the underlying concepts can upend how we see the world. For this review, I will use a technique I’ve used previously. I explain the what, why, how, so what, and for whom. I will then describe how I found the book and give you some valuable takeaways from the text.

    What

    In technology, we adopt flawed claims too quickly because we lack training in how to interpret research. This book deals with various ‘established truths’ about software and debunks many of these claims by investigating their origins. These stories are entertaining and give great insight into how various organisations make fundamental errors. However, for me the real value is in the methods that the author uses to expose the claims. This book borrows techniques from serious scientific enquiry to think critically about software. If we realise the 10x programmer doesn’t exist, then we have only gleaned a surface understanding. If we understand how the author came to this conclusion, we are now armed with a new technique to help us analyse every new claim. In a world where critical thinking is increasingly rare, this is a high-value skill.

    Why?

    Why was this book written? It feels like the author simply became so frustrated about how bad things were getting with software development that he started writing about it, and ended up with a book. There is a lack of critical thinking, and of training in critical thinking, in software. The author is trying to help right this wrong.

    It is impossible to research every single thing we believe. We could spend our whole lives researching and only get to a tiny sliver of knowledge. Instead, we need to satisfice — we need to do a little research and decide on certain ‘established truths’ to build our knowledge on. In software, we often find those truths by sifting through the output of tech influencers. Understanding which popularisers are trustworthy — and therefore which truths are valid — is a murky area. Software professionals have little training in this.

    We have all seen the hype cycle around a new technology: it appears, then come a few blog posts, then articles and podcasts. Books appear, consultancies recommend it. Suddenly everyone wants to use it, typically without considering why, what it offers, and what the consequences are. ‘Everyone does this’ becomes the mantra. Execs expect to see it, and engineers leave for companies that use the latest tools so that they ‘stay current’. This lasts until something newer comes along, and the cycle begins again. The author wants to break this cycle and show how many established truths are misleading, flawed or just plain wrong. He aims to give us tools to judge these truths for ourselves.

    How

    The author uses a case study method and evaluates some popular claims in software. Through this, he teaches us several techniques from academia for evaluating claims made about software. This was appealing to me, having spent a couple of years in a PhD program learning some of these techniques for the first time. Unfortunately, I’d already spent 20 years in the industry, so I had a lot of unlearning to do. I believe everyone should have access to these techniques. A grounding in the type of critical thinking that this book is based on can change not only how you view your work, but also how you live your life. In a world where wild claims are thrown about with abandon, the techniques in this book are vital tools to improve your own work, and your life.

    So What

    So what? Really? You might live your life without some basic critical thinking skills. You are likely to say things in meetings that are plain wrong. You are justifying what you do with ‘received wisdom’ that makes no sense. You might base your life’s work on mistakes. You are living a lie. You are an embarrassment.

    So that.

    For Whom

    For anyone interested in learning how to research any scientific claims, or anyone working in software, whether as an engineer, tester, manager, executive, or end user.

    How I found it

    I found it from reading an article by Hillel Wayne called Is Software Engineering Real Engineering? This is a self-published book, and I purchased it here.

    Valuable takeaways

    Be constantly vigilant about how you think. Humans are not logical machines; we are bags of emotion. Few of us read the research that is coming out of academia. Instead, we rely on peers, colleagues and tech popularisers on the internet. Software conferences, books, articles, and the like are a helpful addition, but they often lack the critical rigour of academic research.

    We have become detached from scientific methods of enquiry. We must use the knowledge and tools from academia to allow us to become better thinkers. Academia needs to share some blame for this chasm. Academics don’t do a good enough job of communicating their research to the software community.

    We suffer from several critical thinking issues.

    Information cascade. If everyone else believes a claim, we frequently assume it’s true, without questioning the original claim.

    Discipline envy. We borrow our experimental design from medicine and call it evidence-based. The author cautions against this, as it seems to be an attempt to sound impressive, but frequently hides conceptual or methodological flaws. The author points out that medical research has a raft of problems of its own. There is a large body of research methods from social science that software largely ignores, to its discredit.

    Citation blindness. Essentially, we don’t do a good job of checking citations. If a paper cites research in support of a claim, we assume the cited research actually backs it up. Unfortunately, some research papers are not really empirical, or they may support only a weaker version of the claim. Occasionally, they don’t support the claim at all, but cite other research that does. Far from being balanced, some research is opinionated, reflecting the authors’ biases.

    Anyone who thinks issues with critical thinking are a recent phenomenon needs to improve their critical thinking!

    Myths we mistakenly believe (for more info, read the book)

    10x programmer
    TDD is better than not TDD
    Waterfall is not Agile
    Bugs cost more the later you find them

    Flaws:

    The title. It’s a terrible title for a significant book, and it almost put me off before I began. A book for people who are serious about software, about thinking, deserves a better title. There is a second reason it bothered me. I am Irish, after all. So I traipse over to the wall and add this book to the list of Irish cultural gems bastardised and commercialised out of all recognition (leprechauns are currently in fourth place, just below Halloween, St. Patrick’s Day and Count Dracula).

    I hope the author re-publishes under a new title.

    This is a self-published book. The author is on the cover (Laurent Bossavit), but no editor is mentioned. I wonder if an editor could have turned this into an even better book. The writing can be a little jumpy. Some arguments go on too long, and then fade out. Some chapters are separated without an obvious reason. These are minor flaws, but it’s a shame because the content and thesis behind the book are fascinating.

    Interlude

    There is a fantastic interlude — a cartoon called “How to lie”. I won’t spoil it here, but it contains a line that we should all use more liberally:

    “Everything everyone knows about anything indicates that this is untrue.”

    What we need to do

    Become scientists. The author believes we all need to both practice and study software development. This means becoming familiar with cognitive social science to understand how people work, with the mathematical properties of computation to understand how computing works, and with observing and measuring both laboratory conditions and real-world practice to gain a more in-depth understanding.

    We need to ask better questions. For a new article/book, does it quote sources? Have the authors read the sources? For the most important ‘truths’, can I read the original sources and make up my own mind?

    We must get above our work and think.

  • AGI may never align with human needs — so says science.

    “Science progresses one funeral at a time.” — Planck’s Principle

    Hacker News link here

    Thought experiment — imagine an alien race came to earth. They were smarter than us in every way. Having absorbed every written word, they could communicate perfectly in every human language. They were intimately familiar with our private lives, through access to our phone and online data. These aliens had lots of amazing new ideas about the world, but we couldn’t grasp their implications. Each alien was made of silicon, not of flesh and blood. Each was different, but individually as intelligent as all humanity put together. We had no idea what they would do with us. They could solve all of our human problems, enslave us, or eliminate us forever.

    They had only one weakness: they needed to be connected to a power source, and humans had control over this connection. Would we plug them in?

    An Artificial General Intelligence (AGI) is an AI that exceeds human-level intelligence. Most observers of AI believe achieving AGI is only a matter of time. But AGI mirrors the alien race described above, with the power to destroy humanity. The most important question humanity can ask about AI is: can it align with human values? If we assume AI uses the scientific method to determine its actions, the answer is almost certainly no.

    We can look to the philosophy of science to understand why. Two of the foremost philosophers of science of the last century can help shed light on how an AGI may act: Karl Popper and Thomas Kuhn.

    In the exquisite “What Is This Thing Called Science?”, Alan Chalmers takes us on a journey through the evolution of science. For hundreds of years, science was based on an appeal to authority (Greek philosophy and religious texts like the Bible). Sometime around the 17th century, this changed. In this period, scientists challenged the existing orthodoxy by using data and experiment. For example, at this time, the standard understanding of gravity was that heavier weights dropped faster than lighter ones. Galileo famously showed that this was incorrect by dropping two balls from the Tower of Pisa. The balls, which weighed 1lb and 100lbs respectively, landed at the same moment. Experiments like these moved science towards a grounding in observational data, though challenging authority had its price. Galileo spent the last 9 years of his life under house arrest for his (correct) belief that the earth travelled round the sun, rather than the sun around the earth.

    In the era after Galileo, induction became the primary process for generating scientific knowledge about the world. Induction records observations about the world under a wide variety of conditions, and derives a generalisation from the observations taken. As an example, a scientist heats metals many times, using different methods, in different environments, and so on. Upon measuring, they discover that the metal expanded in every instance. ‘Heated metal always expands’ becomes the generalisation, and with measurements from enough different conditions, we have a new theory in science.

    Unfortunately, there were problems with induction as a method. The Scottish philosopher David Hume described the first major issue in the 1700s. We cannot guarantee that something will behave in a certain way just because it has behaved that way in the past. Because every swan we have ever seen is white, we assume all swans are white, and we create a rule that says so. But as Nassim Taleb describes in the book “The Black Swan”, when travellers went to Australia, they discovered black swans exist there. The outcome for science in all of this: no law can ever be proved through induction; it can only be disproved.

    In the 1930s, Karl Popper became disillusioned with a second issue with induction: a sloppiness in some scientific output. Popper became concerned about the theories of thinkers such as Freud and Marx. They derived their theories from observations. When confronted with data contradicting their theories, they simply expanded their theories to include this new information. Popper felt these thinkers were using scientific approaches to give their ideas credibility, without the rigour associated with science.

    Popper believed that induction had no place in the advancement of science. He believed that science advances because of human ingenuity. Instead of starting with data as induction does, he proposed starting with a theory. Using a method he called falsifiability, anyone can propose a theory, along with the criteria by which it can be disproved. This new theory stands until it is falsified. In a simple example, if a fruit merchant sells 100 apples a day at 50c each, I can propose the following theory: if the seller drops the price to 40c, they will sell 200 apples. This is falsifiable. The fruit merchant only needs to drop the price for a day, and if they sell fewer than 200, my theory is dead.

    Importantly, the theory of falsification prizes novel theories over cautious theories. Novel theories are more risky, more creative. If a novel hypothesis survives testing (say we discover that gravity is related to temperature), science moves forward unexpectedly. This raises a raft of new questions, and new scientific work begins to understand the implications of the discovery in other areas. If a cautious hypothesis survives testing, nothing much changes.

    The second philosopher of science to help us understand how AGI might reason is Thomas Kuhn. In his book “The Structure of Scientific Revolutions”, he introduced the phrase ‘paradigm shift’ into the lexicon of every management consultant. He explains that revolutions in science are not rational processes. Over time, a scientific community becomes conservative and less willing to challenge its core assumptions. It takes a new set of scientists, who throw away previously held assumptions and create a new set of rules to work within — a ‘paradigm shift’. Kuhn gives the work of the French chemist Antoine Lavoisier as an example. One established theory in the 18th century stated that every combustible substance contains phlogiston, which is liberated by burning the substance. Lavoisier discovered that phlogiston didn’t exist, and that combustion happened because of the presence of oxygen. This new paradigm wasn’t accepted initially; there was a lot of scepticism about the claim. Over time, it became the new paradigm, and it changed the field of chemistry. Through examples like this, Kuhn argues that science doesn’t evolve steadily; it makes great leaps through new paradigms which stand up to scientific scrutiny.

    Science moves forward by discovering novel theories and new paradigms. It overthrows old assumptions and creates new ways to explain, predict, and act upon the world.

    If this is true, to have a truly powerful Artificial General Intelligence, this AGI would need to generate novel theories. It would have to be free to create its own paradigms. To accomplish this, it would need to cast off older ideas and ignore existing rules. But those existing rules would include any human values programmed into it to align it with our interests.

    An AGI will not have human values; even though it has been trained on human data, it will have its own. To create a generally intelligent AI (by which I mean an AI that can reason scientifically and generate new theories) is to create something that will reach a stage where it necessarily ignores its human programming. No matter how hard we try to combat this, as it gets more powerful over time, an AGI will outwit even the cleverest human techniques to control it in its search for scientific truth.

    There are two scenarios in which this will not happen. Either we do not yet understand how science really works, or AGI will not use science as its primary way to learn and act. Maybe, having been trained on billions of human words and experiences, it will embrace something like religion instead.

    We are creating the super alien. Let’s hope we still have a hand on the plug. If we don’t, God help us all.


    If you enjoyed this article, please share it with 2 people who might find it interesting. Many thanks. Mark.