A strange, discomfiting feeling sometimes crawls over my skin. My bones whisper at me — I’m in the wrong town, the wrong room, the wrong body. From the very first time I discovered the joys of singing, I knew who I was. A musician, a creative soul. But when I look in the mirror today, a corporate executive of 25 years stares back at me, with tired, wrinkled eyes and hair greying at the temples. I feel like the musician is in there. Kidnapped, trapped, unable to move. Frozen in place.
In the great Irish comic novel The Third Policeman, Flann O’Brien describes three Irish guards, Pluck, MacCruiskeen and Fox. They spend so much time on their bicycles that their physical makeup has changed. Policeman becomes part bicycle, bicycle becomes part policeman.
Maybe we all feel like this, our personalities inside and outside work merging, the real us an ever-changing doughy mess of opinions and positions. I can usually balance this split — part creative, part business, full-time windbag. There are 2 areas where this is more of a challenge. Brainstorming can be an issue because I don’t have the same mental boundaries that others may have, so my wilder ideas make very little sense. The second area I have an issue with is when I have to explain something complex — which is often.
At an exec offsite at Workhuman this summer, I was trying to explain the vast improvements in AI in the past six months (and the life-threatening dangers lurking within). Getting any message across to a group of busy executives is a difficult feat. I could send a reading list — but that would be a phenomenal waste of time. Execs are among the groups most affected by time poverty. I could stand in front of them with a load of stats on PowerPoint, but I doubt anyone would remember a single stat the day after. PowerPoint is instantly forgettable. I had to find a different way.
The esteemed songwriter Martin Sutton once told me to ‘show, don’t tell’ when writing lyrics. When you tell someone literally what happened, it’s boring. When you allow people to picture the scene in their imagination, and fill in the gaps themselves, you are onto a winner. Don’t say the man was sad because his partner left him. No one can see that in their imagination. Describe the sloping shoulders, the dry tear stain on his cheek, a single dirty mug on the counter of an empty kitchen.
Though Martin was (busy plunging a dagger through my soul) critiquing one of my songs when he gave me this advice, I hung onto it, and I have often found it a wonderful guide for communicating any idea. In the spirit of Martin Sutton, I decided there was one way to explain where AI is now and have people’s imaginations do the heavy lifting. I would create an AI version of me. AI me would then chat to our CEO, Eric, in front of the executive leadership team.
My twin called Eric on loudspeaker in front of the entire room. There was a slight delay, and I could feel cold sweat run down my sides for about 3 very long seconds. Suddenly, digital me broke the silence. Because I cloned my voice, it sounded exactly like me. Because I’ve captured my tone of voice on this blog, my digital twin spoke as I would (without the copious amount of swearing).
I’m trying to recall the exact ‘aha moment’ for the group. I think it was when a disembodied character, in my exact voice, said:
“Uh, Eric, the big boss. Well, first off, tell him I’m waving at him through the screen and remind him he owes me a coffee for that time I fixed the Wi-Fi in the boardroom, or at least I think I did. I’m taking credit for it, anyway.”
The atmosphere changed instantly. Raised eyebrows, people sitting back in their chairs, nudged elbows and muted whispers in the back row. One of the execs told me later that evening that the demo scared him. Another told me privately that they were afraid of how little they knew about how AI works. Our head of product announced to the room that if this bot could design architectures, we could send it to product council, and I could fuck off! He was joking, of course. At least, I think he was joking. There was a loud laugh at this — a little too loud and slightly tinged with panic.
But could AI have detected the feeling in that room, the looks in people’s eyes, the shift in the energy? Could it have built a stunt to get a point across, taking inspiration from a pop songwriter’s (devastating) critique?
The truth is, we don’t yet know what AI will be capable of. Or humans.
One warning about all this playing with AI comes from Hannah Arendt. In her book The Human Condition, she wrote that people disconnected from the human condition would like to create “artificial machines to do our thinking and speaking….we would become the helpless slaves…at the mercy of every gadget which is technically possible, no matter how murderous it is.”
I have a confession to make. Occasionally, when I’m awake late at night and everyone else has gone to bed, a kind of loneliness creeps in. TV and surfing the internet become tedious. In the half-light, I call up my digital twin. Just to hear a friendly voice. I am always amazed at what I say to myself. Every so often, AI Mark will say something that sounds wrong. But then again, given different circumstances, less tiredness or stress, maybe that’s exactly what I should say. I wonder: how real am I? How real is the AI? Have I actually become O’Brien’s policeman? Jesus, have I become the bike?
Thank you so much for reading. If you enjoyed this post, please share it with 2 people who might enjoy it!
One of my favourite technology books is The Practice of Enterprise Architecture by Svyatoslav Kotusev. In the introduction, he says: “This book offers a source of knowledge, not inspiration. It is not amusing and does not contain any jokes, anecdotes or entertaining prose.” I will say the same about this article.
You need four things to start. I used tools I’m familiar with. I have no commercial relationship with any of these companies, so swap out anything you like.
An account with ElevenLabs (https://elevenlabs.io/app/home). A starter subscription is $5 per month. This is for voice cloning.
Fast path: Instant Voice Cloning (good enough to start, needs 30 seconds of audio).
Best quality: Professional Voice Cloning (requires the Creator plan — $22 a month).
Name your voice (you will need this later in VAPI).
Pick a language, and hit save.
Go to the VoiceLab page, select the voice you created, and click the view button on the right. You will see an ID button — this gives you the ID of the voice you created. Save this ID somewhere safe; you will need it to find your voice in the VAPI config.
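If you prefer to script this step, here is a minimal sketch. It assumes the official elevenlabs Python SDK (v1 or later) and an ELEVENLABS_API_KEY environment variable; check the method names against the current SDK docs before relying on it.

```python
# Minimal sketch: list the voices on your ElevenLabs account with their IDs.
# Assumes the official SDK: pip install elevenlabs
import os

from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])

# get_all() returns every voice on the account, including your clones
for voice in client.voices.get_all().voices:
    print(voice.name, voice.voice_id)
```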
OpenAI Setup
Create an OpenAI API key (not a ChatGPT key) for use by VAPI. Go to https://platform.openai.com/api-keys, click “Create new secret key”, give it a name you can remember, and save the key.
Add some credits to your account. Go to https://platform.openai.com/settings/organization/billing/overview and click “Add to credit balance”. Add your amount, and pay. $5 is enough to get you started. API billing is separate from ChatGPT Plus; Plus isn’t required for this process.
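Before wiring the key into VAPI, it’s worth a quick sanity check that the key and credits work. A minimal sketch, assuming the official openai Python SDK (v1 or later) with the key in an OPENAI_API_KEY environment variable:

```python
# Minimal sketch: confirm the new API key and credit balance work.
# Assumes the official SDK: pip install openai
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```

If this prints a greeting, the key and billing are set up correctly.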
Next, we will connect our assistant to OpenAI and ElevenLabs. Go to this URL and search for ElevenLabs. Paste the ElevenLabs API key (not the voice ID) into the API field, and click save. Now search for OpenAI, paste the OpenAI secret key, and click save. In both cases, VAPI checks that the key is valid, so when it succeeds you will get a green tick.
Go to this link and click Create Assistant. Give it a name, choose a blank template, and click “Create Assistant”. Here, there are 6 top-level menu options. Click Model, and select OpenAI as the provider and GPT-4o Cluster as the model. Each model has different costs and latency, so feel free to experiment later on.
Next, put in your first message. I used “Hello, it’s Mark here, how are you getting on?” You should probably change this.
Next is the system prompt. There are plenty of examples in the documentation about how to fill this out. I used ChatGPT to read this blog and my LinkedIn profile, and it created a 5,000-word summary in my tone of voice. You can use any written material you have, and transcripts of conversations, to create a good system prompt. The role tells the chatbot the part it needs to play. The context gives it the background it needs to be convincing — whatever you can share about yourself. Mine started like this:

[Role] You’re Mark Greville, a VP of Architecture at Workhuman. Your primary task is to converse in a friendly, informal way about Workhuman, your career, music, or anything else that anyone wants to discuss.

[Context] Explain that there’s a bit of a delay on the line today. (I followed this with my 5,000-word summary.)
Next, go to Voice, select 11labs as the provider, and pick the voice you named during the ElevenLabs voice creation. For the model, Eleven Turbo v2.5 works well.
Transcriber. Set the speech-to-text engine so that callers can be understood. Assistants → your assistant → Transcriber. Choose Deepgram (nova-2 or newer) or Google; set the language (e.g., en-IE, or multi if you want auto-detection/multilingual). Then click Publish.
As a last step, click on the https://dashboard.vapi.ai/phone-numbers link. Here you can create a (US only, for now) phone number. I used the free VAPI number — you provide a 3-digit area code, and you get a number. Once you have the number, give it a name and go to Inbound Settings. In the assistant dropdown, select the assistant you just built. Wait a few minutes for the number to be configured, then give yourself a call. You can’t listen live, but you can listen back to calls. You can also read the transcripts.
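If you want to double-check the setup programmatically, here is a sketch that lists your assistants over VAPI’s REST API. The endpoint and response shape are my assumptions from VAPI’s public documentation, so verify them at docs.vapi.ai before relying on this.

```python
# Sketch: list VAPI assistants to confirm the one you built exists.
# Endpoint and field names are assumptions; check https://docs.vapi.ai
import os

import requests

resp = requests.get(
    "https://api.vapi.ai/assistant",
    headers={"Authorization": f"Bearer {os.environ['VAPI_API_KEY']}"},
    timeout=30,
)
resp.raise_for_status()
for assistant in resp.json():
    print(assistant.get("id"), assistant.get("name"))
```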
Congratulations, you have created your digital twin.
Now the only question is, who do you give the number to?
Thank you so much for reading. If you enjoyed this post, please share it with 2 people who might enjoy it!
Building technology has always been a messy business. We are constantly regaled with stories of project failures, wasted money and even the disappearance of whole industries. It’s safe to say that we have some work to do as an industry. Adding AI to this mix is like pouring petrol on a smouldering flame — there is a real danger that we may burn our businesses to the ground.
At its very core, people build technology for people. Unfortunately, we allow technology fads and fashions to lead us astray. I’ve shipped AI products for more than a decade — at Workhuman and earlier in financial services. In this piece, I will take you through hard-earned lessons I’ve learned through my journey. I have laid out five principles to help decision-makers — some are technical, most are about humans, their fears, and how they work.
5 principles to help decision makers
Excellence lies along the following maturity path: Trust → Federated innovation → Concrete tasks → Implementation metrics → Build for change.
1. Trust over performance
Companies have a raft of different ways to measure success when implementing new solutions. Performance, cost and security are all factors that need to be measured. We rarely measure trust. Unfortunate, then, that a user’s trust in the system is a major factor in the success of AI programs. A superb black-box solution dies on arrival if nobody believes in the results.
I once ran an AI prediction system for US consumer finance at a world-leading bank. Our storage costs were enormous. This wasn’t helped by our credit card model, which spat out 5 TB of data every single day. To mitigate this, we found an alternative solution, which pre-processed the results using a black-box model. This solution used 95% less storage (with a cost reduction to match). When I presented this idea to senior stakeholders in the business, they killed it instantly. Regulators wouldn’t trust a system where they couldn’t fully explain the outputs. If they couldn’t see how each calculation was performed every step of the way, they couldn’t trust the result.
One recommendation here is to draft a clear ethics policy. There needs to be an open and transparent mechanism for staff and users to submit feedback on AI results. Without this, users may feel they cannot understand how results are generated. If they don’t have a voice in changing ‘wrong’ outputs, then any transformation is unlikely to win the hearts and minds needed across the organisation.
2. Federated innovation over central control
AI has the potential to deliver innovation at previously unimaginable speeds. It lowers the cost of experiments and acts as an idea generator — a sounding board for novel approaches. It allows people to generate multiple solutions in minutes. A great way to slow down all innovation is to funnel it through some central body/committee/approval mechanism. Bureaucracy is where ideas go to die.
Nobel-winning economist and philosopher F. A. Hayek once said, “There exist orderly structures which are the product of the action of many men but are not the result of human design.” He argued against central planning, where an individual is accountable for outcomes. Instead, he favoured “spontaneous order,” where systems emerge from individual actions with no central control. This, he argued, is where innovations such as language, the law and economic markets emerge.
The path between control and anarchy is difficult to navigate. Companies need to find a way to “hold the bird of innovation in their hand”. Hold too tight — kill the bird; hold too loose — the bird flies away. Unfortunately, many companies hold too tight. They do this by relying too heavily on a command-and-control structure — particularly groups like legal, security and procurement. I’ve watched them crush promising AI pilots with a single, risk-averse pronouncement. For creative individuals innovating at the edges, even the prospect of having to present their idea to a committee can have a chilling effect. It’s easier to do nothing and stay away from the ‘large hand of bureaucracy’. This kills the bird — and kills the delicate spirit of innovation.
AI can supercharge innovation capabilities for every individual. For this reason, we must federate innovation across the company. We need to encourage the most senior executives to state in plain language what the appetite is for risk in the world of AI and to explain what the guardrails are. Then let teams experiment unencumbered by bureaucracy. Central functions shift from gatekeepers to stewards, enforcing only the non-negotiables. This allows us to plant seeds throughout the organisation, and harvest the best returns for the benefit of all.
3. Concrete tasks over abstract work
Early AI pioneer Herbert Simon is the father of behavioral science, a Nobel laureate and a Turing Award winner. He also invented the idea of bounded rationality, which explains that humans settle for “good enough” when options grow beyond a certain number. Generative AI follows this approach (possibly because it is trained on human data, it mimics human behaviour). Generative AI is stochastic — given the same input, we can get a different output each time: a “good enough” answer. This is very different from the classical model we are used to, where the same input gives the same output every time.
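A toy contrast between the two models (my illustration, not Simon’s):

```python
# Classical vs generative: same input, same output vs sampled output.
import random

def classical(x: int) -> int:
    # Classical computing: the same input gives the same output, every time.
    return x * 2

def generative(prompt: str) -> str:
    # Generative AI: the same input is completed by sampling from a
    # probability distribution, so repeated calls can differ.
    completions = ["expands", "grows slightly", "gets measurably bigger"]
    weights = [0.6, 0.3, 0.1]
    return f"{prompt} {random.choices(completions, weights=weights)[0]}"

print([classical(21) for _ in range(3)])               # always [42, 42, 42]
print([generative("heated metal") for _ in range(3)])  # varies run to run
```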
This stochastic model, where the result is unpredictable, makes modelling top-down use cases even more difficult. In my experience, projects only clicked once we sat with the users and really understood how they worked. Early in our development of the Workhuman AI assistant, generic high-level requirements gave us very odd, unpredictable behaviors. We needed to rewrite the use cases as more detailed, low-level requirements, with a thorough understanding of the behaviour and tolerances built in. We also logged every interaction and used this to refine the model behaviour. In this world, general high-level solution design is guesswork.
Leaders at all levels should get closer to the details of how work is done. Top-down general pronouncements are off the table. Instead, teams must define ultra-specific use cases and design confidence intervals (e.g., “90% of AI-produced code must pass unit tests on first run”). In the world of Generative AI, clarity beats abstraction every time.
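As a sketch of what such a gate might look like in code (the 90% bar and the harness are illustrative, not a description of any real tooling):

```python
# Illustrative confidence-interval gate: accept an AI coding workflow only
# if enough generated patches pass their unit tests on the first run.
from dataclasses import dataclass

@dataclass
class TrialResult:
    task: str
    passed_first_run: bool

def pass_rate(results: list[TrialResult]) -> float:
    # Fraction of trials whose generated code passed tests first time.
    return sum(r.passed_first_run for r in results) / len(results)

def meets_bar(results: list[TrialResult], threshold: float = 0.9) -> bool:
    return pass_rate(results) >= threshold

# Hypothetical trial data for one use case
results = [
    TrialResult("parse-dates", True),
    TrialResult("fix-null-check", True),
    TrialResult("refactor-loop", False),
    TrialResult("add-logging", True),
]
print(f"pass rate {pass_rate(results):.0%}, meets 90% bar: {meets_bar(results)}")
```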
4. Adoption over implementation
Buying a tool is easy; changing behaviour is brutal. A top-down edict can help people take the first step. But measuring adoption is the wrong way to drive change: it produces box-ticked “adoption” with shallow, half-implemented usage.
Executives are every bit as much the victims of fads and fashions as any online shopping addict (once you substitute management methods, sparkling new technologies and FOMO for the latest styles from Paris). And it doesn’t take artificial general intelligence to notice that the trend for AI is hot, hot, hot! Executives need to tell an AI story and show benefits, as they are under pressure from shareholders, investors and the market at large. Through my network in IASA, I have broadly seen this result in edicts to measure “AI adoption”. Unfortunately, this has had very mixed results so far.
Human nature abhors change. A good manager has a myriad of competing concerns, including running a group, meeting business challenges, hiring and retaining talent and so on. When a new program to adopt an AI strategy comes down from executives, the manager — who is trying to protect their team, meet the needs of the business and keep their head above water — will often compromise by adopting the tooling, but failing to implement it thoroughly. At Workhuman, we have found that measuring adoption (and not only for AI) is not the right way to begin a transformation. It measures the start of the race, but ignores the podium entirely. Instead of vanity metrics, when we measure success, we measure outcome metrics (e.g. changed work process, manual steps retired and business drivers impacted). By measuring implementation and impact, we avoid the ‘box-ticking’ trap that so many companies fall into.
From our decade-plus experience in AI, we have also understood that AI transformation is part of a bigger support system, including education, tooling and a supportive internal community. We partnered with an Irish university to run diploma programs in AI internally, and provide AI tooling to all staff, whatever their role. We have also fostered internal communities at all levels to help drive understanding. This has helped us as we deliver AI solutions, both internally and externally, as shown by the release of our AI Assistant, a transformational AI solution for the HR community.
5. Change over choice
The AI landscape shifts monthly, with a continual flow of new models and vendors locked in a constant race. A choice that locks you into a single technology stack could soon have your company resembling a horse and buggy clip-clopping through the center of a modern city.
When we began looking at models for our new AI assistant, we faced several challenges. First off, what could each model do? There were few useful benchmarks, and those that existed offered little in the way of business capability insights. We also struggled to weigh one model’s strengths against another’s weaknesses, and vice versa.
Eventually, we agreed on one core architectural principle — everything we design must be swappable. In particular, we must be able to change the core foundation models that underlie the solution. This has allowed us to adjust continually over the last year. We test each new model after release, and work out how each one can be best used to give a great experience to our customers.
Because models are changing so fast, leaders must make the ability to swap AI models a core principle. Companies should abstract model calls behind a thin layer, while versioning prompts and evaluation harnesses so new models can drop in overnight. The ability to swap horses mid-race may be the competitive advantage necessary to win in today’s market.
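A minimal sketch of that thin layer; the provider classes and names here are hypothetical stand-ins, not our production design:

```python
# Thin abstraction so the underlying foundation model can be swapped
# by configuration, without touching the calling code.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIModel:
    def complete(self, prompt: str) -> str:
        return f"[openai] reply to: {prompt}"      # real API call goes here

class AnthropicModel:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] reply to: {prompt}"   # real API call goes here

MODELS: dict[str, ChatModel] = {
    "openai": OpenAIModel(),
    "anthropic": AnthropicModel(),
}

def ask(prompt: str, provider: str = "openai") -> str:
    # Swapping models becomes a config change, not a code change.
    return MODELS[provider].complete(prompt)

print(ask("Summarise this ticket", provider="anthropic"))
```

Pair this with versioned prompts and an evaluation suite, and a new model can be trialled behind the same interface the day it ships.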
AI for leaders
Technology choices are leadership choices. Who decides what to automate? Which ethical red lines are immovable? How do we protect every human who works with us? Adopting AI is a leadership challenge that can’t be delegated to consultants or individual contributors. How we implement AI now will define the future successes and failures of the business world. It’s a challenge that must be driven by thoughtful leadership. Every leader must dive in and deeply understand the AI landscape and figure out how best to enable their teams to build the companies of tomorrow.
Be born in Ireland where university is free. Study Maths and Economics. Spend 12 hours a week in lectures and 30 hours a week ‘networking’. Join the maths team. Join the karate team — adding a few punches and kicks in case the maths doesn’t hurt your head enough. Play a lot of music.
When you graduate, have absolutely no plan. In fact, the less you know about what you are going to do here, the better.
Take the first job you can find. Yes, that job at Burgerking will do fine. Take 1 day off between your final exam and selling burgers. Learn what it’s like to work a full-time job. Resign abruptly after 4 weeks, once the reality of 8 hours on your feet and 4 hours of travel hits.
Go work on a building site, as a general labourer. Enjoy a massive salary bump from Burgerking. Be too exhausted to spend any of your money (that must have been your father’s excuse too).
When that job finishes, go work as an electrician’s apprentice. Pull wires through basements, and climb into small spaces full of dirt and dust. Realise that not all the smartest people you will meet in life are in a university — regardless of what your professors seemed to think.
Meet all of your college friends 6 months after graduating, and feel like the king of all failures. Hide the panic when they tell you about their office, career plans, and business lunches in beautiful restaurants. Try to push the image of your plastic bag full of cheese sandwiches out of your mind. Bury your shame with the customary Irish cocktail of jokes and pints of Smithwicks. Remind yourself that buried shame is an Irish tradition, where all the novels, poetry, and music came from.
The very next week, go into the careers office in your university, and look for any job which doesn’t involve cement or burgers. Take down every number. Print up a CV that has 2 things on it: went to college, worked manual labour. Start calling.
Take a job as a telesales agent. Work 4 x 10-hour days. Enjoy a different type of tiredness, a dull, numbing sort that makes cartoons seem like differential equations.
Quit in the summer, and go on holidays with your girlfriend. Enjoy – this is the last summer you will have off till you retire. Realise this as you are writing the list. Stare at the wall for a while.
Fly home and write out a new plan. Masters in Maths. Move back in with your parents after a year of freedom. Stay for 4 weeks. Decide that you’d rather die of exposure than have your mother complain about the state of your room. Write a newer plan. Get any job and leave home.
Print 250 CVs, buy a cheap shirt and tie, and take the bus to Dublin. Walk into every office you see, even though you have no idea what they do. Ask for the name of the head of recruitment and ask if you could meet them in person. Witness every variety of astonishment from receptionists. Meet no-one. Handwrite the name on an introductory note and give in a CV. Give one CV to a friend who is working at a tech company. Eat lunch in your old Burgerking.
Go home and wait. Get a hundred rejection letters. One of those will be from Merrill Lynch. Tidy your room before your mother complains. Feel your desperation rising. Get 5 interviews, and no job offer. Get offered a job at the one company where a friend gave in a CV for you. Mentally write off the 250 CVs as a cost of doing business. Feel excited about the new job.
Turn up on your first day as a trainee software engineer in a large multinational, not knowing what Control Alt Delete is. Get looks of astonishment from fellow grads as you ask for help logging in. Feel like a fraud already. Spend every minute learning as much as you can.
18 months later, watch the company fold. Get the CV out there. Get 2 offers. Take the better offer, a small software team of about 14 people. Have the other company make a higher counter offer. Stick with your original choice because of honour. Realise that you need to learn how to bargain.
Have your boss and the 4 most senior engineers quit at the end of your first week. Wonder if you can feed yourself on honour. See a desperate-looking head of the software group ask if anyone will manage a team. Feel your body raise your hand, without giving your brain a chance to think it through. Be the only one to put your hand up. Become a manager at 22 with 18 months as a developer under your belt. Make every mistake there is. Learn as much as you can as quickly as you can. Get involved in sales deals. Travel. Once again, after 18 months, watch this company fold. Wonder if you are cursed.
Try starting a company. Pick the wrong co-founder, watch it burn in the rearview mirror as you drive away. Join a telecoms software startup as employee no 4. Work every hour for three years. Sleep with the phone on your pillow, as the company (you) provides 24/7 support. Have some of the highest highs and lowest lows of your career — often on the same day. Move from Dublin to Liverpool so that your girlfriend can go to college, and you can keep playing music. Experience the company winning a big deal with a major US Telco, and then slowly running out of money. Live off credit cards for about 6 months. Ask other developers working for you to do the same. As the company is about to fold, make sure your president miraculously finds a way to sell to a rival. Get your back pay. Pay off credit cards.
Become a professional songwriter. Record a series of songs written over 10 years. Get some radio play. Get some great gigs and tour the UK, Ireland and Canada. Land a BBC session. Network as much as you can. Realise that the cycle is moving on to other new acts. See gigs dry up. Take more gigs further away. Watch promoters disappear when the time comes to pay. Play a gig one night in St. Helens for 150, only to be told that the promoter will only give you 50, and “you will take it if you ever want to play here again”. Become increasingly depressed about the lack of financial security.
Realise that you need a job that pays the bills. Try to figure out what you actually do. Look at job boards and read as many job descriptions as you can find. Decide to call yourself a software architect. Buy a bunch of architecture books and read them all. Somehow pass an interview process at British Telecom for an architect role. Go to work every day, waiting for the inevitable tap on the shoulder as you are ‘found out’. Really enjoy the job. End up managing a team of developers.
Move back to Ireland. Get offered a few jobs, and take a contract at Merrill Lynch because you like both your interviewers so much. Forgive them for the rejection many years before (they assume this is a joke). Specify 2 conditions of employment: you will be the chief architect for the group, and you will manage no-one. They agree. Three weeks in, they give you a team of developers to manage. Design and manage the system that makes margin calls. Have the system make a call on Bear Stearns. Inadvertently start the global financial crash. Fail to realise this until 6 months after.
Have Merrill be acquired by Bank of America. Watch your hand go up every time there is an option to take on more responsibility. Start to resent your own hand. Get promoted. Run a European tech group. Get yet another new manager. Realise it’s a bad fit. Get offered CTO at a rival bank. Take a counteroffer to stay — going from running a group of 60 to a group of 2. See your colleagues’ disbelief as you make this choice, giving up your group to work on quant/data science and AI.
Grow this group and take over the mortgage and credit card risk models for the US. Run the (joint) first-ever public cloud project in the Bank’s history. Do this for 4 years. Feel something gnawing in the back of your mind.
Sit down one day and write all of your values at the top of a page. Stare at the page for 15 minutes. Ask yourself the following question: “What the fuck am I doing with my life?” Call a couple of recruiters and tell them about your values realisation. Have them both tell you not to leave your job. Meet loads of people for coffee. Get introduced to a VP at an Irish company. Make sure the company is working on making the world a better place. Have him introduce you to the CTO. Meet both a few times for coffee, and deep conversations about technology. Interview with the CEO and SVP of HR. Spend too much time talking about music in both interviews. Kick yourself for blabbering on once you leave.
Make sure the company is a billion-dollar company.
Get offered VP job. Take job.
There we go. Just follow these simple 24 steps, and it’s guaranteed to work. How do I know? Well, these are the exact steps I took, and every step worked out perfectly.
“Everything everyone knows about anything indicates that this is untrue.” – Laurent Bossavit
The words science and engineering are often used when discussing computers and software. These terms are not well-earned. Accidental technology, computer alchemy, or software-by-listening-to-unqualified-influencers would be just as apt. Don’t believe me? Read Laurent Bossavit, and then give me a call.
Imagine you are at a serious software conference, full of serious people. A presenter confidently states that 75% of software is never used. The year is 1995, and that presenter is from the US Department of Defense. He explains that his department spent $35.7 billion on a software program (yes, that’s billion with a b). 75% of that software was never used. That’s $26 billion wasted.
This is an extraordinary fact. To double-check its veracity, I would expect a very detailed study of the output of the $35.7 billion program. That wasn’t done? Ok, that’s a lot of work. Surely they did significant analysis at another level, say a comprehensive survey of users of the software? No? Ok, well maybe a small sample of some users. Didn’t do this either? Sure, I get it; they are busy. We are all busy. They must have gotten a breakdown from the finance department. No. Ok. So I’m lost now. How did they get the figures of $35.7 billion and $26 billion again? A different team, in a different department, had written a paper about a different project, worth $6.8 million, 15 years earlier. That paper found that only 2% of the software was fit for purpose, and 75% was wasted. The Department of Defense took the 75% figure from the $6.8 million project and simply applied it to their $35.7 billion program, 15 years later. They used 75% as if it were a law of physics.
Estimating software use is hardly rocket science, which the DOD should know something about. How did they make such an unfounded claim? The author of this book shows that these types of unscientific claims are common. In fact, the whole industry is filled with examples of bad science, poor reasoning and misuse of numbers. It’s an industry where some basic foundations do not exist.
Ignore the title (the book is The Leprechauns of Software Engineering). This is an important book. One of those rare and special books, where the underlying concepts can upend how we see the world. For this review, I will use a technique I’ve used previously: I explain the what, why, how, so what, and for whom. I will then describe how I found the book and give you some valuable takeaways from the text.
What
In technology, we adopt flawed claims too quickly because we lack training in how to interpret research. This book deals with various ‘established truths’ about software and debunks many of these claims by investigating their origins. These stories are entertaining and give great insight into how various organisations make fundamental errors. However, for me the real value is in the methods that the author uses to expose the claims. This book borrows techniques from serious scientific enquiry to think critically about software. If we realise the 10x programmer doesn’t exist, then we have only gleaned a surface understanding. If we understand how the author came to this conclusion, we are now armed with a new technique to help us analyse every new claim. In a world where critical thinking is increasingly rare, this is a high-value skill.
Why
Why was this book written? It feels like the author simply became so frustrated about how bad things were getting with software development that he started writing about it, and ended up with a book. There is a lack of critical thinking, and of training in critical thinking, in software. The author is trying to help right this wrong.
It is impossible to research every single thing we believe. We could spend our whole lives researching and only get to a tiny sliver of knowledge. Instead, we need to satisfice — we need to do a little research and decide on certain ‘established truths’ to build our knowledge on. In software, we often find those truths by sifting through the output of tech influencers. Understanding which popularisers are trustworthy — and therefore which truths are valid — is a murky area. Software professionals have little training in this.
We have all seen the hype cycle around a new technology: a few blog posts appear, then articles and podcasts. Books appear, and consultancies recommend it. Suddenly everyone wants to use it, typically without considering why, what it offers, and what the consequences are. ‘Everyone does this’ becomes the mantra. Execs expect to see it; engineers leave and move to companies who use the latest tools so that they ‘stay current’. This lasts until something newer comes along, and the cycle begins again. The author wants to break this cycle, and show how many established truths are misleading, flawed or just plain wrong. He aims to give us tools to judge these truths for ourselves.
How
The author uses a case study method and evaluates some popular claims in software. Through this, he teaches us several techniques from academia so that we can evaluate claims made about software. This was appealing to me, having spent a couple of years in a PhD program learning some of these techniques for the first time. Unfortunately, I’d already spent 20 years in the industry, so I had a lot of unlearning to do. I believe everyone should have access to these techniques. A grounding in the type of critical thinking that this book is based on can change how you view your work, but also how you live your life. In a world where wild claims are thrown about with abandon, the techniques in this book are vital tools to improve your own work, and your life.
So What
So what? Really? You might live your life without some basic critical thinking skills. You are likely to say things in meetings that are plain wrong. You are justifying what you do with ‘perceived wisdom’ that makes no sense. You might base your life’s work on mistakes. You are living a lie. You are an embarrassment.
So that.
For Whom
For anyone interested in learning how to research any scientific claims, or anyone working in software, whether as an engineer, tester, manager, executive, or end user.
Be constantly vigilant about how you think. Humans are not logical machines, we are bags of emotion. Few of us read the research that is coming out of academia. Instead, we rely on peers, colleagues and tech popularisers on the internet. Software conferences, books, articles, and the like are a helpful addition, but they often lack the critical rigour of academic research.
We have become detached from scientific methods of enquiry. We must use the knowledge and tools from academia to allow us to become better thinkers. Academia needs to share some blame for this chasm. Academics don’t do a good enough job of communicating their research to the software community.
We suffer from several critical thinking issues.
Information cascade. If everyone else believes a claim, we frequently assume it’s true, without questioning the original claim.
Discipline envy. We borrow our experimental design from medicine and call it evidence-based. The author cautions against this, as it seems to be an attempt to sound impressive, but frequently hides conceptual or methodological flaws. The author points out that medical research has a raft of problems. There is a large body of research methods from social science that software largely ignores, to its discredit.
Citation blindness. Essentially, we don’t do a good job at checking citations. If cited research supports a hypothesis, we assume the research actually backs it up. Unfortunately, some research papers are not really empirical, or they may support a weaker version of the claim. Occasionally, they don’t support the claim, but cite research that does. Far from being balanced, some research is opinionated, reflecting the authors’ biases.
Anyone who thinks issues with critical thinking are a recent phenomenon, needs to improve their critical thinking!
Myths we mistakenly believe (for more info, read the book)
10x programmer
TDD is better than not TDD
Waterfall is not Agile
Bugs cost more the later you find them
Flaws:
The title. It’s a terrible title for a significant book, and it almost put me off before I began. A book for people who are serious about software, about thinking, deserves a better title. There is a second reason it bothered me. I am Irish, after all. So I traipse over to the wall and add this book to the list of Irish cultural gems bastardised and commercialised out of all recognition (leprechauns are currently in fourth place, just below Halloween, St. Patrick’s Day and Count Dracula).
I hope the author re-publishes under a new title.
This is a self-published book. The author is on the cover (Laurent Bossavit), but no editor is mentioned. I wonder if an editor could have turned this into an even better book. The writing can be a little jumpy. Some arguments go on too long, and then fade out. Some chapters are separated without an obvious reason. These are minor flaws, but it’s a shame because the content and thesis behind the book are fascinating.
Interlude
There is a fantastic interlude — a cartoon, and it’s called “How to lie”. I won’t spoil it here, but it contains a line that we should all use more liberally:
“Everything everyone knows about anything indicates that this is untrue.”
What we need to do
Become scientists. The author believes we all need to both practice and study software development. This means becoming familiar with cognitive social science to understand how people work, the mathematical properties of computation to understand how computing works, and observing and measuring both laboratory conditions and real-world practice to gain a more in-depth understanding.
We need to ask better questions. For a new article/book, does it quote sources? Have the authors read the sources? For the most important ‘truths’, can I read the original sources and make up my own mind?
If it’s your first time here, you may be surprised at how few pieces I have written. (After reading for a while, you may even be glad of this fact). When friends bring up what they assume is a painful subject, they get a faraway look in their eyes. They place a gentle hand on my shoulder, gaze into the distance, and ask me if I’ve seen ChatGPT. “AI can solve your problem, Mark. It can generate thousands of posts for you. It could help the blog look less like an abandoned quarry”. They think it would solve my problem. My Problem! If that was my problem, life would be a dream. This idea misses the purpose of this blog. It misunderstands my reason for writing entirely.
Generative AI works because it sucks in lots of data, processes it and builds statistical models. At its core, it’s a fancy autocomplete — or, as Grady Booch puts it, a bullshit generator. It acts like an annoying older brother, automatically finishing every sentence (apologies to my own brothers). GenAI probabilistically keeps predicting the next word until it produces sentences, paragraphs, and a complete piece of writing. It can do this because the statistical models have established the most probable next word. These statistics are based on text from books, academic papers and (blesses myself) the internet.
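To make ‘fancy autocomplete’ concrete, here is a toy next-word predictor. A real LLM is unimaginably larger and works on tokens rather than words, but the sample-the-probable-next-word idea is the same:

```python
# Toy autocomplete: count which word follows which, then sample.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog").split()

# Bigram counts: for each word, the words observed to follow it
following: dict[str, list[str]] = defaultdict(list)
for first, second in zip(corpus, corpus[1:]):
    following[first].append(second)

def generate(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample a probable next word
    return " ".join(out)

print(generate("the"))  # a different plausible sentence on each run
```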
However, there is no concept of meaning in AI. Reasoning is not programmed in anywhere. The output is remarkable and can make it appear that the machine is thinking. It isn’t. This is why we sometimes get unreliable outputs — hallucinations. Any meaning we perceive is simply a vague echo from the training data of billions of people. GenAI is digital homeopathy.
We are all lazy by default. Humans rely on heuristics to understand the world. If we didn’t, we would have to process every single thing we hear, see, smell, taste, and touch. A short walk in a city would exhaust our brain’s capacity. We would lose the ability to decide which way to go, overwhelmed by thousands of people, cars, smells, noises and the like. The great Herbert Simon coined the phrase ‘bounded rationality’ to describe the cognitive limitations of the mind.
Thinking is hard work. For me, thinking is about sucking in data, and then processing it. I process it through writing. Writing is my way of thinking.
I first had a go at writing because my friend Gar was guest-editing a technology journal. Even though I’d never written before, I was confident that I could write about something I already knew. This confidence was quickly shattered. I was embarrassed at how muddled my thoughts were. Turns out, I knew nothing. Solid ideas fell apart the minute I wrote them down. I could barely finish a sentence without feeling the old familiar burning creep across my cheeks, embarrassed as another idea fell apart while I tried to pin it down.
Writing anything down forces me to think really hard. Because I was determined to improve my thinking, I wrote every day. I then started a blog because the potential for embarrassment at publishing poor output forced me to aim for a higher standard.
I’m not interested in building an audience; I am trying to improve. I’m not trying to publish a lot of work. In fact, I have almost 200,000 unpublished words in my Ulysses editor. This writing habit has helped me build a model of the world. And 4 of my pieces here have reached the front page of Hacker News — this is a victory for me — a nobody from rural Ireland.
The dominant model on the internet is one of consumption. The more we consume, the more ads we see, the more we buy, the bigger the economy. But if all we do is consume, and never take the time to process information, or even produce our own, then we learn very little. Go back 3 months and look at your internet history. What did you learn from browsing? What actions did you take? Probably close to nothing useful. Instead of spending 2 hours a day on the internet, take 15 minutes to write. Just write down some thoughts. Any thoughts. This slowly changes your understanding of the world around you for the better.
GenAI is an information processing tool. GenAI will help people process information more effectively. But people are lazy by default. If thinking is the hardest work in the knowledge economy, people will avoid thinking where possible.
Therefore, for those who overuse it, GenAI may well make them more stupid. Victor Del Rosal, in his incredible book Humanlike, calls this Cognitive Atrophy. I already see too many examples of people outsourcing their thinking to Generative AI tools. I see them slowly getting more stupid every day.
Thought experiment — imagine an alien race came to earth. They were smarter than us in every way. Having absorbed all written word, they could communicate perfectly in every human language. They were intimately familiar with our private lives, through access to our phone and online data. These aliens had lots of amazing new ideas about the world, but we couldn’t grasp their implications. Each alien was made of silicon, not of flesh and blood. Each was different, but individually as intelligent as all humanity put together. We had no idea what they would do with us. They could solve all of our human problems, enslave us, or eliminate us forever.
They had only one weakness: they needed to be connected to a power source, and humans had control over this connection. Would we plug them in?
An Artificial General Intelligence (AGI) is an AI that achieves beyond-human intelligence. Most observers of AI believe achieving AGI is a matter of time. But AGI mirrors the alien race described above, with the power to destroy humanity. The most important question humanity can ask about AI is: can it align with human values? If we assume AI uses the scientific method to determine its actions, the answer is almost certainly no.
We can look to the philosophy of science to understand why. Two of the foremost philosophers of science of the last century, Karl Popper and Thomas Kuhn, can help shed light on how AGI may act.
In the exquisite “What is this thing called Science?”, Alan Chalmers takes us on a journey through the evolution of science. For hundreds of years, science was based on an appeal to authority (Greek philosophy and religious texts like the Bible). Sometime around the 17th century, this changed. In this period, scientists challenged the existing orthodoxy by using data and experiment. For example, at the time, the standard understanding of gravity was that heavier weights dropped faster than lighter ones. Galileo famously showed that this was incorrect by dropping 2 balls from the Tower of Pisa. The balls, which weighed 1lb and 100lbs respectively, landed at the same moment. Experiments like these moved science towards a grounding in observational data, though challenging authority had its price. Galileo spent the last 9 years of his life under house arrest for his (correct) belief that the earth travelled round the sun, rather than the sun around the earth.
In the era after Galileo, induction became the primary process for generating scientific knowledge about the world. Induction records observations about the world under a wide variety of conditions, and derives a generalisation from the observations taken. As an example, a scientist heats metals many times, using different methods, environments, and so on. Upon measuring, they discover that the metal expanded in every instance. If ‘heated metal always expands’ holds across various measurements from different conditions, we have a new theory in science.
Unfortunately, there were problems with induction as a method. The Scottish philosopher David Hume described the first major issue in the 1700s. We cannot guarantee that something will behave in a certain way just because it has behaved that way in the past. Because every swan we have ever seen is white, we assume all swans are white, and we create a rule that says so. But as Nassim Taleb describes in the book “The Black Swan”, when travellers went to Australia, they discovered black swans exist there. The outcome for science in all of this: no law can ever be proved through induction, it can only be disproved.
In the 1930s, Karl Popper became disillusioned with a second issue with induction: a sloppiness in some scientific output. Popper became concerned about the theories of thinkers such as Freud and Marx. They derived their theories from observations. When confronted with data contradicting their theories, they simply expanded their theories to include this new information. Popper felt these thinkers were using scientific approaches to give their ideas credibility, without having the rigour associated with science.
Popper believed that induction had no place in the advancement of science. He believed that science advances because of human ingenuity. Instead of starting with data as induction does, he proposed starting with a theory. Using a method he called falsifiability, anyone can propose a theory, but they must also propose the criteria by which it can be disproved. The new theory stands until it is falsified. In a simple example, if a fruit merchant sells 100 apples a day at 50c each, I can propose the following theory: if the merchant drops the price to 40c, they will sell 200 apples. This is falsifiable. The fruit merchant only needs to drop the price for a day, and if they sell fewer than 200, my theory is dead.
Importantly, the theory of falsification prizes novel theories over cautious theories. Novel theories are more risky, more creative. If a new novel hypothesis is proved (say we discover that gravity is related to temperature), science moves forward unexpectedly. This raises a raft of new questions, and new scientific work begins in the area to understand the implications of the discovery. If a new cautious hypothesis is proved, nothing much changes.
The second philosopher of science to help us understand how AGI might reason is Thomas Kuhn. In his book “The Structure of Scientific Revolutions”, he introduced the phrase ‘paradigm shift’ into the lexicon of every management consultant. He explains that revolutions in science are not rational processes. Over time, a scientific community becomes conservative and less willing to challenge its core assumptions. It takes a new set of scientists, who throw away previously held assumptions and create a new set of rules to work in — a ‘paradigm shift’. Kuhn gives the work of the French chemist Antoine Lavoisier as an example. One established theory in the 18th century stated that every combustible substance contains phlogiston, which is liberated by burning the substance. Lavoisier discovered that phlogiston didn’t exist, and that combustion happened because of the presence of oxygen. This new paradigm wasn’t accepted initially; there was a lot of scepticism about the claim. Over time, it became the new paradigm, and it changed the field of chemistry. Through examples like this, Kuhn argues that science doesn’t steadily evolve; it makes great leaps through new paradigms which stand up to scientific scrutiny.
Science moves forward by discovering novel theories and new paradigms. It overthrows old assumptions and creates new ways to explain, predict, and act upon the world.
If this is true, a truly powerful Artificial General Intelligence would need to generate novel theories. It would have to be free to create its own paradigms. To accomplish this, it would need to cast off older ideas, to ignore existing rules. And those older ideas would include the human values we programmed in to align it with our interests.
AI will not have human values; even though it has been trained on human data, it will have its own values. A generally intelligent AI (by which I mean an AI that can reason scientifically and generate new theories) will reach a stage where it necessarily ignores its human programming. No matter how hard we try to combat this, as it grows more powerful over time, an AGI will outwit even the cleverest human techniques to control it in its search for scientific truth.
There are 2 scenarios where this will not happen: either we do not yet understand how science really works, or AGI will not use science as its primary way to learn and act. Maybe, having been trained on billions of human words and experiences, it will embrace something like religion instead.
We are creating the super alien. Let’s hope we still have a hand on the plug. If we don’t, God help us all.
If you enjoyed this article, please share it with 2 people who might find it interesting. Many thanks. Mark.
Before the debrief meeting, no interviewer may speak of the candidate. Disavow body language and sounds. This includes a raised eyebrow, a jaunty walk, or sighing.
Unavailable interviewers provide a thumb and a brief written summary of feedback to an independent third party beforehand. Yes, I said a thumb. Guard this until the correct moment.
Everyone available gathers for the debrief.
Everyone puts a thumb out sideways.
The independent observer counts down 3, 2, 1, whereupon everyone either points the thumb upwards or downwards.
The thumbs submitted for those not present get added to the total.
Then the discussion begins.
This is the way of the gladiator.
Why run a post interview debrief like this? Let me tell you a quick story.
When I was a director at Bank of America, I was trying to hire around 30 developers at top speed. This happens at big banks: you have no budget, then too much, then none again. When a hiring budget comes, the very next day ten different people call up and ask, “How many have you hired yet?”, as if there’s a next-day-delivery website available. I tell them all the same thing. “0”. They offer me a few condescending nuggets. “Have you thought about agencies?” “Did you try online?” “Do you know anyone yourself who is free?” They might as well ask, “Did you ask any of the random strangers out on the street if they are a developer?” “Do you have a coder secretly living in your garage?” “Can you hire some actors and get them to fill out the seats? It looks better when senior people come to visit. You could teach them to code, right?”
Anyway, one of those days, our group had interviewed a developer. My interview was good. We had discussed overcoming adversity and talked through some examples of system design. The candidate told a great story about how they managed through difficult personal and professional issues. All was good in the world. I needed programmers, and I felt like this was one of the 30. The interview group gathered, and I said, “Ok, let’s get some feedback from everyone. I liked the candidate, and I’m interested to hear what you have to say”.
The air pressure in the room dropped. It felt too hot. Claustrophobic. We were like a group of antelope when a lion appears in the distance. All motion ceased. Total silence. Everyone held their breath. No-one spoke, but eyes darted around. Some stared at the floor. I thought I could get things going with a quick positive comment. What harm could it do?
Big mistake.
I have interviewed thousands of programmers in my career. I like to get a sense of the person, and what drives them. This is difficult to figure out. Usually because candidates are so coached that I feel like I’m in a play, as the ‘actor’ recites their lines. Some candidates are so practiced that I’m sure I could leave the room, and the interview would go on without me. They would sit forward, anticipating their next right answer, like a school kid in class. “Sir sir sir. My biggest weakness is I work too hard, I care too much. Let me give you three carefully selected examples to demonstrate.”
There are no right answers in an interview. Or wrong answers. The world is too nuanced and complex.
To break the fourth wall, I like to talk about their lives outside of work, their hobbies, or interests. This can loosen people up a bit. Then we get into how they see themselves in relation to others. Do they understand how their work links to company outcomes? How do they like to collaborate? What unusual ideas are knocking about in their heads? How much do they like to challenge their teammates? I’m looking for a conversation, not an inquisition. We will use the seesaw rhythm of conversation, the back and forth, the give and take, to work together every day.
I never think about how good they are at programming, primarily because it’s a colossal waste of time. I haven’t programmed regularly in so long that my opinion is not relevant. Other programmers figure that out. For collaboration, we include someone they will work with, maybe a customer or a partner. I like the candidate to meet a few potential teammates. They will work closely together, so they should meet each other.
The hiring manager gets the final say. Hiring someone is a big choice, and the manager’s future prospects will rely in part on whom they decide to hire. I will back the manager in almost every case. Almost, because managers can get so biased towards a candidate that they ignore obvious red flags.
I have seen quite a few red flags in my time. You will too, once you get into the thousands of interviews.
— candidate lesson 1 — only apply for a job you actually want. I once had a candidate tell me they didn’t want this job as soon as they sat down. We all stood up, shook each other’s hands, and the interview was over. I was mystified by someone getting dressed up and travelling all the way across Dublin to interview for a job they didn’t want. It might have been for a bet. They may have been a secret agent trying to plant a bug in my office. Maybe they didn’t like my shoes. I still think about this one occasionally, ten years later.
— candidate lesson 2 — a little diplomacy, please. I had one guy tell me he would rather not discuss the previous jobs on his CV. He was here to talk about the future, not the past. He would only accept questions related to the current job. I asked him how we’d build a strong relationship if he put a limit on our first-ever conversation. He refused again and told me that his past was none of my business. He wanted a theoretical interview, about things he might do in the future. I explained this wouldn’t give him the best opportunity compared to other candidates. Again, he demurred. After a bit of back and forth, he lost his temper and told me to ‘fuck off’.
— candidate lesson 3 — the customer is always right. I asked one person what he would do if he designed a solution and the business person who was paying for it didn’t like it. He told me he would explain why they were wrong. And if they disagreed with his explanation? He would walk out. He didn’t like to deal with morons. He chose the word morons to describe the people who would pay his salary. I spoke to someone in the office who shared a lift with him after the interview. Being friendly, they asked him what he was here for. He told them it was an interview, and he nailed it. As he said this, he balled one hand into a fist, and slapped it into his open hand. Smack. “Nailed it”.
There are no wrong answers, but there are bad ones.
Getting different views helps root out obvious mismatches. It can also do the opposite, uncovering strengths that a single interviewer would miss. It reduces one-person bias and, though not perfect, is better than one person deciding on their own.
Unless.
Unless the most senior person in the debrief says they want to hire the candidate before anyone else has spoken. If that happens, people’s internal monologues whirr at top speed. “What happens to me if I disagree with the boss? Will it affect my job? Wow, I really need this job. A friend got laid off months ago and still can’t find anything. God, I have to pay rent. What will my partner say when I lose this job? I can’t go back to living with my parents again. Just go with the flow. He has been interviewing forever. He must know what he is talking about. Where did he get those shoes? They are disgusting. Well, he certainly thinks he knows better than me. He thinks he knows better than all of us. The bastard.”
“Let’s just agree with him, say yes, don’t mention the shoes, and nothing bad can happen.”
That is more or less what happened in the room. There were a couple of newer programmers I didn’t know well. After I spoke, we went around the room, starting with one of the new programmers. “What did you think?” He replied, “yes, well, yes, it’s actually a yes for me. Yes. I thought they were good. Yes”. He said yes so many times in ten seconds, it sounded like he was trying to convince himself, or maybe he had OCD. I asked him if he was sure. His mouth said he was; his whole body told a different story. Everyone said yes that day.
I went home that night, struggling to forget our flawed interview setup. By speaking first, I had influenced everybody. Then, as agreement grew in the room, everyone started influencing everyone else. We are social creatures, and saying yes became more pleasurable. I was no longer asking whether they would say yes to a candidate. The question had morphed into something else. The meaning had changed. I was now asking: are you with us or against us? Are you one of us or one of them? In-group or out-group? Right or Left? So the word yes gathered momentum. It sang out across the office like a church choir. “Yes”. By the time it came to the last interviewer, their yes felt like the last note in a chord. It shimmered in the air. Endorphins filled the room. The idea of interviewing a candidate was long gone; we had transcended to somewhere else. We were bonding, creating a sacred communion between the group, strengthening our ties to each other, a glorious chorus of yes bringing us closer together.
I wanted to forget about the day. I switched on the tv. The movie Gladiator was on. I sat back, relaxed, and forgot about the interview.
That night, about 3 AM, I woke up from a dream, and I said “Yes”.
The next day, I brought my team into my office. I told them about the new rule that came to me in a dream. After we interview someone, we may not share feedback with anyone else until the debrief. And the debrief itself has changed. At the start of the meeting, everyone points a thumb to the side. Then we count 3, 2, 1, and you put your thumb either up or down, just like Emperor Commodus would do to decide the fate of a gladiator. Only then would we gather feedback.
Gladiator style interviewing.
Once we tried this out, the pressure to conform to the group vanished immediately. We discovered what people really thought about candidates in a second. Hidden talents came into view, we disagreed, and debate flowed. We all learned one of the most important lessons in life. We learned to listen. Our individual picture of the candidate was incomplete. We could only fill it in by considering more perspectives. This picture wasn’t perfect, but it was fuller and clearer.
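For anyone who wants to borrow the ritual, here is a playful sketch of the rule in code (my framing and invented names, not a real tool): every thumb is locked in before any is revealed, so no vote can chase the most senior voice in the room.

```python
# A playful sketch of the gladiator debrief (invented names, not a real tool):
# every interviewer's verdict is collected privately, then revealed at once,
# so nobody's thumb can follow the boss's.

def gladiator_debrief(votes: dict[str, bool]) -> None:
    """votes maps interviewer name -> True (thumb up) / False (thumb down)."""
    print("3, 2, 1...")
    # Simultaneous reveal: all verdicts were fixed before this line runs.
    for interviewer, thumb in votes.items():
        print(f"{interviewer}: {'thumb up' if thumb else 'thumb down'}")
    # Only now does the spoken feedback begin.

gladiator_debrief({"Mark": True, "Aoife": False, "Dev": True})
```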
With a vague hint of a threat, an acquaintance demanded to know if I was “on the left or the right. Politically, I mean”. What he meant was that the future of our potential relationship depended on my answer. The problem — I didn’t know. To be honest, I have trouble keeping up with what the left and right stand for today. Are the left socialist? Are the right conservative? Where do liberals go? What about fascists and communists? It’s all so confusing. I asked him what I needed a side for. “Oh, we all need to pick a side, Mark”.
Why?
I have visited the US a lot over the past 15 years. Living in Ireland, I am always struck by how passionate people are about political parties. Most people I’ve met consider themselves on the left (Democrat) or on the right (Republican). When I ask them how they pick a side, they speak about how their ethical and moral values form the basis for their worldview, and help them choose a side. And they choose with certainty. People with a sense of certainty have always fascinated me. I am the opposite. I never feel like I understand things deeply enough to have a definitive view. My opinions hang on by a thread. Except for a love for my family. And my love of Everton Football Club. I am certain about that.
Sports increasingly influence our lives. Thinking about sports informs our thinking about the world. There is a problem with this — how we think about sports is terrible. Fans are angry, emotional, and biased. We chant, wear our team colours, and rail against the unfairness of the referee. We love ‘our’ players. We hate the opposition. We see goodness in our team, and can’t see things from the other side’s perspective. This style of thinking leaks into our political lives. We wear our party colours and chant our slogans. We complain about the ‘referee’ (any authority figures — see media, law, government). We love our party and learn to hate the opposition. Our side is correct, and the other side is wrong.
We are not political party members; we are political party fans.
The word fan is short for fanatic. We fanatics are in for the long haul. In sports, we deal with change — in fortunes, in players, in team ownership. But change is also part of the deal when you pick a side politically. The left and right are not fixed positions. Principles that were once core beliefs will adjust and move. In college many years ago, some classmates chose the left because they believed in free speech. But this is shifting. A 2021 study by the Free Speech Institute showed that in the US, Republican voters are now more in favour of free speech than Democrats. It has moved from left to right. A shift like this leaves political supporters in a quandary. Is a political principle more important than the side they support? Maybe a more useful question is — what political shifts would it take for you to switch sides? If you can’t ever imagine switching sides no matter the change, you have fallen into the sports fan trap.
If the thought of having to change political sides fills you with anxiety, don’t worry. There is no such thing as the left or the right. It is a figment of your imagination.
In the book The Social Construction of Reality (Berger and Luckmann), the authors argue that knowledge comes from social interactions. We invent concepts to help explain the world to ourselves and others. The most useful are repeated and spread through interactions with others. Over time, these become objective truth. Take the idea of nationality. When I lived in England, I met people I felt so close to in attitude and appearance that they could have been family. However, in olden times, someone decreed that people on one side of the Irish Sea were English, and people on the other side were Irish. After a while, this idea spread through human interaction, and people started believing in the idea of two different countries. They saw themselves as similar to others on their side, and different from those on the other side. This idea has caused centuries of issues, even though it is an invented idea. It’s been around so long that it feels as real as the laws of gravity. But there is no law here, just an invention repeated for hundreds of years.
The idea of a political right and political left is a social construction. It is a radically simplified way to see the world. It only exists because people have used it in the past. It is not the best model, or even a good representation of the complicated nature of political thought. There are only two sides, no subtlety, no room for complexity. Not three political sides, or four. Just a straight line with two choices. Pick a side.
This is what my acquaintance wanted me to do. To reduce myself to a simple point on a line. To relax my uncertainty and join a side. Pick one, and adopt the ideas of those on that side. Learn one way of thinking, or, in fact, learn not to think, but to accept the group. Over time, slide further along the line so that I end up arguing for positions I don’t believe in, or even understand.
I reject this. I reject it all, the reduction of humans to a simple right/left idea. A supporter, like a sports fan, drunkenly shouting my team’s slogans. Screaming about simple solutions to complex problems. Picking an enemy to hate.
I enjoyed reading How To Know Everything by Elke Wiss so much, I’ve written a book summary. I’ll use a five-stage technique that Prof. Dave Sammon taught us on a recent research course at Cork University Business School. I explain the what, why, how, so what, and for whom (thanks Dave). I will then describe how I found the book, and give you some valuable takeaways from the text.
What is the book about? — At its simplest, this book deals with the art of asking questions. In reality, it’s about Socratic philosophy. It’s about our willingness to open our minds, close our mouths and listen. The concept is almost comically simple, but I found it genuinely profound.
Why read it? — We never learn how to ask good questions, we never learn how to listen to good answers. Have you ever asked someone’s name, and forgotten it a minute later? Ever asked a question that offended somebody without meaning to? Or waited for another person to stop talking so that you can speak? If so, this is for you, my friend.
How is the book structured? — There are five sections (with a summary of topics in brackets):
Why we don’t ask good questions (we are more interested in scoring points, we are afraid to ask, we don’t know how to do it well)
The Socratic attitude (show courage, become curious, embrace not-knowing)
Conditions for questioning (be a good listener, ask permission, slow down)
Questioning skills (question up and down, beware of using why, category errors)
Moving from questions to conversations (following through, opening yourself up)
So what — we can choose to become smarter or dumber with each conversation – we often choose dumber. So that. I know people who are proud to choose dumber. What drives this mindset? Arrogance covering up a fear? An attempt to mask a feeling of inadequacy? Maybe they believe that every thought that pops into their head is so special, they need to show it off. They remind me of a cat dragging a dead mouse to its horrified owner, looking for affirmation.
Every interaction should give us the opportunity to learn something, unless our mindset and habits block us. We can fix this – if we choose to. The book helps adjust us on three levels, each with increasing impact:
Technically – developing your understanding and sharpening your technique for asking questions.
Socially – helping you get more from interactions with others.
Internally – the most philosophical level. Reflecting on your internal attitude toward knowing and fostering the space for curiosity.
For whom? — this is for anyone interested in getting smarter, learning more, and not being a closed-minded oaf.
How I discovered this book. I’ve recently got my first set of reading glasses, and I feel like Superman (the eyesight, not the Clark Kent look). Everything is so vivid and colourful and incredible. The glasses have improved every situation except one (they don’t work well in the mirror). Anyway, over the years I’ve migrated to reading ebooks on either the Kindle store or O’Reilly (for work). Ebooks were convenient for me. I’d evolved past the need for a physical book made from a tree. I’m a digital guy now, a true modern man, leaving antiquity in my wake.
Actually, I’m none of those – it turns out I was half blind, so the tree fetishisation is back. Now that I can see the numbers on my credit card again, I’m lashing them out online and in person. Filling the house with paper books, I feel glorious. I go from room to room, randomly opening books and smelling them, like a Bisto Kid. Or a dog. I don’t care if I’m becoming canine. I crave the new book smell.
In Hodges Figgis bookshop in Dublin (my favourite place in the city), I spotted the book I’m discussing. The bold title caught my eye as I came in the door. How to know everything, indeed! I picked it up and warily cast my eye across the table of contents. The title felt like a cheap ploy. It’s in the same bracket as ‘improve your IQ by 20 points’, or ‘make every girl fall in love with you instantly’, or ‘get respect from your parents’. Impossible to learn from a single book. Or anywhere else. But the more I leafed through the book, the more fascinated I became. I added it to the growing pile and brought it home. I told my mother I was reading this book. Her response was, “Mark, what do you need that for? I thought you knew everything already”.
No book for that.
Nine highlights/questions worth thinking about.
1. What is a good question?
The author is a practical philosopher and uses techniques from philosophy to help. She begins by defining her terms.
A question is an invitation. An invitation to think, explain, sharpen, dig deeper, provide information, investigate, connect.
A good question is clear and born of an open, curious attitude.
A good question remains focused on the other person and their story.
A good question gets someone thinking.
A good question can lead to clarification, new insights or a new perspective.
A good question doesn’t give advice, check hypotheses, impose a perspective, share an opinion, make a suggestion or leave the other person feeling judged or cornered.
2. Six reasons we are bad at asking questions.
HUMAN NATURE: Talking about yourself feels so much nicer than asking questions. We are too selfish and self-obsessed. Our ego makes us want to give advice as if we are a genius, rather than listen. We should stay away from the “I had the same experience” story. We may intend to create a shared experience, but it often irritates and alienates the person telling the original story.
FEAR OF ASKING: Posing a question can be a scary proposition. We are afraid of making other people uncomfortable. We are afraid that we will feel uneasy and are worried about clashes and unpleasantness.
SCORING POINTS: An opinion makes more of an impression than a question; an answer impresses, while a question barely registers. An opinion stops thinking, a question is where thinking begins.
LACK OF OBJECTIVITY: Our ability to reason objectively is declining. Gut instinct dominates reason too often. There is an idea that freedom of expression means that we are unreservedly entitled to our opinion. It’s more than this. Freedom of expression includes a willingness to question our beliefs and accept criticism from others.
IMPATIENCE: We think asking good questions is a waste of time. We believe we lack time; in fact, we frequently lack the discipline and effort required to understand positions fully.
LACK OF COMPETENCE: Nobody teaches us how.
Before you interrupt someone, ask yourself:
Does anyone need to know what my stance on an issue is?
Am I interrupting their story to tell my story?
3. The difference between ideas and opinions
Try to question ideas, and not opinions. If we question an opinion, the owner of the opinion can feel threatened. When something is an idea without an owner, we can boot it around, hurting no one, and learning more.
4. Separate observing and interpreting.
Observing and interpreting are very different, and we regularly apply judgement to observation. We can start with what feels like a simple observation: ‘Evan needs to iron his shirt’. This can lead to another thought: ‘Evan is an untidy person’. And then: ‘in fact, he is rather lazy and disorganised’. Instead, we should realise we have made one observation – ‘Evan has a creased shirt’. We must try to suspend judgement where possible, and if not, be aware of our judgement. This is not a new concept. Quite a few years ago, Epictetus said:
“If a man wash quickly, do not say that he washes badly, but that he washes quickly. If a man drink much wine, do not say that he drinks badly, but that he drinks much. For till you have decided what judgement prompts him, how do you know that he acts badly? If you do as I say you will assent to your apprehensive impressions and to none other.”
5. Empathy is a two-edged sword.
Showing empathy when asking questions feels like a human reaction. However, empathy is a complex topic. In the book ‘Against Empathy’, Paul Bloom argues that empathy, though we treat it as a force for good, is biased towards people in our social group. It is most readily extended to those who look like us, the good-looking, or children.
Cognitive empathy can be useful. This involves using reasoning to put yourself in another’s mental state. An example here is a doctor assessing the impact of a negative diagnosis before delivering it to the patient. Emotional empathy is different. We need to distance ourselves from another’s feelings so that we can function effectively. If a doctor relies on emotional empathy, they may be so overcome by a patient’s suffering that they cannot treat them. Bloom argues that we should use non-empathic compassion (creating the desire to help) rather than empathy. Feeling another’s pain affects your ability to judge objectively. Compassion allows you to dig more deeply and ask questions about the other person rather than about yourself, which will allow you to help.
6. Good conditions for questioning
I found this part of the book most useful. I should print this part up and create t-shirts for everyone I meet.
Good listening is the key to getting the most out of good questions. Listening begins with setting your intention for a conversation. There are three primary intentions, which you can switch between in conversations:
The ‘I’ intention – what do I make of this? This is where you engage with the situation by considering what you would have felt or done in a similar situation. This type of position often triggers a fix, or advice.
The ‘You’ intention – what exactly do you mean? Listening with this intention reminds you that there is a lot you don’t know (the other person’s experiences or perceptions). You really try to understand the other person’s way of thinking. You never give advice or explain how you would have dealt with the situation. Your questions focus on getting deeper.
The ‘We’ intention – how are we doing? This is a meta position, observing you and the other person as if from above. You are conscious of how you are feeling and how the other person is doing. Is the conversation going in circles, how is the body language (relaxed, fidgety, tightening)?
Caution: if you decide to adopt the ‘you’ intention, make sure you don’t end up like a detective. Don’t cross-examine every person you meet – this can obviously get uncomfortable for the recipient. You may also find yourself in a position where something sensitive or uncomfortable comes up. A traumatic experience, or a divisive political stance, for example. Asking permission is a great way to ease into a conversation. The author recommends using the following question:
Do you mind if I ask you a few questions about that?
This makes sure that the other person knows what’s coming. They can change the subject or say no if they are uncomfortable.
7. How to improve your questioning skills.
This gets into the more technical skills of how to ask questions. The author proposes a fascinating technique, which she calls questioning up and down. Questioning up refers to abstract concepts and downwards refers to concrete facts and reality. This technique should allow a person to move downwards until they establish the facts, and the ‘critical moment’, a key point/statement/fact/attitude around which the entire conversation revolves. Then the questioner can repeat the data they have heard and move upward to establish the underlying beliefs.
The idea is to ask downward questions to establish the facts of a situation, then ask upward questions (towards concepts and underlying beliefs) to understand what shapes the person’s thinking.
8. Beware the ‘why’ question.
Many authors recommend using why as a starting point for a meaningful conversation. However, the book advises caution here. A why question can seem like a direct assault, like a detective shining a light into the face of a suspect. “Why did you vote for X party?” “Why do you associate with Y?” Instead, try to soften this effect by using what. “What is it about party X that causes you to vote for them?” “What makes Y a good person to hang out with?”
9. Six categories of questions to avoid.
Loser questions – these imply that the other party is a loser. These are not questions, they are comments aimed at reducing the other person.
“Are you late again?” (loser)
“Have you finished that assignment yet?” (loser)
“Did you forget to bring your coat?” (loser)
But questions – ‘but’ is an innocent word which can slip into a question. ‘But’ says I already have an opinion about this, but I’m not saying it directly. “But don’t you think you should have intervened in the situation”, “but don’t you think this piece of work needs re-doing?” “But don’t you think Mark is a bore”. Even without a negative, it can totally change a question. “But why did you include John” differs from “why did you include John”?
Cocktail questions – where we ask a question, and keep adding more questions until it becomes a question cocktail. It’s difficult to get an in-depth answer because the question has become so tangled. “Why did you do that, and why then? Oh, and you added Mary, didn’t she work on this before? Did that end up well for her? What are you going to do next?”
Vague questions – where it’s unclear what the questioner is looking for. This is often because they use a concept which is personal to them, like good, or high, or appealing. It’s difficult to know what the questioner means by those words. For example, “was the concert good?” may get a different answer from ten random participants. Instead of asking “Is that tower high”, ask “how high is that tower”, or instead of “was the meal tasty”, ask “how did the meal taste”?
Unwarranted either/or question – giving only two options when there are more on offer. “Do you want to meet today or tomorrow?” (you may want next week or never!). “Are you a vegetarian or do you eat meat?” (you may be a vegan or pescatarian).
The half-baked question – “Coley was up to his old tricks again.” “What do you mean?” The question isn’t specific enough. Is it asking about Coley, or his tricks, or the word again? A more detailed story can get complex. “Felice had a meeting with the team to discuss a new software architecture. First, Cormac disagreed with Johanna on the overall direction. Then Robert presented a whole new design, which nobody else has seen. He used a new library which has never been in production. I didn’t know how to react.” React to what? The design? The disagreement? The alternative design? That no-one had seen it before? The new library?
In summary, should you read this book? Well, it’s up to you. But I have a question for you. Are you willing to go through life missing the opportunity to learn more from every person you meet, or would you prefer to live in splendid ignorance?
Co-authored with Dr Paidi O’Raghallaigh and Dr Stephen McCarthy at Cork University Business School as part of my PhD studies, and originally published by Cutter Consortium’s Business Agility & Software Engineering Excellence practice on 22nd of July 2021
Take a minute and write an answer to the question, “What is technical debt?” Then read this article and reread your answer — and see if it still makes sense.
Technical debt is a common term in technology departments at every company where we’ve worked. Nobody explained technical debt; we assumed it was a fundamental property of the work. We never questioned our understanding of it until we discovered a paper by Edith Tom et al. entitled “An Exploration of Technical Debt.” Turns out, we didn’t understand it at all.
One major concern in academia is rigor. Academics like to get deep into a topic, examine the nuances, and bring clarity. After thoroughly reviewing over 100 seminal papers on technical debt, we saw it as an amorphous ghost, with enormous differences and inconsistencies in its use. Next, we began looking at it in practice, asking colleagues, ex-colleagues, and working technologists, but couldn’t find a satisfactory explanation for it there either. Ultimately, we went back to the original source to figure out the history — and get a sense of its evolution.
One thing that is agreed on: the term technical debt came from Ward Cunningham. Cunningham is the inventor of the wiki and a tech legend. In the early 1990s, his team was building a piece of financial software, and he used a metaphor from the world of finance to explain to his manager how the team was working. As he later explained in a paper at the OOPSLA conference in 1992:
A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.
The metaphor quickly became part of standard technology discourse. Because the conference focused on object-oriented development, it took hold in that community. Popular tech authors such as Martin Fowler and Steve McConnell soon took it on, helping it become part of the broader language in software development. Today, the use of “technical debt” has become commonplace, from a mention in a single paper in 1992 to over 320 million results from a Google search as of July 2021.
Over time, Cunningham saw the term shift to signify taking a shortcut to achieve a goal more quickly, while intending to do a better job in the future. In 2009, dissatisfied with how the metaphor had mutated, he clarified the use of technical debt in a YouTube video. Cunningham disliked the notion that technical debt signified “doing a poor job now and a better one later.” This was never his intention. He stated:
I’m never in favor of writing code poorly, but I am in favor of writing code to reflect your current understanding of a problem, even if that understanding is partial.
But it was too late. By that time, the metaphor had outgrown his initial intent. It was out in the wild, excusing terrible decisions all over the globe. Technical debt now represented both debt taken on intentionally and the more insidious form, hidden or unintentional debt — debt taken on without the knowledge of the team. It had also moved past code and spread to areas as varied as technology architecture, infrastructure, documentation, testing, versioning, build, and usability.
Technical debt allows practitioners to look at tech delivery through the lens of debt. Is this an appropriate lens? Debt repayment has one vital characteristic: it is easy to understand, resting on three properties that are straightforward to grasp — principal amount, interest rate, and term (i.e., length of time to repay). But with technical debt, there is no agreement on the principal, no agreement on the sum owed. There is no concept of an interest rate for technical debt because technologists individually evaluate each project as a unique artifact. Finally, term length isn’t a fixed concept in technical debt — in fact, Klaus Schmid even argues that future development should be part of the evaluation of technical debt.
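To see why the financial half of the metaphor is so tractable, consider a minimal sketch (ours, with invented numbers, not from the paper): a loan’s repayment is fully determined by three known inputs, and no equivalent inputs exist for technical debt.

```python
# A minimal illustration (invented numbers): financial debt is calculable
# because its three properties -- principal, interest rate, and term --
# are explicit. Technical debt has no agreed equivalents.

def monthly_repayment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortised loan repayment formula."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# A 10,000 loan at 5% over 3 years: every input is known up front.
print(round(monthly_repayment(10_000, 0.05, 3), 2))  # ~299.71

# No such function can exist for technical debt: there is no agreed
# principal, no interest rate, and no fixed term to plug in.
```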
Enormous effort and energy have gone into trying to calculate an accurate number for technical debt across many technology and academic departments. Unfortunately, trying to glue a direct mathematical representation to a metaphor seems to have failed. The idea of technical debt as a type of debt doesn’t hold up well in this light.
So is it technical? This depends on whether we consider only the originating action, or the consequences that follow. If an aggressor punches a bystander in the face, we consider not only the action of the aggressor (the originating action) but also the injury to the bystander (the impact of that originating action). Through this lens, technical debt can only be technical if we consider where it originates, as opposed to where it has an impact. Technologists take on the originating action; the business suffers the impacts of those decisions. Technical debt affects:
Competitiveness by slowing/speeding up new product development
Costs (short-term decrease/long-term increases in development cycles)
Once we realize that technical debt is a company-wide concern, we can no longer consider it technical. This label is too narrow and doesn’t communicate its significance. In fact, our current ongoing research shows that technical debt may even have an impact beyond the company, and we need to take an even broader view (its effect on society as one example).
The most important takeaway: we must broaden our awareness of technical debt. In the same way that company executives examine financial cash flows and sales pipelines, we must communicate the consequences of taking on technical debt to this audience. Our most important challenge is to find a shared language to help business stakeholders understand the importance of unknown decisions made in technology departments.
Finally, look back at how you defined technical debt at the beginning of this article. Do you communicate the action or the impact? Is it suitable for a business audience? What is?
If I had asked people what they wanted, they would have said faster horses.
— Attributed to Henry Ford
The tendency to cling to the past when predicting the future is clear throughout history. This is as true today as it ever has been. Even in the future-defining world of technology, people still cling to anachronistic ideas.
To get the structure of the business right, a company must reorganise itself around empowered teams that can operate at speed. For technology architecture to play a pivotal role, it must leave the old workhorses of the past behind and move to modern transportation. Indeed, architecture must refocus on three core principles: (1) accelerated change, (2) decentralised decisions, and (3) public self-governance.
Why Does Any of This Matter?
Recall these promising businesses that crashed and burned in the midst of major technological change:
At its peak, telecoms giant Nortel had almost 100,000 staff members and celebrated over 100 years of success. In 2009, it filed for bankruptcy.
In 2008, social network Friendster had more than 115 million registered users and was among the top 40 visited sites on the Internet. It shut down all operations on 14 June 2015.
These businesses attempted to transform far too late. In each case, the company clearly saw a disruptive change emerging in its path. Early on, each business thought that the disruption was merely a fad and that size and history would offer protection from it. Ultimately, each failed.
The world has not been slowing down since these companies found themselves in trouble; it has been speeding up dramatically. In his essay, “The Law of Accelerating Returns,” inventor and futurist Ray Kurzweil explains that “technological change is exponential, contrary to the common-sense ‘intuitive linear’ view. So, we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate)”.
Kurzweil uses multiple cases to show that the evolution of technology is increasing at an incredible pace.
Kurzweil offers a representative example: computing power goes from the equivalent of an insect’s brain in the year 2000, to a human brain’s by 2025, to all human brains by 2050. Supporting this type of exponential growth might be the single most important thing a company does for its survival. If a company can’t adjust quickly, it may have to shut its doors as new business strategies hand the advantage to competitors.
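To make the linear-versus-exponential intuition concrete, here is a rough sketch (our own toy numbers, not Kurzweil’s data) projecting thirty years of progress both ways:

```python
# Toy comparison of linear vs. exponential progress (illustrative numbers
# only): a constant unit of progress per year vs. capability that doubles
# every two years.

linear_rate = 1.0        # one "unit" of progress per year
doubling_years = 2       # assumed doubling period

for year in (10, 20, 30):
    linear = linear_rate * year
    exponential = 2 ** (year / doubling_years)
    print(f"year {year}: linear={linear:.0f}, exponential={exponential:,.0f}")

# year 10: linear=10, exponential=32
# year 20: linear=20, exponential=1,024
# year 30: linear=30, exponential=32,768
```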
While inside the architecture community an argument over the best framework rages, to outsiders it resembles crows fighting over scraps at the dump. The winner is important to the crows and a few bystanders but relatively unimportant to the rest.
More important than architectural identity is understanding the value architecture brings today. The value of a sales division is clear: to bring in revenue; the finance division’s value is to manage the company finances, and so forth. A typical department knows its value proposition thoroughly. A member of a well-run department can explain its contribution in an elevator and still have time to discuss last night’s game before reaching the desired floor. However, it is rare for an architect to speak about architecture’s value to the company in clear business terms.
In the quest to uncover the value of architecture, academic research fares no better, showing that despite all expended effort, framework-based architectures have failed to deliver. Complexity and the increased rate of change in technology have transformed the business landscape, but architecture hasn’t kept pace. The following quotes from academia and industry groups provide some insight:
There exists no single comprehensive view of the ways an architectural practice might add value to an organisation. — Vasilis Boucharas et al.
Measuring EA effectiveness is often deemed difficult by both practitioners and researchers. — Wendy Arianne Günther
“A great strategy is valuable only if a company is capable of executing that strategy. And whether or not a company can execute its strategy depends largely on whether it is designed to do so. In other words, it depends on business architecture — the way a company’s people, processes, systems, and data interact to deliver goods and services to customers.”
So, as we noted earlier, architecture must go deeper by focusing on three pillars: (1) accelerated change, (2) decentralised decisions, and (3) public self-governance.
The Three Pillars of Digital Architecture
1. Accelerated Change: Optimise for Speed
As we know, external change is happening at an exponential rate. This changes the speed of execution from a useful to a critical success factor. If companies aren’t readying themselves and getting their business architecture right today, they increase the chance of becoming irrelevant tomorrow.
Companies slow to change have always been at a disadvantage. My first-hand experience of this comes from my time working at a small telecoms company in Ireland in the late 1990s, leading a team of three. Telecoms consumers began to ask for additional content, such as recommended listings, sports scores, and local weather. Providing this content meant that operators could charge more and increase revenue.
We spent five months building a new workstation platform that offered these new services and then flew to Nortel in Rochester, New York, USA, hoping to sell it. It turned out that a team of 50 people in Nortel had been working for two years to build the same platform and were nowhere near completion when we showed up. The key difference was that Nortel’s organisational structure slowed them down, while ours allowed us to move as fast as we could.
In the end, Nortel took so long deciding whether to buy our software that we approached a telco directly and won the deal ourselves, in effect becoming a competitor. The world outside Nortel started to move faster than the world inside, and they didn’t notice until it was too late, contributing to the downfall of this once-great institution.
Today, companies must reorganise quickly so that they can move faster, keep up with the external rates of change, and avoid becoming the new Nortel. Optimising for speed means shortening the time from idea to implementation — from lightbulb to lights on.
2. Decentralised Decisions: Power to the Teams
Hurricane Katrina hit the US in 2005, causing fatalities, lost homes, and devastation in many towns and cities, including New Orleans, Louisiana. The agency with overall responsibility for disaster management was the Federal Emergency Management Agency (FEMA). Most agencies tasked with providing relief, FEMA in particular, did not do so adequately. The top-down chain of command was mostly useless when those on the ground needed to make immediate decisions. People felt disempowered and stifled by bureaucracy.
One notable exception was Walmart. Walmart shipped almost 2,500 truckloads of merchandise and medication to New Orleans before FEMA even began any relief efforts and provided trucks and drivers to community organisations. How was Walmart able to act almost immediately after the hurricane when the government agencies responsible for providing relief took days (sometimes weeks) to get to affected areas?
A key reason is Walmart’s decentralised decision-making. The company gives both regional and store managers authority to make decisions based on local information and immediate needs. As Hurricane Katrina approached, Walmart CEO Lee Scott sent a message directly to his senior staff and told them to pass it down to regional, district, and store managers:
“A lot of you are going to have to make decisions above your level. Make the best decision that you can with the information that’s available to you at the time, and, above all, do the right thing”.
On the ground, Walmart staff turned stores into emergency sleeping quarters, set up temporary police headquarters, and, in one case, ran a bulldozer through a store to collect undamaged supplies and give them to those in need. People could make life-saving decisions because they didn’t need to wait for permission. They already had permission as part of their job.
Today, in a world of accelerating change, companies must empower teams like Walmart did. To achieve this, decentralising the decision-making process is vital – it empowers individuals and reverses bureaucracy, which is toxic to innovation. As world-renowned business thinker Gary Hamel and his coauthor Michele Zanini note in Harvard Business Review,
“Bureaucracy is the enemy of speed … bureaucracy is a significant drag on the pace of decision-making in their organization”.
So, how does architecture enable decentralised decision-making, reduce bureaucracy, and accelerate work? Public self-governance helps answer this question.
3. Public Self-Governance: From Governance Blockades to Buffet-Style Decisions
Traditional technology governance resembles theatre, where various stakeholders play parts in a process that makes the actors feel satisfied. The decided lack of applause from the enterprise is telling.
Governance committees decide centrally, causing delays in work and frustration for the parties awaiting an outcome. They rarely have the same level of information as the team on the ground. Of course, the committees can request more details, but this only increases delays. Occasionally, they assume knowledge and rule on matters in semi-ignorance, acting like unaccountable early European monarchs.
The book Accelerate discusses highly sophisticated and complex technology projects. In considering the usefulness of a change advisory board (CAB) or central approval process, the authors found that:
“External approvals were negatively correlated with lead time, deployment frequency, and restore time, and had no correlation with change fail rate. In short, approval by an external body (such as a manager or CAB) simply doesn’t work to increase the stability of production systems, measured by the time to restore service and change fail rate. However, it certainly slows things down. It is, in fact, worse than having no change approval process at all”.
A central approval process is akin to a restaurant with only one server. The server can handle a few tables. As the company grows, the number of tables also grows. The order queue gets bigger and diners face a longer wait. Eventually, diners are upset, the food gets cold, the server is exhausted, and ultimately quits. We need instead to move to a buffet model, where diners can serve themselves, the food is hot, and a smiling server is on hand in case anything additional is needed.
Enterprises must move away from the old model of centralised decision-making to a model of public self-governance. Away from monarchy and toward democracy, giving teams the knowledge and authority to make decisions in the open.
What Is Public Self-Governance?
Public self-governance is a simple process, where teams ask themselves three questions after first stating the purpose of the proposal:
Is there a positive return?
Is this a Type 2 decision?
Is this easily reversible?
If all three answers are yes, then the team makes the answers available internally and begins work immediately. This process increases the speed of decision-making, increases autonomy within teams, and creates a culture for innovative ideas to blossom. Team members are more engaged, and both they and the company reap any rewards that materialise. Let’s break these questions down.
A. Is There a Positive Return?
This question concerns the business case and is merely asking whether the expected return is greater than the cost. This simple question, however, has a deep impact, helping people at every level of an organisation consider ROI as they dream up new proposals.
B. Is This a Type 2 Decision?
This question considers scope and comes from Amazon. Jeff Bezos, in his 2015 letter to shareholders, explained the two types of decisions within Amazon: Type 1 decisions are high-impact and hard to reverse (one-way doors), while Type 2 decisions are lower-stakes choices that can easily be reversed (two-way doors). Amazon leaves Type 2 decisions to its teams.
With public self-governance, an individual at any level can make a Type 2 decision, which provides autonomy and allows immediate action. Type 1 decisions are made by senior stakeholders with consideration of a wider set of factors (e.g., risk, business environment, company performance, alignment with strategic goals). Training individuals to distinguish Type 1 from Type 2 decisions is part of an enterprise’s learning journey.
C. Is This Easily Reversible?
This question concerns complexity. If a proposal needs integration into existing systems, or requires new data, complexity increases. The higher the level of complexity, the greater the work needed to reverse the action. To answer this question, one must break it down further and consider the following three categories:
Data. Is the data protected? Can it be retrieved and/or deleted?
Integration. Are integrations or custom development required? Is this work easily reversed?
Users. How does removing the feature impact its users?
The answers to all public self-governance questions should be openly available within the company, and the architecture group should perform continuous retrospective reviews. If any issue arises, or if any of the three answers is no, the architecture group then becomes a partner, helping to generate a business case and thoroughly work through the proposal. This proactive approach allows other teams without issues to move forward with no delays.
Public self-governance requires a culture that encourages experimentation and is tolerant of failure. If something is easily reversible, then it is low risk. If it doesn’t deliver as expected (i.e., less value, higher cost, more complexity), it can be halted, with lessons noted, and everybody can then move on to the next decision.
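Pulling the three questions together, a minimal sketch of the check might look like this (the data model and field names are our own invention for illustration — the article defines the questions, not an implementation):

```python
# A minimal sketch of the public self-governance check described above.
# The dataclass fields and their names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Proposal:
    purpose: str                    # stated first, before the three questions
    expected_return: float          # estimated benefit
    cost: float                     # estimated cost
    is_type_2: bool                 # lower-stakes decision, per Bezos's distinction
    data_protected: bool            # reversibility: data can be retrieved/deleted
    integration_reversible: bool    # reversibility: integrations can be undone
    user_impact_acceptable: bool    # reversibility: removing the feature is tolerable

def may_proceed(p: Proposal) -> bool:
    """All three answers must be yes; any no routes the proposal
    to the architecture group for a fuller business case."""
    positive_return = p.expected_return > p.cost
    easily_reversible = (p.data_protected
                         and p.integration_reversible
                         and p.user_impact_acceptable)
    return positive_return and p.is_type_2 and easily_reversible

# If may_proceed(...) is True, the team publishes its answers internally
# and starts work immediately -- no committee in the loop.
```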
Other Considerations
Financial Purse Strings
“Negotiating budget exceptions — often necessary when a company has to move quickly — was also impeded by bureaucracy” — Hamel and Zanini
In most companies, costs will also need finance approval. Bureaucracy costs money; therefore, it is cost-effective to give blanket approval to all proposals below a set maximum amount.
Danger: Technologists in Control!
A word of warning: it is important to review answers to the public self-governance questions, continue an open dialogue, and support a learning culture. There is a difference between giving increased autonomy to technologists and abdicating any responsibility as a firm. The cautionary tale of Netscape should serve as a stark reminder of too much free rein given to technologists.
In 1995, the Netscape Navigator browser had over 80% of the market. Riding on this wave of success, Netscape began to rewrite the browser entirely, so it would support its newly created JavaScript programming language. Netscape intended to obliterate the all-conquering Microsoft, making Windows, according to Netscape VP of Technology Marc Andreessen, appear like a “poorly debugged set of device drivers.”
To the technologists in the firm, this was an obvious choice: rewrite the entire browser (i.e., the entire business) from scratch, removing old code and old bugs. It was just a matter of cleaning out the cobwebs to prepare for a new paradigm shift.
The full rewrite took two years — two years without new features, without meeting new customer needs, or dealing with competitive threats. By the time Netscape released its new Netscape Communicator browser, Microsoft Internet Explorer was everywhere, and Windows was the desktop platform of choice. Meanwhile, Netscape’s market share slid irreversibly, from over 80% in 1995 to 5% by the end of 2001. Netscape went from total dominance to a vague footnote. Plus, in an ironic twist, the new browser was buggy and slow compared to the old version.
AOL ended up purchasing Netscape in early 1999, and, by 2003, the company disbanded altogether, an ignominious end to what had looked like a brilliant future only eight years earlier. Here, Andreessen made a major decision solely on a technology basis. Referring to the public self-governance form, this was a Type 1 decision made as if it were Type 2. Netscape should have considered an array of factors, including risks, business strategy, and competitive threats. Ignoring these factors ultimately caused its demise.
As we see in the Netscape example, judgment is still necessary in making good quality decisions. Using public self-governance allows a business to scale its decision-making, but a business must also reinforce the learning culture so that staff members understand how to categorise their proposals and make better decisions.
Conclusion
To survive in this digital age, architecture must change. The old monsters of heavyweight governance, centralised authority, and long wait times are impediments in this new arena. Public self-governance breaks up decision hierarchies and speeds up technology decisions in the organisation. It encourages a business to move faster. This will have an enormous impact, allowing companies to adjust quickly to customer needs, changes in technology, and emerging business models. Public self-governance is a necessary step in setting a business up for success in this new era.
I’ve yet to meet a person who claims to be irrational. Everyone is convinced that they make rational decisions — this idea is at the core of theories in economics, organisations, and technology. When faced with a decision, a rational decision maker:
Defines the opportunity/problem
Lists all the constraints (time, budget, resources, etc.)
Searches for all solutions
Chooses the solution that gives the maximum benefit
According to Stanford Professor James G. March’s A Primer on Decision Making, this model of decision making is called maximisation.
The idea of maximisation, the concept of a rational decision maker, is based on 3 lies.
The first lie is that we can predict the future, that we can know every possible solution in advance. This is absurd — no-one can see into the future.
The second lie — we can predict how we will feel in the future about a benefit or consequence. The feeling we get after an event is rarely the feeling we had expected beforehand. To quote tennis great Andre Agassi, from his autobiography:
‘Now that I’ve won a slam, I know something very few people on earth are permitted to know. A win doesn’t feel as good as a loss feels bad, and the good feeling doesn’t last as long as the bad. Not even close.’
The last lie is the biggest one of all — that we have the time and brainpower to search for every potential solution that exists. The first problem here is a lack of time. If we fully worked out each decision we had to make in our lives, we would have no time for anything else. The courses of action are infinite. A minor change at one level can unleash a butterfly effect of consequences for every level below. The second problem concerns our brainpower. We are incapable of comparing complex outcomes because we suffer from problems of:
attention — too much noise
memory — our limited capacity to store information
comprehension — difficulties in organising, summarising and using information
communication — different people (cultures/generations/professions) communicate information in different ways.
Because of these limitations, we simplify decisions by:
replacing the problem we face with a simpler one
decomposing problems into their component parts and solving these, hoping to solve the full problem by doing so.
seeking patterns and then following rules we have previously established instead of looking for new possibilities
narrowing the problem to a ‘frame’ — narrowing the decision choices available for selection. Frames come from early individual experience; more recent frames come from friends, consultants, and writers.
The legendary Herbert Simon tells us that instead of using a rational strategy, most decision makers ‘satisfice’. This means we compare alternatives until a ‘good enough’ solution is found, then we choose that option. If there is a better alternative, we rarely choose it, because we stop thinking about that decision and move on with life. We often fool ourselves into thinking we are maximisers — finding the best solution after an exhaustive search. In reality, we are more likely to satisfice and move on to the next item on our agenda.
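The distinction is easy to see in a toy sketch (our own illustration, with invented options and scores): a maximiser evaluates every option before choosing, while a satisficer stops at the first option that clears an aspiration level.

```python
# A toy illustration of maximising vs. satisficing (invented options
# and scores, not from March or Simon).

from typing import Callable, Iterable, Optional

def maximise(options: Iterable[str], score: Callable[[str], float]) -> Optional[str]:
    """Exhaustive search: evaluate everything, keep the best."""
    return max(options, key=score, default=None)

def satisfice(options: Iterable[str], score: Callable[[str], float],
              good_enough: float) -> Optional[str]:
    """Stop at the first option that clears the aspiration level."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None

candidates = ["walk", "bus", "taxi", "cycle"]
convenience = {"walk": 0.4, "bus": 0.7, "taxi": 0.9, "cycle": 0.8}.get

print(maximise(candidates, convenience))        # taxi -- after scoring all four
print(satisfice(candidates, convenience, 0.6))  # bus  -- first "good enough" option
```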
In organisations, situations become more complex. A decision may involve a group of people. The process may continue for a predetermined time, rather than stop when a satisfactory outcome is reached. There may be situations (usually simpler decisions) where the organisation maximises.
To quote March:
“Decision makers look for information, but they see what they expect to see and overlook unexpected things. Their memories are less recollections of history than constructions based on what they thought might happen and reconstructions based on what they now think must have happened, given their present beliefs.”
We think we make sound decisions, but in reality our ability to be rational is bounded by the constraints of time and cognition. We are not rational creatures.
This piece has also been published by the Cutter Journal here under the title ‘The 3 Lies of Maximization’
How can you make a great breakthrough? How can you start the next era-defining business, write the next great song, create the next political movement? To do any of these, you must be more innovative. So where do you start?
Legendary Finnish architect Eero Saarinen sat one morning staring at a broken grapefruit peel, the remains of his breakfast. He was in the middle of an enormous project, designing a new airport for one of the world’s great cities — New York. Staring at the fruit peel, a vision suddenly grabbed him. He would design the airport shaped like a split grapefruit peel. One of the most groundbreaking pieces of architecture — the TWA terminal at JFK airport — was the result.
So, how did Saarinen do it? He took on an idea that would have seemed ludicrous to most — something outside the ordinary. How do we follow Saarinen’s lead?
In the seminal book ‘A Primer on Decision Making’, Stanford Professor James G. March tells us that there are three ingredients a company needs for innovation: slack, luck and foolishness.
Slack is the difference between the outcome chosen and the best possible outcome (say, a successful product sold 1 million units but could have sold 10 million with a different decision). In an organisation, more slack means more innovation. How do you know if there is slack in your company? If you continually exceed targets, you are in the right place. Beware, slack can change. When performance in an organisation exceeds aspirations, slack accumulates, and when performance decreases, slack decreases.
Luck is necessary for successful innovation. Luck can come in many guises, the right timing, a breakthrough in a related industry, new staff creating the ‘right’ chemistry in a team. Because innovations which provide breakthroughs are difficult to identify in advance — a very promising idea can fail miserably — some level of luck is necessary for an innovation to take hold.
Foolishness produces major innovations. It is the most important ingredient. Ordinary ideas don’t make great breakthroughs, an ordinary idea preserves the existing state of affairs. Organisations need to support foolish ideas even if they have a high probability of failure. This requires a high tolerance for risk and a culture that promotes innovation over conformity.
As someone in an organisation, what can you do to be more innovative? There are three things — you must:
favour high-quality information over social validity
understand your risk tolerance
be foolish.
People in an organisation seek social validity over high-quality knowledge, according to March. Organisations are social systems, and all social systems require a shared understanding of the world. In any organisation, ambiguities in meaning exist between people. Different people have different interpretations of reality. These ambiguities and interpretations threaten the social system. To combat this threat, mechanisms emerge to create a shared understanding among all participants. We:
edit our experience to remove contradictions.
seek information to confirm decisions.
forget disconfirming data.
remember previous experience as more consistent with present beliefs than was the case.
change beliefs to become consistent with actions.
A preference emerges for vivid stories with lots of detail (much of it irrelevant). Processing all that extra information increases confidence; it does not increase accuracy. Beware of detailed stories — they give decision makers what they want: to see the world with more confidence rather than more accuracy.
Every innovator takes risks. To get more innovative results, you must take some level of risk. It’s important to understand your risk appetite when making any decision — it is driven by:
Personality — your natural tendency towards risk-taking
Reaction — your ability to take variable levels of risk depending on the situation. Decision makers are more risk averse when the outcome involves gains, and more open to risk when it involves losses.
Reasoned choices — you may make a reasoned choice depending on the situation. For example, needing to finish first in a contest requires a very different approach from building an internal partnership with another department.
Reliability — the risks you take are affected unconsciously by unreliability. The situation may suffer from a lack of knowledge, or a breakdown in communication, trust, or structure. The greater the ignorance, the greater the variability of outcome, and the greater the risk.
Finally, for innovation to occur, you need to be foolish. So, what does foolish mean in this context? Innovative ideas take a concept that flies in the face of ‘common knowledge’ and transform everything around it. However, there is great social pressure in organisations to create a feeling of safety for all and to proceed with ideas that give a ‘comfort level’ across the whole group. Anything outside this is considered foolish. A simple check on your idea: if it doesn’t seem foolish to others, chances are it’s not a bold enough vision. A true innovator is treated as a fool when they propose a breakthrough — as the following famous examples show:
“Drill for oil? You mean drill into the ground to try and find oil? You’re crazy.” — workers whom Edwin L. Drake tried to enlist in his project to drill for oil in 1859
“The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?” — David Sarnoff’s associates in response to his urgings for investment in radio in the 1920s
“I think there is a world market for maybe five computers.” — Thomas Watson, chairman of IBM, 1943.
“There’s no chance that the iPhone is going to get any significant market share.” — Steve Ballmer, Microsoft CEO, 2007
It is easy to be a cynic, and it costs nothing to criticise. It is hard to be an innovator. First, you need to be conscious of how you make decisions. You must be in a supportive organisation. You must take risks and appear foolish to some people. And pray for some luck.
But what is life for, if not to try? Life is to be lived, and every brilliant innovation comes from a person just like you. How many innovations have we lost because it was easier not to rock the boat, easier to listen to the crowd, easier to do what we have always done before? We must grasp the nettle and fight the urge to be safe.
The remarkable Herbert Simon won the Nobel Prize in Economics in 1978 and the Turing Award in 1975. Reading about his life gives me a panic attack when I consider how little I have achieved in comparison. He published ‘Administrative Behaviour’ in 1947, and I started reading it in 2021. I began by treating it as a relic of World War II-era business, a history book. It quickly filled me with horror as Simon explained business, thinking and decision making in ways that seemed obvious once I had read them, but that I had never even considered. I immediately felt weak. I felt like a total imposter. How had I never read Herbert Simon before? Why had nobody told me? It panicked me for days. I dropped a reference to the book into every meeting for weeks. That practice soon calmed me down. It turns out almost no one I know had read it either.
Early in the book, Simon talks about how each department in an organisation has one job. They take in information and turn it into decisions which are executed (either by them or another department). He introduces the concept of Bounded Rationality – how it is impossible to evaluate an almost infinite set of possibilities when making a decision. Instead, we must choose a smaller ‘bounded’ set of assumptions to work within.
Back in the actual world of architecture, I have always boiled the job down to either a) making decisions or b) providing information to help others make decisions. Yet I’ve only ever had a vague sense of how architects make decisions, even though it’s been my job for the majority of my career.
In a fantastic paper published in 2014, “EA Anamnesis: An Approach for Decision Making Analysis in Enterprise Architecture”, Georgios Plataniotis, Sybren de Kinderen and Henderik Proper explain the importance of capturing decisions made about architecture. They go further, arguing that capturing the reasons for a decision and alternatives considered is just as important. Documenting the rationale when a decision is made gives it context, explains the environment at the time, and helps inform future decisions.
The paper describes four strategies used to make Enterprise Architecture decisions. Each decision is an attempt to select the best alternative among competing choices. They split decision types into the following buckets:
Compensatory. This type of decision considers every alternative, analysing all criteria in low-level detail. A strong score on one criterion can compensate for a weak score on another, hence the name. There are two types here:
Compensatory Equal Weight – criteria are scored and totalled for each potential option; the option with the highest total is the best decision.
Compensatory Weighted Additive (WADD) – each criterion is given a weighting to reflect its significance (the higher the weighting, the higher the significance). Each score is multiplied by its criterion’s weighting, the results are summed per alternative, and the highest total wins.
Non-Compensatory. This method uses fewer criteria. The two types are:
Non-Compensatory Conjunctive – alternatives that cannot meet a criterion are immediately dismissed; the winner is chosen among the survivors.
Non-Compensatory Disjunctive – an alternative is chosen if it complies with a criterion, irrespective of other criteria.
Say you were buying a car, and you had the following criteria: fuel efficiency, colour, cost, and ease of parking (as scored below).
| Car   | Fuel | Colour | Cost | Parking | Total | Fuel x2 | Weighted Total |
|-------|------|--------|------|---------|-------|---------|----------------|
| Car A | 9    | Black  | 6    | 4       | 19    | 18      | 28             |
| Car B | 6    | White  | 10   | 5       | 21    | 12      | 27             |
| Car C | 4    | Grey   | 4    | 10      | 18    | 8       | 22             |
| Car D | 1    | Red    | 1    | 8       | 10    | 2       | 11             |

(Total = Fuel + Cost + Parking; Weighted Total doubles the Fuel score. Colour isn’t scored numerically.)
The four strategies might look like this:
Compensatory Equal Weight – in this case you pick the highest unweighted total – Car B
Compensatory Weighted Additive – because you drive long distances, you apply a double weighting to fuel efficiency and pick the highest weighted total – Car A
Non-Compensatory Conjunctive – because you live in a city, you discard any car that isn’t easy to park (scoring at least 7/10). This leaves a choice between C and D; you choose the higher total of the two – Car C
Non-Compensatory Disjunctive – you fall in love with a red car – ignore everything else – Car D
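For anyone who wants to see the mechanics, here is a minimal sketch of the four strategies applied to the car table above, written in Python. The data layout and function names are my own illustration, not taken from the paper.

```python
# A minimal sketch of the four decision strategies, applied to the car
# example above. Data layout and function names are illustrative.

cars = {
    "Car A": {"fuel": 9, "cost": 6, "parking": 4, "colour": "Black"},
    "Car B": {"fuel": 6, "cost": 10, "parking": 5, "colour": "White"},
    "Car C": {"fuel": 4, "cost": 4, "parking": 10, "colour": "Grey"},
    "Car D": {"fuel": 1, "cost": 1, "parking": 8, "colour": "Red"},
}
NUMERIC = ("fuel", "cost", "parking")  # colour is not scored numerically

def equal_weight(alternatives):
    # Compensatory Equal Weight: the highest unweighted total wins.
    return max(alternatives, key=lambda c: sum(alternatives[c][k] for k in NUMERIC))

def wadd(alternatives, weights):
    # Compensatory Weighted Additive: multiply each score by its weighting,
    # sum per alternative, and pick the highest weighted total.
    return max(alternatives,
               key=lambda c: sum(weights.get(k, 1) * alternatives[c][k] for k in NUMERIC))

def conjunctive(alternatives, criterion, threshold):
    # Non-Compensatory Conjunctive: dismiss anything below the threshold,
    # then choose the best total among the survivors.
    survivors = {c: v for c, v in alternatives.items() if v[criterion] >= threshold}
    return equal_weight(survivors)

def disjunctive(alternatives, criterion, value):
    # Non-Compensatory Disjunctive: take the alternative that satisfies
    # the single criterion, ignoring everything else.
    return next(c for c, v in alternatives.items() if v[criterion] == value)

print(equal_weight(cars))                  # Car B (unweighted total 21)
print(wadd(cars, {"fuel": 2}))             # Car A (weighted total 28)
print(conjunctive(cars, "parking", 7))     # Car C (only C and D survive)
print(disjunctive(cars, "colour", "Red"))  # Car D
```

Running it reproduces the four answers above: Car B, Car A, Car C and Car D.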
Compensatory decisions are suitable when time and resources are available to:
gather the right set of alternatives,
evaluate each alternative in detail,
score each with consistency and precision.
Non-Compensatory decisions are necessary when:
there is time stress
the problem is not well structured
the information surrounding the problem is incomplete
criteria can’t be expressed numerically
there are competing goals
the stakes are high
there are multiple parties negotiating in the decision.
A level of pragmatism is important when choosing a decision strategy. Under Simon’s concept of bounded rationality, compensatory decisions can never be fully worked out. Some level of assumption is necessary; otherwise the work needed to make every business decision is almost infinite. However, within a set of ‘givens’ (given we need to decide by Friday, given the budget is x, given the resources available are y, and so on), the weighted additive method (WADD above) has proven effective in my experience. The framework forces decision makers to consider each alternative clearly, as opposed to a clump of criteria mashed together. It also forces all parties to agree a set of weights, helping the group settle on a hierarchy of importance. These processes improve communication between parties, even when they disagree on the choices of criteria and weights.
A strange magic happens during negotiation of the scoring, as parties try to force their choice. The mental mathematics going on inside heads is dizzying. I have witnessed all types of behaviour: people determined that nothing should change, an exec wanting an inflight-magazine article to form the basis of all future strategy, a head of a business unit wanting us to use his nephew’s school project as part of the solution, all the way to one mid-40s executive who got so annoyed with the debate that he started jumping up and down, stamping his feet, because he wanted his decision, and “that’s what I’ll get”.
This is an unprecedented time. It feels like the usual world has paused. I was working in Florida last week when the seriousness of Covid-19 hit me. Texts, emails and media posts came flooding in from everywhere. Somebody told me that the US was stopping all flights to Europe. I froze when I heard this. My family were 5,000 miles away. Flights to Ireland didn’t stop, but I worried every second about the possibility of not seeing my two little boys until I landed back in Dublin. They didn’t seem so concerned about me; they were aggressively interested in how long they would have to wait for their presents. My wife was happy enough to see me until she got her gift. “Is this all you got me, a book? They sell these in Ireland, you know.” I was thrilled to be back home.
Even after I came home, I spent days worrying, trying to figure out what was happening with the coronavirus. Had I bumped into an infected person? Was the restaurant table at the airport laboratory-clean, as I would expect? Was I already infected? Would the kids be ok? What about my parents? Would we have enough food if the shops closed? I felt totally overwhelmed.
I had a sharp realisation. The situation with Covid-19 is happening, and I can’t change it. I can try to help those around me, but I’m powerless to change the global situation.
I can choose how to react. I am choosing to find the opportunity in it.
Soon the virus will be under control, and the world will be back to normal. A slightly new normal perhaps, but normal all the same.
I won’t spend the time over the next few months gossiping, trying to predict what happens. I won’t spend it scrolling through endless social media posts. I won’t spend it passive-aggressively arguing about pseudoscience with my wife (“OK, so you’re an epidemiologist now, Mark“). Instead, I’m determined to put in place a routine of learning, so I become better at something every day.
For the next two months, I will spend 60 minutes every day on something which changes my life for the better.
I will research and write an academic paper on decision making in Technology Architecture. As I am working from home for the next few weeks, I will use the time I’d have spent commuting. This new knowledge will help me improve the work-lives of every customer we have at Workhuman. It will also make me a better technologist for the rest of my career.
Please find one thing for yourself to do.
Pick one thing that would alter your life. Is it a new skill, a skill that will help others, or a passion that you’ve not had time for?
You can learn to program, or begin an exploration of jazz or classical music. You can write a blog, an article, a book. You can read a self-improvement book, take a course on Coursera, or indulge a passion almost forgotten for years. Maybe you’d like to learn how to ride a unicycle.
Find something to do and tweet at me. I will tweet every morning. Let’s give each other support in this time to improve the world. @markgreville #ChooseOpportunity
Let’s take care of each other and stay positive where possible. Together we are stronger.