If it’s your first time here, you may be surprised at how few pieces I have written. (After reading for a while, you may even be glad of this fact). When friends bring up what they assume is a painful subject, they get a faraway look in their eyes. They place a gentle hand on my shoulder, gaze into the distance, and ask me if I’ve seen ChatGPT. “AI can solve your problem, Mark. It can generate thousands of posts for you. It could help the blog look less like an abandoned quarry.” They think it would solve my problem. My Problem! If that were my problem, life would be a dream. This idea misses the purpose of this blog. It misunderstands my reason for writing entirely.
Generative AI works because it sucks in lots of data, processes it and builds statistical models. At its core, it’s a fancy autocomplete — or, as Grady Booch puts it, a bullshit generator. It acts like an annoying older brother, automatically finishing every sentence (apologies to my own brothers). GenAI probabilistically keeps predicting the next word until it produces sentences, paragraphs, and a complete piece of writing. It can do this because the statistical models have established the most probable next word. These statistics are based on text from books, academic papers and (blesses myself) the internet.
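The next-word loop described above can be sketched with a toy bigram model. This is a drastic simplification of my own devising, not how any real model works (real systems use neural networks over tokens, not word-pair counts), but it shows the core idea: the "most probable next word" is just a statistic gathered from training text.

```python
from collections import Counter, defaultdict

# Tiny "training corpus": count which word follows which.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    # Return the statistically most likely successor seen in training,
    # or None if the word never appeared mid-sentence.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

def generate(start, length=5):
    # Keep predicting the next word until we hit the length limit
    # or run out of statistics -- autocomplete on a loop.
    out = [start]
    for _ in range(length):
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Note what is absent: nothing in this loop knows what a cat or a mat is. It only knows which words tended to follow which, which is the point of the paragraph above.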
However, there is no concept of meaning in AI. Reasoning is not programmed in anywhere. The output is remarkable, and it can appear that the machine is thinking. It isn’t. This is why we sometimes get unreliable outputs — hallucinations. Any meaning we perceive is simply a vague echo from the training data of billions of people. GenAI is digital homeopathy.
We are all lazy by default. Humans rely on heuristics to understand the world. If we didn’t, we would have to process every single thing we hear, see, smell, taste, and touch. A short walk in a city would exhaust our brain’s capacity. We would lose the ability to decide which way to go, overwhelmed by thousands of people, cars, smells, noises and the like. The great Herbert Simon coined the term ‘bounded rationality’ to describe the cognitive limitations of the mind.
Thinking is hard work. For me, thinking is about sucking in data, and then processing it. I process it through writing. Writing is my way of thinking.
I first had a go at writing because my friend Gar was guest-editing a technology journal. Even though I’d never written before, I was confident that I could write about something I already knew. This confidence was quickly shattered. I was embarrassed at how muddled my thoughts were. Turns out, I knew nothing. Solid ideas fell apart the minute I wrote them down. I could barely finish a sentence without feeling the old familiar burning creep across my cheeks as another idea fell apart while I tried to pin it down.
Writing anything down forces me to think really hard. Because I was determined to improve my thinking, I wrote every day. I then started a blog because the potential for embarrassment at publishing poor output forced me to aim for a higher standard.
I’m not interested in building an audience; I am trying to improve. I’m not trying to publish a lot of work. In fact, I have almost 200,000 unpublished words in my Ulysses editor. This writing habit has helped me build a model of the world. And four of my pieces here have reached the front page of Hacker News. This is a victory for me, a nobody from rural Ireland.
Technical Debt Is Not Debt; It’s Not Even Technical
AGI May Never Align With Human Needs
The dominant model on the internet is one of consumption. The more we consume, the more ads we see, the more we buy, the bigger the economy. But if all we do is consume, and never take the time to process information, or even produce our own, then we learn very little. Go back 3 months and look at your internet history. What did you learn from browsing? What actions did you take? Probably close to nothing useful. Instead of spending 2 hours a day on the internet, take 15 minutes to write. Just write down some thoughts. Any thoughts. This slowly changes your understanding of the world around you for the better.
GenAI is an information processing tool. GenAI will help people process information more effectively. But people are lazy by default. If thinking is the hardest work in the knowledge economy, people will avoid thinking where possible.
Therefore, for those who overuse it, GenAI may well make them more stupid. Victor Del Rosal, in his incredible book Humanlike, calls this Cognitive Atrophy. I already see too many examples of people outsourcing their thinking to Generative AI tools. I see them slowly getting more stupid every day.
Me, I’m building my own model.