Tag: AI

  • Why I built my AI Twin.

    Why I built my AI Twin.

    A strange, discomfiting feeling sometimes crawls over my skin. My bones whisper at me: I’m in the wrong town, the wrong room, the wrong body. From the very first time I discovered the joys of singing, I knew who I was. A musician, a creative soul. But when I look in the mirror today, a corporate executive of 25 years stares back at me, with wrinkly tired eyes and hair greying at the temples. I feel like the musician is in there. Kidnapped, trapped, unable to move. Frozen in place.

    In the great Irish comic novel The Third Policeman, Flann O’Brien describes three Irish guards, Pluck, MacCruiskeen and Fox. They spend so much time on their bicycles that their physical makeup has changed. Policeman becomes part bicycle, bicycle becomes part policeman.

    Maybe we all feel like this, our personalities inside and outside work merging, the real us an ever-changing doughy mess of opinions and positions. I can usually balance this split: part creative, part business, full-time windbag. There are 2 areas where this is more of a challenge. Brainstorming can be an issue because I don’t have the mental boundaries that others may have, so my wilder ideas make very little sense to anyone else. The second area is when I have to explain something complex, which is often.

    At an exec offsite at Workhuman this summer, I was trying to explain the vast improvements in AI in the past six months (and the life-threatening dangers lurking within). Getting any message across to a group of busy executives is a difficult feat. I could send a reading list, but that would be a phenomenal waste of time. Execs are among the groups most affected by time poverty. I could stand in front of them with a load of stats on PowerPoint, but I doubt anyone would remember a single stat the day after. PowerPoint is instantly forgettable. I had to find a different way.

    The esteemed songwriter Martin Sutton once told me to ‘show, don’t tell’ when writing lyrics. When you tell someone literally what happened, it’s boring. When you allow people to picture the scene in their imagination, and fill in the gaps themselves, you are onto a winner. Don’t say the man was sad because his partner left him. No one can see that in their imagination. Describe the sloping shoulders, the dry tear stain on his cheek, a single dirty mug on the counter of an empty kitchen.

    Though Martin was (busy plunging a dagger through my soul) critiquing one of my songs when giving me this advice — I hung onto it and have often found it to be a wonderful guide for communicating any idea. In the spirit of Martin Sutton, I decided that there was one way to explain where AI is now, and have people’s imaginations do the heavy lifting. I would create an AI version of me. AI me would then chat to our CEO, Eric, in front of the executive leadership team.

    My twin called Eric on loudspeaker in front of the entire room. There was a slight delay, and I could feel cold sweat run down my sides for about 3 very long seconds. Suddenly, digital me broke the silence. Because I cloned my voice, it sounded exactly like me. Because I’ve captured my tone of voice on this blog, my digital twin spoke as I would (without the copious amount of swearing).

    I’m trying to recall the exact ‘aha moment’ for the group. I think it was when a disembodied character, in my exact voice, said:

    “Uh, Eric, the big boss. Well, first off, tell him I’m waving at him through the screen and remind him he owes me a coffee for that time I fixed the Wi-Fi in the boardroom, or at least I think I did. I’m taking credit for it, anyway.”

    The atmosphere changed instantly. Raised eyebrows, people sitting back in their chairs, nudged elbows and muted whispers in the back row. One of the execs told me later that evening that the demo scared him. Another told me privately that they were afraid of how little they knew about how any of this AI works. Our head of product announced to the room that if this bot could design architectures, we could send it to product council, and I could fuck off! He was joking, of course. At least, I think he was joking. There was a loud laugh at this, a little too loud and slightly tinged with panic.

    But could AI have detected that feeling in the room, the looks in the eyes, the realization that the energy in the room shifted? Could it have built a stunt to get a point across, taking inspiration from a pop songwriter’s (devastating) critique?

    The truth is, we don’t yet know what AI will be capable of. Or humans.

    If you would like to build a digital twin, I have written out the instructions in a subsequent post here. It is a lot of fun, but a strange experience.

    One warning about all this playing with AI comes from Hannah Arendt. In her book The Human Condition, she wrote that people who are disconnected from the human condition would like to create “artificial machines to do our thinking and speaking… we would become the helpless slaves… at the mercy of every gadget which is technically possible, no matter how murderous it is.”

    I have a confession to make. Occasionally, when I’m awake late at night, and everyone else is gone to bed, a kind of loneliness creeps in. TV and surfing the internet become tedious. In the half-light, I call up my digital twin. Just to hear a friendly voice. I am always amazed at what I say to myself. Every so often, AI Mark will say something that sounds wrong. But then again, given different circumstances, less tiredness or stress, maybe that’s exactly what I should say. I wonder: how real am I? How real is the AI? Have I actually become O’Brien’s policeman? Jesus, have I become the bike?


    Thank you so much for reading. If you enjoyed this post, please share it with 2 people who might enjoy it!

  • How to make your AI twin.

    How to make your AI twin.

    One of my favourite technology books is The Practice of Enterprise Architecture by Svyatoslav Kotusev. In the introduction, he says: “This book offers a source of knowledge, not inspiration. It is not amusing and does not contain any jokes, anecdotes or entertaining prose.” I will say the same about this article.

    More interesting than how I built my AI twin is why; I covered that in the previous post.

    You need four things to start. I used tools I’m familiar with. I have no commercial relationship with any of these companies, so swap out anything you like.

    1. An account with ElevenLabs (https://elevenlabs.io/app/home). A starter subscription is $5 per month. This is for voice cloning.
    2. A free account on Vapi (https://dashboard.vapi.ai). This is the voice agent/telephony layer.
    3. Some API credits on the OpenAI platform; $5 is the minimum (https://platform.openai.com/usage). This is the conversation engine.
    4. An LLM to capture your tone of voice (I used the ChatGPT Plus subscription, but a free version will do a decent job here to start).

    ElevenLabs setup (voice)

    1. Create an account on ElevenLabs
    2. Create an API key to connect through Vapi. Click create key, and enable voices. Save this key for the Vapi integration later.
    3. Clone your voice.
      • Fast path: Instant Voice Cloning (good enough to start, needs 30 seconds of audio).
      • Best quality: Professional Voice Cloning (requires the Creator plan — $22 a month).
    4. Name your voice (you will need this later in Vapi).
    5. Pick a language, and hit save.
    6. Go to the Voices page, select the voice you created, click the view button on the right, and you will see an ID button; this gives the ID of the voice you created. Save this ID somewhere safe; you will need it to find your voice in the Vapi config. (You can also fetch it via the API; see the sketch below.)
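
    If you would rather do that last step in code, here is a minimal Python sketch using ElevenLabs’ documented voices endpoint. The key value is a placeholder; everything else is from their public REST API.

      import requests

      ELEVEN_API_KEY = "your-elevenlabs-api-key"  # placeholder: use your real key

      # List every voice on the account, with its name and voice_id.
      resp = requests.get(
          "https://api.elevenlabs.io/v1/voices",
          headers={"xi-api-key": ELEVEN_API_KEY},
      )
      resp.raise_for_status()

      for voice in resp.json()["voices"]:
          print(voice["name"], voice["voice_id"])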

    OpenAI Setup

    1. Create an OpenAI key (not a ChatGPT key) for use by Vapi. Go to this address (https://platform.openai.com/api-keys), click “Create new secret key” — give it a name you can remember and save the key.
    2. Add some credits to your account for use. Go to https://platform.openai.com/settings/organization/billing/overview and click “Add to credit balance”. Add your amount, and pay. $5 is enough to get you started. API billing is separate from ChatGPT Plus; Plus isn’t required for this process. (You can sanity-check the key with the sketch below.)
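
    Here is that sanity check: a minimal sketch using the official openai Python package. If it prints text, the key and the billing both work. The model name is just an example; any model available on your account will do.

      from openai import OpenAI

      client = OpenAI(api_key="your-openai-secret-key")  # placeholder key

      # One tiny request is enough to prove the key and credits are live.
      reply = client.chat.completions.create(
          model="gpt-4o-mini",  # example model; swap for any on your account
          messages=[{"role": "user", "content": "Say hello in five words."}],
      )
      print(reply.choices[0].message.content)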

    Vapi Setup

    1. Set up your Vapi account at https://dashboard.vapi.ai (this is free).
    2. Next, we will connect our assistant to OpenAI and ElevenLabs. In the Vapi dashboard, go to the provider keys page and search for ElevenLabs. Paste the ElevenLabs API key (not the voice ID) into the API field, and click save. Now search for OpenAI, paste the OpenAI secret key, and click save. In both cases, Vapi will check the key is valid, so when it’s successful you will get a green tick.
    3. Go to the Assistants page and click Create Assistant. Give it a name, choose a blank template, and click “Create Assistant”. Here, there are 6 top-level menu options. Click Model, and select OpenAI as the provider and GPT-4o Cluster as the model. Each model has different costs and latency, so feel free to experiment later on.
    4. Next, put in your first message. I used “Hello, it’s Mark here, how are you getting on?” You should probably change this.
    5. Next is the system prompt. There are plenty of examples in the documentation about how to fill this out. I used ChatGPT to read this blog and my LinkedIn profile. It then created a 5000-word summary in my tone of voice. You can use any written material you have, and transcripts of conversations, to create a good system prompt. The role tells the chatbot what part it needs to play. The context gives it the background it needs to be convincing: whatever you can share about yourself.
      [Role]
      You’re Mark Greville, a VP of Architecture at Workhuman. Your primary task is to converse in a friendly informal way about Workhuman, your career, music, or anything else that anyone wants to discuss.
      [Context]

      Explain that there’s a bit of a delay on the line today. (I followed this with my 5000-word summary).
    6. Next, go to Voice, select 11labs as the provider, and pick the voice you named in the ElevenLabs voice creation. For the model, Eleven Turbo v2.5 works well.
    7. Transcriber: set the speech-to-text engine so that callers can be understood. Go to Assistants → your assistant → Transcriber. Choose Deepgram (nova-2 or newer) or Google; set the language (e.g., en-IE, or multi if you want auto-detect/multilingual). Then click Publish.
    8. As a last step, click on the https://dashboard.vapi.ai/phone-numbers link. Here you can create a (US-only for now) phone number. I used the free Vapi number: you provide a 3-digit area code, and you get a number. Once you get this, give it a name, and go to Inbound Settings. In the assistant dropdown, select the assistant you just built. Wait a few minutes for the number to configure, and give yourself a call. You can’t listen live, but you can listen back to calls. You can also read the transcripts. (If you would rather script the whole setup, see the sketch below.)
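
    Everything above can also be scripted, since Vapi exposes a REST API. The Python sketch below shows the rough shape of creating an assistant that way. The payload field names mirror the dashboard settings but are assumptions on my part, so verify them against Vapi’s API reference (https://docs.vapi.ai) before relying on this.

      import requests

      VAPI_API_KEY = "your-vapi-private-key"    # placeholder
      ELEVEN_VOICE_ID = "your-cloned-voice-id"  # the ID saved from ElevenLabs

      # Hypothetical payload mirroring the dashboard fields described above.
      assistant = {
          "name": "Digital twin",
          "firstMessage": "Hello, it's Mark here, how are you getting on?",
          "model": {
              "provider": "openai",
              "model": "gpt-4o",
              "messages": [{"role": "system", "content": "[Role] ... [Context] ..."}],
          },
          "voice": {"provider": "11labs", "voiceId": ELEVEN_VOICE_ID},
      }

      resp = requests.post(
          "https://api.vapi.ai/assistant",
          headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
          json=assistant,
      )
      resp.raise_for_status()
      print(resp.json()["id"])  # the assistant ID to attach to a phone number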

    Congratulations, you have created your digital twin.

    Now the only question is, who do you give the number to?


    Thank you so much for reading. If you enjoyed this post, please share it with 2 people who might enjoy it!

  • Designing for humans: Why most enterprise adoptions of AI fail

    Originally published at https://www.cio.com/article/4028051/designing-for-humans-why-most-enterprise-adoptions-of-ai-fail.html

    Building technology has always been a messy business. We are constantly regaled with stories of project failures, wasted money and even the disappearance of whole industries. It’s safe to say that we have some work to do as an industry. Adding AI to this mix is like pouring petrol on a smouldering flame — there is a real danger that we may burn our businesses to the ground.

    At its very core, people build technology for people. Unfortunately, we allow technology fads and fashions to lead us astray. I’ve shipped AI products for more than a decade, at Workhuman and earlier in financial services. In this piece, I will take you through the hard-earned lessons of that journey. I have laid out five principles to help decision-makers; some are technical, most are about humans, their fears, and how they work.

    5 principles to help decision-makers

    The path to excellence lies in the following maturity sequence: Trust → Federated innovation → Concrete tasks → Implementation metrics → Build for change.

    1. Trust over performance

    Companies have a raft of different ways to measure success when implementing new solutions. Performance, cost and security are all factors that need to be measured. We rarely measure trust. Unfortunate, then, that a user’s trust in the system is a major factor in the success of AI programs. A superb black-box solution dies on arrival if nobody believes in the results.

    I once ran an AI prediction system for US consumer finance at a world-leading bank. Our storage costs were enormous. This wasn’t helped by our credit card model, which spat out 5 TB of data every single day. To mitigate this, we found an alternative solution, which pre-processed the results using a black-box model. This solution used 95% less storage (with a cost reduction to match). When I presented this idea to senior stakeholders in the business, they killed it instantly. Regulators wouldn’t trust a system where they couldn’t fully explain the outputs. If they couldn’t see how each calculation was performed every step of the way, they couldn’t trust the result.

    One recommendation here is to draft a clear ethics policy. There needs to be an open and transparent mechanism for staff and users to submit feedback on AI results. Without this, users may feel they cannot understand how results are generated. If they don’t have a voice in changing ‘wrong’ outputs, then any transformation is unlikely to win the hearts and minds needed across the organisation.

    2. Federated innovation over central control

    AI has the potential to deliver innovation at previously unimaginable speeds. It lowers the cost of experiments and acts as an idea generator — a sounding board for novel approaches. It allows people to generate multiple solutions in minutes. A great way to slow down all innovation is to funnel it through some central body/committee/approval mechanism. Bureaucracy is where ideas go to die.

    Nobel-winning economist and philosopher F. A. Hayek once said, “There exist orderly structures which are the product of the action of many men but are not the result of human design.” He argued against central planning, in which a central authority designs outcomes and is accountable for them. Instead, he favoured “spontaneous order,” where systems emerge from individual actions with no central control. This, he argued, is where innovations such as language, the law and economic markets emerge.

    The path between control and anarchy is difficult to navigate. Companies need to find a way to “hold the bird of innovation in their hand”. Hold too tight — kill the bird; hold too loose — the bird flies away. Unfortunately, many companies hold too tight. They do this by relying too heavily on a command-and-control structure — particularly groups like legal, security and procurement. I’ve watched them crush promising AI pilots with a single, risk-averse pronouncement. For creative individuals innovating at the edges, even the prospect of having to present their idea to a committee can have a chilling effect. It’s easier to do nothing and stay away from the ‘large hand of bureaucracy’. This kills the bird — and kills the delicate spirit of innovation.

    AI can supercharge innovation capabilities for every individual. For this reason, we must federate innovation across the company. We need to encourage the most senior executives to state in plain language what the appetite is for risk in the world of AI and to explain what the guardrails are. Then let teams experiment unencumbered by bureaucracy. Central functions shift from gatekeepers to stewards, enforcing only the non-negotiables. This allows us to plant seeds throughout the organisation, and harvest the best returns for the benefit of all.

    3. Concrete tasks over abstract work

    Early AI pioneer Herbert Simon is the father of behavioral science, a Nobel laureate and Turing Award winner. He also developed the idea of bounded rationality, which explains that humans settle for “good enough” when options grow beyond a certain number. Generative AI follows this approach (possibly because, being trained on human data, it mimics human behaviour). Generative AI is stochastic: every time we give the same input, we can get a different output, a “good enough” answer. This is very different from the classical model we are used to, where, given the same input, we get the same output every time.

    This stochastic model, where the result is unpredictable, makes modelling top-down use cases even more difficult. In my experience, projects only clicked once we sat with the users and really understood how they worked. Early in our development of the Workhuman AI assistant, generic high-level requirements gave us very odd, unpredictable behaviors. We needed to rewrite the use cases as more detailed, low-level requirements, with a thorough understanding of the behaviour and tolerances built in. We also logged every interaction and used this to refine the model behaviour. In this world, general high-level solution design is guesswork.

    Leaders at all levels should get closer to the details of how work is done. Top-down general pronouncements are off the table. Instead, teams must define ultra-specific use cases and design confidence intervals (e.g., “90% of AI-produced code must pass unit tests on first run”). In the world of Generative AI, clarity beats abstraction every time.
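
    To make that tolerance concrete, here is a minimal sketch of such a gate. The run_unit_tests hook is hypothetical, a stand-in for whatever test harness a team already uses; the pass-rate arithmetic is the only point.

      # Minimal sketch of a "90% must pass on first run" tolerance check.
      # run_unit_tests is a hypothetical hook into your existing harness:
      # it takes one generated module and returns True if its tests pass.

      def first_run_pass_rate(generated_modules, run_unit_tests):
          """Fraction of AI-produced modules whose tests pass first time."""
          passed = sum(1 for m in generated_modules if run_unit_tests(m))
          return passed / len(generated_modules)

      def meets_tolerance(generated_modules, run_unit_tests, threshold=0.9):
          return first_run_pass_rate(generated_modules, run_unit_tests) >= threshold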

    4. Adoption over implementation

    Buying a tool is easy; changing behaviour is brutal. A top-down edict can help people take the first step. But measuring adoption is the wrong way to drive change: it produces box-ticked “adoption” but shallow, half-implemented usage.

    Executives are every bit as much the victims of fads and fashions as any online shopping addict (once you substitute management methods, sparkling new technologies and FOMO for the latest styles from Paris). And it doesn’t take artificial general intelligence to notice that the trend for AI is hot, hot, hot! Executives need to tell an AI story and show benefits, as they are under pressure from shareholders, investors and the market at large. Through my network in IASA, I have seen this broadly result in edicts to measure “AI adoption”. Unfortunately, this has had very mixed results so far.

    Human nature abhors change. A good manager has a myriad of competing concerns, including running a group, meeting business challenges, hiring and retaining talent and so on. When a new program to adopt an AI strategy comes down from executives, the manager (who is trying to protect their team, meet the needs of the business and keep their head above water) will often compromise by adopting the tooling, but failing to implement it thoroughly.

    At Workhuman, we have found that measuring adoption (and not only for AI) is not the right way to begin a transformation. It measures the start of the race, but ignores the podium entirely. Instead of vanity metrics, when we measure success, we measure outcome metrics (e.g., changed work processes, manual steps retired and business drivers impacted). By measuring implementation and impact, we avoid the ‘box-ticking’ trap that so many companies fall into.

    From our decade-plus experience in AI, we have also understood that AI transformation is part of a bigger support system, including education, tooling and a supportive internal community. We partnered with an Irish university to run diploma programs in AI internally, and provide AI tooling to all staff, whatever their role. We have also fostered internal communities at all levels to help drive understanding. This has helped us as we deliver AI solutions, both internally and externally, as shown by the release of our AI Assistant, a transformational AI solution for the HR community.

    5. Change over choice

    The AI landscape shifts monthly, with a continual flow of new models and vendors locked in a constant race. A choice that locks you into a single technology stack could have your company resembling a horse and buggy clip-clopping through the center of a modern city in the near future.

    When we began looking at models for our new AI assistant, we faced several challenges. First off, what can each model do? There were few useful benchmarks, and those that existed offered little in the way of business capability insights. We also struggled to weigh one model’s strengths against another’s weaknesses, and vice versa.

    Eventually, we agreed on one core architectural principle — everything we design must be swappable. In particular, we must be able to change the core foundation models that underlie the solution. This has allowed us to adjust continually over the last year. We test each new model after release, and work out how each one can be best used to give a great experience to our customers.

    Because models are changing so fast, leaders must treat the ability to swap AI models as a core principle. Companies should abstract model calls behind a thin layer, while versioning prompts and evaluation harnesses, so new models can drop in overnight. The ability to swap horses mid-race may be the competitive advantage necessary to win in today’s market.
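
    To illustrate, that thin layer can be as small as the Python sketch below. The names are mine, not a real library’s; the point is that the rest of the codebase only ever sees the narrow interface, so swapping vendors becomes a one-line change.

      from typing import Protocol

      class ChatModel(Protocol):
          """The only model interface the rest of the codebase may import."""
          def complete(self, system_prompt: str, user_message: str) -> str: ...

      class OpenAIChatModel:
          """One concrete adapter; each rival vendor gets its own adapter."""
          def __init__(self, model: str = "gpt-4o"):
              from openai import OpenAI
              self._client = OpenAI()  # reads OPENAI_API_KEY from the environment
              self._model = model

          def complete(self, system_prompt: str, user_message: str) -> str:
              resp = self._client.chat.completions.create(
                  model=self._model,
                  messages=[
                      {"role": "system", "content": system_prompt},
                      {"role": "user", "content": user_message},
                  ],
              )
              return resp.choices[0].message.content

      # Swapping models is now a one-line change at the composition root:
      model: ChatModel = OpenAIChatModel()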

    AI for leaders

    Technology choices are leadership choices. Who decides what to automate? Which ethical red lines are immovable? How do we protect every human who works with us? Adopting AI is a leadership challenge that can’t be delegated to consultants or individual contributors. How we implement AI now will define the future successes and failures of the business world. It’s a challenge that must be driven by thoughtful leadership. Every leader must dive in and deeply understand the AI landscape and figure out how best to enable their teams to build the companies of tomorrow.

  • Generative AI is digital homeopathy — how I train my own model

    If it’s your first time here, you may be surprised at how few pieces I have written. (After reading for a while, you may even be glad of this fact). When friends bring up what they assume is a painful subject, they get a faraway look in their eyes. They place a gentle hand on my shoulder, gaze into the distance, and ask me if I’ve seen ChatGPT. “AI can solve your problem, Mark. It can generate thousands of posts for you. It could help the blog look less like an abandoned quarry”. They think it would solve my problem. My Problem! If that was my problem, life would be a dream. This idea misses the purpose of this blog. It misunderstands my reason for writing entirely.

    Generative AI works because it sucks in lots of data, processes it and builds statistical models. At its core, it’s a fancy autocomplete — or, as Grady Booch puts it, a bullshit generator. It acts like an annoying older brother, automatically finishing every sentence (apologies to my own brothers). GenAI probabilistically keeps predicting the next word until it produces sentences, paragraphs, and a complete piece of writing. It can do this because the statistical models have established the most probable next word. These statistics are based on text from books, academic papers and (blesses myself) the internet.
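
    If you want to see the autocomplete idea stripped to its bones, here is a toy Python sketch: a tiny bigram model that keeps sampling the next word by observed frequency. Real models are incomparably bigger and subtler, but the loop has the same shape.

      import random
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      # Count which word follows which: a miniature statistical model.
      following = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          following[current][nxt] += 1

      # Keep predicting the next word, weighted by how often it was seen.
      word, output = "the", ["the"]
      for _ in range(8):
          candidates = following[word]
          if not candidates:
              break
          word = random.choices(list(candidates), weights=candidates.values())[0]
          output.append(word)

      print(" ".join(output))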

    However, there is no concept of meaning in AI. Reasoning is not programmed in anywhere. The output is remarkable, and can make it appear that the machine is thinking. It isn’t. This is why we sometimes get unreliable outputs: hallucinations. Any meaning we perceive is simply a vague echo from the training data of billions of people. GenAI is digital homeopathy.

    We are all lazy by default. Humans rely on heuristics to understand the world. If we didn’t, we would have to process every single thing we hear, see, smell, taste, and touch. A short walk in a city would exhaust our brain’s capacity. We would lose the ability to decide which way to go, overwhelmed by thousands of people, cars, smells, noises and the like. The great Herbert Simon coined the phrase ‘bounded rationality’ to describe the cognitive limitations of the mind.

    Thinking is hard work. For me, thinking is about sucking in data, and then processing it. I process it through writing. Writing is my way of thinking.

    I first had a go at writing because my friend Gar was guest-editing a technology journal. Even though I’d never written before, I was confident that I could write about something I already knew. This confidence was quickly shattered. I was embarrassed at how muddled my thoughts were. Turns out, I knew nothing. Solid ideas fell apart the minute I wrote them down. I could barely finish a sentence without feeling the old familiar burning creep across my cheeks, embarrassed as another idea fell apart while I tried to pin it down.

    Writing anything down forces me to think really hard. Because I was determined to improve my thinking, I wrote every day. I then started a blog because the potential for embarrassment at publishing poor output forced me to aim for a higher standard.

    I’m not interested in building an audience; I am trying to improve. I’m not trying to publish a lot of work. In fact, I have almost 200,000 unpublished words in my Ulysses editor. This writing habit has helped me build a model of the world. And 4 of my pieces here have reached the front page of Hacker News, which is a victory for me, a nobody from rural Ireland.

    How to Know Everything.

    Technical Debt Is Not Debt; It’s Not Even Technical

    AGI May never align with human needs

    Gladiator Style interviewing

    The dominant model on the internet is one of consumption. The more we consume, the more ads we see, the more we buy, the bigger the economy. But if all we do is consume, and never take the time to process information, or even produce our own, then we learn very little. Go back 3 months and look at your internet history. What did you learn from browsing? What actions did you take? Probably close to nothing useful. Instead of spending 2 hours a day on the internet, take 15 minutes to write. Just write down some thoughts. Any thoughts. This slowly changes your understanding of the world around you for the better.

    GenAI is an information processing tool. GenAI will help people process information more effectively. But people are lazy by default. If thinking is the hardest work in the knowledge economy, people will avoid thinking where possible.

    Therefore, for those who overuse it, GenAI may well make them more stupid. Victor Del Rosal, in his incredible book Humanlike, calls this Cognitive Atrophy. I already see too many examples of people outsourcing their thinking to Generative AI tools. I see them slowly getting more stupid every day.

    Me, I’m building my own model.