
The Tool is Fine. It’s the Hype I Can’t Stand.

I don’t hate AI. I hate the Cargo Cult surrounding it. A hammer is a great tool for driving nails, but it's a terrible choice for heart surgery.


Let’s get one thing straight before we continue: I don’t hate AI.

I’ve survived over 20 years in IT. I’ve seen technologies rise, fall, and get rebranded so many times that I’ve lost count. I’ve seen the transition from physical servers to “The Cloud” (which, as we know, is just someone else’s computer). I’ve seen the birth of Web 2.0 and the subsequent death of privacy.

Through all of this, I’ve learned one thing: A tool is just a tool. A hammer is great for driving nails. It’s terrible for performing surgery. If you try to remove an appendix with a hammer, I’m not going to blame the hammer. I’m going to blame the person holding it.

The Ubiquity of the “Magic”

Currently, AI is everywhere. It’s in our IDEs, our email clients, our phones, and arguably, in our breakfast cereal. Because it permeates every aspect of our professional and private lives, it is only natural that I write about it.

If you see me ranting about Large Language Models (LLMs) or the latest “game-changing” generative tool, it’s not because I want to return to a typewriter and a mechanical calculator. It’s because we are currently witnessing a massive, global-scale failure in understanding what we are actually holding in our hands.

My criticism isn’t directed at the math, the neural networks, or the engineering. It’s directed at the Cargo Cult surrounding it.

The Problem of Blind Faith

We have entered an era where “Prompt Engineering” is treated like sorcery, and LLM outputs are treated like divine prophecy. We are using tools we don’t understand to solve problems we haven’t properly defined, all while fueled by a hype cycle that would make a crypto-scammer blush.

The danger isn’t that AI will become sentient and kill us all. The danger is that we are:

  1. Outsourcing critical thinking to a statistical model that doesn’t “know” anything; it just predicts the next token (see the sketch after this list).
  2. Falling for the “Easy Button” trap, assuming that because a machine can generate 1,000 lines of code in seconds, those 1,000 lines are actually correct, secure, or necessary.
  3. Ignoring the systemic risks of building our infrastructure on top of black boxes that we cannot debug.
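
To make point one concrete, here is a toy sketch of what “predicting the next token” actually means. This is not any real LLM; the bigram table and its probabilities are invented for illustration. But the loop is conceptually the same one a real model runs: look at what came before, pick a statistically likely continuation, append it, repeat. Nowhere does it check whether the output is true.

    # Toy illustration of next-token prediction (greedy decoding).
    # The "model" is a hypothetical bigram table, not a real LLM.
    bigram_probs = {
        "the":     {"code": 0.5, "cloud": 0.3, "model": 0.2},
        "code":    {"is": 0.6, "works": 0.4},
        "is":      {"correct": 0.5, "fine": 0.5},
        "cloud":   {"is": 1.0},
        "model":   {"is": 1.0},
        "works":   {"fine": 1.0},
        "correct": {},
        "fine":    {},
    }

    def generate(prompt_token: str, max_tokens: int = 5) -> list[str]:
        """Repeatedly emit the likeliest next token. No understanding,
        no goal, no fact-checking -- just "what usually comes next"."""
        tokens = [prompt_token]
        for _ in range(max_tokens):
            candidates = bigram_probs.get(tokens[-1], {})
            if not candidates:
                break
            # Greedy decoding: take the highest-probability continuation.
            tokens.append(max(candidates, key=candidates.get))
        return tokens

    print(" ".join(generate("the")))  # -> "the code is correct"

Note what just happened: the toy model asserts “the code is correct” because that sequence of tokens is statistically plausible, not because it inspected any code. Scale the same loop up by a few hundred billion parameters and you have the thing whose output is being pasted into production.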

Use the Hammer. Just Know How it Works.

I use AI (more on that in the disclosure). It’s useful for boilerplate code, summarizing long-winded corporate emails, and occasionally generating a sarcastic image for a slide deck.

But I use it with the same suspicion I’d have for a junior intern who is perpetually high on caffeine: they’re fast, they’re eager, but I’m definitely going to double-check their work before I push it to production.

This blog (and my work in general) is dedicated to dissecting complex systems. AI is the most complex system we’ve played with in decades. Mocking the hype isn’t about being a “hater.” It’s about maintaining a grip on reality while everyone else is drifting off into a hallucinated dreamscape.

If you’re looking for a “Top 10 AI Tools to Replace Your Entire Staff” list, you’re in the wrong place. If you want to understand why your “AI-First” strategy is likely a house of cards built on a foundation of sand, stay tuned.

Success is often luck. Failure is always data. And right now, the AI hype is providing us with a lot of very expensive data.

