I have a rule of thumb. When my mom asks me about a technology, it has crossed from niche to mainstream. She asked me about ChatGPT over the holidays. To be fair, so did everyone else. In the span of a few weeks, a technology that most people outside of machine learning circles had never heard of became the most talked-about product in the world. ChatGPT will reach 100 million users faster than any application in history. It took TikTok nine months. Instagram two and a half years. ChatGPT is on track to do it in a few months.
And just like that, every executive wants an AI strategy.
The Razzle Problem
It seems like every meeting over the past month featured someone showing a freshly minted AI strategy deck. They all look remarkably similar. A slide about the transformative potential of large language models. A slide listing possible use cases, ranging from customer service chatbots to “revolutionize our entire business.” A slide with a vague roadmap to Overhaul Everything.
Missing in action was an answer to a simple question: what problem are you solving?
It sure smells like the beginning of a particularly steep hype cycle, with echoes of cloud, blockchain, and all things digital before that. When a technology captures the imagination, executives feel pressure to act, and organizations start with the solution and work backward to the problem. The result is a lot of motion and very little progress.
On the other hand, the excitement underneath the hype is valid. Even if the technology lives up to only a fraction of expectations, the impact is still huge. So maybe, rather than a strategy, let's look at it through a tactical lens first. How do we experiment to tease out what may be real, what may be too far-fetched for now, and where the immediate promise lies?

The Dazzle Problem
Here’s what makes this moment particularly tricky. ChatGPT is genuinely impressive. I think most people come away with the same slightly unsettled feeling that this is different. This is not the chatbot on your bank’s website that can’t understand “check my balance.”
The demos are extraordinary. Summarize this document. Draft this email. Write this code. Answer this question in the style of a pirate. It’s dazzling. And dazzle is dangerous in the enterprise, because dazzle skips over all the questions that matter for production use where something is at stake. Minor things, like your organization’s reputation.
Large language models hallucinate. They generate plausible-sounding answers that are confidently, eloquently wrong. For a dinner party conversation, that’s entertaining. For a customer-facing application in a regulated industry, that’s a liability. And then there’s the data question. When an employee pastes a client contract into ChatGPT to get a summary, where does that data end up? Most organizations haven’t even thought about this yet, let alone created policies. I guarantee that people across every organization I work with are already using ChatGPT with company data. The shadow AI era has begun, and it makes shadow IT look quaint.
Even if you sort out the accuracy and the data governance, there’s the integration gap. A model that can answer questions brilliantly in isolation is interesting. A model that can answer questions about your specific data, your specific processes, your specific customers is useful. The distance between interesting and useful is where most of the actual work lives. And nobody’s showing that part in the demo.
Then there’s the org chart problem. AI doesn’t fit neatly into any existing box. Is it an IT capability? A data team responsibility? A business function? A new team entirely? I’m watching organizations have the same jurisdictional debates they had about digital five years ago. We apparently enjoy repeating ourselves.
Try This On For Size
If I were leading an enterprise right now, I wouldn’t commission an AI strategy deck. I wouldn’t create a Center of AI Excellence. I wouldn’t hire a Chief AI Officer. Not yet.
I’d start by making sure the leadership team has a grounded understanding of what generative AI can and can’t do. Not from a vendor pitch. Not from a TED talk. From someone who can explain both the potential and the limitations without an agenda. The gap between perception and reality right now is enormous, and bad decisions live in that gap.
Then I’d look at the mess. Every organization has processes that are manual, repetitive, and soul-crushing. Document summarization. Data extraction. Report generation. First-draft content creation. These aren’t glamorous use cases, but they’re real, they’re specific, and they’re where generative AI can deliver value quickly without betting the business on an unproven technology.
Most importantly, I’d get the data house in order. Whatever AI does for your organization, it’s going to run on your data. If your data is fragmented, inconsistent, or poorly governed… and be honest with yourself here… then AI will be fragmented, inconsistent, and poorly governed too. The unsexy work of data quality and data governance just became your most important AI initiative. Nobody’s going to put that on a conference slide, but it’s true.
And I’d create guardrails before needing them. Establish policies for how employees can and can’t use public AI tools with company data. Do it now, not after the incident. Which data can be used with external AI services? Which can’t? Who approves exceptions? Answer these questions before someone makes the decision for you.
At the same time, I’d equip early adopters with tools and time to experiment. Like all innovations before this, the ground-up push is at least as important as the top-down governance to guide it.
Where From Here?
Here’s what I believe. Generative AI is real and will fundamentally change how knowledge work gets done. But “fundamentally change” and “fundamentally change next quarter” are different statements.
The organizations that will capture the most value from AI aren’t the ones moving fastest right now. They’re the ones building the foundation to move fast later. Data readiness. Governance frameworks. A sober understanding of what the technology does and doesn’t do, paired with genuine experimental energy. The skills to evaluate, integrate, and operate AI capabilities as they mature.
It’s early. For most organizations that will be users of AI rather than creators of AI technology, it’s too early to bet the shop on revolutionizing the entire business. The technology is moving fast, but the organizational readiness required to use it well doesn’t happen overnight. That said, I’d expect AI capabilities to accelerate much faster than the technologies that came before, so do the groundwork now. And maybe revolutionize in a few quarters.

