Your Copilot is (Mostly) Watching

Eighteen months into the generative AI era, the conversation has shifted. The panic has subsided. The “will AI take my job” headlines haven’t gone away, but most people have moved past the existential dread and into something more interesting: the gap between what AI can do in a demo and what it actually does in their workday.

Every major vendor now has a copilot. Microsoft has one in Office. Salesforce has one in their CRM. ServiceNow, Adobe, SAP: pick your enterprise platform and there’s an AI assistant in it. The promise is that these copilots will automate the tedious parts of knowledge work so people can focus on the high-value stuff. Summarize this meeting. Draft this report. Find the relevant clause in this contract. In controlled settings, it often is transformational.

In the wild, it’s more complicated. Most of the copilots I’ve seen deployed in enterprise settings are being used for a narrow set of tasks, by a small percentage of people, and frequently abandoned after the initial novelty wears off. The technology works, but it’s not adopted at the scale the license costs would justify.

Still from Airplane! (1980) Credit: Alamy

Why Knowledge Work is Hard to Automate

The demo scenarios reflect a clean sliver of real work that AI handles reasonably well. But those are the tip of the iceberg. Most knowledge work lives in the murky water underneath.

A consultant at a professional services firm doesn’t just write reports. She navigates ambiguous client requirements, draws on relationships with subject matter experts, makes judgment calls about what to include and what to leave out, and calibrates her tone based on who’s going to read it and what they care about. The report is the artifact. The work is everything that went into deciding what the report should say.

An insurance underwriter doesn’t just evaluate risk. He weighs incomplete information against company guidelines that have been reinterpreted differently across regional offices for a decade, factors in relationship considerations with the broker, and makes calls that blend actuarial data with experience and intuition. The decision is the output. The judgment is the work.

AI is good at the artifact, but still weak at the judgment. Most knowledge work, the kind that organizations actually pay well for, is judgment-heavy. The tasks that AI automates most easily tend to be the ones that were already the fastest part of someone’s day. There’s value in automating that, and AI will rapidly get better, but so far the impact is limited.

On the other hand, we increasingly see bright spots:

A healthcare administration client rolled out a copilot for their claims processing team. The initial pitch was efficiency: let AI handle the routine claims so adjusters could focus on complex cases. After three months, the efficiency gains were modest. What actually changed was quality. The adjusters started using the copilot as a second opinion, a way to catch things they might miss when processing high volumes. The tool didn’t replace the work. It became a sounding board within the work. Nobody had planned for that use case, but it turned out to be more valuable than the automation they’d envisioned.

Another organization, a large professional services firm, gave their consultants access to an AI assistant for research and document drafting. Usage was enthusiastic for about six weeks, then dropped off. When they actually talked to people about why, the answers were revealing. The tool was fast but unreliable for anything that required domain expertise. Consultants would spend more time verifying the AI’s output than they would have spent doing the work themselves. The ones who kept using it had figured out how to constrain it to tasks where verification was cheap, like formatting, initial literature scans, and boilerplate language. They’d essentially trained themselves to use a powerful tool for modest tasks, because that’s where the trust equation worked.

The Glide Path Nobody Expected

The pattern I keep seeing is that AI adoption in knowledge work follows a curve nobody anticipated. There’s an initial spike of excitement, a dip when reality sets in, and then a slow rebuild where people find the actual use cases that stick. It looks a lot like every other technology adoption curve, except the initial expectations were so high that the dip feels like failure even when the eventual steady state is genuinely useful.

The organizations that handle this well are the ones that treated the dip as information rather than disappointment. They watched what people actually did with the tools, listened to why they stopped using them, and adjusted. The ones struggling are the ones measuring adoption by license utilization and wondering why the numbers don’t match the business case. I’ve even seen firms make a desperate plea to employees to please use the AI tooling they have invested in.

There’s a management challenge buried in here too. If AI is supposed to make knowledge workers more productive, what does that mean in practice? Produce the same output faster? Produce more output in the same time? Produce better output? These are different goals with different implications, and most organizations haven’t been specific about which one they’re after. “More productive” is the new “digital transformation,” a phrase that means whatever you need it to mean in a budget meeting.

What’s Our Vector?

We’re at an early and somewhat awkward stage. The technology has leapt forward, but the organizational capacity to absorb it hasn’t caught up. Familiar territory, of course: we saw it with cloud adoption, where the technology moved faster than the governance, the skills, and the operating models. We saw it again with Agile and DevOps, where the cultural shift outpaced the organizational structure.

What I think we’re learning is that automating knowledge work is less about replacing tasks and more about reshaping how people and AI tools work together. The copilot metaphor is actually the right one, even if the current implementations are clumsy. A copilot doesn’t fly the plane. A copilot handles specific tasks so the pilot can focus on the things that require human judgment, experience, and context. The question for the next year or two is whether organizations can figure out which tasks belong to which seat.

For now, your copilot is mostly watching. And honestly, that might be okay. It’s learning, as we all are.
