Stop Measuring AI Usage (Start Measuring AI Impact)

I couldn't believe my ears.

"For the first time in my 40-year career, I've never seen anything like GenAI in our business: we’re seeing 100% adoption of this new technology," declared the global head of AI at one of the world's largest professional services firms during a recent fireside chat.

Everyone in the audience nodded appreciatively. “What an achievement,” I’m sure they all marveled.

Except I knew better.

Just weeks earlier, I had facilitated several workshops with hundreds of people up and down the ranks of that very organization. The on-the-ground reality? Most folks weren't even minimally competent with the technology they were supposedly "adopting."

Within 12 hours of that fireside chat, I heard another global leader brag about driving DAUs (daily active users) to over 60% of their subscriber base. Even though I don't have inside knowledge of that firm, I wasn't taken in.

Behold the great AI measurement delusion: Organizations are tracking usage metrics while missing the only thing that matters—impact.

More use isn't better use. Better use is better use.

The Irony Isn't Lost on Me

Yes, I'm the same person who has spent years preaching that more ideas lead to better ideas. In the world of innovation, quantity often does beget quality.

But with AI, this formula breaks down. Having a hundred mediocre AI interactions doesn't make you proficient. It makes you ineffective. It slows you down. And it probably reinforces every misconception and suspicion plaguing your non-using counterparts.

This isn't a technology adoption curve. It's a human transformation curve—and most organizations are measuring the wrong things.

An AI 2×2 That Actually Helps

Here's the framework I use when evaluating AI adoption:

On the X-axis, we have HOW you're using AI:

  • Poorly: Basic prompts, accepting first answers, minimal iteration

  • Well: Advanced techniques, thoughtful guidance, strategic iteration

On the Y-axis, we have WHERE you're using AI:

  • Trivial: Low-value, sporadic use cases with minimal impact

  • Valuable: Core workflows, strategic processes, leverage points

(2×2 generated based on the above description in ~2 seconds using Napkin AI)

Most organizations are stuck in the bottom-left quadrant: using AI poorly for trivial tasks. They might be tracking "100% adoption" but they’re creating 0% value.

The goal isn't to get everyone logging in daily. It's to move everyone toward the top-right quadrant: using AI strategically for valuable workflows.

I Make Rookie Mistakes, Too

Last week, my dad asked me if an LLM could convert a 100-page PDF to a CSV file. "Easy," I told him confidently.

I dropped the PDF into ChatGPT, unthinkingly asked for a conversion, and shocker… got garbage back. If I didn't know better, I might have concluded, "AI can't do this," and bounced like so many beginners do.

Thankfully, I've learned to take responsibility for my own outputs. I know that the outputs of an LLM are a direct result of the input provided, and if I got garbage out, it's because I gave it a garbage prompt.

I added one sentence: "Before you answer my question, please walk me through your thought process step by step." This is called "chain-of-thought" reasoning, and it has been empirically demonstrated to yield better outputs not only from LLMs but also from humans.

The result of adding that one sentence? Perfect execution of the task.

This is what I call a "forehead-slapping moment"—when you realize you've been making a basic mistake all along. And even I, someone who teaches this stuff for a living, have them regularly.
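That one-line fix isn't confined to the chat window, either. If you work with LLMs programmatically, here's a minimal sketch of the same chain-of-thought nudge using the OpenAI Python SDK (the model name and the extracted-text file are my own illustrative assumptions, not a prescription):

```python
# A minimal sketch of chain-of-thought prompting via the OpenAI Python SDK.
# The model name ("gpt-4o") and the input file are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical: text already extracted from the 100-page PDF
document_text = open("report_extracted.txt").read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Convert the following document into CSV rows.\n"
            # The one added sentence that changed everything:
            "Before you answer, please walk me through your thought "
            "process step by step. Then output the final CSV.\n\n"
            + document_text
        ),
    }],
)
print(response.choices[0].message.content)
```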

Better Use vs. More Use: Concrete Examples

Consider these contrasting approaches:

More Use: Someone gets AI to write a draft, then painstakingly edits it to match their voice, format, and style by hand.

Better Use: Someone uses few-shot prompting to teach AI their voice first. "Here are three examples of my writing. Create a new piece that sounds like me."

When someone tells me "AI doesn't sound like me," I ask a simple question: "Have you taught it what you sound like?" Most people haven't even considered this possibility.
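For those who want to apply the same move outside the chat window, here's a rough sketch of few-shot voice-teaching with the OpenAI Python SDK (the writing samples, model name, and topic are placeholders):

```python
# A sketch of few-shot prompting: teach the model your voice by example.
# The writing samples, model name, and topic are all placeholders.
from openai import OpenAI

client = OpenAI()

my_samples = [
    "First example of my writing goes here...",
    "Second example of my writing goes here...",
    "Third example of my writing goes here...",
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Here are three examples of my writing:\n\n"
            + "\n\n---\n\n".join(my_samples)
            + "\n\nStudy the tone, rhythm, and word choice. Then write a "
            "new piece on AI adoption that sounds like me."
        ),
    }],
)
print(response.choices[0].message.content)
```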

Or take the agile leader I spoke with last week. He used to spend hours crafting follow-up notes after team sprints, constantly worried he wasn't asking the right questions. Now he's built a GPT that helps him formulate better questions based on team context. The value isn't just time saved—it's the peace of mind knowing his questions are more thoughtful and effective.

That's not just more use. That's better use.

The Metrics That Actually Matter

If your organization is serious about AI impact, stop counting logins and start measuring:

  1. Workflow Augmentation: How many critical workflows have you improved (not just made faster)?

  2. Solution Virality: Is that great GPT you built being used by others? That’s where you start to scale impact. Remember the National Park Service facility manager who created a tool that saved thousands of days across the park system?

  3. Interaction Depth: Are you having single "oracle-style" exchanges (effectively single-request Google searches with AI), or robust back-and-forth collaborations? The key to outperforming is to treat AI like a colleague, not an oracle (a minimal sketch of the difference follows this list).

  4. Technique Adoption: How often are you using advanced prompting methods like few-shot prompting and chain-of-thought reasoning?

  5. Calendar Coverage: What percentage of your regular responsibilities are AI-augmented? Seriously: audit your calendar and recurring responsibilities to see how many of them incorporate layers of AI. As Brice Challamel, Head of AI at Moderna, told us on Beyond the Prompt, "I can't imagine doing any part of my job without incorporating layers of AI into it. It would be so lazy, so stupid, so reckless."
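On that third metric, here's what colleague-style interaction looks like in code: a minimal sketch using the OpenAI Python SDK, where the conversation history carries forward so each follow-up builds on the last answer (the model name and prompts are illustrative):

```python
# A sketch of "colleague, not oracle" interaction: keep the running history
# so every follow-up builds on the previous answer. Prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(history, prompt):
    """Add a user turn, get the model's reply, and keep both in the history."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

history = []
ask(history, "Draft three options for our sprint retro agenda.")  # the oracle user stops here
ask(history, "Option 2 is closest. What assumptions does it make about the team?")
ask(history, "Good catch. Revise it for a remote-first team and cap it at 60 minutes.")
```

The oracle user takes the first draft and walks away; the colleague user keeps pushing until the output is genuinely useful.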

Your AI Audit: Start Here

If you're ready to move beyond the "more use" delusion, here's your action plan:

First, check quantity (but don't stop there):

  • If your chat history doesn't show multiple interactions today, you're underusing AI

  • If you haven't built multiple custom GPTs — like, multiple pages’ worth — you're missing opportunities

Then, audit quality:

  • How much back-and-forth happens in your interactions?

  • How often do you use phrases like "before you answer, think step by step"?

  • Have you taught the AI your voice, preferences, and context (few-shot prompting)?

  • How many of the solutions you’ve built do you use regularly? How many others use them regularly?

Finally, audit strategic coverage:

  • Open your calendar

  • For each recurring meeting or responsibility, ask: "Has this been AI-augmented?"

  • If more than half aren't, you're leaving value on the table

The organizations that understand this difference between more use and better use are quietly outperforming their competitors—while everyone else celebrates meaningless "adoption" metrics.

Because in five years, no one will care how many people logged into ChatGPT. They'll care about who used it to transform their work.

Related: The 45 Minutes That Saved 20 Years
Related: Beyond the Prompt: Brice Challamel
