Your AI Task Force is Missing the Point
Undisputed Beyond the Prompt fan favorite Jenny Nicholson drops by the blog to share some insights on stimulating AI-powered exploration and innovation. There’s so much goodness in this post, I’m going to let it speak for itself.
So, without further ado, I’m pleased to announce… “Folks, heeeeeeeeeere’s Jenny!”
***
“It is a truth universally acknowledged, that a single organization in pursuit of new technology, must be in want of a task force.” (Forgive me, Jane Austen.)
Almost the moment a new technology comes around, emails start flying. All saying some version of, “Oh (crap), we gotta get on this quick! Or, at least, we have to look like we’re getting on this quick so people don’t accuse us of being out-of-touch and irrelevant!”
And whenever those emails start flying, a task force inevitably follows.
Having sat in dozens of these during my career, I have to confess…I’m not a huge fan. The people on them are already believers, the rest of the org doesn’t care, and eventually everyone moves on to the next shiny thing.
But LLMs are different. (FYI: LLMs are “large language models” aka “what most people mean these days when they talk about AI.”) I don’t care who you are, what your title is, what your background is. Give me an hour and I can teach you 10 ways an LLM can make your job easier and better.
I am absolutely desperate to get as many people using LLMs as possible. But they’re not. Not like they could be. Not like companies would want them to if they understood what was possible.
I’ve spent three years trying to understand not just LLMs, but also how people and companies engage with them, how our own human world models get in the way.
So yeah, every company does need an AI task force. But they can’t be constructed on our existing models of how “technology” works and what to do with it.
To explain what I mean, we need to take a little detour.
Swerve alert: Let’s talk about pencils.
Prepare for an extended trip into an analogy featuring pencils. (I’m proud to report that this one came straight out of my 45-year-old meat computer.)
Let’s imagine that pencils were just invented: A crazy new technology that lets you create marks, erase those marks, and write more marks.
We’re talking super-advanced graphite transfer technology combined with friction-based infinite undo mode!
Call the task force! Book demos with platforms that tailor this graphite-transfer technology to create shopping lists! Or to write headlines! Or hell, even to draw super-advanced schematics!
“If you’re not using Infinite Undo Mode, you’re already behind!”
Sure you could do that. Or you could just give every single person in your company a pencil and encourage them to discover for themselves how to use it.
LLMs are made of technology, but in practice they're a lot more like pencils. And if you think about it that way, the job of an AI task force becomes a lot clearer.
How DO you use a pencil?
You can’t just hand a kid a pencil and tell them to go to town. You have to teach them how to use it. We all had to learn at some point. How to hold it. How to sharpen it. How to use the eraser. What happens if you press down too hard. What happens if you write too lightly. How different types of lead make different types of marks.
But once you know how to use a pencil, you can do whatever the hell you want with it. And THAT’S the cool part.
A single pencil can be used to write a love note, scribble a ransom letter, plan out your schedule, draw a portrait so detailed it looks like a straight-up photograph.
The way I use a pencil is different from how anyone else does. Because I’m me and the pencil is an extension of me. Same for you. Same for everyone.
See where I’m going with this?
The control fallacy (or how NOT to hold a pencil)
Everyone’s freaked out about AI. Employees and leadership alike.
But the mainstream corporate narrative around LLMs is not doing us any favors.
Integrating AI into your company isn’t like implementing a new product management platform or a new video-conferencing product.
Nobody knows exactly what they can do. Hell, nobody even knows exactly how the damn things work!
Which is why it can be incredibly tempting to find a solution that looks more like what we’re used to and that tells you exactly how to use it.
But most organizations would be far better served by giving their employees access to an LLM and letting THEM figure out how it can help them.
Which is a level of freedom that can be TERRIFYING.
“What about data privacy?!” “How do we control hallucinations?” “They might generate unsafe content!” “We can’t prevent unpredictable behavior!”
These are real concerns. But they become far more likely in two scenarios: when you've pushed all your chips into a fully automated LLM-powered application stack, or when employees don't have data-secure model access, so they throw their work into whatever free tool they can get their hands on.
But even more importantly, top-down control is anathema to innovation.
In the moment, innovation looks a lot like (messing) around: experimenting, tinkering, chasing crazy ideas down rabbit holes. OpenAI calls them YOLO runs. Google used to call it 20% time (though who knows if they still do it).
Control is a top-down desire. Breakthroughs are a bottom-up phenomenon.
Want an example? How about penicillin? (JU: I might also point out the Post-It Note)
Penicillin exists because a scientist went on vacation and came back to a moldy petri dish. Innovation is messy. It’s unpredictable. And it doesn’t show up on a pre-approved list of use cases.
Every day, someone invents a new way to use AI to make their jobs and their lives easier, more fun, and more impactful. Every day, I invent a new way to make MY job and MY life easier, more fun, and more impactful. And I do it with tools that cost $20 a month.
THAT’S how this works. You (mess) around. You find out. When you find out something good, we call that “innovation.”
A task force can’t make innovation happen. But a task force CAN create the conditions to support it. How?
We can start with four questions I ask every company I partner with:
- Can everyone on your team access a frontier LLM like GPT, Claude, Gemini, or Llama?
- Does everyone have a deep understanding of how they work?
- Is everyone both expected and encouraged to use them?
- Are people freely sharing what they’re learning with others?
If you can answer “yes” to all of those, feel free to stop reading here. (Also send me a note so I can tell the world about you and why you’re brilliant.)
For the rest of us, I’ve got some ideas.
Start with access.
No more looking for the “perfect” platform. No more limiting access to a select few “experts.” No more waiting for permission slips from IT to try out 128 different tools.
Make access to a foundation model as ubiquitous as Slack, as accessible as Google docs, as readily available as the free pens they hand out at conferences.
Imagine if everyone in the company, in every department, had access to a foundation LLM (NOT the free ChatGPT, let me be clear) and were encouraged to use it to make their jobs better, easier, and more engaging in whatever way they see fit.
Who knows what they could do? Certainly not a handful of people on an AI task force.
The point is, THEY know what sucks about their jobs. THEY know what they could do if they only had the time. But unfortunately, too few of them know how they can use LLMs to do those things.
Which brings me to my next point…
Teach people how LLMs work.
Maybe you’ve already given everyone access to the tools. That’s amazing!
But providing access without education is like giving a toddler a chainsaw and hoping for the best. Okay, a chainsaw that doesn't even have gas in it. Not mortally dangerous. Still not a great idea.
And one thing I've learned in two-plus years of leading training and adoption efforts: People DO NOT KNOW how to use these things.
I don’t care if they’ve had training before. Many of those trainings are done by people who, to put it very politely, have done a lot of reading but very little doing.
We’ve spent our entire lives learning how computers work and so we try to use LLMs like they’re computers. But they’re not.
They’re human simulators. And just like humans, to work best with them, you have to understand how they “think,” what they’re great at, where they fall down, and what YOU need to bring to the relationship to help them succeed. You need to realize that the hardest part isn’t getting them to do things, it’s inventing new things you can ask them to do. You need to change your mindset from one where you have to get it right to one where you can experiment all you want.
We focus on the technical side of things, when it’s our mindset that gets in the way.
The magic is not in the machine. The magic is in people who have tacit knowledge and lived experience collaborating WITH a gigantic pattern recognition tool trained on the human collective.
Make it safe to share.
Okay, you’ve given people access. You’ve given them the right kind of training — helping them understand HOW LLMs work and what that means for them, not dictating WHAT they should do with them.
Now comes the fun part: give them a chance to flex.
But they won’t share without encouragement. Many people I talk to who use LLMs all day confess that it feels like they’re getting away with something they shouldn’t be.
That doesn’t keep them from doing it. But it DOES keep them from talking about what they’re doing. Which means nobody else can learn from them.
People need to be encouraged to experiment. And they need to be celebrated for sharing what they learn. And yeah, they need an incentive to do it.
(One sneaky way I’ve discovered to get people sharing: Ask them what they’re doing with AI outside of work. When you separate the technology itself from the ever-present fear of not being able to feed one’s family, people tend to come out of the woodwork with super-creative and inventive ideas that get other people’s wheels turning.)
Want to formalize the knowledge sharing? Do an all-company AI hackathon. Put people into cross-functional teams and give them a challenge. Give them food. Give them prizes. (It’s amazing what humans are capable of when there’s free food and the chance to win a prize.)
Can’t afford a whole day? Start a monthly challenge where people share a use case they’ve discovered and give some cash to the one that’s the most innovative. Let everyone contribute. Let everyone vote.
Honestly, it doesn’t matter how you encourage experimentation and sharing, just that you do. And that your focus is on exploration, not efficiency.
As the saying often attributed to Einstein goes, "Play is the highest form of research." Let people play. Because the more they play with LLMs, the more they understand how they work, and the more likely they are to uncover use cases that no third-party platform could predict.
Task Force, assemble!
Yeah, you do need an AI task force. But you likely need to rethink who should be on it. You'll need someone who has the knowledge and the agency to greenlight access to a foundation model, in a way that works for the company. But that's the extent of technical knowledge required for a task force. Because the rest of it isn't an engineering problem. It's an imagination problem.
The ideal AI task force is made up of people, from every department, who are the absolute best at what they do. The ones you'd fight to keep from leaving. They don't need to know how an LLM works, not to start. They do need to know how to inspire others, how to rally a crowd, how to put together an awesome program, how to solve problems, how to (mess) around and find out.
Get these people together and give them a clear job: Help every single one of their co-workers access the technology, feel empowered to use it, understand what they can do with it, get excited to experiment, and become enthusiastic about working at a company full of superhumans.
And if you’re not quite sure how to pull it off, I’m always here to help.
(This article appeared in a slightly different form on Jenny’s Medium - check out her fantastic diagram-packed version here)
Related: Collect and Connect (the Post-It Note origin story)
Related: Scrape Your Knees: Falling is Essential to Mastery
Related: Declare an AI Recess
Related: Start Using AI for Yourself
Related: Commission A Personal AI Project
Related: Beware Prompt Hoarding