Jeremy Utley

Ethan Mollick: “Latent Expertise: Everyone is in R&D”

Dear innovators and future-shapers,

In our increasingly AI-driven world, many of us are unknowingly trapped by the limits of our imagination – not because we lack creativity, but because our experiences with generative AI are still too few. As an innovation expert, I've seen time and again how unexpected inputs ignite the sparks of imagination. I'm hoping to do that with this guest post. Today, I'm sharing a powerful guest feature by Ethan Mollick that resonated deeply with me, "Latent Expertise: Everyone is in R&D."

Ethan’s insights align with my conviction: basic AI fluency isn't the end goal – it's the launchpad for unleashing true innovation. Most people's imaginations remain unsparked simply because they haven't had enough hands-on experience with generative AI. This piece challenges us to break free from conventional thinking about AI implementation and explore its vast, untapped potential.

So many folks I'm talking to are standing at the edge of the canyon, unsure how to dive in. To help bridge this critical experience gap, I'm excited to announce that I'll soon be launching a 2-week "Conversational AI" email course. This course aims to provide you with the foundational experiences needed to ignite your AI-driven innovation journey. A number of you (shout out to the Try Ten™ Community, which had a bunch of folks sign up to beta test!) have already been through an early iteration of the email course and have given a ton of great feedback. You are welcome to sign up here if you'd like more info.

Let's dive into Ethan’s article to start expanding our collective imagination of what's possible with AI. The future belongs to those who can envision and create it – and that journey begins with broadening our AI experiences.

***
Latent Expertise: Everyone is in R&D

Ideas come from the edges, not the center

AI discussions often fall into a weird dichotomy - it is either all “hype” or else the age of the superhuman machines is imminent. At least for now, that is a false dichotomy. There are areas where AI is better than an expert human at particular tasks, and areas where it is completely useless. Instead of blanket statements, we should focus on specifics: we know that LLMs, without further development, are already useful as a co-intelligence that greatly improves human performance (in innovation, productivity, coding, and more), but we also have yet to figure out every strength and weakness.

The first wave of AI adoption was about individual use, and that seems to have been a huge success, with some of the fastest adoption rates in history for a new technology. But the second wave, putting AI to work, is going to involve integrating it into organizations. This will take longer and will be the key to true productivity growth. After talking to many companies, however, I see many of them following the same well-trod path, viewing AI as an information technology that can be used for cost savings. I think this is a mistake. To see why, let’s reconsider the old analogy comparing AI to the Industrial Revolution.

One of the most fascinating things about the Industrial Revolution in England is how much progress happened in so many industries - from textiles to medicine to metallurgy to instrument-making - in a short time. A lot of credit is given to the great inventors like James Watt and his steam engine, but economists have suggested that these great inventors alone were not enough. Instead, their work needed to be adjusted and made real by people who altered the technology for different industries and factories, and then further refined by the people who implemented it for specific uses. Without a base of skilled craftsmen, mechanics, and engineers working in many mills and factories, the Industrial Revolution would have happened only in theory.

Yet I worry that the lesson of the Industrial Revolution is being lost in AI implementations at companies. Many leaders seem to have adopted a view that the main purpose of technology is efficiency. Following the Law of the Hammer (“to a hammer every problem looks like a nail”), they see AI as a cost-cutting mechanism. Any efficiency gains must be turned into cost savings, even before anyone in the organization figures out what AI is good for. It is as if, after getting access to the steam engine in the 1700s, every manufacturer decided to keep production and quality the same, and just fire staff in response to new-found efficiency, rather than building world-spanning companies by expanding their outputs.

Starting with centralized systems built for efficiency has other drawbacks besides strangling growth. Right now, nobody - from consultants to typical software vendors - has universal answers about how to use AI to unlock new opportunities in any particular industry. Companies that turn to centralized solutions run as typical IT projects are therefore unlikely to find breakthrough ideas, at least not yet. They first need to understand the value of AI in order to use it.

And to understand the value of AI, they need to do R&D. Since AI doesn't work like traditional software, but more like a person (even though it isn't one), there is no reason to suspect that the IT department has the best AI prompters, nor that it has any particular insight into the best uses of AI inside an organization. IT certainly plays a role, but the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job. In fact, for large companies, the source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI.

Latent Expertise

Large Language Models are forgetful foxes in a Berlinian sense: they know many things, imperfectly. Oddly, we don’t actually know everything they know, in part because training data is kept secret, but also because it isn’t always clear what LLMs learn from their training data. Yet it is clear that they do have expertise hidden in their latent space - they outperform doctors at diagnosing diseases in some studies, and provide more empathetic replies to patients than many doctors, even though those were not expected uses of the system.

Unlocking the expertise latent in AI is, for now, a job for experts. There are multiple reasons for this. The first is that experts can easily judge whether work in their field is good or bad, and in what ways. Take, for example, two topics we are familiar with: teaching people to apply frameworks and interactive tutoring. We can give the AI two simple prompts, one to get the AI to act as a tutor and the other to get it to provide frameworks for solving problems (if you want to try the more advanced versions of these prompts that actually work well, you can try the Tutor GPT and Frameworks GPT).

Since we know something about these topics, we can instantly tell that the answer on frameworks, while not amazing, is not terrible. It suggests a number of possible approaches, and, in the text that I cut out of the image, how to apply them. The frameworks are appropriate, and, if the goal is to make you think about the problem, this is not a bad start. The tutor prompt is a different matter. Good tutors need to interact with the student, not assume knowledge. They should not merely throw information at the student but adapt to their abilities and meet them where they are. And they should figure out what you know, not ask you what you think you need to know. The tutor fails at all of these points.

But because we know what good tutoring looks like, and where the AI fell short, we can easily determine which behaviors need to be suppressed or activated in order for the AI to act as a good tutor. We also know how to teach other people to be tutors, a skill that is remarkably transferable to AI simply by writing those instructions as a more elaborate prompt. The result is a solid tutor from a prompt alone (see it here). In fact, Google has tested a version of our prompt against a fine-tuned educational model they built, and found that the two have statistically similar performance across most dimensions.
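To make the idea concrete, here is a minimal sketch of what "encoding tutoring behaviors as a more elaborate prompt" can look like in code. This is not the actual Tutor GPT prompt; the model name, wording, and function are illustrative assumptions only.

```python
# Hypothetical sketch: turning tutoring behaviors into a system prompt.
# This is NOT the authors' actual Tutor GPT; prompt text and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TUTOR_SYSTEM_PROMPT = """You are an upbeat, encouraging tutor.
- Ask the student what they want to learn and what they already know before explaining anything.
- Explain in small steps, adapted to the student's stated level.
- Ask one question at a time and wait for the student's answer.
- Never lecture; check understanding with examples and follow-up questions."""

def tutor_reply(conversation: list[dict]) -> str:
    """Send the running conversation to the model with the tutoring behaviors activated."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "system", "content": TUTOR_SYSTEM_PROMPT}, *conversation],
    )
    return response.choices[0].message.content

# Example turn: the tutor should probe what the student knows rather than assume it.
print(tutor_reply([{"role": "user", "content": "Can you help me understand opportunity cost?"}]))
```

The point is that the hard part is not the code; it is knowing which behaviors to ask for, which is exactly where the expert comes in.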

This also illustrates another reason why experts are best at using AI for now. As one of my PhD advisors, Eric von Hippel, pointed out: R&D is very expensive because it involves lots of trial and error, but when you are doing a task all the time, trial and error is cheap and easy. That is why a surprisingly large percentage of important innovation comes, not from formal R&D labs, but rather from people figuring out how to solve their own problems. Learning by doing is cheap if you are already doing.

Experts thus have many advantages. They are better able to see through LLM errors and hallucinations; they are better judges of AI output in their area of interest; they are better able to instruct the AI to do the required job; and they have the opportunity for more trial and error. That lets them unlock the latent expertise within LLMs in ways that others could not.

Expert Co-Intelligence

For example, LLMs can write solid job descriptions, but they often sound very generic. Is there the possibility that the AI could do something more? Dan Shapiro figured out how. He is a serial entrepreneur and co-founder of Glowforge, which makes cool laser carving tools. Dan is an expert at building culture in organizations and he credits his job descriptions as one of his secret weapons for attracting talent (plus he is hiring). But handcrafting these “job descriptions as love letters” is difficult for people who haven’t done it before. So, he built a prompt (one of many Glowforge uses in their organization) to help people do it. Dan agreed to share the prompt, which is many pages long - you can find it here.

I am not an expert in job descriptions, and I certainly haven’t done the trial-and-error work that Dan has done on this topic, but I was able to draw on his expertise thanks to his prompt. I pasted in a description of the job of Technical Director for our new Generative AI Lab (more on that in a bit) at the end of his prompt, and Claude turned it from a standard job description into something that is much more evocative, while maintaining all the needed details and information. Expertise, shared.
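Mechanically, "drawing on his expertise" is as simple as appending your own draft to the shared expert prompt and sending the whole thing to the model. A rough sketch of that pattern is below; Dan's real prompt is many pages long, and the file names and model name here are stand-ins, not his actual materials.

```python
# Hypothetical sketch of reusing a shared expert prompt: append your draft to the
# expert's instructions and let the model rewrite it. File names and model are placeholders.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

EXPERT_PROMPT = open("job_description_prompt.txt").read()   # the shared, many-page expert prompt
my_draft = open("technical_director_draft.txt").read()      # your ordinary draft description

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=2000,
    messages=[{"role": "user", "content": EXPERT_PROMPT + "\n\n" + my_draft}],
)
print(message.content[0].text)  # the rewritten, more evocative job description
```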

Because LLMs have so many generalist abilities, they are also capable of solving other unexpected problems, including ones in industries far from knowledge and creative work. If you go to many factory floors in the US, you will see a lot of old manufacturing equipment with analog dials and no way to connect them to modern manufacturing software and processes. But it turns out that LLMs can be trained to read gauges, and AI can even make smart decisions about what an anomalous reading might mean and when to alert humans. With expert guidance, I wonder if LLMs will allow older plants to skip over an entire phase of development, the way that cell phones allowed many countries to skip the step of building elaborate landline networks.
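For readers who want to picture how gauge reading might work in practice, here is a rough sketch: send a photo of an analog dial to a vision-capable model and ask for a structured reading plus an anomaly flag. The model name, prompt, and thresholds are illustrative assumptions, not a description of any specific factory deployment.

```python
# Rough sketch of the gauge-reading idea with a vision-capable model.
# Prompt, thresholds, file names, and model name are all hypothetical.
import base64
from openai import OpenAI

client = OpenAI()

def read_gauge(image_path: str) -> str:
    """Ask the model to read the dial and say whether the value looks anomalous."""
    image_b64 = base64.b64encode(open(image_path, "rb").read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Read this pressure gauge. Reply as JSON with "
                                         "'value_psi' and 'alert' (true if outside 20-80 psi)."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(read_gauge("boiler_gauge.jpg"))  # hypothetical image file
```

Whether this is reliable enough for a given plant is exactly the kind of question only someone with expert knowledge of that plant can answer.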

Starting with expert use has another advantage over the way companies are approaching AI today. Rather than starting with a centralized software solution, you can start with simple prompting and GPT approaches, and only build more advanced tools when experts discover limitations. For example, I have been working with a team of talented folks at Wharton building interactive teaching simulations based on ideas from gaming for over a decade, and we have gotten pretty good at it. These simulations are terrific teaching tools, but they took months (or years!) to build and a ton of expert work from game designers, subject matter experts, programmers, and fiction writers. So it was amazing when we discovered that we could get 80% of the way to a good game with a prompt alone… but there were still things missing. Prompting didn’t let us handle all of the access control and reporting issues needed to run a class, or the need to provide direct educational materials, or to offer consistent feedback across multiple simulations. These were things we learned were critical to educational simulations.

So, we built something new, a system of AI agents that worked together to deliver an educational game experience, with each agent playing a role based on our decade of experience in this field. One of our first AI-based games teaches people how to pitch a startup company to investors.

As opposed to our old way of doing this, which took huge amounts of time, most of the work is done by well-prompted AI agents. After watching a video about how to pitch, students talk to a mentor agent that helps tutor them (customizing the tutoring experience to their interests and experience level), then an investor agent that acts like a venture capitalist, while an evaluator agent looks on to grade the work and keep things on track through prompt injection. Then a progress agent gives feedback, and an insights agent helps the teacher understand how the class is doing. We are also working on new agents to transform the creation of games. We are building one that will interview subject matter experts and create customized games for them, and also spin up playtest agents to evaluate and modify the game. What used to be a years-long process can now be done in weeks, and soon in days. The full architecture, and all the prompts, are in our paper.
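To demystify the word "agent" here: in this pattern, each agent is essentially a separate LLM call with its own role-specific prompt, wired together by ordinary code. The toy sketch below illustrates that structure only; it is not the Lab's actual Primer architecture (the real prompts are in the paper), and the prompts and model name are invented for illustration.

```python
# Toy sketch of the multi-agent pattern: each "agent" is an LLM call with a
# different system prompt, orchestrated by plain code. Not the real Primer system.
from openai import OpenAI

client = OpenAI()

def agent(system_prompt: str, user_text: str) -> str:
    """One 'agent' = one LLM call with a role-specific system prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_text}],
    )
    return resp.choices[0].message.content

pitch = input("Paste your startup pitch: ")

mentor_notes = agent("You are a pitch mentor. Coach the student on this draft pitch.", pitch)
investor_qs = agent("You are a skeptical venture capitalist. Ask three hard questions about this pitch.", pitch)
evaluation = agent(
    "You are an evaluator. Grade the pitch on clarity, market, and team; return a short rubric.",
    f"Pitch:\n{pitch}\n\nMentor notes:\n{mentor_notes}\n\nInvestor questions:\n{investor_qs}",
)

print(mentor_notes, investor_qs, evaluation, sep="\n\n")
```

The expertise lives in the role prompts and in knowing which agents a good learning experience needs, which is why a decade of simulation-building experience mattered so much.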

Unlocking latent expertise for all

If making current AI systems work better is about expert guidance, then we need experts to share what they learn, or people will use AI in ways that don’t draw on latent expertise. For every student using our tutor prompt, tens of thousands of students are just asking the AI to explain something “like I am 10 years old.” For every carefully built job description, thousands of people are just using LLMs and getting indifferent results. Success is going to come from getting experts to use these systems and share what they learn.

For companies, this means figuring out ways to incentivize and empower employees to discover latent sources of expertise and share them. There are tons of barriers to doing this in most companies - politics, legal issues (real or imagined), costs, etc. - but for organizations that succeed, the rewards will be large. And for those of us outside the corporate world, the mission of discovering latent expertise and sharing it is even more urgent. Realizing the benefits of AI, and mitigating the harms, requires people to understand when and how to use it.

That is why I am really excited to have launched a new effort at Wharton, the Generative AI Lab. The goal of the Lab is to build (and to share knowledge about how to build) research-based uses for AI that help anyone access the latent expertise within AI, while avoiding the downside risks. We are going to be releasing software like Primer, the AI-agent-based educational system, open source, for anyone to build on. I hope this is just the start, and we will see more experts in academia, industry, and government step up to share what works and what doesn’t in AI. And I hope that they do it in open and transparent ways that we can build on together.

We need to figure it out together because there is nobody else who can. The AI labs themselves don’t know what they have built, or what tasks LLMs are best suited for. Bad actors will find bad uses for AI regardless of what we do. We need to work at least as hard to not just mitigate the damage they will do, but also to find good uses that help humans thrive using these new tools. Sharing what we learn, and what works, is a good way to start.

Related: Start Using AI for Yourself
Related: Commission A Personal AI Project
Related: Try This Before You Give Up on AI
Related: Beyond the Prompt: Ethan Mollick
Related: More Info: 2-Week AI Email Course

Join over 24,147 creators & leaders who read Paint & Pipette each week
