Special Episode: “Beyond The Prompt” with Greg Shove


Special Episode: Show Notes

Today, we join forces with Henrik Werdelin to launch Beyond The Prompt, a podcast that explores how companies leverage AI to streamline operations and better serve their customers. Henrik is a remarkable entrepreneur and one of the visionary founders of Prehype, a community of entrepreneurial individuals dedicated to assisting one another and collaborating with companies to foster innovation and build new ventures.

In this special episode, Henrik and I sit down with Greg Shove, CEO of next-generation business upskilling platform Section, to discuss his experience leveraging AI to optimize, accelerate, and transform Section's operations. Using its sprint format, Section collaborates with renowned business school professors to provide top-notch business education that translates into tangible real-world results. In this conversation, Greg unpacks the adoption of AI in the workplace, the challenges and opportunities it presents, and specific use cases within Section.

This far-ranging conversation explores the importance of trust and transparency in AI adoption, the everyday tasks that AI can help with, and a framework for AI integration. You’ll also gain insight into AI’s role in education, the reinvention of education delivery, common misconceptions around AI, and much more! For a valuable look at the pragmatic considerations, hurdles, and possibilities associated with integrating AI into everyday business and educational contexts, listen in now!

Key Points From This Episode:

  • Greg's role at Section and his motivation for pivoting to online learning and AI.

  • How the pandemic was a catalyst and a false signal for Section.

  • The pivotal moment when Greg realized the power of AI.

  • Aspects of leadership and trust needed for AI adoption.

  • Three modes for approaching AI: optimize, accelerate, and transform.

  • AI’s potential in high-value tasks and decision-making processes.

  • Challenges in prompting AI effectively and learning from your mistakes.

  • Examples of tasks that are suitable and not suitable for AI.

  • Strategies for experimenting with and adopting AI into your workflow.

  • Common misconceptions and pitfalls of using AI.

  • The role of AI in education and the potential disruptions it may cause.

  • Why a dynamic AI strategy, experimentation, and knowledge of advancements are critical.

  • Greg’s thoughts on the future trends and potential of AI.

  • Jeremy and Henrik’s main takeaways from the conversation. 

Quotes:

“Happy consumers are a great way to access enterprise customers.” — @GregShove [0:04:40]

“Anxiety has been the primary emotion associated with AI for most of us.” — @GregShove [0:08:29]

“One of the cool things that AI is allowing people to do is to reduce the space between people with an insight or an understanding of a customer and the ability to produce an output.” — @werdelin [0:25:40]

“The magic of generative AI or these models is that they move across domains. They move across industries. They move across functions. They move across skills with such ease. We don't as humans.” — @GregShove [0:26:41]

“The biggest pitfall [of AI] is misinformation and unrealistic expectations.” — @GregShove [0:32:22]

Links Mentioned in Today’s Episode:

Greg Shove 

Greg Shove on X

Greg Shove on LinkedIn

Section

Claude

ChatGPT

Bard

Perplexity

Fathom

Dall-E

Midjourney

Synthesia

Lex

Notion

Superhuman

Whisper

Voice AI

Anthropic

Henrik Werdelin 

Henrik Werdelin on X

Prehype

Jeremy Utley

Jeremy Utley Email

Jeremy Utley on X

Jeremy Utley on LinkedIn

SPECIAL EPISODE [TRANSCRIPT]

"GS: Let's be clear, right? We're not talking about building. We're not talking in this conversation about spending a million dollars or more, millions of dollars to build an AI or AI product. We're not talking about that at this moment. We're talking about knowledge workers every day using AI. It costs 20 bucks a month. Pay for it yourself or get your employer to pay for it. And if your employer is clueless and won't pay for it, then pay for it for yourself. And so, I think the success metric is different. It's about what percentage of your team is AI-ready, or AI-comfortable, or AI-competent. I call it the AI class. I think the workforce is splitting into two."

[00:01:46] HW: Welcome to this very first episode of Beyond The Prompt. I'm Henrik Werdelin. Here together with my co-host, Jeremy Utley, in a podcast that explores how companies can leverage AI to streamline operations and better serve their customers. 

We're thrilled to have Greg unpack real-world examples of using AI to increase productivity and make a better company. A very warm welcome to you. And now, beyond the prompt, here's Greg. 

[00:02:10] GS: My career is the typical Silicon Valley career, 30 years of hard labor. No unicorns. Some exits. Good quality of life. Financial security established for our family. Not easy. Lots of pivots. Lots of heartache. Lots of dead ends. That's typical. It's not what they sell us at Stanford Business School, or the myth that the media creates about building companies. But I was fired from Apple. That was my first job out of business school here in the US. I came from Toronto. And after that, serial entrepreneur. 

[00:02:47] JU: Just by way of background, if you want to talk briefly about your role at Section. And then, specifically, I think where we want to go is what have you been trying – I know from our interactions that you felt real passion and purpose around being forward-leaning with AI. We'd love to understand maybe how you came to Section? When the AI moment dawned upon you? And what were some of the early decisions you made? And then we can take the conversation from there. 

[00:03:14] GS: Yeah. Great. Scott Galloway is a close friend. I've known Scott for over 30 years. He started the company. Had raised a seed round and initially had launched the company as a media business. And I came on about a year and a half after it was launched and decided to pivot it to an edtech business. Maybe the second worst business model in the digital kind of landscape. 

But Scott's a great professor. A great educator. And I thought we could marry his abilities with live, video-based learning. Live. Not asynchronous. I felt that online learning hadn't yet fulfilled its potential. Its Achilles heel was that no one really wanted to do it, and no one really showed up and completed. And the metrics for platforms like Udemy and LinkedIn Learning I think confirm that over the last 10 years. And so, I thought we could build a different experience with kind of a higher bar for the student in terms of learning outcomes. We did that. We enjoyed an incredible growth spurt through the pandemic. 

[00:04:11] HW: Amazing catalyst, right? 

[00:04:13] GS: Yeah. Amazing catalyst. And a false signal to some extent. Like other companies that enjoyed that kind of boost, right? When it was over, there certainly was a hangover. We had raised a fair amount of capital, which we invested in marketing, but also in curriculum development. That money was well spent in terms of building out our curriculum. And here we are now really figuring out how to build a capital-efficient learning business that supports the enterprise. We still serve consumers and they're a great way to access – happy consumers are a great way to access enterprise customers, right? So, that consumer enterprise flywheel.

[00:04:45] JU: Diving into the AI moment. You're on this journey. You experience the bump, and then the hangover and all that. Where in the midst of that did you have what you might call the CEO epiphany? 

[00:04:55] GS: It was simple for me. The moment was, I had decided this year – I'm two years cancer-free – I decided this year to work for 10 more years. And I hadn't yet decided with what level of intensity across the whole 10. But I decided to work intensely for the next five years and at least somewhat intensely for the next five, for a total of 10. I'm 62. And I realized that basically AI could be testosterone for my brain. 

I realized that if I was going to be as productive as I wanted to be – I also set a goal of creating $10 million of additional family wealth for my kids and grandkids. And so, when I made those decisions and then basically started playing around in the December, January time frame with AI, I thought, "Okay. If I could combine my ambition and my experience with the cognitive boost of AI, then I'd have a chance. I'd have a much better chance of making the impact, and having the productivity and contributions I want to make over the next 10 years." 

Silicon Valley is a tough game. It's an ageist place, right? We discriminate against people over the age of 40, amongst others. Not the only ones. Anyway, I just felt like it was a chance for me to add a superpower. 

[00:06:11] JU: What did you see that made you feel "testosterone for my brain"? What was it? Do you remember the moment where you go, "Oh?” 

[00:06:18] GS: It was two things. It was the lack of friction to get an answer. And my point of view – and this will be one of the lasting impacts of generative AI or this chat interface – is that interface friction will not be tolerated going forward. We're going to expect, and we're going to have, these sort of magical experiences where we get the answer that we are looking for in a way that just has fewer keystrokes, less cognitive load, less effort, less hassle. 

One, the moment was playing with GPT and where it felt like, "Okay. That was the equivalent of four or five searches done in one." Right? And the second was quickly thinking about it as a thought partner. I was able to quickly realize that it's not search. It's brain power, right? And specifically for me, thought partnership. And this idea of approaching something from a different angle. 

But by my age, you have a lot of bias. You have a lot of experience, but you have a lot of bias. You have a lot of set beliefs. You have points of view. And I just find AI so refreshing and invigorating in terms of allowing me to come at a problem or an idea just from a different angle, right? Which is just harder to do as you get older. Our brains are less plastic.

[00:07:29] JU: I know. We hate to admit it. Okay. I see all that. Now what I'd love for you to talk about, and I know a little bit of the answer because of my experience with you, did the organization see it immediately? Does everybody – is it as apparent as it was to you, "We need to make a radical investment and exploration." Or did that require some leadership? And if it did, what did that leadership entail? 

[00:07:54] GS: Yeah. Well, it's the classic CEO, "I've seen the future," kind of moment. Everybody else is, "What? No. You haven't. You're an idiot. You're the guy that did three layoffs. Really? You're the genius in the room." Absolutely had one of those moments. Probably more than one. 

And then the reality of this is everybody's got day jobs. Everybody's busy. The team is lean. The capital is less available. And so, the team is smaller and working really hard. And this is more work. 

Frankly, the media doesn't help even if you're in Silicon Valley. But if you're not in AI, you're just – anxiety has really been the primary emotion associated with AI I think for most of us for the whole year. 

[00:08:35] HW: I guess when a CEO says, "Hey, we should do AI," it seems that a lot of people on teams hear you say, "Yeah, we should do AI because then I can fire a bunch of you." And, obviously, that could be an outcome. But it could also be an outcome that you can give people an Iron Man suit and make the boring part of their job less boring. And you can allow them to do more of the fun stuff. 

Jeremy and I were talking about how you can see the efficiency that AI can give you as helping either the top line, if you give people the ability to make a better product, or the bottom line – 

[00:09:04] GS: Yeah. That's exactly the way – I think that's the right way to think about it. We think about it in one of three sorts of modes or approaches, which is optimize, accelerate and transform. And so, when I started to play with it – and I did the, "Hey, look at this," at the all-hands meeting where I was Slacking people, "Hey, check this out." Or I was forwarding newsletters and I was cutting and pasting links to, "Hey, check this out." And people, I'm sure, were setting up a little folder or whatever. A little, "Hey, all the stuff from Greg about AI. And I'll look at it maybe on the weekend, but I probably won't." That went on for two or three months. 

Then I realized I need to do something else. And so, I need to make this kind of both real and strategic at the same time. These are smart people who work at Section at any of our organizations. They want to know like why are we doing this and where is this taking us. 

I thought about it exactly the same way you do. How do we make ourselves more efficient internally? How do we optimize? Can we run the business better, allowing us to do more of the good stuff and less of the drone work? And that does not mean laying off people, obviously, in most cases. There will be job loss. Significant job loss in some areas, right? But not right now, and certainly not at Section. Because everybody's working too hard. But we thought about it as optimize. 

I thought about, "Well, how do I make the business go better? More revenue. Expand margin. Improve the product in measurable ways." And that's the kind of second bucket. And most of our attention went to those two buckets. 

Transform is, "Okay. I'm at such existential risk. Or I see such incredible opportunity. I might want to move to that kind of mode." I might take a chunk of money and a bunch of people and actually go try to do something that's more transformative. I'd say I got people to understand, "Listen. I'm not trying to lay anyone off. I'm trying to make us operate more efficiently." And we're in the content business. This is called generative AI. 

[00:10:49] HW: Yeah. It is content. 

[00:10:51] GS: It generates stuff. Content. Right? I had to get people there. And it took me too long. I should have got people there faster. It probably took me 90 days because I was doing that CEO thing, "Hey, read this. Check this out. What about this? Why don't we try that?" And I think it would have been more effective to do that for a month and then call timeout. It took me a while to really figure out this framework or figure out this approach and then get people in a room and say, "Hey, let's start talking about it a little more intentionally." Versus cool demos. 

[00:11:19] HW: Was it like, for real, just at all-hands like we're doing now? We've heard other CEOs kind of send out a manifesto. What was the tactical kind of approach? 

[00:11:27] GS: Yeah. No manifesto yet. We don't have AI – we don't have AI principles or policies yet. But we will soon. No. It was more really running a series of brainstorming sessions using this idea of optimize, accelerate, and transform. And we took transform off the table. I'm not sure we should have. And we're bringing it back on the table now. And I'll talk about that in a moment. We basically used those two kinds of operating modes, optimize and accelerate. 

Okay. Let's talk about AI, right? Optimize is easy. They're both pretty easy to figure out, I think. Because you just do an audit of your internal workflows. Ask people to do an honest audit of their internal workflows. It's not that hard, right? Look at your calendar. Look at your Asana boards, whatever it might be, and map out how you spend your day. How you spend your week? And look at these moments where you're doing tasks that we think AI could do at least as well. Maybe even better, right?

[00:12:14] JU: With that caveat, how do you get people to honestly look at the workflows? Because I can imagine a scenario where Greg's saying we want to see what AI can do. And I look at my calendar and I go there's a lot of it. How do you keep people from wanting to hide that stuff? Because you imagine a scenario where people go, "I don't want Greg to know that AI can do a lot of my job." How did you provide that assurance that optimize is not about getting rid of you? It's about supercharging you. 

[00:12:40] GS: Yeah. Listen. That's all about trust. They may not have thought I'd seen the future, like I was that smart. But going through the last three years that we've been through together as a team, they at least at some level trust me, right? Meaning, every time we did a layoff, we were transparent about when we were doing it, why we were doing it, and who was going to be impacted. You've got to earn that, right? There's just no other way. 

I think it's harder in bigger companies. And I think, frankly, there is a lot of cynicism. A lot of mistrust. The relationship is so frayed in so many ways, I think, between kind of knowledge workers and their employers. That's why people have two jobs at the same time. 

Yeah. Listen, in that respect, we're fortunate, or I'm fortunate. I think there was enough trust to say, "Listen, let's look at this and see if we can get some gains out of it." But it's a great question. I think in large organizations, it's tough. You're going to have to somehow – you're going to have to diagnose where you're at on that trust scale or trust meter. Because, otherwise, that will be the behavior you'll get. You'll get people – or we see it now in the other way, right? That Salesforce study that just came out this week. I think it was 14,000 employees. That 64% were passing off AI work as their own work. It makes sense in this context of lack of trust. 

One of the things we did around the same time, and I think it helped, was we started acknowledging when we are using AI. And we even did something as goofy as AI shoutouts at all-hands. When I started, people were like, "What? You're going to do like an AI shoutout?" We do human shoutouts every week. But it was just really an attempt to say to people, we're all going to be using AI. It's going to be okay. We're going to celebrate the wins. We're going to talk about the losses so we can learn from them. And, yeah, we're going to move forward together. 

[00:14:16] HW: What are a few examples of something that fell into the optimize bucket?

[00:14:20] GS: Oh, easy stuff. Right? Low-hanging fruit would be transcriptions and translations of content. Obviously, preparing scripts for video shoots. A lot of marketing tasks that I'm sure you know about. Email templates and building drafts of marketing email campaigns and cadences or sequences. Yeah. Stuff like that. Basically, what I'd consider V1 work. Most of gen AI today I think is – the way to think about it is get you off the blank page and get a good V1 faster.

[00:14:48] HW: But I would imagine even that has already yielded efficiency. I think that's one of the interesting things with the tools that are available right now. You don't have to do a lot before it becomes something that's really useful.

[00:15:01] GS: You don't. And it doesn't cost a lot. People tend to think, "Oh, this is going to be expensive." Or it's going to be like the prototype, and then a pilot and all that stuff. And I've got to – it's 20 bucks a month. It's 20 bucks a month and some training, right? You've got to help people learn basically how to get better at prompting. The gains are so obvious at that – 

[00:15:18] HW: On your age question – or your age comment earlier, do you feel that because prompting is easy for everybody – you don't have to understand Python – this is one of the things that could be easily adopted by everybody? Or is prompting AI also a young person's sport? 

[00:15:35] GS: I don't think it's a young person's sport. But I think it's not easy in terms of good prompting. The complex, contextually relevant and directed prompts – people are now calling them structured prompts – I don't think they're natural for us necessarily. The conversational approach is more natural. But the bottom line is, I think prompting still is harder than it should be. And we obviously need to move to some version of an agent or some – the GPTs – November 6 to me was a kind of watershed moment for AI, I think, and probably for the board of OpenAI, too. But I'm blown away by GPTs and really showing a path forward that can unhook AI from prompting basically and bring real value at a very kind of task level. At a very sort of micro level in terms of people in their daily workflows. But yeah, I think that prompting is just hard for all of us to really do well. And so, I'm looking forward to prompting less in 2024.

[00:16:28] JU: Yeah. You can imagine that being something that OpenAI actually – maybe the answer is for AI to understand what we're trying to say. Where prompting just gets built in. ChatGPT gets GPT. 

[00:16:39] GS: I want to go back to Henrik's question though. We did one thing early that I think was a light bulb moment for all of us. And I think it's a good idea for others to try. We took something that was very high value – infrequent but high value – and applied AI to it to see what would happen. Let me give you the example. And it's AI as a board member. We don't record our board meetings, but we take good notes. We had a board meeting in the summer where I asked the team to take faithful notes, better than usual, around all the input and ideas we got from the board.

And so, we used that. And then we used the board deck that we had sent the board as a pre-read, and then we basically ran the board deck through four models, right? Claude, GPT-3.5, 4 – I guess, five – Bard and Bing. And compared the output from the AI after a bunch of prompts. Not just one. But we started with the obvious prompt, right? Pretend you're a board member. I'm the CEO. I sent you this pre-read. It was mind-blowing really how good particularly Claude was. 

In fact, GPT-4 performed poorly in that moment, I think primarily because of a crazy hallucination. But Claude nailed it. And we actually rated Claude as 91% overlap with the feedback, including quite nuanced feedback about the stage of our company, our cap table and preferences, the kind of growth we'd have to create or margins we'd have to create. Sort of a growth versus profitability kind of discussion. 

I shared that with everybody. I shared it with the team internally. And they're like, "What? With that kind of simple prompt?" Right? And the conversation afterwards was, with one pre-read, AI could do that. I shared it with the board. I said, "Hey, guys. It's time for you to bring your A-game. Because –"

[00:18:20] HW: Yeah. You're about to get replaced.

[00:18:22] GS: Yeah. Setting up a board meeting takes two hours at least of scheduling effort. Then the thing's 90 minutes. And then there's 30 minutes afterwards. And then there's some email follow-up. Claude got 91% of what you said. And I got a great board. I got the former CEO of Time Warner. I got board members who are on the boards of Moderna and Akamai. I've got a great board, right? And so, I'm like, "Guys –" 

[00:18:42] JU: Are they concerned at all about AI taking their jobs? Is that – 

[00:18:45] GS: Yeah. One of the responses was, "Hey, we loved AI until you showed us this. Because now it's – what? It's coming for us?" It's coming for all of us. 

[00:18:52] JU: What I love about that example, Greg, is that if you bring that back to your team, you're showing the team this is how it's relevant for me. And I think sometimes you use that word, V1 work. I think the way that can be interpreted is that it's about low-level tasks. It's about the junior employees. And what you do with that example – as you say, the board deck – it's the highest level in the org structure. You conceive of it as the highest-level task and you say, "Even there, I'm using it. It helps me a lot." And I haven't been fired, and nor has the board. It's just this amazingly elegant example to show people it's not about you not having a job. It's about you doing a better job.

[00:19:34] GS: Yep. I think that's right. And I think it is about providing that context as well. Listen, why am I doing this? I'm doing this because – and by the way, I'm doing it. Meaning, I'm now giving my board decks to Claude and GPT-4 prior to my board meetings so I can raise my game. And what I'm trying to get people to understand is that the meeting will now be more productive, right? 

If everybody has not just read the preread but now used AI with the preread, they're going to be able to come in and hopefully move the conversation forward faster and hopefully obviously bring the benefit of human experience and the years on the planet that we all have to these conversations, right? And so, I think people at that moment, they're, "Okay. I get it. We're not trying to do away with me or the meeting. We're trying to make the meeting better and trying to get better answers, better decisions, things like that." 

That's why I like thinking about these high-value moments, or meetings, or discussions, or decisions, or conversations that we have and bringing AI into them as a test case or use case. And, by the way, sometimes it doesn't work. And that's obviously part of the conversation. And I think that also helps. The anxiety will drop a little bit. Hey, it's not AGI and they're all – don't worry about all that. These are like – they're kind of dumb. They're getting smarter and they often don't work. We're not going to use it for that. And you're going to have to go back to the old way. And so, sort of really highlighting where AI has failed internally where we've stopped using it for tasks has been I think really good as well. 

[00:20:56] JU: What's an example of a task that you stopped using AI for? 

[00:20:58] GS: Well, I used to deal – one that's relevant to me. We used to do scripting. Video scripting. We don't think AI is – the speed doesn't make up for the lack of quality. We're now back to writing scripts fully manually. If you will, human. That's one example. That's out of the education team. 

For me, I used to use AI over the summer and into – just a couple of months ago, I was using AI to write my monthly email to the board. Because we do a great weekly email summary of the business. We call it the three Ps, right? Progress, priorities and problems. 

I was taking four weeks of three Ps from my direct reports, feeding it into Claude and saying, write me the first draft of my monthly board update. It seemed to work well enough. Or maybe it was just confirmation bias. I was just thinking it was doing well enough, right? Because I was trying to show examples of how AI can help. 

But the reality is, by September, October, Haley, my assistant, was like, "This is just not worth it. We're not getting back a good enough V1. Let's just go back to the old way of cut and paste and edit." Now that's back on my to-do list as an hour-long to-do once a month. Versus I thought I could get it down to 15 minutes and have AI do the rest. And it's just not working right now for whatever reason. And we'll try again in Q1 and see if we can get it to work. Maybe we build a GPT for it. That's one of the reasons I'm so excited about GPTs, right? Because with GPTs, you can give really specific instructions. You can constrain the training data – that's a way to think about GPTs, right? – and make it very task-oriented. That'd be a great test to build and see if I can then get back that hour of time, which is I think how we should be thinking about it. If I can get back these hours of time over the course of a week, they'll add up to be meaningful. 

[BREAK]

[00:22:42] JU: Research is clear that our first idea probably isn't our best idea. That's true for you, me, as well as your organization. But that first idea is an essential step to better ideas. How do you improve your idea flow? That's my passion and the work I do with organizations. If you'd like to explore how I can help your organization implement better ideas, let's talk. Check out my website, jeremyutley.design. Or drop me a line at jutley@jeremyutley.design. Let's make ideas flow better. 

[INTERVIEW CONTINUED]

[00:23:19] HW: I was curious about a super nerdy question. I've also experienced that, for some things, I just go straight to Claude. For some things, I go to GPT. And for just normal chatting, I would use Pi or whatever. Increasingly, people on teams kind of have their go-to LLM. It sounds like you have the same. Could you explain that and maybe try to help create some vocabulary on how to think about which service to use for what? 

[00:23:47] GS: Yeah. Yeah. I think – well, let's talk about the – well, here's what I think about it. Framework-wise, it's daily, weekly, occasional, and then in testing mode. I've got this portfolio, right? And so, daily for me, and for us mostly at Section, it is GPT-4. The most muscular model with the most range, if you will. And Claude. 

And for some reason, and I don't know why, Claude seems to be better at thought partner work and seems to be better at business. It just seems to have more, yeah, business-oriented thoughtfulness. I don't know. We think Claude's a better thought partner for people, for executives, for people who work in business. I don't know why. 

And now Perplexity, I'd say, has been added to that list in the last couple of months. Really, our go-to for more research-oriented work. Clearly, when we want to see the sources. Now GPT-4 is revealing the sources as well. But I think Perplexity really owns that at this moment in time. And then, of course, Fathom for note-taking. I use Fathom. That's probably the go-to daily. 

And then weekly. For me I never got my head around Midjourney. And I'm not a creative. And so, Dall-E for me is just so much easier and good enough. Although Jeremy's wagging.

[00:24:51] JU: Sorry. Sorry. I can't let that comment stand. That's my other life, Greg. I can't abide. Because people know me. And even though this podcast isn't about creativity, I can't let the comment, "I'm not a creative,” go – just like my dad's a lawyer and you have to mark an objection. Marking an objection, we can keep going. But I just want to make sure I could reflect. I objected to your comment that you're not a creative. But please continue.

[00:25:14] GS: Okay. You're objecting? Or was this just like you object – 

[00:25:18] JU: I fundamentally disagree with – you're not a creative? 

[00:25:20] HW: We're all creatives. We're all creatives. 

[00:25:21] JU: A hundred percent, a 110%. And by the way, if we share this conversation with 100 people and ask a simple question, "Is Greg a creative person?" 100 people will say, "Absolutely. He's doing stuff I never imagined." I'm mostly just teasing and making – 

[00:25:39] GS: I get it. 

[00:25:39] HW: Maybe to riff on that, I think one of the cool things that AI is allowing people to do is to reduce the space between people with an insight or an understanding of a customer and the ability to produce an output. And I think, historically, the reason why people would define themselves as not creative was because they were not able to produce the final output that was seen as being something that was visual, or music, or whatever it was. 

And obviously now, where somebody like you, Greg, can take all the wisdom and insight you have and package it in and you can get Midjourney, or Dall-E, or whatever to render that graph, or image, or whatever it is, I think that actually changed quite a bit.

[00:26:21] GS: Yeah. I agree. I think about – that's one angle to think about. The other is someone who has done the mental math or the napkin math but can't get it into that sort of CFO-ready presentation. And they're not viewed as that person who builds robust business cases. But now they can, right? Or they're going to get much closer now with AI. That's obviously that. 

To me, the magic of generative AI or these models really is that they just move across domains. They move across industries. They move across functions. They move across skills with just such ease. And we don't as humans. And so, we're in our silos both skill-based and industry-based. And AI doesn't live in a silo. It's just so powerful in that respect. 

That's how I think about it. My daily tools or weekly, less frequent, something like Synthesia. We're playing with synthetic video and voice a little bit. We'll see where we go with that. And then testing. For me, what I'm testing right now is Lex. Just started with lex.ai to see if I can find a good writing partner or, again, get off the blank page. And then, of course, beginning to play with the AI features that are appearing in the tools we use every day like Notion or Superhuman. 

[00:27:31] JU: One thing that I just want to contribute to the comment about Claude being a better business kind of thought partner. I agree. But the challenge is, for me, there's so little time where I'm sitting at my desk. And GPT with Whisper and Voice to me is my go-to thought partner. Not necessarily because it's better. The best thought partner is the one who's always available. 

And I kid you not. Practical use case, I had a sales call last week with a client. And I know I've got 40 minutes to do a workout. And if I don't leave now, I know that email is going to suck up my life for the next 40 minutes. I get up and I go. But I know I've got to send a recap. The fact that I can open up ChatGPT, all via voice, while I'm stretching, "Hey, I just talked to Dan. And so, we talked about this and this. And I want to send him a follow-up. Would you mind taking a first pass at a memo that I could send him letting him know that I'm excited? That I heard these three things and da-da." Right? 

Oh, yeah. And don't forget this. Now I'm done stretching. And the one thing that I probably would have forgotten to do I did with the thought partner who may not be the best, but it's the one who's available for that moment. 

[00:28:34] HW: Absolutely.

[00:28:35] GS: Yeah. Listen, I think my takeaway from that is you've got to give OpenAI a lot of credit, a lot of props, the last 12 months. The rate, the pace of their product advancements or releases has just been incredible. As we all know, Google slipped Gemini to Q1. And I'm sure at Google, they're sitting around thinking, "Shit. We get to Q1 and we're going to be behind again." Because OpenAI will have pushed further ahead, now that they decided who the CEO should be again. But, yeah, really impressive what they're doing. And hard for others to catch up. Yeah. Absolutely. We need a mobile app from Anthropic soon.

[00:29:08] JU: Yeah. I can't believe it isn't there yet. Folks at Anthropic, if you're listening to this, please. I want to shift gears. Just last topic that I've got on my mind. Henrik may have others. But I'd love to hear about success stories. And I'd love for you to brag on yourself and get practical. What would you say – what is a success in applying generative AI to the business at Section that you feel like, "Wow. This is a great example of the kind of impact you can have." And then, also, I know because of the course, there's loads of examples. If you want to refer to anything outside of Section as well, we'd love to hear that too. But for folks who are going, "What kind of – really? What kind of impact can this have on our operations?" Are there one or two case studies that you could share that just highlight the practical real economic value of integrating generative AI? 

[00:29:57] GS: Yeah. I don't. I want to come at the answer a little differently. I think success at this moment – talking about this moment, right? Because we're talking about 20 bucks a month. Let's be clear, right? We're not talking about building. We're not talking in this conversation about spending a million dollars or more, millions of dollars, to build an AI or AI product. We're not talking about that at this moment. We're talking about knowledge workers every day using AI. It costs 20 bucks a month. Pay for it yourself or get your employer to pay for it. And if your employer is clueless and won't pay for it, then pay for it yourself. 

And so, I think the success metric is different. It's about what percentage of your team is AI-ready, or AI-comfortable, or AI-competent. I call it the AI class. I think the workforce is splitting into two. The AI class and everybody else. The knowledge workforce is about to split. And we need ourselves to be in the AI class. And the more of us inside of an organization that are in the AI class, that means the organization will be in the AI class. 

That's clearly to me the challenge over the next 12, 24 months. And in that, if we make that happen, we'll get the successes, and we'll get the business cases and we'll get the ROIs. But we're not looking for a lot of ROI because it's only costing us 20 bucks a month. That's one part of the answer. 

Second part of the answer is that there are hundreds of use cases and they are specific. Meaning, the application of that use case to your workflow or your company's kind of workflows is so specific that my examples probably don't matter. And, by the way, what am I looking for? I'm looking for one use case that saves half an hour, at least. If you can't come up with one use case that's going to give you back half an hour in a week or an hour in a week – here's how I think about it.

100 bucks an hour for knowledge workers. And that's probably the low end, at least in Silicon Valley. But use that number. It's a nice round number for what we pay a knowledge worker. Our token costs this week for GPT-4 are around two cents per 500 words. Right? Do the math. That's a lot of queries to the AI, which means to me several things. My employees should be able to get enough value from 20 bucks a month. That's number one. Number two, it'll add a lot of value to someone who I'm paying 100 bucks an hour, or two hundred or more. Because they're going to be able to do a lot of queries and get some value from the AI. Just the math works. 
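The "do the math" comparison above is easy to make concrete. Here is a minimal sketch of that back-of-the-envelope arithmetic, using only the approximate figures quoted in the conversation (roughly $100 per knowledge-worker hour, a $20-per-month subscription, and about two cents per ~500-word GPT-4 response); these are illustrative assumptions, not authoritative pricing.

```python
# Rough sketch of the back-of-the-envelope math described above.
# All figures are the approximate numbers quoted in the conversation,
# used as illustrative assumptions rather than authoritative pricing.

hourly_rate = 100.00            # rough cost of a knowledge worker per hour (low end)
subscription_per_month = 20.00  # consumer AI subscription, "20 bucks a month"
cost_per_response = 0.02        # ~2 cents per ~500-word GPT-4 response

# How many AI responses cost the same as one hour of a knowledge worker's time?
responses_per_labor_hour = hourly_rate / cost_per_response
print(f"~{responses_per_labor_hour:,.0f} responses per hour of labor cost")  # ~5,000

# How much saved time pays for the subscription each month?
break_even_minutes = (subscription_per_month / hourly_rate) * 60
print(f"Break-even: ~{break_even_minutes:.0f} minutes saved per month")      # ~12 minutes
```

On those assumptions, a single use case that reliably saves half an hour a week more than covers the subscription, which is exactly the bar described above.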

[00:32:06] HW: You clearly have been down this path, and I think relative to many organizations you're on the advanced side. What are some pitfalls to avoid? Have you had anything where you were like, "If I could give myself the advice not to chase that rabbit, I would"? 

[00:32:20] GS: I think the pitfalls are the ones that people talk about. The biggest pitfall is misinformation and unrealistic expectations. The misinformation is coming from media: "It's stealing our data. It's going to take our jobs. We can't trust big tech." That might be true. I'm not naive. Some of big tech are not trying to save the world. Big tech's not really trying to make education or healthcare more accessible. They just want to make more money. They want us addicted to AI because they want to charge us 20 bucks a month for it. That's the reality of the situation. It's our job to find use cases and, frankly, avoid some of these pitfalls. But some of this is misinformation. Some of this is just missed expectations, right? It's being oversold to us. 

And so, I think the first thing I'd say to any leader is don't oversell it. Be optimistic but pragmatic. Acknowledge that it's not for the anxious. If you're culturally anxious, if your team or you as a leader are anxious, AI is not going to help. It's going to hurt. It is in this moment. 

And so, that means you might want to wait. My advice to some is wait. If you're anxious, if everything has to pencil out, if everything has to work, if you're a no-mistake culture. If you've got – if this is not going to land well, then don't do it. You can sit on the sidelines at least for a while depending on where you are in the crosshairs of AI. That's the first thing every leader should do. 

Every leader should have an honest conversation with themselves, or with a thought partner that they trust, or more than one, to really assess: how soon do I think I end up in the crosshairs? 

[00:33:43] HW: And I think on that thing – and I'll say this as a statement, but it's really a question. To your point about the future and using AI to make big transformative changes in your organization, it does also seem that things are moving so fast right now that if you were to even try to spec out the future you could build with AI, and you start the million-dollar project tomorrow, you're very likely to build something that's going to be wrong. Is that a fair statement? 

[00:34:08] GS: I think that's a fair statement and I wouldn't do that. I would have obviously a product vision, or a product strategy, or some idea of what I think I want or where I think I might be going with this. But we have to think about this as a thousand experiments inside of a single road map. Otherwise, I think the risks are too great particularly if you're an incumbent, right? And even for startups, and we're seeing that every week now in Silicon Valley in terms of AI startups, right? Talk about pivots. They're having to pivot every week based on what OpenAI is doing in terms of changing the model, changing the capabilities, changing the economics of the model and so on. 

But back to your pitfall question, it's all about leadership at this moment and just having realistic conversations with people around this. Here's what I think is possible. Here's what these tools can do. And here are the first three, or five, or 10 steps we're going to take. And we're just going to experiment our way into this, both in terms of spend, capital and time. 

And I get it. If you're the CEO of a 20,000-person organization, 20 bucks a month adds up. It's going to be $4 million of incremental tech spend. It's not in the budget currently. And so, your CIO is saying, "Oh, okay. I got to pay that – I got to pay that for everybody? What do you want to take out of the road map?" Because it's four million bucks. Or you got to grab that money from some other – obviously, some other source. 

It's real money when you scale that up. I get that. Even if you do the discounts and stuff like that. Starting in a meaningful but small way is my opinion. And grow into it or experiment into it. But be very – to your point, it's changing so fast. You can end up in the crosshairs quickly. 

[00:35:30] JU: Yeah. That's right. One thing I really like, Greg, about your class is you encourage students to re-evaluate every three months. I think that's great. It's not just that you run a framework one time and then you take your marching orders. It's every three months – this now needs to be a part of your regular rhythm of review. Where are we in the crosshairs? Where are we in our organizational development? Where are we with the latest advances that have occurred? How do they affect our business model? And if you're not regularly reviewing, you're going to be operating off of an outdated model very quickly. 

[00:36:04] GS: Absolutely. And I think that's right in terms of its limitations. Its capabilities and limitations. Oh, it's too biased. It's less biased now and it's getting less biased. It hallucinates. It hallucinates less now. And it's going to hallucinate less in Q1. You check back in on capabilities, limitations, business model costs – the cost of queries, for example, or API costs coming down dramatically on November 6. Right? 

Yeah, I think that's right. I think this idea of a one-year AI strategy makes no sense. I think you should have a head of AI, by the way. I think it should be a business person. Not a tech person. That's the person responsible for doing this every three months. 

[00:36:35] HW: It does seem to be a golden opportunity for a lot of IT teams to suddenly get back into the glory days. It'll be interesting to see how many of those IT teams are actually going to grab that one, instead of seeing it as this necessary evil, comparable to "let's not give everybody in the office internet," as some of us might remember from the 90s. 

[00:36:52] GS: Sure. Yeah. Or banning Facebook. Remember that? But it will happen faster. This reminds me of how IT reacted to SaaS, right? Which is, "Wait a minute. It's not invented here. Or I didn't choose that," kind of thing. And then eventually they lost that battle, so to speak. But it took probably a decade really, right? 

I think it's similar, right? The reality is going to be shadow AI. It's already everywhere. Just like shadow IT – SaaS generated shadow IT – AI is everywhere already. And it's the high school kids first and then the college kids. And now it's the younger employees. And they know that their work is mind-numbingly repetitive. They know it is. 

[00:37:32] JU: By the way. Sorry. This is actually non-trivial. Henrik and I have a mutual friend, Bracken Darrell, who I understand made major inroads when he was at Logitech in part because his kids' gaming habits informed some of his big strategy. Greg, you have sons or you have kids, I think. 

[00:37:50] HW: Yeah, kids. Yeah.

[00:37:51] JU: Yeah. How have your kids affected your understanding of this and your mindset towards it? 

[00:37:56] GS: They haven't helped, to be honest. And I think they started to ignore my texts by June or July. Because two of them are working. Two older kids working in tech. And one younger just graduated from college and is now working, actually in sales, in New York. And he and I now have a more active dialogue as he's using AI more in his day-to-day flows, as he should. But my other two, I think, just got sick of me – 

[00:38:22] JU: They unsubscribed from the family text, right? 

[00:38:24] GS: Yeah. They unsubscribed from the family text because it was dad posting. 

[00:38:27] HW: Seriously, dad. Stop talking about Sam Altman.

[00:38:30] GS: They started shit posting back about AI. 

[00:38:33] HW: I have one question that I'm curious about. Because, obviously, you're such an expert in education. It does seem very obvious that both the texting format, the going back and forth, but also being able to completely personalize the education for that specific individual has the potential to change education quite a bit. Maybe as we're finishing off and you're looking to the future, could you give us a few sentences on what's your thought on what this will mean for education?

[00:39:02] GS: Yeah. For sure. First of all, having been someone who's tried to disrupt business schools, it's harder than it looks. And I'm always reminded of the conversation I had with an associate dean at a top 10 US business school who said, "Greg, your product's great. Because we've had people on my team take your courses. And your price is too cheap. I hope you go out of business or run out of capital." 

These guys have strong brands, and just culturally, we instill so much value in that brand, right? Whether it be undergrad or business school. It's harder to disrupt education than you realize, I think. And most edtech entrepreneurs, I think, would agree that it's a tougher sell, or it can be. 

I would say – and back to how we talked about what's Section doing with AI? We're moving faster now. The November 6 Dev Day at OpenAI for me and the lowering of the token cost, the release of the GPTs and kind of the robustness of the model, the improvements to the model to me really were a signal that we need to accelerate. Meaning, I do think that – particularly if you're sitting on what I call dumb video libraries of training, and we're not that, because we're a live learning platform. But we're next, right? 

But if you're certainly an asynchronous learning platform today, I have to believe that someone's going to create a much better experience. That is because it's personalized and relevant. Because it's the number one question we get from our students, which is what about my industry? What about my country? What about my job in terms of I'm learning this? But how do you make it contextually relevant? And clearly, AI can do that. 

I'm optimistic in that respect. And we're going to have to move quickly. I think we will, to build it. I'm also thinking about it a little bit differently, which is, a lot of people don't want to learn. Let's be honest. We burn people out on education, I think, probably by saddling them with $40,000 of debt, at least in the US. And most of us are not lifelong learners. If you do any kind of segmentation analysis of US customers – US consumers, rather – you'll get a 5% to 10%, "Oh, they're the lifelong learning segment of the market." 

Everybody else has day jobs. And then they want to go home, and watch Netflix and look after the kids. And so, I'm thinking about this more as can I reinvent what we are building basically not as a course but as a co-pilot essentially? Right? 

[00:41:15] JU: As an example, I think I heard you say this the other day, Greg, just for listeners, to make it very pragmatic. Some people want to learn how to make a product strategy. Most people just want a product strategy. And you can run a course to teach the people – as you said, maybe 10% to 15% – who actually want to learn how to create a product strategy. For the other 85% who just want a product strategy, that's a fascinating, to me, evolution of the brand and evolution of the business to say, "Why don't we just deliver a better product strategy for them?" 

[00:41:44] GS: Yeah. That's right. I think that's how I'm starting to think about it. And there's hundreds of tasks like that. Some as small as, how do I do a good one-on-one? Or how do I do a performance review? And some are infrequent but more strategic. How do I build a product roadmap or a product strategy? How do I do business strategy? How do I do competitive analysis? And I think that we can teach people, or we can actually sit alongside them and get the output. Now the question will still be, if all of us are getting good V1s out of AI, then what? That's the question we're going to have to answer next. 

[00:42:14] HW: Yeah. It definitely seems very interesting when you look at the graph of where AI can take you. There are going to be people under that line who either become redundant or can be lifted. And there are people who are going to be above it. But at one point, the graph is going to change quite a bit.

[00:42:27] GS: Yeah. Absolutely. Yeah. We might all be looking at okay V1s of product strategies in a couple years where no one's really – we need some of those to be V2. To be better and differentiated to actually invest in them. 

[00:42:40] HW: This has been so inspiring, not only because you're inspiring, but because of what you guys have done already. I'm sure a lot of people who are listening are going to be able to take a lot from it. Very much appreciate you taking the time to talk to us today.

[00:42:52] GS: Likewise. Thank you for inviting me. I've enjoyed it. It's a great conversation. 

[00:42:55] JU: Tell folks how they can find you, follow you, engage with you. 

[00:42:58] GS: Yeah. Sure. Follow me on LinkedIn. And I'm starting to post more frequently. I've been unimpressed with the growth in my followers but impressed with the engagement that I'm getting on LinkedIn. I'm enjoying it. You can find Section at sectionschool.com. Yeah, find us on sectionschool.com and we'll make it easy to check us out and experience a live learning course. It's nothing like LinkedIn Learning.

[00:43:20] JU: It's nothing like LinkedIn Learning. Greg, thank you, as always. Look forward to continuing the conversation. Henrik, thank you. 

[00:43:26] HW: And best of luck with everything. 

[00:43:26] GS: Thanks, guys. 

[00:43:29] HW: Okay. Jeremy, give me your first impression after we had this conversation. What was the thing that you remember? 

[00:43:34] JU: The first thing that comes to my mind when I reflect on the conversation with Greg is the rhythm of acknowledging explicitly with the team when they used AI. I thought that's great. Doing AI shoutouts just like you'd shout out a human being. Shout out when someone uses AI. I think it's a great kind of mechanism to normalize. And then the other, he described it as an early light bulb moment: finding an infrequent but high-value use of AI, like reviewing the board deck. I think that is a magical way not only of demonstrating value high up the food chain but also for the CEO to be open about the way that AI can affect anybody's job and the way it can amplify anybody's job. A couple of things. What about you, Henrik? What did you take away?

[00:44:15] HW: Two things. Obviously, it's hard not to fall in love with "AI can be testosterone for your brain," which is just a funny thing for a middle-aged man. But I do think that this idea of seeing it not as something that's replacing you, or something that you are necessarily tasking, but as something that is becoming an Iron Man suit, or becoming something that you can use to make yourself better, I thought was interesting. 

The second thing, which I think we've heard before but is just, I think, important, is that most of the people who seem to know how to use this and are using it a lot, they keep talking about it as a thought partner. Not necessarily somebody you send tasks to. And so, I remember you said the other day, when you come out of a meeting and you have a great idea, you basically ramble into the ChatGPT app and then it basically structures the thinking back to you. I have exactly the same thing. 

I like to use a lot of words. I tend to be a little bit philosophical in the way that I communicate and say things. And so, being able to just get the bullet points back from something that you just vomited into your phone has been very useful. And I think he seemed very much to be of that kind of school of thought also. Seeing it as something that you're having a conversation with when the paper's empty. 

[00:45:28] JU: Yeah. Yeah. And then the last thing probably is the regularity of review. Don't assume that an AI strategy is set for long. Actually, one of the things that I've done because of Greg's influence is I've put calendar alerts on my calendar three months out, six months out, nine months out to revisit my own AI strategy. And I think a really good practical thing is to recognize that the world is changing so fast, what you think about it should probably be regularly and deliberately updated. 

That's all for this episode of Beyond The Prompt. But, hey, before you go, would you do us a quick favor? Would you hit subscribe? We've got a bunch of amazing advice coming your way and we don't want you to miss any of it. We'd be grateful if you'd like and share this episode with someone you know who's also curious about how to add AI to their life and their organization.

Until next time, take care. 

[OUTRO]

[00:46:17] JU: By day, I'm a professor. But I absolutely love moonlighting as a front-row student next to you during these interviews. One of my favorite things is taking the gems from these episodes and turning them into practical tips and lessons for you and your team. If you want to share the lessons you picked up from this episode with your organization, feel free to reach out. I'd be thrilled to do a keynote on the secrets that I've gleaned from creative masters or put together a hands-on workshop to supercharge your next offsite adventure. Hit me up at jutley@jeremyutley.design for more information.

[END]
