Episode 7: Astro Teller
Judging The Experiment, Not The Outcome with Dr. Astro Teller
Episode 7: Show Notes
Ever wondered what it takes to revolutionize the world through innovation and moonshot thinking? Enter Dr. Astro Teller, co-founder and captain at X, Alphabet's renowned moonshot factory, who’s on a mission to invent and launch technologies that can change the world for the better. In this enlightening episode, Dr. Teller gives us a fascinating peek behind the curtain of the moonshot factory and how they design their environment to foster creative thinking and innovation. Dr. Teller shares his expert insights on the power of intellectual honesty, the art of unlearning limiting beliefs, and the essential skill of judging experiments rather than outcomes. We explore the strategies that foster creativity, the delicate balance between audacity and humility, and the value of maintaining a beginner's mindset in the journey of innovation. Dr. Teller also shares his thoughts on how to create internal urgency, his predictions for generative AI, and why he would love to see more companies with a moonshot division, like X. Join us as we embark on a fascinating exploration of the transformative power of innovation with a true visionary, Dr. Astro Teller!
Key Points From This Episode:
Dr. Teller’s work at X, Alphabet's moonshot factory for developing breakthrough technologies.
The deep-rooted beliefs that most people need to unlearn after they arrive at X.
‘Bad idea’ brainstorming and other exercises to help overcome limiting beliefs and habits.
Dr. Teller’s strategy for getting people to be honest with him as a CEO.
Why innovation is not a game for loners.
How to judge the experiment, not the outcome, and why this is essential.
A breakdown of how Dr. Teller grades experiments.
The five principles at X for ensuring intellectual honesty and innovation.
Insights into the criteria that X institutes for their projects.
How they approach quarterly reviews for an X project.
Understanding X’s role at Alphabet and how they monitor their pipeline.
Dr. Teller’s approach to balancing a “beginner’s mind” with input from experts.
How he determines whether someone is a good fit for a learning intervention.
Why having high audacity and high humility is such a powerful combo.
Using processes to innovate reliably and how it differs from gambling.
Why being willing to let go of a project at X is crucial.
Dr. Teller’s answers to our rapid-fire questions!
Quotes:
“Alphabet is a huge place. There are lots of innovative aspects of Alphabet. We're not the only one. But X's job is to be at least one of the important places where meaningfully new aspects of business could emerge for Alphabet.” — @astroteller [0:23:32]
“I'm a big believer in the beginner's mind, and the power of a beginner's mind [in] setting out on a journey.” — @astroteller [0:30:21]
“It's almost impossible to have really radical innovation without breaking some of the basic assumptions that the experts would all agree to.” — @astroteller [0:30:42]
“Experts can practice a beginner's mind. But it is extra hard for them to do in their own area of expertise.” — @astroteller [0:33:00]
“There is a process by which you can reliably innovate. But just believing you have the right answer and sticking to it no matter what, that's gambling.” — @astroteller [0:47:33]
“It doesn't make sense for Alphabet, or for the world, for me to let people start on super unlikely adventures without a commitment that [they] will stop it when it looks like it's not the right adventure.” — @astroteller [0:50:07]
Longer Quotes:
“We cannot judge you on the outcome of the experiment. It's not an experiment if we're judging you on the outcome. We have to judge you on how smart you were. How clever you were in picking just the right experiment to run. How scrappy and efficient you were at running the experiment.” — @astroteller [0:15:10]
Links Mentioned in Today’s Episode:
Deviate: The Science of Seeing Differently
Jeremy Utley
EPISODE 7 [TRANSCRIPT]
"AT: We believe that the process we're doing leads over very long periods of time to much more efficient innovation than the way it is often otherwise done. And that's why we're very focused on the process. If someone was going to come to thrive, one of the questions back to their manager would be, "Are you going to be grading this person on the how or the what? On their outcome or on their behaviors?"
[0:00:35] JU: You're listening to Paint & Pipette. I'm your host, Jeremy Utley. I teach innovation and entrepreneurship at Stanford University. Thanks for joining me to explore the art and science of bringing new ideas to life. Let's dive in.
[INTERVIEW]
[0:00:59] JU: All right. Astro, thank you for joining us today.
[0:01:01] AT: Thanks for having me.
[0:01:03] JU: It feels weird to say that because you're actually having me in your space, which is very cool.
[0:01:06] AT: Well, welcome to X.
[0:01:07] JU: Yeah, this is great. It's great to be here. I've been excited to have this conversation since we met last year. Right before we started recording you said, "Okay, I used to do an innovation lecture in 60 minutes. Now I do it in 60 seconds." I feel like we've got to hear the 60-second innovation lecture before we dive into the rest of the conversation.
[0:01:26] AT: I'm happy to. The full story was I don't really give lectures on innovation anymore because it's really awkward for the other 59 minutes. I don't know what to say. But here it is in 60 seconds.
[0:01:35] JU: Okay.
[0:01:36] AT: I used to meet with CXOs from big companies all over the world and give a 60-minute lecture on innovation. I got to the place where I would start with a test. Choice A. Choice B. Choice A, you can give a million dollars of value to your business this year, guaranteed. Or choice B, you can give a billion dollars of value to your business this year, but it's not guaranteed. It's one chance in 100. A, million, 100%. B, billion, one chance in 100.
Who's choosing choice A? It's an innovation lecture. Nobody raises a hand. Right? Who's choosing choice B? Choice B has 10 times the expected utility of choice A. Everyone raised their hand. Big smile on their face. And I said, "Congratulations. You've all passed the math test. Now leave your hand up if, in your wildest dreams, on their best days, your manager, your CEO, your board of directors would even remotely support you choosing choice B." And every hand in the room goes down. And then I say, "You don't need a lecture on innovation. You need a new manager." And then I don't know what to say for 59 minutes.
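(For readers who want the arithmetic spelled out: a minimal sketch of the choice A versus choice B expected-value comparison described above. The dollar figures and the one-in-100 odds come straight from the transcript; the variable names and the code itself are purely illustrative.)

```python
# Choice A vs. choice B, as described in the lecture above.
# Figures come from the transcript; this sketch is only illustrative.
value_a, prob_a = 1_000_000, 1.00        # choice A: $1M of value, guaranteed
value_b, prob_b = 1_000_000_000, 0.01    # choice B: $1B of value, one chance in 100

expected_a = value_a * prob_a            # $1,000,000
expected_b = value_b * prob_b            # $10,000,000

print(f"Expected value of A: ${expected_a:,.0f}")
print(f"Expected value of B: ${expected_b:,.0f}")
print(f"B is {expected_b / expected_a:.0f}x A")  # 10x, as stated in the lecture
```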
[0:02:47] JU: Yeah. Well, maybe we fill in the next 59 minutes. Because there are a lot of folks in our community who are probably in the position you're describing: a CXO with a hand up on the math question, who then hears, "You need a new manager."
I would say from the executive programs of the d.school, which I had the pleasure of co-leading for the last 13 years, a number of people have left their jobs. I've often felt like I don't know if that's a success or a failure that, after a program, they leave their job. But it's related to what you're saying.
My question is this: if I need a new manager, what do I look for? If I do the math right, and if I'm humble enough to acknowledge that the problem is I'm not in the right environment, and I'm going to interview my manager, what do I ask?
[0:03:34] AT: What about this? Imagine that I'm interviewing for a job to report to you and I give you that choice A and choice B. And I say, "Do you understand that choice B is more valuable by a factor 10 in this totally made-up hypothetical example than choice A?"
[0:03:50] JU: Right. Right. Right. Yes.
[0:03:52] AT: Great. Is this a place where, not everybody, but some people may have to keep the lights on by doing some choice As? But is my job going to be doing choice As or taking choice Bs?
[0:04:04] JU: I think, as a manager, I feel like I know what the right answer is.
[0:04:09] AT: No. I want the real answer. Because we're going to torture each other, Jeremy. Torture each other after I take this job if we're not straight with each other now. Do you want me doing choice Bs? And do you have air cover for me doing those choice Bs?
[0:04:25] JU: Okay.
[0:04:27] AT: If you do not, tell me now. Because I don't want to torture you after I join your team. Because I'm only going to join if you say yes and you mean it.
[0:04:35] JU: And so, say, I say yes?
[0:04:37] AT: And mean it presumably, hopefully.
[0:04:39] JU: What is the evidence for you as a prospective employee that would lead you to believe that my yes is true?
[0:04:46] AT: Well, I would say, "Awesome. I've been looking for you for a while. Tell me more about how you got that air cover. Because this is incredibly rare." I mean, it shouldn't be incredibly rare. It should be incredibly widespread. It's a way to make lots of extra money. But it requires mostly getting the answer no, which is what the process of innovation is. Tell me how your organization ended up giving you the right to have your people do choice Bs all day long. I believe it. I desperately want to believe it. Just tell me how.
[0:05:19] JU: And so, you want examples of whether it's board support, whether it's resourcing, whether it's budget, et cetera, that reinforces the fact that we're going to get a lot of things wrong in order to get something really big. Right.
[0:05:30] AT: Exactly.
[0:05:31] JU: Okay. What do you find happens in the other 59 minutes of that lecture? When you say you only need one minute now – or you don't do it anymore specifically because it gets awkward?
[0:05:42] AT: Yes. What I suggest we do is spend our next 59 minutes presuming we're in that rare circumstance where people can choose choice B. And then ask the question, "What does it mean to support choice Bs?" Most of the world, the vast majority, even of people who say they want innovation and do innovation, have absolutely no serious interest in choice Bs and in no way support choice Bs, as I'm sure you know.
They're maybe not hopeless. But let's focus on the ones who are sufficiently serious they would change their culture to support the choice Bs and talk about what does that look like? Because that's everything at X. We spend all of our time saying, "Okay, now that we've committed to the choice Bs, what does that mean to support that? How does that play out in terms of what you see in the lobby? How does that play out in terms of how we compensate people? How does that play out in terms of how do we promote people? How does that play out in terms of which projects live and die? Everything has – because what I've just described to you never happens.
And there's a reason it never happens. It's because even the people who say they want it can't actually tolerate the results. All the mess and the chaos that comes from doing choice Bs.
[0:07:00] JU: Right. You mentioned actually last time we talked that, on average, it takes somebody – I don't know if this is actual data. But you said it takes somebody three years to unlearn all of the conventional thinking. What have you seen to be the deepest-rooted wrong beliefs that somebody has to unlearn after they arrive here?
[0:07:22] AT: Let me give you some examples. You know the animal studies of learned helplessness, where they give the animals shocks, or something horrible like that, every time they go near the door, until they learn to just never try that again? You can leave the cage door open and they'll never leave.
Even though the shocks are off, the poor dog never actually tries to get out anymore. That's kind of what it feels like here at X. People have learned stultifying habits from the rest of their lives since they were six years old. We can say, "Go be creative." But when you really look behind the curtain at how people feel about saying something that is almost certainly wrong, they feel horrible about it. And yet brilliant ideas only come from that pool of things that are almost certainly wrong.
Even if we're all going to celebrate them to high heaven every single time they say something that sounds nuts, how many years will it take for them to really get in the habit of even starting to test the envelope and discover that they would be celebrated when they say that? It's hundreds of little things like that.
[0:08:36] JU: Are there practices or tools that you give people to help them? I give the simple prompt, for example, everybody write down a bad idea.
[0:08:45] AT: Yes, we do bad idea brainstorming.
[0:08:47] JU: It's like you see people going, it's like, "Oh." And they go, "Well, why bother? I'm just going to throw it away." The price of a bad idea is zero. But the psychological price of it is hard to overcome. My point is that's a bad exercise. It illustrates how people can't do it. But are there practices that actually help people go, "Oh, I'm going to say something stupid."
[0:09:07] AT: That's interesting. I've had exactly the opposite experience.
[0:09:10] JU: Okay. Say more.
[0:09:11] AT: When we say, "Let's do a brainstorm. Everyone, write down your best ideas," that's when they freeze up. Because they're now in a prove-that-they're-smart routine in front of me and all of their peers. And that causes this brain freeze that completely kills creativity.
[0:09:27] JU: Because I've got to be smart.
[0:09:29] AT: If I say we're going to have a bad brainstorm, right? Climate change. How are we going to fix it? Bad ideas only. I'll throw the first one on the table. Let's genetically engineer much smaller people. They'll all have smaller carbon footprints. What's your bad idea?
[0:09:43] JU: We should give everybody cigarettes so that they can constantly be emitting carbon.
[0:09:48] AT: Okay. And, by the way, they would kill themselves with the cigarettes, which will also lower the carbon footprint of humanity. Great. What's the next one? As soon as you're out of the "I've got to prove I'm smart" mode, the ideas start coming like this. And we're not going to do the idea I threw out. And hopefully, we're not going to do the idea you threw out.
But you'd be surprised what a high percentage of the ideas, once they start coming, are actually really great. Surprisingly great. And it unleashes people to start putting ideas on the table. Because the thing that was holding them back from opening their mouths in the first place was the "I've got to only say smart things" feeling.
[0:10:25] JU: Right. One of our colleagues at Stanford, Dan Klein, is the head of the Stanford Improvisors. And he says, "Don't be creative. Be obvious." If you just say what – he says dare to be obvious. If you just say what's obvious to you, chances are, because others have a different background experience, they're going to go, "That's super creative." And then if they have the permission to say what's obvious to them, it's wildly creative to others, right? Not because any one of us feel the pressure to be creative, but because we all have the safety or the permission to be obvious.
Okay. A bad idea brainstorm is one technique. Are there other techniques to get people out of that? As you said, that sense of learned helplessness?
[0:11:05] AT: Sure. There are lots of things that people have a hard time taking to. I am serious about getting better myself. I hope I have a real growth mindset. But in meetings or in private, telling me that I'm wrong, because I'm the CEO, is hard for people. They're not going to do that unless the first thing I do is either high-five them or get up and hug them every single time they tell me I'm wrong. And I may tell them after I do that that I disagree with them. But the first thing I do is high-five them.
[0:11:39] JU: Do you have that kind of, for lack of a better word, code of ethics? Meaning, in your mind, you're going, "If somebody disagrees with me, my knee-jerk reaction has to be to high-five them. Because I have to reinforce the value of that."
[0:11:50] AT: Yes.
[0:11:51] JU: Okay.
[0:11:52] AT: I mean, it's one thing if I say, "Hey, I think we should turn right," and you say, "Hmm. Hey, let's consider going left." That's less scary than saying, "Astro, you're being a little bit emotional right now. You need to take it down a few notches." That is much more personal and hard to say. It's really the second kind where I'll stop and high-five them and say, "That's how we're going to get better." You just helped me. You're right.
By the time you are brave enough to tell me I need to chill out or something like that, there's a 100% chance I need to chill out. Right? And I'm like, "Thanks. That helps me." And now it's easier for me to do that to you because you did it for me. Now you can feel that's reasonable reciprocity. It's important. Because none of us really know what we're doing, especially in the innovation game.
The innovation game is not for loners. And none of us comes with the answer. It's my ideas and your ideas hitting against each other. And the sparks that fly off from that, that's what's valuable. Not my idea or your idea. Almost 100% of the time.
[0:13:00] JU: Right. Right. It's this weird thing that we can't attribute ownership to because we go, "Where did that –" it just almost emerged.
[0:13:07] AT: Yes. Other things that kind of help send people the right signals – I don't know if they can see it in the background, but you see the cement there has graffiti marks. Not like cool graffiti. Like the notes from whoever built this building. We left a lot of this building the way it was, using that cement as an example there. Because we want people to feel, even at an unconscious level, the work-in-progress nature of this building.
If there was mahogany on the walls, that sends you an unconscious signal that your ideas, the way you present yourself, the way your project shows up, has to be as done as the building. If we're a work in progress physically, that's another tiny, almost unconscious signal to you.
I think it's the addition of lots of these things that help people come to the conclusion it's safe for me to try something. It's safe for me to be wrong. I'll get more cool points here for being intellectually honest than for being right, which is not the same thing.
[0:14:16] JU: Right. Right. One of the things that you mentioned in a recent interview that I saw was grading people on the quality of their experiments.
[0:14:25] AT: Yes. If we can agree that this is going to be the land of choice Bs, these one-in-100 chances, the first conclusion you're going to come to, emotionally as well as intellectually, is that this is totally not safe. Even if Astro said that, even if all of X is saying that. I'm going to run an experiment. There's a 99% chance I'm going to get a zero. And then what happens to me? If we grade you on the output after 5, 10 –
[0:14:54] JU: Meaning success or failure.
[0:14:56] AT: Yes. Exactly. After 5 or 10, chances are good they will all be zeros. How is that good for your career? How can you even distinguish yourself from someone who is less good than you at playing the innovation game? I cannot, we cannot judge you on the outcome of the experiment. It's not really an experiment if we're judging you on the outcome. We have to judge you on how smart you were. How clever you were in picking just the right experiment to run. How scrappy and efficient you were at running the experiment. And then how intellectually honest you were at harvesting the results of the experiment for what to do next, including just stopping. If you do those things, I can grade you on those three things totally independent of whether there was something there where you went looking.
[0:15:47] JU: Say the three things again. What are you grading?
[0:15:49] AT: The smartness of the experiment. There are lots of experiments that pay off less well than others. A versus B. I am paying you to go find choice Bs for us. Don't bring me a choice A.
[0:16:02] JU: Right. Okay.
[0:16:03] AT: Number two, how well do you run the experiment? You could pick a choice B and then spend a million dollars in a year running the experiment. I'm going to be more impressed if you can find out the answer for $10,000 in a month, right? That's just worth more to X, to Alphabet, to the world. And then the third is –
[0:16:22] JU: A lot more of that stuff.
[0:16:24] AT: Exactly. Right. You're just getting more learning per dollar, which is the number one metric here at X: learning per dollar. But then the third one is, great, you picked a great thing to try, a choice B. You ran it in a really scrappy and efficient way, and you have the data now. Are you going to be intellectually honest? If you and the five people who work with you are so busy being proud of the time machine team you work on, or so afraid of what will happen next in your life if the time machine team ends because it doesn't work, that you can't get out of your mouth, "The data says there's nothing here," you've failed. In fact, you're stealing from X, from Alphabet. And, honestly, from the world. Because we could otherwise be getting on to something else we could work on together.
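(A minimal simulation of the math behind "judge the experiment, not the outcome": if every experiment is a choice B with a roughly 1-in-100 chance of paying off, then after ten experiments the overwhelming majority of people have only zeros to show, no matter how skilled they are. The 1% probability comes from the hypothetical above; the population size, the ten-experiment career, and the code itself are our own illustration, not X's actual data or process.)

```python
import random

def payoffs_after(num_experiments: int, p_success: float = 0.01) -> int:
    """How many of num_experiments one-in-100 bets happened to pay off."""
    return sum(random.random() < p_success for _ in range(num_experiments))

random.seed(0)
people = 10_000
all_zeros = sum(payoffs_after(10) == 0 for _ in range(people))
print(f"{all_zeros / people:.0%} of people show nothing after 10 experiments")
# Roughly 90% (0.99 ** 10 ≈ 0.904) -- outcomes alone can't separate the skilled
# from the unskilled, which is why the grading falls on the choice of experiment,
# the cost of running it, and the intellectual honesty about the result.
```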
[0:17:09] JU: Right. Right. There are so many biases that keep people from being honest. Are there cultural shorthands or techniques to help people be honest? Because I imagine that, similar to learning that it's okay to fail, that's a learned behavior. It's hard to be honest when you really hope something's going to work. Our capacity for rationalization is almost infinite, right? We can always say why, you know?
[0:17:33] AT: The five principles at X. Number one, smart risks. That's the choice B. Number two, perspective-shifting, which is usually how you get to an interesting thing to go try. Number three, iterate, that is, tight learning loops. Number four is dispassionately assessed. Intellectual honesty. And number five is reinforce the principles.
And I understand that that sounds a bit recursive, but it's exactly what you just said. If those four things are the core of what we do here, and why we're maybe even a little bit better than others at doing this particular thing, we won't stay that way unless every one of us, in every interaction we have with each other, is reinforcing the fact that we value those things.
[0:18:21] JU: Right. Right.
[0:18:22] AT: And that's why it's one of the principles.
[0:18:24] JU: Right. No. It reminds me – I don't know if we're allowed to use the word Amazon here. But it reminds me of something that I've read about Amazon: they have their leadership principles, right? Those are kind of famous. But they're often mentioned and they're often reinforced. And the thing is, what you reinforce ends up being valued. It ends up being common parlance, right? I love that you have a principle of reinforcing the principles. That's great.
[0:18:48] AT: And it's for the reason you exactly noted, which is this is as much social norms as anything else. And it takes a long time to get into it because of all of these headwinds to just doing things somewhat differently. Even if they're totally obvious. But once you get there, once it's been normalized, it's as hard not to do them as it used to be to start doing them.
[0:19:11] JU: Are there mechanisms? For example, at the d.school, we have things like design reviews, where, at a certain cadence, everybody will come together and share their problem statement, right? For us as instructors, the work is basically done, because it's manifestly obvious by comparison what's a well-crafted problem statement and what's not. Are there mechanisms like that you use to reinforce principles, or kind of principle moments?
[0:19:37] AT: Sure. Let me give you two examples. When we have a team here and it's reasonably well-developed, it's out of the early pipeline. It's an X project. We have what you could think of as a little bit like baby board meetings for them. We meet quarterly. But we call them QLRs. Quarterly learning reviews. And the focus of them is: what did we learn?
We see ourselves as buying options for the world, for Alphabet. That's what we're doing. And options either become very valuable or very not valuable depending on what happens in the future. The more you can figure out the shape of the landscape you're heading into, the wiser you can be about which options you keep buying and which options you stop buying.
And so, that's the focus. That's an example of how we make sure that they feel each quarter they're showing up. Not to say, "Here, I got some revenue. Here's the bell I rang." I mean, we take the revenue. And projects here get revenue. But if it's not forwarding the long-term value production of the thing or if it's a ladder to the moon, saying I got closer to the moon in the last quarter is not actually useful.
Another example is, have you seen the three circles that X has? A huge problem with the world, a science fiction-sounding product or service, and then breakthrough technology. Exactly. And the intersection of those three is where we want to spend our time. There you go.
[0:21:13] JU: Huge problem. Science fiction technology and breakthrough technology. Right?
[0:21:17] AT: Yeah. That's right. Science fiction-sounding product or service. A radical solution is what we call it in the upper right-hand corner. And so, that's another way. If you come to anyone at X and you say, "I've got an idea for a frictionless surface," everyone here knows those three circles well enough to say, "Hmm, that's not a whole – we want to get to the place where we have a moonshot story hypothesis that we can go test. Tell me more about the parts that you haven't said. What problem are you trying to solve? A frictionless surface sounds like a breakthrough technology. But what product or service would you build with that, even hypothetically? And then what problem would you solve with that product or service?"
Because until you have those things – I mean, we could go test the frictionlessness of your surface. But since that's not the business we're in, that's an academic exercise. We're chasing impact. We're not chasing sort of academic knowledge here. We really want things to be in that form of a huge problem with the world, a radical proposed solution and a breakthrough technology before we can even start testing the hypothesis. That's another kind of microcosm way that we help people to sort of recontextualize, "Oh, I need a hypothesis. Then I test the hypothesis. The hypothesis better be pointing at least a tiny bit towards a technical road map, a product or service road map and a business road map. At least in the very long run. Or why are we doing this?"
[0:22:48] JU: Right. Right. It doesn't even meet the criteria yet. You used a phrase I'd love to drill into a little bit. You said the early pipeline, when something gets out of the early pipeline. This is one thing I would love to understand. One thing we talk a lot about in innovation is that there's a funnel, and there's a lot of stuff that doesn't work. Most people have no concept of the scale of the funnel. Thousands of ideas, right? Can you talk about how you monitor your funnel? When you say early pipeline, do you think in terms of scale? Do you think in terms of number of projects? Number of problems? What does it take to exit the early pipeline? Et cetera. Et cetera.
[0:23:25] AT: Let me take a step back. Because I think, in order to answer that, understanding what X's job is within Alphabet would help.
[0:23:31] JU: Okay. Yeah.
[0:23:32] AT: Obviously, Alphabet is a huge place. There are lots of innovative aspects of Alphabet. We're not the only one. But X's job is to be at least one of the important places where meaningfully new aspects of business could emerge for Alphabet.
Things like Wing, drones for package delivery. Or Waymo, our self-driving cars. Or, very recently, Mineral, which is our moonshot for computational agriculture. Getting automation to the farms of the world to help us get to the sustainable agriculture future that we really need for climate change reasons.
Each of these things hopefully is broadening what Alphabet can do to help the world. And our job is to find, even very rarely, one of those things and give it to Alphabet. In order to do that – because I don't believe that I'm much better than random, or that you are much better than random, or honestly that anybody anywhere in the world is much better than random at predicting the future – I owe it to Alphabet to make sure that this is done efficiently. Otherwise, why would they keep giving us money?
We need to be very open-minded at the wide part of the funnel. We look at well more than a thousand ideas a decade in some meaningful detail. A lot of them. Then once we have these moonshot story hypotheses, some hypotheses to test, some don't make it past the first month. Some we end in a year or in 18 months. Because it takes that long before we've really discovered, "Ah, this is why this won't work."
And so, we're always looking at the reward-risk ratio of each of the things that we're looking at. And the question at any point in the funnel is not, "Could this work?" Lots of stuff could work. The question is, "Given what we've learned so far" – back to learning per dollar – "is this raising or lowering the value of X's portfolio for Alphabet if we keep working on this unlikely-looking idea as opposed to that unlikely-looking idea?"
And then all of the questions we ask around either raising the reward, lowering the risk or even just tightening the error bars on the reward or the risk. We have a very wide part of the funnel where we're looking at ideas. And at some point, roughly when it starts being a full-time job for at least a few people, it becomes part of the early pipeline for X.
[0:26:18] JU: Everything you described is pre-early pipeline.
[0:26:22] AT: Well, I mean, it takes many years to go from early pipeline to something like Wing.
[0:26:28] JU: Right. Right. Commercialized.
[0:26:29] AT: But at the very beginning, we're looking at all of these things.
[0:26:33] JU: Spending a month on something long before it's even considered early pipeline.
[0:26:37] AT: Exactly. And then at some point, it becomes early pipeline, where it usually stays for a while. We happen to have two people who, between the two of them, each oversee about half of our early pipeline. And that heterogeneity we have found to be very healthy, actually.
And I'm like their customer. I tell them what I think. But it's really important that they have the opportunity to prove me wrong just like ultimately Alphabet is my customer. Right? But they give me a lot of space to surprise them. Because sometimes things turn out to be better than we thought. But, ultimately, if I don't make something that the world or Alphabet wants, then what was the point in spending all that time on it?
They're working through this process in the early pipeline. And once it gets to a place where we have learned enough and removed or shaped enough of the risk, that we can say with more confidence, "Wow. This really might be a once-in-a-generation opportunity for Alphabet and for the world." That's roughly the time where it would become an X project. And, roughly, there might be 10 or 15 people when that happens.
[0:27:50] JU: There are multiple – what I'm hearing you say is there are multiple stages that precede becoming an X project. And then there's stuff that happens afterwards that doesn't always stay an X project, right? You get other – what is that?
[0:28:02] AT: Right. I mean, from an X project, we sometimes – even when they're X projects, we stop them. We kill them. Because it just turns out it's not the best way for us to spend our money to help the world or to help –
[0:28:14] JU: Even despite all of the investment at that point. All the hope that you had on that risk-reward. You might get to X project status and go, "After 18 months, the answer is no."
[0:28:22] AT: Yeah. Sure. Sometimes these things are really exciting, but it turns out they thrive best, and help Alphabet or the world best, if we move them back to Google. We do that sometimes. Sometimes, as you were saying, they become Other Bets. They move from X and they become a part of Alphabet that's like a wholly-owned subsidiary, but then starts to have its own trajectory that's very different from X's. And sometimes they spin outside of Alphabet and become venture-backed businesses.
[BREAK]
[0:28:58] JU: How many ideas have you tested today? How about your team or organization? Idea Flow is a set of tools that help you test more ideas faster. I've worked with both high-growth startups and global organizations, and success comes when you test more ideas faster.
Want to learn how better Idea Flow can help your organization? Check out my website, jeremyutley.design or reach out to me at jutley@jeremyutley.design. I'd love to talk with you.
[INTERVIEW CONTINUED]
[0:29:32] JU: There are a thousand directions we can take this in. One thing I can't help but wonder at this point – and my friend Josh, who's monitoring the chat line – thank you, Josh, I'm giving you a shoutout – just mentioned some folks are asking a question that has been on my mind. If you see me look at my watch, it's not because I'm bored. It's because I want to make sure that we're kind of integrated here. What's the role of expertise versus naivete, or the novice? Right?
I would preface that statement by saying I'm a big believer in the value of a novice perspective complementing expertise. How do you think about, whether it's team composition or domain of impact, pairing expertise with that or not pairing expertise with that?
[0:30:12] AT: First of all, if you came out and went over to my huddle, which is just around the corner here, you would see that there's a big tarot card of the fool taped up on the door. I'm a big believer in the beginner's mind, and the power of a beginner's mind in setting out on a journey where you want to have an extraordinary adventure. Not starting out with a beginner's mind essentially guarantees that you won't have an extraordinary adventure. I would frame it this way. It's almost impossible to have really radical innovation without breaking some of the basic assumptions that the experts would all agree to.
[0:30:54] JU: The keepers of conventional wisdom.
[0:30:57] AT: On the other hand, of the hundred most important assumptions in any industry, in order to get to some radical innovation, you may need to break three or four of those hundred things. And the experts may have a particularly hard time breaking any of them because it's their Bible.
But once you've picked the right three or four, even if you did pick the right three or four, it then becomes a foot race between the extra value created by breaking the right three or four of those assumptions and all of the badness and waste you will produce by not being an expert at the other 97.
And so, getting experts in to complement that beginner's mind, so that you can break the right assumptions but then not lose all that value back by being bad at all the stuff where you should just do it the normal way, I think is sort of the give and take.
[0:31:50] JU: And is that existing in a team at the same time or is it bringing it in strategically kind of targeted interventions?
[0:31:58] AT: Typically, we will have people who are somewhat expert, but not at all world experts in a particular domain from relatively early in a project's life cycle. And soon after that, we will start to enlist world experts as consultants. And that's good. Because it's a sort of try before you buy. And not every team and every consultant work great together.
But when you find a great rhythm and the project continues to make progress, not infrequently, we end up hiring those consultants as full-time people over time. And so, we build up over time a staff of real experts on the team. They never come to dominate the team.
[0:32:39] JU: Right. And what I'm hearing is you credit – going back to your idea of a perspective shift – oftentimes, the perspective shift actually comes from the beginner's mind. But then you don't want to throw the baby of expertise out with the bathwater of conventional thinking. You actually want to get all the benefits of expertise while not having its limitations.
[0:32:59] AT: That's right. And to be fair, experts can practice a beginner's mind. But it is extra hard for them to do in their own area of expertise. I certainly wouldn't rule it out. And I have seen a few people who are good at it. But it takes an incredible amount of discipline to hold at bay your own beliefs about what's so right and obvious that you don't even think about it anymore. Allowing yourself to say, "Hmm, not necessarily," about the things you hold most obvious.
[0:33:32] JU: Right. Yeah. A good friend of mine named Dr. Beau Lotto, who's a neuroscientist, wrote a great book called Deviate. But he said to me once that the beauty of an expert is they can identify a great question, but they can rarely ask one. Novices ask great questions, but they don't know it. And so, if you put them together, you essentially get this ability to ask and identify great questions, which is kind of what you're getting at.
Okay. I want to switch gears just a little bit. Because you said something to me when we met at the conference last year that has just stayed in my heart and mind. Most people who are members of this community know that, as a full-time profession for more than a decade, I ran design thinking and creativity workshops that are short format engagements designed to get people rapidly up the learning curve on the way designers approach creative problem-solving.
And so, I have a very strong point of view about workshops and things like that. When you and I were talking, I mentioned I did workshops. You said, "Oh, we've got a great workshop at X. It's oversubscribed. There are lots of people asking about it. But I'm very reluctant to admit people." I would say, as kind of a mea culpa at Stanford, we're not reluctant to admit people. We love admitting people, right? Because they're customers, right? We go, "Yeah, anybody who wants to come can come." But it sounded like you were much more careful about who you admit. I would love to understand what drives that thinking, and what you're considering when you assess whether someone is a good fit for a learning intervention.
[0:35:02] AT: Yeah. To be clear, you're calling them workshops. And I think maybe we were differing on terms. I'm talking about a 10-month commitment of 200 hours on your part if you're going to be part of this.
[0:35:15] JU: 200 hours over 10 months.
[0:35:18] AT: Roughly.
[0:35:18] JU: Okay. Okay.
[0:35:19] AT: These are typically kind of midcareer people. And this is not things like design thinking. Arguably, your boss – not your boss, but I mean a random boss, may not understand design thinking. But if you have design thinking as a new arrow in your quiver, you will just get better results. And your boss will care about that whether or not your boss understands or even likes design thinking. That's why it's safe to teach someone design thinking.
But I'll give you an example. I don't run these things. A woman named Maya, who works here at X, who's our cultural alchemist, runs the Thrive Program and the Thrive For All Program, which are sort of two different cohorts that we do. Shoutout to her. She's amazing.
[0:36:01] JU: Shoutout, Maya.
[0:36:03] AT: Exactly. But these are about trying to help people really get past some of their deep fears, the weight they carry with them that's slowing them down tremendously. Let me give you the example of the thing that I do. I'm sort of a guest lecturer for each of these. And I talk about the masks that we wear and the armor that we carry with us. And the ways that we develop those masks and that armor. And the shocking degree to which it slows us down to be maintaining all the code-switching, all the armor that we have. How much it drains our energy. How much it drains our ability to be creative. How much it actually drains our charisma and our ability to get other people excited about our ideas or about who we are. And then at the end, we actually do a face-painting exercise together. It's a really joyful couple of hours. I enjoy it and I hope they do too.
[0:37:03] JU: You mentioned that as an example.
[0:37:05] AT: Here's the point. What I just said is true in isolation. Finding a way to put down your armor, to take off your masks should just feel like this incredible release and like a supercharge like you can't believe. And I've seen people go through that process.
[0:37:25] JU: I can imagine. Yeah.
[0:37:26] AT: But if they're about to go back to a place that is not safe for them –
[0:37:30] JU: Right. They got to armor back up.
[0:37:32] AT: – it was actively unkind of us to encourage them to take off the mask. Not only will they not get –
[0:37:39] JU: Now they're more vulnerable.
[0:37:40] AT: They're more vulnerable. And they'll just get hurt for it. Some people aren't ready, to use this particular example, to take off their mask or put down their armor. And we all have layers. It can be a multiple-step cycle.
Number one, if you want to join the Thrive Program, one of the questions is are you ready? Do you even really want to do this enough to do the hard work? And it's okay if you're not. It's not a good/bad thing. But the other question is are you in a context? Especially if you're not at X. We work really hard to support these kinds of things at X. But if you are outside of X, then the question, "Tell me about the context you're in," matters. Because we don't want to spend all that energy on you only for you to go back to a place that isn't really signed up for that new version of you.
[0:38:24] JU: Okay. There are a couple interesting things there. Laszlo Bock and I, years ago, taught a class together at the Stanford GSB. Well, you probably wouldn't even remember it. It's so long ago. I'll never forget one thing he said. For those who don't know, Laszlo is the former Head of People Operations at Google. One thing he said is, "Having studied the data, they expect people to revert to the mean." After an intervention, there almost always is a reversion to the mean. He said the biggest variable is whether the team that someone's going back to expects their behavior to be different. If they don't, they perceive the change to be a blip on the radar and assume the old Astro will be back soon, right?
All that said, you reminded me of that when you mentioned that you want to examine the context of the person in order to determine whether something like Thrive would be supportive to them.
[0:39:12] AT: And we actually talk to them about this. That if they are coming back to their team in a very different way, they have an opportunity, and in a certain sense even an obligation, to explain to the team, "Hey, this is who I am. This is what I'm working on. I want you to expect me to show up in these new and different ways. I'm asking for your support. When I do show up in these different ways, I'd love it if you could give me some positive reinforcement, because presumably it's going to be hard for me, or I would have already been doing it. And I want you to hold me accountable. When I don't show up in these new ways, I'd love for you to very lovingly call me out on it."
[0:39:54] JU: Okay. One thing that you want someone to do is explain to their team what their expectations should be. What else are you – if you're responsible for stewarding this program and ensuring the success of people –
[0:40:05] AT: Which I'm not. Maya is.
[0:40:05] JU: You're not. No. No. And we need to get Maya in here quickly. Is Maya around? The question is what else are you examining?
[0:40:13] AT: They would push her through the door.
[0:40:14] JU: Yeah. I mean, welcome. Please, come in, Maya. If you're around. What else are you evaluating in someone's context? What would be a red flag for you? If someone told you blank about their context, you go, "I don't think this is the right – it's not going to help you succeed."
[0:40:28] AT: Well, for example, I mean, X is very focused, as we were just describing, on the process rather than the outcomes, for the reasons we just talked about. We often refer to it here as focusing on the how rather than the what. Imagine you were an investor in Vegas and you were giving two people money to bet on your behalf. And they were supposed to be card counting. That's why you were giving them your money to bet. You went and looked over one of their shoulders, and they were losing, but they were doing the count perfectly and they were playing the hands absolutely perfectly statistically. You're still going to invest in them, because their how was perfect.
And if you went to the other table and they were just gambling, totally ignoring the count, and they had a huge pile of money in front of them, you wouldn't give them money again. It doesn't matter that they're winning. Because you believe that the process –
[0:41:23] JU: Well, in this case.
[0:41:22] AT: – in that particular case, we believe that the process we're doing leads over very long periods of time to much more efficient innovation than the way it is often otherwise done. And that's why we're very focused on the process.
If someone was going to come to Thrive, one of the questions back to their manager would be, "Are you going to be grading this person on the how or the what? On their outcome or on their behaviors?" Because we're not focused on improving their outcomes. We're focused on improving their behaviors.
And in fact, we're going to be telling them things that will probably be detrimental in the short term so that they can get more in the long run. In the short term, choice B is a bad deal relative to choice A. If you absolutely need to have a million dollars this year and you just can't live without that million dollars, choice B is a horrible deal. It's in the long run that B crushes A. But if we're teaching people to play the long game and their manager wants them to play the short game, then we're actually at odds with their own manager.
[0:42:33] JU: I know we are in this imaginary bubble where we're in the choice B realm. But, say, for a moment we leave that realm for the world of reality. And imagine for a second, you've had an illustrious career at X and you go, "I'm on to my next moonshot adventure." What would be the non-negotiables? Or, say, by the way, there are members of the community who are either CEOs or board members who potentially have the influence and ability to create structures like this, and/or the ability to negotiate for them, right?
Or, say, I am being recruited by an organization. If you want to bend the odds of moonshot success in your favor, what are the non-negotiables, at the kind of, call it, negotiation-table level, where you go, "If I can't have this, I'm prepared to walk away"? You mentioned time horizon, and that's what made me think –
[0:43:25] AT: Yeah. Time horizon would be one of them. Each of us is maybe going to respond somewhat differently. But for myself, I don't think I'm very good at playing the short game and I'm not very interested in playing the short game. And there's something to be said for playing the short game. There are lots of circumstances. If your house is on fire, there's no long game. You need to get your family out of the house, right? It's not good/bad. But they have different things going for them. And I am interested in finding ways to harvest the arbitrage between the long game and the short game, which there very much is one. That's why I like the B/A example. That would be one of my non-negotiables.
Another one is I wouldn't be interested in just purpose or just profit. I believe that a pure focus on profit can lead to – let's call it some of the things that people outside Silicon Valley think about Silicon Valley fairly. That's not really how I'd like to lean.
On the other hand, leaning just towards purpose repeatedly, in my opinion, leads to a place where you feel good about your behaviors, but the thing you're working on isn't self-reinforcing – things that make money tend to get bigger over time, and things that lose money tend to get smaller over time. If it is purpose without profit, it tends not to end up having the good impact on the world you were hoping for. Being at that intersection of purpose and profit, good for the world and good for the investors at the same time, I think that would be one of my non-negotiables. And that's why X is set up the way it is.
[0:45:03] AT: Okay. Another one that I think is pretty intimately tied in with innovation is having, or being allowed by the organization to have, very high audacity and very high humility at the same time.
[0:45:19] JU: Can you say more about that?
[0:45:21] AT: If you don't have high audacity, it's very hard to say about something unlikely, "Maybe it will work. Let's try."
[0:45:29] JU: Right. It's almost optimism in a sense.
[0:45:31] AT: Right. But right after you start, if you think you're likely to succeed, you're just going to waste a huge amount of your own time and your investors' money. The chances are extremely good that it won't work. And humility is the part that allows you to say, which is sort of the core here at X, "Okay, we've just started out at the wide part of the funnel. On this very unlikely thing, what is the cheapest way we can verify this won't work, so we can kill it and get on to the next thing?" Without losing the spirit that, over long periods of time, this will be right. That it will pay off better than the choice As.
[0:46:08] JU: How do you balance persistence with that? Some of my friends in New York have a venture accelerator called Prehype. And they talk about their mindset towards a new venture as "default dead." Meaning it shouldn't work; can we prove ourselves wrong? Which is totally opposite the usual default assumption, "It's a great idea," right, where you tend to go looking to confirm it.
So, given that default, I'm curious to know about persistence. I mean, James Dyson famously made 5,126 versions of the bagless vacuum. I was talking to Annie Duke the other day, who wrote the book Quit.
[0:46:41] AT: I know Annie.
[0:46:41] JU: She said she hates that example. She hates that people invoke it. Because it gives people permission to spend a lot – how do you balance that?
[0:46:50] AT: People like Annie, and people in the finance field, would call that survivor bias. Why do you know Dyson's name? Because he happened to be right.
[0:47:00] JU: Right. How many other people made 5,000 things? Yeah.
[0:47:03] AT: You do not know the names of the people who were just wrong. And it turns out there are more than enough of them to make that the wrong way to do things. The confusing thing to most people is that you can gamble and make money. There's nothing wrong with gambling. And you can't tell someone after they gambled that they didn't make money if they did. It's just not a reliable way to make money. That's why I use the card counting example.
There is a process by which you can reliably innovate. But just believing you have the right answer and sticking to it no matter what, that's gambling. You might be right with your flying car or whatever it is you want to make. But I wish you well. But here at X, we're committed to this process of letting the evidence tell us what we should do instead of letting our gut tell us what we should do.
And because we're so inflexible about that later on, in the very early stages, we'll let your gut entirely load up the wide part of the funnel. Because –
[0:48:12] JU: You're rigorous about testing.
[0:48:13] AT: Yeah. It's a hypothesis. We don't care where it came from. If your gut says this is a good hypothesis to test, cool. Let's try. But if we've tried 10 vacuums and they're all just looking terrible, could this work? Sure. But help me understand. We've got this other project over here. You're trying to do number 11 of your vacuum. We've got the teleporter and it's starting to work. Don't you think we should maybe put the money here? Maybe you could just kill your vacuum project and join the teleporter team.
[0:48:42] JU: It's got to be really challenging to know how to manage those trade-offs though, right? Because maybe it gets back to intellectual honesty, right? But this team, they love the teleporter and they're convinced, "We know what was wrong. If we just change the flux capacitor, then –"
[0:48:59] AT: Right. But here's the deal. At the very early – I mean, literally the deal. Like, we talk about this at X. At the very early part of the process, in the widest part of the funnel, you have this bonkers idea about a teleporter. Awesome. I freaking love it. You've got the huge problem of the world we're going to solve. Maybe it's the carbon footprint.
[0:49:18] JU: Transportation. Yeah.
[0:49:19] AT: Yeah. Yeah. Right. I bet you're wrong, but high-five for such a creative idea. Let's see how fast, how efficiently we can kill it. Fast forward two years, we have some evidence. It's a little bit shaky. But the reason I can let you start is because I'm depending on you, I am paying you to be intellectually honest. Before we start, do you understand that that's why I'm letting you start? And to varying degrees, everybody does.
Now, three years in, there are 15 people on the teleporter team and the evidence is looking weaker and weaker. Yes, I understand that it's emotionally challenging to say that this thing might not work. But this was the deal. If you don't like this part of the deal, then I actually couldn't ever have started on this journey with you in the first place. Because it doesn't make sense for Alphabet, or for the world, for me to let people start on super unlikely adventures without a commitment that they will stop it when it looks like it's not the right adventure.
[0:50:17] JU: I've heard you mention before that when a project fails, everybody gets bonused. Everybody gets high-fives. Everybody gets hugs and is celebrated, et cetera. And then you say, "I give folks a little bit of time to figure out what they want to do next." How do you think about the individual's self-selection versus assigning someone with known skills to a known skill gap, so to speak?
[0:50:41] AT: First, I just want to be transparent. We are much better about giving – we're great about giving bonuses when people kill their own projects. If we intervene –
[0:50:52] JU: Good clarification.
[0:50:54] AT: – we actually occasionally give out some bonuses, but they're smaller. And often, we don't, right? Because then you haven't lived up to your side of the deal if we have to be like helping you off the ledge, right?
[0:51:05] JU: Right. Right. Right. It's not intellectually honest.
[0:51:07] AT: Exactly. That was part of your job.
[0:51:08] JU: Yeah. Yeah. Yeah. That makes sense.
[0:51:09] AT: But that's why we celebrate people so loudly when they kill their own.
[0:51:12] JU: But how do you decide where they go next?
[0:51:15] AT: We don't.
[0:51:15] JU: Is it totally their decision?
[0:51:17] AT: No. What we do is, if on your vacuum cleaner project you've had the grace to say, "Hey, I think 11 is too much. Let's just turn off the vacuum cleaner project," and the teleporter people have some open slots where they're looking for a programmer, and you might be a programmer, you can apply for a job. I haven't said you have to work on the teleporter project. We have lots of projects here. I haven't said they have to hire you. It's possible that you won't find a job at X next. But it means you have autonomy. You can pick from them. They have autonomy. None of them feel like they just ended up with someone on the team that they didn't want or can't use. And most people most of the time, especially the ones who are really great X-ers, do find jobs here at X. The dynamic stability is quite good doing it that way. And neither you nor the projects feel like you were just forced into a situation that you didn't have autonomy over.
[0:52:18] JU: Yeah. That makes sense. We've got just a couple minutes left. Rapid-fire questions from the members of the community. I want to make sure we get to a couple of them. These can be short answers. I'll start with a fun one. Generative AI. Tech is well ahead of policy, it would seem, at this point. Is generative AI going to destroy us or help us reach utopia?
[0:52:39] AT: I'm going to choose to answer the question this way. I see lots of things that we're doing here at X in the form of elevating people, so that instead of doing more of the minutiae, they can spend more time defining the problem.
For example, Intrinsic, which is working on robotics and automation in the manufacturing space, is trying to build a platform, and I think has successfully built a platform, that allows people with much less robotics expertise to define how a product can be made so that it's cheaper, has a lower carbon footprint, is more reliable, and comes off the line faster. And that is democratizing the ability to make things for people in the world. Right now, highly automated lines are basically just for cars and a little bit for laptops and phones. We can do better than that.
I think there's a huge amount of opportunity for us to use technology to raise up people's ability to help to the next level. Rather than being in the nitty-gritty, they can be at the specification level for what they want to see in the world. And I think we're going to see a lot of really exciting outcomes from that.
[0:53:51] JU: The extent to which generative AI can start to write the specifications is quite mind-boggling as well. I wonder about the race to that kind of higher-order cognition. What I heard in that question was: is it going to destroy us, meaning it will outstrip our ability to keep operating at the higher level? Or are we finally going to experience the utopia that – whoever, the invisible hand – folks have been hoping we would experience for hundreds of years but haven't?
[0:54:18] AT: I think technology, especially over the last 70, 80 years, has been an increasingly good lever for our minds. And I don't see that turning particularly fast. It might be speeding up, but I think it's going in the same direction.
[0:54:34] JU: Okay. Question number two. Is corporate venturing worth it? What's the future of venture innovation inside of public companies? Is it a viable undertaking?
[0:54:44] AT: I'm mostly not going to answer that question except to say we don't do corporate venturing in the sense that they mean. I mean, side note –
[0:54:52] JU: Most organizations trying to launch new businesses is like that.
[0:54:55] AT: Google Ventures, GV, makes a lot of investments. And I'm actually not positive, but my sense is that they do very well for Alphabet. I don't think that that way of doing things is bad or is dead in the water somehow. But to be clear, we're not doing what corporate venturing really means. We don't make investments.
[0:55:14] JU: Yeah. I don't mean this in regard to CVC so much as building businesses inside of public companies. Is that a worthwhile pursuit?
[0:55:22] AT: Well, I'm very biased. But if that's the question, I do believe that that is a very efficient path if done properly. I think there are lots of antibodies – yes, there are lots of antibodies in large companies that can make that difficult or impossible. And I think it can be set up so that it is a very good way to spend the corporation's money.
[0:55:45] JU: On that, just one drill-down on that. Are there – this sounds bad, but – copycats? I heard you say somewhere, "We're the worst moonshot factory around, except for everybody else." Or something like that, right? Are there others?
I mean, the big if – it stands to reason that if you can keep the corporate antibodies at bay, insofar as this is a model, I've wondered why others aren't replicating the model. What's standing in the way?
[0:56:09] AT: It's hard. To be transparent, we have reached out in a very friendly way to the ones that are created from time to time. They seem to have a hard time sticking. But there have been a number, and we welcome them into the very small community with open arms. We have them come over here. We talk to them about how we do things. I would like to see a lot more organizations do something like X and have something like X.
I would like to believe, in the long run, either this will all turn out to be a mistake, or a fluke, or something. Or people will start to copy it. I don't know how we don't end up in one of those two –
[0:56:49] JU: Right. Right. In the long run. Yeah.
[0:56:51] AT: It's possible that the jury is still out and we need more runtime for the world to decide how much they want to sort of install an X-like thing in their own organizations.
[0:57:00] JU: All right. Last question. Did it help the US moonshot program to have a race against the USSR? And if so, how do you create the same urgency for moonshot projects that lack such a clear competitor?
[0:57:13] AT: Unquestionably, things like the Space Race, the Manhattan Project, Bletchley Park – the beginning of, sort of, cryptography and computation in England – all of these things had this enormous back pressure, which was the wars, right? Or the Cold War in the case of the Space Race. That, we don't have.
And I'm sure in some ways that that takes down some kinds of urgency. But it may open up other opportunities. And that is honestly one of the challenges that we work on a lot here, which is if you come to work at X, it sounds really great. You're going to get to work in a place that has sort of spot-welded pigs in the corner.
And so, you might imagine, "Oh, there'll be no quarterly pressure. I can just mess around, free of stress." I think if you ask the people here, that is not their experience, because we need to create our own internal urgency. And finding a way to keep the urgency up is not intellectually hard. But it is emotionally exhausting. When you're keeping the urgency up and you've killed 10 things in the last five years, a big part of you thinks, "Why bother?"
And the answer is you have to really believe in the process or you won't keep that urgency up. But if you really believe that this is how to do the radical innovation equivalent of card counting, then you don't worry about the outcomes and you say, "Am I playing this hand as well as I can with the team that I have? I'm just going to take pride and joy in getting to try each of these radical innovation opportunities." And if this one doesn't work, maybe the next one will.
[0:58:59] JU: I think that's a perfect place to stop. That just triggered five more questions. But I want to be respectful of everyone's time and be respectful of your time. Astro Teller, thank you so much for joining us today.
[0:59:08] AT: Thanks for having me. That was really, really fun.
[0:59:10] JU: Thank you, all.
[OUTRO]
[0:59:12] JU: By day, I'm a professor. But I absolutely love moonlighting as a front-row student next to you during these interviews. One of my favorite things is taking the gems from these episodes and turning them into practical tips and lessons for you and your team. If you want to share the lessons you picked up from this episode with your organization, feel free to reach out. I'd be thrilled to do a keynote on the secrets that I've gleaned from creative masters or put together a hands-on workshop to supercharge your next offsite adventure. Hit me up at jutley@jeremyutley.design for more information.
[END]