This summer, we're revisiting some of our favorite shows, including this discussion with Harvard Business School professor Karim Lakhani, who argues that almost every company is now an AI company, even if founders and CEOs aren't always thrilled to hear it. In fact, he says, artificial intelligence is upending the way business is done and shifting who has the advantage. We talk with Lakhani about why traditional models don't work anymore and where the opportunities are now. Plus, questions over data, and China's growing influence over AI.
Welcome to Instigators of Change, a Khosla Ventures podcast where we take a look at innovative ideas, the people who come up with them and those who invest in them. I'm Kara Miller, and this week, how AI is changing everything and why the corporate dislocation is going to be huge.
There's a class of people that get it and are going full speed ahead. There's another class of people that just don't get it, don't want to get it and don't want to pursue it. And then there's a middle layer which is sitting there going, "Crap, I see this. Do I want to hear this? And oh, what this means is I have to retool myself, and I have to retool my company."
One way to understand how AI is challenging companies is to consider the case of photos.
We are democratizing the act of photography. We are enabling many more people to take photos, and enabling many more reasons to take photos and many more occasions to take photos. Before, photos were about special things, and now I'm photographing my lunch. Right?
Right. Right. Because why not?
Well, because why not? Because it's free and it's cheap.
That's Karim Lakhani, a professor at Harvard Business School, who says photography has an interesting lesson to teach us about AI. That lesson began in the mid-1970s when Steven Sasson hit the jackpot. Now, digital photography may not have seemed like a million-dollar idea, but that's because it wasn't; it was more like a trillion-dollar idea, though Sasson could not have known it at the time.
The sad part is that Kodak, which was the place for photography, Kodak invented all of digital photography, had all the patents in place and they could not imagine a world where chemistry would be undone by silicon, even though they invented every single thing.
Lakhani is a co-author of the book Competing in the Age of AI, and he's written about Sasson's breakthrough: a camera that, ironically, would so shape both photography and the world around it that it would create some pretty tough times for Kodak, the company where Sasson worked as an engineer.
All their managerial staff, all their executive staff, was all about running large chemical plants or running photo processing centers. The Kodak moment, right, was all about the photography assets and the chemical engineering assets, and not about silicon enabling these things. So even though they invented all of this stuff, they could not adjust their business model to play in this world.
But Lakhani says there are companies that could, and that did, adjust quickly: Samsung, Apple, Google. And remember, and this is important when it comes to how AI's upending business, people didn't just switch from using regular cameras to using digital cameras. Their whole concept of photography changed.
Most of us in our generations grew up with the Kodak cameras. I'm dating myself, so I apologize.
No, I totally remember it. Like you've got 24 shots.
Yeah. And film camera. Exactly. And you would like sit there and be, "Oh my God, I've wasted that shot. Oh my God."
And then all of a sudden digital cameras get invented and they were expensive and people started to do things with it. But then the phone and the camera merged and then boom, the world changed because now digital photography basically said, you can now squander the art of taking photos.
Photos themselves were much less valuable, but the fact that they could be so easily taken spawned a raft of new companies. Since photographs were cheap and they were everywhere, well, why not use them for things no one had ever thought of before? But the companies that figured out how to navigate this new landscape and extract what was really valuable from the avalanche of new photos tended not to be the companies that understood traditional photography.
Lakhani worries the same thing is happening in the AI revolution, which, by the way, is enriched by your photos. Some companies, young and old, get the massive paradigm shift that's happening, but others really don't.
This is work that Marco Iansiti, my colleague at Harvard, and I have sort of been thinking a lot about for the last, gosh, now eight years. We started a course in the MBA program in 2013 called Digital Innovation and Transformation, before digital transformation was a thing.
And what we realized, when we dug deeper into these companies and what they were doing, was that AI was not just the feature set that it was enabling, right? Like, oh, the latest Google Pixel phone can auto-erase somebody that you don't want in your photo, right? It's not just a feature set.
It's a way of actually organizing your company, in the sense that your investments are now in data scientists and data pipelines, right? You are now thinking a lot about how these data pipelines work, how you can scale to billions of images. And that's just a scale we have not thought about before. When you scale to that many people, you are now running organizations that are really self-service.
Right. I can't really call 1-800 Facebook and get help for my photos, right? I can't call 1-800 Instagram or Snap or TikTok. It's a very different model the way in which you serve customers, what you serve them with. So your operating model is very different.
But also, importantly, your business model is very different now too, because many of these companies have gone to one where you create the core service and core value generation for free for consumers, let's say, and then you monetize in other ways. And one monetization that is very controversial now, but has been very powerful and very profitable, is advertising. Right?
I can now advertise because I've got all this user engagement on the platform, around these photos, around these kinds of activities. And so the very nature of what the product does, what the product serves, how you make money off it, the value you generate, but then how you actually deliver the product or service, has changed a lot because of AI.
One thing you talk about that really struck me is that when AI is kind of running through the core of a business, one way in which it changes things is, yeah, there are humans around, but they are not the people in real-time or they are not the entities in real-time that are thinking like, how much should that flight be? Or how much should that product be? Or what product would you like next? Right. They set it up, but they aren't the real-time actors anymore.
Yeah. Yeah. And I think this is one of the big things that executives in all such organizations need to get their heads around: you want to build an end-to-end digital system where all of your operations are done digitally and done through algorithms. And this goes back to the argument we make about what AI enables, and you want to build for a sort of scale that is now unimaginable.
And so I ask this question in exec ed when I'm working with companies and their executives when they come to our campus, or nowadays on Zoom. I say, "How many stock keeping units, or SKUs, do you think Amazon has?" And people say a million, 2 million, 3 million; some will be brave and say 50 million. Well, Amazon has 600 million SKUs.
I mean, that scale is just unimaginable. How do you run a business with 600 million SKUs? You can't run this on a spreadsheet. You can't send this in an email to people saying, here are all my SKUs. You can't even have enough product managers making decisions about pricing for 600 million SKUs.
It's so many, the number is so high that I'm trying to think, what is that? That's like a product for every person in America twice, like two per American, right?
This is crazy to think about.
Yeah. Yeah. And so when you think about that scale, there's two things to note. One is there's no way we can serve that scale with our traditional business, because we would need half of America working for us, right, to be able to do this. But secondly, this is what digital technologies enable. You can get to this kind of a scale so quickly.
And so in order for you to do this at Amazon, right, they set guidance and they set parameters, but the day-to-day decision making around pricing and what happens is done mostly through machines. The managers are keeping track of the machines and anomalous stuff pops out to them, but the pricing all the way to the warehouse fulfillment is being done through machines.
And now, as we know, in Amazon warehouses this is more and more all robotics. And so because the scale is such that we just can't throw enough bodies at it and we can't throw enough intellect at it, we actually have to enable the algorithms to do that. And so for companies, then, it means a totally different view of how you would run your organization.
The managers are the ones who are designing the system, parameterizing the system, making sure it's working the way they want it to work, making sure it's not drifting into crazy territory. And we can talk about that soon, but the work is being done by machines.
And that forces you to have a fully digital system because you can't have digital, digital, digital, analog, analog, digital, digital, digital, that'll be an analog system. You need a fully digital system to be able to pull that off.
And what's striking too and what kind of indicates this sort of turning upside down of the way business has been done for a century is that normally big has meant sometimes unmanageable. I think you write about like, you go into a store and if they just sell fishing poles, they might be able to help you out.
But if they sell fishing poles and baby clothes and ear warmers, the person's like, "Yeah, I don't know. I can't help you. There's too much stuff. I can't get my mind around that." And yet big in this sort of AI world is good. Instead of being unmanageable and getting to the place where quality goes down, big helps you. Explain that a little bit, because I find it such an interesting turning on its head of the way things used to be.
Yeah. And what I would sort of say is that there are three elements to the big, right? One is the scale side: you have much more variety, many more customers you can serve. Then there is the data side, right? You know your customers better, you have better information about those customers, and then you personalize.
And the big is one of personalization, of data, and, let's say, of the stock or supply of products, services, and customers. Right? So for example, my Spotify is very different than your Spotify. Right? And Spotify itself is massive. And your experience with Spotify is very different than my experience with Spotify.
Same thing with your experience with Netflix and my experience with Netflix. So big by itself is bad, because we know from lots of marketing and psychology research that when you give customers way too many choices, they get overwhelmed and they actually buy less, they consume less, which may be good for the environment.
But if you're just profit maximizing people, you go, "Okay, that's bad." But now if I can customize, if I know you better, I know exactly that when I'm having a stressful day, I should be listening to some '90s hip hop, right? And that's what Spotify will serve me. Well, then that's good.
Then out of the whole collection, they've fine-tuned my preferences and have given me the songs that I want to be listening to in the moment. And so the big becomes a feature and not a bug, if it is married with data and with personalization through the algorithms.
And what's happened in the past is big has been impersonal, big has been overly standardized, big has been a bad experience. You want the boutiquey experience. You want personal experience that the concierge knows about you. And now you can do big and personalization at the same time because you have the data and the algorithms to pull this off.
So I mean, obviously if there was no internet, we wouldn't have had social media and online retail. And we're now talking about Amazon; that's a whole business built on top of a technology. So when you think about AI, do you think about sort of whole segments of business that are waiting to be created that we might not even be able to imagine right now?
Yeah. And look, I was with my good colleague at Flagship Pioneering, Jim Gilbert, and we were chatting a bit about this. And what he said is like, "If in 2001 you went to the leading companies and you said, I'm a time traveler from 2022, and I'm going to tell you what the future is: you're going to have companies with trillion dollar valuations. You're going to have companies with billions of customers. You're going to have hyperpersonalization."
People would look at you and go, "Are you nuts? This is unbelievable. What are you talking about?" And just imagine the world of AOL that we were living in, and the world that we're living in now. We've sort of inched our way into it over 20 years, but it's a fundamentally different operating model and business model.
And smart executives, smart academics at that time, smart VCs at that time, smart entrepreneurs, would not have imagined what we are living through now. And so my belief is that, again, I can't do the forecast 20 years out, but what lies ahead is even more so. I believe that the rate of change in technology, but also in businesses and business models, is at such a pace that we can't even imagine what that would look like, but it'll be very different.
But it's going to be enabled through, again, these core technologies of digital data and algorithms and AI, and it's going to profoundly impact the way we run our companies.
I know that you and your colleagues at Harvard Business School have talked to people in leadership at very small companies, big places like Disney and Microsoft and NASA. How good do you think leaders are at adjusting their minds to, I think, what you see as a big, big shift in how companies are run?
So when I was doing my PhD in the early 2000s, one branch of folks interested in digital were talking about the digital divide. And they were really talking about the haves and have-nots when it comes to access to the internet, access to compute. And there was an inequality story basically driven by income and race, and that became the term. I think there's a similar digital divide among executives.
There's a class of people that get it, that understand it, are embracing it and going full speed ahead. There's another class of people that just don't get it, don't want to get it and don't want to pursue it. And then there's a middle layer which is sitting there going, "Crap, I see this, do I want to hear this? And, oh, what this means is I have to retool myself, and I have to retool my company."
And so, before, it was those that got it and those that didn't get it. I think in 2022, there are those who get it and are embracing it and are going full speed. But then there are a bunch of people who get it, but need sort of the red pill, with lots of courage, to drive the change they need to make. And change is hard. We can't laugh at them, because change is super hard. It's really hard because it's up and down your whole organization: yourself, your board, your executive team, your customers, everybody has to change in this world. And the thing is, well, before maybe it was the media sector, or before it was maybe a small patch of retail, or before it was the photography industry. Right.
So the change was sort of limited to particular sectors. And then we would sort of look at them and go, "Oh, poor cell phone business. Oh, look at what happened to Nokia," and so on and so forth. But now it's endemic; it is everywhere. All economies, all companies are facing the same kinds of pressures.
And so that's the thing: I see many executives get the sense that this is important, but now they need both the courage to do it and then sort of a toolkit on how to make that change happen.
Let's take a quick break here while you consider the notion of whether to swallow the red pill. I'll be right back with more from Karim Lakhani, the co-author of Competing in the Age of AI. He's also a professor at Harvard Business School and principal investigator of the NASA Tournament Lab.
Lots of people are thinking about making a career change right now. If you are one of them, take a look at one of the companies in the Khosla Ventures portfolio. KV companies seek to fundamentally change how industries work, from health, finance, and the future of work to transportation, energy, and even space. Check out khoslaventures.com/jobs. That's khoslaventures.com/jobs. And now back to Instigators of Change.
I'm Kara Miller, back with Karim Lakhani from Harvard Business School. We're talking about AI. And I want to return to this notion of big and the idea that maybe once big was a disadvantage like, "Oh gosh, I've got this behemoth company. How do I deal with it?" And now it gives you an edge.
And I wonder how significant is that advantage? Because a company like Amazon knows a lot about me, Google knows a lot about me. Facebook does too. So how tricky is it for new startups to kind of get in the mix here?
Yeah. That's a great question. And I think, look, over the next decade, both in Europe and in the US and China and India and Africa and Latin America, we're going to have these big battles about too-big-to-fail, too-big-to-actually-create-consumer-surplus kinds of questions about these companies, from a business perspective, from an econ perspective, from a law perspective, from a policy perspective.
These questions are going to be center stage. And you see it already with the FTC and what they're trying to do and so forth; these questions aren't going to go away regardless of the administrations we have, regardless of the political environment we have. So that's the first caveat: I think we're going to be in the middle of trying to determine what the costs and benefits are of this size.
Then I think I would sort of step back and say that, of course, you don't want to take Amazon head on. Right? That would be foolish. And one of my former students, Qasar Younis at Applied Intuition, is running a big AI company now in autonomous driving. He gave this great analogy. He said, "When we look at a map of the solar system, we see that everything looks close by."
The earth is close to Mars and Venus and Saturn, and the sun isn't that much distance away, right, on a map. It looks like everything is crunched together. But the actual distance is millions of miles. There's a lot of space between all these planets. And his view was that there is so much opportunity, right? There are so many things still to be filled out.
We're just at the beginning stages of this revolution. And so the room for startups is, business 101: you differentiate, you come up with something different, you recast a service. Because again, the demise of Kodak was not Fujifilm, right; it was Instagram. It's a totally different conceptualization. A Snapchat, and now a TikTok. And so the creative destruction isn't going to be happening in the world of direct substitution.
It's going to be recasting problems, re-imagining customer journeys, re-imagining pain points and driving that, right? Who would've thought that social media influencer would be a job or a career, or that we would have this massive ad tech business that's both algorithmic but also has lots of people engaged in creating that structure? Those careers and professions did not exist 20 years ago.
And I expect the same kind of change. So as Qasar would say, there is a lot of distance among the planets, and you can imagine Facebook and Google and these guys are planets. There's a lot of distance among the planets. And of course you wouldn't go head on. And so the example I like to give takes us out of tech and brings us to the current pandemic.
And you look at mRNA, right? For 30 years, a defunct, boring science that people thought was BS. Nobody believed it. The scientific establishment didn't believe it. And you basically had two startups invest in it: Moderna, out of Cambridge in our neighborhood, and then BioNTech, out of Germany. And you basically have two startups now responsible for creating billions of doses of the vaccine.
It's called the Pfizer-BioNTech vaccine, but it's a BioNTech vaccine. Right? Pfizer just basically produced it, did the clinical trials and now distributes it. It's a BioNTech vaccine, right? And same thing with Moderna. Moderna doesn't even have... And by the way, again, I have a conflict of interest because I worked at Flagship as an academic partner.
So Moderna came out of Flagship Pioneering, and I spent a bunch of time with Moderna, and Moderna did not partner with any of the large companies to create the vaccine. They did it themselves, and they've done the production themselves and created partnerships around that. So at the time of the crisis, it was an 800-person company. We can't imagine an 800-person company scaling to serve a billion to 2 billion doses of vaccine.
And so what we are seeing is that Moderna, as Stéphane Bancel, their CEO, says, is "a technology company that happens to do biology." Their view is one of purely digital end-to-end processes and thinking about exponential scale in what they do, how they serve their patients, and all the different drugs that they're going to produce with this foundational technology.
And I think that's where the startups will be at. It'll be sort of taking what we've learned in the last 20 years of the internet era, the Web 2.0 era, right, the AI-first era, that sort of the Googles and the Facebooks of the world created for us, and saying, yeah, yeah, I might have a social media competitor or a TikTok, right, and that might be interesting.
But the more interesting thing is like, hey, what's going to happen in the environment with these technologies? What's going to happen with climate change? What's going to happen in finance, right? I always say this: what mRNA is to pharmaceuticals, crypto is to finance. Again, a crazy wonky technology, programmable, does all these things that we didn't think were possible, with interesting, nichey use cases.
And then boom, we get this pandemic and mRNA proves out. Same thing is basically happening with crypto. And so I think the entry places will be in all the spaces between the planets and not necessarily taking on the planets directly.
My understanding is China has put a ton of money into AI. And I also have heard that because their population's so large, that can be helpful in sort of refining AI. Are we sort of being parochial in not realizing how well China's doing here?
I think there's probably a blind spot. What I would say is the Chinese have excelled at building large-scale AI factories and attacking the consumer segment relentlessly, in a range of commerce, finance, entertainment, news, and so on. And they have built these massive AI factories that are doing amazing things.
I think the worry is not so much that these companies can take the models that they've trained in China and bring them to the US or bring them to some other country. The worry is they have capability in thinking imaginatively about the AI factory at the core of their businesses, but also in how to build them and scale them and then get new data and train new models at scale.
So I would say that's the strategic worry I would have. I do believe that in the US there is certainly, from our cousin school, the Kennedy School, the sort of strategic nation-state-actor AI strategy question. But I think in the US, we do have great capabilities, both in research and in understanding of the AI factory.
And I believe that there's more of an advantage for us around the industrialization of AI in sort of non-consumer settings, which we are seeing more and more of. And so I imagine that there will be two scenarios. I do think that the geopolitical tensions are going to mediate much of what kinds of learnings we can have from each other, and the ability of these companies to come to our shores.
But we can still learn by example and see what Ant Group is doing, and then say, why don't we have a similar kind of company here doing that kind of low-cost finance model and low-cost convenience model that the US banks aren't doing, and so forth. So I think there are going to be some lessons that we can cut and paste over through native US companies.
But I think the worry I do have is that there'll be two different ecosystems of AI, a Chinese one and a US-only one, with the Europeans trying to figure this out repeatedly. And then I think the big tension point actually is going to be about data, and it's going to be about the silicon, right?
Because right now our silicon footprint is actually all in Taiwan and China, and we need more and more silicon to do all this work. The cloud actually is pretty heavy with silicon. And so where will the next generation of chips be designed and manufactured? Because all these models are compute heavy. And it's going to be a great source of new innovation for us, but the transition is going to be rocky.
Let me step into a couple of controversies. One is related to kind of what you just talked about, which is, obviously there's been a lot of tension in the US about who has data and what they do with this data. Do you think that scrutiny is warranted? And even if it is, is the fact that it exists going to put us in a different place than China, because those hearings aren't happening in China?
I think data, data privacy, people's view about data, is still an emerging sensibility that we have. So I sit on the board of Mozilla Corporation; we make the Firefox browser, and we think it's the wholesome, organic browser that you should be using. Right? And we've been touting privacy for 15 years, yet Google Chrome and Safari have been the ones that have sort of beat us over the head many times over.
And if you're using Google products, and I use Google products, I use Gmail and so forth, I am willingly giving away my data for the convenience it offers. Okay. So one thing is that I think people have this conversation around data and it's very fuzzy. So let's just sort of break things apart first, right?
The first, common use case of data right now is, again, you're using Google or Facebook or TikTok or Amazon, whatever, and what are they doing? They're saying, we can create a better experience with our products if you give us access to your data. Right. We can give you a better experience for your product...
I want Gmail to alert me that my Uber is coming around the corner, or that I was going to miss that flight, or what the traffic was like. I want them to know that. So I'm trading my data for the convenience. So that's the first thing. And so when companies ask me, "Should I be asking for all this data?" I go, "Yeah, you should be asking for this data, but only if you're actually offering value to your customer."
Data collection for the sake of data collection, because you think you're going to have some super-duper new thing five years down the road, is awful, because data can be asbestos as well. If you have a breach and you expose your data to hackers or whatever, customers are going to be pretty upset at you, and so forth. And so it creates liabilities for you in the future.
So data for value matters. Second, it's not clear that people actually understand privacy. So again, back to my example: we've been preaching privacy as a core feature of Firefox for some time. But customers preferred the convenience that the Chrome browser and other more data-intensive products offered them.
And then in a range of social psychology experiments that colleagues at MIT and at Duke and at HBS have done, they've shown that literally you can get very personal data from people for cookies, literally for cookies, right? Social security numbers, birthdays. Offer smart undergrads at MIT chocolate chip or oatmeal cookies.
Oh, for actual cookies, I thought you meant-
Yeah. Yeah. No, literally for actual cookies they will give you a-
To be fair, I hear you when you say chocolate chip cookies. Okay. Go ahead.
Right. Exactly. Right. And so one of my colleagues, Leslie John, a faculty member at HBS, has done some amazing studies with real-life people, where she shows them a "you're so bad" website and a very official-looking website, and she's asking very personally intrusive questions.
And guess what: when you have an official-looking website, people don't reveal as much, but when you have the crappy "you're so bad" website with the devil horns, people are revealing everything. I mean, it's scary. And so I think we have to think about data as convenience, then data as asbestos, and then data privacy as this weird good where even consumers don't have clarity of preferences.
Now let's go to what's been in the news so much with Meta, Facebook and so forth, which is that the same data allows us to scale the benefits. Right. I want targeted ads because I don't want to see the Viagra commercial when I'm watching the Super Bowl with my family. Like really, I don't want to, like come on guys. Stop it. But you are doing this blanket thing; I want a more targeted ad.
If you know that my family is watching this thing, then give us an ad that makes sense. Don't show us the Viagra commercial. And so I prefer personalization even in ads. So the same data scales us and generates tremendous value for customers and for companies, but the same data, misused, with all our filter bubbles, all our polarization and so forth, can lead to as much harm as well.
And that's, I think, the world we're getting to, because the same algorithms, the same basic scientific concepts, that allow Netflix to keep showing you more and more rom-coms, to me, over and over again, can then also be used at Facebook to radicalize you, right, or on YouTube to radicalize you.
And I think this is where we have to ask: have we thought through the consequences of all this data about individual users and preferences being exploited? Our book came out in January 2020. We have a chapter around the ethics of digital scope and learning. And we start with the vaccine controversy, pre-pandemic, right?
Yes. That's right. That's right.
And what we said is, there's no anti-vaccine cabal inside of Facebook or Google or YouTube or Amazon. It's literally their algorithms basically saying, oh, you are interested in learning a bit more about vaccines, and we've seen that people engage more with anti-vaccine content, so we'll show you a little bit of that.
Oh, you read more? Oh, we'll just give you more. Oh, we'll give you more. It's the same set of models that work at Spotify and Netflix, right. It's literally the algorithms doing the thing, but algorithms are amoral in that sense. The engineers, the managers, the folks that are responsible for these algorithms did not think through those consequences.
And on Facebook, again, an alum from our school, Frances Haugen, right, she would say that, no, in fact, it was willful: some of the senior management went ahead and did it knowing what the bad consequences were going to be. And I think that's where the ethics around AI and data become so, so important and interesting.
So now you as an executive not only have fiduciary responsibility for this data, you also have an ethical, moral responsibility for the data. And we can't just leave morality to our legal departments, right. This is a board-level issue. This is a senior-executive-level issue. And it's a product management issue.
A final question about a controversy, and I could easily ask 50 more questions, but I'll just ask this one more: I feel like for at least the last 10 years, we've talked a lot about the coming apart of the haves and have-nots.
And I wonder if, when you think about the change in firms that we've talked about as AI comes into the core of small companies, big companies, does that really change the employment picture in the US? And if it does, what do you think the ramifications of that are?
Yeah. I'm no macroeconomist, so caveat emptor on whatever I say; it's purely speculation. What I will say is, I like a saying from Pedro Domingos at the University of Washington, who wrote the great book The Master Algorithm, explaining all the different tribes of AI that are out there. He said, "Machines won't replace humans, but humans with machines will replace humans without machines."
Okay, so that's the first thing. What that says to me is: we need to educate our workforce. We need to think of AI as a complement and not a substitute. But there will be some substitution going on. My asterisk to that quote is: maybe we'll need fewer of those humans. So will there be displacement? Yes. Do we need to reeducate and retrain people? 100% yes.
Is the retraining just for one class of people? No. It's in fact society-wide, economy-wide, and so forth. That's the first thing. Then let's add what's happening currently. We are in this world of the Great Resignation. We can't hire enough people. So there are these structural shifts in the US labor market, and the US labor market is also going to be shrinking over time. Right.
And so what we'll be thinking about is fewer and fewer workers available for more and more work, and the way to solve that problem is through productivity. And again, I think AI is going to help us with our productivity. So then it's like, oh, interesting: now that I can't hire people to do room service, I'm going to bring in a robot to do room service. Okay.
Those jobs are going to be gone for sure, because once hotels and your friendly neighborhood Marriott have robotic room service capability, right, we're not going to have many people doing room service anymore. And so great, oh, but who's going to maintain those robots? Who's going to make those robots? Who's going to update the software in those robots, and so on? Right.
So then a new category of work is going to get created: not the dishwasher, not the room service personnel, but a new type of worker is going to come in. And that's the third element, which I believe is important: it's really hard to anticipate what new types of industries are going to show up to take advantage of these technologies.
What's happening is that, more and more, we are recasting all problems as AI problems, right? We now think about room service as an AI problem because we have a robot doing it. And so, increasingly, more companies and more occupations will be asking: given this technology, what can I now do with it?
What new jobs get created? That's really hard to anticipate in our macro models of the labor force, employment, and so forth. But very smart people at prestigious universities are thinking a lot about it, and I think they'll have a far better answer than I would. I just start from first principles: there is a displacement effect.
We do face, at least in the short term, this Great Resignation. We need more people. If we can't have more people, and if the long-term trend is that there aren't going to be enough workers anyway, then technology will help. And then we'll have a new set of employment and a new set of companies coming up that will demand new sets of skills.
Karim Lakhani is a co-author of the book Competing in the Age of AI. He's a professor of business administration at Harvard Business School. So great to talk to you.
Always fun, Kara, thank you so much for inviting me.
And thank you for being here. Remember, you can pick up our podcast on Apple Podcasts, Spotify, and Google. The show is produced by Matt Purdy. I'm Kara Miller. Talk to you next week.