THE BIG AI RESET: The Next Global SuperPower Isn't Who You Think | Ian Bremmer | Transcription



Note: This transcription is split and grouped by topics and subtopics. All paragraphs are timed to the original video.


Intro (00:00)

You said these are dangerous times, the world order is shifting before our eyes. We also both know that with hyper-disruptive technologies like AI on the horizon, a good outcome is not guaranteed. Why do you think Big Tech will become the third superpower, and what are the dangers and opportunities if it does? - Big Tech is essentially sovereign over the digital world. Take the fact that former President Trump was de-platformed from Facebook and from Twitter while he was president. He's, you know, the most powerful political figure on the planet, and he's just taken off of those networks, and as a consequence, hundreds of millions of people that would be regularly engaging with him in real time suddenly can't see it. That wasn't a decision that was made by a government. It wasn't a decision made by a judge or by a regulatory authority, or even by a multinational organization like the UN. It was made by individuals that own tech companies. The same thing is true in the decision to help Ukraine in the war. In the early days, the US didn't provide much military support. Most of the military capacity and the cyber defenses, the ability to communicate on the ground, was shored up by some tech companies, and they're not allies of NATO, they're under no obligation to do that. They've got shareholders, right? But they still decided to do it. I think that whether we're talking about society or the economy or even national security, if it touches the digital space, technology companies basically act with dominion. And that didn't matter much when the internet was first founded, because the importance of the internet for those things was pretty small.
But as the importance of the digital world drives a bigger and bigger piece of the global economy, a bigger and bigger piece of civil society, a bigger and bigger piece of national security, and even increasingly defines who we are as people, how we interact with other human beings, what we see, what we decide, what we feel, how we emote, that is an astonishing amount of power in the hands of these tech companies. And yes, there are some efforts to rein them in, to break them up, to regulate them. But when I look at artificial intelligence in particular, I see these technology companies and their technologies vastly outstripping the capacity of governments to regulate in that space. So does that mean that suddenly you're not gonna be citizens of the US, you're gonna be citizens of a tech company? No, I'm not going that far. But certainly in terms of who wields the most power over us as human beings, increasingly you would put those companies in that category. And none of us, even five years ago, were thinking about this seriously. And certainly when I was studying as a political scientist, and this is my entire career, you know, the geopolitical space is determined by governments, right? Like them or hate them. Some of them are powerful, some of them are weak, some of them are rich, some of them are poor, some are open, some are closed, some are dictatorships, some are democracies, some are functional, some are dysfunctional, but they're in charge. And that increasingly is not true. - As you look at that potential, or not potential, as you look at that growing reality, how does that play out?
The one thing, when I look at that, that I really start getting paranoid about is that AI, and especially quantum computing, which I'm maybe less familiar with but which sort of lingers in the back of my mind, become one of two things. Either they become weapons used by governments, even if it's not against their own people, though, especially with authoritarian governments, I get very paranoid about that. But even if they're just used as warfare against other countries, that sort of quiet, invisible battle freaks me out. And then also, I worry very much about this becoming the new battlefield for a Cold War between the US and China specifically. Do you see us as moving towards that, because the tech will make it increasingly easy to fight an invisible war? - I do think, of course, that all of these technologies are both enabling and destructive. And it all depends on the intention of the user, and in some cases it's someone who's just a tinkerer that makes a mistake or that's playing around, and it explodes.

Discussion On Artificial Intelligence

Safety Nets (04:35)

I'm not particularly worried that the robots are gonna take over. I'm not particularly worried that we're on the cusp of developing a superhuman intelligence and that we're suddenly irrelevant or we're held hostage to it. In other words, I mean, I know that you love The Matrix; we talked about that a little bit before the show. This is not my five-, 10-year concern. But the idea that this technology is going to proliferate explosively, I mean, vastly beyond anything we were ever concerned about with nuclear weapons. We're 80 years on, and it's still just a handful of countries, and no corporations, no terrorist groups, no individuals, that have access to those nukes. No, no, no. AI, with both its productive and destructive capacities, will not just be in the hands of rogue states, but will also be in the hands of people and terrorists and corporations, and they'll have cutting-edge access to it. So, I mean, it would be easier to deal with if it was just about the United States and China, and we can talk about the United States and China and how they think about that technology differently and how we're fighting over it and how it has become a technology cold war. I think we can say that that exists right now, not a cold war overall, but a technology cold war. I think that exists. But I think the dangers of AI are far greater than that. It is precisely the fact that non-governments will act as principals in determining the future of the digital world, and of society and national security as a consequence. And governments, right now, governments still seem to think that they're going to be the ones that will drive all this regulation. And in the most recent days, the United States is taking just a few baby steps to show that maybe they recognize that that's not the case. But ultimately, either we're going to have to govern in new institutions with technology companies as partners, as signatories, or they're not going to be regulated.
And I think that that reality is not yet appreciated by citizens. It's not yet appreciated by governments.

The good side of an unregulated AI (07:11)

- Ooh, okay, so tell me more about that. What does the world look like where this technology is proliferating like that and is not regulated? - Well, if it's not regulated at all, that means that everyone has access to it. So let's look at the good side first. Let's be positive and optimistic, because I'm a believer in this technology. I think it does all sorts of incredible things. And I'm not just talking about ChatGPT. I'm talking about the ability to take any proprietary data set and be maximally efficient in extracting value from it, allowing workers to become AI-adjacent in ways that will make them more productive and effective. I look at my own firm, Eurasia Group. We've got about 250 employees. And we did a town hall with them the other day; we do one every quarter. And we were talking about AI. And I said, I don't think there's anyone in any of these offices globally that will be displaced by AI in the next three to five years, not one of my knowledge workers. But I said, all of you will be AI-adjacent. And if you're not, if you're not learning how to use AI to dramatically improve your work, whether you are an analyst or whether you're on the business side or you're in finance or you're on the IT help desk or you're a graphics person, an editor, whatever it is, you will become much less productive than other employees that are doing that. And that will be a problem for you. So we need to get you the tools and you need to learn. And I think that that's true in almost every industry imaginable. It's true in education, it's true in healthcare and for new pharma and vaccines, it's true for new energy and critical infrastructure. And what's so amazing about it: one of the reasons why it's taken us so long to respond to climate change, even now that we all agree that it's happening. We all agree it's 420 parts per million of carbon dioxide in the atmosphere. We all agree it's 1.2 degrees centigrade of warming. Like, that's no longer in dispute.
And yet it's really taking us a long time to get to the point that we can reduce our carbon emissions. And the reason for that is because you need to change the critical infrastructure, right? You need to move from one entire supply chain oriented around carbon to another one oriented around something new, whether that's solar or, you know, green hydrogen or you name it, right? When you're talking about AI, you're talking about, first and foremost, creating efficiencies using your existing critical infrastructure, which means you have no vested corporations that are saying, we don't want that. No, every corporation is saying, how can we invest in that to create greater profitability? Everyone. Every oil company is gonna use AI, just like every post-fossil-fuel company is gonna use it. Every bank is gonna use it. Every pharmaceutical company, whether they're in mRNA or they're in traditional, you know, vaccines that are developed as we have over decades now. I think that we truly underestimate the impact that will have in unlocking wealth, in unlocking human capital, and it's gonna happen fast. It's not decades, as it took with globalization to open markets and get goods and services to move across the world. It's years, in some cases, it's months. And that to me is very, very exciting. So that's the positive side. And frankly, that's what the positive side looks like without regulation too. Because, I mean, look, there are trillions of dollars being spent on this rollout, and it's being spent by a lot of people who are hyper-smart. They are hyper-competitive. They wanna get there first, before other companies that are in that space. And they don't need any further incentive to ensure that they can roll that out as fast as possible. So you and I can say whatever we want, but, you know, further subsidies are not required. Right? Like, that is just gonna happen. That is gonna happen.
But what they're not doing, and I'm sure what you wanna spend more time on with me, is not the everything's-gonna-be-great, or, you know, what they call this e/acc, the, you know, sort of effective accelerationists, who just believe that if we just put all this money in it, then we're gonna all become a greater species, and it's just gonna happen. But there are gonna be a lot of negative externalities. And we know this from globalization. I mean, the miracle of your and my lifetimes thus far, before AI, the miracle was we managed to unlock access to the global marketplace for now 8 billion people. Trade and goods and capital and investment and the labor force, the workforce. And that created dislocations. It meant that there were a whole bunch of people that were more expensive in the West, that lost their jobs as inexpensive labor that was very talented in China and India gained jobs. But that led to unprecedented growth for 50 years. There were also negative externalities. And those negative externalities played out over many decades, but it's when you take all of this inexpensive coal and oil and gas out of the ground, and you don't realize that you're actually using a limited resource and you're affecting the climate. And so decades later, we all figure out, oh, wait a second, this is a really huge cost on humanity, and on all of these other species, many of which are already extinct, and no one's bothered to pay for them. Well, with AI, the negative externalities will happen basically simultaneously with all the positive stuff I just talked about. And just like with climate, none of the people that are driving AI are spending their time or resources figuring out how to deal with those problems; they're spending all their time trying to figure out how to save humanity, how to accelerate this technology. So if we don't talk about those negative externalities, they're just gonna happen and they won't be mitigated. They won't be regulated, and there's a lot of them.
And we can talk through what they are, but just to put in everyone's head here: kind of like climate change, we all wanted globalization. I'm a huge fan of globalization. We all hate climate change. We wish it hadn't happened. You cannot have one without the other. And the fact is that we were so focused on growth, and all of the powerful forces were: let's have more stuff, let's get more GDP, let's extend our lifespans, let's improve our education, let's take people out of abject poverty, all of which are laudable goals, some more, some less, but things that we all like. But there were consequences that no one dealt with, that no one cared as much about, because they're not as directly relevant to us as the shiny apple that's right in front of us. And that is what is about to happen in exponential fashion with artificial intelligence. - All right, so we've got the shiny object syndrome.

How will AI affect the job market? (14:29)

Myself included: I am deploying AI in my company as fast as I can, but at the same time, I am very worried about how this plays out. You've already touched on jobs. I'm not super worried about that in the three-to-five-year time horizon, though I may be a little more worried about it than you. I gave a similar speech to my company, which is: I have literally zero intention to get rid of anybody, but I do have the expectation that all of you are gonna be learning how to use AI. And I know that that is going to mean I'm gonna get efficiencies out of my current workforce, which means I won't be hiring additional people. So while the people I have are safe, it certainly creates instability for people in terms of looking for a new job, that kind of mobility. I don't think companies are gonna be scaling their hiring as quickly. But my real question for you, given that you have a global perspective, which I've come to late in the game. And for long-time viewers of mine, I will just say the reason I've become so obsessed with this, and you and I were talking about this before we started rolling: I come at everything from the perspective of the individual. And I think that culture and all these knock-on effects are all downstream of the individual. And if we want a good society, we have to be good individuals, but we have to take the time to say, what is that? Like, what are we aiming towards? What's our North Star? What are we trying to get out of this? So for me, the punchline is human flourishing. I won't spend time in this interview defining what that means; certainly my listeners have heard me talk about that before. But what do you think about? I assume, given the talk that you just gave, you will roughly say something similar. We want good things. We wanna pull people out of poverty.

People who face being left behind (16:07)

We wanna clean up the environment. There are gonna be a lot of things we wanna do that I think more or less are about human flourishing. What then is the collision with a new technology like AI becoming so ubiquitous in an unregulated fashion that gives you pause? Is it US-China? Is it a rogue actor making bio-weapons? Like, what's the thing when you look near term, we'll say the three-to-five-year time horizon? What gives you pause? - So there are a few things. And even though I said I don't think I'm going to fire anyone because of AI, I do worry that the same populist trends that we have experienced in the developed world, in particular over the last 20 years, can grow faster. If you are living in a rural area, or you're undereducated, you're not going to become AI-adjacent in the next five years, 10 years, in the United States and in Europe. And those people will be left farther behind by the knowledge workers that have that opportunity. And so I'm not saying that they're gonna have massive unemployment, but I worry about that. - What do you think about, like, picking fruit and stuff like that with robots? Does that make your radar for anything near term? - Again, not so much. Now let me tell you why I say no about that. When I think about what CEOs do with their workforces, generally they take those productivity gains, they pocket them, they pay out good bonuses to themselves and to their shareholders, maybe they invest more in growth. But as long as growth is moving, they're not getting rid of a whole bunch of people. They like the people that they have. They're always thinking the trees are gonna grow to the heavens. And then when they face a sudden contraction, a recession or, even worse, a depression, then suddenly they look at everything around them and say, okay, where can we cut costs?
And if, suddenly, a lot of those workers aren't as efficient as they used to be and you get new technologies, suddenly it's not like you're incrementally getting rid of people every year; it's that you've taken a huge swath out of the workplace. So I don't think that that's going to happen suddenly in the next few years, because we're coming out of a mild, narrow slowdown right now, and the next few years should look better. I think more about what happens the next time we're in a major cyclical downturn, combining that with where we've gotten to with the AI productivity buildup at that point. But I still think that in the interim you're gonna have people that aren't gaining the productivity benefits from AI inside Western economies. And those are the same people that have been hit by the fentanyl crisis. Those are the same people that haven't had good investments in their educational systems. Then around the world there are the digital have-nots, the people that aren't even online, so they won't be able to use these new AI tools to improve their knowledge, to have access to better doctors. So they'll be left behind this new turbocharged globalization. And that's a lot of sub-Saharan Africa, first and foremost. So I do think that there are two groups of people, even in the next five years, that will suffer comparatively and will be angry politically and will create social discontent. So I didn't mean to imply that I didn't care about that or that I thought it was off the screen. It was more that, as a firm of literally 250 people, like, we're tiny. And if you tell me that we're going to have a lot more efficiency, I wouldn't actually hire less. I'd hire more, because I wanna get to 500 people faster. Like, there are just more things that I wanna do without taking any outside investment. But that's a tiny, tiny issue compared to the other stuff we're talking about.
The things that I'm probably most worried about in the near term, three years, let's say, I'd say are three buckets. The first is the disinformation bucket. The fact that inside democracies, increasingly, especially with AI, we as citizens cannot agree on what is true. We can't agree on facts. And that delegitimizes the media. It delegitimizes our leaders and both political parties, or the many political parties that exist in other developed countries. It delegitimizes our judicial system, rule of law. It even delegitimizes our scientists. And you can't really have an effective democracy if there is no longer a fact base. I mean, we're seeing it right now in a tiny way with all of these indictments of Trump. And it doesn't matter what the indictments are. It doesn't matter how many there are. It doesn't matter what he's being indicted for. What matters more to the political outcome is whether or not you favor Trump politically.

Micro targeting for mind control (21:09)

If you do, then this is politicized, it's a witch hunt, and Biden should be indicted. And if you don't, then Trump is unfit, and every indictment, it doesn't matter what it is, before you even get a result of it, then he's guilty. And that, with AI, becomes turbocharged. - I wanna get into why that happens. So my first question on that is: it's definitely pre-AI, because I think this started breaking down with social media. - Great. - How, prior to social media, do you think we were able to come to a consensus on truth? - Well, a couple of reasons. One is that a lot of people got their media from either the same source or from overlapping and adjacent sources. So you had more commonality to talk about politics, to the extent that you talked about politics. Second, it was mostly long form. So you would read a newspaper article, you would listen to a radio show, you would watch a television show; you weren't just getting the headline. 'Cause today, if you go on CNN or Fox News on their website and don't look at the headlines, just look at the pieces, the pieces actually overlap a fair amount. If you look at the headlines, and then if you look at what headlines you're being filtered to, then the news that you're getting is completely different. So I think that's a reason too. And of course, the fact that people are spending so much more time intermediated by algorithms means they're spending less time randomly just meeting their fellow others.
And that's even true with the rise of things like dating apps. I mean, as opposed to just happening to date someone you were in high school with or in college with, or someone you meet at a bar, if you're meeting that person through a dating app, you're already being sorted in ways that will reduce the randomness of the views that you're exposed to. So in all sorts of tiny ways that add up, that are mostly technologically driven, we become much more sorted as a population. And then you put AI into this, and suddenly this is being maximized. So let me give you another example. You'll remember, I think it was David Ogilvy, the great advertising entrepreneur, who once said that we know that 50% of advertising dollars are useful and 50% are useless; we just don't know which 50%. And of course, now we know how to micro-target. Now we know that when we're spending money, we are spending it to get the eyeballs of the people who are going to be affected by our message. They will be angered by it, they will be titillated by it, they will be engaged by it, they will spend money, they will become more addicted to it, all of those things. And when you do that, you more effectively sort the population, as opposed to throwing a message at the wall where everybody gets the message. And so it is not the intention to destroy democracy. It is not the intention to rip apart civil society. It is merely an unintended secondary effect of the fact that we've become so good at micro-targeting and sorting that people no longer are together as a nation or as a community. And AI perfects that. AI allows you to take large language models and predict with uncanny capacity what the next thing is. And the next thing for an advertising company is how I can effectively target and reach that person, and not the other person who doesn't care about my message. - Yeah, and keep them engaged. So let me give you my thesis on this.
This, I think, is one of the most important things for us all to wrap our heads around. I've thought a lot about why there is a sudden breakdown in truth. And the more I thought about it, okay, what is true? How can we go about proving it? The reality is that so much of what we perceive to be true is merely our interpretation of something. So you're gonna get a perspective on something built around what I call your frame of reference. Your frame of reference is basically your beliefs and your values that you've cobbled together sort of unknowingly throughout the course of your life. It becomes a lens through which you view everything, but it is a very distorted lens that is not making an effort to give you what is true. It's making an effort to conform to the things you already believe are, or ought to be. And so when people confuse that for objective reality, then you have a problem. And so when you introduce AI, well, one, when you introduce algorithms, you get massive fragmentation. Now I can serve you just the things that you're interested in. So, like, if you go to my feed, you're gonna niche down into really weird things around video game creation, which is something that I'm very passionate about, that somebody else isn't gonna see. And so you already get that fragmentation. You layer that on top of your perspective, which comes with those pre-distortions, and then you layer on top of that the fact that the algorithm has an agenda that may not match your agenda. And now, all of a sudden, you get into these echo chambers that are feeding your same perspective back to you.

Echo chambers promote perspective, tribalism & desired outcomes (27:04)

They're eliminating nuance by giving you, like you were talking about headlines earlier, this is the talking point. And so now everything becomes predictable. If I know you're on the left, I know where you're gonna fall on a basket of concepts. If you're on the right, same basket of concepts, I know where you're gonna fall. And so once you get rid of that nuance, now all of a sudden, again, we're not optimized for truth, we're optimized for party line. And that then feeds into a sense of tribe, and I belong, and ease of thought, quite frankly, which is one of the things that scares me the most. It's like, oh, I don't have to think through that issue myself. I just need to know what my party line is. Cool, got it, and now I go. And as we get more and more fragmented, now it becomes: okay, I know what my party line is in my very deep fragment here, but I don't know what's true, and I no longer even know how to assess what's true. In fact, I probably think, again, because that distortion reads to me as objective reality, that I think it is true. And so now you have all these people who are like, this is true. Like, there's nothing you could tell me that will make me think any differently, because I believe this to be true. And so now the question becomes, if I'm right that truth is perspective and interpretation, and you're soaked in the perspective and interpretation of others so they reinforce it, so it becomes perspective, interpretation, and reinforcement, and that becomes quote-unquote truth. Outside of science, well, no, because even in science we run into the same problem. So what do we do? - We run into the same problem in science. - Yeah. So in a world like that, the only way I can think to get to the other side of this quagmire is to go: I want to achieve this thing, and I'm going to state, this is my desired outcome. This is the metric by which I will determine whether I have achieved said outcome.
And then, instead of asking what's true, I just ask: what moved me closer to my goal? Is there any other way around that that you see? Or is this just a one-way street to fragmented catastrophe? - No, there are lots of ways out of it. We're just not heading towards any of them. I mean, look at your Twitter feed, or your X feed: you've got the people you're following, and if you're willing to spend the time, you can curate a Following feed that has people of all sorts of different backgrounds and inclinations from all over the world, and I do that. But it takes a lot of time and effort, and you need expertise to be able to do that. You have to be able to research and figure out who those people are. You have to know some people in the field. Most people don't do that. But of course, the For You feed is much more titillating. The For You feed is very entertaining. It engages you, it angers you. And it soothes you at the same time. You want more of that. And that, of course, is driving you exactly in the direction you just suggested. Now, a lot of people will say, "Well, okay, you watch CNN all the time. You should watch some Fox as well." No, that's not the answer. The answer's not watching Fox, 'cause you will just hate-watch Fox, because you've already been programmed to believe that everything the people on the other side are saying is false, and so they're all evil. And so all that's doing is validating your existing truth. No, what you really need to do, and I tell young people this all the time: if you really wanna understand and get outside what's happening in the United States ecosystem, watch the CBC or Al Jazeera or Deutsche Welle or NHK in Japan. Just watch their English-language news once a week for half an hour, an hour. It's not very exciting, but it's a completely external view of what the hell is going on in the United States and the rest of the world. And that forces you... first of all, it's long form. It's not the headlines beating you down.
And secondly, it's like you don't actually have your anchor of all of the things that are stirring you up. They're not even playing with that. They're just kind of reporting, the best they can tell, on what the hell is going on. And then they're occasionally talking to people that are locals and whatnot, but from every side. That's very valuable. But here's the thing that worries me about AI. I don't believe that AI is becoming much more like human beings, that they're faking us out by being able to replicate me. I think what's actually happening is technology companies are teaching us to engage more effectively like computers. I mean, you and I in person, in a conversation, in a relationship, a work relationship, a friend relationship, a sexual relationship, whatever it is, there's nothing a computer can do that can tear us away from that.

The human-like power of AI (32:05)

But if we spend our time increasingly in the digital world, where all of our inputs are algorithmic, well, computers can replicate that very easily. And so if they can only make us more like computers, then no, it's not like The Matrix, where they want to feed off us as fuel. It's much more that we're very valuable in driving the economy if you give us all of your attention and data. And that is the way that you create a maximal AI economy. It also happens to be completely dehumanizing. Because we all know that human beings are social animals. We know if you stick us in a room or you stick us on a desert island, we're gonna, like, engage with each other, talk to each other, figure out things about each other. Doesn't matter what color we are, what sexual orientation we are, we will figure it out if we're stuck, if we have no choice. But if you take us and you use our most base, most reptilian impulses, and you monetize those so that we're the product, oh no, no, no, then you lose everything we built as human beings: all the governance, all the community, all the social organizations, the churches, the family, the things that matter to us. We're losing the things that make us rooted and make us sane and make us care and make us love. I mean, flourishing, flourishing starts right here. It starts at home. It doesn't start online. Those are tools that we need to use to create wealth, but you can't flourish if you don't have real relationships. That strips away the essence of who we are as people. And yet we are all running headlong away from flourishing. - Yeah, so the only thing I'll take exception with there is the sense that we're running away from it. I think there are natural... exactly, that feels more right to me. - That's right, that's a better term for it. I agree.
- One of the things that I feel like is really falling apart, and this is the thing I don't have a good solution for, is shared narratives. So Yuval Noah Harari talked about this very eloquently, and he said, look, there are other species that can coordinate in massive groups as big, if not bigger, than the way that humans can do, but we're the only ones that can coordinate in these huge groups flexibly. And he said the way that we create that flexibility is through shared narratives. Now, they have historically come most compellingly through religion. And as religion changes, I resonate with the language that God is dead, Nietzsche's sort of interpretation of that, which can rankle some people. So I'll just say that the tenor of it is changing, that in a world where I think a lot of people have alternate belief systems or things they gravitate towards, or aren't even necessarily thinking about religion, I think there's a God-shaped hole in all of us. And I am not a believer, as my longtime listeners will know, but I acknowledge that I have a God-shaped hole in me that I need to fill with meaning and purpose. And as we fragment, so going back to this idea, as we fragment, this gets very scary because we don't have shared narratives anymore. And so now we're not necessarily cooperating in as large groups, where at least before we would have the narrative of the nation. And so we had something that we could galvanize around, but obviously with the rise of populism cyclically throughout history, it's not like just now. But whenever that rears its ugly head, then some very dark things can happen.

AI Truth Bias (36:12)

But on the flip side of it. And so I'll say that's like a hyper shared narrative, right? An injustice has been done to me, and the other person did it, and we need to rise up against them. Okay, cool, a shared narrative can get dark, but you can also have the other side, where there is no shared narrative, and you are now, to your point, being pulled in a direction that doesn't unite us, but only fragments us further. And I'll plug into that the reason that I don't look at that and go, oh, we just need to then come up with a shared narrative. In fact, I'm gonna put this in the framing of your book. You open your book, The Power of Crisis, with the story of Reagan and Gorbachev, and Reagan says to Gorbachev, hey, this is like at the height of the Cold War, if the US were invaded by aliens, would you help us? And Gorbachev said, yes, absolutely. And that idea of, okay, there are things that we could rally around that take us out of our smaller narrative into a larger narrative, hence the title of the book, The Power of Crisis. There is a thing that can bring us together and give us that shared narrative. But what scares me is if you plug AI bias into this equation. Yeah, now I'm like, whoa, like one, who gets to decide what the AI's value system is, what the AI's belief system is, how the AI interprets truth, what the AI reinforces? And then if there are a lot of AIs, which is probably the thing that protects us from an authoritarian answer, but at the same time, then you have all this competing reinforcement that again just brings us back to fragmentation. So as you look at that suite of unnerving potential problems, what do you see as our path to the other side of this? To doing it well? - Yeah. So President Biden just two weeks ago had a group of seven AI founders/CEOs, the most powerful companies in this space. As of right now, that will not be true in a year or two. There'll be vastly more. Some of them are hyperscalers.
Some of them are large language model creators, and some are both. And it was very interesting because those seven companies basically agreed on a set of voluntary principles that included things like watermarks on AI, reporting on vulnerabilities, sharing best practices on testing the models, all of this stuff. And the stuff that, if you looked at it carefully, you'd say, those are all things we want. Those are things that will help protect us from the worst excesses of AI proliferation. Now, on the one hand, not only were they voluntary, but they were super undefined, in ways that every company that was there could already say, we're doing all of those things, we don't need to spend any more money on them. But I am told those seven companies are planning on creating an institution that will meet together and will work on advancing those standards and defining them more clearly. We'll see where that goes. But also, I mean, as more companies get in the space, you're creating an expectation in the media, in the government, in the population that these are things that they're committing to. And so increasingly, other companies will also want to show that they're doing that. And maybe there will be some backlash if they're not effective at doing so. But what was interesting to me about that initial meeting is the White House convened it, but they didn't actually set the agenda really at all, because they don't have the expertise. They don't have the technology. They don't know what these tools do. I mean, they're trying to get up to speed and hire people as fast as they can, but they're not gonna be anywhere close to these companies. And what I think needs to happen in short order is that you're gonna need to create an approach that marries these things. You'll need the tech companies to have these institutions that they are involved in standing up, but the governments are going to need to work with them.
And they're gonna need to have carrots and sticks. There'll need to be licensing regimes, like we see for financial institutions. There are gonna need to be deterrence penalties. They'll need to be responsible for what's on their platforms, and if they're used in nefarious ways, there's gonna have to be penalties that could include shutting them down. And there are also some carrots that they should have as this becomes a field of thousands and thousands of companies. There are proprietary data sets that the US government and American universities have access to, with which you can drive massive wealth with AI. And maybe those will become public data sets that any AI company that's licensed can potentially use. I mean, all of this needs to be created, but we are nowhere on this right now. And AI, like, we've been hearing about it for 40 years, but suddenly it's exponential. And exponential is not like Moore's Law exponential. It's not a doubling every 18 months. It's like 10x in terms of the size and the impact of the data sets every year. So we don't have years on this. And that's why the urgency, that's why, I mean, I've completely retooled, you know, our knowledge set to focus on what's the impact of AI on geopolitics. I mean, in the last year, because I've never seen anything that's had so much dramatic impact on how I think about the world and how geopolitics actually plays out. And so far, you and I have only talked about the disinformation piece and a little bit of the jobs piece. We haven't talked about what's probably the most dangerous piece, which is the proliferation piece, things like hackers and, you know, developing bio-weapons and, you know, viruses that can kill. I mean, I'm sure you've heard this.
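To make that contrast concrete, here is a minimal sketch comparing the two growth rates mentioned in the conversation, a doubling every 18 months versus 10x per year. The rates are illustrative figures taken from the discussion above, not formal industry measurements:

```python
# Compare Moore's-Law-style growth (2x every 18 months)
# against the 10x-per-year scaling described for AI data sets.
# Both rates are illustrative, as stated in the conversation.

def moores_law(years: float) -> float:
    """Growth factor when doubling every 18 months (1.5 years)."""
    return 2 ** (years / 1.5)

def ai_scaling(years: float) -> float:
    """Growth factor when growing 10x per year."""
    return 10 ** years

for y in (1, 3, 5):
    print(f"after {y} years: Moore's Law ~{moores_law(y):.1f}x, "
          f"10x/year ~{ai_scaling(y):,.0f}x")
```

After five years, the 18-month doubling yields roughly a 10x gain, while the 10x-per-year rate yields 100,000x, which is the gap behind the "we don't have years on this" urgency.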

Criminal Malware (42:37)

I've heard from friends of mine that are coders, in past weeks, that they cannot imagine coding without using the most advanced AI tools right now. 'Cause it's just a world changer for them and how much they can do. I don't know any hackers, but I'm sure that criminal malware developers are saying, I can't imagine developing criminal malware or spear phishing without using these new AI tools. Because, I mean, it's just going to allow them to target in such an extraordinary and pinpoint way, and also to send out so much more, you know, sort of capable malware that will elicit so much more engagement, and therefore, you know, bring so much more money to them, or shut down so many more servers, and give them so much more illicit data. And so much of the illicit data that they've already collected from the hacks on, you know, all of these companies that you've heard about, Target, for example, other firms, I mean, so much of that so far is just, oh, we're just selling that to people that wanna use the credit cards. No, now you're gonna sell it to people that are empowered with AI, that can generate malware against that data. And, again, it's like, we're gonna develop all these new vaccines and new pharmaceuticals that'll deal with Alzheimer's and deal with cancers, and it's gonna be an incredible time for medicine, but we'll also be able to develop new bio-weapons that will kill people. And that's not gonna be just in the hands of North Koreans or Russians in a lab. It's gonna be in the hands of a small number of people that intelligence agencies are not yet prepared to effectively track, right? There's a reason why we don't have nuclear weapons everywhere. It's because it's expensive, it's dangerous, it's really hard. I mean, imagine the bio-hackers thinking back to the days when, oh my God, you know, how hard it was, like, you know, you'd have to actually mix this stuff in a lab, you could die yourself.
I mean, now we can do all this on the computer. The quaint old days, you know? So yeah, I worry deeply about the proliferation of these incredible tools used in dangerous ways. And we are not going to be able to allow the slippage that we have had around cyber tools, that we have had around terrorism and their capabilities. We're gonna need to get, like, you know, our net, our filter is gonna have to be incredibly robust. - Do you have a sense of how we pull that filter off? - Well, part of it is, as I say, a hybrid organization. So there've been some people that have spoken about an International Atomic Energy Agency model. So it'd be an international AI agency model. I think that won't work, because that implies a state agency with inspectors that have a small number of targets that they're engaging in those inspections on. I don't think that works. I think what you're gonna need is an agency that involves the tech companies themselves. And so, you know, if you're developing an AI capacity in your garage, if you wanna use that anywhere, it's gonna have to be licensed. If you've got software that's going to run AI, it's gonna have to be licensed. And the tech companies that are running these models are gonna have to police that in conjunction with governments. So this is, I think, a new governance model. I don't think it will work with the governments by themselves, because they won't have the ability to understand what the capabilities of these algorithms are, how fast they can proliferate, what they can do, how they can be used dangerously. But the governments are the ones that are gonna be able to impose penalties. They will have the effective deterrent measure. I mean, Microsoft, Google, Meta, these companies are not, what are they gonna do? They'll throw you off their platform. No, no, that can't be the penalty for developing a bio-weapon. You're gonna need to be working together around this.
And together, not just in the sense that the company hands over the information to the government; the agencies are gonna need to be much more integrated. - So here's one thing that I've been thinking a lot about. I'd be very curious to get your feedback on this.

Bitcoin (47:13)

So I'm definitely somebody who is a big believer in Bitcoin and what's going on in cryptocurrency. But as I look at it, I'm like, ooh. The thing that makes me believe in Bitcoin specifically is that it's the closest thing to a digital recreation of an exploding star. For people that understand how gold became valuable across a bunch of cultures throughout time, it's because it doesn't mold, it doesn't rot, and it can only be generated by an exploding star. So there's no way to fake it, there's no way to make more. - I'd say it's interesting. - Yeah, so you have this thing that's very good about carrying wealth across time and space. It isn't that it is inherently valuable, like people say, oh, but you can make jewelry and stuff. Yeah, but if we don't care about jewelry, then that never becomes a thing, and there's no reason that we should care about gold jewelry. - Yeah, I mean, the industrial uses of gold are utterly marginal to its utility as a currency. I agree. - Exactly. So along comes Bitcoin, which is the same idea: there is a finite amount of it, you can never make more, it's the sort of computer equivalent of the exploding star. And it's better about going across space. So maybe it's equal to gold in terms of going across time, but it's certainly much easier in terms of going across space. So I'm like, okay, cool, I really believe in that. But as you create that, you now have alternatives to government fiat currencies. And that is a slight weakening of their power. They're gonna obviously push back on that. And so we'll see how that sort of plays out from a regulatory perspective, whether they just get in on it and start buying it, or whether they get very anti it. I think that's yet to be determined. But when I think about the things that will weaken the government's hold on things, the next thing that comes into the picture is just the government's absolute inability to stay on top of AI.
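The "finite amount" point can be made concrete: Bitcoin's protocol halves the per-block subsidy every 210,000 blocks, which bounds total issuance just under 21 million coins. A minimal sketch of that schedule, using the well-known protocol constants (this is an illustration, not consensus code):

```python
# Sum Bitcoin's maximum possible issuance from its halving schedule.
# The block subsidy starts at 50 BTC (counted in satoshis) and is
# halved every 210,000 blocks until it rounds down to zero.

SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

def total_supply_satoshis() -> int:
    subsidy = 50 * SATOSHIS_PER_BTC  # initial block reward, in satoshis
    total = 0
    while subsidy > 0:
        total += subsidy * BLOCKS_PER_HALVING
        subsidy //= 2  # integer halving, as in the protocol
    return total

total_btc = total_supply_satoshis() / SATOSHIS_PER_BTC
print(f"maximum possible supply: {total_btc:,.8f} BTC")
```

Because the halving is integer division on satoshis, the sum lands slightly under 21 million BTC, which is the hard cap the "you can never make more" argument rests on.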
And so now you've got, oh, we're already having to lean on these companies. And so if it becomes the most powerful tool, the most dangerous tool, and it's not controllable by governments in the way that nuclear weapons are, that's another weakening of their power. And so now you start getting into these two paths before you. You get Balaji's, I don't know if you know who Balaji is, but you get his idea of the network state, where it's a non-geographically bound grouping. So going back to that idea of shared narratives. So people share narratives from all over the world. They come together, they have digital currency, they can sort of make their own rules and laws. And then the other one is the authoritarian version, where it's like, we just grab a hold of all of this, it is top down, and you're going to adhere or life is going to be brutal. Obviously that would be China's take. But both of those aren't ideal for me as a child of the 80s, where it's just like, oh, this is so stable and wonderful. So one, do you think that those are the two most likely poles, or is there something in the middle that's more likely? - Yeah, so I agree with you that Bitcoin and crypto represent a similar kind of proliferated, decentralized threat to governments as AI.

Crypto (50:38)

Having said that, the amount of crypto in existence and being used, compared to fiat currencies, is de minimis. And I do not think that there is any plausible threat of scale against fiat currencies in the next, say, five years. And I do believe that if it became a threat of scale, every government in the world that matters would do everything they could to ensure that they continue to have a regulatory environment that maintains fiat currency as dominant. And they'll lean into stablecoins, they'll lean into the technology, but they will want to have control over it. China, obviously, I mean, you've got WeChat and lots of digital payments that work, but you have to use the digital RMB. They refuse to have currency that they don't have control over, because they want the information set, they want the political stability. In the United States, it's also the importance of having the dominant reserve currency globally, which matters immensely to America's ability to project power, to maintain our level of indebtedness, all of these things. To weaponize finance, to declare sanctions and tariffs, to get other countries to do what we want, to align with us. So given that, I think the timeline for AI being fundamentally transformative in governance is minimum two to three years, maximum five to 10. I'll say one thing here: I mean, even climate change, which is huge and in front of us, and trillions and trillions of dollars of impact, and changing the way everybody thinks about spending money and governance and where they live and all of that, climate change in many ways is slower moving and slower impact than what we're gonna see from AI. Like, I think AI is gonna have much more geopolitical impact in the next five to 10 years than even climate will.
And that was one of the things that, when I wrote the book, The Power of Crisis, and that was before AI really took off, for me, each of the crises I was talking about was becoming larger and more existential. And I started with the pandemic, because I was writing kind of in the middle of it, and then I moved to climate, and then I moved to disruptive technologies and AI, and people were saying, how could you not put climate, you know, as the big one? I'm like, well, because climate, first of all, it's not existential. Like, we are actually on a path to responding to climate. It's just gonna cause a lot of damage. And we're gonna end up at like 2.5 degrees, 2.7 degrees warming. And it's also gonna happen over like the next 75 years. And we'll probably be at peak carbon in the atmosphere at around 2045, or peak carbon energy use, excuse me. And then a majority of the world's energy starts coming from renewable sources. And that's an exciting place to be. Where with AI, like, we don't have 50 years for AI. We don't have 30 years for AI. Like, you know, we have five, 10 years to figure out if we're gonna be able to regulate this or not, and if it's going to look more techno-utopian or if we're not here anymore. Like, I mean, honestly, I haven't really said this publicly, but we're having a broad enough discussion. Like, how old are you? - 47. - Okay, I'm 53. I think that, knock on wood, I don't think that either of us is likely to die of natural causes. I think at our age, we are probably either going to blow ourselves up, you know, as humans, or we're going to have such extraordinary technological advances that we will be able to dramatically extend lifespans, in ways that are, I mean, you know, dealing with cell death and molecular destruction and genetic engineering.
And I mean, just looking at what is ahead of us over the next 10, 20 years, this does not feel remotely sustainable. But that doesn't mean it's horrible. That means it's one of two tail risks. And I just can't tell if it's the great one or the bad one, but to the extent that I have any role on this planet, I'd like to nudge us, as I know you would too, in the better direction. And that means getting a handle on this technology and working to help it work for humanity, with humanity, as opposed to, you know, not against it, but, you know, kind of irrelevant to it. We don't want technology that does not consider human beings as relevant on the planet. - No, I agree with that.

Governmental responses & censorship (56:21)

The thing that I think we're gonna have to contend with, though, is what is the governmental response going to be to the potential of their weakened power? So we know how China is dealing with it. So it was really amazing to watch China open up the capital markets and really just explode. And in your book, you talk about this, and I found it a really interesting insight that forced me to reorient my thinking about what China did. And so, if you've read Mao: The Untold Story, it's just devastating to see how much death and destruction came out of an authoritarian government. And then at the same time, you're like, I don't know that America's approach is always the right, the most optimal answer, I forget the exact words you used, to every problem. And what you pointed out with China, when they opened up, just the growth rate was pure insanity and is really pretty breathtaking. But they learned from the collapse of Russia exactly what not to do, and now they're clamping back down. Now, as somebody that grew up in the US, man, I look at that, and I'm just like, dude, I don't like that. That freaks me out, the thought of always being on that razor's edge of, like, the individual doesn't matter, and we can just completely obliterate you. But then I watch, not even the government necessarily in the US, but the people in the US giving up on free speech, which, as I think about it, is like the one thing that you just can't let go of, if you want the individual to matter, and I think if you want to get to the quote-unquote right answer, you have to have free speech. Like, even in my own company, where it would be very tempting to run my company in an authoritarian way, I just know I have too many blind spots. So I'm constantly trying to get the team to be like, hey, say whatever you need, whatever you believe to be true. If what you believe to be true is that I'm an asshole and I do not know what I'm doing, you need to be able to say that.
Now, I'm gonna push you to articulate why. I don't want some emotional statement. I want, like, give me, going back to truth, right? What is our goal? What's the metric by which we determine whether we're getting towards our goal? What can you show me in the math that shows that I'm doing this the wrong way? And then, you know, what's your take and why do you think it's gonna work better? But when I look at just the instability of that on both sides, so you have authoritarian rule where we just obliterate it, as soon as we don't feel like the government's in control, we kidnap, those are my words, Jack Ma, re-educate him and then put him back forward, terrifying. Or on our side, where it's like, no, if you say something I don't like, you 100% should be canceled, going back to what you said about Trump. So how do we, as two people that wanna nudge this in the right direction, what's the right pressure point? Is it the government? Is it the individual? Is it the algorithms? Is it making sure that AI has the right biases?

Analysis Of Global Situations

China racing to catch up (59:34)

Like, what's the right pressure point? - I don't know that the right biases are the issue. I mean, you know, again, there's a lot of whack-a-mole going on, tweaking these models as you roll them out. I think it is more in trying to ensure that you have clarity and transparency in what these models are doing, and then the data that's being collected, as it's being collected, has to be shared. These are experiments that are being run real-time on human beings. And we wouldn't do that with a vaccine; even in an emergency, we would have a lot more testing. We wouldn't do that on a new GMO food, because we'd be concerned about, you know, disease, cancer, you name it. But we're doing that with these algorithms. That's very interesting to me, and a little chilling, that the Chinese, who have done everything they can in the last 20 years to catch up to, and in some areas surpass, the Americans in new technology areas, they look at AI and large language models and they've said, "Okay, we're going to have control over these. We're gonna have full censorship over these. We're not gonna give them data sets that they can run on in public," because they think it's too dangerous. And that means that the LLMs that the Chinese are running right now are crap. They're nowhere near as good as what the Americans presently have. And that's because the Chinese are willing to accept the economic disadvantage to ensure they have the political stability. And I think that the United States, again, we're not gonna be able to simply stop this progress. The progress is gonna happen. There's too much money. It's too fast. We don't know what we're doing as a government in response. And also there are too many things we're focused on. Yes, you're focused on proliferation, but what I see as fake news and what I see as disinformation, someone else is saying you're trying to politicize, right?
And then you'll have a whole bunch of people saying, we can't slow down our companies because we need to beat the Chinese, who are gonna be the largest economy in the world. Just like Zuckerberg did with Facebook 10 years ago. And for all of those reasons, I don't think you can slow this. I don't think you can stop it. I think what we need is a partnership between the technology companies and the governments. And that is gonna have to be regulated at the national level. It's gonna have to be regulated at the global level. By the way, the financial marketplace is not so radically different from this. You have algorithms, trading algorithms, that run, and they need to be regulated, because you wanna know that certain types of trading are not allowed and other types are. And the 2008 financial crisis, when it hit, even though it started in a small part of the economy, we were all worried, oh my God, this could explode the whole economy. What happened? All the banking CEOs and the chairman of the Fed and the secretary of the treasury, they got together and said, okay, what are we gonna do to ensure the system can stay stable and in place? And that happened in real time. And one of the reasons it works relatively well in the financial space is because the central bank governors are technocratic and somewhat independent from government. Like, they know that you wanna avoid a bad depression, a market collapse. They know that you have monetary and fiscal tools that you can use to respond. We're going to need to create something like that in the technology space. We're going to have to create regulators who are in government but are working directly with the tech companies as partners, to avoid contagion, to respond immediately to crises when they occur. And they won't just lead to market collapse. They could lead to national security destruction. They could lead to lots of people getting killed. But it's gonna be the same basic kind of model.
And we gotta start working on that now.

The Goldilocks crisis of our age (01:03:29)

- All right, so let's talk about the central thesis of your book. So, using my words, the book kind of wants for a crisis, hence the title, The Power of Crisis. You call it the Goldilocks crisis, something that is devastating enough that people stop and pay attention, but not so devastating that we can't respond well to it. Is that the only way to get people to act, to cooperate in the way that we would need to cooperate? And when you think about the ideal state of the world, is it globalized or sensibly de-globalized? - First of all, it's a great question. And it's not like you can never make progress outside of crisis. Progress happens all the time outside of crisis. We see new legislation that gets passed. We see new companies that are started. We see good works by people for other people on the street. But it's one thing to say, can't we get the progress we need? In a family you can, in a community you can. When you're working together well, within an alliance, you frequently can. In what I call a G-Zero world, where there's not a level of functional global leadership, where countries aren't working together well, they don't trust each other, they don't have the institutions that align with the balance of power today. So it's not a G7 or a G20. It's really an absence of global leadership. I think in an environment like that, by far the most likely way to get an effective response, just like with the Soviets versus the Americans, Reagan versus Gorbachev in the opening of my book, is if you have a crisis, if the aliens come down. And it turned out that the pandemic wasn't a big enough crisis. Didn't kill young people.

Why America 's COVID response was a mess (01:05:39)

It wasn't. I mean, look at what happened. The Americans pull out of the World Health Organization, the Chinese lie to everybody about it not being transmitted human to human. The relationship got worse between the two countries. The Americans, we didn't provide vaccines to the poor countries around the world, even though we had people in the United States that didn't need them, that had already taken them and were waiting on boosters. Like, it was a complete clusterfuck, pardon my French. And it's because it didn't feel like an existential crisis. It wasn't big enough to force us to cooperate to a greater degree. January 6th in the United States, I mean, maybe if Pence had been hanged, maybe if, I mean, God forbid, you know, members of the House or Senate had been killed or injured or kidnapped for a period of time. But as it stood that evening, a majority of Republicans in the House voted not to certify the outcome. Why not? 'Cause they're focused on their jobs, because they knew it wasn't a constitutional crisis. They knew it wasn't a coup. So I do think that in this environment, in a dysfunctional governance environment, where people don't trust each other at the highest levels that are in power, where we don't have the institutions that are proven to work to respond to the crises in front of us, yeah, we need a crisis. And the good news is that climate is clearly not only a big enough crisis, but also one that humanity, I think, is up for. And so that is forcing us. Every year, we are radically exceeding what the International Energy Agency predicts in renewable energy production and reduced cost. Every year for decades now, we've been exceeding that. And that's because this crisis has been big enough, and it's affecting everyone, to mobilize our assets into action. And the question is, is AI a crisis that we can actually effectively respond to? There's no question.
The size is suitably great that it should motivate us. And when I talk to government leaders around the world today, they are focused. They are focused on it. They're focused on it because of the size of the crisis, but also, it's very interesting, so in the US government, it's not just because they're suddenly all experts in AI. It's also because the three things that they are most concerned about, their national security priorities, which are confrontation with China, the war between Russia and Ukraine and the proxy war with the Russians, and the threat to US democracy, they think, and they're right, that all of these are dramatically transformed by AI developments. So not only is AI coming as a big new thing, but also all the things they're already worried about, spending a lot of time and money and blood on, are things where they better figure this out or they're in trouble. So I do think the motivation to get this right is gonna be there. I just, I hope we're up for it. And again, I'm an optimist. I'm hopeful. I mean, at the end of the day, the fact that we're here and we're talking about it means that we're capable of doing something. - My only fear is that with global warming, you can't win global warming and get a leg up over China or Russia, but you can win AI and get a leg up and be better. And I think that one thing that people aren't talking about enough, for sure, is that AI is gonna be an adversarial system, meaning bad guys are gonna have AI and they're gonna try to do things to hurt me with that AI. And then others are gonna build AI that is protective and try to stop the bad guys. And so you will have, just like with normal hacking, an ever-escalating arms race of AI. And so even if only with the best of intentions, we will end up getting to AI superintelligence because we're trying to stop somebody from doing a bad thing. And this is- - Yeah, go ahead. - I was gonna say, that's a really good point.
And I've given a lot of thought to that, because look, we don't trust the Chinese at all. They don't trust us. They've invested tens of billions of dollars into next-generation nuclear, wind, solar, electric vehicles, and the supply chains for all of that. Now, there are a lot of people around the country that are not particularly focused on climate, but they're focused on China. And they're saying, hey, we cannot let those guys become the energy superpower post-carbon. We've got to invest so that we're gonna be the energy superpower. But the good thing about that is, hey, that's virtuous competition. If we end up investing more so that we're the dominant superpower, that just means cheaper post-carbon energy faster for everybody. But in the AI space, it is absolutely unclear that there is a virtuous cycle of competition if we are not working together. The proliferation risk is much, much greater. I couldn't agree with you more on that point. - Yeah, so now the question becomes, when you look at what we get on the other side of the crisis, the cooperation, the banding together to focus on one problem, does that lead us back to globalization? So we opened this up with globalization.

Australia, Vietnam, Ten Year Lag (01:11:09)

Amazing: we were lifting something like 160,000 people out of poverty every day for like nine years, which is just absolutely crazy, the number of people that we pulled out of poverty. But you get the Rust Belt pushback, the rise of populism. It's not good for everybody. And so we need to really be honest about that. But in this world, let's say that we get the right crisis. What are we steering towards? Is it re-globalization, or is it what I'm calling thoughtful de-globalization? - I think we are trying to move back towards globalization, but thoughtful globalization, where you are using the resources you have to more effectively take care of the people that are left behind, and where you are constantly retooling your institutions and reforming them, because the technologies are changing that fast. And that's something governments by themselves won't be able to do. Again, they'll have to do it in concert with these new technology companies, or governments will have to change what they are. They'll have to integrate technology companies into them. And that scares you. That's more of an authoritarian model, frankly. But you've steered me a couple times now in a direction that historically I'd be very easily steered in, which is to talk about US versus China. And I've resisted it. And the reason I've resisted it, even though US-China is in a horrible place right now and the relationship is getting worse, not better, is that I think it is more likely that within three to five years, cutting-edge AI companies in all sorts of fields will actually be all over the world. I think this is going to be a proliferating technology, for good and for bad. So I'm more concerned about individuals, rogue states, and terrorist organizations doing crazy things, as opposed to the US versus China, both of which ultimately want stability in the system.
But I'm also hopeful that it's not going to be a small number of dominant companies in the United States and China that control all of the next-generation AI. Actually, if you're at a position where you can run a near-cutting-edge AI on your own laptop or on your smartphone, and millions and millions of people have access to that intelligence and can do things with it, I don't think a small number of megatech corporations are going to control it. I mean, they may have platforms that they'll be able to charge taxes on, basically tariffs, but I think so much of the value, both the upside and the danger, will be distributed all over the world. And that's, again, very different from the way we think about geopolitics today. So on the AI front, I don't think the US-China fight is the principal concern to worry about in the next five to 10 years. - Oof, okay, well, this is very interesting. One of the things that you talked about in the book is that when Russia invaded Ukraine, one of the things they did to try to appease the West and keep them calm was like, "Hey, we know you're really worried about hackers, we're gonna go round them up, arrest them." And what happens to the ability to use political means to get these bad actors in line if they are proliferated everywhere and we have varying degrees of ability to influence them? - Yeah, it's one of the reasons why I think you don't have an Interpol model or an IAEA model. It's why I think it's gonna have to be much more inclusive of the technology companies. I keep coming back to this. I don't think that the US government by itself, or the Russian government, would be able to make that kind of a promise as easily. The Russians are a little bit different here, right? If you're an authoritarian state and you have real control of the information space, maybe the vast majority of people working on hacking are under your authority. Maybe.
But if AI really becomes as explosive and as decentralized as I believe it will, then governments by themselves are gonna have a hard time even maintaining control of the AI space. I'm not sure the Chinese model on this is gonna work in five or 10 years' time. Remember, they gave up on the Great Chinese Firewall because it was too porous, and instead what they did was use their surveillance mechanisms, and they had a whole bunch of people online basically nudging Chinese citizens towards better behavior, towards certain things they should say and certain things they shouldn't say. And that turned out to be more effective. If AI becomes a much more decentralized space, it's gonna be much, much harder for an authoritarian state to do that. But certainly it'll be impossible for democratic states to do it. Now, the question you haven't asked me is: does that mean democracy is sustainable? I mean, if the US government feels an immediate national security threat from all these tech companies and can't regulate them, do the Americans start finding the Chinese model on AI much more attractive? I don't think so. And I don't think so because our system is so entrenched, it's so slow-moving, it's so receptive to money. The companies are so wealthy, they have the ability to capture the regulatory environment. Again, never say never. It can happen here. If things are incredibly dangerous, yes, you can take desperate measures. But short of the worst scenarios, I think the United States is closer to kleptocracy than it is to an authoritarian regime. If there's a way that the Americans are going to move away from democracy, it's probably not via a Chinese model, right? - Well, that's horrifying.
That dashes my hope. It's funny, my brain tried to fill in what you were going to say, and your answer is probably more true than what I was hoping you were going to say. But what I was hoping you were going to say was that we have such a strong shared narrative around freedom that we wouldn't make those... He laughs, ladies and gentlemen, he laughs. Yeah, man, I don't-- - Oh my God, that used to be true when my dad was alive, after World War II. I just don't see it anymore. I mean, unless everyone's lying to the pollsters all the time, it just doesn't feel that way. - Yeah, prior to COVID. - I don't think we agree in the United States on what our country stands for. I don't think we do. I don't think we know what our country stands for. There's such incredible cynicism among young people, that they're just being lied to, that it's performative, from their governments, from their corporations, from everybody, from the media. And some of it is very understandable. It's painful, but our economy is doing so well. Our technology is doing so well. We have the reserve currency, and it's not being threatened. We're in a great geography. It's very safe, it's very stable. There are so many things that are great. I saw that Jamie Dimon piece that everyone was talking about, standing up for America, but he didn't talk about our political system. And our political system is deteriorating, and people don't believe in it the way they used to. And no, I've not seen any pushback against that in the last 20 years. It got worse under Obama. It got worse under Trump. It's gotten worse under Biden. It's clearly not just about those people. It's structural. There are a lot of things driving it. And I don't see, I mean, God forbid we had a 9/11 right now. I was here, I was in New York on 9/11. I saw the second tower go down. I saw the way that New York City rallied. I saw the way the country rallied. There was 92% approval for Bush a month after.
And people will not understand how crazy that is. And I don't think that could happen today. I don't think it could happen even with someone who has historically been as much of a unifier as Biden, and it certainly couldn't happen under Trump. And that's really sad. That's really sad. - Do you have a sense of how we unwind that? My thesis on this has been that until there is enough pain and suffering, which unfortunately historically means war, the country won't come back together, right? 'Cause we've obviously been more divided than we are now; we've been in an open civil war in the past.

We Are Dismantling Our Infrastructure (01:19:52)

But whoo, I don't see how you unwind these increasingly divergent narratives of left and right without real suffering. - Well, there was this great book written by a Princeton historian about the three great levelers. And it talked about how societies, whatever the governance mechanism, historically tend to get more unequal, and people with access to power get closer to power over time, unless one of three big things happens: famine, revolution, or war. And that's a little depressing, because it implies that you have to have that kind of serious crash before you come out and create more opportunities for people. But I'm also seeing, coming out of the pandemic, that there was an enormous amount of money spent on poor people. It wasn't just like after 2008, when you bailed out AIG and Lehman Brothers and the bankers. This time around, you bailed out everybody. You bailed out working mothers. You bailed out small and medium enterprises. And it made a difference. And inflation has hit hard, but now, finally, working-class wages are actually growing faster than inflation and than the average wage. And that wasn't true for decades. So maybe there is a bit of a lesson in that. Maybe there is a bit of a lesson when people are seeing that it's the wealthiest, with their legacy advantages, that are getting accepted to the best universities and not others. And there's a backlash against that. And maybe that forces greater transparency. Maybe it turns out that AI, with all the wealth it can generate, becomes more of a leveler for people in the United States, who will have access to opportunities they hadn't had before. Maybe it allows globalization to pick up again, and not everybody's boat will rise at the same speed, but at least everyone's boat will be rising for a while.

New Globalization Opportunities (01:21:59)

Coming out of the pandemic: if we look at humanity as this little ball of 8 billion people, we had 50 years where overall we had extraordinary growth. And if you watched Steven Pinker and Hans Rosling and all of these pro-globalization folks, it is true we created not just very, very wealthy people, but also a global middle class. And anyone looking at the globe without a nationality, just as an average person who doesn't know where they're gonna be born or into what family: would you wanna be born in the last 50 years? Yes, yes you would. And hopefully you win the lottery and you're in the United States, like you and me. But anywhere, that's the time you'd pick. But the last three years you wouldn't, because in the last three years, suddenly, human development indicators have gone down. More people are forced migrants, more people are born into extreme poverty. And people are getting angrier as a consequence of that. Well, I think there's a good chance that with AI, we will have a new globalization that will create far more opportunities, but we need to be very careful about those negative externalities. And so far, and it's very early days, we're not addressing them yet. - Mm. So given all of that, paint a picture for me of the near term, let's call it the next 10 years. The world is shifting and changing; what does the world order look like as we look out into the future? And I'll contextualize that with the things we've talked about here. You've got the war in Ukraine. You've got the dynamic between the US and China being radically upended by the proliferation of AI, which creates potentially powerful, or at least destructive, entities anywhere, making it harder for us to yank the levers of political persuasion. With all of the unique cocktail that's brewing now, how does one begin to conceptualize where the world is heading over the next 10 years? - Well, I can't imagine wanting to be alive at any other time.
I mean, we talk about the Anthropocene, where human beings, for the first time in history, have the ability to actually shape the future of humanity and our role on the planet we're on. That's pretty extraordinary. And what does that mean? I think it means that governments and governance will look radically different from anything we have lived with. We've lived, for all of our lives, 50 years on average now for you and me, in a fairly stable system. The Soviet Union collapsed, the US was in charge, China has had an extraordinary rise, but generally speaking, the global order today still looks more or less like the global order you had 50 years ago. Henry Kissinger recognizes it. He was 50 then; now he's 100. But it feels like geopolitics still functions the way it used to. You've got heads of state, you've got governance, you still have the UN, you've got the IMF, you've got the World Trade Organization, you've got these big things that more or less persist. I mean, I was just at the Security Council, and it's kind of the same Security Council we had in the '70s. The rules, the UN Charter, it's all there. You could have been born a long time ago. In 10 years' time, I think we'll still recognize the tectonics on the planet. The demographics we can talk about: how Japan will be smaller, how China's peaked out, how India's growing. We've got a pretty good sense of that, and we've got a pretty good sense of what climate's gonna look like, the extreme storms and the rest. But government, how government works, how geopolitics works, how the world is ordered and ruled, I think is gonna look radically different in 10 years. I really do.

Reformation Of Government

Redefining how government works (01:25:45)

Certainly in 20, but probably in 10. I think that a big piece of the power that determines who we are and how we interact with people will be driven by a very small number of human beings who control these tech companies, who may or may not know what they're doing, who may or may not be acting with intentionality, and whose goals we don't really know, and those goals can change. I talked a little bit in my TED talk, which I haven't really talked much about here, which is kind of good, about how when you and I were raised, it was nature and nurture that determined who we were, and that now, for the first time in humanity, we are being raised by algorithm. We have a whole generation of kids whose principal understanding of how to interact with society will be intermediated by programmed algorithms that have no interest in the education of that child; that is a subsidiary impact of what those algorithms are trying to do. And a lot of the interactions that will take place with those kids will be AI interactions, not just intermediated, but the actual relationship will be with an AI. Which, by the way, if I could wave a magic wand and pass one regulation in the world today, I would say anyone under 16 cannot interact with an AI directly as if it were a human being, unless it's under direct human supervision. Because I just don't want people to be raised by anything other than people until we understand what that means. - That seems fair. - I mean, at the level of education, I want that to be directly supervised by a person. So yes, education, a doctor, I'd love to have AI being used in medical apps for kids. But I'm saying if you're having a relationship with something, including with a teacher, I don't want kids to have a relationship with an AI educator unless it's overseen by an adult, until we know what it does to the kids. We just don't know.

Generation Gap Analysis

Young people today are already different humans (01:28:14)

We just don't know. And I worry about that a lot. I mean, I don't have kids, but if I had them, I'd worry about that. I know my mom wouldn't have allowed it, and thank God for it. So yeah, I think that we're gonna be different as human beings. I mean, you talked about Yuval Noah Harari recently, who I find very inspirational as a thinker. And this Homo Deus concept that he comes up with: I think that young people today are already something a little different from Homo sapiens. And I don't know exactly what that is. None of us do, 'cause we're running the experiments on them now. I'm not comfortable with that. - That's a good summary. Ian, this has been incredible. Where can people follow you? - They can follow me on Twitter at Ian Bremmer, or LinkedIn at Ian Bremmer, or even Threads, for the few people that are on that, but it's kind of fun. Ian Bremmer, what else? I mean, GZERO, all one word, where we have a little digital media company that reaches out to people all over the world, and they can get our stuff for free, which hopefully is engaging and useful. Just like I really enjoyed this last hour or so. This was a lot of fun. - Same, man. All right, everybody, if you haven't already, be sure to subscribe. And until next time, my friends, be legendary. Take care, peace. If you wanna learn more about this topic, check out this interview: "I actually wanna start with a quote of yours. So for anybody that doesn't know, you're a former CIA, legitimate spy, which is crazy. And the reason I find that interesting is because you would have to be a master of psychology, your own and others."
