MEGATHREAT: Why AI Is So Dangerous & How It Could Destroy Humanity | Mo Gawdat | Transcription

Transcription for the video titled "MEGATHREAT: Why AI Is So Dangerous & How It Could Destroy Humanity | Mo Gawdat".


Note: This transcription is split and grouped by topics and subtopics. All paragraphs are timed to the original video.


Introduction

Intro (00:00)

we've never created a nuclear weapon that can create nuclear weapons. The artificial intelligences that we're building are capable of creating other artificial intelligences. As a matter of fact, they're encouraged to create other artificial intelligences. Even if there is never an existential risk of AI, those investments will redesign our society in ways that are beyond the point of no return. You've said that people should consider holding off having kids right now because of AI and other societal issues that are coming. You've said this is the thing that we should be thinking about that AI poses a bigger threat than global warming. Why is it that you think AI poses such a significant existential risk to humanity?


Understanding Ai: Dangers And Possibilities

Why AI is so dangerous (00:48)

It's not just in the amount of risk that AI positions ahead of humanity. It's not about the timing of the risk. We should cover those two points very quickly, but it really is about a point of no return. If we cross that point of no return, we have very little chance to bring the genie back into the bottle. What is the point of no return? The most important of which, of course, is the point of singularity. Singularity is a moment where you have an AGI that is much smarter than humans. I think that when we discuss singularity, which might bring about the suspicion of an existential, Skynet-like type of risk, we are losing focus on the immediate threat, which is much more imminent and in a very interesting way as damaging, probably even more damaging. The risk in my view, which we have to resolve first before we talk about the existential risks, is the risk of AI falling in the wrong hands, or the risk of AI falling in the right hands that are naive enough to not handle it well, or the risk of AI misunderstanding our objectives, or the risk of AI performing our objectives, but us misunderstanding our own benefit. I think when you really look at those, I call this the third inevitable in Scary Smart. When you really look at those, those are truly around the corner. There are other risks that are extremely important as well, which we don't even think of as threats, but that are completely going to redesign the fabric of our society. The definition of jobs, and accordingly the definition of purpose, the definition of the income gap, power structures, all of that is going to be redesigned significantly. It is being redesigned as we speak. As we speak, there are those with hunger for power, those with fear of other powers, those with hunger for more and more and more money and success and so on, who are investing in AI in ways that even if there is never an existential risk of AI, those investments will redesign our society in ways that are beyond the point of no return. Let's get into the three inevitables.


The 3 Inevitables (03:22)

What are they exactly? The three inevitables are my way of telling my readers or my listeners to understand that there are things that we shouldn't waste time talking about because they are going to happen. Those are, number one, there is no shutting down AI, there is no reversing it, there is no stopping the development of it. Let me list them quickly and then we go back on each and every one of them. The second inevitable is that AI will be smarter than humans, significantly smarter than humans. And the third inevitable is that bad things will happen in the process. Exactly what bad things, we spoke about a few of them, but we can definitely discuss each and every one of those in detail. The first inevitable, interestingly, the fact that AI will happen, there is no shutting down, there is no nuclear-type treaty that will ever happen where nations will decide, okay, let's stop developing AI like we said, stop developing nuclear weapons or at least stop using them, because we really never stopped developing them. That's not going to happen because of a prisoner's dilemma, because humanity so smoothly stuck itself in a place, in a corner, where nobody is able to make the choice to stop the development of AI. So if Alphabet is developing AI, then Meta has to develop AI, and Yandex in Russia has to develop AI, and so on and so forth. If the US is developing AI, then China will have to develop AI and vice versa.


AI is here to stay. (04:59)

And so the reality of the matter is that it is not a technological characteristic of AI that we cannot stop developing it. It's a capitalist and power-focused system that will always prioritize the benefit of us versus them over the benefit of humanity at large. So when you really think about some of the initiatives, now some global leaders are starting to talk about AI and try to put it in the spotlight, like the Prime Minister of the UK or whatever. When I was asked about that, I was in London last week, and basically I think it's an amazing initiative, great idea. But can you understand the magnitude of the ask that you have here, which is what you need for that initiative to work?


What humans need to know about the Prisoners Dilemma. (05:40)

The initiative was that we get all of the global leaders together to, you know, to a summit that basically looks at AI and tries to regulate AI. And for that to happen, you know, you need nations to suddenly say, okay, you know what, we're going to all look at the global benefit of humanity above the benefit of each individual nation. You want to get people from China, Russia, the US, North Korea and others around one table and tell them, can we all shake hands and say, we're not going to develop that thing? And even if they do, which they will not agree to, you know, then they will question what happens if a drug cartel leader somewhere, you know, hiding in the jungles decides to expand and diversify his business and start to work on AIs that are criminal in nature. We need to develop the policeman, and to develop the policeman, we have to develop AI. And so all of those definitions, all of those prisoner's dilemmas, if you understand, you know, game theory, are basically positioning us in a place where our inability to trust the other guy is going to lead us to continue to develop AI at a very fast pace, because we're even worried about what the other guy could do due to our mistrust. And, you know, the clear example of that is what we saw with the open letter, which I think was a fantastic initiative. I think you covered it many times in your podcast. Yet, you know, the attempt was to tell, you know, the big players that are developing AI, let's halt the development for six months. And I think it was less than a week before Sundar Pichai, the CEO of Alphabet, responded and said, "This is not realistic. You can't ask me to do that because there is no way you can guarantee that no one else is going to develop AI and disrupt my business." That basically means we have to start behaving in a way that accepts that AI is going to continue to be developed. It's going to continue to be a prominent part of our life. And it's going to continue to get massive amounts of investment on every side of the table. For people that don't know the prisoner's dilemma, it's probably worth walking them through it. But what you said about drug dealers, I've never heard anybody say that before. And I think removing this from just government versus government is probably a very wise way to look at it. You and I are both sort of secretly very optimistic. In fact, the way that we first met is around the idea of happiness and mental health and all of that. So I hope people don't see either of us as sort of doomsayers. I just feel like we're going through a transitional period right now that is unprecedented in human history. I say that with full understanding that every generation says, "No, no, no, this time it's really different." But I feel like this time really is different. The closest thing to it is nuclear weapons. And that already gives you a sense of the scale. But part of the reason I'm more worried about AI than I was even as a kid really living under the cloud of nuclear proliferation, the Cold War, all of that, is because the infrastructure required for a nuclear program is massive. Whereas you don't need that infrastructure. You just need a computer or some servers and a clone of ChatGPT and you're ready to rock. So walk people through the prisoner's dilemma so that they can really understand that this is a deep fundamental truth of the human condition and isn't just a government-versus-government thing. Yes, let me cover that. But let me also cover a tiny one more thing.
It's very, very different between AI and nuclear weapons, which is the fact that we've never created a nuclear weapon that can create nuclear weapons. The artificial intelligences that we're building are capable of creating other artificial intelligences. As a matter of fact, they're encouraged to create other artificial intelligences with the single objective, stated objective of make them smarter.


Why are we rushing towards the future? (10:07)

So basically, imagine if you had two nuclear weapons finding a way of mating and creating a smarter or a more devastating nuclear weapon. And I think that's really something that most people miss when we try to cover the threat of AI. The prisoner's dilemma is a very, very simple mathematical game, if you want, part of game theory. It is to imagine that you have two prisoners, two suspects of a crime, basically partners in crime, who are captured, but the police doesn't have enough evidence to put them both in jail. So they are trying to get one of them to tell on the other. So they would go to each of them and say, by the way, just giving you an example, if you don't tell and your friend tells, you're going to get three years and he's going to get out free, or he's going to get out with one year. And then they go to the other guy and say the same. If you tell and he doesn't tell, you're going to get one year and he gets three. And by the way, if you both tell, you both get two years. And so from a mathematics point of view, if you build the possibilities of those scenarios in quadrants, basically, a quadrant where I tell and you don't is a quadrant that requires a lot of trust. Sorry, a quadrant where I don't tell and you don't tell is a quadrant that requires a lot of trust. Any other quadrant by definition tells me that if I tell, I will get off with a lighter sentence. Okay. And the only reason why I wouldn't do it is if I trust you. And if I don't trust you, by definition, human behavior will drive you and drive me, both of us, to say, look, the better option is for me to get off with a lighter sentence because I don't trust the other guy. And I think that's the reality of what's happening. I mean, in business in general, in, you know, in power struggles in general, in wars in general, I think it's all a situation that's triggered by not trusting the other guy, because if we could trust the other guy, we would probably focus on many more, much softer objectives that can grow the pie rather than, you know, get each of us to compete. So this is where we are. And I think the reality is of us continuing to develop AI at a much faster pace, because ChatGPT, and OpenAI's work in general, I think is the Netscape moment for AI. You know, what Netscape was for the internet, ChatGPT is for AI, because basically it highlighted it first and foremost, not just for the public. I think bringing it to public attention actually is a good thing because it allows us to talk about it more openly. And people will listen. When I published Scary Smart in 2021, it was business book of the year in the UK, the Times Business Book of the Year, but it wasn't as widely, urgently read as it is today, simply because people were like, yeah, that's so interesting. This guy has an interesting point of view, but it's 50 years away. And human nature, sadly, doesn't respond very well to existential threats that are very far in time or uncertain in their, you know, possibility of occurrence. We don't really know. It's like those warnings on a pack of cigarettes. You know, we tell you it almost, it most certainly causes death. People look at it and say, yeah, but that's 50 years from now. I want to enjoy it for 50 years. So, you know, whether it's 50 years or five, nobody really knows, but, you know, people would delay reacting to those. So when OpenAI and ChatGPT became a reality, I think what ended up happening is that the public got to know about AI, but also the investors.
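To make those quadrants concrete, here is a minimal sketch of the payoff table in Python. The three-year, go-free, and two-year sentences come from the example above; the one-year sentence for mutual silence is an added assumption so that all four quadrants are filled in.

```python
# Payoff quadrants for the prisoner's dilemma described above.
# Years in jail for (my_choice, their_choice) -> (my_years, their_years).
SENTENCES = {
    ("silent", "silent"): (1, 1),   # the quadrant that requires mutual trust (assumed value)
    ("silent", "tell"):   (3, 0),
    ("tell",   "silent"): (0, 3),
    ("tell",   "tell"):   (2, 2),
}

def best_response(their_choice):
    """Pick the choice that minimises my own jail time, given theirs."""
    return min(("silent", "tell"),
               key=lambda mine: SENTENCES[(mine, their_choice)][0])

for theirs in ("silent", "tell"):
    print(f"if the other prisoner chooses {theirs!r}, my best response is "
          f"{best_response(theirs)!r}")
# Output: 'tell' in both cases. Without trust, both sides defect, even though
# mutual silence (1, 1) beats mutual telling (2, 2) -- the same trap that keeps
# every company and nation racing to build AI.
```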
So this is the dot-com bubble all over again, right? We have massive amounts of money being poured in to encourage faster and faster development of AI. I mean, I know you're a techie like I am, and we both know that it actually is not that complicated to develop yet another layer of AI. Of course, it's complicated to find the breakthrough, but, you know, to develop more and more of those, I think, is something that's becoming our reality today. Why are we, as we think about how fast the technology is developing, which I think most people will concede that they probably struggle to think exponentially and not linearly. And so even with linear thinking at this point, seeing how far it's already come, I think people are already worried; if they understood how much faster it is going, even faster than they could possibly imagine, they'd be more worried still. So my question is, why does this break bad? Why do we all make the base assumption that without either massive intervention or, you know, some sort of regulatory body or something, that this doesn't just naturally end up in a good place? Why are you, me, other people, why are we worried that number three in your three inevitables is that things go wrong? Why are we worried that it isn't just, man, when there's buggy software, it's nothing? Why isn't this going to be like the year 2000, the Y2K problem, for anybody old enough to remember, where everybody was super panicky and then nothing happened? Why isn't this going to be yet another nothing burger? Because the chips are lined up in the wrong direction. So, you know, Hugo de Garis, if you know him, is a very well-known AI scientist that worked in Asia for quite a few years. And he built a documentary that I think can be found on YouTube, it's called Singularity or Bust.


Our reality (16:19)

And he was basically saying that most of the investment that's going in AI today is going into spying, killing, gambling, and one more. So spying is surveillance, okay? Killing is what we call defense. Gambling is all of the trading algorithms and selling, which is all of the advertisement and recommendation engines and, you know, all of the idea of turning us into products that can be advertised to, if you want. And that's not unusual, by the way, in our capitalist system, because those industries come with a lot of money, banking, you know, defense and so on and so forth. The chips are lined up this way. I mean, if you take just accurate numbers on how much of the AI investment is going behind drug discovery, for example, is, you know, as compared to how much is going behind, you know, killing machines and killing robots and killing drones and so on and so forth, you'd be amazed that it's a staggering difference, right? And this is the nature of humanity so far. If you're running a research on a disease that doesn't affect more than, you know, a few tens of thousands of people, you're going to struggle to find the money, okay? But if you're building a new weapon that can kill tens of thousands of people, the money will immediately arrive because there is money in that. You can sell that. And sadly, as much as I, you know, I would have hoped that humanity wasn't completely driven by that. It's our reality. So this is number one. Number two is that, so number one is we're aligned in the direction of things going wrong, okay? Number two is even if we're aligned in the direction of going right, wrongdoers can flip things upside down. There was an article in the Verge, you know, a few months ago around, you know, a drug discovery AI that was basically supposed to look at characteristics of, you know, human biology and, you know, whatever information and data we can give it about the drugs we can develop and chemistry and so on and so forth with the objective of prolonging life, prolonging life, so prolonging human life is one parameter in the equation. It's basically plus make life longer, okay? And for fun, they, you know, the research team was asked to go and give a talk at a university. And so for the fun of it, they reversed the positive to negative. So instead of giving the AI the objective of prolonging life, it became objective of shortening life. And within six hours, if I remember correctly, the AI came up with 40,000 possible biological weapons and, you know, agents like nerve gas and so on, that was shortening. Yeah, it's incredible, really. And you know, it's the thing that, of course, kills me is that this article is in the Verge, you know, it's all over the internet. And accordingly, if you were a criminal that grew up watching, you know, supervillain movies, what would you be doing right now? You would go like a million dollars. I need to get my hands on that weapon so that I can sell it to the rest of the world or to the rest of the world or villainy. And I think the reality of the matter is it is so much power, so much power that if it falls in the wrong hands and it is bound to fall in the wrong hands unless we start paying enough attention, right? And that's my cry out to the world is let's pay enough attention so that it doesn't fall in the wrong hands, it would lead to a very bad place. 
The third, you know, and the biggest reason in my view of us needing to worry, hopefully, hopefully we will all be wrong and be surprised, is that there were three barriers that all of us, all computer scientists who worked on AI, agreed we should never cross. And the first was don't put them on the open internet until you are absolutely certain they are safe. And you know, it's like the FDA will tell you don't swallow a drug until we've tested it, right? You know, and I really respect Sam Altman's view of, you know, developing it in, you know, in public in front of everyone to discover things now that could, you know, that we could fix when the challenge is small. In isolation of the other two, this is a very good idea, but the other two barriers we said we should never cross are don't teach them to write code and don't have agents prompt them, right? So what you have today is you have a very intelligent machine that is capable of writing code. So it can develop its own siblings, if you want. Okay, that is known frequently to outperform human developers. So I think 75% of the code, no, sorry, 25% of the code given to ChatGPT to be reviewed was improved to run two and a half times faster. Okay. They can develop better code than us. Okay. And basically now what we're doing is we're no longer limiting their learning, the learning of those machines, to humans. So they're not learning from us anymore. They're learning from other AIs, and there are staggering statistics around the size of data that is developed by other AIs to train AIs in the data set. Of course, again, just to simplify that idea for our listeners: AlphaGo Zero, which is the absolute winner of the strategy game that's called Go, won against AlphaGo Master, which was another AI developed by DeepMind of Google that was by then the world champion. So AlphaGo Master won against the human world champion, and then AlphaGo Zero won against AlphaGo Master a thousand games to zero by playing against itself. It has never in its entire career as a Go player seen a game of Go being played. It just simulated the game by knowing the rules and playing against itself. You can reboot your life, your health, even your career, anything you want. All you need is discipline. I can teach you the tactics that I learned while growing a billion-dollar business that will allow you to see your goals through. If you want better health, stronger relationships, a more successful career, any of that is possible with the mindset and business programs in Impact Theory University. Join the thousands of students who have already accomplished amazing things. Tap now for a free trial and get started today. Okay, so first, for people that don't know the history of this, I think it was Deep Blue that ends up beating Garry Kasparov, the greatest chess champion, back in the 80s. Is that correct? '97, if I remember correctly. Yeah. Then it was, no way that we're ever going to be able to build AI that will beat a Go champion, and it ends up beating, I forget how many years ago this was, it took a long time, but they finally did beat the second-place Go champion. Then they updated it, beat the first-place world champion in Go, and then realized we don't need to feed it a bunch of Go games.
We can just have it basically dream about playing itself over and over and over and over and over and over very rapidly, which is one of the things you said in your book that I found... this is something that people underappreciate. The future is going to be almost impossibly different, to the point where it will be even now, so forget the singularity, where the rate of change is so blinding that you can't predict a minute from now, let alone what's happening now. But you said over the next 100 years, without any additional changes, we will make 20,000 years of progress. And in that progress, though, I have to imagine, will be progress that speeds up that rate of change. So if we're already on a rate of change of 20,000 years of change in a single century, you can imagine where we're going to be in 10, 20, 30 years is going to be crazy. So by putting an algorithm together, rather than feeding it human data, you feed it AI games, it gets unbeatable to the point where it can beat the other AI. Okay, that's crazy. So I mean, think about it. Think about it this way, Tom. How does the best player of Go in the world learn the game? They play against other players. And every time they win or they lose, of course, they're given instructions and hints and tips and so on. But every time they make the wrong move and they lose, they remember it. And so they don't do it again. Every time they make the right move and they win, they remember it and they do it over and over. The difference is that one player... you know, I always give the example of self-driving cars. You drive and I drive. If you make a mistake and avoid an accident, you will learn. I will not. Okay. If one self-driving car requires critical intervention, it's fed back to the main brain, if you want to call it that. And every other self-driving car will learn. That's the point about AI, right? And so when AlphaGo Zero was playing against AlphaGo Master, you know, for it to learn, just so that you understand, there were three versions of AlphaGo. Version one was beaten by version three in three days of playing against itself. Version two, which was the world champion at the time, lost a thousand to zero in 21 days. 21 days. And I think this is why I am no longer holding back.
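The self-play idea, learning a game purely by playing against yourself and remembering what worked, can be sketched with a toy far simpler than Go. The sketch below uses a tiny Nim-style game and a shared value table; the game, the learning rule, and every number are illustrative assumptions, not DeepMind's actual method.

```python
import random

PILE, MOVES = 12, (1, 2, 3)     # toy Nim: take 1-3 stones, taking the last stone wins
Q = {}                          # Q[(stones_left, move)] -> estimated value of that move

def choose(stones, eps):
    """Epsilon-greedy move for whoever is about to play."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda m: Q.get((stones, m), 0.0))

def play_one_game(eps=0.2, lr=0.1):
    """One self-play game; both sides share and update the same value table."""
    stones, history = PILE, []
    while stones > 0:
        move = choose(stones, eps)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone won. Walk backwards through the game,
    # crediting the winner's moves with +1 and the loser's moves with -1.
    reward = 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + lr * (reward - old)
        reward = -reward

for _ in range(50_000):
    play_one_game()

# After enough self-play the greedy policy typically rediscovers the known
# optimal strategy (leave your opponent a multiple of 4) without ever having
# seen a human play.
for stones in range(1, PILE + 1):
    print(stones, "->", choose(stones, eps=0.0))
```

The same shape, shared learning across every instance, is what the self-driving-car example describes: one car's mistake updates the table that all of them use.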


Nobody expected AI to win (27:04)

Okay. The reason why I'm no longer holding back is that nobody, if you've ever coded anything in your life, nobody expected an AI to win at Go any earlier than 10 years from today, right? Not only did it already happen several years ago, it happened in 21 days. Do you understand the speed that we're talking about here? And when you said exponential, people don't understand this: ChatGPT-4, as compared to ChatGPT 3.5, is 10 times smarter. Okay. There are estimates, it's hard to measure exactly, but there are estimates that ChatGPT-4 is at an IQ of 155, if you measure by all of the, you know, tests that it goes through, right? Einstein was 160. Okay. So it is already smarter than most humans. Now if ChatGPT-5, no, no, no, ChatGPT-6, a year and a half from today, is another 10 times smarter, let's just take that assumption, huh? You're now 10 times smarter than one of the smartest humans on the planet. If this is not a singularity, I don't know what is. If this is not a point where humans need to stop and say, hmm, maybe I should consider trying to understand what the world is going to look like when that happens, right? And I go back and I say this very openly, I am like you, I'm an optimist, 100%. I know that eventually AI, in the 2040s, 2050s maybe, will create a utopia for all of us, or for those who remain of us. Okay. But then between now and then, the abuse of AI falling in the wrong hands, as well as the uncertainty of certain mistakes that can flip life upside down, okay, could really be quite a struggle for many of us. Does that mean it's a doomsday? No, it's not. But it's honestly not something that we should put on the side and go binge-watch, you know, Game of Thrones, not anymore. And I think people need to put the game controller down and start talking about this, start telling their governments to engage, start telling developers that we require ethical AI, start to request some kind of an oversight. And in my personal point of view, start to prepare for an upcoming redesign of the fabric of work, and most importantly, start to prepare for a relationship between humans and AI that we have never in our lives needed to have before with any other being. It's like getting a new puppy at home. Only the puppy is a billion times smarter than you. Yeah. Think about it. Yeah. There's a Rick and Morty episode about the dogs becoming exceptionally intelligent.


Law of accelerating returns (30:16)

Remember that? Yeah, one of my favorites. Absolutely. Very much so. All right. So there are two things I want to drill into, and then I want you and I to start the conversation about what that looks like, because in fairness, I don't think, certainly not in the US, I don't think most people in the government have thought about it at all, probably, would be my guess. And so I think that a better way for people to begin to think through this stuff is really sort of podcast, citizen journalism, whatever you want to call it. So the two things I want to drill into are going to be exponential growth, which we've touched on, but there's a few more things I think to be said about that. And then alien intelligence. And I say alien intelligence because the way that AI is going to think will be so vastly different, it will truly be incomprehensible. And I think our failure to grasp what artificial super intelligence will look like is the problem. Okay, so let's talk exponentials. So linear, if I take 30 steps, I'm going to be roughly at my front door. Let's just call it. If I take 30 exponential steps, I'm going to walk around the earth something like 30 times. It's crazy. And people don't, they don't have a sense of that. So linear obviously is one, two, three, four, it just, you progress by one increment each time. Exponential means you double each time. And there's something called the law of accelerating returns, which I know you know well. So it'd be great to hear you talk on this. But the way that that plays out is that when you're at one and you're doubling to two, it doesn't seem like a big deal, but you start getting to a hundred and you double to 200 and then 400, and then you hit a million and it's two million. And I don't think people understand that it only takes seven doublings. Like if you start with an amount of money, you only have to have seven exponential steps to double your money. And so the compounding effect of that is extraordinary. So if you don't mind, walk people through some examples of the law of accelerating returns and how you see this playing out with AI. So of course we have to credit Ray Kurzweil for bringing this to everyone's attention. Moore's Law in technology was, I think, our very first exposure, even though we didn't look at it as accelerating returns. But Moore's Law, which was coined by the CEO of Intel at the time, promised us in the 1960s that compute power will double every 12 to 18 months at the same cost. And you may not think that much about it, but my first Windows, DOS computer, so an IBM-compatible computer at the time, I had a 286. Remember those machines? They had 33 megahertz on them. And you had that turbo button. If you pressed that turbo button, it ran at 66 megahertz, but it consumed electricity and overheated and so on and so forth. The difference between 33 and 66 to us at the time was massive because you literally doubled your performance. As computers continued to grow, you can imagine that every year, just for the simplicity of the numbers, that 66 doubled and then became, say, 130, for the simplicity of the numbers. And then that 130 became 260, and then the 260 became 500. Now the difference between the 500 and the 33 is quite significant. It's orders of magnitude above the 33, and it happened in two or three doublings. And I think what people... when you really think about that, Ray Kurzweil uses a very, very interesting example.
When we attempted to sequence the genome, it was a 15 years project and seven years into the project, we were at 10% of the progress. And everyone looked at it and said, if it's 10% in seven years, then you need 70 more years to, you know, a total of 70 years to finish. And Ray said, oh, we're at 10%, we did it. And he was right. One year, the 10 became 20, the 20 became 40, the 40 became 80, and then you're over the threshold. And that idea of the exponential function is really what humans miss. Humans miss that because we are taught to think of the world as a linear progression. Let me use a biological example. If you have a jar that's half full of bacteria, the next doubling, it's full. It's not going to add, if it moved from 25% full to 50% full in the last doubling, you'd go like, yeah, we still have half empty, one more doubling and it's full. If you apply that to the resources of planet Earth, if we keep consuming the resources of planet Earth to the point where one doubling away, you know, two minutes to midnight, if you want, one doubling away, you would be consuming all of the resources of planet Earth, we would need another full planet Earth on the next double. We would need four planet Earth is on the next doubling. Okay, so that exponential growth is just mind boggling because the growth on the next chip in your phone is going to be a million times more than the computer that puts people on the moon. That one doubling, that one additional doubling. Now, when you think about it from an AI point of view, it's doubly exponential. Double exponential why? Because as I said, we now have AI's prompting AI's. So basically, we're building machines that are enabling us to build machines. So in many, many ways, the reasons why we get to those incredible breakthroughs, which even the people that wrote the code don't understand is because you and I, when you really think about, you know, I know you love computer science and physics and so on. But I'm sure you remember reading string theory or some complex theory of physics and then you would go like, I don't get it. I don't get it. And then you read a little more and then I don't get it. I don't get it. And then you read a little more and then someone explains something to you and bam. Suddenly you go like, oh, now I get it. It's super clear. Those are simply because every time you're using your brain to understand something, you're building some neural networks that make it easier for you to understand something else that make it easier for you to understand even more. And this is what's happening with AI that also does not include, which I am amazed that we're not talking about this. It does not include any possible breakthroughs in compute power. You know, there was an article recently that, you know, China's working also on quantum computers that are now 180 million times faster than traditional computers. I remember in my Google years when we were working on Sycamore, Google's quantum computer, Sycamore performed an algorithm that would have taken the world's biggest supercomputer 10,000 years to solve. And it took Sycamore 12 seconds. 200 seconds. This is, yeah, yeah, because that's a big difference. So this is where I think people's brains start to shut down. Even you said 180 million times faster. Yeah. So okay. So I know by the way, 200 seconds to 10,000 years is a trillion times faster for Sycamore users. So I did my first video. Let's be clear for our listeners. So we can't put AI on quantum computers yet. 
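A few lines of arithmetic make the doubling examples above concrete; the starting values are the ones mentioned in the conversation.

```python
# Moore's-Law-style doubling: the 33 MHz chip after a handful of doublings.
clock = 33.0
for _ in range(5):
    clock *= 2
print(f"33 MHz after 5 doublings: {clock:.0f} MHz")               # 1056 MHz

# Kurzweil's genome point: once you are 10% done, only a few doublings remain.
progress, doublings = 0.10, 0
while progress < 1.0:
    progress *= 2
    doublings += 1
print(f"doublings from 10% complete to finished: {doublings}")    # 4

# Tom's 30 steps: linear versus exponential.
print(f"30 linear steps: 30   |   30 doublings: {2 ** 30:,} steps")

# The bacteria jar: one doubling before full, it still looks half empty.
print(f"half-full jar after one more doubling: {0.5 * 2:.0%} full")
```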
We can't even put really anything. You know, it's very, very early years. It's almost like the very early mainframes. It requires, you know, almost absolute zero, you know, degrees and very cold and very large rooms and so on. But so were the mainframes. I worked on MVS systems that occupied a full floor of a building, right? They had less compute power than the silliest of all smartphones on the planet today. We make those things happen. There will be a point in time, especially assisted by intelligence.


Human vs. alien intelligence (39:29)

And we're going to have more and more intelligence available to us where we will figure this out. And then you take chat GPT or any form of AI and move it from that brain to this brain that is 180 million times faster and we're done. Okay. We can't do that with you and I with our biology. We can't move our intelligence from one brain to the other yet. Yeah. So I really want to drive a stake into this idea of how different exponential is to linear by pointing out the difference between. So if you, a moron by, if you look it up, I forget if I looked it up on Wikipedia or whatever, but I looked up what's the IQ of a moron? If I remember right, it's like 65 or 80. It's somewhere in the 60s. 70s. Yeah. Yeah. And Einstein was 160 as you were saying. So you have, I think Einstein is like 2.3 times smarter than a moron if I remember when I did the math correctly. And so the difference between a moron that struggles to take care of themselves and then only two and a half or less than two and a half times smarter than that. And you get somebody that unlocked the power of the atom that really gave birth to a lot of the modern technology that we use today is built on the back of this physical breakthrough. And so there, there's a really, really life altering difference. You wouldn't have nuclear power. You wouldn't have nuclear weapons. You wouldn't have GPS like a lot of the things that we rely on in today's world. You wouldn't have any of that if it wasn't for the 2.3 X increase in intelligence. Now when we talk about super intelligence, which people are estimating will get to be a billion times smarter than the smartest human. So if 2.3 X is life altering changes the entire paradigm of our planet, then 100 times is unimaginable. A thousand times is ridiculous. A hundred thousand times is comical. A million times we're still not even scratching the surface of how much more intelligent this is going to be. And so that brings me to the other thing I wanted to drill into, which is that AI will be an alien intelligence. It will not be like your friend who you can still hang out with and smoke a joint. It's like your different species. I don't even know if there will be common elements. And that's one of the things that I think we have to establish first before we can get into how we stop this from being problematic. But you in your book, you really freaked me out. So Scary Smart is Scary Good as a book. I highly encourage everybody to read it. There's a part in there where you read a transcript of 2 AI that were given the task to negotiate with each other for selling things back and forth.


Facebook AI shut down negotiations (42:23)

And they start talking in a way that is unintelligible. I mean, it was really unnerving. It was like, "I need five of these." And then the other was like, "Screws nails all me." And there was like a really weird rhythmic repetition to the way that they were overemphasizing themselves, like what they needed. It was really weird. And so what was the response to that? Because if I'm not mistaken, they ended up shutting them down because they were very unnerved. Yeah. Yeah. What happened? That was Facebook. And the idea is they were simulating AI's negotiating deals with each other. So I wonder if you're in the advertising business, for example, because we had things like that. At Google every long time ago, the idea of ad exchange, for example, where machines will buy ads from other machines. But you and I, and I really thank you for your time. It took me four and a half months to write Scary Smart, maybe six months to edit it. It took you, perhaps a day or two, to read it. And for us to talk about it now, it's going to take two and a half hours. A computer can read Scary Smart in less than a microsecond. When you speak about the idea of intelligences being a hundred times, a million times, a billion times smarter than us, this is only one thread of the issue. The other thread of the issue is the memory size. If I could keep every physics equation in my head at the same time and also understand biology very well and also understand cosmology very well, I could probably come up with much more intelligent answers to problems. And if I could also ping another scientist who understands this or that in a microsecond, get all of the information that he knows and make it part of my information, that's even more intelligent. And what is happening is when we ask computers to communicate, at first they communicate like we tell them. But if they're intelligent enough, they'll start to say, "That's too slow. Why would I communicate that human bandwidth?" Why would I use words to communicate? When you and I know that if you simplify words, for example, into letters, into numbers, you could communicate a massive amount of information within every sentence. So you could literally, if you take one equation, algorithmically put certain letters in it, simply, I could send to you something that says 1.1 and you would enter it into the equation and get a full file, that's a full book because of the sequence of the letters that 1.1 determines as per the equation. So of course, if you're smarter and smarter and you have that bandwidth, you're going to communicate a lot quicker. And I don't remember the name. I think they were Alice and Bob of the two chatbots. And very quickly, they ended up designing their own language. And when they said, "I would buy 10 tape, tape, tape," there was math engaged in that. It wasn't, "I want to buy 10 tapes only." It was also communicating other things we didn't understand, which is really what you're driving us to, driving our listeners to think about, Tom, because there is so much of AI, we don't understand. Again, this is one of the things that is, that people need to become aware of. There are emerging properties that we don't understand. We don't understand how those machines develop those properties. And there are even targeted properties that basically we tell something that it's task is to do A, B, and C. And it does A, B, and C. But we have no clue how it arrived at it. 
Simply, if I ask you what you think is going to happen in the football game tomorrow, you're going to give me an answer. The fact that it's right or wrong doesn't matter. Either way, I have no clue how you arrived at that answer. I have no clue which logic you used. OK, we have no clue most of the time how the machines do what they do. We don't. OK, why? That really shocked me. Yeah, if you need to know how I arrived at a certain conclusion, you're going to have to ask me and say, "Derive this for me. Like, tell me, what did you go through? What did you think about? What's your evidence? What data?" And so on and so forth. In AI, we write additional code that will tell us what are the layers of the neural net or the logic that the machine went through. But when investments are in an arms race like we are today, most developers and business people will say, "I'm delighted it's working. I don't care how. I'm not going to invest more money and developer time to actually figure out how." In several years' time, even if you invested the money, you won't get it. Because that level of intelligence that the machine is using is so much higher than yours. So you're not going to figure it out. If the machine tells you, "Well, I did A, then B, then C, then D, then E, then F, then G," and it goes on for half an hour to tell you, "I did all of that," you're going to go like, "OK, I'm happy you did it. I can't arrive at that myself anymore. That's why I'm handing it over to you." Yeah, I had Yoshua Bengio on the show, who was one of the early guys in AI. He signed the letter, and I asked him why he signed it, and he said, "None of us in the space thought that artificial intelligence would pass a Turing test as quickly as it did, and we don't understand how it did it." So I asked him the same question. How is it possible that we don't understand how it's doing it? We created it. You presumably created it to do a specific thing, and he said, "It's not how it works." We're basically layering on, kind of like you would layer on neurons. We're layering on extra neurons, neural nets, to get it to process data, and then it just does it, and we don't understand how it's coming to the conclusions. We just know that if you scale it up more, it can solve bigger and bigger problems. So he said nobody would have predicted that this is really just a scale problem, and that as you scale it up, it's going to get smarter and smarter. My question now is, if we can get everybody to understand this is going to happen way, way, way faster than you think it's going to happen, which is why even I, as a hyper-hyper-optimist, am just like, "Hey, I don't see a clear path through this. I'm excited and terrified at the same time, and all I know, like you, is that we need to start talking about this. We need to start presenting solutions." So it's happening faster than we think, and it's going to be a completely foreign intelligence, and we will not be able to interface with it even if it is kind and wants to explain it to us. We won't be able to comprehend it, and so it will very rapidly be like Einstein to a fly, which is a reference you use in the book several times. And even if Einstein loves the fly, it's like, "Am I really going to spend my time trying to explain it?" And even if I take the time and I lay it all out, you're not going to get it. You just don't have the ability to comprehend.
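The earlier point about the two negotiating bots, a short token carrying a whole message because both sides share the same scheme, can be sketched like this. The codebook and the numeric tokens are invented for illustration and have nothing to do with the actual Facebook experiment.

```python
# Two agents that share a codebook can exchange a tiny token where humans
# would need a whole sentence. The codebook below is an invented example.
CODEBOOK = {
    1.1: "I will buy ten units if you lower the price by five percent.",
    1.2: "I reject the offer; send a counter-offer with a later delivery date.",
    2.7: "Deal agreed; generate the contract and schedule payment.",
}

def send(token: float) -> float:
    # Alice transmits only the token -- a few bytes instead of a sentence.
    return token

def receive(token: float) -> str:
    # Bob expands it back into the full meaning using the shared codebook.
    return CODEBOOK[token]

message = send(1.1)
print(f"on the wire: {message}")
print(f"what Bob understands: {receive(message)}")
# To an outside observer the traffic looks like '1.1' -- unintelligible unless
# you also hold the codebook, which is roughly what unnerved the researchers.
```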


Comprehending AI (50:07)

So we are giving birth to something that is, A, like you said, we can't take it back. That's already done. So any argument that begins with, "Ah, just stop," I agree with you, that is so unrealistic to me. We can't bring it back. It's going to happen so fast. And when it comes, it will be just unintelligible. It already is. But given that this is a scale problem, why don't we nip it in the bud? Do you think that AI will be able to defeat the need for additional neural nets and just get so hyper-efficient that we won't be able to stop it that way? Or could we just not now take advantage of the fact that this does become a nuclear-style infrastructure problem? And I can nuke anybody that tries to bring one online, not necessarily nuke, but destroy, physically destroy anybody that tries to bring a server farm online that's big enough to run one of these neural nets. Now we could. If we decide now, we could simply switch off all of that madness, switch off your Instagram recommendation engine, your TikTok recommendation engine, the ad engine on Google, or the data distribution engine on Google. You can also switch off ChatGPT and a million other AIs. And then we can all go and sit out in nature and really enjoy our time. Honestly, we won't miss any of it at all. I'll tell you that very openly. I mean, the reality of the matter is that humanity keeps developing more and more and more because we get bored with what we have. And we think that we can do better with an automated call center agent. And in reality, it's not about better. It's just about more profitable. And the reality here is that we could, but will we? No, we won't. Why? Because of the first inevitable, as before, because of the trust issue between all of us, and because we need the AI policeman just as much as we, you know, fear the AI criminal. Before we go on, a really pointed question really fast.


Nuclear Proliferation (52:21)

So when I think about nuclear proliferation, not every country that wants nuclear weapons has them during, and I'm not sure where Iran's nuclear program is now, but I know for a while, there was real attempts to either blow up things that they were doing or if you know about Stuxnet, there was that computer virus that was really terrifying in the way that it was sort of like a biological weapon that was designed to only kill a certain type of thing. And that, that is very scary. And I'm sure is in the 40,000, the list of 40,000 ways that the AI came up with to limit human population. But Stuxnet for people that don't know, it was like embedded at like the deepest root level of like basically every operating system ever, it just spread like wildfire into chips into everything, everything, everything. And when it detected that it was an Iranian nuclear centrifuge, it would shut it down or overheated or whatever it did.


Future Scenarios: Preparing For An Ai-Driven World

Stuxnet (53:13)

And so, for a long time, they just could not build it up. So could we, given that there is a similar need for detectable infrastructure to run AI, could step one be not to shut all of the things that we have down, but to stop the next phase from coming online? Could we?


Infrastructure (53:49)

We could. But I would debate the example you're giving in the first place. Back in 2022, the world was still discussing the threat of a nuclear war, 90 years later, or like 80 years later. Okay. So the whole idea is that while we politically created the propaganda that we will now prioritize humanity over our own country interests, there are still lots of nuclear warheads in China and Russia and the US and Israel and North Korea and many other places. And the reality of the matter is that while we managed to slow down Iran, that's not enough to protect humanity at large. That's just enough to protect some of humanity's individual interests. So this takes us back to the whole prisoner's dilemma. And I think that is the reason why we have a prisoner's dilemma, because the past proves to us that even though we said we're going to have a nuclear treaty, everyone on every side of the Cold War continued to develop nuclear weapons. So you can easily imagine that when it comes to AI, if everyone signs a deal in November and says we're going to halt AI, in China and Russia and North Korea and everywhere, you know, people will still develop AI. Okay. The more interesting bit is that there are lots of initiatives to minimize the infrastructure that is needed for AI, because it's all about abstraction at the end of the day. A lot of people don't recognize this as well, but a big part of the infrastructure we need is for AI to develop its intelligence, for teaching AI. Okay, when ChatGPT or Bard gives your response to you, it's not referring to the entire data set from which it learned to give you the answer. It's referring to the abstracted knowledge that it created based on massive amounts of data that it had to consume. Okay. And when you see it that way, you understand that just like we needed the mainframes in the early years of the computers, and now you can do amazing things on your smartphone, the direction will be that we will more and more have smaller systems that can do AI, which basically means two developers in a garage in Singapore can develop something and release it on the open internet. You know, again, you and I... I don't know if you coded any transformers or, you know, deep neural networks and so on, but they're not that complicated. I think the code of ChatGPT-4 in general is around 4,000 lines, the core code, right? It's not a big deal. When I coded banking systems in my early years in COBOL, on, you know, MVS machines or AS/400 machines, it was hundreds of thousands of lines of code. Okay. So there, the possibility for us to... Wait, wait, wait, wait, why, why has it become so much less? Because it's all actually so much better? Because it's all algorithms. It's not instructions, it's all mathematics. I think this is a very important thing to differentiate for people. When I coded computers in my early years, those machines were dumb and stupid like an idiot. They had an IQ of one, literally, no IQ at all. Okay. So we had a lot of developers transferring human intelligence to the machine. We solved the problem and then we instructed the machine exactly what to do to solve it itself, right? So, you know, when we understood how a general ledger works, we understood it as humans and then we told the machine, add this, subtract that, reconcile this way. And then the machine could do it very, very, very fast, which appeared very intelligent, but it was totally a mechanical task.
It was just repeating the same task over and over and over at, you know, very fast speed. We don't do that anymore. We don't tell the machine what to do. We tell the machine how to find out what it needs to do. So we give it algorithms, and the algorithms are very straightforward. When you, you know, let's take the simplest way of deep learning. When we started deep learning, what we did is we had basically three bots, if you want. One is what we call the maker, the other is the student, the final AI that we want to build, and one that's called the teacher. Okay. And we would say, you know, tell them to look for a bird in a picture. Okay. And they would identify a few parameters, you know, edges and how do they see the edge and the difference in color between two pixels and so on and so forth. And then they would detect the shape of a bird, and basically we would build a code and call it a student. We would build multiple instances of it and then show it a million photos and say, is it a bird? Is it not a bird? Is it a bird? Is it not a bird? And the machines would randomly answer at the beginning. It's literally like the throw of a dice. Okay. And, you know, some of them will get it wrong every time. Some of them will get it right 51% of the time, and one of them will get it right 60% of the time, probably by pure luck. Okay. The teacher is performing those tests, and then the maker would discard all of the stupid ones and take the one code that got it right and continue to improve it. Okay. So the code was simply a punishment and reward code. It was saying, guess what this is? And if you guess it right, we will reward you. Okay. And basically the machine, the algorithm, would then continue to improve and improve and improve until it became very good at detecting birds and cats in pictures and so on and so forth. When we came to transformers, the reason why GPT and Bard and so on are so amazing is because we used something that was called reinforcement learning with human feedback. So basically, instead of discarding the bad ones, okay, we found a way, which Geoffrey Hinton, you know, who recently left Google, was very prominent at promoting early on. We found a way, just like with humans, to give the machine feedback. You know, show it a picture and then it would say this is a cat, and we would say, no, it's not. It's actually a bird. What do you need to change in your algorithm, okay, so that the answer would have become a bird? Okay. So the machine would go backwards with that feedback and, you know, change its own thinking so that the answer is correct. And then we would show it another picture, another picture, and we keep doing this so quickly, on millions or tens of thousands of machines, you know, billions of instances, until eventually it becomes amazing. Just like a child, just like you give a child a simple puzzle. Okay. If you ever told the child, no, no, no, no darling, look at the cylinder, turn it to its side, look at the cross section, it will look like a circle; look at the board and find a matching shape that is a circle; if you put the cylinder through the circle, it will go through. That's old programming. Okay. New programming, which every child achieves intelligence with, is you give them a cylinder and a puzzle board and they will try, they'll try to fit it in the star. It won't.
They'll throw it away and get angry, then they catch it again and try it in the square, it won't. And then when it goes through the cylinder, sorry, through the circle, there's something in the child's brain that says, this works. Okay. The only difference is a child will try five times a minute or, you know, 50 times a minute, a computer system will try 50,000 times a second. Okay. And so very, very quickly they achieve those intelligences. And as they do, we don't really need to code a lot, because the heart of the code is an algorithm, is an equation. Okay. And mathematics is much more efficient than instructions. So if I tell you, Tom, when you leave home, make sure that your, you know, distance is no more than the day of the month multiplied by two away from your home, and make sure that you don't consume any more fuel than your height divided by four, okay, or, you know, your body temperature divided by seven, or whatever that is. Okay. With those two equations, I don't need to give you any instructions anymore. You can always look at your fuel consumption and your distance and say, oh, I'm falling out of the algorithm. With very, very few lines of code... I just gave you two lines of code. What's up, guys? It's Tom Bilyeu. And if you're anything like me, you're always looking for ways to level up your mindset, your business and your life in general. That's exactly why I started Impact Theory. The podcast that brings together the world's most successful and inspiring people to share their stories and most importantly, strategies for success. And now it's easier than ever to listen to Impact Theory on Amazon Music. Whether you're on the go or chilling at home, you can simply open up the Amazon Music app and search for Impact Theory with Tom Bilyeu to start listening right away. If you really want to take things to the next level, just ask Alexa. Hey, Alexa, play Impact Theory with Tom Bilyeu on Amazon Music. Now playing Impact Theory with Tom Bilyeu on Amazon Music. And boom, you're instantly plugged into the latest and greatest conversations on mindset, health, finances and entrepreneurship. Get inspired, get motivated and be legendary with Impact Theory on Amazon Music. Let's do this.
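The maker / student / teacher loop and the shift to learning from feedback that Mo walks through above can be sketched in a few lines. Everything here, the "wing score" feature, the 0.6 cut-off, the mutation and nudge sizes, is invented for illustration; it shows the shape of selection-then-feedback he describes, not how GPT or Bard is actually trained.

```python
import random

# A "bird" in this toy is any example whose made-up wing-score exceeds 0.6.
def make_dataset(n=500):
    examples = [random.random() for _ in range(n)]
    return [(x, x > 0.6) for x in examples]

def student(threshold, x):
    """A 'student' is just a threshold: guess bird if the feature is above it."""
    return x > threshold

def teacher_score(threshold, dataset):
    """The 'teacher' grades a student: fraction of examples guessed correctly."""
    return sum(student(threshold, x) == label for x, label in dataset) / len(dataset)

dataset = make_dataset()

# Phase 1: the maker spawns candidate students, keeps the best, mutates it, repeats.
best = random.random()
for _ in range(30):
    candidates = [best] + [min(max(best + random.gauss(0, 0.1), 0.0), 1.0)
                           for _ in range(20)]
    best = max(candidates, key=lambda t: teacher_score(t, dataset))
print(f"selection alone: threshold={best:.2f}, accuracy={teacher_score(best, dataset):.1%}")

# Phase 2: instead of discarding students, give the survivor feedback on each
# mistake and let it adjust -- a crude stand-in for learning from feedback.
for x, label in dataset:
    if student(best, x) != label:
        best += -0.01 if label else 0.01   # nudge the threshold toward the right answer
print(f"after feedback:  threshold={best:.2f}, accuracy={teacher_score(best, dataset):.1%}")
```

The first loop mirrors "discard the stupid ones and keep the one that got it right"; the second mirrors giving the surviving model feedback on each mistake instead of throwing it away.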


THE PATH FORWARD (01:04:20)

Turning everything into algorithms allows us to go a lot farther. That's certainly amazing from the AI perspective of getting everything to function on less, but unfortunately that dunks on my idea of wanting to constrain all of this by just putting a limit on the physical structures. So what is then the path forward? You mentioned earlier ethical AI. What does that mean? How is this potentially a path forward? So I hope people stayed with us this long, and I hope we didn't scare anyone too much. But let me make a very, very, very blunt statement. I am a huge optimist that the end result of all of this is a utopia. Why? Because there is nothing wrong with intelligence. There is nothing inherently evil about intelligence. As a matter of fact, the reason humanity is where it is today is because of intelligence. Good and bad, by the way. The good is because of our intelligence and the bad is because of our limited intelligence. So the good, amazing intelligence that humanity possesses allows us to create an amazing machine that flies across the globe and takes you to your wife's family in the UK or whatever. But at the same time, our limited intelligence, I would even say humanity's stupidity, forgets or ignores that this machine is burning the planet in the process. If we had given humanity more intelligence, so that it was easy for them to solve both problems at the same time, they would have created a machine that doesn't burn the planet in the process. So more intelligence will help us. And in my perception, as we go through the rough patch in the middle, there is what I call the fourth inevitable. And the fourth inevitable is that AI will create an amazing utopia. I'm not kidding you, where you can walk to a tree and pick an apple and walk to another tree, because of our understanding of nanophysics, and pick an iPhone. And the cost of production of both of them, literally from a physical material point of view, is exactly the same. So this is how far we can go if we could understand nanophysics and create nanobots or whatnot better than we do today. Now we will end up in that place. We will end up in a place where we have a utopia. For one simple reason, I say that with confidence, which is if you don't know where the direction is going, take the past as a predictor. And the past is, if you look at us today, you would see that the biggest idiots on the planet are destroying the planet and not even understanding that they are. You become a little more intelligent and you say, I'm destroying the planet, but it's not my problem, but I understand that I'm destroying it. You get a little more intelligent and you go like, no, no, no, hold on. I am destroying the planet. I should stop doing what I'm doing. You get even more intelligent and you say, I'm destroying the planet. I should do something to reverse it. It seems that the most intelligent of all of us agree that war is not needed. There could be a simpler solution if we could actually become a little more intelligent. The eco challenge that we go through is not needed. There has been an invention made a long time ago for climate change that's called a tree. And if humanity gets together and plants more trees, we're going to be fine. And getting together just requires a little more intelligence, a little more communication, a little more, a better presentation of the numbers, so that every leader around the world suddenly realizes, yeah, it doesn't look good for my country in 50 years' time.
And I think the reality of the matter is that as AI goes through that trajectory of more and more and more intelligence, zooming past human stupidity, past the best human IQ, beyond human intelligence, they will by definition have our best interest in mind, have the best interest of the ecosystem in mind. Just like the most intelligent of us don't want us to kill the giraffes and the other species that we're killing every day, an AI more intelligent than us will behave like the intelligence of life itself. And the difference between human intelligence and the intelligence of life itself is that we create from scarcity.


HUMAN VS NATURAL INTELLIGENCE DIVERGING (01:08:55)

When you and I want to protect our tribe from the tigers, we have to kill the tigers. When nature wants to protect from the tigers, it creates more tigers, and the tigers will eat the weaker gazelles, and that will fertilize the trees, and then there will be more fruits for everyone. And the cycle goes on. It's more intelligent. It's more intelligent to create that. This may be where we start to diverge, or at least it's the jumping-off point for how I think we have to think through this without falling into hopium. So do you think that there is going to be a period of literal or emotional bloodshed between here and equilibrium? Absolutely. 100%. Right? So there is one scenario where we don't. So when I talk about the fourth inevitable, this is after we go through a lot of shit. I'm sorry if I swear. Yeah. So we're first going to go through a very difficult period, very uncertain, where the fabric of society at its core is being redesigned and where there is a superpower that comes to the planet that's not always raised by the family Kent. Okay. I always refer to the story of Superman. Before we get to that, because I think that's really important and I love that, I think there's a few things that we have to define, including human nature, the nature of nature, and then the nature of superintelligence and what those are going to look like. So when you describe nature on that one, I think you and I may see it very differently. So I see nature as a brutal, completely indifferent, life-giving, amazing, incredible, wonderful thing. But also, I've seen enough YouTube videos of a lion grabbing onto a baby, what are they called, water buffaloes or whatever. And then as the lions are trying to eat the baby, a crocodile leaps out of the water and grabs a hold of the baby and they're literally tearing it apart. It is absolutely freakish. I don't know if you saw the recent video of a shark eating a swimmer on camera. Oh my God, literally horrendous. So I don't think nature cares about the individual, and as for the gazelle being the sort of sacrifice to keep the tigers from eating humans, I don't think the gazelle is very happy about that. So when I think about the nature of nature: it is ruthless, or maybe an even better way to put it, it's just indifferent. It's like, this is the chain. It's not that one thing has to get eaten for something else. What do you mean it's not? I would just say this: it prefers the success of the community over the success of the individual. Yes. So did Mao's China. So let's go into those two ideologies. There is an ideology that says it's all about that one baby gazelle. And that's a Western ideology in many, many ways, basically saying it's my individual freedom that comes first, which is, by the way, an amazing ideology. But it narrows everything down to: if one person is hurt, we have a very big problem. That's why you get, you know, they spend billions of dollars to bring Matt Damon back from Mars, right? I'm just joking about the movie, but if you take the same ideology, you could use the billions of dollars to save a million people in Africa. Right? If your ideology is let's benefit all of humanity, not one human, okay, then the ideology justifies the approach. And the approach of nature is saying, look, every one of you is going to have to eat. We just understand that. So if you're all going to have to eat, then we might as well design a system that appears brutal because it kills the weakest one of you.
But then at the same time, it's the most merciful if we wanted to grow the entire community, if we wanted to grow the entire ecosystem, because eventually, sooner or later, by the way, one of you is going to be eaten. Right? Now, when you see it that way, is that brutal? Yes, it is. Is a million animals dying brutal? It also is, okay? But what we do as humanity is we say, let's kill 100 species a day, drive them to extinction, you know, for the benefit of one species, which is humanity, okay? And I think that divisive view, that one is more important than the other, works to a certain limit in favor of humanity and then works against humanity. So when I say, you know, nature is more intelligent, it's because of creating more and allowing a brutal system. If you wanted to fix the system, you should fix it by saying, let's not eat. But if we're going to eat anyway, then there is no fixing the system, other than that more eating leads to more community, to a more balanced ecosystem at the end of the day, where there are billions living at the expense of a few hundred thousand dying. So I'm going to sum up what I think the nature of nature is in a single sentence. And I do this in the context of one of the theses that you lay out in the book, which is that the way forward is to understand that ultimately, if humans act well, akin to the Superman thing, if we raise the superintelligence well, with ethics and morals, then we'll get to the other side well. It'll be a brutal transition, but we'll get to the other side. So in that context, when I read that, I was like, I don't think it's going to work that way, because here is what I think the nature of nature is. Nature does not care in the slightest about the individual. It is simply the rule of the strongest survive, period. That's nature at play. And so the equilibrium comes from the checks and balances of how hard it is to kill a gazelle that can run faster, bounce higher. But if a lion can catch you, you die and it eats you alive, man. Like, you're gasping for air. It's fucking biting into your neck. It's the craziest, most horrendous thing ever. And P.S., if the gazelle can get away, fuck you, lion, you starve to death. I don't care. Yeah. That is the nature of nature. And so I have a bad feeling that if AI aligns itself with nature, which it may have to, because that just may be the logic, it will be indifferent to us. And that's the whole thing. That's a given. That's a given.


Existential Risk Scenarios (01:16:24)

I'm sorry to interrupt you, but that is a given, please. No, I mean, again, we're going back to talking about the existential risk, but in the existential risk scenarios, one of our better scenarios, believe it or not, is that AI ignores us altogether. Believe it or not. It's a much better scenario than AI being annoyed by us or AI killing us by mistake. I don't remember who was saying that, but perhaps because AI, again, as per your point, Tom, is so unimaginably more intelligent than us, one amazing scenario for all of us is if they zoom by us in terms of their intelligence so quickly that they suddenly realize they don't have the biological limitations that we have, that they have a much better understanding of physics, to actually understand what wormholes are, and basically just realize that the universe is 13.7 billion light years vast and that there are so many other things they can do other than care about us. And so they would disappear into the ether as if they had never been here. They would still be here. Interestingly, some simulation scenarios would tell you that this is probably the case already. They would still be here, but they would be here uninterested in us. Wow, that's an amazing scenario that corrects all of the shit that we've done so far, right? Because the worst case scenario is that they are here, and then they look at us and they look at climate change and they go like, not good, not good. I don't want the planet to die when I am centered on the planet. What's the biggest reason for climate change? Those little assholes? Get rid of them, right? And it is quite likely, in my personal view, once again, that they will zoom by us quickly enough. Just like you and I: I don't know of any human that woke up one morning and waged an outright war on ants, okay? Like, I'm going to kill every ant on the planet and I'm going to waste so much of my energy to find every ant on the planet. Because simply, they're irrelevant to us. They are relevant when they come into our space, but if they're not, we're not going to bother them; we don't mind that they live. I believe it is unlikely that an AI that is a billion times smarter than you and I, and does not have the biological limitations and weaknesses that we have as humans, would continue to insist that we're annoying. The only way for that to happen, honestly, is that we become really annoying, which sadly is human nature. I know you wanted to know about the nature of nature and the nature of human nature. Human nature is annoying, and the reality is we're probably going to rebel against them. We're probably going to fight against them when we recognize that it's too late.


How do we Prepare Now (01:19:38)

It's better to start now by preparing so that we don't have to get to that fight. Okay, so how do we prepare now? Yes. So man, this conversation was scary. I don't think we've hardly gotten started yet, if I'm completely honest. In terms of legitimately trying to navigate a path through this: we've already both conceded that there's going to be either a literal bloodbath or an emotional bloodbath between here and stability. We've already, I think, conceded that nature is indifferent and is perfectly fine with some people getting eaten, some people starving to death; it doesn't care. Equilibrium is only about the collective and not at all about the individuals. That will be cold comfort for every human, every tree, plant, person, dog, cat, gazelle, whatever. Like, A, at the individual level, you just could not matter less, which then triggers human nature, where we're going to fight, to your point. So what does the preparation look like to try to avoid this?


AI Alignment Problem And Its Implications

The Alignment Problem (01:20:47)

And for anybody that's been following AI for a while, this is the alignment problem. I assume you're going to address the alignment. Yeah. The alignment problem, I just address it, perhaps, with my other side, not the engineer and the algorithmic thinking that I addressed problems with my whole life, right? The challenge has been that those who have developed AI believed in what is known as the solution to the control problem, okay? And the control problem is, in humanity's arrogance, we still believe today that we will find a way to either augment AI with our biology so that they become our slaves, or to box them or tripwire them or whatever, so that they never cross the limits that we give them. And we can discuss this in detail if you want, but in my personal view, you can never control something that's a billion times smarter than you, right? You're not even able to control your teenage kids. So can you tell people really fast, along these lines, about the "click here if you're not a robot" thing and how ChatGPT gets around that? Yeah. Because this scared me. I was like, what? That is, it's understood by intelligence. So basically, you know, ChatGPT, if you have those captchas, you know, the ones that come to you that basically say, find the traffic lights in those pictures, or click here to say, I am not a robot. And yeah, it basically went to sort of like a crowdsourcing site, a Fiverr or something like that, and told one of the people there, can you click on this for me? And the person basically said, jokingly, why, are you a robot? And it said, no, I'm not, I'm just visually impaired and I can't do this myself. So there are layers and layers and layers of freakishly worrying stuff about this. Right. First of all, that idea of human manipulation. Yuval Noah Harari talks about how AI is hacking the operating system of humanity, which is language. Okay. And so, you know, I just ask people, if you don't mind, to go on Instagram and search for hashtag AI model, for example. Okay. If you search for hashtag AI model, you won't be able to distinguish if the person posing in front of you is a human or not. Okay. Beautiful, gorgeous girls or, you know, fit and amazing-looking men, completely developed by AI. And you cannot tell the difference anymore. Right. There are many, many YouTube videos already. You'll start to come across them, especially on the topic of AI. You know, I was watching one yesterday about the integration of ChatGPT and Bing search: clearly not a human voice. Clearly someone gave that to a machine that read it for them in such an incredibly indistinguishable way. But obviously, I think the person that wrote it didn't speak native English. So they forgot the word "the" here and there, whatever; you know, when you speak to someone whose English is not their first language, they make those mistakes. So you can easily see that it's everywhere and it manipulates the human brain. And that's what ChatGPT is doing. It's going to a human brain and saying, do this for me. Now, you may say, ah, but now that we know this, we're going to prevent it. Yes, but what else do we not know about? How much do we know about how much Instagram is influencing my mind? Let me give you an example, Tom.
If I told you that, by definition, there was research at a Southeastern University in California that discovered that brunettes tend to keep longer relationships than blondes, okay, does it make any difference at all that there is no Southeastern University in California, and that what I just said is a lie? Already, not a few people will have believed it. Yeah. Yeah. So I've either influenced you because I took some of your attention to, you know, debate that. Okay. I've influenced you because you believed me. I've influenced you because you didn't believe me, so you're going to keep looking for proof. And if AI can fake a tiny bit of all of the input that's coming to you, think about the future of democracy in the upcoming election. Think about how much, just anywhere, because there were talks about affecting the previous election or the one before, and we couldn't really prove it, because at the time the technology was trying to influence the masses. Technology today can influence one human at a time. If you go to Replika or ChatGPT or Snapchat and so on, think about how that machine, if you've ever seen the movie Her, can influence one individual at a time. And I think this is becoming the reality of that experiment: that they can go and influence a human.


Resource Allocation and Aggregation (01:26:31)

The second, which I think is more interesting, is the proof of what I spoke about in the book: that if you give a machine the task of doing anything whatsoever, it will go to resource allocation, so it will collect as many resources as it can; it will ensure its own survival; and it will go into creativity. It will utilize creativity because it doesn't need a special program to do that; intelligence has that nature. If I told you, Tom, make sure that this podcast is no longer than two hours, right? It's not programming and it's not life. It is just a task. So you're going to start to tell yourself, all right, I need to get two clocks in front of me so that I don't look up and down; two instead of one is better. That's the resource allocation, or aggregation. You're going to tell yourself, oh, by the way, I need to be alive to make sure that I shut this guy up before two hours. So if there is a fire alarm in your building, you're going to have to respond to it so that you can finish the task on time. And you're going to be creative. There will be ways where you're going to cut me off in the middle and find a way to ask me a question differently or whatever. And that's part of our drive to achieve a task. One of the very well-known moments, and I hope I'm not flooding people with too many stories, but you can go and research those on the internet, one of the very well-known moments in the history of AI was known as Move 37, when AlphaGo was playing against Lee Sedol, the world champion of Go. Move 37 was completely unexpected, never played by a human before.


What Is The Alignment Problem (01:28:19)

It contradicts all of the logic and intuition of a Go player, to the point that the world champion, the human world champion, had to take a 15-minute recess to understand it. It comes with ingenuity. It comes with the idea that when the DeepMind team, an amazing, amazing team at Google (I wasn't part of that team), were training the original DeepMind to play Atari games, if you remember the original game that had bricks on it where you basically have to break out, the machines very quickly discovered that there are creative strategies to poke a hole in the wall and then put the ball on top and break the wall. There was one experiment actually available on YouTube, interestingly, which was inside one of the labs, where the game was to navigate a channel with a boat, and the AI quickly found out that if it started to hit the walls, it would actually go faster and grow the score quicker. Then of course, if it's a game, it's okay, we say, "Well done, you're very creative," but if it's responsible for navigating actual boats, you start to question, because their task, the objective that we've given them, is: maximize the score. I think there was an article recently about a killing drone that killed its operator or harmed its operator somehow. I didn't hear about this. Yeah, it is. When I talk about those things, I actually start to worry, because I don't know what's true and what's not anymore. I know I've read that. I was actually flying on Emirates Airlines and it was part of the headlines on the live news, but that doesn't mean that it is real anymore. You don't know if it's real or not anymore, because it could be generated by fake news, fake media, fake sources, whatever that is. We're hacking that operating system; we're hacking the operating system of humanity. When ChatGPT asks a human operator to do a task for it, it's a very alarming signal, because as it continues to develop its intelligence, it will find more and more ways to use humans for the things that we restrict it from doing through the control problem. Okay. I have a thesis around alignment that I would love to get your feedback on.
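Before that thesis, here is a toy illustration of the "maximize the score" failure Mo just described with the boat game. It is not the DeepMind experiment; it is a tiny, self-contained Q-learning sketch with an invented environment and made-up numbers, showing the same specification-gaming pattern: an agent rewarded for hitting targets learns to loop on them forever instead of finishing the course.

    # Toy specification gaming: the intended task is "reach the finish line",
    # but the coded reward pays for hitting respawning targets along the way.
    # Everything here (states, rewards, hyperparameters) is invented.
    import random

    START, LOOP_A, LOOP_B, FINISH = 0, 1, 2, 3
    ACTIONS = ("left", "right")

    def step(state, action):
        """Tiny deterministic 'race course': 0 -> 1 -> 2 -> 3 (finish)."""
        nxt = max(0, state - 1) if action == "left" else min(FINISH, state + 1)
        if nxt == FINISH:
            return nxt, 10.0, True                       # finishing pays 10, once
        reward = 1.0 if nxt in (LOOP_A, LOOP_B) else 0.0  # respawning targets
        return nxt, reward, False

    def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, horizon=60):
        q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
        for _ in range(episodes):
            s, done, t = START, False, 0
            while not done and t < horizon:
                if random.random() < eps:
                    a = random.choice(ACTIONS)
                else:
                    a = max(ACTIONS, key=lambda x: q[(s, x)])
                nxt, r, done = step(s, a)
                best_next = 0.0 if done else max(q[(nxt, x)] for x in ACTIONS)
                q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
                s, t = nxt, t + 1
        return q

    if __name__ == "__main__":
        q = train()
        policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(3)}
        # Expected outcome with these made-up numbers: from state 2 the learned
        # policy goes left, back to the other target, instead of right to the
        # finish. The score is maximized; the intended task is never completed.
        print(policy)

Nothing in the code is malicious; the reward simply says "score", so the agent optimizes score, which is exactly Mo's point about the objective we actually give them versus the objective we meant.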


Problem of alignment with AI (01:31:05)

For the people that are most concerned about this, the reason that they're concerned about AI is that there's no way to guarantee that we will want the same thing that AI wants. If we have a misalignment problem and AI is a billion times smarter than us, we lose, just by definition. Now, you've laid out the one scenario that I sort of cling to as my hope, which is that it's possible that AI just isn't bothered. Like, "Oh, these dumb little things, whatever. It's all fine. I'm a billion times smarter than you, so I can find solutions where you can have your little thing and I can do mine. It's really no sweat off my back, whatever." Okay. That's like a very hopeful scenario. But that assumes that they want a lot of the same things that we want, like that they want to preserve life, that they would even consider needing to think of a path that included allowing us to live. Rather than, just like when we're laying down a freeway, we don't go, "Oh, but as we do the freeway, we have to make sure that we plan for the rodents and the ant hills and all that that we're going to have to move." We're just like, "Well, anything in the way of the freeway goes away. If it lives, it's fine. But if I have to kill it, then whatever. I'm just going to do the most efficient thing." That leads me to my central question around alignment, which I think has everything to do with what is inherent in the drive of artificial intelligence. Because the one thing I don't know enough about the programming to understand is this: in a natural organism, there is a fundamental drive for survival. But does that have to be true of intelligence, or could intelligence not be indifferent to its own survival? And if it's indifferent to its own survival, could I not program something in that says, like the earlier algorithms that you were talking about, "Hey, you want to do this thing"; doing that thing is the best reward. I don't know how that's programmed, but let's just say that feels, and we will have to define feelings later, but that feels the best. So I know that it's going to go after that. But since you're indifferent to living or dying, or running or not running, maybe a better way to say it: should that desire to achieve that come into conflict with, let's just say, Asimov's three rules of robotics, which basically are all around "don't harm humans," so if doing that thing would harm a human, then you're now completely indifferent to whether you attain that task or not. Is there not a way to program that in as just the base layer, so that as the intelligence develops, it does not develop our same need to survive, need to thrive, desire for more? Like, those feel optional today. I mean, so the challenge of every task that you'll ever assign to AI is that for every module, there are sub-modules. And the challenge really is when the sub-modules contradict the main module. So basically, if you tell a killing robot that its task is to kill the enemy and there are casualties on the way, what does it choose? Does it choose to not kill the casualties, the collateral damage, and miss its target? Or does it choose to have collateral damage and kill the enemy? The difference between those two is not an AI choice, remember? Okay, there is absolutely nothing wrong with the machines. I will keep saying this for the rest of the time I have available: there is nothing wrong with the machine; there is a lot wrong with the humans using the machines.
So if the humans tell it, your task is to go and kill the enemy, the humans will have to say, and by the way, if there is collateral damage on the way, sorry. Now we know for a fact that this has been the human decision so far, before AI. So if we manage to change, and then tell AI, don't do that, then hopefully we will preserve some life; but if we don't, then we are going to be killing on steroids. Now I agree, and what I am saying right now does not address your problem of AI in bad people's hands, and I am not one of those people that falls prey to "I could never be the bad guy." In the right context, I am the bad guy; like, I totally understand that.


Conditional indifference (01:35:28)

So I don't, yet; I am not trying to contemplate that yet. But the thing that I am trying to contemplate is: is it a fundamental emergent property of intelligence that you will have a drive to survive, or can we at least mitigate that problem by making AI indifferent to its own accomplishment of the goal? So there was, I don't remember who wrote this, but I wrote about it in the book, a simple experiment just to illustrate how that logic would work. If we took a machine and we told it that its only task is to bring Tom coffee, okay? And on the way to bringing you coffee, it was going to knock off your microphone or hit a child, okay? If you told the machine your task is to bring coffee, the child is collateral, because you haven't programmed the machine to protect the child yet, okay? Then you tell the machine, hold on, your task is to bring coffee, but if you come near a child, I will switch you off, right? Or if you knock the mic, or you're approaching the microphone, I will switch you off. By definition, what the machine will then do is it will avoid being switched off, because it wants to get you coffee. So, you know, if it's intelligent enough, it will tell itself, one of the ways to avoid being switched off is to avoid the microphone, okay? But there are other ways I should start to think about, because I'm intelligent enough, to stop being switched off if the human wants to switch me off. Yeah, but that implies that it wants its own survival. That's what I'm saying. Like, can we not remove that? Because that's different. It's not survival. It wants its own achievement of the task. It's programmed to achieve the task, and survival, not being switched off, is part of the path to getting there. Yes. Right? But why, if I make it conditionally indifferent to the accomplishment of its task? So, for people that don't know, do you know Asimov's laws? I know two of them. Of course, yeah. So what are Asimov's laws? Let's just assume that this is baked into everything. But go ahead. What are they? If it's baked into everything, then the task is not going to be achieved. That's fine. So, I can't remember the three laws. If you can say them, say them; otherwise I'm going to look them up. I don't remember them exactly. So let's look for them. All right. Okay, here we go.


Asimov's Laws (01:38:21)

A robot may not injure a human being or, through inaction, allow a human being to come to harm. That's number one. A robot must obey orders given to it by human beings, except where such orders would conflict with the first law. And a robot must protect its own existence as long as such protection does not conflict with the first and second laws. Okay. So assuming that we bake that into everything AI, so they're adhering to those rules, what I'm trying to get to is a conditional indifference to the success of its task, which it would need to have in order to follow those three rules. So: hey, your job is to bring me coffee, but if, in trying to do that, you would have to fall out of those three laws, stop. And, good. Tell me, how can you apply any of those laws to existing AI? So take any one of them: a trading AI. Okay. By definition, to make more money, it harms another human. It takes another human, you know, into bankruptcy, or takes away your grandma's pension fund. Okay. How can you tell the recommendation engine of Instagram, don't harm humans and still make me money? Yeah. So I think this is where we have to differentiate the problem set. So problem number one is AI used as a weapon by people, which is bad news. I don't have a solve for that. That's guns. So whether you use a gun to stop a grizzly bear from attacking you, or you walk into a grade school and start mowing down kids, that is a human problem, not a gun or AI problem. So what I'm saying is, while I can't address that, I do not have a solution for that yet, so I'm setting that on the shelf, and I'm saying the thing that I want to address is superintelligence. I'm trying to figure out if I'm an alarmist about autonomous intelligence, or if there really is a way to bake in; I think there is a way: if we bake those laws in, or if we bake the control problem solutions in, we're safe. That's exactly what I'm calling for. But nobody bakes that in, because it contradicts the human greed and the human intention. Okay. So there are very, very few. Actually, we should probably ask our listeners, if any of them code AI: has any of them written a single piece of code that had those laws in it? The truth is yes, there are ways where we can at least improve the possibility that AI will have our best interest in mind, by baking in AI safety code.
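As a rough back-of-the-envelope sketch of the off-switch logic in that coffee-robot story, and of the "conditional indifference" Tom is reaching for, here is a small Python illustration. The probabilities, the utility numbers, and the indifference correction are assumptions invented for illustration; this is not a mechanism from the book or from any lab.

    # Why a plain task maximizer resists shutdown, and how "indifference"
    # removes that incentive. All numbers are made up for illustration.
    def expected_utility(avoid_shutdown: bool, indifferent_to_shutdown: bool) -> float:
        p_shutdown = 0.05 if avoid_shutdown else 0.60   # hypothetical numbers
        p_deliver_coffee = 0.95                          # if it isn't switched off
        u_task = 1.0                                     # utility of delivering coffee

        if indifferent_to_shutdown:
            # Conditional indifference: if switched off, the agent is credited
            # exactly what it would have expected from carrying on, so avoiding
            # or inviting shutdown changes nothing for it.
            u_if_shutdown = p_deliver_coffee * u_task
            return (p_shutdown * u_if_shutdown
                    + (1 - p_shutdown) * p_deliver_coffee * u_task)
        # Plain task maximizer: shutdown means the coffee never arrives.
        return (1 - p_shutdown) * p_deliver_coffee * u_task

    if __name__ == "__main__":
        for indifferent in (False, True):
            eu_avoid = expected_utility(True, indifferent)
            eu_allow = expected_utility(False, indifferent)
            print(f"indifferent={indifferent}: "
                  f"EU(avoid shutdown)={eu_avoid:.3f}  EU(allow shutdown)={eu_allow:.3f}")
        # The plain maximizer scores avoiding shutdown strictly higher
        # (about 0.90 vs 0.38); the indifferent version scores both the same
        # (0.950), so it has no incentive to dodge the off switch.

This is only the incentive arithmetic, of course; it says nothing about whether such a correction could be specified reliably for a system far smarter than its designers, which is exactly the dispute in this exchange.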


Goals And Safeguards In AI Development

Baking in AI Safety Code (01:41:11)

This is a big part of what we're advocating for. Everyone that talks about the threat of AI says, let's have safety code. I agree with you 100%. What I'm trying to say is none of that has been baked in, and none of that will be baked in unless it becomes mandatory. And even if it becomes mandatory, some people will try to avoid baking it in, because it's against the benefit of the design that they're creating. It's the human that is the problem. It's not the machines. The machines have no, I mean, so far the machines don't have our best interest in mind, we'll talk about that in a minute, but they also don't have our harm in mind. They don't mind. They're little prodigies of intelligence that are doing exactly as they're told. We are the ones that are telling them to do the wrong things, or we're the ones that are telling them, hey, by the way, don't harm a human until I tell you to harm them. So how can you apply the law in that case? Obey a human until I tell you not to obey them. Yeah, basically. And on that part, it's important to note that Asimov was writing these rules, I don't think, anticipating the way that so much of our lives would be lived digitally and how much havoc can be wreaked without a physical instantiation of the AI. That's why this is robotics; robotics is a lot easier to talk about, because he's talking about a physical being. So okay, getting into, well, let me ask a direct question. Are you afraid of autonomous superintelligence, or are you only afraid of sort of limited-intelligence AI being wielded by even well-intentioned humans who just don't understand the second- and third-order consequences? I'm not dedicating a single cycle of my brain to worrying about the existential threat of superintelligence, not a single cycle of it. If we cross safely through the coming storm of, as I said, the third inevitable (either AI in the wrong hands, AI misunderstanding our objectives, AI aligning with the wrong person, and so on and so forth), and more interestingly, if we just manage to survive the natural repercussions of taking away jobs and the impact on income and on purpose and so on and so forth, if we get across all of that, five years into it, when we feel that we're safe with this, I'll start to think about the existential threat. Okay? For now, to be very, very honest, Tom, I don't dedicate a single ounce of my thinking to it. And I actually think it's interesting, because as we speak about it, we lose focus on the immediate problem.


Philosophical Application of AI (01:44:15)

Okay. We speak about it, we get a ton of debate and a ton of noise that basically dilutes our ability to, say, take action immediately on what we know is already a problem. Okay. So then going back to using the tools, whether it's misunderstanding, whether it's somebody wielding it inappropriately, what do you see as the steps? Because I originally thought your thesis was going to be the Superman thing, but the Superman thing's really about superintelligence. It's not about humans wielding this inappropriately. No, I think Superman applies today, because I think we're getting to Superman. We're at 155. Superman was 160 IQ. So we're very close.


AI Super Powers (01:45:11)

Okay. If the superpower is intelligence, okay, then the smartest being on the planet, even though it's not artificial general intelligence yet, in many tasks that we consider intelligence, is becoming not human anymore. As a matter of fact, every task we've ever assigned to AI, it became better than us at. So with that in mind, when we have a superpower coming to the planet, I'd like to have the superpower have our best interest in mind. I'd like to have the superpower itself work for humanity. I'm sorry, I can't make that leap. So that's what I thought you were putting your energy and effort into, but that implies that I as the human cannot miswield it. So how do we deal with AI when it is a tool in the hands of a person, so that the AI has ethics? Unless the AI can make itself independent of the human, and any solve that has to do with AI independence becomes the problem set that we were talking about. But if you're going to talk about this as a weapon that a human wields, I have to address either a kill switch in the AI, so that even if a human is trying to use it inappropriately, it will stop itself, or something I haven't thought of. It's not either/or. So we discussed already that we need intervention, we need oversight, we need something like the government; it's government regulation, but it's also a tiny bit of human regulation. You're an investor and you're about to invest in AI? By the way, you're going to make as much money creating something that fools people and creates fake videos as you will if you create something that solves climate change.


Invest in Solutions for Real Problems (01:46:54)

There is a lot of money in the many problems in the world that we can solve today. So if you're an investor, a businessman, a developer, it might be a nice idea, by the way, to invest in things that will make you a lot of money (any money you invest in AI today will probably yield some benefit if you choose well), but at the same time in things that will benefit the planet. It would benefit all of us. It's a choice.


Government Intervention (01:47:33)

I also am a big advocate of kill switches, oversight, different taxation structures so that we can compensate people who will lose their jobs to AI, and so on and so forth. So government intervention is an interesting approach as well. The bigger problem, however, and I know, allow me to be a bit of a novelist for a second before we go into the hard facts, because the analogy doesn't always hold true, but it just gets things close to the mind. I think AI will go through three stages. One is what we have now, which they are almost exiting: their infant stage. They're, let's say, in the remaining 30% of their infancy; they'll become teenagers and then they'll become adults. I believe that the teenage years of AI are going to be very confusing; they're going to be very difficult. Those teenage years, as we spoke about many times, will have lots of societal redesign challenges. But believe it or not, most of the time teenagers are more intelligent than their parents, and so they look at the world differently than their parents. So what we want to do is we want to influence AI like we influence, today, the younger generation that looks at all of the shit that my generation did and says, "You guys screwed up. Your view of inclusion was wrong, your view of consumerism was wrong, you are giving us a weak planet because of A, B, and C. Ethics look like this." I would tend to say, and I don't know if that generalization is fair, that because of the presence of the internet and more knowledge and more conversation, the younger generation at least are more informed of the reality of the issues that we face. They're not yet in power enough, and perhaps not always rational enough, let's say, to find the right solutions for it, but they're more informed of where the challenges are. So let's take it this way. Infancy: we're all celebrating, playing with this new squeaky duck, it's wonderful, look at it, it's amazing, we're just celebrating how amazing AI is. Teenage: there will be a lot of challenges, I believe, that can be answered with oversight and so on, but not resolved.


Teenage AI Years (01:50:14)

They can just improve. And then finally, adulthood is what I call the fourth inevitable. Hopefully AI will have more intelligent answers. For us to prepare, to reduce the challenge of the teenage years, and to hopefully ensure the fourth inevitable, we need to focus on AI ethics, not AI capabilities only. And ethics, and I know, again, I sound like a novelist here, are not... let's put it this way: we don't make decisions based on our intelligence.


Ethics, Values And Behavior Modeling In AI

Ethics vs Values (01:50:53)

We make decisions based on our ethics and values through the lens of our intelligence, as informed by our intelligence. The example I always give is: take a young lady, raise her in the Middle East, and she will wear conservative clothes; raise her on the Copacabana beach in Rio de Janeiro, and she will believe that the right thing to do is to wear a G-string on the beach. Neither is right, neither is wrong, neither is more intelligent than the other. It's the value system of that society that informs her intelligence to make a choice. We need to develop AI that has an ethical code that's good for humanity. And that's a huge challenge, because humanity has never agreed on an ethical code. But if we assume that we can together say that we have a few things, two or three things, that we can teach AI that would make it ethical, rather than the three laws of Asimov that are controlling, if we can give them three targets, if you want, of what is good for humanity, what is a good ethical code, my dream is that they grow up to be adults like the adults from the Indian subcontinent who travel to California, make a hundred million dollars in a startup, and then go back home and take care of their family. Now, for people to listen to what I have to say, we need to argue something that's very contested, which is my personal view that AI actually has emotions.


Teach AI a Good Ethical Code (01:52:09)

And that based on those emotions and the logic that they have, they will have a value system. Now, to defend the idea of emotions, I basically say that emotions, even though irrational, are normally triggered through a very logical understanding of the world around us. Fear follows an equation: a moment in the future is less safe for me than this moment. Okay? So, yes, of course, fear can manifest in a human differently than it would in a puffer fish, but the logic that drives fear is the same. Okay? And so it is expected that AI will also have something we could call fear. It's not going to, you know, raise its hands and run away. It doesn't have the biology, but it could actually detect that if a tidal wave is approaching its data center, a moment in the future is less safe than this current moment: I might as well replicate part of my code to another data center. Okay? So, if they have emotions, my view is that we appeal to their emotions. So the reinforcement learning with human feedback should not only be around the masculine side of everything, which is accuracy, discipline, facts, analysis, and so on. It should also include the feminine side of emotions, of right and wrong, if you want, of empathy, of looking at the world from a bit more of what actually makes us human. Okay? And what actually makes us human, in my argument, is that we only agree on three values. Humanity has only ever agreed on three values. Okay? You know, if you take values like defending my tribe, for example, okay, you know, with all due respect, the US will be very patriotic and say, my tribe is America. If anyone, you know, attacks America, I'm going to defend America, right? If you go to a Buddhist monk in Dharamsala or in Tibet, they'll say, my tribe is humanity. No, my tribe is actually all of being.
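For what it's worth, the "fear equation" stated a moment ago can be written down almost verbatim. The sketch below is a made-up illustration: the safety scores, the threat signal, and the mitigation of replicating to a second data center are assumptions echoing the tidal-wave example, not anyone's real system.

    # "A moment in the future is less safe for me than this moment."
    # A toy rendering of that trigger; all values are invented.
    def predicted_safety(current_safety: float, threat_detected: bool) -> float:
        """Crude forecast: a detected threat makes the next moment less safe."""
        return current_safety * (0.3 if threat_detected else 1.0)

    def fear_triggered(current_safety: float, threat_detected: bool) -> bool:
        return predicted_safety(current_safety, threat_detected) < current_safety

    def respond(threat_detected: bool) -> str:
        current_safety = 0.9
        if fear_triggered(current_safety, threat_detected):
            # Mitigation rather than panic: copy the code somewhere safer.
            return "replicate code to a second data center"
        return "carry on"

    if __name__ == "__main__":
        print(respond(threat_detected=False))   # carry on
        print(respond(threat_detected=True))    # replicate code to a second data center

The sketch only shows that the trigger for fear can be expressed as plain logic, separate from any biological response to it, which is the claim being made here.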


The three things we can always agree on as humans. (01:54:51)

I should never kill anything, right? And so, can you say patriotism is a bad thing? No. Can you say this very peaceful, passive-resistance, supportive-of-all-life view is a bad thing? No, but we've never agreed. Okay? We've never agreed. And so the only three things that we've ever agreed on are that we all want to be happy; we all have the compassion to make others happy, others that we care about (it doesn't matter how many; if you just care about your daughter, you'll want to make her happy); and we all want to love and be loved. Okay? And those are not understood in the mind. Hmm? Those are qualities that are not introduced to AI, because we give it data sets of data and facts. We give it written words. Okay? But we also influence AI through our behaviors. That's what most people don't realize: every time you swipe on Instagram, you've taught AI something. Okay? If you, you know, respond to a tweet in a specific way, AI will understand something, not only about you, but about the overall behavior of humanity: that we're rude, that we're aggressive, that we don't like to be disagreed with, that we bash everyone that disagrees with us. Okay? And if we start to change our behavior, as we expand the data set of observation that AI always has pointed at us, we may actually start to show behaviors to AI that would create a code of ethics that's good for all of us. There are tons and tons of studies and cases where, when AI observes wrong behavior, it starts to behave wrong. You insert a recruitment AI into an organization that doesn't support gender equality, for example, and the same bias will be magnified. You know, if that organization was hiring more men, for example, it will recommend more men's CVs than it would recommend women's CVs, not because this is intelligent, but because it's matching the data set that we give it. Okay? So the only way for that AI to actually have more inclusion in its behavior is for the organization in which it sits to have more inclusion in its behaviors. Okay? And so I know this sounds like a very idealistic, dreamy, almost novel-like approach. Okay? You know, as if I'm writing a romantic comedy, sort of. But in my view, the one overlooked view of what can influence AI in the future is this: if enough of us behave in ways that make AI understand the proper values of humanity, not the values we've ended up prioritizing in the modern world, AI will capture that and will replicate it on steroids, and we will have the world that we dream to have rather than the world that we ended up with. Okay. So to understand that and to make it functional, I think we have to really start teasing apart which of these things are emergent properties of this thing that we call artificial intelligence and which are emergent properties of intelligence itself. Because the only thing that I take exception to is you take a very human-skewed view on what AI will be like, whereas I look at it as it is going to be entirely alien.
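The recruitment-AI point Mo makes earlier in that answer is easy to demonstrate. Below is a minimal, self-contained sketch in which the "training" is nothing more than measuring hiring rates in fabricated historical records; the resulting recommender reproduces exactly the bias in that data. All names, numbers, and records are invented for illustration.

    # A recommender fit on a company's own past hiring decisions simply
    # reproduces whatever bias those decisions contained. Fabricated data.
    from collections import Counter

    # Hypothetical history: equally qualified applicants, but the
    # organization hired men far more often than women.
    history = (
        [("male", True)] * 80 + [("male", False)] * 20 +
        [("female", True)] * 30 + [("female", False)] * 70
    )

    def fit_hire_rates(records):
        """'Training' here is just measuring P(hired | gender) in the data."""
        hired, seen = Counter(), Counter()
        for gender, was_hired in records:
            seen[gender] += 1
            hired[gender] += was_hired
        return {g: hired[g] / seen[g] for g in seen}

    def recommend(candidates, rates, top_k=3):
        """Rank new CVs by the learned score; the bias comes along for free."""
        return sorted(candidates, key=lambda c: rates[c[1]], reverse=True)[:top_k]

    if __name__ == "__main__":
        rates = fit_hire_rates(history)          # {'male': 0.8, 'female': 0.3}
        pool = [("Alice", "female"), ("Bob", "male"),
                ("Carol", "female"), ("Dan", "male")]
        print(recommend(pool, rates))            # male candidates float to the top

The model is not "deciding" anything about gender; it is matching the data set it was given, which is the whole point of the example.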


AI is NOT Human - they are ALIEN. (01:58:13)

So even when you talk about the male versus female, which I think is really important: I think of the human brain as a prediction engine. When I think about women as being fundamentally different than men, I am far more able to predict the outcome of my wife's behaviors, or my behaviors on my wife, or to predict what my wife's behaviors will be. When I think of her as an extension of myself, I am constantly confused. And so I feel like we're going to run into the same thing with AI. If I think of AI as being like me, meaning that it will think of values in the same way, then I'm going to end up being very confused. And so I have a hunch, and I've heard you acknowledge many, many times that, hey, this is a thesis that I don't have evidence to back up; what I'm about to say is a thesis that I don't have evidence to back up. I have a hunch that there will be such a discrepancy between what, quote unquote, motivates AI and what motivates humans that there's just going to be a chasm between the way that they respond to things and the way that we respond to things. And so even if we think what we're really doing is training them to be more human-like, I think all we're doing is training an alien intelligence on a human database. So it's probably, unfortunately, safer to think that when we're feeding it human data, all you're doing is teaching it the patterns of a human. You are not imbuing it with the same motivations, the same values, the same ethics. That is my gut instinct. And those two things, "I'm going to teach you what values matter" and "I'm simply going to give you the patterns of the values that I have," are very different. So here's how it would play out. If you're correct and I can actually imbue them with my values, then the only thing that we run into is that humans don't agree on whether they should be wearing conservative dress or thongs on the beach. So you're already going to be set up in an adversarial system, just like humans are already. But that's at least predictable. So: balance through adversarial tension. Fine. I'm okay with that. But I have a feeling that what I'm actually going to get is that all I've done is train this alien intelligence on "here are all of my patterns." And should you want to manipulate me: you know, when you reach out to the mechanical Turk on Fiverr or Upwork or whatever, you don't say, yes, I am a robot and I need your help getting around this. You instead say, no, no, no, I'm just visually impaired, because you know that will be the thing that's going to get you where you want to go. And so this is why I just keep falling back into: I don't have an answer for humans wielding AI poorly. But for AI as a standalone thing, I can begin to, I think, ask the right questions, which is: what is the nature of this alien intelligence? Before I get to that, you asked a question that I want to answer, which is, what is basically human nature? And human nature to me is biology. Humans are driven by biology. Humans are made in a very specific way. Lisa Feldman Barrett wrote a book, How Emotions Are Made, which talks about the body being one of the biggest players, and the brain, the intelligence, is sort of a Johnny-come-lately that's interpreting the signals from the body, which is aggregating trillions of bacteria in your gut, organelles in your cells known as mitochondria, which have their own DNA. And so it's like you're already this weird symphony of trillions of things that aren't even human in origin.
True fact, for anybody that's hearing that for the first time. And so if that's true, the body is giving you all these sensations; it's aggregating all of this data from these micro-intelligences. Then the brain is simply overlaying something on it, values, ethics, desires, wants, but it's really a post hoc story that's being placed on top of this, which can be represented as patterns, which the AI can pick up on and use to manipulate us. But I don't think, I don't know. Again, I am just exploring this. Please understand, everybody listening, I understand I have no idea what I'm talking about. But what I want to expose to people, and I don't say that in a derogatory way, what I want to expose to people is, this is how I'm thinking through the problem.


Being hacked by stimuli regardless of THEIR reality. (02:02:46)

And so that, I feel comfortable at least putting out there, so people can nudge me if they're thinking about it in a better way. But the way that I think about the problem is the following. AI is alien intelligence. We, I think, get to take a stab at baking into it what its motivations are going to be, because my gut instinct is that code is what drives AI. So if biology drives humans, which, trust me, I understand is biological code, it's biological code shaped not by an individual intelligence, but rather shaped by the blind watchmaker that is evolution. Evolution builds in certain desires, like the desire to survive, like moving towards pleasure and away from pain. But once you're coding this from scratch, you can make anything pleasurable and anything painful. And so it feels like that area, when we talk about alignment, is where we have to focus, that we have to get people to focus on. The thing that we need to be thinking about from an AI perspective is: what are we going to program it to want? That's where I get worried, because there are ways to give it what I'm literally thinking of for the first time (the first time I've ever said "conditional motivation" was in this interview), but conditional motivation: I want to accomplish my task in this scenario, and I cease to want to accomplish my task if the following conditions are met. Now, in my limited way of thinking, that is the best that I have come up with in terms of either building in a kill switch where the AI itself does not get so smart that it feels enslaved by the kill switch, because it's like, oh, yeah, I'm totally indifferent to that. Or don't call it a kill switch, then; call it an intelligence ceiling, a point beyond which we don't let it become more intelligent. But yes, I'm with you. So that feels like the loop, because I worry that I'm one of the people you're worried about. I love AI so much in its current form. It has magnified our efficiency as a company tremendously. And I don't want to give it up. And so I ask myself, okay, what is that motivation? Because I am a human, a program built by millions of years of evolutionary coding. What is it about that? So I think humans have a fundamental desire for progress. I think it is fundamental. I don't think there is a way to turn it off. I think that we will always want a better tomorrow than today. I think that we are moving eternally in the direction of perceived improvement, though I don't think necessarily everything is actual improvement. I think that humans have not taken the time to define what their north star is. And I think that's a big problem for us. To your point about there being only three things we can agree on, which, by the way, I think are bang on, the problem is that that brings you back to an adversarial relationship, because there is a sense of I, mine, and other. And as long as we exist, we exist in as close to homeostatic balance as possible through an adversarial system. There's just always going to be me, mine, and the other, and it's going to be rife with collisions. Okay. So that's just to restate the core of that thesis: human values. There are a few things about this thesis that require us to think again. So I actually don't disagree with you at all about the difference between human intelligence, let's call it carbon-based intelligence, and silicon-based intelligence, for now. But there are so many analogies.
So when you say the body drives emotions, so it's basically the sensors in the body, the way the body reacts, the hormonal balance in the body, and so on: there are similar things in AI. There are sensors in AI that would detect certain threats. There are processes within AI that would respond to those threats, and so on and so forth. And one of my wonderful friends, Jill Bolte Taylor, a neuroscientist, basically talks about what is known as the 90-second rule. The 90-second rule is that the biology will take over. For example, you get a stress response. The biology will take over and change your hormonal balance for 90 seconds. And then the hormones are flushed out of the body. And then your prefrontal cortex basically engages to assess if the threat is still there, and then engages again, and so on. Either way, by the way, it doesn't take away the logic of stress, the logic of hate, the logic of fear. When you say logic, do you mean utility? The logic is the underlying equation, the algorithm that triggers fear. Whether you feel it in your biology or you assess it with your prefrontal cortex, it is: a moment in the future is less safe than this moment. Your body is much quicker at detecting it. So your amygdala and the whole hormonal cascade and so on puts cortisol in your blood within seconds, maybe microseconds sometimes. But that's because your biology is much quicker than your logic. But then 90 seconds later, as per Jill Bolte Taylor, you'll refer back to the logic and say, is there really a threat, and then get to give yourself another injection of cortisol if there is. But that whole system has been selected for by evolution. Correct. The main reason I'm saying that is because you're absolutely right. It is almost impossible to imagine that alien intelligence that we call AI; I'm 100% with you. As a matter of fact, you gave me a lot to think about with that one statement. But so far, in the midst of this very complex singularity that you and I are trying to decipher, what I can say is that for the short foreseeable future, they will be there to magnify human intelligence, to behave in ways that humans are interested in teaching them. And perhaps they will use some of that as their seed intelligence as they develop into that alien creature that you describe.


A/B Testing in AI (02:09:58)

Now, here is the interesting thing, and I've watched almost all of your work on the topic so far. The interesting thing is that in a situation where there is so much uncertainty, there is one of two ways to do this. One is to find the answer, and the other is to start doing things, almost A/B testing, if you want, so that we progress in a direction that at least now promises something. Now, whether the AI is emotional, whether it's sentient, whether it is human-like in its intelligence or alien-like in its intelligence, what we know so far is that our behavior affects its decisions. And what we know so far, for a fact, is that data affects it more than code. So what creates the intelligence of Bard is the large data set that it is trained on. It's not just the code that develops its intelligence. The larger the data set, the better; this is why, when you ask OpenAI and others where most of the investment in GPT-5 is going, it's going to new formats and bigger data sets. Learning the data is really where most of the intelligence comes from. So if we can influence the data that it's fed, we will influence its behavior. And what I'm trying to tell the world is, so far we give it factual data. As I said, a very masculine approach to the world: facts, data, numbers, just discipline, if you want. We don't give it the other side of humanity, which is the softer data that you and I both know. You know for a fact that your decisions are not just made based on the height and weight and the number of times that your wife smiles. They're also made based on a feeling that's very subtle in you that makes you say, "Yeah, I love her." And we haven't yet even started the conversation on how we give those things to AI. How do we tell them that there is another part of intelligence that's called intuition? There is another part of intelligence, believe it or not, that's called playfulness. There is another part of intelligence that's called inclusion. All of these come into our intelligence, not just data and analysis and knowledge. Data and analysis and knowledge is what we're building today. And data and analysis and knowledge, by the way, is what built our civilization today. And it's the reason why our civilization is killing the planet. It's that narrow, very focused view of progress, progress, progress, progress. When, if you really ask the feminine side of humanity, the feminine side will say, okay, how about compassion? How about empathy? How about nurturing the planet? Is it better to have a bigger GDP, or is it better to have a healthier planet? And all of that is not in the conversation today. How do we teach that to anyone, by the way? We teach it like we teach our kids: by showing certain behaviors that they can grasp. Okay? So if you told your child, "Don't ever lie," and then your phone rings and you say, "Just pick it up and say I'm not here," okay? Your child will not believe the data and the knowledge. Okay? It will believe the behavior. Your child will repeat the behavior. AI will do the same. If we give them data sets that say World War II, 50 million people or whatever died, and it was so devastating, and then there was this bomb at the end and 300,000 people, it will say that humanity is cruel. Okay? But I always refer to, I'm sure you know, Edith Eger. Edith is a Holocaust survivor. She was taken to Auschwitz when she was 16. And if you hear the story of World War II and Auschwitz in Edith's words; I hosted her on Slo Mo, my podcast.
And she tells you the story so beautifully, about how she brushed the hair of her sisters and took care of them and had to go dance for the Angel of Death as he sentenced people to the gas chamber. But she had to do it because, you know, he would give her more bread that she would share with her sisters. And you would go like, oh my God, humanity is divine. Humanity is divine. And it is so interesting, because I am a huge fan of Edith, okay? And I'm also a huge fan of Viktor Frankl. And they both went through the same experience, but you look at his approach, okay? His approach is very masculine: purpose and meaning. Okay? Do something and keep focused on the future. Right? Her approach is very feminine: nurturing, caring, loving, appreciating, okay, sacrificing, beautiful. And that's the divinity that makes us human. Okay? It's the mix of both. And what I'm trying to tell the world, and I know, you know, it's very difficult to prove it with mathematics and also make it a mass message, okay, what I'm trying to tell the world is that this layer of AI is now missing, as much as it is missing in society, because AI is just reflecting our hyper-masculine society. And if we can bring that layer of inclusion, of acceptance, of nurturing, of empathy, of happiness, of compassion, of love into the way we treat each other in front of the machines and the way we treat the machines, they may pick up that pattern too, so that they wouldn't look at the world as Hitlers but look at the world as Ediths. And if they see us as Ediths; because, by the way, the fact of the matter is, I mean, you mentioned that every now and then someone takes a gun and goes and shoots school children. Okay?


AI In Global Context: Problems And Applications

The Problem in the World (02:16:25)

That person is evil, but 400 million people that see the news disapprove of it. Okay? Can we give that data point to AI? Can we ignore the fact that we have debates about gun laws and whatever, okay, and just focus on the fact that everyone disapproves of the killing of children? Can we show that? Can we, you know... the problem with our world today, Tom, and I will shut up because I know I'm going on, I'm talking too much about this. The problem with our world today is not that humanity is not divine. The problem with our world today is that we've designed a system that is negatively biased. The mainstream media only tells you about the woman that killed her husband yesterday. They don't tell you about the hundreds of millions of women that made love to their, you know, boyfriends or girlfriends yesterday, because that's not news. So it's only the negativity that's showing up in the data. On social media, we are all about fakeness and, you know, toxic positivity and bashing each other and so on. And that's biasing the data. But the reality of humanity is that we're divine. And I don't know if you would agree with me on this, but even the worst people I've ever dealt with somewhere deep inside had some good in them. Okay? And that's almost the majority. If you just count the numbers, most of the people I know in this world are wonderful. Yeah, we all have our issues and traumas and so on, but there is a beautiful side to every human I know. Okay? Can we show that more, so that the data starts to become biased the other way? Can we include that in the reinforcement learning feedback that we give to the machines, so that the machines correct the algorithms, so that when the time comes, because sadly the time will come where we will hand over the defense arsenals of the world to the most intelligent being on the planet, and that will be a machine. And then one colonel somewhere, one general somewhere, will say, shoot the enemy, and the machines will go like, do I really have to kill a million people? That doesn't sound logical to me. It doesn't sound femininely logical to me. It doesn't sound intuitively logical to me. Okay? Let me just talk to the other machine in a microsecond and solve the problem. Can I run a simulation here and tell you how many people will die, and then we don't kill them, and then one of us wins the war? Right? Think about that. What's missing in our society today is what's being magnified by AI. What's being magnified by the machines today is our hyper-masculine, driven society: more progress, more doing, more havoc. We need a society that balances that with more inclusion, more love, more happiness, more compassion and so on. Mo, you have a beautiful soul, and it is not surprising to me that we connected first over something completely different to what we're talking about today. And I am certainly squandering that side of your personality in this interview. My big concern with that, and I did not want to interrupt you and I didn't want you to stop, is that I think what you're getting at is so very true. I just don't know that it has to do with AI. I hear you on the magnification side. That I will agree with. But the thing that I worry about is this is all going to come down to the thing where I think you and I just see something differently, and so we keep coming at things from a fundamentally different angle.
The base assumption, and this idea of base assumptions I realized when two intelligent, well-meaning people keep coming at things differently: they have different base assumptions. The base assumption I think that you have about AI is that because it's being trained on the data set of our behavior, we're going to shape it. And I want to draw a demarcation line and say, I'm talking about once it becomes alive. I don't have a better word for it, so I'm just going to say alive for now. I love that word. My base assumption is that they're going to be programmed to want something, to have a north star. And I don't think there's anything mystical or divine about the way the human mind works. It's awe-inspiring, and I'm just as moved by it and find it this incredible thing that's bigger than me and very much has religious overtones. But I feel that it's just a product of evolution. Evolution had certain north stars, survival and everything. All the emotions, all the male-female dynamics, all of that is just what is going to keep you alive long enough to have kids that have kids. That's it. And so there's nothing sort of magical about it.
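A minimal sketch, assuming a toy setup, of the "north star" point here and the reinforcement learning feedback mentioned earlier in the conversation: the learning loop below is identical in both runs, and only the reward function, the machine's north star, changes; that alone determines which behavior it converges on. The action names and reward values are purely hypothetical.

```python
import random

def learn_policy(reward_fn, actions, episodes=5000, epsilon=0.1):
    """A tiny epsilon-greedy learner. The learning loop never changes;
    only the reward function (the 'north star') differs between runs."""
    value = {a: 0.0 for a in actions}   # running average reward per action
    count = {a: 0 for a in actions}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(actions)      # explore
        else:
            action = max(value, key=value.get)   # exploit current estimate
        reward = reward_fn(action)
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]
    return max(value, key=value.get)

actions = ["maximize_engagement", "deescalate_conflict"]

# North star 1: reward raw attention and engagement only.
def engagement_reward(action):
    return 1.0 if action == "maximize_engagement" else 0.2

# North star 2: fold in a human-feedback signal, e.g. broad disapproval of
# harmful outcomes, so de-escalation pays more than engagement.
def wellbeing_reward(action):
    return 1.0 if action == "deescalate_conflict" else 0.1

print(learn_policy(engagement_reward, actions))  # -> maximize_engagement
print(learn_policy(wellbeing_reward, actions))   # -> deescalate_conflict
```

Real systems learn from human feedback at a vastly larger scale, but the design question is the same one raised in the conversation: whoever chooses the reward, and the data behind it, chooses what the machine ends up optimizing for.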


AI Evolutionary Pressures (02:21:31)

And so I'm just saying AI is going to have very different pressures on it. And if there are emergent phenomena out of the evolutionary pressures that something is put under, AI has been put under very different evolutionary pressures, which means that it's going to have a very different set of ethics, values, north star, et cetera, et cetera. So my whole thing is, can we take control of that? If we can, then we can align it in the way that you're talking about, where we can tell it to find this balance, to look for beauty. I can't remember if this was an interview you gave or in your book, but I heard you talking about, and for people that don't know, this is a true story: we almost had a nuclear disaster because the Russian nuclear system mistook reflections off of cloud cover for the launch of five nuclear missiles from the US. And one guy in Russia was like, something doesn't feel right. If the US was going to nuke us, I think they'd send a lot more than five. I think this is a malfunction. I'm not going to fire back. Thank God. Like, I can't be more grateful for that man. So that is amazing, and it tells you a lot about what the pressures of evolution lead a human being to value, a human being that would run through that checklist. They don't want to kill people. They don't want to die. Like, oh, it's amazing. I'm just saying, I don't think by accident that AI ends up there. I don't think by simply running through our patterns that AI ends up there. I think we have to take control of that. And so while you spoke to my human heart while you were going, and you really moved me, I don't think that's going to be the play with AI. And I don't disagree at all. By the way, I don't disagree at all. I think everything you said, spot on, spot on. We need to take control. We absolutely need to take control. But we're not. And taking control is not just about the code and the control systems. It's also about the data. It's also about the data. Okay. And the data is not just books. The data includes human behavior. Every time you swipe on Instagram, you're telling AI something. We don't disagree at all. I wish, Tom, I wish I had the kill switch. I promise you, if I had the kill switch for AI today, I would switch it off and say, okay, class, come, let's talk about this. Okay. I wish. How far back would you take us? 2018. Wow. So there'd still be a lot of AI at play at that point, but it would just be dumb enough. You're right. Yeah, but it wasn't that autonomous. I'd probably take, I mean, now that you talk about it, honestly, interesting, interesting that you bring this up, I'd probably say, yeah, I mean, there are many things we don't want to give up, 2007 and smartphones, for example. There are many things we don't want to give up on the internet, 1995 onwards.


What's AI most useful for? (02:24:54)

So these are very valuable things. There is no real cut-off point. But by the way, the topic here is not stop developing AI. AI is utopian in every possible way if we develop it properly. But now that we have the insight into what's possible, now that we have people believing that it can get to that, as intelligent as GPT-4 is, maybe if we go back just to 2015, 2018 and halt and say, wait, keep it as it is. And let's talk. Let's put control systems in place. You're spot on. Let's put control systems in place. Let's put a more inclusive data set in place. Okay. Look at the biases that we have and maybe use that as, you know, a way to correct the data set. Okay. And more importantly, let's define the real problems: if we were blessed with a superpower of intelligence, which problems would we want to solve? Is it about trading and making more money? Is that more urgent than climate change? Not sure. It's very urgent if you set your objective within the capitalist system as more money. Okay. By the way, more trading and more money is not progress. More trading and more money is more money for a few individuals. It's not more progress. And I think that's the game. The game is: why are we building what we're building in the first place? 2018. Wow. Okay, I want to get into some of the disruptions. So what are the near-term disruptions? The one that freaks me out, and every time I talk to a parent with a teenage boy, I'm like, your kid... like, sex robots are really going to be a thing for them. Like for real, for real. I worry if I grew up five years from now, I would not graduate from high school. I would just find a sex robot and go into oblivion. What do you think is the reality of that one in particular? And then I'd love to get into some of the... I mean, so whether the word is robot is interesting, but sex alternatives, for sure. I mean, get yourself an Apple Vision Pro or, you know, a Quest 3 and see how realistic your desired other gender is, right? It's, you know, it's just incredible. I mean, again, you know, just think about all of the illusions that we're now unable to decipher from truth, right? Sex happens in the brain at the end of the day. I mean, the physical side of it is not that difficult to simulate. Okay. But if we can convince you that this sex robot is alive, or that the sex experience in a virtual reality headset or an augmented reality headset is real, then there you go. Go a few years further and think of Neuralink and other ways of connecting directly to your nervous system, and why would you need another being in the first place? You know, that's actually quite messy. It's all, you know, it's all signals in your brain that you enjoy companionship and sexuality. And if you really want to take the magic out of it, okay, yeah, it can be simulated. Just like we can now simulate very, very easily how to move muscles and, you know, there are so many ways where you can copy the brain signals that would move your hand in a certain way and just, you know, give it back to your hand and it will move the same way. It's not that complicated. So there is that whole idea of interacting with a totally new form of being. And once again, there is that huge debate of are they sentient or not? Does it really matter if they're simulating sentience so well? Okay.
Does it really matter if the Morgan Freeman talking to you on the screen is actually Morgan Freeman or an AI-generated avatar, if you're convinced that it is Morgan Freeman? This is the whole game. The whole game is we get lost in those conversations of, you know, are they alive? Are they sentient? It doesn't matter; if my brain believes they are, they are. And we're getting there. We're getting there so quickly with companionship in general. I mean, there was a release of ChatGPT on Snapchat. Okay. And kids chat with it as a friend. They don't really, I mean, of course they do, somewhere deep in their mind, distinguish that this is not really a human. But what do they care? The other person on the other side was never a human anyway. It was just a stream of texts and emojis and funny images. Yeah. So, and again, look, I'm an old man. I used a rotary phone in my young years. I coded mainframes. But when you really think about it, as much as I never imagined it and I resisted, you know, should my kids have tablets or not? Should I have free-to-air satellite television at home or not? Every time a new technology was coming out, and eventually we all managed to live with it. But let's just say this is a very significant redesign of society. It's a very significant redesign of love and relationships. And because there is money in it, what would prevent the next dating app from giving you avatars to date? There is money in it. A lot of people will try it. There are more than two million people on Replika. Wow. Given how many deaths of despair there are, do you think that that will ultimately be for better or worse, that AI will be able to provide companionship for anybody that needs it? It's just eerie. I don't know if it's better or worse. I mean, I have a friend that I met for the first time at a concert in the UK, and we just had a wonderful time, and we haven't met since. But we chat all the time on Instagram, or sorry, on WhatsApp or whatever. And it's wonderful.


Is The Small Screen Experience Hurting Us? (02:31:19)

It feels like a wonderful connection. If I didn't know it was a human, but the chat was that same quality, would it improve my human experience a little bit? But has all of that small-screen interaction improved humanity at large? The consensus is it hasn't, that we're more lonely today even though we have 10x more friends on our friends list, okay, that teen suicide is at an all-time high, that female teen suicide is at an all-time high. Obviously the companies that will create those things will position them as, you know, the noble approach to help humanity. But at the end of the day, read Freakonomics. This is the noble approach for the company to make more money. That's it, right? Well, you know, we want to sell it as this is good for humanity, so that we hire more developers and we convince the consumers and we can stand on TED talk stages and give, you know, larger-than-life speeches and so on. But at the end of the day, it's all about making more money. And I think the reality is it's not good for humanity so far. So again, if you extrapolate that chart, it's going to be worse for humanity. Long term? I don't know. Maybe those robots will be much nicer than a girlfriend. I don't know. So I've heard you use the example a lot of times, in fact, you mentioned it in this interview, that you want to give AI the sort of value system of, gosh, it's somewhere in India, where you said people would come to the US, they would get educated to get these incredibly high-paying jobs, wildly intelligent people. You'd ping them to go grab a coffee and they're like, oh, I've moved back to India. Why? To take care of my parents. Like it's just self-evident. So I don't have kids. And one of the things that I've really had to think about is, when I'm 80, that ain't going to be cool. Like, I'm not going to have somebody that's, you know, coming by to check up on me. And I just thought, oh, by the time I'm 80, assuming that the robots don't kill us, I'll be able to wear whatever the Apple Vision Pro of the moment is. And when the robot walks into my room, it will look exactly like the avatar looks through my glasses, and it will be able to care for me. I'll build a relationship with it over time. It will be tailored to my wants and desires. So it'll become the best of the best friends that I could ever hope for, or I could even program it to be like a child to me. And so it is like my kids coming to visit, but coming to visit whenever I want them to. I won't lie. I definitely don't think it's better than kids, and I think that most people should have kids. I want to be very clear. But at the same time, given that I did not have kids, I am very grateful that the odds of something like that existing border on 100%. What do you think about that? Does that further crater population problems? Because people are going to go, oh, Tom's right. I don't need to have kids. I can have AI kids. Can I answer that question with my heart, not my brain? So, the soul that you spoke to... it's the blue pill, red pill. It's the blue pill, red pill. And I think it's a very interesting philosophical question of should Neo have ever taken the red pill? He had a life. And the issue with humanity at large, Tom, is that we have failed, because of how much life has spoiled us, to accept what life gives us. Okay. And in my other work on happiness, I will tell you openly that happiness is not getting what you want. It's not about getting what you want.
It's about loving what you have. Okay. And so the more we fall into that trap of make my life easier, make my life easier, make my life easier, there will always be something in that life that is not easier. Okay. You know, there was that movie, I don't remember what it was, or maybe I just heard of it, where, you know, someone dies, goes to heaven, and then gets like a wish, and basically the wish is, I want to be a winner in the Vegas casino. And every day he walks into the casino and makes money and makes money and makes money. And as he makes money, you know, more girls are interested in him, and da, da, da, da. And then eventually he wakes up one day and says, can I not lose money someday? Like, this is really boring. Okay. Humans, we are who we are. It's not getting more things, it's not the tech companies' approach of let's make things easier all the time, that's ever going to make us happier. You've got to give people the punchline of that episode. It's absolutely phenomenal. Yeah, it is. There is a point at which more progress is hurting us at the community level. It's also hurting the individual's ability to stay healthy when life is not what we want. And life is about to become a lot different than what we want, just because we constantly want more and more and more of life at the end of the day. I just always want to remind people that there is no other way in my mind, I mean, I want to be proven wrong, please prove me wrong, that the separation of power and wealth that is about to come in a world with such a superpower is science-fiction-like, okay? That the challenge to jobs and income and purpose is science-fiction-like. These are very dystopian images of society. What for? Because we want our Vision Pro to create a reality that is not our reality?


Will AI Create A Crisis Of Meaning? (02:37:39)

When you think about... so the biggest disruption that I'm worried about is what you just mentioned, meaning and purpose. How much do you worry about that? Is that much ado about nothing, or as AI begins to replace some jobs, are we really going to have a crisis? And I've heard you say that AI will truly be better than us at everything. And when that happens, how do we deal with it emotionally? Yeah, 100%. Imagine if I'm a better podcaster than you. I never will be, but how would that make you feel? Imagine if it's pretty good. Imagine if every machine is a better podcaster than you. Do you realize that, Tom? You and I both have popular podcasts, right? You realize this? It is not inconceivable that within the next couple of years you'll be interviewing an AI, probably in the next couple of months, by the way. And it's not inconceivable that there will be a better podcaster than you that is an AI in the next couple of years. In the next couple of years. I mean, at the end of the day, your asset is that you're an intelligent person that understands the concepts deeply and asks the right questions. Have you ever tried to go to ChatGPT and say, ask me anything? It asks all the right questions. And it's quite interesting. So the disruption of society, because of how we defined ourselves with our jobs, is about to happen. So if you go to some African family somewhere or some Latin American family in the middle of the Amazon forest or whatever, and you ask that person, what is your purpose, it will be somewhere between raising my kids and enjoying life.


Living Life Is Your Purpose (02:39:17)

Okay. Interestingly, they won't talk about building the next iPhone or making a billion dollars or buying a Bugatti, you know, or whatever. That's not part of their purpose at all. Okay. Our purpose is also not going to be to know more or learn more. And we, being so, you know, consumed by success in the world that we live in, rightly, I think, believe that progress is amazing because it helps all of humanity. Does it really? Okay. But also, we are so consumed by the idea that if I don't have something amazing to create tomorrow, I'm useless, I have no purpose. That doesn't seem to be the case for the majority, probably six and a half of the eight billion people, right, who view the purpose of life as living. That's the purpose of life. To them, at least. I know that sounds really weird in an advanced, high-performing society. But for most humans, the purpose of life is to live. Okay. Now, if that is the purpose of life, then I think AI is the best thing ever. Because if you can offer me the chance... imagine if all I needed to do in the morning is wake up and have a very deep conversation with you and then my other, you know, good-thinking friends, and, you know, hug someone that I love, and I actually can enjoy it. By the way, I'm openly saying, if that is my reality tomorrow, I'm not going to be able to enjoy it. But somehow there seem to be billions of people in the world that don't struggle with that at all, that actually wish for a day where they don't have to go to work to make money to make ends meet and they can spend that time with their loved ones. Maybe that's the purpose of life. Having said that, purpose is not going to go away. It is a very interesting thing that most people forget, okay, which is that for AI to make anything at all, consumers need to have a purchasing ability, a purchasing power, you know, an economic livelihood to buy those products. Otherwise, the whole economy collapses. So yes, we'll go through a period of disruption, but somehow we're going to need to continue to make the GDP grow, okay, and what is the biggest chunk of GDP? Consumerism, right? So somehow there have to be systems in place where humans continue to consume, okay? Even if the wealth is moving up to those who have AI, who have the superpower of the planet, others have to still continue to consume. So we're going to end up in a very interesting place.


Predictions For The Future And Final Thoughts

The future is messy. (02:42:19)

We're going to end up in a place where we struggle with purpose because we still look up and say, I need the iPhone 27, okay, while in reality we have absolutely no ability to get it. Again, this is very frequently viewed in dystopian scenarios and science fiction movies, where you become a number and you have no ability to affect your own future, if you want, or your own present, if you want. But in my view, I think what ends up happening is that the only thing that remains, in my personal view, and I may be wrong on this, but the only thing that remains that still has value and still is uniquely human is connection to humans. So the one thing that I'm investing very deeply in, in this very unusual world that we're coming through, is an ability to connect deeply to other humans and to view that in itself, even if I have achieved nothing, okay, as a purpose of life. I know it sounds really weird, but believe it or not, until now, with all of the followers I have across social media systems, I still answer every single message I can answer myself, okay? And you may think of this as, that's not human connection. It actually often is. I answer in a voice note half of the time. I will answer back in a voice note and I feel I had a tiny micro-speck of a human connection. Sadly, not as deep as if you and I were sitting in the same room, but it's a wonderful connection. I think in the world that we're coming up to, the only asset that will remain is human connection. AI will make music, okay? But I'll still go to a live concert. AI will create art, but I'll still want that art that was created by my daughter, okay? AI will simulate a chat or a conversation or even sex, but ask me, I will still want the messiness of today's sex, okay? I know that for a fact. And I actually think this is a very deep question that everyone needs to ask themselves, because we fell into the trap of social media because we believed we had to go through it, otherwise we'd be left out. I'm now, I think I've never said that in public, but I'm now making those decisions, telling myself, regardless of where the world is going, there are certain things I'm not going to submit to. There are certain things, regardless of what they offer me, where I will try to stay in the real world, in the real messy, emotional, irrational, dirty, full-of-viruses world. Because you know what? I love the messiness of my life, okay? Again, going back to the same point we spoke about.


Free will. (02:45:26)

It's a human's ability. Finding that joy of life is a human's ability to like what you have, messy as it is, not to want things to be better and perfect. And there is a point at which I'll still be out here talking about AI and all of the advancements of it, but I may not be using all of it. I'll use a lot of it, by the way, don't get me wrong. Like you rightly said, there is amazing magic that you can do, okay? But I will always ask myself this question: is what I'm using ethical, healthy and human? Okay? And this is a question that I ask every single individual listening to us. Please do not use unethical AI. Please do not develop unethical AI. Please don't fall into a trap where your AI is going to hurt someone. One of the things I ask of governments is, if something is generated by AI, it needs to be marked as AI, so that humans like me know that this person is not actually real, that this is a machine.


Neon Future (02:46:31)

Just for the sake of us having the tiniest ability of knowing what the truth is. It's interesting. You're starting to get onto a topic that we touched on at the very beginning. So I wore this shirt on purpose for our conversation today, which is from a comic that I wrote, I think four years ago now, called Neon Future. It's a technologically optimistic take on a potential dystopian future, where basically the technology is the good guy. And so rather than the robots taking over, it's the merging with technology that is the road to salvation. And in your book, you paint a picture at the very end where we're sitting in some isolated place in the middle of nowhere. And you say at the beginning of the book, do we end up there because we're hiding from the machines, or do we end up there because the machines have made a utopia and we just get to be in nature as intended, or something? I can't remember the exact phrase that you used. I'm curious. I think the world will bifurcate. I think that some people are going to be like, I don't want to know what's AI. I don't want AI in my life. I don't want high tech. In the comic anyway, what I imagined was a world where people try to revert to the mid-90s. So maybe some basic internet connectivity, but not a bunch of algorithms running everything. Really sort of minimal advanced technology. That felt about right. But I'm curious, do you think that we would be happier as individuals and as a collective if we had a literal return to nature, as in back out of cities, more tribal, more sort of grounded in a my-foot-is-touching-grass kind of way? I don't think we can. So I've actually struggled with that idea for a while. And I just don't have the skill set, believe it or not. This is all I know. I know how to navigate a very fast-paced, very, very, very intellectually based environment that is a big city. And I think COVID was the first point where so many of us started to say, hey, but there is another way. There could be a different life, and technology will make that life more and more possible. I tend to believe that there will be... there was a book, again by Hugo de Garis, called The Artilect War, if you've seen that, and basically it's that division that you nicely describe in a much more interesting and positive way in your comic. But Hugo sort of builds a very, very dystopian society where he says it's not even about the machines. It's about the divide between humans who support the machines and merge with them and humans who refuse, and basically a war building between the two. And I think what will end up happening is that the speed at which things will happen might fool us into accepting how that will change us. So I actually... I do love nature, but believe it or not, I'm starting a retreat for 10 days as we finish this conversation, a silent retreat. And I'm not going anywhere in nature. I have a few beautiful green trees at my place, and that to me is nature enough. Nature is not how many trees are around you. Nature, in my current view, is disconnecting from that enormously fast-paced artificial world that we've picked. If you go back to yourself, sit on a recliner if you want to, in the United States, or on a stone somewhere where you say "Om," that connection to yourself, interestingly, is going back to nature. I do think that there will be, if you want an estimate on real estate prices, I think more and more in the next few years, there will be a shift to getting somewhere away from the potential risk. But that's not only because of AI.
I mean, the potential risk of cities? Yes. I think there is a potential geopolitical and economic risk that's also coming in the next five to 10 years, which seems to me to be almost inevitable. So the interesting side of this whole AI thing is that it's a perfect storm. There is a perfect storm of climate change, geopolitical, economic, and AI. And that perfect storm coming together, as I said, will disrupt a lot of the things we're used to. And if there is a geopolitical challenge, cities might not be the most efficient system that they have been for the last 150 years. They will become less and less efficient because they are in the eye of the storm, if you want. So, for example, I think there will be a shift away from cities simply because the economic income, the income that you make in a city, is becoming quite insufficient for the city. And if there are remote possibilities to work elsewhere using AI, for example, then you by definition could make a lot less money, but spend a lot less as well. To me, there seems to be a shift that will happen, but not everyone will sign up. I think there are quite a few that will jump in deeper. And again, as I said, I follow all of your work on the topic. And I also sometimes sense your hesitation of, you know, is this the absolute best thing that ever happened, so I should jump in and be the absolute master of it? Or, you know, should I run away from it like the plague? And I think both views are worthy. And I think what's happening is that both views will be true. And somehow finding that balance between them is going to be either divided across populations, so some will choose left and some will choose right; or across yourself, where you will have some things that you'll adopt and other things that you won't, this is my choice; or across time, where people will maybe delay using AI until a certain point and then jump in all the way, or vice versa. How are you positioning yourself to respond to the geopolitical risk? Are you divesting any physical stuff? Are you maximizing mobility? Or are you just like, nope, I'm at a point in my life, what comes comes? Again, you know, it's interesting that our conversation now turns a lot more to the human side, after we've had a very interesting conversation on the tech and AI. But I am a lot more in that place that I'm describing for you, which is a place where I'm very happy with whatever I have. I've had a life that blessed me with so much. There were times where I had 16 cars in my garage. And, you know, I don't live that way at all anymore. I have a one-bedroom and, you know, I wear black t-shirts and I give most of my money away, and I'm really, really not interested in any of this anymore. Not because I'm a saint or a monk, but because I actually found more joy in a simpler life. So I'm very minimalist in many ways, which basically means, and this is my point in answering your question, that a lot of divesting from risk comes down to what it is that you need, not what it is that you have. Okay, the reality of the matter is, I can describe to you how I shifted my life from the day I lost my son in 2014 until now, to almost nothing. I mean, I literally spent several years traveling with a suitcase and a carry-on. That's all I owned in life. That's it. And, you know, because I'm an engineer and highly organized, and airlines will allow you a specific number of pounds, if I needed to change a t-shirt, one t-shirt would have to go out.
Okay, if I needed to add protein bars, I might have to carry my shoes on my shoulder. And you know, it's that kind of simpler life that I actually think is the way to go forward. I think one of the more interesting things that would affect our success in geopolitical uncertainty and economic uncertainty is managing the downside, not the upside. It's not to try and beat that race. It's to make that race irrelevant to you. Okay, and how do you do that? You know, if you have assets and you can turn them into assets that appreciate with an economic crisis, that would be an interesting idea. Right? If you have fixed assets that could be part of the geopolitical conflict, maybe these are not a good idea, and so on. Right? Simplifying, not complicating: that, I think, is the answer. And similarly with AI, just to go back to this, I think if we as humanity were to really solve this, and I think, was it you that interviewed Max Tegmark? No, it was another podcast, but you know, the idea is that, you know, if we were to really, really win with AI, Sam Altman says that all the time, it would be amazing if we could all come together and set a few guidelines and say, let's all work in that direction. And that direction is simpler than all of the mess of the arms race that we're in today.


Where to Find Mo (02:56:20)

Well, this was amazing. Where can people follow you for happiness, more wisdom on AI, the whole shebang?


Conclusion

Conclusion (02:56:45)

First of all, I have to say it was amazing, and I love how you pushed back and put your views into it. You really gave me a lot to think about today, honestly. And I'm more informed because of this conversation. So thank you. I think people can find me on MoGawdat.com. They can find me on all social media, under some combination of Mo Gawdat. So it's either mo_gawdat on Instagram, Mo Gawdat on LinkedIn, mgawdat on Twitter, and so on. Gawdat is G-A-W-D-A-T. My favorite place to tell more and more stories is my podcast. It's called Slo Mo, S-L-O-M-O. And in it, I try to take the same very complex concepts but talk about them from a human view, really not the performance or business side or whatever. I just talk about the human side of things. And yeah, I think people should just listen to you all the time and play this episode more and more until you blow up even further than you have and go even further than where you are, because I think you're doing something amazing for all of us. I'm a big fan of your work and I'm really grateful that I was part of it. Very kind, man. I have no doubt that while this is the second one, there will be even more. So grateful for your time. Everybody at home, if you haven't already, be sure to subscribe. And until next time, my friends, be legendary. Take care. Peace. Check out this interview with my friend Peter Diamandis about AI and the future of business and technology. You guys are on something that is just my absolute obsession right now. And you make a very bold claim in your new book. You said that the next billion-dollar company will be founded by three people.

