MEGATHREAT: The Dangers Of AI Are WEIRDER Than You Think! | Yoshua Bengio | Transcription

Transcription for the video titled "MEGATHREAT: The Dangers Of AI Are WEIRDER Than You Think! | Yoshua Bengio".


Note: This transcription is split and grouped by topics and subtopics. All paragraphs are timed to the original video.


Introduction

Intro (00:00)

I want to start with a quote from Ilya Sutskever. For people that don't know, he's a co-founder of OpenAI. He said, "It may be that today's large neural networks are slightly conscious." So I want to pose that question to you: are computers becoming conscious right now? - I think it's a question that doesn't make much sense, because we don't even have a clear, scientific understanding of what conscious means. So based on that, I would say no, there are lots of properties of our consciousness that are missing: what it means to be conscious, in other words, what sort of computations are going on in our brain when we become conscious of something, and how that is related to notions, for example, of self, or relations to others, how thoughts emerge and how they're related to each other. All kinds of clues we have about consciousness, including how it's implemented in neural circuits, are completely missing in large language models. - All right, as we think about consciousness from an evolutionary standpoint, we think about its utility. And for people that haven't heard consciousness defined before, I think the easiest way to explain it is: it feels like something to be a human. And so the question is, does it feel like something to be a machine? And the most important question, I think, as we think about the dangers of AI and what's coming, is: does it matter? Is it additional utility for it to feel like something to be a human or to be a machine? Do you agree that that's going to matter in terms of goal orientation, in terms of quote unquote wanting to do something? As we think about AI, is it gonna take over, are we gonna be dealing with killer robots, or am I totally off base with that? - My group put out a paper just in the last couple of months.


Ai Consciousness And Data Protection

A Theory Anchored in How Brains Compute (01:55)

And we propose a theory that is anchored in how brains compute. So the theory has to do with the dynamical nature of the brain. In fact, you have 80 billion neurons, and their activity is changing over time. The trajectory that your brain goes through as all these neurons change their activity tends to converge towards some configuration when you're becoming conscious. That convergence has mathematical implications that would suggest that what we store in our short-term memory are these thoughts that are discrete but compositional, in other words, think like a short sentence. And it's also something ineffable, which means it's very hard to translate into words. And there are good reasons for that: it would take a huge number of words to translate that trajectory, that state of your brain, which is a very, very high-dimensional object, into words. It's just impossible, essentially. So even though we may communicate with language, we may have a different interpretation of what it means, and in particular a different subjective experience, because our lives have been different. So we've learned different ways of interpreting the world. - Okay, if consciousness is a byproduct of the feeling I get when my particular brain is homing in on a thought, that there is a neural pattern that becomes recognizable, the reason that I think this is important as we think about artificial intelligence potentially becoming killer robots is: my big thing with AI has always been that AI has to want something. It has to want an outcome.


Does AI Have to Want Something? (03:45)

- Not necessarily. - Interesting. Let me finish that sentence and then we'll pick that apart. But if I'm right, and AI has to want something, and that's certainly how humans behave, then I understand the utility of this ineffable feeling that you're talking about that we call consciousness. Because for humans to make a decision and know what direction to go in, we must have emotion. If you selectively damage the region of the brain that controls emotion, people cannot make decisions. They can tell you all the rational reasons why they should eat fish instead of beef or beef instead of fish, but they can't then actually decide and do it. So we need that feeling where this thing is more desirable than that thing. And so my thinking has always been, as it relates to AI, that if AI doesn't want something from an emotional standpoint, if it doesn't feel like anything to be a robot, they will never have the final decision-making capability to care enough to take over the world. And so that's where it's like, if it becomes conscious and it suddenly feels like something to be a robot, then they're gonna be motivated in a direction. That direction could be bad, it could be good, whatever. But they're gonna be motivated in a direction now, if they are like humans. But if they never become conscious, or it never feels like anything, I would think they would be much like they are now, where it's like, well, it could be this, it could be that, if you've ever talked to ChatGPT, which of course you have.


Subjective experience (05:18)

But that feels like it would sort of be a perpetual state of affairs. What might I be getting wrong? - My belief is that you're talking about two things that are actually quite separate; let me make that clearer. So wanting something, having goals, and getting some kind of internal or external reward for achieving those goals is something that we already do in machine learning. Reinforcement learning is all based on this, and you don't need subjective experience for that. So these are really distinct capabilities.
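Bengio's point, that goals plus rewards already exist in machine learning with no subjective experience involved, is concrete in reinforcement learning code. A minimal sketch of tabular Q-learning on a hypothetical one-dimensional corridor (the environment and all constants here are illustrative, not from the conversation):

```python
# A minimal sketch of reinforcement learning: an agent that "wants"
# something only in the sense of maximizing a numeric reward signal.
import random

N_STATES = 6            # corridor cells 0..5; the reward sits at cell 5
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q[state][action] estimates long-term reward. This table is the
# agent's entire "motivation": no feelings, no subjective experience.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly pick the action with the best estimate.
        a = random.randrange(2) if random.random() < EPS else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # external reward signal
        # Standard Q-learning update: nudge the estimate toward the
        # observed reward plus discounted future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy heads right in every cell:
# goal-directed behavior produced purely by reward maximization.
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
```

Nothing in this loop feels anything; "wanting" here is just a table of numbers being maximized, which is the distinction Bengio is drawing.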


Extreme reactions to AI (06:02)

Subjective experience is related to the thoughts that we discussed earlier. We could have machines that have something like thoughts, and potentially, if we implement it similarly to how it is in our brain, they might have subjective experience. It doesn't mean that they need to have goals. I think we can build machines that have these capabilities. In other words, they can help us solve problems by telling us: what is the problem? What is a good scientific understanding of what is going on, and what might be better solutions? But they're not trying to achieve anything, except being as truthful as possible to the data, what they know, what they have observed. - What then is the disaster scenario of something that can pass the Turing test? You're worried enough that you're saying we need to treat this the way that we treat anything else dangerous, whether that's climate change or whether that's nuclear weapons. To put it on that level, just at the Turing test level, give me the disaster scenario. - We already have trolls, right, that are trying to influence people on the internet, on social media. But they're humans, and you can't scale the number of trolls very easily; it would be too expensive, and maybe people would not wanna do it even if you paid them. But you can scale AI with just more compute power. So you could have AI trolls. I mean, I think there already exist AI trolls, but they're stupid. It's easy to interact with them a little bit and see they're not human. But they're getting better, and so on. And so now we get to the point where you're gonna have AI trolls that essentially invade our social media, or even our email. And in fact, you could do better than that: it could be personalized. So right now, it's a little bit difficult for a human troll to have a good personal understanding of every person that they target, to know their history. It would just take too much time for them to study you, multiplied by a billion people. But an AI system that could just have access to all of the interactions that you've had, the videos where you spoke, the text that's available on the internet, could know you a lot better, right? So how could that be used?


Imagine WhatsApp and Emails Where You Can't Be Sure Who You're Chatting To (08:27)

Well, it could be used to hit on the right buttons for you, to change your political opinion on something. It could be used to even fool you into thinking you're in a conversation with someone you know, because they can know you and they can know your friend, and they can impersonate your friend, at least in text. So I don't think we have these things yet, but we're just one small step away from having these capabilities. - As I was thinking through the same problem, I was thinking, here's a terrifying example: dear parents, AI is gonna reach out to you, mimicking your child, asking for money. And so it's not a Nigerian prince anymore, it's "Mom, something happened at school," whatever; it talks in their language, it references things that you don't think they could have possibly put out there. But of course, if the AI is good at image recognition and it knows that you guys were on a beach seven years ago, it could replicate things in the form of a memory that you would never believe anybody else could possibly know. But we, and especially kids, leak so much data out into social media that, to your point, the AI would be able to have so much context. So at my last company, we got socially engineered, and they convinced us to wire 50 grand. And when we went back and looked at the emails back and forth between our finance department and the COO, it was so believable. It was obviously a person, but it was writing like they would write to each other. And I was really flabbergasted. And so to think that a human could do that: to your point, it's very hard for them to get the amount of context, it just takes so much time. But when AI is doing it, and it can churn through everything those two people had ever said to each other online, that gets really scary really fast. Okay, so if we did this pause, the letter that you guys wrote, and we paused for six months, and we were gonna hold a convention in that time, and all governments were there, Yoshua, and you're up on stage, and your job isn't to tell us what to do, but it's to open the conversation in the right place: where would you open that conversation? What do you want us focused on? I'm guessing it's like, we need to limit this, or something along those lines. Where do you begin? - I don't know for sure exactly how these technologies could be used. You and I can make up things; maybe some are gonna be easier than we thought, some are gonna be harder. But there's so much uncertainty about how bad it can turn that we need to be prudent. So prudence here is something that we need to bring into our decision-making: individually, because we're gonna be facing potentially these attacks, and as nations, at the planet level. Yeah, that would be my main message: the technology has reached a point where it can be very damaging, and there's too much unknown about how this can happen, and when it will happen. Even the strongest experts, even the people who built the latest systems, can't tell you. It means that we have to get our act together, and most of this is gonna come from governments. So we need those people to get quickly educated, and we need to also have scholars, experts, not just AI experts, but, you know, social scientists, legal scholars, psychologists, because there is the psychology of how this could be used, how to exploit people's weaknesses, in order to do the work, the research, and also figure out what sort of precautions we need. So there are very simple things that we can do very quickly.
For example, watermarks and content origin display. So watermarks just means that one company, say OpenAI, that puts out their software, could easily put out another piece of software that anybody could run, that can test with 99.99% confidence whether a text came from their system or not. Humans wouldn't see the difference, but for a machine that has the right code, it's very easy. If their system is instrumented properly, in other words, they kind of sneak in some bits of information that you can't notice, then to a human there is no visible difference, but the chances of having this particular sequence of words by accident would be very, very unlikely, and would go to zero quickly as the length of the message increases. So watermarks are easy to put in, technically speaking, and they would say: this text comes from this company, this version, whatever. So a piece of software running on your computer would be able to say, oh, by the way, the text that you gave me to read is from this company. And then we need that information to be displayed, 'cause of course, being able to detect that it's coming from an AI system is one thing, but when you have a user interface, displaying it should also be mandatory. If I'm on social media in particular, and I'm interacting with some character out there online, I need to know that that character is not human. And so that must be displayed. If I get a picture or a video or a text in an email, I need my email software to tell me: warning, this is coming from, you know, OpenAI GPT 5.6. - Okay, so I'm gonna push back with the obvious thing, and I think I won't even have to play devil's advocate here. Maybe I'm not more pessimistic than you, but I am in the camp of: the toothpaste is out of the tube, and there's no getting it back in.
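The watermarking scheme Bengio sketches above maps closely onto published proposals such as the "green list" watermark of Kirchenbauer et al. (2023). Here is a minimal sketch of the detection side, assuming the generator secretly biased its word choices using a shared key; the hash construction, key, and constants are illustrative, not any company's actual scheme:

```python
# Sketch of statistical watermark detection for generated text.
# Assumption: the generator nudged each word toward a pseudorandom
# "green list" derived from the previous word and a secret key.
# The detector, given the same key, counts green words: unwatermarked
# text hits ~50%, watermarked text far more, and confidence grows
# with message length (a one-sided binomial test).
import hashlib
import math

SECRET_KEY = "shared-secret"   # illustrative; real schemes guard this

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign ~half of all words to the green list,
    seeded by the previous word, so the split looks random to humans."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def detect(text: str) -> float:
    """Return a z-score: how many standard deviations the green
    fraction sits above the 50% expected for unwatermarked text."""
    words = text.lower().split()
    n = len(words) - 1
    if n <= 0:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

On ordinary text the z-score hovers near zero; on text generated with the matching green-list bias it grows roughly with the square root of the length, which is the "goes to zero quickly as the message gets longer" effect Bengio describes, seen from the detector's side.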


Why Debate if We Cannot Stop ChatGPT? (14:58)

So, as a way to move all this forward, let's, you and I, actually debate the reality of all this. So I'm at the governmental meeting, you start saying that, and my immediate reaction is, "Yoshua, China is going to develop this if we don't. If we put the brakes on this, they're not going to." And this is a winner-take-all scenario.


Best Way to Handle Propaganda (15:23)

We cannot allow ourselves to get behind. What say you? - It's a good concern. And that's why we have to get China around the table as well, and Russia, and all the countries that may have the capability to do this. - But Russia right now feels hemmed into a corner. Putin is literally intimating that he's going to use nuclear weapons. There's no universe... like, we've already tried financial sanctions; that's caused them to, you know, start trading in non-dollar denominations. They're grouping up with China, Brazil, South Africa, India; they don't care. They're going to use this to their advantage. In fact, even bluffing would be a way smarter play for him, to say, "No, no, no, we're going to keep doing it." Even if he wasn't, even if they're like backwaters, it would be wise of him to say, "No. In fact, if you don't immediately back off, we're going to unleash a troll farm the likes of which you've never seen; we're going to completely destroy democracy in the Western world." - Yeah, so first of all, we can protect ourselves without necessarily hampering the research.


Protecting Research (16:39)

So I think people misread the letter. It never says stop AI research. It's mostly about these very large systems that can be deployed to the public and then used potentially in the various ways that we have to be careful with. So it's a tiny, tiny sliver of the whole thing that we're doing. Second, in the short term, we do have to protect the public in our societies against things like trolls and cyberattacks that can exploit AI.


Why Authoritarian Governments Are Scared Too (17:13)

Third, I don't know; I'm not in my comfort zone here in terms of diplomacy, and- - You and me both, but it's fun. - But my guess is that the authoritarian governments are probably just as scared of this technology, but for different reasons. So why are they scared?


Global Coordination, Like the Nuclear Treaties (17:51)

Because the same AI systems that could perturb our democracies could also challenge their power. In other words, imagine AI trolls being able to defeat the protections of the Chinese firewall, interacting with people and putting democratic ideas in their heads in China. Well, that would not be something that the government would like to see. And in fact, I think China has been the fastest-moving on regulation, not for the same reasons as us. So they are afraid of this. So I think they will come to the table, but again, it's not my specialty. But at least there's a chance that they might be willing to talk. And remember, the nuclear treaties were worked on and signed right in the middle of the Cold War. So long as each party recognizes that they might have something worse to lose by not entering those discussions, I think there's a chance we can have global coordination. And we have to work on it, even if it's hard. - Yeah, I'm not so worried about the hard part as I am about what the natural reaction is when you have a very difficult, dangerous thing. And history tells me that we don't come to the table to sign the non-proliferation agreement until we have proliferated so far, and we have so many missiles pointed at each other, that we finally go, okay, let's not let this go any further, and let's not let it spread to other countries. Like, we're perfectly fine being in a stalemate with each other, and I worry that a similar kind of reaction will happen here. But I take your point that this is not an area where either of us is an expert, as much as I find it utterly fascinating to pursue that line of thought. But I wanna now go back to what we would do to actually begin to limit this stuff.


Adversarial Approaches and Plan B (19:58)

So we need to get people thinking, hey, this is dangerous; that's clear. But the watermark thing, to me, works only for people that agree that they're going to do it. So is there a way, instead of trying to get people to not do things, to build defensive things that work even when somebody's trying to hack the system? I doubt you know this about me, but we're building a video game. And one of the things you have to think about with a game is that people will attempt to hack it. Like, that just goes without saying. So rather than me trying to ask everybody, hey, please don't hack video games. Literally, it's the dumbest thing ever for gamers to hack games; it's stupid: you end up ruining the fun, that game will die out, and then people will have to invent a whole new game. Far better for everybody to just agree that we're not going to hack it. But human nature is what it is, and that's never going to work. So what they do is create an adversarial approach, where it's like: I find the best hackers in the world to come in and try to hack this game, and then I figure out what I would have to do to defeat that. So what would an adversarial setup look like in AI, where someone's trying not to watermark, but I can still figure out where that content came from, or there's a signature or something like that we could identify? - Watermarks are the easy thing, and I agree they will only be done by the legit actors. People have already been working on machine learning trained to detect text or images that come from other machine-learning systems. But these detectors are not nearly as good. Still, yes, this has already been developed, and presumably there's going to be a lot more effort in that direction, and we need that as plan B.


Plan A (22:30)

Plan A is already to reduce the access; right now, it's just too easy. You can get an API and just write on top of ChatGPT. So yeah, we should do all these things. By the way, the adversarial approach that you're talking about, from what I hear and read, is also what OpenAI has been doing, and what companies like Google have been doing. They hire people to try to break their system as much as they can. That's exactly what they're doing: red teams. And that's good; we need to continue doing that. But maybe we need to make sure the guidelines for doing that are shared across the board, and we ensure all companies have that sort of red-teaming before a system is released to the public, for example. - Yeah, I want to add one thing. - Please.


AI Development And Ethics

"AI Bill in Canada" (23:38)

- Because you asked, at the beginning of your question, what we can do in the short term. So Canada has a bill that is going to pass into law, probably in the spring, that may be the first one around the world on AI. And it has a nice feature, which hopefully other countries will adopt, which is that the law itself is fairly simple.


Principles in Law, Details in Regulation (23:55)

It states a number of principles, and then it leaves the details of what exactly needs to be enforced to regulation. And the reason this is good is because it's much easier for governments to change regulation than to change the law. You don't need to go back to parliament. And so you could have a much more adaptive legislative system, including the law and the regulation. And that's going to be super important, because the nefarious uses that we didn't think about are going to come up, and we need to react quickly. If we have to go back to parliament and it's going to take two years, no, this is not going to work, right? We need to have a system that's very adaptive in terms of legislation. - Yeah, that is inevitable.


How Did We Get Caught Off Guard? (25:07)

That brings me back to: we're in this situation because I think people are surprised at how rapidly AI is advancing. How did we get caught off guard? Someone like you has been in this for so long; you knew the rate of change. What happened? Is it just that we could not anticipate, as we scaled the data up, how fast the machine would learn? We were surprised that the machine did X quickly: what was X? - It passed the Turing test. In other words, it manipulates language well enough that it can fool us. - The experience I had of-- - Sorry, what I'm asking is, what allowed it to do that in a way that caught us off guard? - Well, that's interesting, but it didn't require any new science. It's essentially scale that did it. - Do you think consciousness is a function of scale? No, right? - No, I don't think so. I mean, some people think so; there are theories around that. I think scale is probably useful, but there are some very specific features of how we become conscious that would work even at smaller scales. So yeah, scale is important simply because the job that we're asking these computers to do when they answer questions is computationally very demanding. And this comes from... so I have a blog post where I talk about the large language models and some of their limitations. The issue here is that if you take almost any problem in computer science that you can write down properly, like "optimize this or that" or "find the answer to this or that question," for almost all of these questions the optimal solution is intractable, meaning it would take an exponential amount of computation compared with how big the question is. And so, if you want the optimal neural net that can answer questions, that can reason properly and so on, it is exponentially big, which means we can't have it. But the bigger our neural net, the better it approximates this. So there's a sense in which bigger is better, because of that, even with problems that look simple. As an example, to illustrate what I mean, consider the problem of playing the game of Go. The rules of the game are fairly simple: you can write a few lines of code that check the rules and tell you how many points you get and so on. But the neural net that can play Go and really win, in other words follow the rules and exploit them in order to figure out what the optimal move is: the neural nets we have now that play better than humans are huge. And it's just a property of many computer science problems that are like that: even when the knowledge needed to describe the problem is small, the size of the machine that's necessary to answer questions and take decisions that are optimal is very big. So I think that's the reason why we need big neural nets. That's why we have a big brain, even if the amount of knowledge that's involved is small. Now, in addition, the amount of knowledge that's necessary to understand the world around us is also big. But I think the biggest part of what our brain does is inference, this is the technical term: given knowledge, how do you answer questions properly, optimize, or take decisions that are good given that knowledge? - Okay. Is inference the ability to apply a pattern that I saw in the past to a new, novel problem? - Yes, that's part of inference. In classical AI, things were very clear between knowledge and inference.
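To make the Go example concrete, here is a quick back-of-the-envelope calculation; the branching factor and game length below are standard rough figures, not numbers from the conversation:

```python
import math

# The rules of Go are tiny; the search space is not.
# A 19x19 board has 361 points, each empty, black, or white.
board_points = 19 * 19
positions_upper_bound = 3 ** board_points          # ~1.7e172 (most illegal)
print(f"{positions_upper_bound:.2e} board configurations")

# With roughly 250 legal moves per turn and games around 150 moves
# long, exhaustive lookahead would cost about 250**150 evaluations.
log10_tree = 150 * math.log10(250)
print(f"~10^{log10_tree:.0f} move sequences to search exhaustively")

# Both numbers dwarf the ~10^80 atoms in the observable universe,
# which is why optimal play is intractable and huge learned
# approximators (big neural nets) are used instead.
```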
So knowledge was people having typed a bunch of rules and facts. So the knowledge was not learned, right? It was handcrafted. And inference was: you have some search procedure that looks at how to combine these pieces of knowledge, these facts and rules, in order to answer a question. And we know that's NP-hard; that's like exponentially hard. And so we use approximations; it's never perfect, and so on. But people didn't use neural nets in those days. They used classical computer science algorithms that try to approximate this, like A*. Now we have neural nets, and neural nets can do this approximation, because they can be trained to do a really good job at searching for good answers to questions given that piece of knowledge. - How does it define good? I always assumed that what AI was doing was trying to guess, effectively, the next letter or the next word, based on all the patterns that it had seen. So it's like: I've seen questions like this before, and here are the answers that have been rewarded, in that a human has told me that it likes this answer better than that answer. And the pattern recognition of the machine, combined with the human ranking those responses from the machine, gives us the way that the AI goes from that question to this answer. Am I missing something? - Yeah, I mean, what you're saying makes sense, but there's also a lot of knowledge we have that can be distilled, for example, the way we do it for education: we do it through books and encyclopedias. So it's not all the knowledge we have, but a lot of it. So let me try to put it this way: Wikipedia is way smaller than your brain. - Smaller than my brain? - Yeah, smaller in the number of bits that are needed to encode it, compared with the number of bits that are needed to encode all the synaptic weights in your brain. - Got it.
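Rough numbers make the Wikipedia-versus-brain comparison vivid. The figures below are ballpark assumptions (tens of gigabytes of raw English Wikipedia text, on the order of 10^14 synapses), not measurements:

```python
# Back-of-envelope: bits needed to encode Wikipedia's text vs. the
# brain's synaptic weights. All figures are rough assumptions.
wikipedia_bytes = 25e9        # ~25 GB of raw English Wikipedia text
wikipedia_bits = wikipedia_bytes * 8          # ~2e11 bits

synapses = 1e14               # ~100 trillion synapses (common estimate)
bits_per_synapse = 5          # a few bits of effective precision each
brain_bits = synapses * bits_per_synapse      # ~5e14 bits

print(f"Wikipedia text  : ~{wikipedia_bits:.0e} bits")
print(f"Synaptic weights: ~{brain_bits:.0e} bits")
print(f"Ratio           : ~{brain_bits / wikipedia_bits:,.0f}x larger")
```

Even with these conservative assumptions, the brain's parameter count comes out thousands of times larger than the text it might be asked to answer questions about, which is Bengio's point: the machine that uses the knowledge is far bigger than the knowledge itself.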


Why Scale Is Important (31:46)

Yep. Huge orders of magnitude greater. And this is if we were just talking about this kind of knowledge, which is not everything, obviously; physical intuitions and so on are another kind that we can't put in Wikipedia.


Why You Need a Big Brain (31:55)

But if we just talk about that kind of knowledge, you would want a very big brain just to be able to answer questions that are consistent with that knowledge. That's what I meant.


How Large Language Models Are Trained (32:10)

Right now, that's not the way we train our large language models, by the way. The only thing we can do to train them is look at text that presumably is more or less consistent with that knowledge. And that's not even the case: people are not truthful, and there are all kinds of things. But even if it were, by imitating that text, like predicting the next word and so on, we implicitly encapsulate the underlying knowledge, which, let's say, is Wikipedia. So again, the argument is: scale is important because many problems require doing computation that is intractable if you want to really get the right answer.
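The training signal Bengio describes, predicting the next token and thereby implicitly absorbing whatever knowledge the text carries, can be demonstrated at toy scale. A minimal character-level bigram model, a deliberately tiny stand-in for a real language model:

```python
# Minimal next-token prediction: count bigram frequencies, then
# predict each character from the one before it. Real LLMs replace
# this count table with a huge neural net, but the training signal
# is the same: make the next token likely under the data.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat. the dog sat on the log. "  # toy "Wikipedia"

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1        # knowledge is absorbed implicitly here

def sample_next(prev: str) -> str:
    options = counts[prev]
    return random.choices(list(options), weights=options.values())[0]

# Generate text by repeatedly predicting the next character.
out = "t"
for _ in range(40):
    out += sample_next(out[-1])
print(out)   # locally plausible text, learned purely from prediction
```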


Neural Networks And Big Data

What Do Contemporary Neural Networks Look Like? (32:58)

And so we need these really large neural nets to do a good job of approximating how to compute the answer. - Okay. So now I'm going to have to get into the nitty-gritty a little bit. This will be really 101 for you, but it might be, certainly will be, instructive for me and hopefully many others. To say that a neural network is large: what do we mean? Are we just daisy-chaining GPUs? CPUs? When I think about the brain, the brain is broken into these hyper-specialized regions. So for instance, in vision, this part of vision tracks motion, and I can selectively damage the motion center of your brain and now you see everything in snapshots. There are things that deal with corners, and so you can selectively damage the part of your brain that detects corners. There are things that detect straight lines, curved lines. It's all these hyper-specific little bits and pieces. And my understanding of a neural network is that it isn't that hyper-specialized; it's a lot of the same thing over and over and over. Help me understand what it means to be a large neural network. - Okay, so you're right that the brain seems to have a very specialized and modular structure, as in different parts of cortex; especially when we look at what neurons do in different parts, we see that they're rather specialized. It's not perfectly easy to identify what a given neuron does, but we get a sense of what it's about. And it's also true of our large neural nets, but to a lesser extent. So people have been trying to give a name to what each particular unit in a large neural net is doing. And we can do that by checking when it turns on, what kind of input was present. So if we look at a lot of the things that make this particular unit turn on, and we ask humans, you know, what's the category that this belongs to, then we are often able to give a name. That has been done a lot for image-processing neural nets, because sometimes it's easy: you can say, well, it's this part of the image and this kind of object. For text, I know there are some papers doing that. Now, I do think that our brain is more modular, with more specialization, than what we're currently seeing. By the way, cortex, the part of your brain that is thought to be more modern in evolution and really essential for advanced cognitive abilities, is a uniform architecture: it's all the same texture, all the same kind of units repeated all over the place. And depending on your experience, or the kinds of brain accidents that you may have, a different part of cortex will latch onto a different job. So these are more or less replaceable pieces of hardware, like our neural nets. There are other pieces in the brain that are not cortex that seem to be much more specialized, like the hippocampus and so on. I'm at the edges of my knowledge now. - That was certainly useful information, but I want to push a little bit farther. What I'm trying to wrap my head around is: I have a vague understanding of how the brain works, very specialized. I do not understand how we scale a neural network. Okay, I was going to say each node, and then I realized that to me a node is either a GPU or a CPU, but I actually don't know if that's true. So first I would need to understand what a node is inside of a neural net, and then how the different parts of the neural net are programmed to do a specialized thing. We'll start there. - Okay, okay, all right.
I'm going to start with the end. They're not programmed to do a specialized thing. That emerges through learning.


Emergent Specialization and Hardware (37:38)

Whoa, whoa, whoa, whoa. - That's true of the brain, and that's true of neural nets. You don't tell this part of the neural net, you'll be responsible for vision, and this part, you'll be responsible for language. - But that happens? - Yes, you get specialization that happens. - Whoa. - Because they collaborate to solve the problem. They're different pieces. That's how learning works. Even a simple neural net from 1990 does that. - How complex is that underlying code? Is it really basic, but somehow has these incredibly complex emergent properties, or is it incredibly sophisticated? - The first. - Whoa. - It's very simple. The complexity emerges because you have all of these degrees of freedom, and you have a powerful way to train each of these degrees of freedom, these synaptic weights, so that collectively they optimize what you want, which is like predicting the piece of text that comes next properly. But let me go back to the hardware question. The hardware we use currently to train our artificial neural nets is very different from the brain, very, very different. We don't know how to build hardware that would be as efficient as the brain in terms of the energy and overall compute that we can squeeze into a few watts. And, we should say, lots of people are trying to figure out how to build circuits that would be as efficient computationally as the brain. Another difference is that the brain is highly decentralized: at the level of neurons, and we've got like 80 billion of them, memory and computation are decentralized. The traditional CPU has memory completely separated from compute, and you have a bus that transfers information from one to the other to do the computation in the little CPU. That's very different from how the brain is organized, where every neuron has a bit of memory and a bit of compute. Now, people doing hardware have been working to build chips that would be more decentralized and more like the brain, and there are several companies doing these sorts of things. They haven't yet reached the point where they can beat a GPU. So a GPU is a kind of hybrid thing, where it's really the same CPU pattern, but instead of having one CPU, you've got 5,000, and they each have their little memory, but there's also some shared memory. It was designed initially for graphics, but it turned out that for many of the kinds of neural nets that we wanted to build, it was a pretty good computational architecture. But it has its own limitations. Energy-wise, it's like a huge waste compared to the brain, as I said earlier. And a large part of that waste is because you have all that traffic between the places that contain memory and the places that do compute. So it's much more parallel than the good old CPU, but much less parallel than the brain.
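Bengio's claim that the underlying code is "very simple" can be taken literally. A complete neural net and its training loop fit in a couple dozen lines; the toy task (XOR) and all constants here are illustrative:

```python
# "The underlying code is very simple": a complete neural net and its
# training loop in ~25 lines of numpy. Everything interesting (the
# learned behavior) lives in the weights, not in this code.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, a function no single linear layer can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: these are the "degrees of freedom".
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the prediction.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce prediction error.
    grad_out = p - y                           # dLoss/dlogits (cross-entropy)
    grad_h = (grad_out @ W2.T) * (1 - h**2)    # chain rule through tanh
    W2 -= 0.1 * h.T @ grad_out
    b2 -= 0.1 * grad_out.sum(0)
    W1 -= 0.1 * X.T @ grad_h
    b1 -= 0.1 * grad_h.sum(0)

print(np.round(p.ravel(), 2))   # should print approximately [0, 1, 1, 0]
```

Nothing in this script assigns jobs to particular units; whatever division of labor the eight hidden units end up with is discovered by the training loop, which is the emergence Bengio is pointing at.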


Our Intuition Machines (40:57)

You're so deep in this, it probably doesn't freak you out as much as it freaks me out. But as I really start to try to wrap my head around what is happening, this feels deeply mysterious. Now, I've heard people say that one of the things freaking them out, and this is people deep, deep, deep in AI, one of the things that they find unnerving is that they don't understand what the neural network is doing. They don't understand how it came up with a given answer. How is that possible? - It's just a fundamental property of systems that learn. And they learn not a set of simple recipes, like you would learn a recipe in your kitchen, but something very complicated that cannot be reduced to a few formulas, like how to walk, or how to speak, or how to translate, or how to go from speech to a sequence of words. These tasks cannot be easily done by traditional programming. But if you take a machine that can approximate any function to some degree of precision, so a big neural net, and you tweak each of the parameters of that machine billions of times, it can learn to do what you want. But then you don't really understand how it does it. You understand the code that specifies how this machine computes, but the actual computation it does depends on what it has learned, which is based on lots of experience. So maybe a good analogy is our own intuition. These machines are like intuition machines. What I mean is this: you know how to act in different contexts, like, for example, how to climb stairs, but you can't explain it to a machine. You can't write a program; people have tried. One reason is that it's all happening in the unconscious, right? But there's more: the reason it's all happening in the unconscious is that it's just too big. It's a very, very complicated program that's running in your brain, and the only reasonable way that you can acquire that skill is by trial and error and practice, plus, you know, maybe some evolutionary pressure that initializes your weights close to something that's needed to learn to walk. So things that we do intuitively, that need a lot of practice, are exactly like what those machines are learning.


Why We Can't Explain a Particular Answer (44:18)

They can't explain it, and we can't explain our own intuition. We just know this is how we should do it. And the knowledge is so complex that we cannot put it in a few formulas or a few sentences. That's the nature of things: there are very complicated things that can't be easily put into verbalizable form, but they can still be discovered, acquired through learning, through practice, through repetition, doing the exercise again and again. I have a grandson who's been learning to walk in the last few months. You know, he was stumbling a lot, and going again and again and again, and after a few months now he's pretty good. He's not like us yet, but it's months and months of practice, getting better gradually through lots and lots of practice. That's how we train those neural nets. And that's why they can't explain why they give a particular answer. They're just like, "Well, I know this is the answer, but I can't explain it to you, because it's too complicated: it would be like 500 billion weights that really are the explanation. Do you want those 500 billion weights? What are you going to do with that?" - Okay. Let's start teasing this apart. So one of the more interesting things in what you just said highlights the difference between what humans do and what machines do, and why I think there will need to be a breakthrough. And I always love saying this stuff in front of experts, so you can strike me down if you think I'm crazy. But I think one of the reasons that a breakthrough is going to be required, and that we're not just going to be able to scale our way to artificial general intelligence... And I've completely heard you that AI passing a Turing test opens up a Pandora's box that is utterly terrifying in terms of its ability to dysregulate humans' ability to function well as a hive herd. But the reason I think there's going to need to be a breakthrough is that the reason your grandson is able to get better over time isn't just the calculus of balance. It's that by doing it, he's building stabilizing muscles. And so his muscles are getting stronger in areas that they didn't need to be strong in when he was crawling. So you get this biological feedback loop of: oh, I see what I'm going to have to do. Part of the repetition isn't just locking it into my brain; part of the repetition is that I'm going to need to develop the muscle fibers and the strength. Now, how much of that is mediated by the brain, and a part of the brain that's subconscious, is a huge question, and certainly gets to the complexity and your 500 billion parameters and all that. The other part is that his brain is reconfiguring neuronal connections, and it's making some of those connections more efficient through a process called myelination. So it's wrapping fatty tissue to sheathe different connections, just like an electrician would do. And now it's got this incredible biological feedback loop of: I have a desire, I'm goal-oriented, I want to do this thing, and this thing is walk. Now, whether the interplay is I want to walk because I see my parents walk, I see grandpa walking, I want to do that thing, or something in me tells me being over there is better than being here, and so I actually want to locomote to get there, and I would figure this out even if I never saw anybody move, which is probably more likely, given that babies start crawling and they don't see people crawl; they just have a desire to locomote somewhere.
Then, going back to my initial thing: I think machines are going to need to have desire, a reason that they want to cross the road, if we want to get to human-level intelligence. But let me not fractal out too much here. So, okay, we have this biological feedback loop. You're not going to get that with a neural network. No matter how much you scale it up, it doesn't have the ability to change itself yet. Now, maybe it will, and maybe it could architect a new chip or something once it has the ability to manipulate 3D printers or what have you. But for now, it's stuck with a physical configuration of chips, unlike a human, which can morph everything from muscles to brain matter. It's stuck with a configuration. But, and this feels like the very interesting thing that we've gotten right so far: I have figured out the pieces that I need, whether that's GPUs or the code or both. I figured out the pieces that I need for that configuration to learn in a very emergent way. So I set up the pieces, and then I give it a thing I want it to learn and a quote-unquote reward for doing so, and then a massive amount of emergent behavior comes out of that. But it's always going to be limited in a way that human intelligence is not, because of the biological feedback loop. Okay.


Do Machines Need a Biological Feedback Loop? (49:28)

Now that I've set that stage, do you agree that machines will need something that imitates that biological feedback loop, meaning "I need efficiency here that I did not have a moment ago," for them to continue to get good at a thing? And that without that, we're sort of stuck at the highly, and potentially destructively, capable-of-manipulating-language-and-images stage, but that's it? - So actually, current systems already do what you say. I mean, they don't have the biological framework, but they do learn from practice and mistakes. - But can they reconfigure their architecture to get better at it? - They don't need to change the chips. They just need to change the content of the memory in those chips, the weights. So why is the biological loop different? It's different because it has been designed by evolution, whereas we are designing these things using our own means. But fundamentally, let me step back here a little bit to state something important as a kind of starting point. Bodies are machines; they are biological machines. Cells are machines; they are biological machines. We don't fully understand them. We know they're full of feedback loops. We know a lot of biology, but we don't understand the full thing. But we know it's just matter interacting and exchanging information. So yeah, it's just a different kind of machine. Now, some people think, in particular when people are discussing consciousness, because consciousness looks mysterious, that, well, it's got to be something that's based on biology; otherwise, how could it ever be in machines? Well, I disagree completely with that, because it's just information processing. Now, the kind of information processing going on in our bodies and our brains and so on may have some particular attributes that we still don't have in our current machines. But the specific hardware just needs to have enough power. One of the great starting points of computer science, by people like Turing and von Neumann in the early days of computing, is the realization, with, for example, the Turing machine, that you can decouple the hardware from the software: the same outward-facing behavior can be achieved by just changing the software, so long as the hardware is sufficiently powerful. And Turing showed that you need only very, very simple hardware, and then you can do any computation. That's like computer science 101. So that would suggest that there is no reason why we couldn't, in the future, build machines that have the same capabilities as we do. Now, the current systems are still missing a bunch of things. We talked about walking, and why is it that we don't have robots that can walk as well as humans? - Have you seen Boston Dynamics? That's freakish. They can parkour. - They're not as good as humans, by, you know, a big gap. But yeah, I've seen them. But I think the issue is simply that we have tons more data available to train language models than we have for training robots. It's hard to create the training data for a robot, because it's in the physical world. You can't just replicate a million robots. But eventually people will do it, or be able to do a good enough job with simulation. There's a lot of work going in that direction. Yeah. So I kind of disagree with your conclusion. - So going back: the reason that we don't have robots that can walk is just that we haven't been able to give them enough data, or a good enough simulation. Okay.
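The hardware/software decoupling Bengio attributes to Turing is easy to demonstrate: a few lines of fixed "hardware" can run any transition table you hand it. A minimal sketch with a hypothetical example program:

```python
# A tiny Turing machine interpreter: the "hardware" (this function)
# never changes; only the transition table (the "software") does.
from collections import defaultdict

def run(program, tape, state="start", max_steps=10_000):
    """program: {(state, symbol): (new_symbol, move, new_state)}"""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" = blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        new_symbol, move, state = program[(state, cells[head])]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# "Software" #1: flip every bit, then halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "10110"))   # -> 01001_

# Swapping in a different table gives entirely different behavior on
# identical hardware: Turing's hardware/software decoupling in action.
```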


The Tyranny of Bigness (54:10)

You're saying the point of that is there's nothing fundamentally missing from the architecture that the AI is running on. It's just a modeling problem. - Yes. On the software side, we're still far off. For example, one of the clues I mentioned earlier is that the amount of training data that a large language model needs, like, you know, GPT-X, compared to what a human needs in terms of amount of text to understand language, is hugely different. So that tells me we're missing something important, but I don't think it's because we're missing something in the low-level hardware of biology. Although, you know, I'm a big fan of listening to biology and understanding what brains are doing and so on, so they can serve as inspiration, I don't think it's a hardware problem. Now, hardware is important for efficiency. So current GPUs are not efficient compared to our brains. But it doesn't mean that in the next few years we will not be able to build specialized hardware that will be a thousand times more efficient than the current ones. And now there's a much bigger incentive for companies to actually invest in this, because these AI systems are going to be more and more everywhere. It's going to become much more profitable to make these investments. - Yeah, the proliferation of AI is crazy. Before we derail on that, though, I want to ask you: so, we're comparing the way that AI is evolving to human evolution. I've always thought of evolution as, to use Richard Dawkins' phrase, "the blind watchmaker." It's not trying to make a watch, but the watch emerges out of what we could probably refer to as a few simple lines of code. It's like replication, and the way that it replicates, plus a desire to survive, on a long enough time scale. - There's not even a need for a desire to survive. It's simply the selection of those who survived. - Yeah, interesting. Is that an important distinction? Because I worry... actually, I don't worry. This would then maybe be what you're trying to get me to understand about why machines don't need a desire: there needs to be a selection criterion for the one that does the thing better. That will be enough to drive the exponential improvement. - That's the way we train those systems. The way we train them is that we throw away all the configurations of parameters that don't work, and we focus more and more on the ones that do. That's how training proceeds. It changes things in small steps, just like evolution does, except evolution does it in parallel, with billions of individuals searching the space of genetic configurations that can be useful, whereas we're doing it the learning way: we have one big individual, and we're making one small change at a time. Both are processes of search in a very high-dimensional space of computations. - Okay, so let me... this was something that I heard you say in an interview at one point. I wasn't sure if I was going to ask it, but now, as you were saying that, I realize that the entire universe is born of a simple set of physical laws, for lack of a better word. Because I was trying to think, what is the origin of evolution? Because you said that you don't need it to desire; it just needs to get selected. And then I was like, well, what's selecting it? The laws of physics just dictate that certain things will continue to hold their form and function and others will disintegrate. Okay, so then everything is born out of these laws of physics, which we don't fully understand yet.
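"Throwing away the configurations that don't work" is literally how the simplest search procedures operate. A sketch of random-mutation hill climbing, a toy stand-in for both evolutionary selection and learning; the target values are made up:

```python
# Selection without desire: keep a change only if it scores better.
# Nothing here "wants" anything; survival of configurations does all
# the work, just as in Bengio's description of training.
import random

target = [0.3, -1.2, 4.0]           # unknown "ideal" configuration

def fitness(params):
    # Higher is better: negative squared distance to the target.
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]            # one individual, changed in small steps
for step in range(20_000):
    candidate = [p + random.gauss(0, 0.05) for p in params]
    if fitness(candidate) > fitness(params):
        params = candidate          # the better configuration "survives";
                                    # the worse one is simply thrown away

print([round(p, 2) for p in params])   # should land close to the target
```

Replace the random mutation with a gradient step and you have learning; replace the single individual with a population and you have evolution. The selection principle is the same in both.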
But do you think there will be similar laws of intelligence, where we realize, oh, here is the very simple subset? And all of the struggle that we have right now is because, much like with the laws of physics, we don't yet fully understand them, but we can still build a nuclear bomb, nuclear power, GPS; we know enough to do amazing things, but we don't know everything. Do you think we have the same thing happening in intelligence?


Intuition, Desires, And Moral Dilemmas In AI

Intuition and complex things (58:43)

That's what drove me into the field: the hope that there may be some principles that we can understand as humans, that we can write about, explain to each other, and so on, maybe write math that formalizes them, and that are sufficient to explain our intelligence. Now, obviously, for this to work, it has to be that it explains how we learn, because the content of what we have learned, the knowledge that has been acquired by evolution and then by individual life, is too big to be put in a few lines of math. Whether this is true or not, obviously we don't know, but everything we have seen with the progress of neural nets in the last few decades suggests that yes: if you look inside these systems, the mathematical principles behind those large language models are something you can describe and explain; when I teach, we explain these to students and so on. It's not that complicated, just like physics is not that complicated. What is complicated is the consequences. I think there's a good analogy here, to also understand the story about intuition and very complicated things that are difficult to put in formulas. The laws of physics are very simple. You can write them down.


Desires (01:00:14)

But what's complicated is: if you put a huge number of atoms together that obey these laws, you get something very complicated, like an ice storm, that's very difficult to predict, because we don't have the computational power to emulate it. Out of very simple things, like the simple laws of physics, you get something extremely complicated that emerges. And it's similar with neural nets: a few simple lines of code, or a few simple mathematical equations, applied at scale, and with enough data in this case, and you get something that emerges that's very powerful and very complicated, and not easy to reduce to those initial principles. - Now I want to bring back in the idea of alignment, of desire. If physics runs off the back of a set of simple rules that does not need to want any outcome, but humans manifest desire, and so we rapidly become the most complicated thing that we know of, do you think about the problem of alignment? Are AI researchers trying to give the intelligence a level of desire, because that would make it more powerful, or am I just barking up the wrong tree? I keep coming back to: AI without desire, mildly potent; AI with desire, dangerous beyond all measure and reason. - Yes and no. So yes, with desires and the right, you know, computational power and the right algorithms, it could be very potent and very dangerous, and potentially very difficult to align to our needs, our values, and so on. And lots of people are working on this: how do we design the algorithms so that, even though we give goals to the machines, they will not end up doing things that are against what we want? So that's the alignment problem. But where I disagree with you is that I think we're going to have AI systems that have no goals, no wants, that are just trained to do good inference, to learn as well as possible about the world from the data they have, and to recapitulate to us what the good answers are to the questions we are asking. So let me explain why this would be very useful. In science, typically, we do experiments and then we try to make sense of that data. We come up with theories, and there could be multiple theories that are consistent with the data, and so different people may have different opinions on them, or they recognize that all of these theories are possible, and at this point we can't disambiguate between those theories. So what they do is, based on the fact that we have these competing theories, they design another batch of experiments to try to eliminate some of those theories. And then the cycle goes around: more experiments, more data, more analysis, more theories, and eventually we hopefully zoom in on fewer and fewer theories. So this is the experimental process of science. We come up with an understanding of the world, but it's not one understanding; there's always some ambiguity. In some cases we are very sure, but yeah, a scientist who's honest is never, never sure, except maybe in math, right? So why am I telling you all this? Because that whole process is at the heart of all the progress we've seen in humanity, and it will be needed to cure disease, to fight climate change, even to understand how society works and how people interact with each other better. So: all of the things that scientists do to make sense of the world and come up with proposals of things we could do to achieve goals.


Having machines like us (01:05:07)

All of that process could be done by very powerful AI systems that don't have any goals. Their only job is to make sense of the data, represent all the theories that are compatible with it, and suggest the best choice of experiments we should do next in order to get the answers to the questions we want. And that can all be done without any wants, just by obeying some laws of probability that are known; we just need the computational scale to implement that, and algorithms that people will discover, but I think we already have the basis for that. So what I'm trying to say is, we could have machines that are extremely powerful, more powerful even than a human brain. We have scientists doing that job right now. But look, for example, at biology: because of the progress of biotech, we are now generating data sets that no human brain can visualize or absorb. We have robots that do experiments, again in biotechnology, where the number of experiments is in the millions. A human cannot specify a million different things to try by hand. A machine can. A machine with the right code.
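The "machine with the right code" here is doing Bayesian inference: maintain beliefs over every theory compatible with the data, and update them as experimental results come in. A minimal sketch with made-up coin-bias "theories" standing in for scientific hypotheses:

```python
# Bayesian inference without wants: this code has no goal except
# tracking which "theories" remain compatible with the data.
# Toy setting: three competing theories about a coin's bias.
theories = [0.2, 0.5, 0.8]                             # candidate P(heads)
posterior = {t: 1 / len(theories) for t in theories}   # start uniform

def update(posterior, outcome):
    """Bayes' rule: reweight every theory by how well it predicted."""
    likelihood = {t: (t if outcome == "H" else 1 - t) for t in posterior}
    unnorm = {t: posterior[t] * likelihood[t] for t in posterior}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

# Feed in experimental data; ambiguity shrinks but never fully vanishes,
# matching Bengio's point that an honest scientist is never 100% sure.
for outcome in "HHTHHHTH":
    posterior = update(posterior, outcome)

for t, p in posterior.items():
    print(f"P(bias={t}) = {p:.3f}")   # most mass ends up on bias 0.8
```

At no point does this system "want" heads or tails; it only redistributes probability mass over theories, which is exactly the goal-free usefulness Bengio is describing.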


Bayesian Inference Without Wants (01:06:35)

And that machine doesn't need to have any wants. It just needs to do Bayesian inference, if you want the technical term. So yeah, the bottom line is, we can have hugely useful machines that are incredibly smart but have no wants whatsoever. - Okay. So it's becoming clearer to me now what our base assumptions are. So your base assumption is that AI, as it already is, does all the amazing things you want it to do and is as dangerous as you could fear, as a tool for humans to use. And the thing that I'm focused on is: in your scenario, I can just tell it to stop, and it will stop. The paperclip problem, in my estimation, isn't a real problem if I can just tell it, stop, stop making paperclips, and it then shuts down. Where it becomes a problem is when it's like, no, I want to make paperclips, and I'm going to keep making paperclips, and there's nothing you can do to stop me, and I'm going to go around you this way and that way. And I'm not nearly as concerned... I get it, humans have so many weapons at their disposal. I already know what the world looks like when people have just unbelievably powerful weapons at their disposal; it's manageable. But when the weapon gets to be a million times smarter than I am, and decides what it wants to aim at, and decides when it wants to go off, and nobody gets to tell it otherwise, that's a world that freaks me out. And so when you think about the alignment problem, do you think it's a problem? Because in your world, the AI doesn't have its own wants and desires coming from an emotional place, where one thing feels better than the other, so it doesn't have the same type of human desire to go in a given direction that we have, and we know what that's like. People kill for their fucking kids, man. They will do crazy things when the thing feels good enough. So in your world, can't we just tell it to stop? - Okay, so there are two kinds of machines we could build with the current state of our technology today. This is a choice. One kind of machine is more like us, and has wants and goals, and it could decide to do something we did not anticipate. And that can be very dangerous. And people are trying to see how we could program them in a way that would be safer. That's the alignment problem. But we have a choice. We don't have to go that route. We could build machines that are not like us. We don't try to make them like humans. We don't give them feelings. We don't give them wants. See, the thing is, once we understand those principles of intelligence, we can choose how we apply them. If we're wise, we're going to choose the safe route.


Truth is a great collective unifying goal (01:09:44)

It doesn't do anything. It doesn't want anything. Its driving objective is truth.

Okay, so might I suggest: when we've gathered all the nations together and you're about to go on stage, what I'm going to try to get you to convince people of is that that becomes the most important thing. Do not give AI desire, period. Inference only, truth only. That's it.

That's it. That's it. And actually, I think that's the safest route. The problem, now, the problem is we need to have all these people around the table and to agree. And honestly, I'm not sure it's going to work. There might be some crazy guy somewhere who says yes, but then goes a different route because he wants to have fun with those machines that look like humans, and he's a moron and doesn't realize how dangerous it is. People are crazy. People have emotions. People are unconscious of the consequences. They think, oh, it's going to be fine, but I'm going to make a lot more money than the other guy because I'm going to use this thing that is more like humans. There is going to be a temptation to build systems that are like us.

Would it be more powerful if it was more like us?

I don't know if it would be... well, it would be more powerful in the sense of being able to act in the world. But that's also more dangerous: things acting in the world based on their goals, that's the place where there is a slippery slope. Or maybe we can make progress. We're trying to design, if you want, rules and algorithms such that even if they have goals, it's going to be safe. But even that is not a sufficient protection, because somebody could just decide to not use those guidelines. So having algorithms that make AI alignment work is not enough. We have a social problem. We have a problem of collective wisdom. How do we change our society so that we avoid somebody doing a kind of catastrophic thing with a very powerful tool that can potentially destroy us all? It's not something for tomorrow. It's not going to happen tomorrow. It's not going to happen next year. It's not going to happen in five years. But we are on that path. And it's going to take a lot of time for society to adapt, and probably reinvent itself deeply, for us to find a way to be happy and safe.

When was the last time we had to reinvent ourselves like that?

We reinvented ourselves many times over, but not like that; of course, this challenge is completely new. But we did. Think about major cultural changes that have happened in the history of humanity. Think about religions, the invention of nation states, the invention of central banks and money. I'm almost quoting Harari here. So we've created all kinds of fictions, as he calls them, that drive our society and people in ways that kind of work but are not adequate for the next challenge. By the way, dealing with this challenge also helps us deal with things like climate change and nuclear dangers and so on, because it's all about how to coordinate the billions of people on earth so that we all behave in a way that's not dangerous to the rest. I don't know how to do that, but we need our best minds to start thinking about it.

You're really starting to pull together some very interesting threads here. So, Yuval Noah Harari's idea of a collective fiction.


Useful Fictions Without Manipulation (01:14:10)

I've heard other people refer to it as a useful fiction. That's very interesting. Now, my concern is that that works when people don't understand. So I'll go to the most recent one, central banking. People don't understand it, and you've got the whole story of, what's it called, The Creature from Jekyll Island, where they go, and to your point, it was very much a decision: a cabal of people went and decided, we're going to do it like this and we're going to present it to the world like that. And they did it, and hey, it just quote-unquote works. There are very few things, though, more unnerving than peeling that back and realizing what it actually is. And so I wonder how we present a useful fiction to the world about AI that will get us all unified in a way that will be useful but isn't manipulative.

I think that's the essence of what democracy should be: that we rationally accept the collective rules for our individual and collective well-being. That actually has worked quite well in many countries, and we need to go one step further in that direction. It can absolutely be truthful and not manipulative. So long as principles of justice and fairness and equity and so on are respected, people will go with that. But here I think we need to go even beyond the democratic system. In a democratic system, if it works well, we don't need to lie to people to get them to accept to go in a particular direction, to vote in a referendum or for a particular decision. They should in fact be as conscious and understanding as possible of the decisions that they're actively taking.

Yeah. Getting everybody on the same page, that is the tricky part. That's why, when you first said it in the context of religion, it immediately felt like, "Oh, if we could pull that off, if we had a collective narrative about what this meant, it might work. The problem is..."

It's not my preferred way of solving the problem, obviously. I'd much rather go with an uber-democracy that really takes these principles even further.


AI Competition, Efficient Time Management, And Perception

Proposed Solution: Alignment (01:16:52)

Yeah. That's where, I think... I get it: regulation works, and for the countries that come to the table, regulation is amazing. We should regulate this. I think we have to. You have to do something; to your point, just because it's hard doesn't mean you should stand still. But at the same time, that's one where I'm like, yeah, well, all the countries that regulate it still do not account for the person you were talking about, the one who says, "Oh, I'm going to go build this thing." They don't recognize second- and third-order consequences, or, more terrifyingly, they do recognize the second- and third-order consequences and they do it anyway, which gets into the crazy-man hypothesis. But having read about Robert Oppenheimer when they were building the bomb, and how you just become convinced that, look, the Nazis are building a bomb, we need to do it, we have to do it faster, we'll sort of worry about the bigger problems later down the road... I very much worry that that's where we are with AI.


Issues of Competition in A.I. (01:17:47)

Okay. I'm going to set that aside for a second because it's terrifying.

So, as I said at the beginning, I worry too.

Yeah, rightly so. How we solve the problem is a completely different thing. Let me ask you: do you think, as AI continues to come on board, and let's say that we're thoughtful about it and we've got good regulation in place, will it be like dealing with a hyper-intelligent human, or will it feel completely alien to us?

It depends how we choose to design it. If we build systems that have wants, that have a personality, that have emotions... and we could, because the more we understand these things about humans, the more we'll be able to do it. Personally, I don't think that's the wise choice. If we go the other route, of systems that are very useful to us but not necessarily anything like humans, I think it'll be much more comfortable, because we won't be expecting those things to interact with us like humans do. They will just be assistants, basically, to help us solve our problems and find solutions.

Yeah. The alien idea: as you were answering that question, I had a wave of, I don't see how we're going to avoid it. Loneliness unto itself is going to lead people to play with making it emotional. Even as I think about the way that we want to use AI in my company, it's to generate very realistic characters in a video game, and I can just see that, to make them more and more realistic, you're going to want them to mimic emotion, for sure. And if we pass regulations, that's probably where we stop: you create things that mimic emotion but don't actually have it. But to create something that is more realistic, we will. So I want to go back to Go for a second. In Go, they said it was like playing an alien; it made moves that were so different. So given that already you have people saying it comes at things in a way so counterintuitive that it feels completely foreign, don't you think that even without wants and desires it's going to feel just...

I think it's completely different. The reason it looks foreign to current players might be the same reason the way we play Go at the master level might look foreign to somebody a hundred years ago: we've made progress in our understanding of how to play well, and the strategies we use now may look very surprising to somebody from a hundred years ago. It's just that these machines are now trained on so many games, they're like, you know, a hundred years into the future, if you want, in the evolution of Go, if we had let things continue. So what they discovered... basically, it's like science, right? It looks like magic until you understand it. If somebody from a hundred years ago comes today and looks at our cell phones, it's going to look like magic. It's going to look very unintuitive: what could that possibly be? And we are just used to it. So I don't think it's because there's something fundamentally different. I mean, there are fundamental differences, but these are just systems that are more competent because they've been trained on more data, trained longer, and focused on this particular problem, in the case of Go.

Evolution is one of the big themes that has come out today.


How to Use 10x More Efficient Time Management By Implementing These 10 Steps (01:21:45)

If people want to keep up, Yoshua, with you and the ever-evolving science of AI, where should they go?

Well, I have a website; it's easy to find. I have a blog where I write some of my ideas, and of course I also write a lot of scientific papers, as does my group. Mila, the institute that I founded with a couple of universities here in Montreal, has about 1,000 AI researchers working on many of these problems, but also thinking hard about the responsible-AI aspect and these questions. And there are many people around the world who are thinking hard about this, as they should.

The truth is, hitting your career goals is not easy. You have to be willing to go the extra mile to stand out and do hard things better than anybody else. But there are 10 steps I want to take you through that will 100x your efficiency so you can crush your goals and get back more time into your day. You'll not only get control of your time, you'll learn how to use that momentum to take on your next big goal. To help you do this, I've created a list of the 10 most impactful things that any high achiever needs to dominate, and you can download it for free by clicking the link in today's description. Alright my friend, back to today's episode.

This is literally a direct quote from the book, towards the end, and I quote: "We are headed for collapse. Civilization is becoming incoherent around us." I'd love to know what you guys mean by that, and whether that, to you, is a big part of the thread through the book, because it was for me.

Well, the first thing to say is that you skipped the warning in the front of the book that it should only be read while sitting down, so people don't fall over and injure themselves. Yeah, well, we are headed for collapse. That's really not even an extraordinary claim if you just simply extrapolate out from where we are. We are outstripping the planet's capacity to house us, and we don't appear to have a plan for shifting gears. So it's really a factual statement. Now, the question really is why, and the bitter pill is that the very thing that made us so successful as a species is now setting us up for disaster. That is to say, our evolutionary capacity to solve problems has outstripped our capacity to adapt to the new world that we have created for ourselves. And so we've become psychologically and socially and physiologically and politically unhealthy, and our civilization isn't any better.

That said, if any species could get us out of this mess, it's us. It's exactly as Brett said: we are the most labile, the most adaptable, the most generalist species on the planet, and born with the most potential to become anything else previously unimagined. So I do feel like, in the end, the message of the book, which is explicitly and consistently evolutionary in all of its different instantiations, is hopeful. And yes, that quote that you read is ominous, and, as Brett said, a factual statement, but we can do this. We have to do it, and we need to try. And in fact, in evolutionary biology, we recognize something we call adaptive peaks and adaptive valleys. And it would have to be true that to shift gears to something much better, something that gave humans more of what it is that we all value, we would have to go through an adaptive valley, and it would look frightening. They are, in fact, dangerous places to be, but it's part and parcel of shifting from one mode of existence to another.

All right, I think an idea that's going to be really important to get across.
And this is something, as a guy that only ever thought I would talk about business: in trying to explain how to get good at business, I kept having to come back to mindset, and then in trying to explain mindset, I keep having to go to evolution. It's this idea that we're having a biological experience, that your brain is an organ. You guys said that we are not a blank slate, but we are the blankest of slates, which I think is a phenomenal way to put this idea. And I want to tie that to the title and get your guys' take. So it's A Hunter-Gatherer's Guide to the 21st Century, and the way that I take that is that notion: you have to understand that you're a product of evolution, that your brain is a product of evolution. And then once you understand the forces of evolution and how we got here, then maybe, just maybe, we can find our way out of it. So what are the key elements of being a product of evolution that you think people miss, that we must understand if we're going to navigate our way well out of this valley of evolution?


A Hunter-Gatherer's Guide to the 21st Century (01:26:28)

Let me say first that the title, A Hunter-Gatherer's Guide to the 21st Century, evokes that sort of romanticized hunter-gatherer on the African savannah of the Paleolithic, which, of course, is a part of our human history and does have many lessons in it to teach us about who we are now and who we can become. But as we say in the book, we are all parts of our history.


We Are All Parts of Our History (01:26:52)

We are not just hunter-gatherers. We are also, right now, post-industrialists, and there are evolutionary implications of that. Go a little farther back, or a lot farther back depending on your framing, and we are agriculturalists. Go farther back, we're hunter-gatherers. Go farther back, we're primates, we're mammals, we're fish. All of these moments of our evolutionary history have left their mark in us and have something to teach us about both what our capacities are and what our weaknesses are, and what we can do going forward. And I would add, the lessons from evolution are both good and bad here. One thing that we realized, and that our students over the course of many years of teaching this material realized, was that everything about our experience as human beings is shaped by our evolutionary nature. And that has a very disturbing upshot, because we are fantastic creatures with an utterly mundane mission, the very same mission that every other evolved creature has: to lodge its genes in the future.


Are we just information processing machines? (01:27:47)

And this actually explains the nature not only of our physical beings, but of our culture and our perception of the world. So understanding that all of that marvelous architecture is built for an utterly mind-numbing purpose is an important first step in seeing where to go. But the other thing to realize, and you referenced our assertion that we are the blankest slate that has ever existed or has ever been produced by evolution: what this means is that we actually have a map of what we can change. To the extent that our genomes have offloaded much of the evolutionary adaptive work to the software layer, we are actually capable of changing that layer, because that layer is built for change. But not everything exists in that layer. So some things about what we are are very difficult to change, and some things are actually trivial, easily changed. And knowing which is which is a matter of sorting out where the information is housed. But it's all there for the same reason. It's all evolutionary, be it genetic or cultural or anything else.

Can you guys give us an example? I found this very provocative in the book, and it certainly rings true to me: to say that we are in some ways fish from an evolutionary standpoint, that we are in some ways primates from an evolutionary standpoint. What does that mean exactly?

Again, it's a factual claim, one that, once you've seen the picture standing from the right place, is uncontroversial. When we say, is a platypus warm-blooded, we are not asking a question about its phylogeny, right? We're asking about how it works. When we ask, is a whale a mammal, we are asking a question about phylogeny. So when we ask the question, are humans fish: if we are asking a functional question, then maybe not. But if we're actually asking a question akin to, is a mouse a mammal, then we are asking a question about the evolutionary relatedness of that creature to everything else. And the key thing you need to understand is that a good evolutionary group, like mammal or primate or ape, is a group that falls from the tree of life with a single clip. If you clip the tree of life at a particular place, all of the apes fall together. If you clip it lower down, all of the primates fall together. And the claim that we are fish is a simple matter: if we agree that a shark is a fish, and we agree that a guppy is a fish, then if you clip the tree of life such that you capture those two species, you will inherently capture all the tetrapods, which is to say creatures like us. So we are fish, as a factual matter, if the question is one of evolutionary relatedness.

So let me, if I may, say that in slightly different words.


Consciousness, Impulse, And Understanding Thought Process

Are We Fish? What Does the Word Even Mean? (01:31:10)

There are at least two main ways to be similar. You can be similar because you have shared history, and you can be similar because you've converged on some solution. So dragonflies and swans both fly, not because the most recent common ancestor of dragonflies and swans flew, but because in each of their environments flight was an adaptive response. And that means that flight, flyingness, is not phylogenetic; it's not a historical representation of what those two things are. Whereas if you say, well, both whales and humans lactate in order to feed their babies, that is a description of something that they both inherited from a shared ancestor. The earliest mammal lactated to feed its young, and any organism on the planet today that is a descendant of that first mammal is a mammal. Even if some future mammal went a different way and lost the ability to lactate, it would still be a mammal. So Brett mentioned tetrapods: the tetrapods were the fish that came out onto land with four feet and started moving around, and became the amphibians and the reptiles and the birds and the mammals. But snakes are tetrapods, not because they still have four feet, because they don't, but because they're a member of that group. So it's a historical description of group membership, as opposed to an ecological description of what we're doing. We're not aquatic like most fish are, but we're fish because we belong to a group that includes all the fish.

I'm going to say why I think that matters, and why I think you guys put that at the beginning of a book that sort of has this punchline of, hey, we're really headed towards disaster, and we have to be very thoughtful, and here are some solutions.
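The "single clip" idea has a crisp computational reading: a clade is an ancestor plus all of its descendants, so membership can be checked mechanically on a tree. Here is a minimal sketch, my own illustration using a drastically simplified toy tree (the node names are placeholders, not a serious phylogeny), of why the smallest clade containing both sharks and guppies necessarily sweeps in humans:

```python
# A toy sketch of the "single clip" idea (illustrative; the tree is drastically
# simplified). A clade is an ancestor plus ALL of its descendants, so the
# smallest clade containing both sharks and guppies must include tetrapods.
PARENT = {
    "shark": "jawed_vertebrates",
    "bony_fish": "jawed_vertebrates",
    "guppy": "bony_fish",
    "tetrapods": "bony_fish",   # land vertebrates descend from bony fish
    "human": "tetrapods",
    "snake": "tetrapods",
}

def ancestors(node):
    """The node itself plus every ancestor up to the root."""
    chain = [node]
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

def last_common_ancestor(a, b):
    b_set = set(ancestors(b))
    return next(n for n in ancestors(a) if n in b_set)

def clade(root):
    """One 'clip' at root: the root and everything descended from it."""
    members, changed = {root}, True
    while changed:
        changed = False
        for child, parent in PARENT.items():
            if parent in members and child not in members:
                members.add(child)
                changed = True
    return members

fish_clade = clade(last_common_ancestor("shark", "guppy"))
print("human" in fish_clade)  # True: by relatedness, we fall inside "fish"
```

The design point the sketch makes is that clade() can only take whole subtrees: there is no way to clip out sharks and guppies without taking the tetrapods, humans included, along with them.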


Consciousness Vs Impulse (01:32:52)

So the reason why in business you end up having to talk about evolution is because I need a business owner to understand: you cannot trust your impulses, because your impulses may not have the growth of your business in mind; they may not reflect an understanding of consumer behavior. An impulse may simply be something from our evolutionary past, akin to: it's better to jump away from the garden hose thinking it might be a snake than to think it's a garden hose when it really is a snake. And once you understand, okay, my mind is structured in a certain way, it has these insane biases, it tends me toward certain things. The one that bothers me the absolute most is that when people have a feeling, it feels so real, and they never translate it into logic. So you're like, that thing makes me angry, therefore it is bad and it must be attacked, assailed, whatever. And if you run a business like that, if you cannot divorce yourself from the impulse, stop it, insert conscious control, and then figure out what the first-principles logical buildup is, you can't solve a novel problem. And until you can solve a novel problem in an environment that changes as rapidly as our current world, you guys call it hyper-novelty if I remember correctly, you get into these crazy-making scenarios. And so, while it seems almost absurd to say that in some way we are fish, the key point that I take away from your book, and that seems so powerful to recognize, is that you have to understand that it wasn't a perfect construction, at least not towards modern goals. Does that make sense to you guys?


Understanding How We Think Is Key (01:34:42)

Absolutely. Absolutely. Now, there are really two upshots to this claim that you are a fish. It's very hard for people to wrap their minds around it the first time. But once you realize that this is what we mean when we say a whale is a mammal, that we are making a claim about the tree of life, then you can actually teach yourself how adaptive evolution works just by recognizing that snakes are the most speciose clade of legless lizards. Snakes are lizards, right? You don't think of it that way, but they are. Seals are bears that have returned to the sea. So once you understand that, all you have to do is say: actually, this is a that. It's unambiguous. And that means that adaptive evolution is the kind of process that can turn a bear into an aquatic creature like a seal. So that's one.

Or a lizard into a snake.

Right.


Moving Between Novel Problems And Familiar Memories (01:35:35)

Or a lizard into a snake. The other thing that you mentioned, and you're right on the money, is that if you use your intuitive, honed instincts in order to sort through novel problems, you will constantly upend yourself, because those instincts aren't built with those problems in mind. The thing that's special for us humans is that we have an alternative. And the alternative, we argue in the second-to-last chapter of the book, is consciousness: the correct tool for approaching novel problems is to promote whatever the underlying issue is to consciousness, to share it between individuals who likely have different experience and will see different components of it clearly, and to come to an emergent understanding of what the meaning of the problem is and what the most likely useful solution may be.

So in some sense, really, what you're saying is, in this context you're trying to get people to get into their conscious mind and process this as a team activity rather than go with their gut, which is very likely wrong.

Absolutely. And our capacity as humans, and that includes as a modern human trying to engage in business with people, is to oscillate between this conscious state and a cultural state, which is one in which maybe change isn't happening so rapidly. Maybe the rules that we've got are good for the current situation. Let's just do this; let's do a set-and-forget on this set of things over here and not constantly renegotiate. Whereas in this other part of the landscape, we actually do need to stay in our conscious minds. And yes, we need to tamp down the emotion and tamp down the quick gut response, but engage with one another and recognize that it's not Satan on the other side of the interaction; it's another human being with all the same kinds of strengths and weaknesses as each of us has.

Yeah, there's a really interesting thing that happens when you have a team around you, whether they're employees or otherwise. Literally just the other day I said something to my team and several of them misconstrued it, and I could see they were having a big emotional response. And I said, okay, tell me your objection in a single sentence, with no commas, no run-ons, no parentheticals. And what you find is that old Einstein quote: if you can't explain it simply, you probably don't understand it very well. People have this emotional reaction, and they then act out that emotional reaction in the world, but they don't actually stop to take the time to be able to say it in a single sentence. And so you end up in what my wife and business partner and I call having to chase them: they'll say, here's my problem; you'll solve it and say, cool, so I'll do something that addresses that; and they'll be like, well, it's not quite that, it's this; and then you solve that and they're like, well, it's not... When you force people to say something really simply, it forces them to interpret that emotion, to bring it into the conscious mind, and then to actually deal with it, which I find utterly fascinating. Do you guys have a method by which you do that in your own lives, or that you've taught other people?

Yeah, I would say there's a first go-to move, which is: let's figure out what we actually disagree about. And very frequently you can cover half the distance or more just by separating an issue into two different ones.
So, for example, if I talk to a conservative audience, I know we're going to disagree about climate change. But I also know from experience that I can get a conservative audience to agree that if they believed that human activity was causing substantial change to the climate, and that that was going to destabilize systems on which we were dependent, then they would be enthusiastic about doing something about it. So what we really disagree about is whether or not we are causing something sufficient that we need to take that action. That's half the distance covered, just by dividing the issue into two puzzles. And you'd be amazed: almost everything that we have fierce disagreements about looks like this, where you just sort of assume the other side has every defect rather than realizing we agree up to a point, at which point we differ.

Yeah, and this is different from what we were just talking about, with regard to whether you're having an emotional or an analytical response. This is a question of: okay, we think we're talking about the same thing.


Negotiating Differences Between Thinkers (01:40:02)

But probably we are using the same words for different categories in our heads. Can we figure out how many subcategories there are? Say I've got five in my thing and you've got five in yours, but maybe there are only two that overlap. So maybe we focus on those two. But maybe the devil in the details is in one of those other six that is only in one of the people's brains. And when it's revealed, like, actually, you think I believe that thing and I don't, that's not something we share between us. So, having the capability to zoom in and out on problems and say, actually, the problem can be smaller than you think, and also it is larger than you think, and then constantly reevaluating the framing and the scale at which we're doing the analysis.

You guys talk in the book about theory of mind. And Heather, I know you've either started writing, have written, or have threatened to write a science fiction novel, which, you know, I desperately want you to do and publish. But I've started doing a game when I find myself in that situation. I learned this in my previous company, where both of my partners were really smart guys, but every now and then we'd get in an argument, and I'd be like, I think they're an idiot, but I know they're not an idiot, and they think I'm an idiot, but I'm not an idiot. And so I started approaching it as a writer and saying, okay, if I were writing this character in this scene, what would have to be true for them to be acting this way? What would they have to believe, be thinking, whatever? And in my marriage this has become an extraordinary tool: saying, for you to be reacting this way, you would have to think that I believe XYZ. Is that the issue? And by getting to those, what I call base assumptions, you can really begin to facilitate things. You guys must have encountered this a bazillion times with students. How do you unearth that? What's the process of uncovering those underlying issues? In fact, it is so weird to me that you two have become, like, the most attacked people on planet Earth. I will never quite understand how this has happened. But how do you guys tease it out and not just go, "They're evil"?

Well, first of all, I think we're attacked because we look like villains.

Sure. So much so. Right. Exactly.

Well, you hinted at an important issue here that I think is actually quite modern. If you lived any sort of normal existence of an ancestor, even just a couple hundred years before the present, you would find that they pretty much grew up around the people that they ended up interacting with as adults. They didn't stray very far from home. Everything would be incredibly familiar, and the language that they used to interact with everybody they were encountering would have been shared, because it would have been picked up from an immediate group of ancestors that they both knew. When we use English to talk to someone else, we have an incredibly blunt tool, because the ancestor from which we picked up that shared language is quite distant.


Definition and language (01:43:15)

And what this does... you know, you really have two kinds of people in the world. You've got people who more or less use a tool like English as it was handed to them, and they don't question it. And you have people who are trying to break new ground. And what is true for everybody who breaks new ground is that they end up building a personal toolkit. They will redefine words so that they become sharper and more refined and more useful. And then when you put two such people together, they will talk right past each other, because they don't remember that they redefined things. So one thing that is essential, if you're going to team up with someone else who is generative and has done their own work and arrived at some interesting conclusion: you need time. It's weeks of talking to each other before you even understand how they use language. Once you do that, you can have an incredible conversation. But if you think you're going to sit down with them and immediately pool what you know and get somewhere, you've got another thing coming, because at first they're going to sound like they don't know what they're talking about. You've got to find those definitions and figure out what they mean. And actually, once you realize that this is the job, it's very pleasurable, and it's really an honor when somebody lets you look through their eyes: "Oh, that's how you see the world." Now I get a chance to see it that way, and then let me show you what I'm seeing, and you really can get somewhere. But there's no shortcut around the time necessary to learn each other's language.

That's right. And that really is a parallel for what we were doing in the classroom as well. If we were teaching 18-year-olds, if we were teaching freshmen, we didn't assume that they all came in as experts, obviously. And yet the same logic applies. Everyone has... actually, I don't think I really agree that regardless of what language you're speaking, you either take it on faith as you have received it or you act decisively to change it. I think teenagers tend to be modifying language pretty actively. And so especially when you find yourself in a room full of relatively young people in a college classroom, you have a lot of people who are using language differently than the professor does. And then you're also in the business of introducing them to a set of tools, some of which have specialized language associated with them, associated with whatever it is that you're teaching. And you're finding the common ground between these: okay, actually, all of us modify language some. Let's figure out how to use language that we can all agree on and understand, for the purposes of communication as opposed to for the purposes of displaying group membership.


Science, Education, And Flawed Studies

Jargon versus Terms of Art (01:45:50)

Yeah, in fact, that's often what jargon is about: group membership displays. And that's what memes, and especially, well, a lot of the very rapidly changing language that doesn't happen in technical space, are really about: demonstrating that you're on the inside of some joke.

Well, actually, this is a perfect case of a personal definition that must be shared, otherwise you can't talk. Because I, at least, distinguish between terms of art and jargon. Most people will use the term jargon for both things. But the point is, terms of art are a necessary evil. You have to add some special term because the language that you're handed, the general language, doesn't cover it, and so you need a special term to describe something. And that means that somebody walking into the conversation isn't necessarily aware of what's being said until they've learned that term. Jargon is the pathological version of this. Jargon is the use of these specially defined terms to exclude people from a conversation that they probably could understand, people who might even realize you didn't know what you were talking about if they could understand the words that you were using. So you use those words to protect yourself. And until somebody gets that when you say jargon you're not talking about specialist language, you're talking about a competitive strategy, they won't know what you're saying.

And the difference, as Heather points out, is that in a room full of 18-year-olds, especially when you're the professor, at some level you can say, look, here are the terms that we need in order to have this conversation, and more or less people will adopt them, because that's the natural state of things. It's different with two peers getting together. My rule is: I don't care whether the definition ends up being the one that I came up with or your set of definitions. It doesn't matter to me. What I need is a term for everything that needs to be distinguished, and we both need to know what those terms are in order to have the conversation; whose terms they are doesn't matter.

And yet, as I think we say in the book, our undergraduate advisor, Bob Trivers, an extraordinary evolutionary biologist, when we were leaving college and applying to grad school, gave us a piece of advice about what kinds of jobs we might ultimately want if we were to stay in academia. He said: do not accept a job in which you are not exposed to undergraduates, because teaching undergraduates means exposing yourself, and the thinking that you are presenting, to naive minds who will throw curveballs at you. And some of those curveballs are going to be nuisances, and maybe they'll waste your time, but some of them are likely to reveal to you the frailty in your own thinking or in the thinking of the field. And that is the way that progress is made.


What is science? (01:48:43)

And so, you know, who we call peers is up for discussion, and recognizing that we can all learn from almost every person that we interact with is a remarkable way forward.

Yeah, and the corollary to that is that there's a lot of pressure not to reveal what you don't know by asking questions that will establish the boundaries of your knowledge. Being courageous about actually acknowledging what you don't know often leads to the best conversations, right? You guys do talk about that in the book, and I think that this is such an important idea. I'd love to tie it to something else you talk about, which is: what is science? You guys have a pretty unique take on what science is, that it could be done with a machete and a pair of boots out in the jungle, or it can be done in a laboratory. So, what is science?

It is a method for correcting for bias. And that method is pretty well known; it has had a few updates along the way. But the basic idea is that it is a slightly cumbersome mechanism for correcting for human bias. And the result is that it produces a set of models and a scope of knowledge that improves over time, and what "improves" means is that it explains more while assuming less, and fits maximally with all of the other things that we think are true. Ultimately, all true narratives must reconcile, and that includes the scientific narratives that we tell at different scales. The nanoscale has to fit with the macroscopic scale, even if we don't understand how they fit together yet. So ultimately we're sort of filling in from both sides what we understand, and what we expect is that they will meet in the middle like a bridge. And if they don't, it means we got something wrong somewhere.

Yeah. So science is not the methods of science. It's not the glassware and the expensive instrumentation, and it's not the indicators that you're a scientist because you're wearing these things; it's not the lab coat. And it's not the conclusions of science; it's not the things that we think we know, many of which are actually true and some of which aren't. Science is the process, and all those other things are sort of hallmarks that may or may not be accurate proxies when you're trying to figure out: is that person doing science? Is this science over here? But what science is, is the process.

And it's worth saying that you don't need it for realms that are not counterintuitive. You don't need to do science in order to figure out where the desk chair is before you sit down. It's apparent to you where the desk chair is, because you're built to perceive it directly. Now, every so often we all have the experience of looking at something and not being able to figure out what we're seeing: there's some optical illusion, the way we are sitting, where we are in relation to the object we're looking at.


Students should be challenging your thinking (01:51:45)

And then you will go through a scientific process: you know, if that is a so-and-so, that also suggests this, and I can see that that's not true, so what could it be? That process is scientific. But by and large, the direct perception of objects around you doesn't require this, because your system is built to understand them in a way that makes it intuitive. So we need science where things are sufficiently difficult to observe, or counterintuitive, that you need a process to correct for your expectations.

What drives all this for me, and what gets missed even though it's sitting in plain sight, is that to make progress, you must hunger to know where you are wrong. And, again, I come at everything from a business lens: in business, if you can derive tremendous pleasure, and quite frankly self-esteem, from your willingness to seek out the imperfections in your thinking, you'll actually make it. If you don't, and it's an ego-protective game for you, and your ego is built around being right, then you're going under. And to your point about exposing yourself to undergrads, some of the most phenomenal, incisive questions challenging my leadership have come from interns who have never had a job before. They're like, "Oh, why are we doing XYZ?" And you're like, why are we doing that? And if in that moment you're like, "I must present myself and have a reason for why we are doing that," you actually talk yourself into something. This is something I definitely want to talk about: there's a weirdness that we're living through now where people feel like if they can convince you of something through language, it actually somehow affects the underlying truth. But the market, much like evolution or reality, does not care. You can convince your team that you're right, but if the market doesn't embrace it, you're going to fail. And there's something wonderful about that.

Well, I want to push back slightly. Admittedly, this is not an area of expertise, but it seems to me that business needs to be divided into two things in order to really understand what you're getting at. The business where the market is actually in a position to test your understanding of what is true and what will work and what people want and things like that, that's one thing. That's real business.


Psychology is a field struggling with flawed studies (01:54:08)

And then there's a kind of rent-seeking, in which it may be about a company that does not have a functional product selling the idea that it will have a product that no one else will have, and its stock price rises as a matter of speculation. That may well be a realm in which it is deception. In fact, this is beyond the scope of the book, but wherever perception is the mediator of success, you have deception as an important evolutionary force. Where physics dictates whether you've succeeded or failed, you don't have that problem; you can't fool physics. So I don't know what the two words for the two kinds of business are, but the rent-seeking part of business and the actual production of superior goods, or the same goods at a cheaper price, those are different kinds of business structure.

Well, here's what's interesting; I really fastened onto that point. I think that they do fall under the same category. So when I say that the market decides: if your pitch is, "Hey boys and girls, we have to deceive the market and we have to game it, and here's how we game it," then everything is a function of your goal. If your goal is to deceive and to create a pump in your stock price, there is a way to do that that will work and there is a way to do that that won't work. And now, getting into honorable goals versus dishonorable goals, that is really fascinating.


Rent Seeking (01:55:36)

But I think that they do fall into the same category: either the thing you do moves you towards your goals or it does not.

Yeah. I mean, I still think there's room for a division, because the mythology of the market is that it pays for value, and rent-seeking violates that. Rent-seeking effectively is a failure of the market. And so I don't know where the definitional split needs to be. You're right that whether you are assessing what you believe the psychology of the market to be, or assessing what might be physically possible in terms of a product, those are both real systems that you are either correct about or not. But there does seem to me to be a distinction between rent-seeking and the production of actual value.

And there's a perfect analogy to be made to academic science, of course. In academia, if you are a scientist, you are supposed to be seeking an understanding of reality, but the way that modern science is done involves a lot of requesting of grants, mostly from the federal government. And just as I imagine in business, although definitely not my area of expertise, the bigger you are, the harder it is to change course. In academia, in part, that means the later in your career you are, the harder it is to change course, and therefore the harder it's going to be to do something like embrace that you were wrong. And, you know, actual honorable, good scientists will always fess up and talk publicly about when they were wrong. But if your entire lab is contingent on a model of the universe that is turning out to look ever less likely, it's going to be much more difficult for you to do that, to embrace that wrongness, when what's at stake might be the livelihoods of not just you but many of the people who are working under you.

How would you handle it?

Well, you have to restructure things so that what actually matters is being right in the long term. What we have is an epidemic of corruption inside of science, which was more or less spotted first with respect to psychology. Psychology is difficult to do because you're inherently looking into the mind, and you don't have a direct ability to measure most of what's there. But the p-hacking crisis, basically the abuse of statistics to create the impression of discovery, which then resulted in the inability to reproduce a large fraction of the results in psychology, is actually the tip of a much larger iceberg: science as a process is excellent, but science as a social environment is defective, and especially defective where we have plugged it very directly into market incentives. We've put scientists at an unnatural level of competition for a tiny number of jobs. We produce huge numbers of applicants, which means that the incentive to cheat is tremendous, and those who stick to the rules probably don't succeed very well. So basically what we have is a race to discover who is best at appearing scientific and delivering those things that the field wants to believe, rather than those things that the field needs to know. So the short answer to your question, which isn't especially operationalizable, is that you need to put a firewall between market forces and the scientific endeavor, because although science is an incredibly powerful process, it is also a fragile process that needs insulation from market forces or it cannot work.
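To make the p-hacking mechanism concrete, here is a minimal simulation, my own illustration rather than anything from the book: every dataset below is pure noise with zero real effect, yet a researcher who measures 20 outcomes per study and reports whichever one clears p < 0.05 will "discover" something in roughly 1 - 0.95^20, about 64%, of studies.

```python
# A minimal simulation (illustrative, not from the book) of the p-hacking
# failure mode: there is NO real effect in any dataset below, yet testing 20
# outcome measures and reporting whichever clears p < 0.05 yields a false
# "discovery" in roughly 1 - 0.95**20 of studies, about 64%.
import math
import random

def p_value_two_sided(sample):
    """Z-test of 'mean differs from zero' for n draws with known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) / (1 / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability

random.seed(0)
studies, false_discoveries = 2000, 0
for _ in range(studies):
    # 20 independent null outcome measures: the true effect is exactly zero.
    p_values = [
        p_value_two_sided([random.gauss(0, 1) for _ in range(30)])
        for _ in range(20)
    ]
    if min(p_values) < 0.05:  # report only the "significant" measure
        false_discoveries += 1

print(false_discoveries / studies)  # roughly 0.64, despite zero real effects
```

That is the sense in which the statistics get abused: each individual test is valid, but selectively reporting the minimum p-value turns a 5% error rate into a majority of studies, which is exactly the kind of result that later fails to replicate.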
So I would say, just in brief, again not particularly operationalizable: reward public error correction. No matter at what stage you are and what the nature of the error was, unless there was intentional fraud, which of course does exist, public error correction should be rewarded, without shaming, without loss of priority in other things or of the ability to do science. Because not only do we need people to be able to see that they've made mistakes and actually course-correct, but we need people to be taking enough risks early on that they are likely to sometimes make errors. And in the current environment, where any error can be considered the death knell for a career, we have ever more timid scientists, and that is making us less good at science as a society.

And in fact, it almost seems implausible that people would go around acknowledging their errors, but it wasn't so long ago that this was fairly common. I used to study bats, and there's a famous example of this from not so long ago. A guy named Pettigrew had advanced a radical hypothesis that suggested that the Old World fruit bats, the so-called flying foxes, were in fact not part of the same evolutionary history as the bats that we see here in the New World, for example the microbats. He argued that they were in fact flying primates, which was a fascinating argument. It was based on their neurobiology looking more like monkey neurobiology than like bat neurobiology, which turned out to be the result of the fact that they use their eyes rather than echolocation. So it was wrong. And what he said, at the point that it was revealed by the genes that he had been wrong, was: if it is a wrong hypothesis, it has been a most fruitful wrong hypothesis. Which was absolutely right; the work that was done to sort that out was tremendously valuable.


Evolutionary Toolkit (02:01:25)

And so, anyway, nobody who has had to course-correct and admit an error finds it pleasant. But we have to restore the rules of the game where the longer you wait, the worse it is, so that the incentive is, as soon as you know you're wrong, to own up to it, so that you are on the right side of the puzzle as quickly as possible. That has to be the objective.

As you guys look at society and where we're at now... so one problem you've just very eloquently laid out: the incentives around admitting that you're wrong, which could be the death knell of your career. What else is going on that makes you guys write that quote that we started the episode with? You didn't use the word disintegrating, but, to put my own words to it, there's a crazy-making that's happening at the societal level. What has led to that? What are three or four factors that are causing that breakdown?

Well, in part, the bias that we have as evolutionary biologists is that we see a failure to understand what we are as producing short-term, reductionist, metric-heavy, pseudo-quantitative answers to questions that warrant a much more holistic and emergent approach. What are some of the things that modern humans have embraced, or have been told to embrace, and some of us have and some of us haven't, that have helped produce problems for modern people? This is not new with us, but: the ubiquity of screens; the change in parenting styles to protect children from risk and unstructured play; and the legal drugging of children with anti-anxiety and anti-depression meds, more likely if they're girls than if they're boys. Those three things in combination, all of which were sort of on the rise in the 90s and hit fever pitch in the aughts and early teens, helped produce a generation that became embodied adults, but with minds that had not had a chance yet to actually learn what it is to be human.

And some of that is reversible. Really, just by chance, we were college professors, and we were college professors for basically the entire period of time during which millennials were in college. So we taught millennials from beginning to end, and almost to a person our students were amazing and receptive and creative and capable. And when we talk about the generation of millennials, it's those people who were drugged and screened and helicopter- and snowplow-parented, right? So with individual attention, people can be pulled out of the tailspin. But at a societal level, that's exactly what we're in: a tailspin.

What is the tailspin exactly, though? What is it about those things, what do they create in people?

I want to address that as part of a slight reorientation of the question.


Symptoms, Virtual Risks, And Learning From Mistakes

Evolution's Toolkit (02:04:43)

So one of the things that is causing the dysfunction is, you know, not just the fact of the screens, but what they imply: that virtually everything that people know is coming through a social channel. So it is all open to manipulation, augmentation, distortion. And what people generally do not pick up in the normal course of an education, even what we consider to be a high-quality education, is interaction with systems that allow you to check whether or not that which sounds right actually comports with logic. So, for example, if you interact with an engine, you can't fool an engine into starting. You either figure out why it isn't starting or you don't. And so we advocated for students that they dedicate some large fraction of their education to systems that are not socially mediated, in which success or failure is dictated by a physical system that tells you whether or not you've understood or failed to understand. And this can be mechanics or carpentry, but it also can be baking, frankly, or learning to play the guitar, or parkour, anything where success or failure is non-arbitrary. What you don't want is an education built entirely of "I succeeded when the person at the front of the room told me I got it," because if the person at the front of the room is a dope, which unfortunately happens too often, you may pick up wrong ideas and feel rewarded for believing in them. And that can result in tremendous confusion.

I would just finally say that the book really is about what we have informally called an evolutionary toolkit. And the beauty of that toolkit, what we saw and what students reported to us in picking it up, is that it allows, with a very small set of assumptions, the understanding of a large fraction of the phenomena that we care about. Almost everything we care about as humans is evolutionarily impacted. And you gain the ability to go through what you are told about your psychology or your teeth or anything like that and say: does that make sense, given the highest-quality Darwinism that we've got? Does it make sense to be told that our genomes suddenly went haywire, and that's why an ever-increasing fraction of young people need orthodontia? Nope, not for a moment. Does it make sense that we have a piece of our intestine called the appendix that is no longer of any value, and yet a huge number of people have this thing become inflamed and burst so that their lives are placed in jeopardy? Nope, it does not. The ability to check what you're being told against a toolkit for logic that is so robust that you can instantly spot nonsense is a very powerful enhancement. And it does not involve knowing more. It involves knowing less, and having that little bit that you know be really robust.

That's terrific.

I would just say it doesn't necessarily involve knowing less, but being certain of less. It requires that you rest what you know on less. The foundation is more robust and less elaborate.

I was just about to ask what it means to know less, so thank you for that. Yeah, that is very interesting. When I think about it, I forget the exact quote, but: as the island of your knowledge grows, the shore of your ignorance grows too, whatever the famous quote is. It's a really interesting dichotomy. So, all right, we've got this generation that's growing up looking at screens.
You guys make a pretty interesting assertion in the book about what screens do, in terms of getting emotional cues from a non-human entity, and that it may play a part in the rise in autism. I found that incredibly interesting. What I want to better understand is what's going on in our brains. Take helicopter parenting or snowplow parenting, for instance: why does that trap us in a perpetual childhood? You guys talk about rites of passage in the book. I'd be very curious to hear how we begin to deal with some of these things, whether it's screens, whether it's snowplow parenting. You know, if I find myself a 19-year-old and I realize I've been done dirty: I've been on drugs for ages, I was raised essentially by a screen, I'm having trouble connecting, having trouble relating, and my parents have taken care of everything for me. What are the symptoms I need to look out for, and then how do I push forward?

Well, in terms of symptoms, this is more or less a self-diagnosing problem. None of us feel perfectly at home in modernity, because in fact we are not at home.


What are the symptoms? (02:09:30)

We can't be. You know, the world that we live in is not the world of our grandparents. It's not even the world that we were born into; we live as adults in a world that literally didn't exist when we were born. And it's not even the world that our children were born into, unless they were literally born yesterday.

Right, exactly. It's changing so fast it can't be.

But that said, either you are feeling constantly confused about what you're seeing and hearing and you don't know what to think, or you've found something that allows you to move forward, and even if you can't fully manage what it is you're confronting, it should surprise you less and less. And so we provide a couple of tools in the book: we talk about the precautionary principle, and we talk about Chesterton's fence, which are really two sides of the same coin. If your life has been built around the idea that whatever the newest thing is, the latest wisdom, is what you were brought up on, then in all likelihood you are taking various drugs to correct for various things which may very well be the symptoms of the last drugs you took. You may be engaging in all kinds of behaviors to fix mysterious problems; maybe you can't sleep, and so you're taking some aggressive mechanism to deal with that. The basic point is: back away from that which is novel and untested, and move in the direction of that which is time-tested, and it will result in a decrease in anxiety and an increase in your control over your own life. And the way you'll tell is that you will feel less confused more of the time.

Can you guys define Chesterton's fence? I thought that was a really great part of the book.

Yeah, so G.K. Chesterton was a 20th-century political philosopher; maybe, I'm not sure exactly how he would have defined himself, I think he was a conservative. But one of the many contributions that he made was imagining two people on a walk together, coming across a fence that appeared to be in their way. Person A says, let's get rid of the fence. And Person B says, well, what's it here for? Person A says, I don't care, it doesn't matter, I just want it gone. And Person B, Chesterton I suppose in my telling here, says: there's no way that I should let you get rid of the fence until and unless you can tell me what its function is. If you can tell me what its function is, or what it was originally here for, then maybe we can talk about whether or not it's time for it to go. But until you can explain to me what the function is or was, there's no way that I should allow you to get rid of it simply because you see it as an inconvenience. So, you know, the appendix that Brett already mentioned is a perfect example of this. And we talk in the book about things like Chesterton's breast milk: we are abandoning breastfeeding, and to the degree that we're doing so, it is at our peril.


The risk of the Virtual Cocoon (02:12:50)

Chesterton's play: not letting children have long periods of unstructured play in which adults are not monitoring them and are not telling them not to bully each other. Bullying is bad, yes, but allowing children to figure out for themselves, in mixed-age groups, how to navigate risk themselves is how those children will grow into competent young people. And if you do arrive at 19 having been drugged into submission and having had your parents clear all of the hazards out of the way for you, the thing you can do is start exposing yourself to risk. And risk is risky. This is both a tautology and also shocking to people, because: wait, you're telling me I need to expose my children to risk? Well, if you want to guarantee that your child will make it to their 18th birthday alive, then sure, put them in a cocoon. That's the way to make sure that their body will get to 18: reduce all risk from their lives and protect them from everything. But will they have the mind of an 18-year-old at that point? No, they will not. So you trade a little bit of security that your child will survive. And every time I say anything like that I get chills. We have children, they're teenagers now, and the idea that one of them would die, and that they would die taking a risk that we had implicitly or explicitly encouraged, I don't know how you go on. Parents do, but I don't know how you go on. But the bigger risk is that they get to 18 and they're incompetent, and they can't think, and they don't know how to navigate the world, especially now, when the world in the future will look nothing like it did in the past. They need to be able to problem-solve, and the way to get that is to be exposed to as many situations as possible in which they're navigating on their own, as early as possible. Selection has really given parents the job of both managing risk and not fully managing risk. In other words, it's not that you don't protect your children, but you want to protect them at a level where they do make mistakes, and those mistakes do come back to haunt them, and that causes them to become wise adults who are capable of managing risk when the stakes are much higher. And that's really the question. It's not, do you want your child to be safe; of course you do. But you want them to be safe across their entire life, and if you protect them too much when they are young, they will not be able to do it when they are older and the risks are frankly much larger. Yeah, one of the things that I find most intoxicating about you guys, your book, your podcast, is nuance, complexity: recognizing that by being reductionist, by boiling things down, to paraphrase the quote, as simple as possible but no simpler, there is a point at which you can reduce something so far that you lose what's really going on. But finding our way through all of this complexity is incredibly difficult.


Idea of nuance and bind (02:15:31)

So as it comes to your own parenting style, how have you guys employed this? The idea of yours that I'm most interested in is the idea that the magic happens in the friction, whether it's male and female, whether it's right and left, whether it's safety and risk; it's understanding that either side on its own is problematic. How have you guys navigated that complexity? Well, we gambled. Neither of us knew particularly much about rearing children at the point where we ended up with them, and we more or less gambled on an idea. It wasn't much of a surprise to me at the point that we ended up with them. No, no, certainly we knew they were coming for the many months. But from the point of view of what one does to raise children, well, we hadn't had a lot of experience with young kids; they just hadn't been in our lives. And we gambled on an idea that I still think it's not entirely obvious why it works at all: you treat your children, at least cognitively, by shooting way over their heads. You talk to them like adults from very early on. They cannot respond in kind, but they get much more than you would think based on what they can say in response. And so we have been extremely open with our children about the hazards in the world that they face, and the hazards in our family have frankly been greater than most children would be confronted with, at least in the WEIRD world. We have been honest with them. We have an explicit rule in our household, and the children could recite it without thought: you are allowed to break your arm or your leg; you are not allowed to damage your eyes; you are not allowed to damage your skull; you are not allowed to damage your neck or your back. Now, when you say that to a kid, they realize: actually, it's not that I'm being told no, no, no; I'm being told I am actually allowed to break my arm, and nobody is going to panic. Yes, we'll take care of you no matter what, and if you damage your eyes we'll take care of you then too, but the basic point is that there's just a fundamental distinction between damaging things that repair pretty well and damaging things that don't, and that distinction ought to exist in your mind every time you leave the house. It's not that you want to avoid bad things and go toward good things; it's that there's a whole spectrum of bad, and you may need, in an instant, to navigate it. If you're driving down the highway, yes, the first job is don't crash. Don't crash is a good rule, but you can't always not crash, and sometimes you've got a choice about what you crash into or how you crash, and if you've just got everything filed as a binary, then you're in much more danger. So, being clear with kids about the subtleties and the nuance, and frankly about the bind that you're in: our children know that we have made a conscious decision that, in order that they can manage risk as adults, they have to face risks as children that could potentially cost them their lives. We took our kids into the Amazon, for example. That's not a safe place to be, but they're also the kind of kids who can handle it now. So one of the things that was very important to us was that our children literally learned how to fall: when they were climbing up on things, on trees or in jungle gyms, they would launch themselves intentionally so that they would learn how to fall safely. But metaphorically, learning how to fall is the other thing that you learn once you are engaged in literally learning how to fall. And maybe that is the kind of risk that we are in fact trying to prepare our children for, and that we are arguing parents everywhere should be preparing their children for: how to fall safely, so that you get up and can live to, maybe not fall again, but if you do fall again, live to get up again another day. Yeah, actually it occurs to me right now: engineers know this backwards and forwards. Fail safe. That's what you want, a system that fails safely, and building that into your kids is an essential skill. So one of the things you talked about in the book that made me go whoa was when your son broke his arm. I think he was older; I know when one of them broke it going down the stairs, that one required immediate medical attention, but there was a time when he broke his arm and it was, I think, a couple of days before you actually went and had it looked at. And there's an actual principle behind strengthening the bone that you guys go through, and I was very impressed. Talk about that. And this connects to something else: one of the reasons that I didn't become a father was that it seemed so self-evident to me that you had to do things like let your kids take risks, you know, within confines,


Mistakes as learning tools for children (02:20:55)

and that you had to make things hard for them within confines, and I wasn't sure that I would enjoy that process. So it was obvious I would have to do it and not obvious that I would enjoy it. And when you guys talked about how we tend to over-coddle, I could immediately empathize. I get why people do it, why you want to wrap a broken arm in the thickest cast you can possibly find, but even that isn't always the right answer. Yeah, no, it really isn't. Our brains are antifragile and our bones are antifragile; they become stronger with stressors. And society seems to be imagining that what we all are is fragile, and by imagining that we're fragile, by creating conditions that assume that we're fragile, that becomes the reality: we become more and more fragile and less antifragile. So in the case of our older son, or rather our younger son, but when he was older, who broke his arm on the last day of camp: we did get him to an emergency room that day. It took several hours, but it was that day. And they told us that when we got back home to Portland, which was a several-hour drive and was going to be many days off, we should go see an orthopedist and have a cast put on. So he spent several days splinted, and with some pain medication, before we ended up going home to Portland, where we did not get a cast. But the important thing is, this is not an experiment we ran on our child before I had tried it on myself. Evolutionarily speaking, there is a logic to what one does with broken bones, and it differs across creatures. Lots of creatures don't heal so well. Horses famously don't heal very well, and the reason for this is fairly obvious: a wild, ancestral horse that had a broken limb wasn't going to recover. That is to say, once an animal was hobbled by a broken limb, it was going to be picked off by a predator, so the selection that creates the capacity to repair wasn't there. On the other hand, sloths, which fall out of trees fairly regularly but don't depend on their ability to get away from predators through speed, actually survive falls very frequently, and when we look at sloth carcasses, they very regularly have breaks that have healed. So creatures that can heal have that capacity, and humans are such creatures; we are such creatures. So one wants to be very careful: if a bone is misaligned, you then want to utilize medicine in order to get the healing process to work correctly, so that it doesn't heal in a misaligned way. But if you've got a fracture and you haven't misaligned anything, there's a whole other logic that takes over. Immobilizing the arm isn't what we're built for. In fact, what you're built for is to have pain and inflammation do the job. When I broke my arm, I just said: you know what, I've wondered my whole life why it is that we rush to get a doctor to immobilize this, and then we atrophy, and when the cast is removed we have to rebuild our strength. Maybe that's not how it's supposed to be. Logically, evolution has prepared us for this; let's see what happens. So when I broke my arm, and I was certain that it was not misaligned, I let it go. What I found out was, first, that one has to be very careful for the first day or two, until you learn what it is that you're capable of doing, but your capacity begins to return very, very quickly. And the degree to which I was better off the time that I fractured my arm and did nothing medical, compared with the time that I fractured my arm and did the standard medical thing and had the cast, was night and day.


Adult passages, life transitions, wisdom, advice (02:24:18)

And the fact is, we talked to Totovi, our younger son, when he broke his arm, and we told him what we were thinking. He had watched me go through the experiment, and he elected to go through it himself, and lo and behold, the same logic applied in his case. Yeah, that's really interesting, and it feels like there's a lot we can extrapolate from that into our real lives. One idea that I find really enticing, and I'm sad that I didn't go through it when I was a kid: rites of passage. Do you guys think about that? All the time. We have dispensed with them; this is a classic Chesterton's fence issue. It used to be that there were these hallmarks of having passed through a certain developmental state, and at some point I think people started to feel that these things were primitive, and they dispensed with all of them, much to our peril. Because what you are is a creature that starts out utterly helpless and ends up incredibly capable, but there are moments at which you take on new responsibility. Now, it's arbitrary. Is an 18-year-old really an adult? In many ways yes, in some ways no. It's not really a moment at which you become an adult, but you do need a moment at which we say: actually, at this point, these responsibilities are ones we believe you can handle, and going forward, that's what they are. And the ceremony itself instantiates it; the ceremony helps make it real. Maybe it's at 18, maybe it's at 15, maybe it's at 13, depending on the tradition; maybe it's counted in a different way in cases where adherence to the calendar is not the thing. But the moment of "now you are a man, now you are a woman" has got to be an empowering one, and it's one of the things that is almost universally lost for us WEIRD people. And it may well be the case, just as with something like follow-through in sport: what you do after you hit the ball actually does not matter, but it is very important that you intend to follow through. In that same way, going through your life knowing that at this moment I'm going to be expected to do this thing, whether it's a vision quest or whatever it may be, knowing that that moment is coming and that on the far side of it you will be a different person, is a developmental process in and of itself. So it's very likely the anticipation of the rite of passage that is really the important developmental thing, but we've just dispensed with them all. So, we've talked a lot about evolution and all the different ways that it manifests in our lives, and now I want to bring people back to where we started in the book: that, as if on a nuclear clock ticking toward midnight, you guys would say that from just a societal standpoint we're edging up somewhere in there. Talk to us about the fourth frontier. But to understand the fourth frontier, I think we have to understand the first three frontiers, so if you can walk us through those. It was a really interesting idea; it was the part of your book that I had to read twice, because I thought, whoa, there's really something fascinating here, and it hints at a very complex answer to a very complex problem. It was entirely novel for me; I've never heard this idea explored before, and I think it'll be really helpful for people to see that you've thought not just through the problem but through potential solutions. Well, the first thing to realize is that all evolved creatures are effectively in a search for opportunity. And here is what opportunity looks like for an average creature under average circumstances: if it's a sexually reproducing creature, the average number of its offspring that reach reproductive maturity themselves will be two. It doesn't matter whether it produces a hundred babies or three; the average that will reach maturity is two, and the reason is that the population isn't growing or contracting, so two parents will end up replacing themselves and no better, at least on average. When you have succeeded evolutionarily, you have found some opportunity that allows that rule to be broken. A creature that passes over a mountain pass and ends up in a valley in which it has no competitors may leave a hundred times as many offspring as it would have if it had remained in its initial habitat.
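The replacement arithmetic here can be written out; the following is a minimal sketch, not notation from the book, treating the population as composed of breeding pairs. With \(N_t\) the population size in generation \(t\) and \(\bar{k}\) the mean number of offspring per pair that survive to reproduce:

    \[ N_{t+1} = \frac{N_t}{2}\,\bar{k} \quad\Longrightarrow\quad \bar{k} = 2\,\frac{N_{t+1}}{N_t} \]

When the population is neither growing nor shrinking, \(N_{t+1} = N_t\) and \(\bar{k}\) comes out to exactly two, no matter how many babies are born. A frontier, in these terms, is a stretch where \(N_{t+1}/N_t > 1\) and the average can exceed two.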


Society Evolution And Interaction

First, second, and now a fourth frontier (02:28:54)

And so these places where creatures discover an unexploited or underexploited opportunity and their population can grow are frontiers, and the feeling of growth is the feeling of evolutionary success. The problem is that all of these things are limited. No matter what opportunity you've found, the population will grow until that opportunity is no longer underexploited, at which point the zero-sum dynamics will be restored. But let's just lay out the first three types of frontier before you expand on what the fourth frontier is. The first type of frontier is the one that most people think of when they hear the word frontier: a geographic frontier. We begin the book by talking about the Beringians, the first Americans, who came through Beringia, across what is now the Bering Strait, from Asia into the New World, something between 10 and 25 thousand years ago. They were coming into two continents that had never before been inhabited by humans, and that was a vast geographic frontier. The second type of frontier might be called a technological frontier, in which you innovate something that allows you to make use of a resource that you heretofore had not had access to. For instance, the terracing of hillsides allows water to be held and agriculture to be done where previously all the water would have run off, taking the nutrients and the water with it. That would be an example of a technological frontier. And then the third type of frontier, which is ubiquitous throughout human history, is a transfer-of-resource frontier, and this is really not a frontier at all; it's just theft. The Beringians coming into the New World for the first time, again 10 to 25 thousand years ago, were experiencing a geographic frontier. Thousands of years later, when Europeans came to the New World from the other direction, from the east, they landed in a space that already had tens of millions of people in it and basically took over, and that was a transfer-of-resource moment, a transfer-of-resource frontier: basically theft. So geographic frontiers and technological frontiers are not inherently theft; transfer of resource is. And so we are proposing a fourth frontier. I would just add that transfer of resource is the explanation for almost all wars and genocide: from the point of view of some population, the resources of some other population that cannot be defended are as if a frontier. But the overarching idea is that all creatures are seeking these non-zero-sum opportunities, that these are experienced as growth, and that they are inherently self-destabilizing: they cause the growth of populations, which then restores the zero-sum dynamics, restores the austerity, which doesn't feel so good, and the population is then in search of the next non-zero-sum growth frontier. The problem is, we can't keep doing that. That process made us what we are, and we've been tremendously successful at it, but there are no more geographic frontiers on Earth; we've found it all. Technologically, we've done an excellent job of figuring out how to exploit the world, in fact over-exploit the world. And transfer of resource is world-destabilizing: not only is it a despicable process, but it is a lethal process from the point of view of the danger it puts us in. We simply have weaponry that is too powerful, we are too interconnected, and so, in a sense, our fates are all now linked, and we have to agree to put that competition aside. And then the question is, well, what do we do? Do we accept the zero-sum dynamics and live with austerity? That doesn't
sound like a very good sales pitch, even if it were what we had to do. So what we propose in the book is that there's actually an alternative: one can produce a steady state that feels like growth to the people who experience it, without having to discover new resource. That may sound preposterous; it may sound utopian. We are not utopians; we regard utopia as the worst idea human beings ever had, or at least very close to the top of that list. But there's nothing undoable about a system that feels like perpetual growth, in the same way that there is nothing utopian about the idea that it's always springtime inside your house. It's always pleasant inside your house. That's not a violation of any physical law; it's just a simple matter of the fact that we can use energy to modulate the temperature with a negative-feedback system, and we can keep your house very pleasant all the time. And the point is: can that be done in our larger environment, such that human beings are liberated to do the things that we are uniquely positioned to do? To generate beauty, to experience love, to feel compassion, to enhance our understanding of the world: all of those are the kinds of things that are worthy of us as an objective, and what we need, in order for more people to spend their time pursuing them, is a system in which we are freed from competing, one lineage against the others, for a limited amount of resources, so that we are not condemned to violence against each other in order to pursue these things. So in essence, the fourth frontier is a steady state designed to liberate people. We should say it is not something we believe we can blueprint from here. We know enough to navigate our way in that direction, but we cannot blueprint it; it is something we will have to prototype and navigate toward. But the good news is that, although we here probably would not live to see the final product, things would start getting better immediately upon our recognition that pursuing the fourth frontier was the right thing to do, because suddenly there would be a tremendous amount of useful work to be done in discovering what the various mechanisms of that new way of being are. All right, you guys are going to have to give me a little more than that. In the book, you give an example, and it was the thing that really allowed me to begin to understand how we could achieve a steady state that gave us those things. I don't know if you remember the example that you gave in the book. I do. So if you don't, let me know and I will refresh your memory. But you're talking about the Maya, is that right? In the book you specifically talk about craftsmanship.
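The "always springtime inside your house" point above is, mechanically, a negative-feedback loop: measure the gap between where you are and where you want to be, and spend energy closing it. Here is a minimal sketch of that loop in Python, where the function name, the 20-degree set point, and the gain are all illustrative choices rather than anything from the book:

    # Negative feedback: correct in proportion to the error from a set point,
    # so the system settles into a steady state instead of drifting.
    def thermostat_step(temperature: float, set_point: float, gain: float = 0.5) -> float:
        error = set_point - temperature    # how far we are from "springtime"
        return temperature + gain * error  # a corrective nudge, not a jump

    temp = 5.0  # a cold morning
    for hour in range(10):
        temp = thermostat_step(temp, set_point=20.0)
        print(f"hour {hour}: {temp:.1f} C")

With a gain of 0.5, each step halves the remaining error, so the temperature converges on the set point and stays there: a steady state maintained by continuously spending energy, not by discovering new resource.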


example of craftsmanship (02:35:45)

But if you've got something for me on the Maya, I'll take it. Yeah, well, I mean, I think we do both. Let's see; well, maybe remind us of exactly what we say about craftsmanship. I remember that we talk about it, but I'm not sure exactly what the context is here. The idea was basically that we have this inherent desire for growth, but it isn't necessarily growth itself; it's, and now I'm using my own words, the neurochemical state of feeling a deep sense of satisfaction at having made something of import. That's probably the easiest way to think of it. And that gave me something to grasp onto, because I often get asked this question. I've had financial success in my life, and the irony of my life is that I'm constantly going around trying to convince people that money is not going to do for you what you think it will. It's very powerful, but it isn't what most people think. They think it will make them feel better about themselves, and money is just absolutely incapable of doing that. And so, when you realize that the only thing that matters is how you feel about yourself, you start playing a game of neurochemistry, and this idea of craftsmanship felt like that to me. Yeah. So: recognizing the long-term hormonal glow that you get from producing something of lasting value and beauty and meaning in the world, as opposed to only being exposed to short-term stuff. You know the difference between buying something at IKEA and putting it together with Allen wrenches on your floor, versus either making something yourself or coming to know a craftsman who really builds things with care and knowledge, with the intention that you will be able to pass this on to your children or your friends or whomever later on. This is a piece with lasting beauty, lasting function, built by someone who knew something about the wood or the metals or whatever the materials are. This is a way into finding the kinds of meaning that a fourth-frontier mentality can provide. Yeah, I think the distinction is one between the satisfaction of life coming from consuming, which is inherently empty, versus producing. And producing doesn't necessarily have to mean stuff; it can be meaning or insight or any one of a number of other things. But what we say about the Maya in the book, what we argue, is this. Theirs was an extremely long-lived civilization, thousands of years of remarkable success, and one of the things that they very conspicuously produced, in all of their city-states, was these incredible monuments, which are actually not what they appear to be. We have spent a lot of time in Mayan territory, and these things look like pyramids in the sense that the Egyptians produced them, but they are not. They are in fact growing structures.


Mayans produced something that stood in for population growth (02:39:03)

So these things got bigger and bigger over time, the longer a city-state existed in the same place. And then there's the hidden version of this, which is an incredible network of stone roads that exist between the city-states, called sacbeob. In any case, the point is that the Maya were producing things that stood in for population growth. They were taking some fraction of their productivity and dedicating it to these massive public-works projects. And the thing about a massive public-works project is that it brings a kind of reality and cohesion to the people involved. I mean, imagine yourself living in one of these amazing cities, where the public monuments made of stone that speak to the power and the durability of your people are part of this public space. These things allow the following process. If it's just a pyramid, it's a line item on a budget: you build the pyramid and it's done. But in fact what you do is you augment it. Well then, in good years you will have that to augment, and you will take some fraction of the productivity that might otherwise be turned into more people, which would then result in more austerity,


Public space amplifies people interacting (02:40:21)

and you can invest it in these public-works projects. And then, in a lean year, instead of having not enough to feed all of the mouths that have been created, you can simply not augment the public-works project. That is a natural damper for the kind of ebb and flow, the boom and bust, that we have suffered so mightily under our modern economic systems. So the production of meaning, the production of shared space that actually augments the ability of people to interact with each other: these things are models of what we should probably be seeking.


The production of meaning (02:41:00)

A society-based system that tamps down the fluctuations and that provides liberty to people: that's really the key thing. We want realized liberty for individuals, so that they can pursue what is meaningful rather than satisfying themselves with consumption. That's a rough outline of what a fourth frontier would look like. Yes, I love it. In the book, I can't remember if it was liberty that you were talking about specifically, but you talk about it as an emergent property; I assume you mean the same with liberty. How do we create the bed from which liberty will emerge? Well, what we argue in the book is that liberty is a special value, and the reason it is special is that there are really two ways to delineate it. You can be technically free but not really free. If you're concerned about being wiped out by a health-care crisis, or you're concerned that you may lose your job and have to find another in a different industry, you're not really free; even if technically you could go out and start an oil company, it's not going to happen. So what we argue is that real liberty, realized liberty, is liberty you can act on.


Discovery Of Individual Happiness

Giving people the freedom to discover What Makes Them Happy (02:42:14)

And in order for a person to be liberated, their more mundane concerns, their safety, their sustenance, all of those things have to be taken care of. Therefore we can know that we have succeeded when somebody has real liberty that they are capable of acting on; it's a proxy. And what we argue is that the objective ought to be to provide real liberty to as many people as possible; hopefully, ultimately, everyone would be liberated to do something truly remarkable, rather than only elites having that freedom. Yes. I would just say: as high a fraction of the population as possible. If we say as many people as possible, it might sound like we're also interested in maximizing population growth, and of course we're not. We think population will peak, hopefully at some point soon, and then it may start going down through attrition. But at every moment in human history going forward, the greatest possible number of people having maximum liberty will be a success. And let me just refine that slightly: the objective is the maximum number of liberated people, but not all living simultaneously. Ultimately, the way to grant a marvelous, liberated life to the maximum number of people is to get sustainable, at a level at which humans can live indefinitely on the planet, rather than having a clock ticking down to the point where we simply don't have the resources to continue doing what we're doing. With so much disruption in tech and finance, you have to get educated. Check out this episode with Raoul Pal to learn how to protect yourself financially. Do you think that AI presents a megathreat to our economy? It's very exciting technology.

