Scott Aaronson on Computational Complexity Theory and Quantum Computers | Transcription
Today we have Scott Aaronson, a CS professor at UT Austin, and also a blogger at Shtetl-Optimized. And at the top of your blog, you have something that says: if you take just one piece of information away from this blog, it's that quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once. Why not? >> Great question. So I've been researching quantum computing, working in this area for about 20 years. I've been blogging about it for 15 years, I guess. And the single most common misconception, which you find repeated in almost every popular article written about the subject, goes like this: a classical computer is made of bits.
Quantum Computation And Concepts
Quantum Computation vs. Science (00:58)
And so it can just try each possible solution one by one. But a quantum computer is made of qubits, which can be zero and one at the same time. And this means that if you have 100 qubits, the quantum computer can explore 2 to the 100th power states simultaneously, and then it can just try all the possible answers at once. Well, that is gesturing towards something in the vicinity of the truth. But it's also very seriously misleading. And it leads people to think that quantum computers would have capabilities that actually we don't think they would have. And this is not even controversial within this field. We all know this, but it's very hard to get the message out. I've been trying. So here's the situation. The central thing that quantum mechanics says about the world is that to each possible state of a physical system, each possible way that it could be when you measure it, you have to assign a number called an amplitude. And amplitudes are related to probabilities: a larger amplitude means you're more likely to see that outcome. But amplitudes are different from probabilities. Unlike a probability, an amplitude can be negative. In fact, it can even be a complex number. And so all the sort of magic of quantum mechanics, anything you've ever heard about the weirdness of the quantum world, the spookiness, it all boils down in the end to the way that these amplitudes work differently from probabilities. In particular, probabilities can only add up positively: the more ways that something could happen, that just keeps increasing the probability that it happens. But amplitudes can, as we say, interfere destructively and cancel each other out. So if a photon could reach a certain point, for example, in one way with a positive amplitude and in another way with a negative amplitude, those two things can cancel so that you never see the photon there at all, as in the famous double-slit experiment.
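As a toy illustration of these amplitude rules (mine, not from the conversation), here's a few lines of Python with NumPy showing how probabilities can only accumulate while amplitudes can cancel, and why measuring an equal superposition immediately just yields a uniformly random outcome:

```python
import numpy as np

# Classical probabilities: two paths to an outcome only ever add up.
p_path1, p_path2 = 0.5, 0.5
print(p_path1 + p_path2)                    # 1.0: more paths, more probability

# Amplitudes: the Born rule says Pr(outcome) = |amplitude|**2, and amplitudes
# can be negative (or complex), so two paths can cancel completely.
a_path1, a_path2 = 1 / np.sqrt(2), -1 / np.sqrt(2)
print(abs(a_path1 + a_path2) ** 2)          # 0.0: destructive interference

# An equal superposition over 2**n basis states, measured right away,
# gives each outcome with probability 1/2**n -- no better than coin flips.
n = 4
amps = np.full(2 ** n, 1 / np.sqrt(2 ** n))
probs = np.abs(amps) ** 2
print(np.allclose(probs, 1 / 2 ** n))       # True
```

The last check is the punchline of the section that follows: a bare superposition, measured immediately, buys you nothing over flipping coins.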
If you won't take this on my authority, you can take it on Richard Feynman's; he used to say that everything in quantum mechanics boils down to these minus signs. So a quantum computer is just a device that can maintain a state that is, as we say, a superposition, with some amplitude for every possible configuration of the bits. So indeed, if you had a computer with 100 quantum bits, qubits, as we call them, that's 2 to the 100th power amplitudes needed to describe that computer's state. And it's actually very easy to create, as we say, an equal superposition over all the possible states, which you could think of as every possible key for a cryptographic code, every possible solution to your optimization problem. So that's what the popular articles are trying to talk about. All of that is true. The problem is, well, for a computer to be useful, at some point you've got to look at the thing. At some point, you've got to measure and get an answer out. And the rules of quantum mechanics are very specific about how these amplitudes turn into ordinary probabilities that you see something. And the rule is: the probability of some outcome is just the squared absolute value of its amplitude. That's the rule. It sounds a little technical, but that's one of the most basic rules of the universe. If you saw it on paper, it would be. Yeah, that's right. But in particular, one thing that means is that if I just created an equal superposition over all the possible answers to some problem, and then I measured it, not having done anything else, then all I'm going to see will be a random answer. Now, if all I wanted was a random answer, I could have just flipped a coin a few times and saved billions of dollars building this device. So the entire hope of getting a speed advantage from a quantum computer is to exploit the way that amplitudes work differently.
It's to try to choreograph a pattern of interference, where for each wrong answer to your computational problem, some of the paths leading there have positive amplitudes and some have negative amplitudes, so they cancel each other out. Whereas for the right answer, you want all the contributions to reinforce each other. And the tricky part is that you've got to figure out how to do that despite not knowing in advance which answer is the right one. Right. In addition to the error correction. Right. That's right. Oh, yeah, all of that, too. Those are all the engineering difficulties. Right now I'm talking about even if you had a perfect quantum computer: it's still not obvious how to get a speed advantage from it, because you've got to choreograph this interference pattern. It's like nature gives you this really bizarre hammer, and it was not until the 1990s that people really started figuring out what nails you could hit with this hammer. Well, this is a question I had for you. Yeah. So quantum computers seem to be in this technology category rather than the science category. We did a podcast with Rana Adhikari from LIGO, and that is squarely put in the science category. How did quantum computers end up in this business use case category? That's a super interesting question, because indeed, often when magazines and newspapers are writing about quantum computing, they put a technology reporter on it. Yeah. And then they want to know about, well, how is this going to impact the finance industry in the next five years? And, you know, how about we first just see whether the universe has this capability at all? Let's prove that as a first step. It's important to understand, because we're not there yet. Yeah, that's right.
So I think that part of what makes it exciting is that this is fundamental science. Not in the sense that we're overthrowing any of the known laws of physics. In fact, all we're doing is, in some sense, taking completely seriously quantum mechanics as it was written down in 1926; it hasn't changed at all since that time. But what we're trying to do is to test it in an entirely new regime, where it has really never been tested: the regime of, let's say, universal quantum computation, or quantum error correction. The math seems very unequivocal that this can be done, but this is really building something that has never been built before. And there are skeptics. I talk to them a lot. They come on my blog often, and they say, "This will never work. It is so absurd that you could build this quantum computer that there has to be some new law of physics that will prevent it." But there seem to be equal numbers of, like, crackpot scientists who come on your blog and figure out, like, the P vs. NP problem. Yeah, I get every kind of original thinker on my blog. That's one of the joys and pitfalls of blogging, I guess. But in particular, there are people, including very serious and well-known computer scientists and physicists, who say, "Well, look, if not a new law of physics, it must be that we just don't understand quantum mechanics well enough, and the errors are going to inherently kill you. You will not be able to scale this. Maybe you can build a small quantum computer, but nature will prevent you from scaling it up."
Quantum errors (08:58)
And in the '90s, that actually seemed like a pretty plausible view, even to many of the experts in this field. What changed everything for most of us in the '90s was the discovery of quantum error correction and quantum fault tolerance. The upshot of which was: if you want to build a scalable quantum computer, you don't need to get perfect qubits that are perfectly isolated from their environment, which of course would be a physical absurdity. You merely need to get them ridiculously well isolated from their environment, way better than we can currently do; but in the minds of most experts, it reduced the task to merely a staggeringly hard engineering problem. Now, what I like to say is that if it turned out that there's some deep reason why this could never be done, and if the attempt to build quantum computers were to lead instead to the discovery of that new impossibility principle, well, then that's awesome. That's, like, Nobel Prizes for whoever discovers it. Compared to that, the idea that you can build a quantum computer is the more boring possibility. That's the more conservative option, right? But as I said, we're testing quantum mechanics in this new regime, and we want to know the truth, whatever it is. So I think that there is fundamental science here. And to me, that's really why I got into this. I like to say that for me, the number one application of a quantum computer is not breaking cryptographic codes, it's not optimization problems, it's not simulating quantum physics. It's just proving the skeptics wrong. What happened in your childhood? A lot happened, but we don't have to go into that, right?
But, no, it is sort of seeing whether nature actually has this computational ability lurking beneath the surface. Now, of course, what made a lot of the world interested in it is that it actually could have some applications. Maybe the most important application that we know about is just giving us a new way to simulate nature, to simulate physics and chemistry, and maybe discover new drugs, discover new materials. That's the application that Richard Feynman and the others had in mind when they first proposed the idea of quantum computing in the 1980s. But before we started recording, you were talking about what you've been working on for the past year, which is potentially relevant, because many of these drug-finding applications might need, say, a million qubits, right? We might already be able to start getting some of these with some hundreds of qubits. Okay. People are going to try. But we're not even at 50 yet, right? That's right. That's right. Not 50 that we have good enough control over, certainly. Right. So what was the application that you were working on? Oh, okay. So I have a new idea that I've been working on for the past four months or so, and there's been independent work by others pursuing related ideas. But this, as far as I can see, may be the first application of quantum computing that people could actually realize with near-term devices, with 50 or 60 or 70 qubits. And this application is to generate cryptographically secure random bits. So for example, if you have these proof-of-stake cryptocurrencies, you need a huge public source of random bits to run the lottery that decides who is allowed to add a new block to the blockchain. For all sorts of other cryptographic applications, you need public random bits.
Or, for example, to decide which precincts to audit in an election. Of course, for cryptography, you also need secret random bits. Now, for secret random bits, you would need to own the quantum computer yourself; you wouldn't want to download them over the internet. But there are many websites already that will give you public randomness. There's one called random.org where you can just get random bits to order. Yeah, it's... Allegedly random. Yeah, right. Oh, yes. Thank you, right? NIST runs something called the randomness beacon, where every minute they release 512 new random bits to the world. Oh, okay. Which are partly generated using quantum mechanical processes. So now, you could say, if you believe quantum mechanics, which you should, then it's very easy to get random bits. Randomness is famously baked into quantum mechanics. Yeah. You just keep measuring some photons: measure the polarization, and the outcomes will be random. Or just get some radioactive material from Kim Jong-un or whatever, put a Geiger counter next to it, and the decays will be random. But if you were to get these random bits from the internet, then the question is: how do you know how they were generated? How do you know that the hardware wasn't backdoored by someone? In fact, NIST did have a standard for pseudorandom bits, which we learned a few years ago, because of the Snowden documents, was backdoored, most likely by the NSA. So there is a big issue: how can you trust randomness? How can you prove to someone that bits were actually generated randomly? It seems almost like a philosophical impossibility, right? So how does yours work, then? Yeah, okay.
So there are ways to do this with quantum mechanics. Over the past 10 years, people discovered that one way to do it is using the Bell inequality: if you have two entangled particles that are far away, and you measure them, and you see that they have a certain type of correlation that could not have been produced classically. This is the famous Bell inequality. It's the thing that disproves Einstein's idea that there are secret local hidden variables controlling what's going on, and shows that quantum entanglement is a real thing in the universe. But for many decades, people said: well, this is conceptually a breakthrough that Bell made, but of course it's completely useless, because of course you don't actually need to create these correlations between faraway particles. What people realized a decade ago is that the fact that you've created those correlations also means that there has to be some randomness in the bits that are produced, because the only way to create those correlations without using true randomness would be using communication. But if we put the things far enough away that a signal could not have gotten from one to the other, even traveling at the speed of light, then we can have some kind of assurance that, yes, there is randomness there. Gotcha. Okay. But the one thing is, you've got to believe that these devices were really separated far enough. Which again, if it's over the internet, how would you know? So the new realization is that you can get guaranteed randomness with a single device, at least under some cryptographic assumptions.
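To make the Bell-inequality point concrete, here's a small Python sketch (my own illustration, not something computed in the interview). For a singlet pair, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between measurement angles a and b, and with the standard angle choices the CHSH combination reaches 2√2, beating the bound of 2 that any local-hidden-variable (secretly deterministic) model must obey:

```python
import numpy as np

def E(a, b):
    # Quantum prediction for the correlation of spin measurements at
    # angles a and b on the two halves of an entangled singlet pair.
    return -np.cos(a - b)

# Standard CHSH measurement settings (radians).
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

# Local hidden variables force |S| <= 2; quantum mechanics reaches 2*sqrt(2).
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))   # ~2.828, i.e. 2*sqrt(2) > 2
```

Observing a value above 2 in a real experiment is exactly the certificate that no deterministic classical story, and hence no randomness-free story, could have produced the bits.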
Quantum Proof (17:18)
Okay. As long as that single device is able to do quantum computations that are hard to simulate with a classical computer. So basically, imagine that Google, let's say, has some 70-qubit quantum computer, as indeed they are working to build right now. I just visited their lab a month ago; they're working on it. I don't know when it's going to be ready.
Quality filtering (17:40)
But then you, with your classical computer, could submit challenges to the quantum computer that basically say: just run this quantum circuit, a pretty random-looking, pretty messy, arbitrary quantum circuit, which will lead to some probability distribution over output strings. In this case, strings of 70 bits. And so: just give me a sample from this distribution. And I'll just keep sending it challenges of that kind, one after another, and each time demand that it send me back a sample from the distribution in a very short time, like, let's say, half a second. And then I take these samples, and if Google did the right thing, then these samples have lots of randomness in them. Right. But the more interesting part is that, under a cryptographic assumption, I can check that these samples were actually correlated with the distributions that they're supposed to come from. So in other words, like, one shows up 10% of the time. Well, yeah. So all the outcomes are pretty unlikely, because they're all on the order of 2 to the minus 70 probability of occurring, but not exactly. Some of them are, like, twice 2 to the minus 70; some are half 2 to the minus 70. And so I can check that the heavier outcomes are more likely to occur. I can do some statistical test to check whether Google is plausibly sampling from this distribution. And then what we can do is mathematically prove that, if you assume that some problem is hard for a quantum computer, a problem that looks like it should be hard, then it would follow that even with a quantum computer, the only way that Google could be quickly generating samples that pass this test is to generate them truly randomly.
There's no secretly deterministic way to do it without them spending a huge amount of computational power, more than we believe that they have. And so when you were testing this out, did you test it out with, like, tons of compute? It has not been tested with real quantum computers yet. The apparatus that I used is pen and paper, and I did use Maple a little bit to do some numerical optimizations. So yeah, not the sexiest thing. I'm a theoretical computer scientist. Yeah, okay. But Google is hoping to move forward and test this out and actually demonstrate it once they have the device. Of course, it could also be simulated with a classical computer; one could code something up. But one thing that's exciting about this is that it looks like pretty much as soon as you have a 60- or 70-qubit quantum computer that can sample distributions that are hard to sample with a classical computer, you can pretty much get this application. So it's sort of designed for near-term quantum computers. And in fact, even if you had many more qubits, we couldn't really make use of them for this application, because the verification, if I have n qubits, is going to take 2 to the n time with my classical computer. Which means that with a thousand qubits, it might be working fine, and yet we could never verify it. Man, what other projects did you clear from the cache on your sabbatical? Well, not many.
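The statistical test described above can be sketched with a purely classical stand-in. In this toy simulation (my own; the parameters and the exponential, Porter-Thomas-style outcome distribution are assumptions standing in for a real random circuit's output distribution), a device that honestly samples from the lumpy distribution scores about 2 on a heavy-output score, while a cheater guessing uniformly scores about 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                      # toy "circuit" on n bits (far below 70)
N = 2 ** n

# Stand-in for a random circuit's output distribution: outcome probabilities
# of a random quantum state roughly follow an exponential distribution, so
# some strings are "heavier" (around twice 1/N) and some lighter.
p = rng.exponential(scale=1.0 / N, size=N)
p /= p.sum()

def score(samples):
    # Average of N * p(sample): honest samplers land on heavy outcomes
    # more often, pushing this statistic above 1.
    return N * p[samples].mean()

honest = rng.choice(N, size=5000, p=p)    # samples from the true distribution
cheater = rng.integers(0, N, size=5000)   # uniform guessing, ignoring p

print(score(honest))    # close to 2 for exponential statistics
print(score(cheater))   # close to 1: no better than uniform
```

The verifier's cost shows up here too: computing `p` requires classically simulating the circuit, which is why the check takes 2^n time and stops being feasible around 60-70 qubits.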
Sabbatical ideas and theorems (21:47)
I was on sabbatical in Tel Aviv for a year. I came with a long list of old papers that needed to be written up, and I wrote up almost none of them. Instead I just started new projects and put the old ones even further onto the back burner, which is often the way it goes, unfortunately. But actually, I did write a paper this year about a new procedure for measuring quantum states, which is called shadow tomography. I had a more boring name for it, and then a physicist suggested this one. I appreciate it. Yeah.
Shadow tomography (22:30)
Yeah. Physicists are much better than computer scientists at naming things. So it's called shadow tomography. And what it is: so, measurement in quantum mechanics is an inherently destructive process. Famously, it collapses the wave function. It collapses your state, and you only get one chance. So the problem that shadow tomography is trying to solve is: let's say I have a small number of copies of a quantum state, but I want to know the behavior of that quantum state on a large number of measurements. Like, a much, much larger number of different measurements than the number of copies that I have, maybe even an exponentially greater number. Let's say, for simplicity, that each measurement has just two possible outcomes, yes or no. And I want to know, for each measurement, approximately what is the probability that it would return yes, applied to the state? Sure, if I had enough copies, I could just measure each copy with a different measurement. But I don't have enough copies. And again, if I had enough copies, then I could just fully learn my state: measure each copy, and eventually, by collecting enough statistics, I could write down in my classical computer a full description of the quantum state. But I don't have enough copies for that either. Okay. Where are these assumptions coming from, that you don't have enough copies? Oh, well, I'm just telling you that because this makes the question interesting. If we do have enough copies, then we do one of those simpler things. But I'm asking what happens if we don't. If we don't. Right. So, you know, maybe it's very expensive with your hardware to create new copies of the state.
So what shadow tomography is, it's a way to take these states and manipulate them very, very carefully so that you can keep reusing them over and over, and learning the... Without destroying them. Without destroying them, right. Damaging them only slightly each time, and learning the answers to each of these yes-or-no questions, of which, again, there could be exponentially more than there are copies of the state. How does the partial destruction work? Well, okay. So it has to do with the way that measurement works in quantum mechanics: if you measure your state in the wrong basis, then the state is destroyed. So for example, if I have an electron that's very spread out in position, and I ask it what its position is, then I now force it to make up its mind. It's localized now to one place, and that destroys all the other information that was in that superposition over positions. On the other hand, if I ask that electron for its momentum, well, its momentum might have been much more determinate. And if I ask a question where, given knowledge of the state, someone could have almost perfectly predicted the answer, then the state only needs to be damaged slightly by the measurement. Okay. And we know that not all measurements are destructive. For example, if I read a book and I see what words are written in it, my friend can read the book too, right? So that's a non-destructive measurement. But even in the quantum realm, if I'm careful to measure a state in a basis where it's already localized, then that's not going to damage it very much. So now the challenge, of course, is that I have these copies and I don't know in which basis they are localized.
But I can do something about that, and this measurement procedure that I designed takes a very long time to carry out. Okay.
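Here's a minimal numerical illustration of the right-basis versus wrong-basis point (my own sketch, not the actual shadow tomography procedure). Measuring in an orthonormal basis {|b_k⟩} collapses the state to |b_k⟩ with probability p_k = |⟨b_k|ψ⟩|², so the expected fidelity between the pre- and post-measurement states is Σ_k p_k²; a value near 1 means a gentle measurement:

```python
import numpy as np

def expected_fidelity(state, basis):
    # Columns of `basis` are the orthonormal measurement vectors |b_k>.
    # Outcome k occurs with p_k = |<b_k|state>|^2 and leaves the state as
    # |b_k>, whose overlap with the original is again p_k, so the expected
    # post-measurement fidelity is sum_k p_k**2.
    probs = np.abs(basis.conj().T @ state) ** 2
    return float(np.sum(probs ** 2))

# A qubit almost localized in the computational (Z) basis.
state = np.array([np.sqrt(0.99), np.sqrt(0.01)])

z_basis = np.eye(2)                                   # the "right" basis
x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # the "wrong" basis

print(expected_fidelity(state, z_basis))  # ~0.98: barely damaged
print(expected_fidelity(state, x_basis))  # ~0.52: substantially collapsed
```

The catch Aaronson describes is that you aren't told which basis the state is nearly localized in, which is what makes reusing the copies carefully so delicate.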
Measuring the quantum state gently (26:34)
So I'm not promising you that it's fast. But it makes very, very careful reuse of these same states over and over again. So this had various implications; it solved various theoretical questions that I cared about. And by the way, I had conjectured that this was not possible. This is the way that research often happens for me: I tried to rule it out, I was unable to rule it out, and then eventually I figured out why. So was it just, like, brute-force pen and paper? Or did you have a conversation that sparked it? What happened? Well, I had taught a mini-course in Barbados a few years ago where I raised this as a question: I don't see how to do this; maybe it's not possible. And then I just thought about it more. And of course I was building on earlier work that others had done. You never start completely from scratch; you kind of know the tools that were used to solve related problems. But anyway, then this year we carried it further. So I have joint work with Guy Rothblum from the Weizmann Institute. What happened was I gave a talk about this work, and he said: this idea of measuring a quantum state very gently and not damaging it, this sounds a lot like what I work on. Guy is a classical computer scientist who works in a field called differential privacy. Some people may have heard of this. This is actually used, I think; I don't know if Facebook uses it, but some websites use it. It's a way that you can do data mining on a database of a whole bunch of users' sensitive data: could be their medical records, could be all their personal data.
But you can do it in a way that mathematically guarantees, in some sense, that you're not going to be revealing too much information about any individual user. In the sense that if any individual user were to drop out of the database or change their data, then that would have only a very small probabilistic effect on the outputs of the algorithm. And the way that you achieve differential privacy is often via things like: I may ask how many of these users have, say, colon cancer, but then I'll add some random noise to the result. And having added the random noise, the data is still perfectly good for doing medical statistics. But now, even if I knew everyone else's data, I still can't determine whether a particular person has colon cancer or not. So he said: actually, you do the same kinds of things to get gentle measurements of quantum states. So Guy said there seems to be a connection here. And I said, come on. You can relate anything to anything else, right? That's probably just an analogy. But then we sat down, and in fact there is a connection. There's a precise mathematical connection between these two problems. You can prove it; it goes in both directions. And then we were actually able to use it to take work that's been done in differential privacy by people who don't know anything about quantum mechanics, just purely classical CS, and use it to get better procedures for shadow tomography. That's really cool. Yeah. Is that online? Oh, it's another thing on my stack to write up. Well, we will try to write it up this summer. All right.
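The count-plus-noise idea has a standard instantiation, the Laplace mechanism. Here's a minimal sketch (my own, with made-up records; `epsilon` and the predicate are illustrative assumptions): a counting query changes by at most 1 when one person's data changes, so adding Laplace noise of scale 1/epsilon gives epsilon-differential privacy:

```python
import numpy as np

rng = np.random.default_rng(1)

def private_count(records, predicate, epsilon=0.5):
    # Laplace mechanism: a counting query has sensitivity 1 (one person
    # joining or leaving shifts the count by at most 1), so Laplace noise
    # with scale 1/epsilon makes the answer epsilon-differentially private.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Hypothetical records: (age, has_condition) -- the true count below is 3.
db = [(34, True), (51, False), (29, True), (62, True), (45, False)]
noisy = private_count(db, lambda r: r[1])
print(noisy)  # a noisy answer; repeated queries would average to 3
```

The noise is small enough that aggregate statistics stay useful, yet large enough that no single record's presence can be inferred, which is the same "damage only slightly" trade-off as in gentle quantum measurement.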
The P versus NP problem (30:23)
Cool. Yeah. So moving to one of the more poorly named CS problems that we talked about over email: the P versus NP problem. Yeah. I heard you describe this, because it can sound really complicated, but I heard you describe it once as: for every efficiently checkable problem, is it efficiently solvable? Yeah. And I thought that was a good way to describe it. Well, that's just the standard way to say what this problem means. But why does it matter? I guess that's the question. Yeah. Okay. Well, I think it's a strong contender for the most important unsolved problem in math of this century. So NP stands for nondeterministic polynomial time. As I said, we're not as good at naming things. Terrible, yeah. That's right. But it's all the problems where, if I told you the answer, you could check it efficiently. What we mean by efficiently in computer science is having an algorithm that uses a number of steps that scales at most like the size of the problem raised to some fixed power. We call that a polynomial-time algorithm. That's kind of our rough-and-ready criterion; it doesn't always correspond in practice to efficient, but it's pretty... It's like ballpark. Yeah, exactly. It's ballpark. And if we can't even answer this, then we're not going to answer the more refined questions either. P, by contrast, is all the problems that are efficiently solvable: they actually have an algorithm that will find the answer in that many steps. So a good example of an NP problem, one not known to be in P, would be factoring: I give you an enormous number and ask, what are its prime factors? That problem happens to underlie much of modern cryptography. A good example of a problem in P: if I just give you a number and ask you whether it's prime or not, but not to find the factors, then that actually has a fast algorithm.
It was only proven to be in P 16 years ago, in a big breakthrough. So that's an illustration of how it can be very, very non-obvious to figure out which problems have these efficient algorithms and which don't. And in particular, I think as soon as a layperson understands the P versus NP question, most of them would say: well, of course there's not going to be an efficient way to solve every problem that's efficiently checkable. Why are you even asking that? I mean, like a jigsaw puzzle: obviously, it's a lot easier to look at a jigsaw puzzle that your friend did and say, oh yeah, good job, looks like you finished it, than to actually do it yourself. Same with a Sudoku puzzle, same with breaking a cryptographic code, same with solving some optimization problem, like optimizing an airline schedule, which may involve satisfying some enormous number of constraints, or as many of them as you can when they conflict with each other. Right, but it's not proven not to be possible. That's right. That's right. No one has ruled out that a fast such algorithm could exist. Essentially, it's very, very hard to prove a negative. Occasionally we can do it, but it tends to be much harder.
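The jigsaw-puzzle asymmetry shows up directly in code. A sketch (mine; trial division stands in for brute-force search): verifying a claimed factorization is one multiplication, while finding the factors takes time that grows exponentially in the number's length:

```python
def is_valid_factorization(n, p, q):
    # Checking is easy: one multiplication, however large n is.
    return p > 1 and q > 1 and p * q == n

def factor_by_trial_division(n):
    # Searching is (so far as we know) hard without a quantum computer:
    # trial division takes up to ~sqrt(n) steps, which is exponential
    # in the number of DIGITS of n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = 3511 * 3571  # a small semiprime; real cryptography uses hundreds of digits
print(is_valid_factorization(n, 3511, 3571))  # True, checked instantly
print(factor_by_trial_division(n))            # (3511, 3571), found by search
```

At cryptographic sizes the checker still runs instantly, but the searcher would outlast the universe, which is exactly the gap the P versus NP question asks about.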
When trying to prove a negative (33:42)
And you know, if something is possible, you just have to show how to do it, right? But to prove that something is not possible, you have to, in some sense, understand the space of all the possible ways that it could have been done, right, and give some general argument why none of them could work. Yeah. Right. So sometimes we can do that. It's not like, you know, we've made no progress, right? But we're a long, long way from being able to prove things like P not equal to NP, I think. You know, I like to say that if we were physicists, we would have just declared P not equal to NP to be a law of nature, right? We would have just been done with it, right? Yeah, just declared it. Yeah, that's right. Right. So, you know, like the second law of thermodynamics, you know, we could have given ourselves Nobel prizes for our discovery of the law. And later, if it turns out that, oh, actually, P equals NP, there is a fast way to solve all these problems, well, then we could just give ourselves more Nobel prizes for the law's overthrow, right? But, you know, one thing you learn in an interdisciplinary subject like quantum computing is, you know, that there are differences in terminology and culture between fields, right? What the physicists call a law, we call a conjecture. Right. Yeah. And it's increasingly hard to draw the lines between the two as well. Yeah. CS, math, physics. Oh, yeah. Well, no, I mean, I make fun of my brothers and sisters in physics all the time, you know, only because I love them, of course. But, you know, in fact, large parts of physics and CS have been coming together in the last decades. You know, partly statistical physics made this very, very deep connection between, like, spin glasses in condensed matter physics and combinatorial optimization problems, right? And you can understand what many algorithms are doing using physical analogies. And then of course, quantum computing, right?
Was this enormous intersection where suddenly these fields were just thrust together and they had to quickly learn each other's terminology and frames of reference, right? And you know, a good part of what I've been doing is just helping to translate. And, you know, I give colloquia in physics departments where, you know, they just want to know, like, well, what are P and NP, right? And what are the basics of, you know, undergraduate-level computer science, right? But what's cool is that, you know, I can talk to string theorists, let's say, right? And they know this, you know, staggering tower of knowledge that I don't know, right, that I'm only at the lowest foothills of, right? And yet suddenly they too need to know about computer science, right? And so, you know, they have to respect you. Well, they want to talk, you know, we have something to talk about, right? So we did a podcast with Leonard Susskind that's not out yet.
Holographic principle (36:43)
He's the perfect example of someone who has been pushing this intersection, maybe even, you know, more aggressively than I've been. Yeah, possibly. Every time I talk to him, I'm like, slow down, Lenny, you know, computer science is not quite the future of all of physics. And he's like, it absolutely is. We didn't even get that far. Yeah, yeah, right. But I do have a question related to his work. So yeah, he talks about this holographic principle, right? Right. How does that relate to the firewall paradox? I couldn't quite grasp the two together. This is a big discussion. Okay, the holographic principle is this, like, general phenomenon where often you write down a physical theory in some number of dimensions, like, you know, involving, let's say, a three-dimensional space. And then it turns out to be dual in some sense to a completely different-looking physical theory, which is defined on the boundary of that space. Right. It can even be in a different number of dimensions, one fewer, right? And the first theory, the one that's in the bulk, as they say, right, in the interior, involves gravity. So it's a quantum theory of gravity where it could have things like black holes that form and evaporate. Whereas the theory that's on the boundary is a, you know, pretty ordinary quantum field theory, meaning it has no gravity, and it's in a flat spacetime. So, you know, these two theories look totally different, right? And, you know, what do you even mean in saying that they're the same thing? Right. Like literally the same information. Right. It's very confusing. Well, you mean that there's a one-to-one mapping between states of the first theory and states of the second theory, right? And this mapping is non-local, right? Like, I could have a little particle here, right, inside of the bulk, and yet in the boundary theory, that would correspond to some enormous smeared-out thing, right?
The mapping between the bulk theory and the boundary theory, in recent years, people realized, is literally an example of one of these quantum error-correcting codes that I told you about before. The, you know, same things that one would need in building a quantum computer, right? So, you know, the whole point of an error-correcting code is that you take, like, one local bit and you smear it out. Yeah. You represent it with a large number of bits. Yeah. Right. And this is also what happens in a hologram, right? Hence the name, holographic principle. So there's this smeared-out representation of everything that's happening in the interior, you know, which is represented on the boundary, right? And this is, in some sense, like, the most precise definition that the string theorists are able to give of what they mean by quantum gravity, right? They say, well, you know, what we really mean by quantum gravity is you define this theory on the boundary, which they more or less know how to do. And then somehow there's this dual thing in the bulk, right? And, you know, so again, different culture, different standards: they don't even have, like, a rigorous independent definition of this bulk theory.
How does information get out of a black hole? (40:06)
But what they can do is, in various special cases, they can calculate things in the bulk theory and then they can calculate the same thing in the boundary theory. And in every single case where they can do the calculation in both places, they get the same answer. Okay. So this is what leads them to say it. So they're like, good enough. Yeah. Good enough for them. Yeah. Yeah. Right. So, well, the firewall paradox is sort of a modern refinement of Stephen Hawking's, you know, original black hole information paradox from the 1970s. Like, Hawking radiation. Right. Right. So, well, yeah. Shortly after he discovered Hawking radiation, you know, in 1975, Hawking wrote a paper that posed the information paradox, or puzzle, of black holes, which is basically just the question, how does information ever get out of a black hole? Right. You know, why does it have to get out? Well, if we believe that quantum mechanics describes everything in the universe, you know, except possibly when a measurement is made. Okay. Well, let's leave that, you know, if you believe in the many... Except when you observe anything. Exactly. Well, you know, if you believe in the many-worlds theory, then even a measurement is just another ordinary thing, where, you know, you split into multiple branches.
Black Holes, AI And Ethics
Black hole information paradox (41:34)
Yeah. But let's leave that aside. Right. Any isolated physical system is supposed to evolve in a completely reversible way. Right. It may be very hard to reverse in practice. You know, it's a lot easier to scramble an egg than to unscramble it. Right. But in the view of physics since the 19th century, that's merely because, you know, our universe started in a very special state, right, with a very low entropy, right? My friend Sean Carroll likes to say that every time you cook an egg, you're doing an experiment in cosmology. Right. You're proving that the Big Bang was in a special state. Right. But, you know, in principle, everything is supposed to be reversible. So in particular, if I drop an encyclopedia into a black hole, you know, then the information of what was written on the pages cannot be deleted from the universe. Right. It has to still be there. So then the question is, well, where does it go? Right. You could say maybe when it hits the singularity, it goes into some, you know, other bubble universe, and I think people thought about that for a while. But a popular point of view nowadays is that ultimately the information does come out, right?
Hawking radiation (42:59)
It comes out in the Hawking radiation, right, which for a black hole that was the mass of our sun would take a mere 10 to the 67 years to happen. You know, eventually, that's right. If you have a long enough grant, you could wait and see this, right? You know, eventually it would come out. You know, of course, in a very scrambled form. Just like, you know, if I burn a book, right, physics tells us that the information is still there in the smoke and ash. It's not very accessible anymore. But, you know, in principle, it's still there. And so the idea is that a black hole is just another example of this. Okay. But there's a big puzzle, because if you were, like, floating next to the encyclopedia, you would just see it go right past the event horizon of the black hole, go, you know, all the way down into the singularity.
What are the different ways to look at the information that gets into the black hole? (43:45)
And, you know, it seems like it's never coming out, right? So how does it get into the Hawking radiation in order to come out? Right. And so, you know, this was such an acute puzzle that it forced people like Lenny Susskind and Gerard 't Hooft in the 1990s to this view called black hole complementarity, which basically says that there are two different ways to look at the same situation, you know, for an observer who's outside the black hole or for an observer who is jumping into it with the encyclopedia, right? And the idea is, from the point of view of the first observer, the information, if you like, never even makes it past the event horizon, right? It just sort of gets pancaked. Right. It's like a fly hitting a windshield. Yeah. I mean, first of all, just because of relativistic time dilation, you're never going to see anything fall into the black hole, right? It'll just get slower and slower as it nears it, right? You'll never actually see anything go in. And so the idea is, from the outside observer's point of view, you could treat the interior of the black hole as not even existing at all, right? It's just, like, some weird and different and scrambled way to rewrite what is happening on the event horizon of the black hole. So this is another example of one of these holographic dualities, right, where there's two different ways to look at the same physical situation. You know, there's the interior point of view, and then there's the point of view where it's all on the event horizon, right? And then there are all sorts of puzzles about reconciling these two different points of view, you know, as you could imagine, right? The firewall paradox was, you know, a particular technical puzzle about how to reconcile these two different points of view. If we had another 20 minutes, I could go into it, but it might take too long.
But in the meantime, you know, the other thing people do is that they use this bulk-boundary correspondence as sort of a laboratory. So they say, you know, we have a spacetime where we have a boundary where we can sort of calculate what's going on. And now, inside of the bulk of that spacetime, let's form a black hole. And now let's try to answer all these, you know, enormous conceptual questions about what is going on inside of a black hole by translating them into questions about what is happening in the boundary theory, right? Now meaning, you know, the boundary of spacetime, not the boundary of the black hole, right? Right. But, you know, that's proven very difficult, because, you know, in some sense, what the physics wants, what the theory wants, is to just answer questions about what is observable by some hypothetical observer who is far away from all the action. Yeah. Who can just send in some particles that, you know, hit each other and stuff, and then, you know, some other particles come out, right? You know, this is a point of view that physicists like to take a lot, right: all of existence is like a giant particle collider, right? You just smash some things into each other. You look at the debris that comes out on the other end. Yeah. Right. But if you're asking, what is the experience of someone who jumps into a black hole, then that is inherently not that kind of a question, right? It's not a question about the observer at infinity. It's a question about, you know, someone who is very much in the action, right? Yeah, yeah, yeah. Like Alice sends Bob into the black hole. Exactly.
Channeling the growth of AI (47:30)
And these boundary pictures just don't seem very good yet at addressing that kind of question. Okay. So let's move on to another unanswered question. Yeah, sure. Sure. So you got a bunch of AI-related questions from the Internet. Yes. And it seems that people want you to opine about AGI. So let's go with one of them. Yeah, sure. So Anag asks, how can we channel AI growth but not weaponize it? So in a sense, it seems like they're assuming AGI happens. What do you think? I mean, I think that there will be many social issues that we'll have to deal with with AI, or already are having to deal with, even long before we reach the era when AI is, you know, near the level of human intelligence, right? I mean, you know, we're obviously going to have to worry about self-driving cars and all the benefits and also the disruption and issues that those are going to bring, you know, AI for data mining and all of the implications that it has for privacy, or, you know, a deep net denies your loan application.
AI, Ethics, Morality (48:17)
But then, you know, no human can explain why your application was turned down. Right, so I mean, these are things that, you know, I think lots of people are thinking about, and, you know, the good thing is that we can try things out in the real world, right? I think we don't normally think of ethics and morality as experimental sciences, but, you know, very often people have moral intuitions about something that are really bad until they have been checked by experience, right? And so we're going to have to, and we'll have the opportunity to, refine our ethical intuitions about all these issues by seeing the ways that AI actually gets deployed. And you know, I don't think I'm going to shock the world if I say, you know, I hope that we'll find ways to use it for good and not for evil. But, you know, now I have many friends, especially here in the Bay Area, you know, who I see every time I come to visit, who are very, very interested in, you know, what happens after that, when AI actually reaches the level of human intelligence or exceeds it, right? And clearly, whenever that happens, then, you know, we are living in a completely different kind of world, right? I mean, you know, think of, like, the woolly mammoths, right? Once the hominids start making their spears and their, you know, bows and arrows, right, life is not the same anymore. And so a lot of my friends in this community are very interested in the question, how can we ensure that once an AI gets created that is sort of at or beyond human level, it sort of shares our values, right? That, you know, it doesn't just say, okay, my goal was to make paper clips, so I'm going to just destroy the whole earth. Yeah. Because, you know, that's more raw material for paper clips, right? That it will say, you know, the humans created me.
I should revere them as my, you know, great, although slightly dimwitted, ancestors. And, you know, I should let them stay in a nice utopia or something, you know, even while I go off and, you know, prove P is not equal to NP, right? Right. Do whatever interests me. Yeah. So, I mean, my point of view is that if civilization survives for long enough, eventually we're going to have to deal with these kinds of questions, right? Right. And I see no reason to believe that the human brain, which is the product of all these, you know, weird evolutionary pressures, you know, including, like, the width of the birth canal and, you know, how much food was available in the ancestral environment and all this stuff, right, there's no reason to believe that we are near the limits of intelligence that are allowed by the laws of physics, right? And so, eventually, sure, you know, it could be possible to produce beings that are much more intelligent than we are. Yeah. And we may have to eventually worry about that. Now, I have to confess that personally, you know, when I think about the future of civilization, you know, say the next 20 years, the next 50 years, I tend to worry less about superintelligence than I do about superstupidity. You know, I tend to worry about us, you know, killing ourselves off, you know, by catastrophic climate change, by nuclear war, or just the world, you know, regressing into, you know, fascism, just giving up on liberal democracy. And of course, we've, you know, seen many distressing signs all over the world that, you know, there is this kind of backsliding right now. And so I like to say that, you know, my biggest hope is that civilization should last long enough that, you know, being destroyed by superintelligent robots becomes our biggest problem. Right. Right. Let that be our worst problem. Right. Of course.
It's like a silly mental game where it assumes we've learned nothing along the way, and it's just like, I mean, look, I wouldn't go that far, right? I mean, I think it's good to have some people thinking about these things, right? It's like, you know, there should be people thinking about how we could prevent, you know, a catastrophic asteroid impact, right? Or, you know, how we could prevent, you know, a bioterror attack, right? And, you know, they'll probably discover various interesting things along the way, right, that will have implications for the world of today, right? I mean, that usually happens when people, you know, let their minds roam freely over the far future, right? So I'm happy to have people think about this. I just think, you know, as practice for solving the problem of AI alignment, let's see if we can solve global warming first. Yeah. We'll see how that goes. Yeah. See how it goes. All right. Let's do another Twitter question.
Busy Beaver numbers independent of ZF set theory (54:10)
Yeah. So Michael Berg asks, yeah, is anyone keeping track of the smallest n such that Busy Beaver of n is independent of ZF set theory? Yeah. He mentions, I recall there was some activity after the 2016 article. I assume that was on your blog. Oh, yes it was. And I'm wondering if 1919 states is still the record. Ah, so, okay. So let me back up and explain what he's talking about. Yeah. Thanks. So the busy beaver numbers are, well, they're one of my favorite sequences of numbers since I was a teenager. The nth busy beaver number, you can think of it as the largest finite number of things that could be done by any computer program that is n bits long. Okay. So, you know, we rule out programs that just go into an infinite loop, right? We say your program has to eventually halt, and then, what is the most number of things that it could do before halting, where, you know, the program is, say, n bits long and it's run on a blank input? Okay. So, you know, of course this could depend on the programming language a bit, but let's just take the original programming language, Turing machines, right? And so then the nth busy beaver number is defined as the largest number of steps that can be taken by any Turing machine with n states, you know, as defined by Alan Turing in 1936, before it halts. And the amazing thing about this function is that it increases more rapidly than any function that can be calculated by any computer program. This is provable, right? So, you know, it is a ridiculously rapidly growing function. The first four values of the busy beaver function are known, right? They're like 1, 6, 21, and 107. The fifth one is already not known, but it's at least 47 million. And then for the sixth one, already you would need, like, a stack of exponentials to start to express it.
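As an illustration (my own sketch, not anything from the interview), here is a tiny Python simulator for 2-symbol Turing machines, run on the known 2-state busy beaver champion, which matches the small values quoted above:

```python
def run_turing_machine(program, max_steps=10**6):
    """Simulate a 2-symbol Turing machine on a blank (all-zero) tape.
    program maps (state, symbol) -> (write, move, next_state); state 'H'
    means halt. Returns (steps_taken, ones_left_on_tape), or None if the
    machine hasn't halted within max_steps."""
    tape = {}  # sparse tape; unwritten cells hold 0
    head, state, steps = 0, 'A', 0
    while state != 'H':
        if steps >= max_steps:
            return None
        write, move, state = program[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == 'R' else -1
        steps += 1
    return steps, sum(tape.values())

# The 2-state, 2-symbol busy beaver champion: no halting 2-state machine
# runs longer (6 steps) or leaves more 1s on the tape (four of them).
bb2 = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
    ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H'),
}
print(run_turing_machine(bb2))  # (6, 4)
```

Already at five states the champion runs for over 47 million steps, and at six states the step count needs a stack of exponentials to write down, so direct simulation rapidly becomes hopeless, which is the point.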
So, you know, if you're ever in a contest to name the bigger number, you know, and you just say busy beaver of a hundred, if your opponent does not know about computability theory, you will destroy them, right? Okay, but now there's another fascinating thing about this busy beaver sequence, besides, you know, the fact that it grows so rapidly. Yeah.
Writing, Social Media And Programming
Busy Beaver function values (56:47)
Okay, well, in some sense, it encodes all of mathematics. For example, you know, if I wanted to know, is the Riemann hypothesis true, right? Well, there's some Turing machine with some number of states that tests the Riemann hypothesis, right, that halts only if it finds a counterexample to it. And then if I knew busy beaver for that number of states, I would just have to run that machine for that number of steps and see if it halts, and that would answer the Riemann hypothesis, right? So, you know, in some sense it's no surprise that this function grows uncomputably rapidly, right? Because, you know, it has so many secrets of the universe encoded into it, right? And furthermore, one can prove that the axioms of set theory can only determine finitely many values of this function. Okay. So, in some sense, beyond a certain point, you know, the standard rules of mathematics cannot even prove what the values of this function are. You know, it has definite values, because every Turing machine either halts or it doesn't halt. Yeah. And yet, you know, in some sense, we could never know them. Right? So, a few years ago, I had a master's student, when I was then at MIT, named Adam Yedidia. And I gave him, as a thesis project, to try to determine a concrete bound on the number of states where this busy beaver function just goes off the cliff into unknowability. Right? We may not be able to determine exactly where it happens, but, you know, at least we could say, does it happen by, you know, at most 10,000 states, or by at most 100,000 states?
So what he did is that he designed a Turing machine with about 8,000 states that does something that's equivalent to just trying out all the possible theorems of set theory, one after the other, and halting if it ever finds a contradiction. Okay. Now, what does that mean? Well, because of Gödel's incompleteness theorem, it means that set theory can never prove that this machine runs forever. You know, if set theory is consistent, then the machine does run forever. But if set theory were able to prove that, then set theory would be proving its own consistency. That is a no-no. That's exactly what Gödel's second incompleteness theorem says it can't do without being inconsistent. Okay. I think I got it.
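The same "halt only on a counterexample" trick can be sketched in miniature (my own toy example, not Yedidia's actual construction) for Goldbach's conjecture, that every even number above 2 is a sum of two primes. The real machine would search forever; this sketch caps the search so it terminates:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_searcher(limit):
    """Halt (return n) as soon as some even n > 2 has no two-prime sum.
    The uncapped version of this loop runs forever exactly if Goldbach's
    conjecture is true; `limit` is only here to make the demo finish."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n  # counterexample found: the machine halts
    return None  # no counterexample below the cap

print(goldbach_searcher(1000))  # None: no counterexample up to 1000
```

If this search ever returned a number, Goldbach's conjecture would be false; so the uncapped searcher is a program whose non-halting encodes a mathematical statement, which is the same mechanism the set-theory machine uses with consistency in place of Goldbach.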
10-state machine (59:35)
Yeah. Okay. The way to remember it is, you know, anyone who brags about themselves probably has nothing to brag about, right? Yeah. You know, if your theory is bragging about its own consistency, it means it's inconsistent. Yeah. I mean, a theory could believe it's inconsistent while actually being consistent. Got you. That's possible, but not the other one. But not the other one. So, you know, it can't believe it's consistent without being inconsistent. So, he designed an 8,000-state machine, you know, and this was a lot of software engineering that went into it, right? You had to, like, compile down to the Turing machine, you know, keep very careful control over the number of states. And so then he and I wrote a paper about this. I put it up on my blog, and then, what's cool is that, you know, a lot of hobbyists were able to look at this and, you know, maybe improve on it. In particular, there's a guy named Stefan O'Rear, and he got it down to a less-than-2,000-state machine. And I believe that most recently he's gotten it down to under 800 states. Wow. In any case, he hasn't written a paper about it, but all of his code is available on GitHub, if anyone wants to look at it or even try to improve over what he did. You know, I suspect that there may even be a machine with 10 states that would already exceed the ability of set theory to know what it does. Why do you suspect that? Well, I don't know. I mean, already with five states, there are machines whose behavior seems to hinge on some weird number theory, okay, and no one has yet understood this, right? And, you know, we know how rapidly this busy beaver function grows. I mean, the truth is that we don't know, right? But, you know, somewhere between five and 800 or so, this thing goes off the cliff. Cool. Yeah. I actually do have a question about your blog.
So from what I can tell, you're basically inactive on social media. Yeah. Oh, I do not have a Twitter account. That's not an accident. Okay. Yeah. That's what I figured. Despite that, you've been blogging for 10, 15 years, since 2005.
Blogs vs. Social Media (01:01:46)
Okay. And I guest blogged on some other blogs before that. Okay. Yeah. But, I mean, blogs used to be considered social media. That's true. I mean, yeah. I feel like a dinosaur, right? Like, back in my day, we just had blogs and we really liked it. You know, so my friend Sarah Constantin had a post, a very insightful post, I thought, about this recently, where she was making the point that blogs are very much in keeping with the sort of original promise of the internet, the original idea that it was going to be a space where people would discuss things, right? Where they could spell out an argument, you know, by composing some paragraphs of text, right, that would set out what they think and why. You know, they'd take responsibility for what they said, put their name to it. Other people would then respond to it, give counterarguments. That would all stay there. You could search for it. You could find it. You could link to it. Right. It's very much a continuation of, you know, the culture of, say, Usenet in the 80s and 90s, right? And since then we seem to have moved away from that, toward a model of communication on the internet that's a lot more like what offline communication used to be, right? Yeah. I mean, I've described Twitter as sort of the world's biggest high school, right? Which, you know, doesn't mean it's all bad. In fact, I have wonderful friends who use Twitter to do, you know, very worthy and great things. I mean, you know, I like to tell them that they bear the same relationship to Twitter as the ten righteous men bore to Sodom and Gomorrah. Right. But, you know, unfortunately it is not a medium that I think is designed for spelling out an argument, right?
Or, you know, for sort of explaining carefully where you're coming from. It is almost like it's designed for ganging up on people, for forming these kinds of outrage mobs, and indeed we see that it is susceptible to these, you know, repeated sort of outrage explosions. Right. And, you know, I'm not blaming one political side, right? I think, you know, we can find plenty of examples on both ends of the political spectrum of Twitter being used for what I think are really nasty purposes. And, you know, I mean, Tumblr and Instagram, you know, it's not always nastiness, right? But they're just sort of, you know, designed for, kind of, you know, sharing a photo, people click like on it. Yeah. Right. It's a lot of kind of social signaling. It's a lot of building up one's popularity, one's presence, right? And not sort of discourse. They're not really designed for, you know, carefully clarifying, well, what is it that we really disagree about? Yeah. Right. Where are we coming from? And that is really what interests me. Right. That is what, you know, I don't always succeed, but that's kind of what I try to do on my blog. Yeah. I think the problem comes when, you know, we try to have that kind of conversation on the blog, like a really careful conversation where anyone is welcome to contribute, but, you know, they have to play by the ground rules, right? Of sort of, you know, have some empathy, understand where other people are coming from. Right. And then if people come into that from the culture of outrage mobs, where they just say, let's just look for the most inflammatory sentence, ripped out of context, that we can just put all over Twitter to say, you know, look at these blithering idiots, right?
Then, you know, it really, it becomes scary and it becomes much harder to have that kind of discourse where you're really trying to understand the other side. So have you been? Yeah, because you've been the victim of this before, right?
Scott's Favorite Posts (01:05:58)
You could say so. Yeah. I mean, as have a lot of people who try to do this. For sure. Yeah. You know, in fact, a lot of people have had it much worse than I have. Yeah, absolutely. And were you on Twitter at one point? No. Okay. No, it just never really tempted me. Interesting. When I have something to say, you know, I mean, sometimes I just put little updates on the ends of my blog posts that are kind of like tweets. But yeah. Yeah. Yeah. Okay. Do you have a favorite post? Oh. So, I had these posts critiquing information, sorry, integrated information theory, which is a proposed theory of consciousness by people like Giulio Tononi. And you know, I was explaining why I don't think this theory of consciousness works, why it doesn't solve the problem it's supposed to solve. But what was great about this post is that, you know, all the experts, you know, Tononi himself, got involved in the discussion. David Chalmers, the philosopher of consciousness, got involved in the comments section. And so we kind of had this, you know, kind of Plato's Academy thing going, right, just in my blog comment section, where I feel like we were actually able to make progress on a major issue, right? You know, that's not always the case. I mean, sometimes I write a post that's just, you know, some stupid joke or procrastination. Right. But sometimes, you know, when I have something that I want to get out there, it's nice to have a forum. Yeah, that's great. Yeah. All right. So you suggested this question. So I might as well ask you: advice for young people. So you kind of cut across many worlds, like, you know, you're potentially licensing ideas to companies, but you're within academia, and you're also, you know, kind of a CS science communicator. So you're across many realms. What is your advice for nerds in general or, yeah, people who want careers in science?
Advice for Young People (01:08:13)
Well, first of all, if you are currently in high school, I hope you're having a good experience. If you are, that's awesome, and take advantage of it. If you're not, realize that things will get better. Because this is a Y Combinator podcast, I should mention that one of the most influential essays I ever read was Paul Graham's "Why Nerds Are Unpopular." >> Oh, yeah. It has an enormous amount of insight, I think. That's the beginning of Hackers and Painters. >> Yes. So buy his book, but if you don't want to buy it, he's also got the essay on his website. The basic argument that he develops there is that teenagerhood is a creation of the modern world. It used to be that once people passed through puberty, they would either go off and get married, or apprentice themselves to some craftsman, or maybe they'd be working in the fields or whatever. But in any case, they would not be in the environment of high school, which is an artificial environment that we've created because we don't know what else to do with people. Maybe there's some teaching that goes on there, although if you look at how much knowledge the average high school graduate possesses, it can't have been that much. >> They're retaining not so much. >> That's right. But what you do get a lot of is popularity contests that can be based on nothing, and yet if you want to do well in them, you have to devote almost all of your time to it. >> Right, and that's the core of the essay. >> Yes, right.
So a nerd, in his telling, is someone who is in that environment but who is already thinking about the issues that matter in the wider world. >> Yeah. And he says basically that they care more about being smart than being popular. >> Yeah. And he says it's very hard to accept that that is your priority, because it seems like you would give anything, you would even accept a lowering of 30 IQ points or something, just to not be in the situation that you're in. But if someone actually gave you that choice, would you actually take it?
The journey of a programmer (01:10:57)
Right. But realize that there is a wider world of people who are going to appreciate the things that really matter, and you can try to get to that world sooner, depending on your circumstances. I actually left high school early. I got a GED from New York State when I was 15 and went to a place called the Clarkson School in upstate New York, which is a program where high school students can take courses at Clarkson University and then apply to college from there. Almost every college rejected me. I mean, this was kind of a bizarre trajectory, but Cornell was nice enough to accept me. >> How old were you? >> Oh, I was 16. >> You were 16 when you started at Cornell? >> Yeah, and since I already had one year, I then spent three years at Cornell. And then I went to Berkeley for grad school. So I was lucky to be able to leave a little bit earlier, and my parents supported me, once it became clear that this was what I wanted to do. They warned me, "This is going to make your social life really, really difficult," which turned out to be a hundred percent true. But I remember telling them at the time, "Look, my social life already stinks." I mean, I was lucky to have a few very, very good friends in high school, some of whom are still my wonderful friends. But it was only after I had been a postdoc for a while that I started finally figuring out how to drive a car, how to ask someone on a date. >> Oh, yeah. >> So I sort of did things in a weird order.
Scott's Depression Story (01:13:03)
So you've written about depression a little bit on your blog. Was that during this period? >> Yeah, it was pretty much during this period, but it started even before I had skipped any grades. So that's the thing: I felt like I was already in such a constricted environment that at least I could be learning some CS and math, and at least I could be in an environment where people cared about the intellectual things that I cared about. But realize, once you get into that environment, you are not the only one. A great thing about the modern world is that people can sort themselves, and eventually you will be able to find a group of friends who care about the things that you care about.
Life Events And Personal Decisions
How Scott met his friends and left high school early (01:13:57)
Yeah. In other words, put yourself out there and try things. >> Yeah. Cool. All right, Scott, well, thank you so much for coming in. >> Yeah, of course. Thank you.