Episode 6: Sam Altman

Transcription for the video titled "Episode 6: Sam Altman".


Introduction

Intro (00:00)

My guest today is Sam Altman. He is, of course, the CEO of OpenAI. He's been an entrepreneur and a leader in the tech industry for a long time, including running Y Combinator, which did amazing things like funding Reddit, Dropbox, and Airbnb. A little while after I recorded this episode, I was completely taken by surprise when, at least briefly, he was let go as the CEO of OpenAI.


Artificial Intelligence Discussions And Growth In The Field

AI Safety and OpenAI's mission (00:24)

A lot happened in the days after the firing, including a show of support from nearly all of OpenAI's employees.


GPT evolution (00:32)

And Sam is back. So before you hear the conversation that we had, let's check in with Sam and see how he's doing. Hey, Sam. How are you? Oh, man, it's been so crazy. I'm all right. It's a very exciting time. How's the team doing? I think, you know, a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So I guess that's the silver lining of all of this. In some sense, this was a real moment of growing up for us. We are very motivated to become better, and sort of to become a company ready for the challenges in front of us. Fantastic. So we won't be discussing that situation in the conversation. However, you will hear about Sam's commitment to build a safe and responsible AI. I hope you enjoy the conversation. Welcome to Unconfuse Me. I'm Bill Gates. Today we're going to focus mostly on AI, because it's such an exciting thing, and people are also concerned. Welcome, Sam. Thank you so much for having me. You know, I was privileged to see your work as it evolved, and I was very skeptical. I did not expect GPT to get so good. It still blows my mind. We don't really understand the encoding: we know the numbers, we can watch it multiply, but the idea of where Shakespeare is encoded within it? Do you think we'll gain an understanding of the representation? A hundred percent. Trying to do this in a human brain is very hard. You could say it's a similar problem.


AI interpretability (02:25)

There are these neurons, they're connected, the connections are moving, and we're not going to slice up your brain and watch how it's evolving, but this we can perfectly X-ray. There has been some very good work on interpretability, and I think there will be more over time. So yes, I think we will be able to understand these networks, but our current understanding is low. The little bits we do understand have, as you'd expect, been very helpful in improving these things. We're all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast. We could also ask: where in your brain is Shakespeare encoded, and how is that represented? We don't really know, but it somehow feels even less satisfying to say we don't know yet about these masses of numbers that we're supposed to be able to perfectly X-ray and watch and do any tests we want on. Yeah, I'm pretty sure within the next five years we'll understand it. And in terms of both training efficiency and accuracy, that understanding would let us do far better than we're able to do today? A hundred percent. You see this in a lot of the history of technology, where someone makes an empirical discovery, they have no idea what's going on, but it clearly works, and then as the scientific understanding deepens, they can make it so much better. Yeah, in physics, in biology, it's sometimes just messing around, and it's like, whoa, how does this actually come together? In our case, the guy that built GPT-1 sort of did it off by himself and saw this, and it was somewhat impressive, but with no deep understanding of how it worked or why it worked. Then we got the scaling laws: we could predict how much better it was going to be. That was why, when we told you we could do that demo, we were pretty confident it was going to work. We hadn't trained the model, but we were pretty confident. And that has led us to better and better scientific understanding of what's going on.
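
To make the scaling-law point concrete, here is a minimal sketch of how that kind of prediction works. The functional form and every number in it are illustrative assumptions, not OpenAI's actual data or methodology:

```python
import numpy as np

# Hypothetical (compute, loss) pairs from small training runs.
# A neural scaling law models loss as a power law: L(C) = a * C**(-b) + c.
compute = np.array([1e17, 1e18, 1e19, 1e20])  # training FLOPs (made up)
loss = np.array([3.10, 2.65, 2.28, 1.98])     # evaluation loss (made up)

c = 1.0  # assumed irreducible loss
# Fit log(L - c) = log(a) - b * log(C) with a linear least-squares fit.
slope, log_a = np.polyfit(np.log(compute), np.log(loss - c), 1)
a, b = np.exp(log_a), -slope

def predicted_loss(C: float) -> float:
    """Extrapolate the fitted power law to a larger compute budget."""
    return a * C ** (-b) + c

# Predict how good a far bigger run will be before training it.
print(f"Predicted loss at 1e23 FLOPs: {predicted_loss(1e23):.2f}")
```

The practical point is the one Sam makes: a curve fit on small runs lets you commit to a large run with some confidence about how good the result will be.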


Potential milestones (04:37)

But it really came from a place of empirical results first. You know, when you look at the next two years, what do you think some of the key milestones will be? Multimodality will definitely be important. Which means speech in, speech out? Speech in, speech out, images, eventually video. Clearly people really want that. We launched images and audio, and it had a much stronger response than we expected. We'll be able to push that much further. Maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. And also reliability: if you ask GPT-4 most questions 10,000 times, one of those 10,000 answers is probably pretty good, but it doesn't always know which one, and you'd like to get the best response of the 10,000 each time. That increase in reliability will be important. Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We'll make all that possible. And then also the ability to have it use your own data, so it knows about you, your email, your calendar, how you like appointments booked, and is connected to other outside data sources. Those will be some of the most important areas of improvement. In the basic algorithm right now, it's just feed-forward multiplication, so generating every new word is essentially doing the same amount of work. I'll be interested to see if you ever get to the point where, like solving a complex math equation, you might have to apply transformations an arbitrary number of times, where the control logic for the reasoning has to be quite a bit more complex than what we do today.
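
The reliability point, getting "the best response of 10,000," is essentially best-of-n sampling. A minimal sketch, where the sampler and scorer are placeholder functions rather than any real OpenAI API (in practice a learned reward model usually plays the scorer's role):

```python
import random

def sample_answer(question: str) -> str:
    """Stand-in for one stochastic model completion."""
    return random.choice(["answer A", "answer B", "answer C"])

def score(question: str, answer: str) -> float:
    """Stand-in for a learned or heuristic quality score."""
    return random.random()

def best_of_n(question: str, n: int = 10_000) -> str:
    """Draw n candidate responses and keep the highest-scoring one."""
    candidates = (sample_answer(question) for _ in range(n))
    return max(candidates, key=lambda ans: score(question, ans))

print(best_of_n("What is the capital of France?", n=100))
```

The hard part is exactly what he notes: the model "doesn't always know which one" is best, so the quality of the scorer is the bottleneck.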


Compute for AI: adaptive computing (06:28)

At a minimum, it seems like we need some sort of adaptive compute. Right now we spend the same amount of compute on each token, whether it's a dumb one or it's figuring out some complicated math. Yeah, when we say, do the Riemann hypothesis, that deserves a lot of compute. It's the same compute as saying "the". Right. So at a minimum, we've got to get that to work. We may need much more sophisticated things beyond it. You and I were both part of a Senate education session, and I was pleased that about 30 senators came to that, helping them get up to speed, since AI is such a big change agent. I don't think we could ever say we did too much to draw the politicians in. And yet, when they say we blew it on social media, we should do better, well, social media is still an outstanding challenge. There are very negative elements to it in terms of polarization, and even now I'm not sure how we would deal with that. I don't understand why the government was not able to be more effective around social media, but it seems worth trying to understand it as a case study for what they're going to go through now with AI. It's a good case study. And when you talk about regulation, is it clear to you what sort of regulation would be constructive?
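
The adaptive-compute idea can be sketched as an early-exit loop: easy inputs leave after a pass or two, hard ones use the full budget. This toy PyTorch model is only a sketch of the concept; the halting head and threshold are invented for illustration and are not how GPT-4 actually allocates compute:

```python
import torch
import torch.nn as nn

class AdaptiveDepthModel(nn.Module):
    """Toy model that spends more layer passes on 'harder' inputs."""

    def __init__(self, dim=64, max_steps=8, threshold=0.9):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.halt_head = nn.Linear(dim, 1)  # predicts "am I done?"
        self.max_steps = max_steps
        self.threshold = threshold

    def forward(self, h):
        steps = 0
        for steps in range(1, self.max_steps + 1):
            h = self.block(h)  # one more unit of compute
            confidence = torch.sigmoid(self.halt_head(h)).mean()
            if confidence > self.threshold:  # easy input: exit early
                break
        return h, steps

model = AdaptiveDepthModel()
h, steps = model(torch.randn(1, 64))
print(f"used {steps} of {model.max_steps} compute steps")
```

This mirrors published ideas like Adaptive Computation Time and early-exit transformers, where a learned halting signal decides how much compute each input consumes.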


Array of regulations for AI (07:55)

I think we're starting to figure that out. It would be very easy to put way too much regulation on this space, and you can look at lots of examples of where that's happened before. But also, if we are right, and we may turn out not to be, but if we are right and this technology goes as far as we think it's going to go, it will impact society, the geopolitical balance of power, so many things. For these still-hypothetical but future extraordinarily powerful systems, not like GPT-4, but something with 100,000 or a million times the compute power of that, we have been socializing the idea of a global regulatory body that looks at those super-powerful systems, because they do have such global impact. One model we talk about is something like the IAEA. For nuclear energy, we decided the same thing: this needs a global agency of some sort, because of the potential for global impact. I think that could make sense. There will be a lot of shorter-term issues, issues of what these models are allowed to say and not say, how we think about copyright. Different countries are going to think about those differently, and that's fine. You know, some people think, OK, there are models that are so powerful, we're scared of them. The reason nuclear regulation works globally is that basically everyone, at least on the civilian side, wants to share safety practices, and it has been fantastic. When you get over to the weapons side of nuclear, you don't have that same thing. So if the key is to stop the entire world from doing something dangerous, you'd almost want global government, and yet today, for many issues like climate and terrorism, we see how hard it is for us to cooperate. People even invoke US-China competition to say why any notion of slowing down would be inappropriate. So isn't it going to be hard? Any idea of slowing down, or going slow enough to be careful, will be hard to enforce. If the ask is a slowdown, that will be really hard. If instead it says, do what you want, but any compute cluster above a certain extremely high power threshold, and given the cost here, we're talking maybe five clusters in the world, something like that, any cluster like that has got to submit to the equivalent of international weapons inspectors, and the model there has to be made available for safety audits and has to pass some tests during training and before deployment, that feels possible to me. I wasn't so sure before, but I did a big trip around the world this year, talked to heads of state in many of the countries that would need to participate in this, and there was almost universal support for it. That's not going to save us from everything. There are still going to be things that go wrong with much smaller-scale systems, in some cases probably pretty badly wrong, but I think it can help us with the biggest tier of risks. I do think AI, in the best case, can help us with some hard problems, including polarization, because potentially that breaks democracy, and that would be a super bad thing. Right now we're seeing a lot of productivity improvement from AI, which is overwhelmingly a very good thing. Which areas are you most excited about? First of all, I always think it's worth remembering that we're on this long, continuous curve. Right now we have AI systems that can do tasks. They certainly can't do jobs, but they can do tasks, and there's productivity gain there. Eventually, they'll be able to do more of the things that we think of as a job today, and we'll of course find new jobs and better jobs. I totally believe that if you give people way more powerful tools, it's not just that they can work a little faster; they can do qualitatively different things. So right now, maybe we can speed up a programmer 3x.


Programming tasks & productivity (12:08)

That's about what we see, and it's one of the categories we're most excited about. It's working super well. But if you make a programmer three times more effective, it's not just that they can do three times more stuff; it's that, at that higher level of abstraction, using more of their brainpower, they can now think of totally different things. Going from punch cards to higher-level languages didn't just let us program a little faster; it let us do qualitatively new things, and we're really seeing that. As we look at the next steps, things that can do a more complete task, you can imagine a little agent that you can tell, go write this whole program for me, I'll ask you a few questions along the way, and it won't just be writing a few functions at a time. That'll enable a bunch of new stuff, and then it'll do even more complex stuff. Someday maybe there's an AI where you can say, go start and run this company for me, and someday there's maybe an AI where you can say, go discover new physics. The stuff we're seeing now is very exciting and wonderful, but I think it's worth always putting it in the context of a technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be. Coding is probably the single area of productivity gain we're most excited about today; it's massively deployed and at scaled usage at this point. Healthcare and education are two things coming up that curve that we're very excited about too. But the thing that is a little daunting is that, unlike previous technology improvements, this one could improve
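
That "little agent" workflow, ask a few clarifying questions and then write the whole program, can be pictured as a simple loop. Everything here is hypothetical scaffolding: ask_model is a stub standing in for whatever code-generating model you plug in:

```python
def ask_model(prompt: str) -> str:
    """Stub for a call to a code-generating model."""
    return "DONE"  # stubbed so the sketch runs end to end

def coding_agent(task: str, max_questions: int = 3) -> str:
    """Ask clarifying questions first, then generate the whole program."""
    context = [f"Task: {task}"]
    for _ in range(max_questions):
        question = ask_model(
            "If anything is ambiguous, ask ONE clarifying question; "
            "otherwise reply DONE.\n" + "\n".join(context)
        )
        if question.strip() == "DONE":
            break
        answer = input(f"Agent asks: {question}\nYou: ")  # human in the loop
        context.append(f"Q: {question}\nA: {answer}")
    # Ambiguities resolved: write the complete program in one shot.
    return ask_model("Write the complete program.\n" + "\n".join(context))

print(coding_agent("Build a to-do list web app"))
```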


AI's unique science (13:41)

very rapidly, and there's kind of no upper bound. I mean, the idea that it achieves human levels on a lot of areas of work, even if it's not doing unique science; it can do support calls and sales calls. I guess you and I do have some concern, along with this good thing, that it will force us to adapt faster than we've ever had to before. That's the scary part. It's not that we have to adapt, or that humanity is not super adaptable. We've been through these massive technological shifts, and a massive percentage of the jobs people do can change over a couple of generations, and over a couple of generations we seem to absorb that just fine. But each technological revolution has gotten faster, and this will be the fastest by far. That's the part I find potentially a little scary: the speed with which society is going to have to adapt, and that the labor market will change. One aspect of AI is robotics, the blue-collar jobs, when you get sort of hands and feet that are at human-level capability. The incredible ChatGPT breakthrough has gotten us focused on the white-collar side, which is super appropriate, but I do worry people are losing focus on the blue-collar piece. So how do you see robotics? We're super excited for that. We started robots too early, and so we had to put that project on hold. It was hard for the wrong reasons. It wasn't helping us make progress with the difficult parts of the ML research, and we were dealing with bad simulators and breaking tendons and things like that. We also realized more and more over time that what we really first needed was intelligence and cognition, and then we could figure out how to adapt it to physicality; it was easier to start with that, given the way we've built these language models. But we have always planned to come back to it, and we've started investing a little bit in robotics companies. On the physical hardware side, there are finally, for the first time that I've ever seen, really exciting new platforms being built. At some point we will be able to use our models, as you were saying, with their language understanding and future video understanding, to say, all right, let's do amazing things with a robot. And if the hardware guys, who've done a good job on legs, actually get the arms, hands, and fingers piece, and then we couple it, and it's not ridiculously expensive, that could change the job market for a lot of the blue-collar type work pretty rapidly. Certainly the consensus prediction, if we rewind seven or ten years, was that the impact was going to hit blue-collar work first, white-collar work second, and creativity maybe never, but certainly last, because creativity was magic and human. Obviously it's gone exactly the other direction, and I think there are a lot of interesting takeaways about why that happened. For creative work, the hallucinations of the GPT models are a feature, not a bug; they let you discover some new things. Whereas if you're having a robot move heavy machinery around, you'd better be really precise with it. I think this is just a case of: you've got to follow where the technology goes. You have preconceptions, but sometimes the science


Slack (17:13)

doesn't want to go that way. So what application on your phone do you use the most? Slack. Really? I wish I could say ChatGPT. Even more than email? Way more than email. The only other thing I was thinking possibly was messages, like iMessage, but it's more than that. So inside OpenAI, there's a lot of coordination going on. What about you? It's Outlook. I'm this old-style email guy, and the browser, because of course a lot of my news is coming through the browser. I didn't quite count the browser. Yeah, that's possible; I might use it more, but I still would bet Slack. I'm on Slack all day. Incredible. We've got a turntable here, and I asked Sam, like I have other guests, to bring one of his favorite records. So what have we got? I brought The New Four Seasons, Vivaldi recomposed by Max Richter. I like music with no words for working, and this had the old comfort of Vivaldi, pieces I knew really well, but enough new notes that it was a totally different experience. There are these pieces of music that you form strong emotional attachments to, because you listened to them a lot in a key period of your life, and this was something I listened to a lot while we were starting OpenAI. I think it's very beautiful music. It's soaring and optimistic, and just perfect, for me, for working.


Music (18:38)

And I thought the new version was just super great. Is it performed by an orchestra? It is, the Chineke! Orchestra. Oh, fantastic. Should I play it? Yeah, let's. This is like the intro, the section we're going for. Do you wear headphones, and do your colleagues give you a hard time about listening to classical music? They don't even know what I listen to, because I wear headphones. But it's very hard for me to work in silence. I can do it, but it's not my natural state. Yeah, it's fascinating. Songs with words, I agree, I would find distracting, but this is more of a mood type of thing. And I play it quietly; I can't listen to loud music either. But it's somehow just always what I've done. No, it's fantastic. Thanks for bringing it.


AGI and control (20:05)

Now with AI, to me, if you do get to that incredible capability, AGI and AGI-plus, there are three things I worry about. One is that a bad guy is in control of the system. If we have good guys with equally powerful systems, that hopefully minimizes that problem. There's also the chance of the system itself taking control. For some reason, I'm less concerned about that; I'm glad other people are. The one that sort of befuddles me is human purpose. I get a lot of excitement that, hey, I'm good at working on malaria eradication, and getting smart people and resources applied to that. When the machine says to me, Bill, go play pickleball, I've got malaria eradication covered, you're just a slow thinker, then it is a philosophically confusing thing. How do you organize society? Yes, we're going to improve education, but education to do what, if you get to this extreme? There's still big uncertainty, but for the first time, the chance that this might come in the next 20 years is not zero.


A post-scarcity world (21:19)

There are a lot of psychologically difficult parts of working on this technology, but this is, for me, the most difficult, because I feel it too. In some real sense, this might be the last hard thing I ever do. Our minds are so organized around scarcity: scarcity of teachers and doctors and good ideas. Partly, I do wonder whether the generation that grows up without that scarcity will work out the philosophical question of how to organize society and what to do. Maybe they'll come up with a solution; I'm afraid my mind is so shaped around scarcity that I have a hard time thinking of it. That's what I tell myself too, and what I truly believe: although we are giving something up here, in some sense we are going to have things that are smarter than us, and if we can get into this world of post-scarcity, we will find new things to do. They'll feel very different. Maybe instead of solving malaria, you're deciding which galaxy you like and what you're going to do with it. I'm confident we're never going to run out of problems, and we're never going to run out of different ways to find fulfillment and do things for each other, and to understand how we play our human games for other humans. That's going to remain really important. It's going to be different, for sure, but I think the only way out is through. We just have to go do this thing. It's going to happen; this is now an unstoppable technological course, and the value is too great. I'm pretty confident, very confident, we'll make it work, but it does feel like it's all going to be so different. The way to apply this to certain current problems, like getting kids a tutor and helping to motivate them, or discovering drugs for Alzheimer's, I think it's pretty clear how to do. Whether AI can help us go to war less or be less polarized is another matter. You'd think it should, as it drives intelligence, and not being polarized is kind of common sense, and not having war is common sense, but a lot of people would be skeptical. So I'd love to have people working on the hardest human problems, like whether we get along with each other. I think it would be extremely positive if we thought the AI could contribute to humans getting along with each other. I believe it will surprise us on the upside there. The technology will surprise us with how much it can do. We've got to find out and see, but I'm very optimistic, and I agree with you: what a contribution that would be. In terms of equity, technology is often expensive, like a PC or an Internet connection, and it takes time to come down in cost. I guess for the cost of running these AI systems, it looks pretty good that the cost per evaluation is going to come down a lot? It has come down an enormous amount already. GPT-3, which is the model we've had out the longest and had the most time to optimize, in the three and a little bit years it's been out, we've been able to bring the cost down by, I think, a factor of 40. For three years' time, that's a pretty good start. For 3.5, we've brought it down, I would bet, close to 10x at this point. 4 is newer, so we haven't had as much time to bring the cost down there, but we will continue to bring it down. I think we are on the steepest curve of cost reduction ever, of any technology I know, way better than Moore's Law.
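
As a back-of-the-envelope check, using only the figures quoted above, a 40x cost reduction over "three and a little bit" years implies the cost halves roughly every seven months, against the two years or so conventionally attached to Moore's Law:

```python
import math

factor = 40   # quoted GPT-3 cost reduction
years = 3.2   # "three and a little bit years"
months = years * 12

# Halving time h solves 2**(months / h) = factor.
halving_months = months * math.log(2) / math.log(factor)
print(f"Cost halves every ~{halving_months:.1f} months")   # ~7.2
print("Moore's Law: roughly every 24 months")

# Equivalent annual cost-reduction factor.
print(f"~{factor ** (1 / years):.1f}x cheaper per year")   # ~3.2x
```
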
And it's not only that we figure out how to make the models more efficient; as we understand the research better, we can get more knowledge, more ability, into a smaller model. So I think we are going to drive the cost of intelligence down so close to zero that it will be a before-and-after transformation for society. Right now my basic model of the world is: cost of intelligence, cost of energy. Those are the two biggest inputs to quality of life, particularly for poor people, but overall. If you can drive both of those way down at the same time, the amount of stuff you can have, the amount of improvement you can deliver for people, is quite enormous. And we are on a curve, at least for intelligence, where we will really deliver on that promise. Even at the current cost, which, again, is the highest it will ever be and much more than we want, for 20 bucks a month you get a lot of GPT-4 access, and way more than 20 bucks' worth of value. So we've already come down pretty far. And what about the competition? Is it kind of a fun thing that many people are working on this all at once? It's both annoying and motivating and fun. I'm sure you've felt similarly. It does push us to be better and do things faster. We're very confident in our approach. A lot of people are skating to where the puck was, and we're going to where the puck is going, and it feels all right. I think people would be surprised at how small OpenAI is. How many employees do you have? About 500, so we're a little bigger than that now. But tiny by Google, Microsoft, or Apple standards, and we have to not only run the research lab, but now also a real business and two products. So the scaling of all your capacities, including talking to everybody in the world and listening to all those constituencies, that's got to be fascinating for you right now. It's very fascinating. Is it mostly a young company? It's an older company than average. OK, so it's not a bunch of 24-year-old programmers. It's true. My perspective is warped because I'm in my 60s; I see you and you're younger. But you're right, a lot of people are in their thirties and forties. Yes. So it's not the early Apple or Microsoft, where we were really kids. Yeah, it's not, and I've reflected on that. I think companies have gotten older in general, and I don't know quite what to make of that. I think it's somehow a bad sign for society. But I tracked this at YC, and the best founders have trended older over time. That's fascinating. And in our case, we're a little bit older than that average, even. Still, you must have learned a lot at Y Combinator, helping all those companies; I guess that was good training for what you're doing now. That was super helpful, yeah. Including seeing mistakes. Totally. OpenAI did a lot of things that are very against standard YC advice. We took four and a half years to launch our first product. We started the company without any idea of what a product would be. We were not talking to users. I still don't recommend that for most companies, but having learned the rules, and seen them at YC, made me feel like I understood when and how and why we could break them, and we really did things that were just so different from any other company I've seen.
The key was the talent you assembled, and letting them be focused on the big, big problem, not some near-term revenue thing. I think Silicon Valley investors would not have supported us at the level we needed, because we had to spend so much capital on the research before getting to the product. We just said, eventually the model will be good enough that we know it's going to be valuable to people.


Organized Business Structure And Philosophy

Beyond Silicon Valley venture capital (28:49)

But we were very grateful for the partnership with Microsoft, because this kind of way-ahead-of-revenue investing is not something the venture capital industry is good at. No, and the capital costs were significant, almost at the edge of what venture would ever be comfortable with. Maybe past it. Maybe past it. And I give you such incredible credit for thinking through how to take this brilliant AI organization and couple it into a large software company; it has been very, very synergistic.


Separating our own personal desires from the organization (29:21)

It's been wonderful. You really touched on it, though. This was something I learned from Y Combinator. We just said: we are going to get the best people in the world at this, we are going to make sure we're all aligned on where we're going and on this AGI mission, but beyond that, we're going to let people do their thing, and we're going to accept that it's going to go through some twists and turns and take a while. We had a theory that turned out to be roughly right, but a lot of the tactics along the way turned out to be super wrong, and we just tried to follow the science. I remember going and seeing the demonstration and thinking, OK, what's the path to revenue on that one?


Great people like to work with great colleagues (30:07)

And in these frenzied times, you're still holding on to an incredible team. Yeah, great people really want to work with great colleagues, so there's an attractive force, a deep center of gravity there. And then, I mean, it sounds so cliche, and every company says it, but people feel the mission so deeply: everyone wants to be in the room for the creation of AGI. It must be exciting, and I can see the energy when you come up and blow me away, again, with the demos. I'm seeing new people, new ideas.


Building a team with the right mix of skills (30:36)

You're continuing to move at a really incredible speed. What's the piece of advice you give most often? Well, I think there are so many different forms of talent. Early in my career, I thought it was just pure IQ, engineering IQ, and of course you can apply that to financial and sales work too. That turned out to be so wrong. Building teams where you have the right mix of skills is so important, so I try to get people to think, for their problem, how do they build the team that has all the different skills?


Math and science are cool (31:20)

That's probably the advice I find most helpful. I mean, yes, telling kids math and science are cool, if you like them, but it's that talent mix that really surprised me. What about you? What advice do you give? I think most people are miscalibrated on risk. They're afraid to leave the soft, cushy job behind to go do the thing they really want to do,


Personal Growth And Setting Priorities

Being clear about what you want in life (31:39)

when in fact, if they don't do that, they'll look back at the end of their lives and think, man, I never went and started the company I wanted to start, or I never tried to become an AI researcher. I think that path is actually much riskier. Related to that: being clear about what you want, and asking people for what you want, goes a surprisingly long way. A lot of people get trapped spending their time in ways they don't want to, and probably the most frequent advice I give is to try to fix that one way or another. Yeah, if you can get people into a job where they feel they have a purpose, it's more fun, and sometimes that's how they can have gigantic impact. That's for sure. Thanks for coming. It was a fantastic conversation, and in the years ahead, I'm sure we'll get to talk a lot more as we try to shape AI in the best way possible. Thanks a lot for having me, I really enjoyed it. Unconfuse Me is a production of The Gates Notes. Special thanks to my guest today, Sam Altman. And remind me, what was your first computer? A Mac LC II. Nice choice. It was a good one. I still have it, and it still works.

