Transcription for the video titled "Ex Machina's Scientific Advisor - Murray Shanahan".
So I think the first question I wanted to ask you is: given the popularity of AI, or at least the interest in AI right now, what was it like when you were doing your PhD thesis in the 80s around AI? Yeah, well, very different. I mean, it's quite a surprise for me to find myself in this current position where everyone is interested in what I'm doing. The media are interested, corporations are interested. Certainly when I was a PhD student, and when I was a young postdoc, it was a fairly niche area. You could be away in your little corner doing things that you thought were intellectually interesting and be reasonably secure that you weren't going to be bothered by anybody. But it's not like that anymore. And so what exactly was the subject matter at the time? What were you working on? At the time when I was doing my thesis... This is a tricky question. I know you're asking me to go back... Let me think. What is it, like 30-something years? 30-something years. Yeah, 30 years ago I finished my thesis. Okay, so what did it look at? So I was interested in logic programming and Prolog-type languages. And I was interested in how you could speed up answering queries in Prolog-like languages by keeping a kind of record of the thread of relationships between facts and theorems that you'd already established. So instead of having to redo all the computations from scratch, it kept a little collection of the relationships between properties that you'd already worked out, so that you didn't have to redo the same computations over again. So that was the main contribution of the thesis.
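Very roughly, the idea is akin to memoizing derived relationships in a logic program. The following Python analogy is a hypothetical reconstruction for illustration, not the thesis system: once a relationship has been established, it is recorded so that later queries reuse it rather than re-derive it.

```python
# A loose analogy (hypothetical reconstruction, not the thesis system):
# cache relationships once derived, so repeated queries skip re-derivation.

PARENTS = {"alice": ["bob"], "bob": ["carol"]}  # toy fact base: child -> parents

ancestor_cache = {}  # records (ancestor, descendant) pairs already decided

def is_ancestor(a, b):
    """Prolog-style rule: a is an ancestor of b if a is a parent of b,
    or a is an ancestor of one of b's parents."""
    if (a, b) in ancestor_cache:      # reuse an established relationship
        return ancestor_cache[(a, b)]
    result = a in PARENTS.get(b, []) or any(
        is_ancestor(a, p) for p in PARENTS.get(b, [])
    )
    ancestor_cache[(a, b)] = result   # remember it for future queries
    return result

print(is_ancestor("carol", "alice"))  # True: carol -> bob -> alice
```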
I'm amazed I could remember anything about it. That's very impressive. I did my thesis like five years ago and I barely remember. And so did you pursue that further? No, I didn't. Well, one other thing that I discussed in my thesis was... I had a whole chapter on the frame problem. So the frame problem is... There are different ways of characterizing it, but the frame problem in its broadest guise is all about how a thinking mechanism, a thinking creature or a thinking machine, if you like, can work out what's relevant and what's not relevant to its ongoing cognitive processes, and how it isn't overwhelmed by having to rule out trivial things that aren't relevant. And that comes up in a particular guise when you're using logic, and in particular when you're using logic to think about actions and their effects. There you want to make sure that you don't have to spend a lot of time thinking about the non-effects of actions. So for example, if I move around a bit of the equipment, like your microphone here, then the color of the walls doesn't change. And you don't want to have to explicitly think about all those kinds of trivial things. So that's one aspect of the frame problem, but then more generally it's all about circumscribing what is relevant to your current situation, what you need to think about and what you don't. And so how did that translate to what folks are working on today? Well, this thing, the frame problem, has recurred throughout my career. There's been a lot of variation in what I've done. I worked for a long time in classical artificial intelligence, where it was, and still is, all about using logic-like or sentence-like representations of the world, and you have mechanisms for reasoning about those sentences, a rule-based approach. And that approach of classical AI has fallen out of favor a little bit, and I got a bit disillusioned with it a long time ago.
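Returning to the microphone example for a moment: here is a tiny STRIPS-style sketch (a hypothetical toy in the spirit of that example, not anything from the thesis) of how classical AI sidesteps enumerating non-effects. Each action declares only what it adds and deletes, and every other fact persists by default.

```python
# Tiny STRIPS-style sketch (hypothetical toy): an action declares only its
# add and delete lists; every other fact persists by default, so non-effects
# ("moving the mic doesn't repaint the walls") never have to be written down.

state = {"mic_on_desk", "walls_white"}

move_mic = {
    "add": {"mic_on_stand"},
    "delete": {"mic_on_desk"},
}

def apply_action(state, action):
    """Successor state: drop deleted facts, add new ones, keep the rest."""
    return (state - action["delete"]) | action["add"]

print(apply_action(state, move_mic))  # {'walls_white', 'mic_on_stand'}
```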
So by kind of the turn of the millennium, I'd more or less abandoned classical AI, because I didn't think it was moving towards what we now call AGI, artificial general intelligence, the big vision of human-level AI. And so I thought, well, I'm going to study the brain instead, because that's the example we have of an intelligent thinking thing. It's the perfect example. So I wanted to try and understand the brain a bit more. So I started working on building computational-neuroscience-style models of the brain, and thinking about the brain from a larger kind of perspective, thinking about consciousness and the architecture of the brain and big questions. And now I'm getting around to answering your question, by the way, eventually. But now I'm interested in machine learning. There's been this resurgence of interest in machine learning, so I've moved back to some of my interests in artificial intelligence. And I'm not thinking so much about the brain or neuroscience or that kind of empirical work right now. I've gone back to some of the old themes that I was interested in in good old-fashioned AI, classical AI. So that's sort of an interesting trajectory. Actually, the frame problem, interestingly, has been a recurring theme throughout all of that stuff.
And it keeps on coming up in one guise or another. So in classical AI, there was the question of how you can write out a set of sentences that represent the world without having to write out a load of sentences that encompass all the trivial things that are irrelevant. And somehow the brain seems to solve that as well. The brain seems to manage to focus on and attend to only what's relevant to the current situation and ignore all the rest. And in contemporary machine learning, there's also this kind of issue. There's a challenge of building systems, especially if you start to rehabilitate some of these ideas from symbolic AI, that focus on what's relevant in the current situation and ignore things that are not. For example, a lot of work here at DeepMind has been done with these Atari retro computer games. So if you think of a retro computer game like Space Invaders, and you think about the little invader going across the screen, it doesn't really matter what color it is. In fact, it doesn't really matter what shape it is either. What really matters is that it's dropping bombs, and you need to get out of the way of these things. So in a sense, a really smart system would learn that it's not the color that matters, it's not the shape that matters, it's these little shapes that fall out of the thing that matter. And so that's all about working out what's relevant and what's not relevant to getting a good score in the game. Sorry for the interruption, everyone. We just got to see Garry Kasparov talk. It's pretty amazing. Yeah, that was fantastic, wasn't it? Yeah. So, Garry Kasparov in conversation with Demis Hassabis. Yeah, he gave a great talk about the history of computer chess and his famous match with Deep Blue. So yeah, we just had to pop upstairs to watch that. And now on with part two. Yeah, kind of a once-in-a-lifetime thing. It also seems like he got out at the exact right time. Yeah, maybe. Yes, he did. Yeah. So Demis, at the beginning of the interview, said that he thought that he was perhaps the greatest chess player of all time, and so he was there just at the right time to be knocked out by a computer, in a way. Yeah, knocked off the top spot. Very cool. Yeah. And he also said, maybe accurately, that any iPhone chess program now is probably better than Deep Blue was in 1997, which is interesting. Yeah, I also thought it was interesting that he was saying that anybody in their living room now can sit and watch two grandmasters playing a match, and can use their computer to see as soon as they make a mistake, and can analyze the match and follow exactly what's going on. Whereas in the past, it took expert commentators sometimes days to figure out what was going on when two great players were playing. So that was interesting. What struck me was how he was analyzing the current players, and how they rely so heavily on the computer, or at least he thinks they rely so heavily on the computer, that it's kind of reshaping their minds. Right, yeah. And that certainly, I think, is going to be true with Go and with AlphaGo. So it's been interesting watching the reactions of the top Go players, like Lee Sedol and Ke Jie, who are very positive in a way about the impact of computers on the game of Go.
And they talk about how AlphaGo and programs like it can help them to explore parts of this universe of Go that they would never otherwise have been able to visit. And it's really interesting to hear them speak that way. Yeah, it seems like they're going to open up new territories for new kinds of games to actually be created. Yeah, indeed. Well, we've already seen that with AlphaGo in the match with Lee Sedol. As you probably know, there was a famous move in the second game against Lee Sedol, move 37, where all the commentators, all these nine-dan masters, were saying, "Oh, this is a mistake. What's AlphaGo doing? This is very strange." And then they gradually came to realize that this was a revolutionary kind of tactic, to put the stone on that particular rank at that particular time in the game. And since then, the top Go players have been exploring this kind of play, moving into that sort of territory when the conventional wisdom was that you shouldn't.
Yeah, I mean, the augmentation in general, I find fascinating across the board. Yeah. And he was hinting at that as well. Yeah, he was. He was very positive about the prospects of human-machine partnerships, where humans provide maybe a creative element and machines can be more analytical, and so on. What was that law that he mentioned? I forgot the name of it. I wrote it down. Oh, Moravec's law. Yeah, yeah. Named after Hans Moravec, the roboticist who wrote some amazing books, including Mind Children. So, this book called Mind Children. And this phrase, Mind Children, alludes to the possibility that we might create these artifacts that are like children of our mind, that have sort of lives of their own, and that are the children of our minds. You know, it's a challenging idea. This is an old book, coming from the late 80s. Okay, do you buy it? Maybe in the distant future. Okay, well, then maybe we ought to segue back into what we were talking about, which is kind of related to your book, your two books ago, Embodiment and the Inner Life. Yeah, which came out in 2010. Okay, because that was kind of an integral question to the movie Ex Machina, right? Because you didn't necessarily have to have a person-like AI. And more importantly, you didn't have to have an AI that looked like a person, that looked like an attractive female, that also looked like a robot, right? Nathan tees it up in the beginning. Yeah, yeah. I mean, obviously, to a certain extent, those are things that make for good film, so there are artistic choices and cinematographic choices. And in film, of course, we do have disembodied AIs, so it's possible to make a film about a disembodied artificial intelligence as well. But obviously, a lot of the plot, and what drives the plot forward in Ex Machina, is to do with Ava's embodiment, and the fact that Caleb is attracted to her and sympathizes with her. But there's also a philosophical side to it too, which is... certainly, when it comes to human intelligence and human consciousness, our physical embodiment is a huge part of that. It's where our intelligence originates from, because what our brains are really here to do is to help us navigate and manipulate this complex world of objects in 3D space. And so our embodiment is an essential fact here. We have got these hands that we use to manipulate objects, and we've got legs that enable us to move around in complicated spaces. And that, in a sense, is what our brains are originally for. The biological brain is there to make for smarter movement. And all the rest of intelligence is a flowering out of that, in a way. And so did you buy the gel that he showed Caleb in the beginning? Oh, yeah. So it's interesting, because of the way the film is constructed. Alex Garland, the writer and director, sometimes says that the film is set 10 minutes into the future. It's really meant to be a lot like our world. Yeah. Just very slightly into the future. Yeah. And so when you see Nathan's lair, his sort of retreat in the wilderness, there's nothing particularly science fiction about that. It's just strikingly designed.
It is, in fact, a real hotel; you can actually go and stay in this place. Where is it? In Norway. And so it doesn't have a particularly futuristic feel, and almost everything you see is not very futuristic. It's not like Star Wars. But then there are a few carefully chosen things that look very futuristic, and there's Ava's body. You can see the sort of insides of her torso and her head. And then there's when he shows the brain, which is made of this gel. And I think that was a good choice, because we don't at the moment know how to make things that are like Ava, that have that kind of level of artificial intelligence.
So that's the point at which you have to go sci-fi, really. Well, I mean, those lifelike melding elements... have you watched the new HBO show, Westworld? Do you know, I haven't. It really is on my to-watch list, and I've heard a lot about it. I've seen the original with Yul Brynner, but I haven't watched the series yet. Yeah. They definitely take cues. I mean, I guess it's probably part of the sci-fi canon that you have this basement lair where you create the robots and then they become lifelike through this whole process. Even if you just watch the opening title credits, it's exactly that: the 3D-printed sinews of the muscles. It looks exactly like Nathan's lair. And so what I was wondering is, as you were consulting on the film, how much of that were they asking you about? And were they saying, is this remotely 10 minutes in the future, or is this 50 years in the future? Yeah. Well, it wasn't really like that, actually. I mean, I'll tell you the whole story of how the collaboration came about. So I got this email from Alex Garland, an unsolicited email out of the blue. It's the kind of unsolicited email you really want to get, from a famous writer-director who wants you to work on a science fiction film. And he basically said, I read your book, Embodiment and the Inner Life, and it helped me to crystallize some of the ideas around this script that I'm writing for a film about AI and consciousness. And, you know, do you want to get together and have a chat about it? So I didn't have to think very hard about that. So we got together and had lunch, and he sent me the script, and I'd read through the script by the time I got to see him. And he certainly wanted to know whether it felt right from the standpoint of somebody working in the field. And I have to say it really did. As a script, it was a great page-turner, actually. And it's interesting being in that position, because now Ex Machina and the image of Ava have become kind of iconic, and you see it everywhere. But of course, when I read the script, all of that imagery didn't exist, so as I was reading it, I had to conjure it up in my own head. And so he didn't give you any kind of preview of what he was thinking, aside from text?
No, because nobody had been cast at that point. And actually, when we met up, if my memory serves me right, he did have some images of some mockups from artists of what Ava might look like. Yeah. But I hadn't seen them when I read the script. So for me, it was just the script. And the characters really leapt off the page. The character of Nathan in particular was really very vivid. And, you know, you really didn't like this guy, just reading the script. Anyway, so then Alex really wanted to... so I sort of grabbed the title of scientific advisor. I'm not sure if I ever really was officially a scientific advisor. But Alex really wanted to meet up and talk about these ideas. He wanted to talk about consciousness and about AI. And so we met up several times during the course of the filming. And I think there's very little that I contributed to the film at that point. In a sense, perhaps I'd already done my main bit by writing the book. I mean, there were a few little phrases that I corrected, tiny, tiny things. But otherwise I just thought, you know, great. Gosh, really, really very good. And there are some lines in the film that I just thought were so spot on. Anything you remember, like what line in particular? Yeah, well, a favorite one is where... so initially Caleb is told that he's there to be the human component in a Turing test. And of course it isn't a Turing test, and Caleb says that pretty quickly. He says, well, look, in the real Turing test the judge doesn't see whether it's a human or a machine, and so on. But of course, I can see that Ava's a robot. And Nathan says, oh yeah, we're way past that. The whole point here is to show you that she's a robot and see if you still feel she has consciousness. And I thought that was so spot on. I thought that was an excellent, really an excellent point, making a very important philosophical point in this one little line in the middle of a psychological thriller, which is pretty cool. So I call that the Garland test. I found it very... yeah, that was really astute. I was wondering which texts influenced him most when he was writing it.
And in particular, where you found that your work had seeds planted throughout the movie. Yeah. Where do you think it was most influential? Well, it's a good question. You know, you'd need to ask him. Yeah, maybe. So certainly my book is very heavily influenced by Wittgenstein. And in a sense, when it comes to these deep philosophical questions, Wittgenstein is very down to earth.
He's always saying, well, what do we mean by consciousness and intentionality and all these kinds of big, difficult words? Wittgenstein is always taking a step back and asking, what is the role of these words in ordinary life? And the role of these words in ordinary life, with something like consciousness, is all to do with the actual behavior of the people we see in front of us. And so, in a sense, I judge others... well, I don't actually go around judging others as conscious. That's the point that he would make as well. I just naturally treat them as conscious. And why do I naturally treat them as conscious? Because their behavior is such that they're just like fellow creatures. And that's just what you do when you encounter a fellow creature. You don't think carefully about it. So this is an important Wittgensteinian point that I bring out in the book very much. And in a sense, that's very much what happens to Caleb. Caleb isn't sitting there making notes, saying, therefore she is conscious. Rather, through interacting with her, he just gradually comes to feel that she is conscious, and to start treating her as conscious. So there's something very Wittgensteinian about that. And I'd like to think that comes from my book, to an extent.
Well, I guess it seems very cinematic that it would be over the course of a week, the Turing test. But I had never seen a Turing test framed that way. Yeah. I mean, I guess it's not... you know, it's a Garland test. But did you coach him in any way on the natural steps that someone would take as the test escalates? No, not at all. This is all Alex Garland's stuff. I had no input on that side at all. The plot, the whole script, was already, you know, 95% done when I first saw it. So there are a few differences in the final film from what you see in the script that I saw, and indeed in the published script. So, that was actually a question from Twitter, from someone called Trench Shovel. They ask: were there any parts of the script that were changed or left out because they weren't technically feasible or realistic? Ah, well, there was a bit that was in the script that was left out in the final filming, which I think is very significant. Okay. So, right, spoiler alert for the few people... I assume if you're listening to this, you've seen the film. So right at the end of the film, where Ava is climbing into the helicopter, having escaped from the compound, she's got to fly off. And we see her have a few words with the helicopter pilot. And, you know, I wonder what she says, actually. That's interesting. Maybe just, fly me away from here. Anyway, and then off the helicopter goes. Now, in the written script, there's an instruction there which says something along these lines: we see waveforms, and we see facial recognition vectors fluttering across the screen, and we see this, that and the other. And it's utterly alien. This is how Ava sees the world. It's utterly alien. Now, in the end... so the very first version of the film that I saw was long before all the VFX had been properly done and everything.
So it was a first crude cut. And they had put this scene in, with a little bit of that kind of visual effect. And then I think they decided it didn't really work terribly well to do that at that point, so they cut it out. So in the version that we see, you don't actually see that; you just see her speaking to the helicopter pilot, and she climbs into the helicopter. But it's a very significant direction. Because, I mean, I think one of the great strengths of the film is that it leaves so many unanswered questions. You're left thinking, is she really conscious? Is she really capable of suffering? Is she just a kind of machine that's gone horribly wrong? Or is she a person who's understandably had to commit this act of violence in order to save herself? Which of these is it? And you never really quite know. And although I think people lean more towards the "oh, she's conscious in a straightforward kind of way" reading, that version of the ending points to the fact that there's a real ambiguity there. Because if that had been shown, you might be leaning more the other way. You might be thinking, oh gosh, this is a very alien creature indeed. And she still might be genuinely conscious and genuinely capable of suffering, but it would really throw open the question: how alien is she?
To me that would also... so just so I understand, it was a VFX over the actual image, right? Well, the script doesn't specify exactly how it would be done. It just says something like, we see facial recognition effects fluttering, and I can't remember the exact words. But obviously the idea was to give an impression of what things looked like and sounded like for Ava, in some sense, which of course is in a sense impossible to convey. I think maybe that's why they thought, how would we do this? Well, I didn't know if they were also trying to avoid some kind of... I guess it's not really a fourth wall, but trying to avoid the situation where the author, Alex, the writer of the movie, is saying, we're in a simulation, like what you're seeing is the mind of some artificial intelligence. Yes, well, I think it was meant to be shown from her point of view. Right. So that wouldn't have been an interpretation of it, if they got it right, I would imagine. Maybe, but I don't know why exactly they decided not to put it in. But the fact is that direction is there in the script. And by the way, that's in the published script, so I'm not giving anything away. The published version of the script has this little direction in it.
Yeah, I rewatched it last night. And I remembered the ending, and it's so vague. Yeah, so vague what happens. I do remember, because I quite like that ambiguity, the way you don't really know: is she conscious at all? Is she conscious just like we are? Is she conscious in some kind of weird alien way? You never really know. And this is a deep philosophical question. And there's also a moment right at the end where she's coming down the stairs, having escaped, basically; she's coming down the stairs at the top floor of Nathan's compound, and she smiles. She kind of looks back and surveys. Yes. And she smiles. And I remember saying to Alex, after I'd seen the first version, I don't think you should have that smile there, because it's too human. And he really thought it was important to have the smile there. I think... and I don't want to put words in his mouth, so I apologize to Alex if he's listening to this. But I think he would say that people, of course, can have their own interpretations. But he would probably lean towards the interpretation that she is conscious in the way that we are. And the evidence for that is, well, why would anybody smile to themselves privately if they weren't conscious just like we are? And what else in those conversations, you know, watching edits of the movie, what else did you guys work through? Well, there's the Easter egg. Yeah, that's a good one.
So tell the story of the Easter egg. Yeah. So the first time I saw any kind of clip of Ex Machina, Alex sent me an email and said, do you want to come in and see a bit of Ex Machina? It was "in the can," as these film people say, though there are no cans anymore for the film to go in. So, come to the cutting room. So I went along and he showed me some scenes. And at one point, he stopped the machine and said, this is the moment where Caleb is reprogramming the security system in order to release all the locks to try and get out. And Alex froze the frame there and said, right, now you see these computer screens where Caleb is typing. You see this window here? Now, this window is all full of junk code at the moment. But you can be sure there are going to be some geeky types out there who, the moment this thing comes out on DVD, are going to freeze that frame and say, what does this code do? So let's give them an Easter egg, a little surprise. So he said, basically, that window is yours; put something in there, some kind of hidden message. And he said, maybe make it an allusion to your book. So I thought, this is very cool.
This is the best product placement ever. I probably sold one other copy, thanks. So I went home that evening. And I made the mistake that evening of buying a bottle of sake, and I was drinking this sake, wondering what I was going to do for this. And I got down to coding something up in Python, and I was having a good laugh at what I was going to do. So I thought, okay, it's got to be vaguely to do with security, so encryption. So let's have something that has some primes in it. So I wrote a little Sieve of Eratosthenes, a classic way of computing primes. And instead of getting it off Wikipedia or something, I sat there and coded it myself, after four glasses of sake. So I was coding this thing up, and it basically computes a big array of prime numbers. And then there's this thing that indexes into the prime numbers and adds some random-looking other numbers to them, and those are ASCII characters. And then it prints out what those ASCII characters actually look like. So when you look at this code on the screen, it's just gobbledygook, but something to do with prime numbers. If you run it, it prints out "ISBN =" and the ISBN of my book, Embodiment and the Inner Life. Anyway, I was very, very pleased with this, and I handed it over to them and they put it in the film.
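What follows is a minimal sketch of the idea (a reconstruction for illustration, not the actual code that appears in the film): a Sieve of Eratosthenes generates primes, and adding stored, random-looking offsets to selected primes yields ASCII codes that print a hidden message. Note that the sketch stops the marking loop at the square root of n, which matters for the story that follows.

```python
# A minimal sketch of the Easter egg idea (a reconstruction, not the actual
# code from the film): primes from a Sieve of Eratosthenes, plus stored
# offsets, decode to ASCII characters spelling a hidden message.

def sieve(n):
    """Return all primes <= n. Marking can stop at sqrt(n): any composite
    <= n has a prime factor no bigger than that."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

primes = sieve(1000)

# Stand-in message; the real Easter egg printed "ISBN = " followed by the
# book's actual ISBN.
message = "ISBN = ..."

# Encode each character as (prime index, random-looking offset).
encoded = [(i, ord(ch) - primes[i]) for i, ch in enumerate(message)]

# Decode: index into the primes, add the offsets back, print the message.
print("".join(chr(primes[i] + offset) for i, offset in encoded))
```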
But I have to say, Alex was wrong. It wasn't when the DVD came out. Oh no, it only had to be on BitTorrent for 24 hours, long before the DVD came out, before there were pages about this thing on the internet. There was a whole Reddit thread. There's a GitHub repository with my piece of code, and the Reddit thread includes a whole lot of criticism of my coding style. It's not PEP 8 compliant, I don't know. And it's really funny. And it's true, it's really true. But what I really regretted was that I put the wrong terminating condition on the loop. You can terminate the Sieve of Eratosthenes at the square root of n; you don't have to go all the way to n over two. But for some reason, and I wasn't paying attention, four glasses of sake, it terminates after n over two. It's inefficient. Well, that's fine. Maybe that's the bug in her code. And it will always be a bug. It's not a bug. I mean, give me a break, it's not actually a bug. It does meet the specification; it's just not efficient. Fair enough. Fair enough. We should ask some of these questions from Twitter. I know people are very excited to ask you questions, and we already asked one. So, Patrick Atwater, let's get to his question. Okay, so Craig asks how much closer we are to the sort of general Hollywood-style AI now than we were in the 50s. In the 50s? I think what he's alluding to is the flying-car, pastel version, the kind of crazy futuristic vision of AI from the 50s, versus the AI that they're portraying in the movie. But I can tell you that we're precisely 60 years closer than the 50s. But I don't think that's the kind of answer they're after. Well, of course, you have to remember that in Ex Machina, as in all films, the way that AI is portrayed... really, a lot of it is to do with making a good film and a good story.
And in particular, people love stories where the AI is some kind of enemy, a nemesis, and so on. Actually, Garry Kasparov, who we just heard speak, made a very interesting point about this, didn't he? He pointed out, and I think he's right, that there's been a kind of change from very positive, utopian views in science fiction, where we're going to get to the stars, to more dystopian views of things, like the Terminator and so on. But anyway, it certainly makes for a good story if your AI is bad. And it also makes for a good story if your AI is embodied and very human-like. Whereas in reality, insofar as AI is going to get more and more sophisticated and closer and closer to human-level intelligence, it's not necessarily going to be human-like. It's not necessarily going to be embodied in robotic form. Or if it is embodied in robotic form, it might not be in humanoid form. In a sense, a self-driving car is a kind of perfect robot. So I think that things will be a bit different from the way Hollywood has portrayed them. And of course, if you go back to the 50s... I mean, it's very interesting to look at retro science fiction; I love retro science fiction. You look at something like Forbidden Planet: Robby in Forbidden Planet is this metal-hunk thing, which is completely impractical. And you think, how would it get around at all? And how would it do anything with those claw arms that it's got? So clearly we've changed a lot in our view of the kinds of bodies we think we might be able to make. And I think it's also quite difficult because there's not really clear benchmarking happening right now, because it's not obvious. If it was just energy and compute going into this, then the race would be... I mean, it wouldn't be over, but it would be very obvious who's winning and what's going on. Whereas here, it seems that there are clear breakthroughs that have to happen. Yeah, that's certainly my view. So if we're thinking now about the question of when we might get to human-level AI or artificial general intelligence, then I think we really don't know. Certainly some people draw graphs that extrapolate computing power, how fast the world's fastest supercomputers are. And depending on how you calculate it, we're pretty close to human-brain-scale computing already in the world's fastest supercomputers, and we will get there within the next couple of years. But that doesn't mean to say we know how to build human-level intelligence. That's altogether a different thing. And there's controversy about how you make that calculation as well. I mean, what do you count? How do you count the computational power of one neuron, or one synapse? It may be that some of the immense complexity in the synapse is functionally irrelevant. It's chemically important and so on, but it might be functionally irrelevant to cognition. So there are a lot of open questions there.
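For a sense of where such "brain-scale" claims come from, here is a rough back-of-the-envelope calculation. The constants are standard order-of-magnitude figures supplied here for illustration, not numbers from the conversation, and every one of them is contested, as he notes.

```python
# Rough back-of-the-envelope for "brain-scale compute" (order-of-magnitude
# textbook figures supplied for illustration, not numbers from the
# conversation; every constant here is debatable).
neurons = 8.6e10            # roughly 86 billion neurons
synapses_per_neuron = 1e4   # order of magnitude
mean_firing_rate_hz = 10    # generous average spike rate

ops_per_second = neurons * synapses_per_neuron * mean_firing_rate_hz
print(f"~{ops_per_second:.0e} synaptic events per second")  # ~9e+15

# The fastest supercomputers around the era of this interview peaked near
# 1e17 FLOPS, which is why the gap looked small; whether one synaptic event
# equals one floating-point operation is exactly the open question above.
```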
But even if we allow a conservative estimate, and we assume that we're going to have computing power equivalent to that of a human brain by, say, 2020 or 2022, we would still need to understand exactly how to use all of that computing power to realize intelligence. And I think there are probably an unknown number of conceptual breakthroughs between here and there. Yeah, I mean, specific AI, absolutely happening, but this general AI that he's talking about... Yeah, exactly. So clearly, there's lots of specialist artificial intelligence, where we're getting really good at things like image recognition and image understanding, and speech. So speech recognition is more or less being cracked, just the process of turning the raw waveform into text. But then real understanding of the words, that's a whole other story. And while today's personal assistants can be quite cool, and they're going to get better and better, they're still a way off displaying any genuine understanding of the words being used. I think that will happen in due course, but we're not quite there yet. Yeah, and fortunately or unfortunately, that also underlies one of the other questions that I wanted to ask. So this is from, I think, MeccaFloss on Twitter. Their question is: excellent movie, but why is Asimov's law forgotten? That would be the absolute first thing they'd ask. So just for people who don't know what that is, there are three laws of robotics, right? I wrote these down. A robot may not injure a human being or, through inaction, allow a human being to come to harm; that's the first one. Two, a robot must obey orders given to it by human beings, except where such orders would conflict with the first law. And the third law: a robot must protect its own existence, as long as such protection does not conflict with the first or second law. And so their point is basically, why is the first law broken in Ex Machina? Yeah. Well, of course, Asimov's laws are themselves the product of science fiction.
Yeah, they're not real laws. So Asimov wrote those laws down in order to make for great science fiction stories. And all of Asimov's stories center on the ambiguities and the difficulties of interpreting those laws, or realizing them in actual machines, and often on the sort of moral dilemmas, as it were, that the robot is faced with in trying to uphold them. So even if we did suppose that we wanted to somehow put something like those laws into a robot, it would be immensely difficult. But I should take a step back and say why it's not relevant to robotics today. Let me qualify that. Of course, there are people who want to build autonomous weapons and all kinds of things like that. And you might say to yourself, well, I would very much like it if somebody was paying attention to things a bit like Asimov's laws and saying, you shouldn't build a robot that is capable of killing people. But that's a principle, if we were to have it, that the designers and engineers would be exercising, not one that the robot itself was exercising. So that's the sense in which it's not relevant today: we don't know how, today, to make an AI that is capable of even comprehending those laws. So that's the first point. But then, when we're thinking about the future, and of course this is Ex Machina, so why not? Well, it would obviously again make a very different story if Asimov's laws were put into Ava. But let's suppose it was a world where we were minded to put Asimov's laws into Ava. Well, maybe Ava might reason that she is human. What is the difference between herself and a human? And maybe she would reason that she shouldn't allow herself to come to harm, and therefore she was justified in what she was doing. Who knows? I mean, it's just a story. Right. So I think we have to remember that it's just a story. I think science fiction is really, really good at making us think about the issues, but at the same time, we always have to remember that these are just stories, that there's a difference between fantasy and reality. And I think it's also covered in the movie, when Nathan and Caleb are debating.
I think Nathan's criticizing Caleb over going with his gut reaction, his ego, and not... like, if he were to think through every logical possibility for every action, he would never do anything. Right. Which is kind of directly against all these laws. Yeah, Ava would never do anything if she could possibly harm someone down the road by, you know, burning fossil fuel by being in the helicopter. Well, indeed. Yeah. I mean, I guess we all have to confront those sorts of dilemmas all the time. And indeed, moral philosophers have plenty of examples of these kinds of dilemmas that make it obvious that no simple, single rule is really enough by itself. Trolley problems, if you know them, where the trolley is heading down the track, and there are points, and for some unknown reason somebody is tied across the tracks on one fork. It's very cinematic. Very cinematic. And on the other fork, three people are tied across the tracks. And the points are currently such that the trolley is going to go over the three people and kill them. And you are faced with the possibility of changing the points so that the trolley goes down the first track and kills only one person. So what do you do? And, you know, philosophers can spend entire conferences debating what the answer to this is and thinking of variations, and so on. And that little thought experiment, Philippa Foot's thought experiment, is a distillation of much more complex moral dilemmas that exist in the real world. Absolutely. So before we go, I do want to talk about your thoughts on broader things. Obviously, you work here. We haven't been broad enough? Yeah, no, broader things than Ex Machina. So obviously you're here at DeepMind, and you're at Imperial as well, 20% of the time. Can you talk a little bit about things you're excited about for the future, as far as it relates to what you're working on? Yeah. Well, I've recently got very interested in deep reinforcement learning. So deep reinforcement learning is one of those things that DeepMind has made famous, really. They published this paper back in 2014, and the Nature version in 2015, about a system that could learn to play these retro Atari games completely from scratch. So all the system sees is just the screen, just the pixels on the screen. It's got no idea what objects are present in the game; it just sees raw pixels, and it sees the score. And it has to learn by trial and error how to get a good score. And they managed to produce this system, which is capable of learning a huge number of these Atari games completely from scratch, getting in some cases superhuman-level performance, in other cases human-level performance. And in some cases, it wasn't too good at the games. And I think that opened up a whole new field. That system is called DQN. To my mind, DQN is in a sense one of the very first general intelligences, because it learns completely from scratch; you can throw a whole variety of problems at it, and it doesn't always do that well, but in many cases it does pretty well. Yeah. So, to answer your question, I've got very interested in this field of deep reinforcement learning.
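To make the trial-and-error idea concrete, here is a minimal tabular Q-learning sketch. This is a toy illustration of the algorithm family, not DeepMind's DQN code; DQN replaces the table with a deep network over raw pixels and adds replay memory and a target network.

```python
# Minimal tabular Q-learning (toy illustration, not DeepMind's DQN code).
import random
from collections import defaultdict

class ChainEnv:
    """Toy stand-in for a game: walk right along a 4-cell corridor;
    reward 1 only at the far end. Actions: 0 = left, 1 = right."""
    actions = [0, 1]

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos = max(0, min(3, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 3
        return self.pos, (1.0 if done else 0.0), done

def q_learning(env, episodes=200, alpha=0.1, gamma=0.9, epsilon=0.3):
    q = defaultdict(float)  # (state, action) -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: usually exploit the best-looking action,
            # sometimes explore at random.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # TD update: nudge Q toward reward + discounted best next value.
            target = reward + gamma * max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q

q = q_learning(ChainEnv())
print(q[(0, 1)])  # approaches gamma**2 = 0.81 with enough episodes
```

Even on this toy corridor, most early steps are wasted exploration before the reward signal propagates back through the table, which is a small-scale version of the slowness he describes next.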
Long before I joined DeepMind, I first started playing with their DQN system when they made the source code public. And I pretty quickly realized that it's got quite a lot of shortcomings as well, as today's deep reinforcement learning systems all have. It is very, very slow at learning, for a start. And when you watch it learning, you think, actually, this thing is really stupid. Because it might get to superhuman performance eventually, but my goodness, it takes a long time to do it. Yeah. Even on Pong or something like that, it takes an awful long time. Whereas a human is very quickly able to work out some general principles: what are the objects, what are the sort of rules. You work it out very quickly. And so it made me think about my ancient past in classical artificial intelligence, symbolic AI. And it made me realize that there were various ideas from symbolic AI that could be rehabilitated and put into deep reinforcement learning systems in a more modern guise. So that's the kind of thing that I'm most interested in right now. Very cool. Yeah, that was actually one of my favorite questions from the Kasparov talk today. Someone who was working on Go asked exactly that: how can humans so quickly work out what is not relevant to the game? They execute the game, I guess it was chess, in 50 moves rather than 100 moves. Yeah. And it's very much that framing. That was Thore, who's one of the people on the AlphaGo team. And that's a very deep question he was asking, I think. Yeah, it's fascinating. Cool. So if someone wants to learn more about you, or more about the field in general, what would you recommend? If they want to learn more about me, I can't think why they would want to, but they can Google my name and find my website. If they want to learn more about the field in general, well, we're in a very fortunate position of having an awful lot of material out there on the internet these days: all kinds of lectures and TED talks and TEDx talks and so on. And if people want a bit more technical detail, there are some excellent tutorials about deep learning and so on out there. There are lots of MOOCs, massive open online courses. So there's a huge amount of material on the internet.
Do you have a budding career in technical advising, or is there an Ex Machina 2? So, people often ask me about Ex Machina 2, which of course is none of my business. But whenever I've heard Alex Garland asked about that, he always says he's got no intention of producing an Ex Machina 2, that it was a one-off. As for scientific advising, yes, I have been involved in a few other projects. There was a theatre project I was involved in, which I enjoyed, with Nick Payne at the Donmar Warehouse here. What's that called? So this play by Nick Payne was called Elegy, and it's about an elderly couple where one of them has got a dementia-like disease. It's also set 10 minutes into the future.
One of them has got a dementia-like disease, but techniques have been developed whereby these diseases can be cured; the cost you have to pay, though, is that you lose a lot of your memories. So the play really centers on the difficulties for the partner, knowing that her partner's memories of their first meetings and so on, and of their love, are going to actually vanish. So it was about that, and it was more neuroscience kind of stuff. But I've also been involved with an artistic collective called Random International, and Random International do some amazingly cool stuff, so I highly recommend Googling them.
And so they were famous for this thing called Rain Room, at MoMA in New York. Yes, that's right. It's toured, but it was indeed at MoMA in New York. So all of their art uses technology in various interesting ways, and it's often about how we interact with technology. So in Rain Room, the idea is that it's a room with sprinklers in the ceiling, and you walk around in this room and it's raining everywhere, but there's some clever technology that senses where you are. And you worked on that? No, no. But I should finish the description. So there's some clever technology that senses where you are and turns off the sprinklers immediately above your head. So you walk around in this room miraculously never getting wet. So that's one of the things. They also worked on this amazing sculpture called Fifteen Points. And this is based on point-light displays. So a point-light display is one of these little displays where on the screen you've just got, say, 15 dots, and these 15 dots move around, and suddenly you see that it's a person, because the 15 dots are like the elbow joints and the neck and the head and the torso and the knees and so on. And you see these things moving around and you instantly interpret it as human motion. In fact, you can even tell whether the person is running or walking or digging, or often even whether it's a man or a woman, just from these 15 points moving. So they constructed this beautiful sculpture, which has these rods that have little lights on the end, rods and motors.
And so it's very much a piece of mechanics, a mechanical, robot-like thing. And when you see it stationary, it's just this weird kind of contraption. But then it starts moving, with all the lights on the end, and suddenly you see this person appear, walking towards you. And I thought that was a wonderful example of how we see someone there when there isn't anyone. And for me that was very interesting, because it made me think about when we do that with machines, where maybe we think that there's someone at home when there isn't. So a lot of their art is about those kinds of questions. That's so cool. I think that's a perfect place to end it. And we'll link to all their work as well. Okay, great. All right, thanks. Yeah, sure, thank you.