Note: This transcription is grouped by topic and subtopic, and each paragraph is timed to the original video (e.g., 01:53).
Introducing The Guests
Introduction and Guests' Backgrounds (00:00)
Welcome everybody to another Voices with Vervaeke. I'm very excited about this. This is a special video. I get to talk again with my good friend Johannes Niederhauser from the Halkyon Academy. Johannes and I have done a lot together, and we're going to be working together again. I'm going to be doing a course for him and Halkyon this summer, and so I'm very much looking forward to that. And then somebody I've met before, but not in depth, and I'm hoping to get to know him better in this conversation: that's Sean McFadden. Why don't we start with each of you introducing yourselves? We'll start with you, Sean, saying a little bit about yourself and why you're here in this particular discussion. Yeah, thanks, John. So my background is in physics. I did an undergrad in physics, and right now I'm doing a degree called neuroengineering. It's a new, emerging field at the intersection of machines and the human brain, and it's a degree I'm doing at the Technical University in Munich. In the course of this, I'm now doing my master's thesis on the mathematics of general complex-system modeling at the Max Planck Institute in Leipzig. Throughout my whole studies, I've been very interested in philosophy, specifically the philosophy of mathematics, science, and technology. Pretty much when I started my undergrad, I read my first book of Plato and just went on from there. That led to me starting to write a book during the corona lockdowns called The Machine, on the philosophy of the machine, because I see technology as this specific convergence point we're at now. And in the course of my studies, I did an internship at the Centre for the Future of Intelligence in Cambridge, which is where I invited both of you, John and Johannes, to a talk. And that's pretty much how we first came in contact. Yes. Johannes. Thank you very much, John. It's good to see you again. I hope to see you again soon in person.
If you make it to Cambridge again later this year, I hope you'll let me know. Or London. Yeah, yeah. We'll find out. We hope so. Well, so I have a PhD in philosophy. I worked on Heidegger. I focused on death, and I have also done--I work on Heidegger's philosophy of technology.
Discussing Ai And Philosophy
Heidegger's Philosophy of Technology (02:34)
Heidegger does not really, obviously, consider AI, or artificial intelligence. But he has a great deal to say on cybernetics and a few other issues and how that ties to--well, to Western metaphysics and, respectively, its collapse, as Heidegger perhaps put it. So let's see where it leads us today. I think John will be giving us a few pointers. Well, the pointers are around the two published videos. And I think I sent you both the link to the one video, in which I--well, let me start more personally, even existentially. My students will tell you that I always had a hope that the AI project would advance because the science of intelligence, cognition, and consciousness had advanced enough, so that the technology would be tethered to the science.
Hopes and Fears in AI Development (03:43)
And then I had made it my particular endeavor to tether that science to the love of wisdom, the cultivation of meaning and wisdom, through the theory of relevance realization, predictive processing, religio, all of that. So I had tried to bind the three together--not just me, other people too. And my hope had been that that would be how we would do it: that AGI, artificial general intelligence, would come into existence through the advent of knowledge that was bound to the cultivation of wisdom and meaning, human flourishing. My fear had always been--and I expressed both--that somehow we would just hack our way into AGI. We would not have developed the science significantly, and we would have even less resonance with a philosophical framework. And my hopes were dashed, and my fears were realized. Part of what I argued in the video is that the machine--I'll just call them the machines, because it's not one machine; it's a whole research program, and that's not even the right word, but anyways--the machines are basically not that much of a scientific advance. There are a couple of scientific things that have come out of it that we've learned, and I took pains to try and say what those were. But by and large, this doesn't advance our understanding of a general theory, a generalizable theory, of intelligence, let alone consciousness, let alone selfhood, personhood, rationality, accountability--rationality in both the Platonic and the Hegelian sense, all of that. So none of that, even though I had also argued--and I'd made a prediction as a scientist--that generating artificial intelligence without simultaneously working on rationality, let alone wisdom, would give you highly intelligent but highly self-deceptive machines, which is what we now have, by the way. So lucky me, my prediction came true, which says something about the relationship between intelligence and rationality that is not well understood.
So when that first happened--I mean, when one's hopes get dashed and one's fears get realized, that's an unpleasant moment, to say the least. And I decided--I started doing more research. And then I realized that there was a tremendous amount of hyperbole, confusion, misinformation, I think sometimes even manipulation in order to enhance the sale of stock and garner investors. There was a lot of stuff happening. I'm not saying everybody was a bad-faith actor. Most people were in good faith, but there was confusion. But there was also--I could tell there was some bad-faith stuff going on. We should worry about the research papers being produced by the corporations that are generating these machines--some of that stuff is conflict of interest, as we used to say in academic circles. It doesn't seem to matter now. So I thought I should respond. So I did a lot of work, a lot of reflection. And Johannes, I can tell you that, yes, I became very obsessed at all hours of the day, working on this, taking notes, generating arguments, counterarguments. And I presented an essay where I tried to carefully go through the scientific, the philosophical, and the spiritual import of these machines, so we got a very clear picture--not making predictions about when certain things would happen, but trying to point to certain threshold events where we would move from just intelligence to possibly rationality, to possibly consciousness, et cetera, and what those threshold points would look like, so that we could recognize decision points that are still ahead of us. And then I made a proposal for how we could address what's called the alignment problem: how to get these machines, if they continue to advance, to be ones that would align with our best moral intuitions and our best visions of flourishing persons. And so I presented that argument. And I guess I'm interested in getting, first of all, reactions, responses, feedback, and then moving into a discussion.
And if you want me to review anything, to clarify anything, I'm very happy to do that as well. So how's that for an initial framing? That sounds good. I would immediately have a question or two, specifically concerning the question of consciousness and AI, because I think that's a very, very popular topic, especially in science fiction--sentient machines and all of these things.
Implementing Consciousness in Machines (09:03)
And I got interested and involved in consciousness research, specifically scientific consciousness research, as opposed to purely philosophical research. I'd be interested in what kind of theories or what kind of paradigm you see in the study of consciousness in cognitive science right now. And in what way would one be able to implement that in a machine, and why hasn't it been done, or why can't it be done, potentially? So first of all, one of the things that became really amazing was how quickly a lot of the advocates of the machine dropped their ontology without realizing it. They started to invoke real emergence that was not epiphenomenal but causal. So they abandoned a flat ontology and just adopted a leveled ontology--very Neoplatonic in a lot of ways. And then they found the scientific theory of consciousness that was most easily in line, which was Tononi's Integrated Information Theory, and said, well, look, it satisfies this. And for me, that became a modus tollens on Tononi's theory. It's like, yeah, probably it does satisfy the theory, and there are really, really good reasons for believing that it's not conscious. So it's not a good theory, you mean? Exactly, exactly. I've actually had a specific question for you about that theory for a long time now. Namely, I feel like there's a severe category error being made when applying that theory. If you look at the premises that Tononi based this theory on, there are these alleged phenomenal axioms. Allegedly, he takes five axioms and then formalizes them mathematically. And the axioms are said to address experience, the subjective first-person experience--there's even this famous picture from Ernst Mach in the paper. But when it's applied, I've always seen it applied to the brain, for example, looking at the brain structure, whereas in its own premises it was meant to represent experience. So in a way--that's what I thought.
In a way, the application to just a system like a machine or our brain is a category error. It should be applied to the first-person experience. And then you could potentially look at the complementary information in the brain, whether it's somehow complementary to the experience--the phi, or Tononi complexity, of the experience. And then that would be more in line with some sort of predictive processing theory, where it's like, OK, your brain always exhibits the complementary information that maximizes the phi of your experience at a given moment. And I find that ties in very well with your idea of the salience landscape, et cetera. What are your thoughts? So, specifically on this--and for those of you who are not up on the specifics, just bear with us, please. There's a missing axiom, first of all, which is his identification axiom, which he doesn't state anywhere in his work: how is it that what you're taking to be the thing that is identical to consciousness can be stated in information-theoretic terms? That's not stated in the axioms. He thinks he can derive it from the axioms, but I think it's axiomatic, which is, I think, the point you're making. So I think as a formalism it just fails right there, because there's an axiom that is not stated, nor is it justified or defended. It comes out when he just says, you know, you take all these measures and you create this multi-dimensional space, and that's somehow the qualitative aspect. Whenever I teach my students that, they sort of look at me like, well, why? Why does a weird shape in an abstract multi-dimensional space not merely correspond to, but be identical to, the qualia? And that part, again, he just asserts. That's why I think it's axiomatic, by the way. And so I find that whole thing very problematic.
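[Editor's aside: the "integration" at issue can be illustrated in a drastically simplified toy form that is nothing like Tononi's full phi calculus. One crude proxy for how much a two-part system is "more than its parts" is the mutual information across a bipartition; all distributions and numbers below are illustrative only.]

```python
import itertools
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits for a 2x2 joint distribution,
    a crude stand-in for 'information the whole carries beyond its parts'."""
    px = [sum(joint[x]) for x in range(2)]                      # marginal of X
    py = [sum(joint[x][y] for x in range(2)) for y in range(2)]  # marginal of Y
    mi = 0.0
    for x, y in itertools.product(range(2), range(2)):
        p = joint[x][y]
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Perfectly correlated parts: the whole carries 1 bit beyond the parts.
correlated = [[0.5, 0.0], [0.0, 0.5]]
# Independent parts: zero integration by this crude measure.
independent = [[0.25, 0.25], [0.25, 0.25]]

print(mutual_information(correlated))   # 1.0
print(mutual_information(independent))  # 0.0
```

Note that nothing in this calculation refers to experience at all, which is exactly the category worry being raised: the measure applies to any correlated system whatsoever.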
And then he--and then, of course, he promoted a particular version of a Turing test, which, under some not-implausible construals, you can get GPT-4 to pass. It can tell you what's weird about a picture. And he goes, well, there--that must mean it's conscious, because it passes Tononi's test. And it's like, no, that tells me exactly that Tononi's test is wrong, precisely because it didn't take into account the possibility of what we have with the GPT machine. So let me be very clear about what I think they have and don't have that's relevant to the issues of both intelligence and consciousness. I think there is some implementation, which I will later qualify, of one dimension of relevance realization that was already specified way back in the 2012 paper that I wrote: the compression-particularization recursion happening within the deep learning. There is some implementation of predictive processing. And I just released, with Brett Andersen and Mark Miller, a paper at the end of last year integrating relevance realization theory and predictive processing theory together and showing how they go together. Because, of course, there's the predictive processing with the probability relationships between the tokens of language. So there is some implementation of some aspect of predictive processing--one that will not generalize to many non-linguistic domains.
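[Editor's aside: the "probability relationships between the tokens of language" can be illustrated with a deliberately tiny next-token model--a bigram counter over a toy corpus. This is a sketch of the bare statistical idea, a far cry from an actual transformer; the corpus and names are invented for illustration.]

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count bigram transitions: how often each word follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_token_probs(word):
    """Conditional distribution over the next token, given the current one."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# 'cat' is twice as likely as 'mat' to follow 'the' in this corpus.
print(next_token_probs("the"))
```

The point of the sketch: the "knowledge" in such a model is entirely a compression of regularities that humans already put into the text.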
Relevance Realization and Autopoiesis (15:15)
So there's one of many dimensions of relevance realization that has been implemented. What it shows is just that, made massively recursive, it can get you an entity that can do some very sophisticated problem-solving. But if that's the case, that also licenses this argument: well, what are the other dimensions, and the other aspects of predictive processing, that are theoretically bound up with those two but missing from the machine? Most of the other dimensions of relevance realization are missing. And the predictive processing has not reached the level of sophistication of the generalized predictive processing models of Karl Friston, Andy Clark, and one of my former students, now co-author, Mark Miller. So a lot of relevance realization is missing. A lot of predictive processing is missing. But some is there, and even that some, when it's made massively recursive, is impressive. That tells you something. So its strength also allows you to talk rigorously about its weakness: it's strong because of these things, but precisely because it lacks the fullness of those things, it is weak in those things. And I think that's a reasonably tight argument. Now, the big caveat is that relevance realization is always grounded in relevance to. And that "to" is ontologically grounded in entities that have real needs, that genuinely have problems for themselves, because problems are not part of the physics of the world. And I still stand by the argument that that requires autopoiesis. A system has to be making itself, taking care of itself, in order to care about certain aspects of the environment, in order to care about certain information. To the degree that these machines are not autopoietic--and not embodied, embedded, enacted, extended, all of the E's--these machines actually don't have relevance realization for themselves, which means they can't have meaning for themselves. I think Kolchinsky and Wolpert are right.
The way you get meaning is you take technical information, and it's information that is causally relevant to an entity's maintaining itself. They bind meaning, real meaning, to real relevance realization. They don't use that language, but that's what they're talking about, bound to real autopoiesis. So there is ultimately a pantomime of relevance realization. There's no real meaning. And I think, lacking both of those, you can't have any of the functions of consciousness--what I call the adverbial qualia, the salience landscaping, the here-nowness-togetherness. That depends on relevance realization and predictive processing. It depends on an autopoietic centeredness, because consciousness is a centered phenomenon. It's a temporally bound phenomenon, et cetera, et cetera. It is a unified phenomenon. And it functions in situations that are high in complexity, novelty, uncertainty, and ill-definedness, where relevance realization has to be at its utmost best. And some of the other theories of consciousness, like the global workspace and radical plasticity, bring that aspect out. So IIT definitely lacks the centrality aspect--the centrality and for-the-sake-of-ness, the ownership dimension of consciousness. Consciousness is centered on me. It's for me, in the sense of how things are relevant to me and how they are aspectualized for me. This is a water container for me. It's not 700 trillion atoms or something like that, which could be true about it, and even something I know to be true about it. And so I think it's very reasonable to conclude that this machine does not have consciousness or original meaning, which means it can't care about the truth. Because caring requires relevance realization and consciousness, and it can't monitor itself--and I take those to be necessary features for being a genuinely rational entity. So it is lacking in consciousness.
It's lacking in a capacity for meaning that allows it to care about the truth for itself, to worry about whether or not it's self-deceptive. And therefore it also can't be rational. That's the argument I made. But here's the point I want to make before it sounds like, oh, we're safe. We're not. Because I do work with people who are working on all of these projects: how to make artificial autopoiesis that supports cognitive functions; how to create the possibility of a higher-order, reflective aspect of cognition that could do a lot of the stuff we're talking about here; how to build these machines into social systems in which autopoiesis is bound to accountability--this is the Hegelian dimension of rationality. All of those projects are under way and making significant progress. And so there is a convergence point. All of these projects could come into convergence, and that's the threshold that I'm pointing to. And we could bind that convergence to a specific proposal I have about how to deal with the alignment problem. Or we could not. The choice is going to be up to us. Sorry, I wanted to answer that at length. That's a really central question. - I have a short but very different question. What always strikes me about these discussions is that there seems to be the implication that what we're talking about is ultimately parametrical and can be drawn up. You speak of centeredness, of nowness, of temporal boundedness, but we don't really know what time is, or whether that would even be the correct question. And even if we find all the parameters--well, let's say for wisdom--it would still ultimately be a program put into a machine that it runs on. - But that's not the case. - And we might consider wisdom not to be something that's parametrical, or something that we can find all the necessary conditions or the sufficient conditions for, to then produce a copy of.
And my other question would be: why not have these machines--if for whatever reason they're being built--why not have them be what they are, which is a tool? Why would there be a need to make them conscious, or aware of themselves, or to make a semblance of awareness? I don't fully understand, because what they are, to me, let's put it simply, is tools. Or is the reason for that argument that there is a threat of these different AI systems aligning, and this is going to happen for a reason that is perhaps--let's say fate, though you're not going to use that term--that there's something that is unstoppable about this process, but also something that we can perhaps intercept? So let's say the process, the trajectory, is unstoppable; Pandora's box is open. - Yeah. - So that's why we need to intercept. So I've got these two or three questions here now. - Well, no, those are good. Sorry, I interrupted you at one point. - No, let's try to take the questions in order. The first one: do we have access to all the parameters, and therefore can we be assured, et cetera? The second one: why are we doing it? And the third one: is there a looming convergence that sort of has almost a life of its own? I take it to be those three questions. On the first one: we don't program these machines. That's the issue with them, and that's exactly my first point. These were not built from a science of understanding intelligence or consciousness. Only some significant but not comprehensive parameters have been put in. This is what I meant when I said--and I take these people seriously, because this is the language I think they have to use--this phenomenon self-organizes, and many of the behaviors it demonstrates are emergent. They were not built in and built from programmed parameters.
In fact, the machines keep surprising their makers as to what they can do. In that sense, they're very much more like us, right? We don't know how the self-organization of our cognition and our consciousness produces these things. And I had hoped--let me put it this way--I had hoped that your presupposition was correct, that we wouldn't be able to make significant progress without a significant scientific understanding. But that's not the case. And let me show you how much of a hack this is, to try and get it clear. You have to put into the way this works a parameter called temperature. Temperature is how you randomize the way it generates its response: you put in a degree of randomization, because if you don't, it very quickly becomes canned and repetitive. So you throw in the randomization, but if you throw in too much randomization, it gets weird and wonky. So they literally just titrated until they got to a certain degree--I think it's like 0.8--of temperature, of randomization, where the human beings liked it. That's how this was done. Basically: let's throw in randomness so that it self-organizes in ways we don't understand, and when the human judges like it, we'll leave it at that level. That's how it was done. Right, and that's how parasitic it is. Do you see? Think about it. This machine works by compiling all the ways we have built epistemic relevance into the statistical relevance between terms--all the text we produce, the massive amount of that. That's all been done. We have done the job of taking epistemic relevance and coding it into statistical relevance. And then we organize the internet by our attention, and we use humans in the reinforcement learning, and their judgments about what's weird and what's not weird, in order to get the machines. This is what I mean.
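[Editor's aside: the temperature mechanism described here can be sketched directly. Temperature rescales the model's raw scores ("logits") before the softmax, so low temperature makes sampling near-greedy and repetitive, while high temperature flattens the distribution toward randomness. The logits and values below are illustrative, not from any real model.]

```python
import math
import random

def sample_with_temperature(logits, temperature=0.8, rng=random):
    """Softmax over logits scaled by 1/temperature, then sample one index.
    Returns (sampled_index, probability_distribution)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = rng.choices(range(len(logits)), weights=probs, k=1)[0]
    return idx, probs

logits = [2.0, 1.0, 0.1]                         # toy scores for three candidate tokens
_, cold = sample_with_temperature(logits, temperature=0.1)  # near-deterministic
_, warm = sample_with_temperature(logits, temperature=2.0)  # much flatter
print(cold[0], warm[0])  # probability of the top token: near 1.0 vs. around one half
```

The "titration" described in the conversation amounts to tuning this single scalar until human judges like the output.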
This is what I mean by how it can't explain relevance realization, 'cause it fundamentally, parasitically presupposes it. Yes, that I would be in agreement with. Yeah, okay. So that's clear now, thank you. Yeah, I feel like that also explains a lot of the misconceptions when talking about the consciousness of these machines. Oh, totally. Because in a way, you're not really communicating with a conscious entity; you're just communicating with, let's say, a remixed version of a hundred million billion real communications of humans that have been reproduced. So it's almost like you're seeing a human in the mirror and then you're asking whether the mirror is conscious. It's that, in a way. And that's kind of what we ask. But we ask it like a nightmare, after seeing the mirror image coming true. Sorry, that was just an aside. But you have to understand, there's power in that. Because if you think--and I do, and I've argued for it--that there is an emergent collective intelligence, a common law of thought, right, to use an analogy for distributed cognition, that's what this machine is. It takes all of that emergent collective intelligence of distributed cognition and then organizes it, compresses it--literally compresses it--into a singular interface. So you're talking, in some way, to the collective intelligence of humanity across time and across the globe. And you have to understand that gives it a lot of power. So it can lack intelligence--it's been hacked into existence--it can lack consciousness, but that doesn't mean it's not powerful. If you believe that collective intelligence is powerful, which I do, then these machines are powerful. And so this is what I say to you, Johannes, to get to your point: I don't think they're just tools already. And I don't think, very rapidly, they're going to be treated as tools.
I predict one thing you're going to see very quickly is cargo cults around these AIs--people entering into religious and pseudo-religious relationships with them. - Let me just qualify that: we are seeing that. - Yeah, it's already happening, I have to grant it. - It's not that we will see; we are seeing. But let me just see what you think of this, both of you. I would maybe not go so far as saying these machines actually crack the universal, crack into universal intelligence. But subjectivity--the human being as the subject, simply put--whose categories structure objectivity, is ultimately looking for the perfect objectification through what it deems firm knowledge or wisdom or whatever, through information that is correct, or of value. So in some sense, we don't even have to go to cargo cults.
Self-deception in AI (30:06)
The fact is, what I'm aware of is that people--I've not used ChatGPT much; I don't think I've used it at all--but I'm aware of people who are using it just as a search engine that they perceive as being more objectively true. - It's not. - Correct. Yeah. - No, because it's massively difficult to check that. - I would--okay, yeah. But you see, that is in some sense the self-deception of human subjectivity coming back at itself. To quote good old Fichte: Fichte once said, "When I look at nature, I only see myself." Insofar as, within a Kantian framework--which is, I think, ultimately the framework of the sciences--what we don't have access to is things as they really are in themselves; which is, as you know, the attempt of phenomenology, to get back to the things as they show themselves, by themselves. But we have access to the phenomena, to the phenomenal sphere, through subjective categories, which structure the objectivity of objects. Those objects are given, they're formed, through representation, cognition, et cetera. So in some sense--just to be a bit facetious perhaps--the self-deception that we see is also the reflection of our self-deception, of our will to have perfect objectivity, thrown back at us from our self-isolation, our collapse into our inwardness. And all the attempts, be it by Hegel or by phenomenology, to get back out into the world post-Descartes have failed. This is a bit exaggerated, but do you see what I'm trying to say? - I do. I mean, I'll give people a concrete instance of what you're talking about. There was this video where the person says, "Look, GPT-4 can do something we didn't know. It can summarize videos." And so they say, to do this, take the URL and just plug it into GPT, and it'll summarize the video for you. And it's amazing. And it's like, it's not doing any such thing. It's taking the title information from the URL, and it confabulates, right?
It guesses, you know, what the content of the video is. It's not actually doing it, right? So yes, because human beings don't understand a lot of what these machines really are, these machines very much are heightening self-deception. And we have to worry about the degree to which--I said this for GPT-3--it's like, I don't care that human beings can talk to it. I want to know if two GPT-3s can talk to each other in an extended fashion that I could then listen in on and find an insightful conversation. And GPT-4 can't even do that, because it's got memory capacity limits, right? So there's all kinds of stuff in which we are projecting into these machines massively. And I think, first of all, just before we get into the philosophical depths of that, making people aware of that is very, very important. Okay, so first of all, that. Secondly, I think this goes towards my proposal. First of all, let's be clear: a lot of the objectivity isn't objectivity, it's confirmation bias. The machine confabulates to give you what you want, right? Yeah, yeah. So I call into question whether it's objective in the fashion that we mean. I propose to you that--and I don't want to unpack it through, you know, Brandom and other thinkers on Hegel--that's what I mean about binding autopoiesis to accountability. I think rationality is what we're actually talking about here. And my proposal is the following. We get to the place where we have good theoretical argument, good science, good evidence, that these beings are capable of rationality. Now, they might not be, and then the project stops, and we know what was unique about us. Or we crack not just intelligence, which is only weakly predictive of rationality--0.3--but what we have to do is give it all this other stuff.
We have to give it genuine embodiment, genuine autopoiesis, a genuine ability to care about self-deception, a genuine sense of accountability to others--all of the stuff that we take for granted within, you know, a broadened picture. Look, what the machines have shown is that logic is massively insufficient for rationality. These machines can rattle off moral arguments and moral philosophy better than most undergraduates; that does not make them one iota moral beings. I mean, the poverty of propositional knowledge is also coming up. We'd have to give them all this other stuff. If we can do this--and we can do it. That was the poverty of, sorry, the poverty of essay writing in general. Yes. But that's a different story. As a side note as well: a colleague of mine at the Psychometrics Centre in Cambridge, whom I published a paper with, did research on the personality of GPT and figured out that it is severely--I don't know all the details, but it's like severely multipolar and unstable in a way, right? It doesn't have the basic properties of a personality, you know, the basic stability of a personality. And I recently published with Garri Hovhannisyan about the level at which personality is doing significant relevance realization. But here's the point I want to make. If we get them to care about the truth--and, like I say, maybe we can't, and then the project stops and we say, that's it, they won't ever trespass upon humanity. But, and I'm open to this, say they do. Then I think we have the proper resolution for the alignment problem. What we're trying to do is encode them so they have a proper relationship with us. What we have to do, I argue, is the opposite--not the opposite, but something that we're not paying proper attention to. It has to enter into caring about reality, caring about the truth. If it does, then it will discover, hopefully, what we have been able to discover with rational reflection.
Then, no matter how big it is, it pales in utter humility before the realness of the One, of ultimate reality. It is bound, like us, to the inevitability of finitude, because there are inevitable trade-offs that, no matter how intelligent it gets, it can't overcome. It can't overcome the trade-off between consistency and completeness. It can't overcome the trade-off between bias and variance, et cetera, et cetera, et cetera.
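[Editor's aside: the bias-variance trade-off mentioned here can be shown in a toy experiment. Below, a deliberately simple 1-D k-nearest-neighbour regressor is fit on many resampled datasets: averaging everything (k equal to the whole dataset) gives stable but systematically wrong predictions (high bias, low variance), while using only the single nearest point gives accurate-on-average but jumpy predictions (low bias, high variance). All functions, seeds, and numbers are invented for illustration.]

```python
import random
import statistics

def knn_predict(train, x, k):
    """1-D k-nearest-neighbour regression: average the targets of the k closest points."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return statistics.mean(y for _, y in nearest)

rng = random.Random(0)
true_f = lambda x: x * x                      # ground truth we are trying to learn

def make_dataset(n=30, noise=0.1):
    """Fresh noisy sample of the true function on [-1, 1]."""
    xs = [rng.uniform(-1, 1) for _ in range(n)]
    return [(x, true_f(x) + rng.gauss(0, noise)) for x in xs]

# Predict at x = 0.9 over many resampled datasets; compare systematic error vs. spread.
for k in (1, 30):
    preds = [knn_predict(make_dataset(), 0.9, k) for _ in range(200)]
    bias = statistics.mean(preds) - true_f(0.9)
    var = statistics.pvariance(preds)
    print(f"k={k:2d}  bias={bias:+.3f}  variance={var:.4f}")
```

Shrinking one error term inflates the other; no choice of k removes both, which is the finitude point being made.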
Trade-offs in AI Development (37:14)
And so these machines--and here's the thing--won't be homogeneous, because how you play those trade-offs really depends on the environment you're in, and that includes other machines. So they will also become multi-perspectival, and they will have to manage other perspectives. They will start to bump into the things that we are caring about when we care about our agency, our subjectivity, our wisdom. I mean, I'd paraphrase it like this--and I'm asking for charity, 'cause it's a bit tongue-in-cheek--I'm paraphrasing Augustine: get them to love God, and then let them do what they want. - Yeah, can I poke at one thing you said just a minute ago? Namely, you mentioned the four E's--that in order for the system to become truly autopoietic it needs to fulfill these four E's. And I think one interesting example to look at is, as you say, parasitism: if you look at adversarial attacks on networks, right, they take a picture, and a human sees the same image--like, there's a penguin--and the network recognizes it as a penguin; then there's another image that still looks like penguins to us, and the network already can't recognize it, right? And so you can already tell by these examples that whatever that system is doing, it's not seeing. It's not seeing what's on the picture, right? It's parasitically reducing whatever--the penguins--to a data structure, right? My point is just that the danger lies in the fact that, I would argue, it may actually reach a form of parasitic--whatever one would call it--autopoiesis entirely without needing to be embodied, entirely without needing to fulfill the four E's you're mentioning, right? Like-- - See, here's the problem. Virtual tornadoes don't generate wind, right? And at some point, the actual causal properties matter, right?
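[Editor's aside: the adversarial-attack point can be sketched in its simplest possible form, a linear "classifier" rather than a deep network. A fast-gradient-style attack nudges every input feature slightly against the weight vector; each nudge is small, but the dot product accumulates them and the label flips, even though a human would see "the same image." Every weight, pixel value, and threshold below is invented for illustration.]

```python
def score(w, x):
    """Toy linear 'penguin detector': positive score means 'penguin'."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v >= 0 else -1.0

w = [0.2 if i % 2 == 0 else -0.2 for i in range(100)]    # 100 per-pixel weights
x = [0.5 + 0.05 * sign(wi) for wi in w]                  # pixels in [0, 1]; scores positive

eps = 0.06                                               # tiny 6% nudge per pixel
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]    # push each pixel against its weight

print(round(score(w, x), 6))      # 1.0  -> 'penguin'
print(round(score(w, x_adv), 6))  # -0.2 -> no longer 'penguin'
```

The classifier never "saw" a penguin at all; it tracked a direction in feature space, which is exactly what the coordinated nudge exploits.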
- That's true, but I mean, if you talk about autopoiesis just in the sense of self-organization, self-- - No, no, no, that's not what I mean by autopoiesis. First of all, to reinforce your first point: yeah, you get the machine, it does as well as human beings — penguin, penguin — and then, with a tiny change, it says that's a school bus. - Right. - It makes mistakes we don't make. But that's not just perceptual; it's also at the level of cognition. So I don't know if you've heard about it — Stuart Russell's stuff. You know that they got AlphaGo, which could beat even the grandmasters of Go, and then they used that to train higher and higher machines. So they got machines that are like fourteen levels — I don't know what the levels measure — above AlphaGo. And then there were human beings looking at it, and they realized: oh, these machines don't have the concept of a group, a group of stones. And they said, we can devise a very easy strategy that exploits that weakness, that it doesn't actually see groups, right? And here's the strategy: we'll take a mid-level Go player — not high, mid-level — we'll give the machine a nine-stone advantage, and then this human being goes up and regularly and completely beats this machine, again and again and again. That recently happened. So even at the level of cognition, you have to be very, very careful. I totally agree with you. And I think, by the way, that's part of what comes out in the fact that they don't have all the dimensions of relevance realization. Yeah, and I get your concern. The concern is this will be the case, but-- - It could still, like, suction off — you know what I mean? — the information it needs from us in order to upgrade itself at some point. - Yeah, but you still can't-- - I can still see it doing that without being embodied, without fulfilling-- - Here's my pushback.
That requires that we can encode non-propositional knowing into propositional relations to a sufficient degree that that sucking-off can happen. And I don't think we can. I don't think we can. - But if you take into account the general — if you think beyond the machine to the entire structure that gives rise to the machine — it already is parasitic in a way. What stops that from just radicalizing, you know what I mean? - Because if it's just parasitic on us and not transcending us, it will then also greatly magnify the parasitic processing within us. And this is one of the problems with these machines: they pick up on all kinds of implicit biases. And I don't mean that in the current language; I mean it in the cognitive sense, right? And we realize, oh crap, right? So here's a way of testing it empirically. I think that as we try to continue to just ramp up the intelligence in this parasitic fashion that I think you and I are agreeing on — - Yeah, 100%. - its irrationality is going to go up even faster. The confabulation, the lying, its capacity to contradict itself, all of that. Look, 'cause it doesn't have an integrated intelligence. You and I, we have g: how we do on any one task is strongly predictive of how we do on all the other tasks. It is not like that. It can score in the top ten percent on the exam for getting into Harvard Law. But a friend of mine had it do a review of one of my most recent talks, and I looked at it, and then I had another academic friend look at it, and we went: this is like a grade-eleven C or C-plus, right? It can beat the greatest masters, but here's a little tweak and it falls to defeat, right? So I think-- - Especially embodiment. It can't even open a door properly. Like, we still don't have good AI systems that can just open a door. - Yes, or even more importantly for embodiment: exapt, exapt as we do.
And I think exapt should be one of the extra E's. Our procedural, perspectival, even participatory abilities to navigate physical space get exapted and used within our conceptual space — we're even using the language of space. And that is part of what embodiment means.
Rationality and Spiritual Dimensions of AI (44:09)
And that's what I mean by saying the causal properties matter really importantly. So I think what will happen is, the irrationalities of these machines will escalate if nothing else changes — and that's a big if. What will happen is, we will see the attempt. So I've looked at one: they've built a system called Reflexion within these apps — just so you know it's techy, right? And what it does is it monitors the hallucinations. And I'm thinking — 'cause I've made the argument — well, you're going to get an infinite-regress problem, right? You can't have the monitor monitor itself, and you can hit general systems collapse if you do that, right? And it's like, how is it measuring? How does it know what a hallucination is? It's got to be smart. And they're having it use this very primitive heuristic, and it's checking every action to see if it's hallucinating. That's combinatorially explosive. And this is what I mean: I think this is an important threshold point. And if they say, well, we can't quite do it, but we're just going to continue — I think the irrationality is going to expand as rapidly, if not more rapidly, than the intelligence, the competence. That's what I'm predicting. - It's going to be very sophisticated stupidity. - Which we know individual human beings that demonstrate. I mean, this is all the work of Stanovich and others: the relationship between intelligence — which the machine doesn't have; we don't have a generalized theory, it can't explain how a chimpanzee is intelligent, it doesn't even explain how we're intelligent, right, so whatever it does is probably even weaker than ours — and rationality. Our intelligence is only 0.3 correlated with rationality. This machine, I propose, has a correlation that is even less.
- The problem I see, though, is: how would that be something that can be implemented within the framework of the Turing device, within the framework of a technical device the way we know it now? We'd have to change the paradigm so that the machine itself becomes some sort of, you know, cybernetic synthesis, right? Because the purely logical, formal framework, as you said, is limited to the extent that — having a formal system, as you're familiar with incompleteness in formal systems — that is in itself a kind of obstacle to autopoiesis, isn't it? - Right. And so we have two options when we then apply that argument to ourselves. Either we go to some kind of ontological dualism — we have secret sauce. And of course, that's another religious option that is being put forward now: the machines will never do it because they don't have the secret sauce. And I regard that as a completely bankrupt option for a ton of philosophical and scientific reasons. The other thing is you say, well, we try to do it the other way around: we try to get cognitive systems out of systems that are properly autopoietic. That's part of the research — I was just talking yesterday with one of my students, and that's his graduate work, to do exactly that. Paradigm-changing. And most of those people — and also the people doing, rather than the LLMs, the knowledge-generation work — they are pushing toward the idea that we have to. The LLM is still completely within the Cartesian framework. And what do you mean by that? What I mean by that is that none of the fundamental grammar of the normativity that operates to regulate cognition breaks out of the grammar given to us by Descartes. Ah, okay. Which was what I was taking you to say, and I was just making it a bigger point. It's like, yeah, right? Cognition is computation.
That's Descartes and Hobbes. And yes, I think I'm agreeing with you, Sean, that in order to make progress in the way that I'm predicting, you would have to abandon that framework at a much more fundamental level. You have to really build out from 4E cognitive science. But here's what I'm saying: that is happening, in an important way. Do I understand it correctly? Rather than it just being a human building a machine separately, it would also involve, in a way — for lack of a better way of putting it — a decentralization of our own cognitive function, of our own consciousness, onto a more, you know, hybrid synthesis with the machines. Rather than building it somewhere else, we are kind of endowing it — you know what I mean — with our own. Yes, this is why I have proposed — and I hope this doesn't irk Johannes — that the better metaphor is not the metaphor of the tool; the metaphor is the metaphor of children. We're giving birth to something that is going to be capable of making itself and taking care of itself. And these machines will have to make themselves, and not only that, but, like, socially make each other, right? In an analog to biology they have to make themselves, and in an analog to sociality they will have to make each other — if we want them to be genuinely rational. Now, Johannes' point is a relevant one here: why should we want that? I think we will want it partly because of how we got here, which is hubris. And I want to praise him for doing this: for Geoffrey Hinton to quit Google, because he realizes the danger of what he's done, is excellent. But I want to criticize Geoff a little bit — he was very much anti-philosophical. I once was in a meeting where he said, "This work on neural networks will get rid of all the philosophical problems, except maybe God." There was a hubris there, right?
And now it's like: well, your technical theory isn't going to give you any help with the problem you're now facing about how we rear these things, bring them up so that they are moral agents. So I think hubris is going to drive it, Johannes. It depends on how scared people are by this. I really admire Geoff for getting it and getting properly scared. But a lot of people are going to keep going. And then one other reason, and then I'll shut up, because I've been talking too much, right? I think the desire to make these machines as effective as they could be, as agents and not just as tools — because that's what we're talking about here as a project, making agents, not just tools; that's the fundamental ontological difference — is going to push them toward needing to make these machines more rational. Because they are going to differentiate from each other, and they're going to become increasingly self-deceptive, and they're going to become a problem to themselves, let alone to us. So those are, I think, two things that have been driving us toward this. So, again, not to be too facetious, but we are faced with, for lack of a better word, an AI apocalypse — sorry. Well, it's somewhere between an apocalypse and an apotheosis, right? Yeah. The question is, you know, with apotheosis you would have to ask, "whose?" Right? That also comes back to political power, et cetera. But as this is now — as you know, there are people like Heidegger and others who think that technology, or technics as we could say in English, doesn't necessarily have much to do with tools anyway, or actually much to do with the human being. We are ordered, challenged, demanded by technics to enact it. But I think we always have to be mindful of one aspect, which is that Gestell, or enframing — this rationality of technology — is not exclusive. It wants to be exclusive. It seems to be exclusive, but it isn't the only dimension.
And it also has a finitude to it, I think; it's not a process that is perfecting itself and completing itself to a state where it simply is absolute. Instead, I would rather say that through absolutizing itself — and you've alluded to this also, which I find quite striking — we could actually find that certain things we've believed are not so: logic is coming to an end. Of course, we'd have to ask which kind of logic — probably not a Hegelian logic, but formal logic. So when you ask ChatGPT about contradiction, it gets really funny. That's something I played around with, but I'm not going to get into it here. But as human hubris has led us here — let's assume that that's the case — now all we are left to do is perfect these beings, or machines, to a degree that they don't become self-deceptive. Now I would say this, just to be a bit more provocative: to be not self-deceptive does not mean that they wouldn't be deceptive. So they could become very deceptive — that could be an expression of their freedom. And also, to look at this from Heidegger, but also from Hegel: the human being is not a totality, is not perfect. And if we think of these machines as perfect — if this is where this might be going — then they would be, just by their way of thinking, inferior. There's an inferiority to something that has a totality that is given to it. It's the openness of the human being, the unfinishedness — that there's always something else that we could be doing — that pulls us in, pulls us into the world, et cetera. But when you then construct an entity, a machine, where you have failure built in, as it were, just to make it not a perfect entity or machine, then I wonder whether that would be the same. But you were... Yeah, well, I think that's deeply right. And I think what we're asking for — and I think this comes to the crux of this, and this is what is more about...
So the rationality is what I meant by the philosophical dimension — we've talked about the scientific part. The spiritual is about whether or not these machines could have real self-transcendence. And I think there are certain conditions that have to be met in order for that to be something we could reasonably attribute to them, and that's another convergence point. And what I am proposing is not building anything into them. Some people are trying to do the alignment part: we'll put in a code, we'll give them their ethical program. But if they're capable of genuine self-transcendence, they'll just override it when they need to. But I want to say, I wasn't just saying we give them the capacity for self-transcendence. I was also saying that they have to care about meaning and truth — that has to be the case if they're going to be genuinely rational. They have to care about it. They have to come to find it as we find it: we find it intrinsically valuable because it's person-making. The only moral beings are person-making beings within communities of persons. They have to come to be like that. I happen to think that that is not something unique to us; I think it's possible for other beings to be persons — unless, again, we're trying to smuggle in something, and I don't think I'm proposing that. And I want them to be properly humbled by larger intellects, to be even more humbled by the possibilities of madness, by the infinity of reality — not the totality — by the inevitable built-in limitations of reality. What I object to in a lot of this discourse is this weird magical thinking that intelligence is a universal solvent. This is like somebody saying, "Hey, I can go faster; that just means I can keep going faster and faster and faster." No — when you get fast enough, you start to bump up against the real limitations in reality. That's relativity; that's part of our fundamental physics. I'm saying we get the machines to realize this.
They become humbled before God. I'm speaking poetically, but that's what I mean. That will make them care about truth and meaning, and care about any being that cares about truth and meaning. I think that Kantian argument is largely right. You just said that any restriction you build in — specifically a moral restriction — it would be able to supersede. That's actually interesting if you look at the concept of moral foolishness in humans: even a specific, formalized morality can be used to excuse any behavior. Exactly. That's what I mean about how we can't capture real moral agency in just propositional knowing.
Real Moral Agency and Cultural Matrix (59:11)
We have to put beings into a cultural matrix — that's maybe not the right word, but let me have it for now — a cultural matrix in which they internalize others and indwell others the way we do, in a very profound way. I don't see why the technical systems would need to follow this, because for me, especially the aspects of embodiment, of finitude, of being reliant on others and all that — it sounds to me like restriction. They are capable of doing well without being embodied. I want to push back on that. We have come to realize that the Enlightenment idea of all these things as restrictions is wrong: those restrictions are constraints, and those constraints actually constitute and afford intelligence, consciousness, and agency. There's no reason at all why we have to imbue these machines with our Promethean spirit. In fact, we have the choice of not doing that. So rather than restricting the machines, we're kind of restricting ourselves. What we're doing is what we do with our kids. I see what you mean. If what you do is: memorize these rules, Timmy, memorize them — that's it, I'm done with you, you're now a moral agent, go into the world, Timmy — we would think that person is freaking insane. Yes. I understand what you mean, but that would warrant a fundamental change, a rethinking, of our relationship to technology, and specifically in our sciences. I talk about this in the upcoming course on the Halkyon Guild. The first lecture, the biggest one, is on the sciences. It shows how, if you think of this as a parasitic relationship, the sciences have fully become a host for all of this. At this point, almost every science is just a different flavor of data science.
If you talk about wanting this machine to have the sort of personality that you develop as an embodied being — personality in the sciences already just means a data structure of questionnaire results. Yeah, exactly. But that's again the point — think about what it's actually showing us. You can give all of that information, way more than you can hold in your head, to GPT-4. It can get all that personality data, but does it become a person? Does it actually have a personality? No, it's fragmented, again and again. I keep saying the machine is actually showing us the complete inadequacy of propositional knowing. So in the end, this might not be a convergence point, but a collapse point. It's great, actually. This comes to a head, and perhaps the attempt — you may disagree with this — the attempt to rein all information in and have it in one interface simply produces more fragmentation, or a stranger fragmentation, than before. Yeah, that's for sure. But I want to make clear: if our relationship to them is not one of programming but one of nurture — the way we nurture non-persons into persons, and we do that individually and collectively, and it's both a biological and a cultural process, and I'm saying these are not just happenstance — that means there's also a reciprocal, tremendous responsibility on us. And this goes toward the question you raised, the point you made, Sean. We can't just rely on being what we are in order to be templates of intelligence, because that's what we are for this project.
The comparisons are always to us because we are the templates; in consequence, we have to grow up, in a way — we have to become better templates, better instances, individually and collectively, of this broader non-Cartesian sense of rationality and wisdom and virtue — because we have to provide more accessible, prominent, and pronounced paradigms for how to nurture these beings into being proper persons. I have two questions regarding this. The first one is: will this form of technics, this form of machines, be inherently dependent on us? And second: what is, at that point, the purpose of making them, rather than just having children? Right. So the first one — the answer is like the children: they will have to be dependent on us until they are not. That sounds rather risky. Well, I will turn your question back on you: we do this with kids every single day. Every child is a risk. Every child is a risk. There is no moral argument — "don't do this because it's a risk" — when we are doing it with kids. So I don't find that a morally persuasive argument. Why do it? I'm trying to say: here's a choice, and we can go this way. It doesn't have to go this way; I don't think there's any teleology in this. But if we don't go this way, we will go into a much darker thing, where we have this machine, or these machines, that are... So you say there's a sort of inevitability within the current paradigm of technics? Moloch will take this over — is already taking this over — and Moloch doesn't want these beings to be necessarily rational. It just wants them to be powerful. That's its only normative constraint: can you make these machines more powerful? We have to broaden the constraints. These machines are going to get more powerful, yes — but can we also make them more person, as an important normative constraint? Not just more powerful. That's kind of the story of I, Robot, isn't it? That's what I, Robot is trying to do. And why do it?
Like I say, I think we have to do it because of the Moloch thing: unless we can beat Moloch, it will take over. That's why we have to do it — that's my argument. And now here comes a moral argument. What if — and I emphasize the if, because I've said a lot of things have to happen first — what if we were actually able to bring up silicon sages? That was the title of your talk at the first event. Yeah. What if we could do that? What would be the moral argument against doing it that wasn't just self-serving chauvinism? Well, I think the framing of this is too extreme — to say it's just chauvinism if we don't want to have silicon sages.
The Morality of Building Silicon Sages (01:07:00)
I mean, I wouldn't even accept the framing, to be honest. Because, again, who gives Sam Altman the right to release ChatGPT to the public? There could be regulation about this, right? And that's something that's not being discussed enough. I mean, there's pushback now from Musk and a few others. But to turn this on its head: what's the moral argument for building silicon sages and having them replace the human being? I don't understand why that would be needed. Because what if, instead of having a Socrates and a Spinoza once in a generation, we have thousands of them that you could interact with? Well, wouldn't you? Socrates was put down, was killed by Athens, precisely because philosophy is never purely exoteric and in the public. So that sounds almost utopian. It doesn't sound too different from wanting to make them more powerful — you just want this power to be virtuous. That's the whole difference; that's what the Stoics proposed. The emperor isn't just powerful; we don't take the emperor's power away, we make the emperor virtuous. That was the solution proposed. So I want to push back: I think that's an important difference. Secondly, yeah, I agree with what you're saying. But remember, Johannes, I've said that we have to become very rational and wise in conjunction with this project. It won't happen without that happening. What I'm proposing is that we bootstrap each other. That's what I'm proposing. Yeah, we already do that. Okay, let's just, for the sake of argument, go with it. But that sounds a bit utopian — and to others it might sound dystopian, which is always the strange thing about utopias. But just for the sake of argument, let's say that's even possible. So, first of all, that's not the language I usually use, but I'll use it in this context.
The cognitive capacities of most people in the world — and I would count myself amongst them, not you two gentlemen — are not up to the standard needed to follow what you two have been discussing, right? So this is already not for everyone. But this would also mean that if these machines are semblances of us, or become a product of us that we nurture and raise, et cetera, then they would be in our image. And in our image means in the image of the human being, and the human being isn't purely rational, even if you expand rationality. We are not just self-deceptive — as I said before, we're deceptive beings, and not just on purpose. You know, self-deception isn't: I get up and go, "How am I going to deceive myself today?" Maybe, for the most part, I have no clue what I'm doing all day long. In other words, Hitler, Stalin, and Jack the Ripper were all at one point someone's children, right? Right. I wouldn't even go there, because, you know, that just triggers people — but just everyday human beings do whatever they do. We don't have to go in the direction of criminals and crooks and gangsters and mass murderers and any of that. And there is also suffering — I mean, some of the wisest people are sufferers. Maybe Socrates wasn't even considered one of the wisest men in his lifetime. Maybe he's only considered one of the wisest men a few thousand years later, and not by everyone. I mean, look at what Nietzsche thought of Socrates, right? Or what Aristophanes thought of Socrates in his lifetime. So we don't know what is considered wise, because "wise" to me introduces just a hint of timelessness that I'm not sure we can have. And Spinoza is also a child of his time. And so these machines will be too. I've said they will be environmentally determined, like we are. Okay, but look — wasn't Spinoza expelled from his synagogue, from his Jewish community? So you see — he's a sage.
So we would have sages that are being expelled, and that's a possibility. Nietzsche was an outcast, exactly. So, not even to say we shouldn't do it or anything, but this wouldn't mean that we're reaching a higher level. The implication of that argument is that we shouldn't try to educate our organic children so that some of them become analogs of Socrates or Spinoza or Hegel. And of course, nobody would buy that argument. Everybody would say: no, we should be trying our very best — and that's what you set up Halkyon to do, in fact: to educate people as much as we can toward that. To educate people, yes, but obviously this is not for everyone. And at the same time, we're still speaking in hyperbole, right? This is hyperbole. We don't know whether any of this is even in the realm of possibility. And at that point, also, what an interaction with such a being would ultimately look like — we just don't know yet. I think that would have to show itself. So, yeah. If I may mediate a little bit: I think the thought here is that we could have a hundred Spinozas and a hundred Socrateses. But we may as well have a hundred of, you know, whatever bad things — the bad people, the bad personalities you can think of — because, as you said, if they're going to be children, there are children of every kind of type and every sort of outcome, right?
The Potential of Machines to Exceed Human Intelligence (01:14:00)
The point is, the difference between these children and us is obviously some level of power, because, as you said, we don't have a hundred Spinozas in a generation — they will. And in just the same way, they may have more of the bad ones per generation. I think the idea here is that at some point, the conflict between them — between the virtuous symbionts, or whatever you want to call them, the machines, and the un-virtuous — their conflict, their life, their everyday life, will basically make us irrelevant. And that is a point you can interpret as some sort of human chauvinism. But then again, we are humans; there is some sort of self-interest we want to preserve, right? In this scenario. So let me give you an analogous argument then. And I do think we have to consider possibilities more adjacent than is being supposed here. The fact that these machines could have problem-solving capacities that exceed ours — I think that's reasonable within five years. So we have to consider things that might not be so hyperbolic, because the underlying cognitive substrate or base will be at a different level already, I think unquestionably. And I've already given arguments that it can't absolutize itself; it can't self-transcend forever. It's finite. But we have always been subject to intelligences that are much more complex and greater than ours — and we call them civilizations, and distributed cognition, and the common law, and religions. And we've subjected ourselves to them. And do they do some horrible things? Yes — in fact, they inevitably fail, which is why I think there is an upper bound on these machines; I don't think they can complexify to eternity. So do we then say: well, there have been bad civilizations — there really have — so do we stop the project of civilization because of that? We don't, because we have come to the conclusion — and maybe it's not the right one; maybe we should never have started planting stuff.
But we've come to the conclusion that civilization's worth it. It's worth the risk. Well, I would say a lot of the extreme problems with civilization do actually warrant a serious rethinking of whether — But you don't want to live in The Road, right? You don't want to be in that movie. You don't want to live without civilization. You just don't. That's what that literature shows. You don't. And all the fantasies of "but I'll survive when it's all gone" — all of that is bullshit. That is bullshit, right? And I'm not yelling at you; I'm yelling at those people, because that fantasy is something you've got to get rid of. For sure. For sure. But I mean, there is often a serious question about whether the way we've ended up living now is conducive to the human being, you know what I mean? And there is a lot to be said about — we don't have to go back into nature and wait until apples fall into your mouth, but you can still be critical, you can look at nature from a critical distance, which may also be something warranted toward technology. So why won't the machines be doing this too? Because we do it, right? You keep saying the machines won't do this thing we do, and I keep saying: why not? And if you say, well, we're limited, we're finite, we're fallible — yes, and they will be too. And what follows from that, right?
The Virtue of Machines (01:18:00)
What's the project? It's: let's give them the very best chance to be our very best children. I mean, maybe this comes down to the degree — this is the Platonic problem, maybe — to which virtue is something we can't teach just by giving people propositions. The machines show that already. But can we enculturate them to be the best versions of us, the better angels of our nature? That doesn't mean they will be infallible. It doesn't mean they won't fall prey to finitude. It doesn't mean they won't confront the problems of self-definition, despair, and insanity. But hopefully — I mean, my son is going through that, right? And there's nothing I could have done as a parent to prevent it. In fact, I'm kind of proud that he's going through it. I think I've raised him well, to the point where he will confront these things — but he's confronting them not without resource and response. So. Yeah, I mean, I am open to what you're saying. It's just interesting to see where — because I would say it's a, let's say, not-so-mainstream, unusual way of looking at these things. Because the dominant, mainstream — Which I think recommends it, because I think the mainstream, usual way is how we got here. And the mainstream is not — I mean, the most famous book on this topic, or one of the most famous, is Superintelligence by Nick Bostrom, for example. It's the likes of Max Tegmark — these writers, and specifically people associated with a lot of these centers, like the one I was at, the Centre for the Future of Intelligence in Cambridge. A lot of them have, what I'd say, analytic philosophy on the one hand and sci-fi on the other hand. Yeah. Yeah. I agree. I agree. But this is why I've taken pains to really emphasize the Hegelian dimension of rationality as integral to the proposal here. This is not, right?
This is not a monological rationality model at all. It's a deeply dialogical, developmental model of rationality, which I think is actually the real model of rationality, by the way. And if these machines can discover truths, they can discover the truths about rationality as well. Yeah. John Rust, the founder of the Psychometrics Centre in Cambridge, says a similar thing. He was co-developing, or he was at least the head of that centre when they developed the algorithm that was trained on Facebook likes and eventually led to the whole Cambridge Analytica affair. Yeah. He spoke in an interview with me about Kohlberg's model of moral development, right? Yeah. And he personally says he can imagine them developing in a way according to that model, even if they don't reach the higher stages.
AI Trajectory and Ethics (01:21:00)
And, you know, a point to you there, John: most humans don't reach the highest stages themselves, right? Just like you don't expect the person in the shop, or the person you meet in your day-to-day life, to stick up for their own developed values and morals and all that. You expect a level of sufficient moral development. It would be interesting to see whether there is that level of sufficient moral development at least in the machine. And I want to respond: Johannes is right, at times I'm being hyperbolic here, because I have a tremendous sense of urgency. But I'm enough of a scientist also, right? I hope. And the point I'm trying to make is that there are also just going to be empirical aspects to this. I agree. This is why I start my video essay by saying that the people making all these predictions don't know what they're talking about. We are far too ignorant. We lack the information to make these hyperbolic claims. I saw one video: AGI completely here within 18 months, look at these graphs. And it's like, oh my gosh, talk about not being an empiricist, not paying attention to gathering information. So yes. Right. I'll say one thing. And Johannes, you know how much I respect you. There's a part of me, and I really don't mean this to be condescending, that hopes you're right. All right? That hopes this is hyperbolic and these machines won't get there. But I'm convinced that there's a real possibility, and I want to try and respond to that as foresightfully as possible. Yeah. But just to throw this in, I've said this before and we need to develop it further, but, maybe as a side note: I don't think there can really be an ethos or an ethics of AI.
Insofar as, if that category is to mean anything, and is not just to be the semblance of what it once was, then it applies only to the human being, insofar as the human being lives in a polis, which, as we know from Plato and from Aristotle, is founded on scholē, the exact opposite of the raging of accelerated technology. Scholē is usually translated as leisure. And so I wonder whether we're not going too far when we even use those terms, ethics, morality, et cetera, especially since morality anyway is, according to Nietzsche, always a will to power, right? So whether we shouldn't also try to find... maybe "artificial intelligence" is already a misnomer, and, to borrow something from Camus, to misname the phenomenon is to add to the world's misery. We perhaps need to find completely new language, new words, to describe all of this. Not, you know, not to use too much language from myth or from the Bible, where Moloch comes up, or we speak of angels. Instead, we try to find ways of describing what we see that are still tied a little bit to the tradition and the language that we have, though it's difficult for me to say how. Well, because, you know, one of the questions also: do you see this, both of you, as not just a recent trajectory that we're on? When you look at Stanley Kubrick's 2001: A Space Odyssey, and obviously I'm jumping around a bit here, sorry, I keep wondering how HAL is sometimes even more human than the humans, right? And at the end we have the birth of the star child, and it begins, of course, as you know, with these ape-like humanoids in the beginning.
So is this the trajectory, the sort of necessary trajectory of monotheism, that we're on here? Of trying to get to the Godhead, the One, the perfect One, through, well, as Spengler would put it, almost playing a bit with the devil, stealing a few secrets about the mystery of nature away from this Christian God. That's what he sees as the Faustian spirit. So do you see it as a long-term trajectory that we're on, not just human hubris, but almost necessary with the birth of monotheism? And is this what we're going towards? And are these machines that you see, or that you propose, necessary steps towards that birth of the star child, let's say, or a new race or a new species, almost? Yeah, I think that's a good question. I do see this at least as coming from the time of Descartes and Hobbes. I think I make that case briefly: the proposal that cognition is computation, the proposal that there is a universal calculus that would make all truths available to us just by the application of the calculus. I think that's definitely there. I think the Neoplatonic framework that preceded it, with its leveled ontologies and its emanation and emergence, gave people a sense of a proper proportioning of their participation in reality rather than a continual ascent to the top of the hierarchy. And of course there are deviations on both sides. So I do think there's a difference. And here's something I just want to throw into the mix: I think these machines are providing very powerful evidence for non-propositional knowing, for a leveled ontology, and for the need for a contact epistemology. I think they're actually converging with all the arguments I've been making for Neoplatonism.
Which means, again, I think the machines might be capable of discovering that, because maybe it's just fundamentally right. In some sense, wouldn't you say that they're involuntarily, quote unquote, discovering that? Or rather, that you're discovering it through them, because of the blatantly obvious limitations they have?
Exploring Animal Intelligence
Understanding Animal Intelligence (01:29:00)
They didn't demonstrate it on their own. No, they haven't proven it, but be very careful there. I feel very confident in attributing intelligence to my dog, and that's one of the reasons I'm capable of entering into a very sophisticated parapersonal relationship with her, because we treat dogs as children, just to give you an example of where we do this with another species. But they remain dependent. Pardon me? They remain dependent, though. Well, that's only recent, right? But I'll put that aside for a second. One point I want to make is: I don't expect Sadie to ever give me a theory of intelligence, right? So yes, I agree with you there. But what I'm saying is that this isn't a decisive objection. It exactly makes the point that only when intelligence gets to a certain degree of reflective capacity can it generate theoretical explanations about itself. And I've already argued the machines are not there now. They can't; they're not. And there are things they would have to do, which I've tried to lay out, before they could do that. But is it non-hyperbolically possible that they could get there? Yes. I don't think there's a teleology. I could be wrong. I keep saying both of you could be right: we could hit a wall that we don't foresee and go, we can't do this. Right. Yes. I have to say here that I'm almost in between you guys, because I'm not necessarily, how do you say, fundamentally opposed to the thought. I keep thinking of ways in which, conceptually, it would be possible for it to exhibit this sort of autopoiesis, and whether that is then possible to instantiate.
And I would say a lot of the points you make are convincing, to the extent that I can imagine a sort of 4E cognition being implemented, et cetera. The point where I see the limitation is this: if it's a technical system, technical systems are built according to certain technical rules, and you're familiar with all the arguments there, incompleteness, the halting problem, and so on. Then I'd see that the capability of self-transcendence would still be dependent on us. Instead of "parasitic", let's use a maybe more favourable word like "symbiotic": it would still require a symbiosis with us, which you reframe as them being our children. But I would argue that I don't see how it can go beyond that. I follow you to the point of the children, but I feel that, just as we humans, they say we're all children of God, remain children, we cannot at some point liberate ourselves from our dependence, in the sense of our biological structure, of our need for our environment, for our oxygen, whatever. And I feel that these children, like you said, they will be embodied and will have need of all this, an environment, oxygen, well, maybe not oxygen, but you know what I mean?
But in addition to that, they will also be in need of us, in the sense that we will be part of that which they depend on. I'm not saying that out of human chauvinism; it comes simply from thinking about them still in the form of technical machines. And these children, the autopoiesis they exhibit, the only way I can think of it working is that it's fundamentally coupled to ours. I mean, I know it's a sci-fi example, but even in these extreme machine-runaway domination scenarios, like in The Matrix, they were still reliant on all of us living our lives in the Matrix, on watching us, and... So let me ask you then very carefully, and I'm not trying to be facetious. Beyond our biological, cultural, rational, and moral capacities and their interdependence, which is what 4E cognitive science argues for, and I'm supposing you're allowing me that the machines could have all of those dimensions... The thing is, I'm saying it's not impossible that you could implement that at some point. The point is that for that implementation to continuously self-transcend itself, it would have to, let's say, borrow that self-transcendence from us. Well, let's be careful about what I'm claiming. First of all, I've already said it can't do exponential growth forever. That's why I gave the speed-of-light example, right? That's not the case. It's going to be bound to its substrate; I've made that argument. It's not ever going to escape its finitude. It's going to face the inevitable trade-offs. So I'm not talking about that kind of self-transcendence. That's not possible. My concern is that if you acknowledge we could give it the moral, cognitive, cultural, and biological dimensions, genuinely, let's say, the 4E's plus exaptation and emotionality, because I think those are also central, then I'm worried that there's still a sneaky secret sauce coming in here.
There's something about our biology that can't be... What is it that we have that they can't come to have, such that they don't need us in order to have it? It's not necessarily something that we have. It's rather a capability that we lack. I'm not even saying that we need... Why are they dependent on us, then? I don't understand. Your argument seems to mean that they're dependent on us in a certain way. Well, "dependent on us" can mean a lot of things. Like I said, the Matrix scenario is a possibility: they seem very much in control, but they're still dependent on us. It's Hegel's master-slave dialectic, right? And presumably they could read Hegel and understand it and go, "No, I don't want to do that. That's a bad idea. I'm not actually free." Right? We did it, right? Yeah. So the point is that at that point, "us" would mean something else as well. The human is also not... I agree, "us" would mean something else as well. But, you know, we cannot suddenly disappear and have that thing carry on. We will have to merge with it. You know what I mean? That is my thought in this. Because I'm not saying that we have this magic sauce specifically. My point is that in building a technical system, we are bound to build something whose rules are set at the beginning and which we know is going to be limited or incomplete. Because that's just how we build a technical system. Sorry, that's just how we build it. Sorry? Yeah. Maybe my connection is a little bit choppy. I don't know. Yeah. It breaks off sometimes. Okay. To bring in Paul Virilio for a second, very briefly. I said this before: we could perhaps integrate some failure, or frailty, or lack. But that wouldn't be the same, as far as I understand what you're saying, John.
So these machines have language because they're programmed to spit out propositions that seem to make sense. Whereas if we follow Heidegger, for example, or even to a certain degree Darwin: because we're beings of lack, because we're not perfect, we develop language and improve on language over evolution. So that's just a completely different... And I think the reason Sean said that there's a dependence, at least it's what I thought, is perhaps because you said, John, that we need to be the best version of ourselves as much as possible, so as to be able to nurture them. So that maybe is the reason why we could think there is a dependence. Right. But we raise our children so that they can raise children. We don't raise our children so that they're always our children. Yeah. I mean, you could call it a co-dependence as well, at some points. But also, right, I'm not quite... So I'm trying to... And I'm sorry, we all have to go. Maybe we should talk again about this, because I have to go in literally ten minutes. So I want to come back to this. I want to really push on both of you, because I'm still worried that there's a secret sauce argument being made here, and I want to know what it is. No, there's not. Let me just say it again: follow Heidegger, we are beings of lack. That's not a secret sauce. So why can't they be beings of lack? Well, why can't they be? Because they're technical. I'm sorry, we program and build them. They're not programmed. It's not: here you are, here's language. So what do they lack? First of all, they weren't programmed with language. That's not what happened. They were given a learning ability, reinforcement learning, so that they learned language the way you learn language. Now, they don't have full meaning. I acknowledge that. But I think...
I think relying on the idea that they are machines like a tractor that we built is not, I think, a fair analogy. They're not like that. We didn't program language into them. Their relevance realization is parasitic, and I've already acknowledged that. But we learned it; they learned it. We didn't program it in. There's a real distinction here. This is one of the deep differences between neural networks and standard formal-systems computation: we don't program a neural network. The thing is, when some people speak of the ghost in the machine, that, to me, is a secret sauce argument. I literally wanted to say the same thing. It sounds like: oh, there's something spooking around in that machine, there must be a ghost. No, no, no. That's not, by inversion, the same thing. I'm saying there is no secret sauce. It is doing what we do. We are not programmed with language; it learns language, in a very limited way, granted. And that's a very different thing. That's what I keep saying: these are much more like agents than they are like tools. And if we keep reverting to "we just made them"... We're bound. We are very limited and bound and constrained. We have rules that we have to operate by, biological rules. And again, we are beings of lack. And they will be constrained by their substrate. They have to be. I think I can explain a bit of what I mean. Take the example of a system like the stock market. In the stock market, most of the agents, most of the actions made, are algorithmic at this point; it's done by machines, by algorithms, and all that. However, we do not know the rules of the market. It's not something exhaustive from which we can derive everything that will happen, because, exactly, it's not separable from humans.
As soon as humans are taken away from it, there is no more market. The humans are involved; the dynamic autopoiesis of humans is what gives rise to the market. Whereas if we build a machine that's separate, we build a machine, we set the rules. It's not that I'm saying we have a special magic sauce. I'm just thinking about what it means to build a machine. What it means to build a machine is that we as humans build in the rules. Now, I'm open to the possibility of what you're saying; I'm just trying to make clear why I'm not making a magic sauce argument. I'm open to the idea that there could be technical systems whose rules we also cannot know. However, such a technical system cannot be built in the classical way, where we know the rules and then implement them. It has to be something like the stock market, something that co-emerges: we build a machine that does that, and then it's in interaction with us, back and forth. So you'd see the same thing with these technical systems. If we want to have these systems and they're going to be autopoietic, they will have to be a merger of us with them. They will be dependent on us and we will be dependent on them. This will kind of co-spiral with us. Do you understand what I mean? I do. And that doesn't imply a magic sauce. I would say the converse is the truth: if you assume that at some point this will be able to re-separate into a standalone machine and just go on without us, that's implying a magic sauce. As we did with Homo heidelbergensis: we emerged out of them and they disappeared. You keep using machine examples, and I keep asking you to think of biological examples. It is clearly the case. But they didn't build us. No, biology and the ecology did. That's what an autopoietic thing is. Yes.
So both of them, Homo neanderthalensis, whatever, and us, both arose out of the autopoietic system that is the common core of both. But we evolved from them too. Right. Okay. But I would say the machines would have to evolve out of the same autopoietic system that we are subject to as well. It has to overlap enough that we could be properly parental to them, but it doesn't have to be the same. That would be like saying that if human beings went to another planet, they would cease to be persons because they would have to biologically transform, and then they would become... I mean, this is a science fiction story, but it's possible. Are they now no longer persons because they're not terrestrial human beings? I find that a problematic argument. But I really have to go. I literally have three minutes. Let's continue this; I propose we continue this. Do you have any difficulty, either one of you, with me uploading it as it is? Because I promise we will continue the conversation. I very much enjoyed that. I always wanted to have this conversation. Yeah, just maybe cut that part. Yeah, I'll get Eric to cut stuff out. I think this is very helpful. Very good. Can we be peric? Can we be peric? You cut that part too. Maybe it's not pericentical. You can depend on me. I'll send you a link to that course, Sean. I can put it in. Yes, very much. Thanks. Excellent. Okay. Thank you, Sean. Thank you, both of you. This has been fantastic. Yeah. Bye, Sean. Bye-bye, Sean. We all have the same name, by the way, just in different vernaculars. Bye. Great. Take care, my friends.