Ep. 31 - Awakening from the Meaning Crisis - Embodied-Embedded RR as Dynamical-Developmental GI | Transcription


Intro (00:00)

Welcome back to Awakening from the Meaning Crisis. This is episode 31. So last time we were trying to progress in an attempt to give at least a plausible suggestion of how we could have a scientific theory that explains relevance realization. One of the things we examined was the distinction between a theory of relevance and a theory of relevance realization. I made the argument that we cannot have a scientific theory of relevance, precisely because of a lack of systematic import, but we can have a theory of relevance realization. Then I gave you the analogy (and I'm building towards something stronger than an analogy) of Darwin's theory of evolution by natural selection, in which Darwin proposed a virtual engine that regulates the reproductive cycle so that the system constantly evolves the biological fittedness of organisms to a constantly changing environment. The analogy is that there is a virtual engine in the embodied, embedded brain (why it's embodied and embedded will become clear in this lecture) that regulates the sensory-motor loop so that my cognitive interactional fittedness is constantly being shaped, constantly evolving, to deal with a constantly changing environment. And what is in fact needed, as I argued, is a system of selective and enabling constraints to limit and zero in on relevant information. I then argued that the way this operates has to be related to an autopoietic system, and that the self-organization operates in terms of a design you see at many scales (remember, we need a multi-scalar theory of your biological and cognitive organization): opponent processing.
We took a look at the opponent processing within the autonomic nervous system, which, by the strong analogy, is constantly evolving your level of arousal to the environment: opposing goals but an interrelated function. Then I proposed the level at which we're going to pitch a theory of relevance realization: a theory of bioeconomic properties that operate not according to the normativity of truth or validity, not logical normativity, but logistical normativity. The two most important logistical norms, I would propose to you, are efficiency and resiliency. I then made an argument that they would be susceptible to opponent processing precisely because they are in a trade-off relationship with each other. And if we could get a cognitive virtual engine that regulates the sensory-motor loop by systematically playing off selective constraints on efficiency against enabling economic constraints on resiliency, then we could give an explanation, a theory deeply analogous to Darwin's theory of the evolution, across individuals, of biological fittedness. We could give an account of the cognitive evolution, within an individual's cognition, of their cognitive interactional fittedness: the way they are shaping the problem space so as to be adaptively well fitted to achieving their interactional goals with the environment. Before I move on to try and make that more specific, and make some suggestions as to how this might be realized in the neural machinery of brains, I want to point out why I keep emphasizing "embodied" and "embedded." I also want to return to something I promised to return to: why I want to resist both a sort of empiricist notion of relevance detection and a romantic notion of relevance projection.

Understanding Mind-Body Interdependence And Relevance Realization

Why the Mind and Body Are Interdependent (05:18)

So the first thing is: why am I saying embodied? Because what I've been trying to argue is that there is a deep dependency, a deep connection, and the dependency runs from the propositional all the way down to the participatory. There is a deep dependency between your cognitive agency as an intelligent problem solver, an intelligent general problem solver, and the fact that your brain exists within a bioeconomy. The body is not Cartesian clay that we drag around and shape according to the whims or desires of our totally self-enclosed, immaterial, Cartesian minds. The body is not a useless appendage. It is not just a vehicle. So here I'm criticizing even certain Platonic models. The body is an autopoietic bioeconomy that makes your cognition possible. Without an autopoietic bioeconomy you do not have the machinery necessary for the ongoing evolution of relevance realization. The body is constitutive of your cognitive agency in a profound way. Why embedded? And this will also lead us into the rejection of both the empiricist and the romantic interpretation. Why embedded? The biological fittedness of a creature is not a property of the creature per se. It is a real relation between the creature and its environment. Is a great white shark intrinsically adapted? No. It makes no sense to ask that question, because if I take this supposedly apex predator, really adapted, and put it in the Sahara Desert, it dies within minutes. Its adaptivity is not a property intrinsic to it per se. Its adaptivity is not something that it detects in the environment. Its adaptivity is a real relation, an affordance, between it and the environment. In a similar way, I would argue that relevance is not a property in the object, and it is not a property of the subjectivity of my mind. It is neither a property of objectivity nor a property of subjectivity. It is precisely a property that is co-created by how the environment and the embodied brain are fitted together in a dynamic, evolving fashion.
It is very much like the bottle being graspable. This is not a property of the bottle nor a property of my hand, but a real relation, a real relation of how they can be fitted together, function together. I would argue that we should not see relevance as something that we subjectively project, as the romantic claims. We should not see relevance as something we merely detect from the objectivity of objects, as perhaps we might if we had an empiricist bent. I want to propose a term to you. I want to argue that relevance is, in this sense, transjective. It is a real relationship between the organism and its environment. We should not think of it as being projected. We should not think of it as being detected. This is why I have consistently used the term: we should think of relevance as being realized. The point about the term "realization" is that it has two aspects to it, and I am trying to triangulate from those two aspects. What do I mean by that? There is an objective sense to realization, which is to make real. If that is not an objective thing, I do not know what counts. Making real: that is objective. But of course there is a subjective sense to realization, which is coming into awareness. I am using both these senses of the same word. I am not equivocating. I am trying to triangulate to the transjectivity of relevance realization. That is why I am talking about something that is both embodied, necessarily so, and embedded, necessarily so. Notice how non-Cartesian, or perhaps better anti-Cartesian, this is. The connection between mind (if what you mean by mind is your capacity for consciousness and cognition) and body is one of dependence, of constitutive need. Your mind needs your body. We are also talking not only about it being embodied and embedded. It is inherently a transjective relation of relevance realization. The world and the organism are co-creating, co-determining, co-evolving the fittedness. Let's now return to the proposal. Notice what this is telling us.
This is telling us that a lot of the grammar by which we try to talk about ourselves and our relationship to reality, the subjective and the objective, is suspect. Both of these are reifying in their inherent claims: the idea that relevance is a thing that has an essence that inheres in the subject, or that relevance is a thing that has an essence that inheres in the object. Both of those, that standard grammar and the adversarial, partisan debates we often have, I am arguing, need to be transcended. And I would then propose to you that that's going to have a fundamental impact on how we interpret spirituality.

The Similarity Between Subjective and Objective (12:47)

If, again, by spirituality we mean a sense, and a functional sense, of connectedness that affords wisdom, self-transcendence, etc. So back to the idea of efficiency-resiliency trade-offs. I would point you to the work of Markus Brede. He has work mathematically showing that when you're creating networks, especially neural networks, you're going to optimize (and we talked about optimization in the previous video) between efficiency and resiliency. That's how you're going to get them to function the best you can. And what I want to try and do is show you the relationship, the poles of the transjectivity, and how that's going to come out, or at least point towards the generative relationship that can be discussed in terms of these poles. So I argued that initially the machinery of relevance realization has to be internal. Now again, this is why I just did what I did. When I say internal, I don't mean subjective. I don't mean inside the introspective space of the mind. When I'm talking about the goals being internal, I mean internal to an embodied, embedded brain-body system, an autopoietic system of adaptivity. In fact, there are many people arguing in cognitive science that those two terms are interdependent. Just like I'm arguing that relevance realization is dependent on autopoiesis, being an adaptive system and being an autopoietic system are also interdependent. The system can only be continually self-making if it has some capacity to adapt to changes in its environment.
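Brede's point about networks trading efficiency against resiliency can be illustrated with a small sketch. This is my own toy construction, not code from Brede's work: it compares a hub-and-spoke ("star") network, which is efficient because every node is at most two hops from any other, with a ring network, whose paths are longer but which survives the loss of any single node.

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src; unreachable nodes are simply absent."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def avg_path_length(adj):
    """Mean hop count over all ordered reachable pairs (efficiency proxy)."""
    total, pairs = 0, 0
    for s in adj:
        d = shortest_paths(adj, s)
        for t in adj:
            if t != s and t in d:
                total += d[t]
                pairs += 1
    return total / pairs

def still_connected(adj, removed):
    """Does the network stay in one piece after losing a node? (resiliency proxy)"""
    rest = {u: [v for v in nbrs if v != removed]
            for u, nbrs in adj.items() if u != removed}
    start = next(iter(rest))
    return len(shortest_paths(rest, start)) == len(rest)

# A star: every node wired through a single hub (node 0).
star = {0: [1, 2, 3, 4, 5]}
for i in range(1, 6):
    star[i] = [0]

# A ring: each node wired to its two neighbours.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

# The star is more efficient (shorter paths on average)...
print(avg_path_length(star) < avg_path_length(ring))  # True
# ...but the ring is more resilient: remove the star's hub and it shatters.
print(still_connected(star, 0))  # False
print(still_connected(ring, 0))  # True
```

Neither design is "best"; which trade-off wins depends on how the environment changes, which is exactly the point about opponent processing between the two norms.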

Reverse Engineering Relevance Realization (14:59)

And the system is only adaptive if it is trying to maintain itself, and that only makes sense if it has real needs, if it's an autopoietic thing. So these things are actually deeply interlocked: relevance realization, autopoiesis, and adaptivity. So, as Markus Brede and others have argued, and I'm giving you an independent argument, you want a way of optimizing between efficiency and resiliency. Remember, as with the autonomic nervous system, this doesn't mean settling on some average or stable mean. It means the system can move, sometimes giving more emphasis to efficiency, sometimes giving more emphasis to resiliency, just like your autonomic nervous system is constantly evolving, constantly recalibrating, your level of arousal. Now, what I want to do is pick up on how those constraints might cash out: in particular (I'll put this a little bit farther over here), how these logistical norms, understood as constraints, can be realized in particular virtual engines. I want to do this by talking about internal bioeconomic properties and, for lack of a better contrast, external interactional properties. Again, this does not map onto subjectivity and objectivity; I shouldn't have to keep saying that, correct? By external I mean that these eventually give rise to goals in the world, as opposed to the constitutive goals in the system. And what I want to do is show you how you go back and forth. Now, it'll make sense to do this by reverse engineering, because I'm starting from what you understand in yourself and then working back. So often I will start here and go this way. So, you want to be adaptive. We said you want to be a general problem solver. And that's important. But notice that that means there's two kinds of, and people don't like when I use this word, but I don't have an alternative word,

General vs. Special Purpose Machines (17:50)

so I'm just going to use it. There's two kinds of machines you can be. By "machine" I mean a system that is capable of solving problems and pursuing goals in some fashion. If I want to be adaptive, what kind of machine do I want to be? Well, I might want to be a general purpose machine. Now, these terms, and I keep showing you this, are always relative. They're comparative terms. I don't think anything is absolutely general or absolutely special purpose; it's always a comparative term. But let me give you an example. My hand is a general purpose machine. My hand is such that it can be used in many, many different contexts for many, many different tasks. So it's very general purpose. Now, the problem with being a jack of all trades is that you are a master of none. So the problem with my hand being general purpose is that for specific tasks it can be out-competed by a special purpose machine. Although the hand is a good general purpose machine, it is nowhere near as good as a hammer for driving in a nail, nowhere near as good as a screwdriver for removing a screw, etc. So, in some contexts, special purpose machines outperform general purpose machines. But you wouldn't want the following. Imagine you're going to be stranded on a desert island, like Tom Hanks's character in Cast Away. He loses all of his special purpose tools; they sink to the bottom of the ocean, and that causes him a lot of distress. What he starts with at first is literally his hands, the general purpose machines. And you see that, wow, they're not doing very well. If he just had a good knife, right? But here's the problem. You wouldn't want someone to say to that character (Chuck, I believe), "Chuck, I'm going to cut off your hands, and I'm going to attach a knife here and a hammer here. Now you have a hammer and a knife." It's like, no, no, no.
I don't want that either. I don't want just a motley collection of special purpose machines. Okay? So sometimes you're adaptive by being a general purpose machine, and sometimes you're adaptive by being a special purpose machine. With a general purpose machine, you use the same thing over and over again. Sometimes we make a joke about somebody using a special purpose machine as a general purpose machine: when all you have is a hammer, everything looks like a nail. And it strikes us as a joke because we know that hammers are special purpose things and not everything is a nail. It's not so much a joke if I say that when all you have is a hand, everything looks graspable. That's not so weird. Okay? So what am I trying to get you to see? You want to be able to move between these. The general purpose machine is very efficient. Why? Because I'm using the same thing, the same function, over and over again, or at least the same set of tightly bound functions. The thing about a special purpose machine is that I don't use it that often. I use my hammer sometimes and my saw sometimes and my screwdriver sometimes, and I have to carry around the toolbox. The problem with that is it gets very inefficient, because a lot of the time I'm carrying my hammer around and not using it; I have to bear the cost of carrying it without using it. So it's very inefficient. But you know what it makes me? It makes me tremendously resilient. Because when there are a lot of new things, unexpected specific issues that my general purpose thing can't handle, I'm ready for them. I have resiliency. I've got differences within my toolkit that allow me to deal with these special circumstances. So notice what I want to do: I want to constantly trade between them. Now, I did that to show you this, and I'm now going to reorganize it this way.
Because what I'm arguing is that general purpose is more efficient, special purpose makes you more resilient, and you want to trade between them. Okay, so those are interactional properties. You might say: I sort of get the analogy, but what does that have to do with the brain and the bioeconomy? So how would you try to make information processing more efficient? Well, I want to make the processes I'm using, the functions I'm using, as generalizable as possible. That will get me general purpose, because if I can use the same function in many places, then I'm very efficient. How do you do that? Here I want to pause and introduce just a tiny bit of narrative. When I was writing this paper with Tim Lillicrap and Blake Richards, this was especially Tim's great insight. If you're interested in cutting-edge AI, you really need to pay attention to the work that Tim Lillicrap is doing. Tim's a former student of mine. Of course, he has in many ways greatly surpassed my knowledge and expertise; he's one of the cutting-edge people in artificial intelligence. And he had a great insight here.

Artificial Intelligence Applications for Reverse Engineering (24:04)

I was proposing this model, this theory, to him, and he said, "But you should reverse engineer it in a certain way." I said, "What do you mean?" He said, "Well, you're acting as if you're just proposing this top-down, but what you should see is that many of the things you're talking about are already being used within the AI community." The paper we published was "Relevance Realization and the Emerging Framework in Cognitive Science." The point is that a lot of the strategies I'm going to talk about here are strategies that are already being developed. Now, I'm going to have to talk about this at a very abstract level, because we don't know yet which one of the particular architectures, which particular application, is going to turn out to be the right one. That's still something in progress. But I think Tim's point is very well taken: we shouldn't be talking about this in a vacuum. We should also see that the people who are trying to make artificial intelligence are already implementing some of the strategies that I'm going to point out. And I think it's very telling that we're getting convergent argument that way. Okay. So how do I make an information processing function more generalizable? How do I do that? Well, you know how we do it, because we've talked about it before: you do it in science. All right. So here are two variables, for example (it's not limited to two). I have a scatter plot, and what they taught you to do was a line of best fit. This is a standard move in Cartesian graphing. Now, why do you do a line of best fit? My line of best fit might actually touch none of my data points. Does that mean I'm being ridiculously irresponsible to the data, that I'm just engaging in armchair speculation? No. Why do we do this? Why do we do a line of best fit?

Prediction and Data Compression (26:16)

Right. Well, the reason we're doing this is that it allows us to interpolate and extrapolate. It allows us to go beyond the data. Now, we're taking a chance, and of course all good science (this is the great insight of Popper) takes good chances. But here's the thing: I do this so that I can make predictions, so I can say what the value of y will be for a value of x that I've never obtained. I can interpolate and extrapolate. That means I can generalize the function. So this is data compression. What I'm trying to do is basically pick up on what's invariant. The idea is that the information always contains noise, and I'm trying to pick up on what's invariant and extend that.
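The line-of-best-fit idea can be made concrete in a few lines. The numbers below are invented for illustration: five noisy observations get compressed into just two numbers, a slope and an intercept, and that compressed function is what lets us interpolate and extrapolate beyond the data we actually measured.

```python
def fit_line(xs, ys):
    """Least-squares line of best fit: returns slope m and intercept b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# Noisy observations of an underlying pattern (roughly y = 2x + 1).
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 8.8, 11.0]

m, b = fit_line(xs, ys)
predict = lambda x: m * x + b   # the whole dataset, compressed to (m, b)

# Interpolation: an x we never measured, inside the data range.
print(round(predict(2.5), 2))
# Extrapolation: going beyond the data entirely.
print(round(predict(10), 2))
```

Note that the fitted line touches none of the five points exactly; it trades fidelity to each particular datum for a function that generalizes.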

Contextualizing (27:14)

And of course, that's part and parcel of why we do this, because in science we're trying to do inductive generalizations, et cetera. So the way in which I make my functionality more general, more general purpose, is by doing a lot of data compression. If the data compression allows me to generalize my function, and that generalization is feeding through the sensory-motor loop in a way that is protecting and promoting my autopoietic goals, it's going to be reinforced.

Particularization (27:55)

But what about the opposite? Well, it was interesting: at the time we didn't have a term for this, though I think some people have since picked up on the term. I remember there was a whole afternoon where Tim and I were just trying to come up with a name for the trade-off. So compression is what makes your information processing more efficient, more general purpose; what makes it more special purpose, more resilient? We came up with the term "particularization." And Tim's point, and I'm not going to go into detail here, is that this is the general strategy at work in things like the wake-sleep algorithm at the heart of the deep learning promoted by Geoffrey Hinton, who was at U of T; Tim was a very significant student of Jeff's. So this is the abstract explanation of the strategy at work in a lot of the deep learning at the core of a lot of successful AI. What particularization is, is that I'm trying to keep more closely in track with the data. I'm trying to create a function that overfits, in some sense, to that data. That will get me more specifically in contact with this particular situation. So compression tends to emphasize what is invariant; particularization tends to get the system to pick up on more variations. Compression will make the system more cross-contextual: it can move across contexts because it can generalize. Particularization will tend to make the system more context-sensitive. And of course you don't want to maximize either one; you want them dynamically trading. And notice how they are (is "obeying" the right word? It sounds so anthropomorphic) obeying the logistical normativity, trading between efficiency and resiliency. There are various ways of doing this, lots of interesting ways of engineering this, but it's creating a virtual engine, creating sets of constraints, so that this will oscillate in the right way and optimize that way.
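The compression/particularization trade-off can be caricatured with two tiny "learners." This is my own toy contrast, not the wake-sleep algorithm and not code from the Vervaeke-Lillicrap-Richards paper: a fitted line compresses the data into a general function, while a lookup table particularizes by memorizing every point. Each beats the other in a different situation.

```python
def fit_line(data):
    """Compression: summarize all the points as slope + intercept (general purpose)."""
    xs = [x for x, _ in data]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(y for _, y in data) / n
    m = (sum((x - mx) * (y - my) for x, y in data)
         / sum((x - mx) ** 2 for x in xs))
    b = my - m * mx
    return lambda x: m * x + b

def memorize(data):
    """Particularization: predict from the single nearest stored point (special purpose)."""
    return lambda x: min(data, key=lambda p: abs(p[0] - x))[1]

# Training data follows y = 2x, except one locally quirky point at x = 3.
train = [(1, 2), (2, 4), (3, 9), (4, 8), (5, 10)]

line = fit_line(train)     # cross-contextual: picks up the invariant trend
table = memorize(train)    # context-sensitive: tracks every particular datum

# On the quirky seen point, the particularized model is dead-on; the line is not.
print(table(3), round(line(3), 2))   # 9 6.6
# On an unseen point, the compressed model generalizes; the memorizer misfires.
print(round(line(7), 2), table(7))   # 14.6 10
```

Maximizing either learner is a pathology (brittle overfitting or blurry over-generalization); the claim in the lecture is that the system should keep trading between the two regimes.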
And so the idea is that when you've got this as something that's following the completely internal bioeconomic logistical norms, it will result in the evolution of sensory-motor interaction that makes a system, an organism, constantly and adaptively move between being general purpose and being special purpose. It will become very adaptive. Now, different organisms will be biologically skewed one way or the other. Even individuals will be biologically skewed. So there are people now proposing, for example, that we might understand certain psychopathologies in terms of some people being more biased towards overfitting, towards particularizing, and some people being more biased towards compressing and generalizing: some people tend to see connections where there aren't connections, and some people tend to be very feature-bound. Okay, what's another one? So this pair is compression and particularization. We called this trade-off cognitive scope, and we called the dimension applicability: how much you can apply your function or functions. The idea is that if you can get scope going the right way, it will get coupled (it's not representing, it will get coupled) to a pattern of interaction which fits you well to the dynamics of change and stability in the environment. Okay, what's another one? Well, a lot of people are talking about this; you'll see people talking about this in AI very significantly: exploiting versus exploring. So here's another trade-off. Scope has to do with the extent of your information; this one has to do more with timing. So here's the question: should I stay here and try to get as much as I can out of here? That's exploiting. Or should I move and try to find new things, new potential sources of resource and reward? That's exploring. They're in a trade-off relationship, because the longer I stay here, the more opportunity cost I accrue.
But the more I move around, the less I can actually draw from the environment. So do I want to maximize either? No, I want to trade between them. I'm always trading between exploiting and exploring. There are different strategies that might be at work here. I've seen recent work in which you reward a system when it reduces error and also reward it when it encounters error, and of course those are in a trade-off relationship. Rewarding error increase makes the system more curious; rewarding error reduction makes it more conscientious, if I have to speak anthropomorphically. So one way you can do this is to trade off rewarding error reduction against rewarding error increase. The way we talked about it in the paper is that you can trade off between what's called temporal difference learning and inhibition of return. I won't go into the dynamics there. What I can say is that there are different strategies being considered and being implemented.
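One standard strategy from the bandit literature (epsilon-greedy; this is my own sketch, not the temporal-difference/inhibition-of-return mechanism from the paper) makes the exploit/explore trade-off and its logistical payoff concrete. Two slot-machine arms pay out differently, and the environment changes midway, exactly the situation where pure exploitation gets stuck and pure exploration wastes effort.

```python
import random

def bandit_run(epsilon, pulls=2000, seed=0):
    """Two-armed bandit whose payouts swap halfway through the run."""
    rng = random.Random(seed)
    means = [1.0, 0.2]            # arm 0 starts out better...
    est = [0.0, 0.0]              # recency-weighted value estimates
    total = 0.0
    for t in range(pulls):
        if t == pulls // 2:
            means = [0.2, 1.0]    # ...then the environment changes.
        # Exploit the arm that currently looks best, except: with
        # probability epsilon, explore an arm at random.
        if rng.random() < epsilon:
            arm = rng.randrange(2)
        else:
            arm = 0 if est[0] >= est[1] else 1
        r = rng.gauss(means[arm], 0.1)
        est[arm] += 0.1 * (r - est[arm])   # recent evidence weighs more
        total += r
    return total

# Pure exploitation (eps=0.0) gets stuck on the stale arm after the world
# changes; pure exploration (eps=1.0) wastes half its pulls. Trading between
# them (eps=0.1) tracks the changing environment best.
scores = {eps: bandit_run(eps) for eps in (0.0, 0.1, 1.0)}
print(max(scores, key=scores.get))
```

The point the sketch makes is the lecture's point: neither pole should be maximized, because what counts as the relevant arm keeps changing as the environment changes.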

Cognitive Tempering (34:55)

And this is cognitive tempering, having to do with both temper and tempo, the relationship to time. And this has to do with the projectability of your processing. Now, first of all, a couple of things. Are we claiming that these are exhaustive? No. They're not exhaustive; they are exemplary. They're exemplary of the ways in which you can trade between efficiency and resiliency and create virtual engines that, by setting up systems of constraints, adapt the sensory-motor loop, the interactions with the environment, in an evolving manner. So, why is exploitation efficient? Because I don't have to expend very much; I can just stay here. But it depends on things staying the same. With exploration, I have to expend a lot of energy, I have to move around, and it's only rewarding if there's significant difference. If I go to B and it's the same as A, you know what I should have done? Stayed in A.

Cognitive Prioritization (36:13)

So, do you see what's happening? All of these work in different ways: one has to do with applicability, with scope; another has to do with projectability, with time. But in all of these, you're trading between the fact that sometimes what makes something relevant is how it's the same, how it's invariant, and sometimes what makes something relevant is how it's different, how it changes. And you have to constantly shift the balance between those, because that's what reality is doing. What's another one? Well, another type of one. I think there are many of these, and they are not going to act in an arbitrary fashion, because they are all regulated by the trade-off normativity, the opponent processing between efficiency and resiliency. Notice that these are both what are called cost functions. They are dealing, again, with the bioeconomics, with how you're dealing with the cost of processing, playing between the costs and benefits of these, etc. But you might also need to play between the cost functions themselves. So it's also possible that we have what we call cognitive prioritization, in which you have cost functions being played off against each other: here's cost function one, there's cost function two, playing off against each other. And you have to decide (and this overlaps with what's called signal detection theory and other things I won't get into) how to gamble. You have to be very flexible in how you gamble, because you may decide to hedge your bets and activate as many functions as you can, or you may try to go for the big payoff and say, "No, I'm going to give a lot of priority to just this function." Of course, you don't want to maximize either; you want flexible gambling. Sometimes you're focusing, sometimes you're diversifying. You create a kind of integrative function. All of this, if you check the paper, can be represented mathematically.

The Space of Relevance Realization (39:24)

Once again, I am not claiming this is exhaustive; I'm claiming it's exemplary. I think these are important. I think scope and tempering, cost functions and prioritizing between cost functions, are very plausibly part and parcel of our cognitive processing. What I want you to think about is this (I'm representing it abstractly): think about each one of these. Here's scope, here's tempering, and then of course there is the prioritization that is playing between them. Think of this as a space, with these functions as its dimensions, because they are all being governed, regulated, in this fashion. Relevance realization is always taking place in this space, and at this moment it's got this particular value according to tempering and scope and prioritization. Then it moves to this value, and then to this value, and then out to this value. It's moving around in a state space. That's what's happening when you're doing relevance realization. But although I've shown how this is dynamic, I haven't shown you how and why it would be developmental. I'm going to do this with just one of these, because I could teach an entire course just on relevance realization. Okay. When you're doing data compression, you're emphasizing how you can integrate information (remember the line of best fit): you're emphasizing integration because you're trying to pick up on what's invariant. And of course that's going to be versus differentiation. Now, I think you can make a very clear argument that these map very well onto the two fundamental processes, locked in opponent processing, that Piaget, one of the founding figures of developmental psychology, said drive development. Compression corresponds to what Piaget called assimilation. Assimilation is when you have a cognitive schema, and what is a cognitive schema again? It is a set of constraints.
You have a cognitive schema, and what that set of constraints does is make you integrate: it makes you treat the new information as the same as what you've already got. You integrate it, you assimilate it; that's compression. What's the opposite for Piaget? Well, it's accommodation, and that's of course why, when people talk about exploratory emotions like awe, they invoke accommodation as a Piagetian principle, because it opens you up. What does accommodation do? It causes you to change your structure, your schemas. Why do we assimilate? Because it's very efficient. Why do we have to accommodate? Because if we just pursue efficiency, if we just assimilate, our machinery gets brittle and distorted. It has to go through accommodation; it has to introduce variation; it has to rewire and restructure itself so that it can again respond to a more complex environment. So not only is relevance realization inherently dynamic, it is inherently developmental. When a system is self-organizing, there is no deep distinction between its function and its development. It develops by functioning, and by functioning it develops. When a system is simultaneously integrating and differentiating, it is complexifying: complexification. A system is highly complex if it is both highly differentiated and highly integrated. Why? If I'm highly differentiated, I can do many different things. But if I do many different things and I'm not highly integrated, I will fly apart as a system. So I need to be both highly differentiated, so I can do many different things, and highly integrated, so I stay together as an integrated system. As systems complexify, they self-transcend. They go through qualitative development. Let me give you an analogy for this. Notice how I keep using biological analogies; that is not a coincidence. I started out life as a zygote, a fertilized cell, a single cell formed from the egg and the sperm. Initially, all that happens is that the cells just reproduce.
But then something very interesting starts to happen. You get cellular differentiation: some of the cells start to become lung cells, some of them start to become eye cells, some start to become organ cells. But they don't just differentiate; they integrate. They literally self-organize into organs: a heart, an eye. So I do develop, at least biologically, through a process of biological complexification. What does that give you? That gives you emergent abilities. You transcend yourself as a system. When I was a zygote, I could not give this lecture. I now have those functions. In fact, when I was a zygote, I couldn't learn what I needed to learn in order to give this lecture. I did not have that qualitative competence. I did not have those functions. But as the system complexified, I acquired them. Notice what I'm showing you.
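The assimilation/accommodation opponent processing described above can be sketched computationally. The following is a minimal toy model of my own, not anything Piaget or the lecture specifies: a system of prototype "schemas" assimilates a new observation (folding it into an existing prototype, which is compression and integration) when the observation is close enough, and accommodates (restructures itself by creating a new prototype, which is variation and differentiation) when it is not.

```python
# Toy Piagetian learner: a hypothetical illustration, not a model from the lecture.
# A "schema" is a prototype value plus a count. Assimilation updates the running
# mean of the nearest prototype; accommodation adds a new prototype.

class SchemaSystem:
    def __init__(self, tolerance):
        self.tolerance = tolerance  # how far an observation can be and still "fit"
        self.schemas = []           # list of [prototype, count]

    def observe(self, x):
        if self.schemas:
            nearest = min(self.schemas, key=lambda s: abs(s[0] - x))
            if abs(nearest[0] - x) <= self.tolerance:
                # Assimilate: treat the new as the same, update the running mean.
                nearest[1] += 1
                nearest[0] += (x - nearest[0]) / nearest[1]
                return "assimilated"
        # Accommodate: the schema system restructures itself with a new schema.
        self.schemas.append([x, 1])
        return "accommodated"

learner = SchemaSystem(tolerance=1.0)
for x in [5.0, 5.2, 4.9, 12.0, 11.8, 5.1]:
    learner.observe(x)

# Two schemas emerge: one near 5, one near 12.
# The system is differentiated (two schemas) yet integrated (each compresses
# its observations into a single prototype).
print(len(learner.schemas))  # → 2
```

If the system only ever assimilated (an infinite tolerance), every observation would be averaged into one increasingly distorted prototype: efficient but brittle, exactly the failure mode the lecture describes.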

General Intelligence Enabled By Relevance Realization (46:20)

As a system is going through relevance realization, it is also complexifying. It is getting new emergent abilities for how it can interact with the environment, and then it extends relevance realization into that emergent self-transcendence. If you are a relevance-realizing thing, you are an inherently dynamical, self-organizing, autopoietic thing, which means you are an inherently developmental thing, which means you are an inherently self-transcending thing. Now I want to respond to a potential objection you might have: "I get all of this, but maybe relevance realization is a bunch of many different functions." First of all, I'm not disagreeing with the idea that a lot of our intelligence is carried out by heuristics. Some of those are more special-purpose and some of them are general-purpose, and we need to learn how to trade off between them. However, I do want to claim that relevance realization is a unified phenomenon. I'm going to do this in a two-part way. The first part is to assert, and later substantiate, that when we're talking about general intelligence, and in fact that's what this whole argument has been about, we're talking about relevance realization. This goes to work I did with Leo Ferraro, who was a psychometrist, somebody who actually does psychometric testing of people's intelligence. One of the things we know from Spearman, way back in the '20s, is that he discovered what's called the general factor of intelligence, sometimes called general intelligence. There's a debate about whether we should identify those or not; I'm not going to get into that right now. What Spearman found was that how kids were doing in math was actually predictive of how they were doing in English, and even, contrary to what our culture says, of how they were doing in sports. How I'm doing in all these different tasks is inter-predictive: how I did on task A was predictive of how I did on task B, and vice versa.
This is what's called a strong positive manifold: there's this huge inter-predictability between how you do in all these very many different tasks. That is your general intelligence. Many people would argue, and I would agree, that this is the capacity that underwrites you being a general problem solver. Often when we're testing for intelligence, we're therefore testing for general intelligence. I'll put the panel up as we go along. What Leo showed me, and what we made a good argument for, is what we're actually studying when you're doing something like the Wechsler test or another psychometric test. You will test things like the comprehension subtest, and of course you'll concentrate on similarity judgments; you'll also do things like similarities between pictures. Other people have talked about your ability to adapt to unpredictable environments, and there is other work on the ability to deal with complex workplaces, jobs that are highly g-loaded, that require a lot. Now when you trace these back, what this points to is your capacity for problem formulation; the similarity judgments and what are called the eduction abilities, the ability to draw out latent patterns, which is similarity judgment and pattern finding; dealing with very ill-defined dynamic situations; and adapting to complex environments. This is general intelligence. This is how we test general intelligence: we test people across all these different kinds of tasks, and what we find is a strong positive manifold. There's some general ability behind that. Notice these: problem formulation, similarity judgments, pattern finding, dealing with ill-defined dynamic situations, adapting to complex environments. That's exactly all the places where I've argued we need relevance realization.
Relevance realization, I would argue, is actually the underlying ability of your general intelligence; that's what we're testing for. These are the things that came out; you can even see comprehension aspects in here, all kinds of things. Relevance realization, I think, is a very good candidate for your general intelligence. And insofar as general intelligence is a unified thing, so is relevance realization. This is one of the most robust findings in psychology; it just keeps happening. There are always debates about it, blah, blah, blah; people don't like the psychometric measures of intelligence. I think that's because they're confusing intelligence with rationality and wisdom; we'll come back to that. The thing is, this is a very powerful, reliable measure: this is from the 1920s, and it keeps getting replicated. This is not going through a replication crisis. And if I had to know one thing about you in order to try and predict you, the one thing that outperforms anything else is knowing this. This will tell me how you do in school, how you do in your relationships, how well you look after your health, how long you're likely to live, whether or not you're going to get a job. This crushes how well you do in an interview for predicting whether or not you'll get and keep a job. Is this the only thing that's predictive of you? No. And I'm going to argue later that intelligence and rationality are not identical. But is this a real thing? And is it a unified thing? Yes. And can we make sense of it as relevance realization? Yes.
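The positive manifold can be illustrated with a toy calculation. The dataset below is entirely made up, not Spearman's data: each person's score on each task is modeled as a shared general factor plus task-specific noise, which is enough to make every pairwise correlation come out positive, exactly the inter-predictability the lecture describes.

```python
# Toy illustration of a positive manifold (hypothetical scores, not real data).
# Each person's score on every task = shared general factor g + task-specific
# noise, so performance on any one task predicts performance on the others.
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

people = [random.gauss(0, 1) for _ in range(500)]   # each person's latent "g"
tasks = {name: [g + random.gauss(0, 0.7) for g in people]
         for name in ("math", "english", "sport")}

# Every pair of distinct tasks correlates positively: the positive manifold.
names = sorted(tasks)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = pearson(tasks[names[i]], tasks[names[j]])
        print(names[i], names[j], round(r, 2))      # all r well above 0
```

With these (assumed) noise levels the pairwise correlations land around 0.6 to 0.7; the point is only the pattern, that a single latent factor makes very different tasks mutually predictive.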

Exploring Intelligence

Intelligence (53:47)

Is relevance realization therefore a unified thing? Yes. So relevance realization is your general intelligence, or at least that's what I'm arguing, and your general intelligence can be understood as a dynamic, developmental evolution of your sensory-motor fittedness that is regulated by virtual engines, which are ultimately regulated by the logistical normativity of the opponent processing between efficiency and resiliency. So we've already integrated a lot of psychology, the beginnings of biology, some neuroscience, and we've definitely integrated some of the best insights from artificial intelligence. What I want to do next time, to finish off this argument, is to show how this might be realized in dynamical processes within the brain, and how that lines up with some of our cutting-edge ideas. I'm spending so much time on this because this is the linchpin argument of the cognitive science side of the whole series. I'll try to show you how everything feeds into relevance realization. If I can give you a good scientific explanation of it in terms of psychology, artificial intelligence, biology and neuroscientific processing, then it is legitimate and plausible to say that I have a naturalistic explanation of it. And if the history we've traced is right in pointing towards this, what I'm then going to have the means to do is to argue how relevance realization, and we've already seen it, is probably embedded in your procedural, perspectival and participatory knowing. It's embedded in your transjective dynamical coupling to the environment, in the affordances of the agent-arena relationship, the connectivity between mind and body, the connectivity between mind and world.

Intelligence, rationality and wisdom (56:13)

We've seen it as central to your intelligence and central to the functioning of your consciousness. This is going to allow me to explain so much. We've already seen it affording an account of why you're inherently self-transcending. We'll see that we can use this machinery to come up with an account of the relationship between intelligence, rationality and wisdom. We will be able to explain so much of what's at the center of human spirituality.

The Importance Of Relevance Realisation

Relevance Realisation (56:46)

We'll have a strong plausibility argument for how we can integrate cognitive science and human spirituality in a way that may help us to powerfully address the meaning crisis. Thank you very much for your time and attention.
