Ep. 29 - Awakening from the Meaning Crisis - Getting to the Depths of Relevance Realization | Transcription

Transcription for the video titled "Ep. 29 - Awakening from the Meaning Crisis - Getting to the Depths of Relevance Realization".


Note: This transcription is split and grouped by topics and subtopics. You can navigate through the Table of Contents on the left. It's interactive. All paragraphs are timed to the original video. Click on the time (e.g., 01:53) to jump to the specific portion of the video.


Introduction

Intro (00:00)

Welcome back to Awakening from the Meaning Crisis. This is episode 29. So last time I went through with you a series of arguments trying to show you the centrality of the issue of relevance realization.


Conceptual Framework And Influential Figures

Review: Convergence on Relevance Realization (00:31)

I want to review that with you and then try and begin an account of how we might come up with a naturalistic explanation of relevance realization, and then build that into an overall plausibility argument about using that notion of relevance realization to explain many of the features that we consider central to human spirituality, meaning making, self-transcendence, altered states of consciousness, and wisdom. Before I begin that, I want to remind everybody of how much the work I'm talking about now has been done in collaboration with other people: especially the work with Tim Lillicrap and Blake Richards in 2012, the article we published in the Journal of Logic and Computation; work with Leo Ferraro in 2013; some current work with Leo Ferraro, Anderson Todd, and Richard Wu; current work I'm doing with Christopher Mastropietro; and some past work with Zachary Irving and Leo Ferraro on the nature of intelligence. So we want to take a look at what we did last time. Very quickly, to remind you, we did a series of arguments that pointed towards how central relevance realization is. We did arguments around the nature of problem solving. And remember, we saw there the idea of the search space as proposed by Newell and Simon, and we faced a couple of important issues there. We faced issues of combinatorial explosion, and what we need is the right problem formulation or problem framing. That allows us to avoid combinatorial explosion by zeroing in on relevant information. I also proposed to you, and I'll return to this later, that problem solving is our best way of trying to understand what we mean by intelligence: your capacity for being a general problem solver. We also noted the problem of ill-definedness. Very often a problem formulation is needed in order to determine what the relevant information is and what the relevant structure of that information is. So that again points us to relevance.
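The scale of the combinatorial explosion in a search space can be made concrete with a little arithmetic. The sketch below is illustrative only; the function and the numbers are mine, not from the lecture. A brute-force search with branching factor b and depth d must consider on the order of b to the power d paths:

```python
# Size of a brute-force search space with branching factor b and depth d.
# Newell and Simon's point: the number of paths grows as b**d, so exhaustive
# search quickly becomes infeasible without a good problem formulation
# that zeroes in on the relevant options.

def search_space_size(branching_factor: int, depth: int) -> int:
    """Number of distinct paths of length `depth` in a uniform search tree."""
    return branching_factor ** depth

# A modest game-like problem: 10 options per step, 10 steps deep.
paths = search_space_size(10, 10)
print(paths)  # 10000000000 -- ten billion paths from a tiny toy problem
```

Even these made-up toy numbers show why an agent cannot search exhaustively and must instead frame the problem so that only relevant options are considered.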
These two together also pointed towards a phenomenon we've already talked about, insight, and the fact that you often have to solve a problem by altering your problem formulation and redetermining what you consider relevant.


Categorization, Communication, and the Robot (03:32)

We then took a look at categorization. I'll come back to this again in another way a little bit later in this lecture, but we took a look at how categorization ultimately depends on judgments of similarity, and we can get into an equivocation there. We can equivocate between a purely logical notion of similarity, in which case any two objects are indefinitely similar or dissimilar to each other, and psychological similarity, which, unlike logical similarity, can actually help us to categorize. There we're talking about making a comparison of two things in terms of the relevant features of the comparison, the relevant aspects. So we're into relevance, and we're also introducing an important idea. I want you to remember this notion of an aspect: a set of relevant features that cohere together and are relevant to us, especially in projects like categorization. So we keep getting this. Of course, if you remember, in doing good cog sci, I do a convergence argument to get a trustworthy construct, and then I basically do a divergence argument to show how it has the potential to explain many important phenomena and the relationships between them. And so that's what I'm building here. Right now we're on this side, how all these things are converging on relevance realization, and then, as I said, can we use this to explain many of the features that seem to be central to human spirituality, meaning making, self-transcendence, altered states of consciousness, and wisdom. We then took a look at communication. I don't remember which order we did these in; I think we might have done the robot first. It doesn't matter. We did communication, and we saw that the issue there is the fact that you have to convey more than you can say. That led us into the work of Grice and the series of maxims that make conversational implicature possible, and that got us into the fact that all of the maxims collapse into the maxim of being relevant.
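The equivocation between logical and psychological similarity can be illustrated with a toy computation. Everything here, the objects, their feature lists, and the Jaccard-style overlap measure, is an invented example, not anything from the lecture; the point is only that the same pair of objects comes out maximally similar or maximally dissimilar depending on which features are selected as relevant:

```python
# Toy illustration: "similarity" between two objects is undefined until you
# choose which features count. Psychological similarity presupposes a prior
# selection of the relevant aspects; the feature sets below are invented.

def similarity(a: set, b: set, relevant: set) -> float:
    """Jaccard overlap restricted to the features deemed relevant."""
    a, b = a & relevant, b & relevant
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

moon = {"round", "bright", "in_sky", "rocky"}
ball = {"round", "bright", "toy", "rubber"}

# Same two objects, opposite verdicts, depending on the aspects we select:
print(similarity(moon, ball, {"round", "bright"}))   # 1.0 -- very similar
print(similarity(moon, ball, {"rocky", "rubber"}))   # 0.0 -- not similar at all
```

The measure itself is doing no work here; all the work is done by the choice of `relevant`, which is exactly the relevance realization the lecture is pointing at.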
We then did, or we can now remember doing, the issue of robotics, the actual interaction with the environment. Here's the idea of being an agent, and we saw the robot that was trying to pull the battery on the wagon, and that wagon also has a bomb on it. And what we saw is the problem of the proliferation of side effects. You can't ignore all side effects or you'll be grotesquely stupid; you can't check all side effects or you'll be grotesquely incapable; and so therefore you have to zoom in on the relevant side effects. So again and again and again, everything is centering on this. I want you to also now remember a couple of other things from previous lectures.


Consciousness, Working Memory, and Intelligence (06:48)

Remember how we talked about the convergence argument; here is an independent convergence argument. When we talked about consciousness, not the nature of consciousness but the function of consciousness, all the arguments converged on the idea that what's going on in consciousness is relevance realization, especially in complex, ill-defined situations in which our agency is directly involved. So consciousness seems to be bound up with relevance realization. And we also talked about how this overlaps with working memory: the work of Lynn Hasher, the idea that the job of working memory is to be a relevance filter, to screen off irrelevant information and allow into deeper processing more relevant information. I also pointed out, and I want you to see how all these connections are forming, that there are deep connections between working memory and measures of your general intelligence, how intelligent you are. So we see that we're getting actually a very powerful convergence argument towards the centrality of relevance realization: as constitutive of your intelligence, your cognitive agency, and as significantly contributory towards your existence as a conscious being. And then I also suggested to you last time that this notion of relevance realization, and this is what we're going to develop today, may be a way of explaining that fundamental aspect of meaning, the kind of meaning that was lost in the meaning crisis, that's expressed in the three orders, in which we're pursuing coherence and significance and purpose: that sense of connectedness.


Meaning, Relevance, and Agency (08:37)

And I'm going to try to argue that as we understand what relevance is, we'll see that relevance is exactly that sense of connectedness. So there will be deep connections between meaning and relevance, and of course this will make sense, right? See what I'm arguing here: there are deep connections between meaning and relevance, right? There are deep connections between relevance and agency; that's the whole point about the robot and communicating, right? And there are going to be deep connections, we've already seen, between meaning and agency: one of the whole things about agency is its relationship to the arena, the agent-arena relationship, and how that grounds, how that's the meta-meaning grounding of, all of our other more specific meaning-making projects. All right. So I hope I've made at least a good convergence argument for you that many things converge upon this, many things that we're interested in, many central defining features of intelligence and agency and aspects of the functionality of our consciousness. Everything is converging on relevance realization. What I want to try and show you now is how you might move towards, and this has been sort of the core of my, I guess you'd call it my scientific work, how you move towards trying to offer a scientific explanation of relevance, what that would look like, and the difficulties you face in doing so.


Toward a Theory of Relevance (10:15)

I also want to try and argue that there's good reason to believe that we're talking about a unified phenomenon, a unified thing here, relevance; that this isn't just a family-resemblance term for a lot of disconnected things; that there's reason to believe this is a central thing. Let's start with trying to offer theories of relevance. And there are good ones out there. There's the work of Sperber and Wilson and others, and I will refer to some of that work as we move along. But let's first work at the meta level. What do we need a good theory of relevance to do? What kind of mistakes do we need to avoid when we're trying to explain relevance? The main mistake that I want to point to is a mistake in which we are arguing in a circle. If you remember, this is part of what goes into things like the homuncular fallacy. Remember, when I try to explain vision with the little man in my head having vision. Let's put it this way: whatever process or entity I'm using to explain relevance should not itself require relevance. What I mean by that is, if I have something, X, and I'm using it to explain relevance, then X cannot itself presuppose relevance for its function. Because if it does, then I'm ultimately arguing in a circle. I have to explain relevance realization in terms of processes that are not themselves processes that realize relevance. Another way of putting this is that I ultimately want to explain intelligence in terms of processes that are not themselves intelligent. Because if I'm always explaining intelligence in terms of processes that are themselves intelligent, that is no different than the homuncular fallacy of explaining vision in terms of internal processes that are themselves visual processes. That's going to be a guiding methodological principle.


Representations and Aspectuality (12:57)

Now, that turns out to be very powerful, and as many people have pointed out, Fodor famously in repeated places, it's actually very difficult to explain relevance without presupposing relevance in the machinery that you're using to explain it. Let's take a look at some candidates. We might think that we could explain relevance in terms of how we use representations. This is a very powerful way we think about the mind: that there are things in the mind, ideas, pictures, that stand for, represent, the world in some way. We might think instead that relevance is a function of computation, computational processes. Or we might think that we explain relevance in terms of what's called modularity, that there's a specific area of the brain dedicated to processing relevance. I want to take a look at each one of those, and I want to try and argue as to why I think they're inadequate and what that helps us to see. What I want you to see, and I'll try to show this along the way, is that if, and I'm trying to make it more than an if, relevance realization is so central to our meaning-making, our cognition, our consciousness, our self-transcendence, etc., then as we learn how we can best explain or understand it, we should garner lessons about how best to think about and reflect upon human spirituality, at least in the terms in which I have defined it for us. Okay, so representation. Now, this is a terrifically hot issue, both in terms of interest and controversy within cognitive science in general, and I'm not going to try and completely decide this issue right now, although I think I'll say things that are pertinent to that debate. But let's take it that what we mean by a representation is, as I said, some mental entity that stands for, refers, directs us towards an object in the world. That's all I need.
Whatever else representations are in all that controversy, that's all I need for the point I want to make. Because I want to show you something very important about a representation, and I mentioned it a few minutes ago, and this is a point that John Searle has famously made: representations are aspectual. Okay, so I hold this thing up, and you form a representation of it. Remember all the things we talked about when we talked about categorization; we talked about similarity, etc. So when you form a representation, you do not grasp all of the true properties of this object, because the number of true properties is combinatorially explosive. You've already seen that. So out of all of the properties, you just select some subset. What subset do you pick? Well, you pick a subset that is, and here it comes, relevant to you. Are they just a feature list? No, we've already seen that a long time ago. They have a structural-functional organization. They're made relevant to each other. So here's what we've got: a set of features that are relevant to each other, and then that set of features, structural-functionally organized so that they have co-relevance, is then relevant to me. That's what an aspect is. So whenever I'm representing anything, I'm getting an aspect of it. This is a marker. However, I could change its aspectuality: it's now a weapon. And we do that all the time. In fact, one of the ways we test people's creativity is to do exactly that. We will give them some object and say, "How many different ways can you use it? How many different ways can you categorize it?" Namely, how flexible are you in getting different aspects from the same object? So representations are inherently aspectual, but notice the language I was using. You're zeroing in on relevant properties out of all the possible properties.
You're structuring them in terms of how they're co-relevant to each other, and then how that structural-functional organization is relevant to you. Aspectuality deeply presupposes your ability to zero in on relevance, to do relevance realization. That means that representations can't ultimately be the generators, the creators, of relevance. They can't be the causal origin of relevance. Now, can our representations feed back and alter what we find relevant? Of course; nobody's denying that. That's why we use representations. But what they can't serve as is the ontological basis, the stuff in reality, that we're trying to use to generate a non-circular account of relevance realization. Now, that's going to tell us something really interesting. It's going to tell us that if meaning and spirituality are bound to relevance realization, then the place to look for them is not going to be at the level of our representational cognition.
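One way to see why a representation cannot be the generator of relevance: the candidate subsets of properties it would have to choose among grow as two to the power n. This is a minimal illustrative calculation, and the property list is invented for the example:

```python
# The number of candidate "aspects" (subsets of properties) explodes: even a
# short list of n properties yields 2**n possible subsets, before we even ask
# how the selected features are organized relative to each other. A
# representation cannot enumerate these; it must already have zeroed in on
# a relevant few, so it presupposes relevance realization.

properties = ["red", "plastic", "cylindrical", "lightweight", "capped",
              "refillable", "graspable", "ink-bearing"]  # a marker, say

n_subsets = 2 ** len(properties)
print(n_subsets)  # 256 subsets from just 8 properties; 30 properties give over a billion
```

And eight properties is absurdly few; the lecture's point is that the true properties of any real object are themselves combinatorially explosive before subset selection even begins.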


Multiple Object Tracking (19:08)

The level of our cognition that is using ideas, propositions, pictures, etc. Once again, I am not saying that those things do not contribute to or affect what we consider relevant. What I'm saying is they are not the source, the locus, of how we do relevance realization. I want to show you how this cashes out even in an empirical manner. This goes to some really interesting work done by Zenon Pylyshyn on what's called multiple object tracking. Multiple object tracking is really interesting. Basically, what you do is you give people a bunch of objects on a computer screen. They'll be Xs and Os, they'll be different colors, they can be different shapes, all kinds of things like this. Then I have the objects move around. Let's say this was a red X. After it moves around, I ask you where the red X is and you have to point to it. I may ask you where the green circle is, where the blue square is; you get the task. What's interesting is how well you can do this. You can track about eight objects reliably. What's really interesting is that the more objects you track, the fewer features you can attribute to each object. What do I mean by that? Suppose I'm tracking the red X and I have to keep track of it. I can, after lots of movements, say, "Oh, it's there now." It started there and it's there now. What I won't notice during that is that the red X has become, for example, a blue square. All of its content properties get lost. All I'm tracking, and I need you to remember this, is what you might call the hereness, where it is, and the nowness of it. It's here now, it's here now, it's here now. Everything else, its shape, its color, its categorical identity, all gets lost. So he calls this FINSTing, which stands for "fingers of instantiation." The basic idea is like this: your mind has something equivalent to putting your finger on something. I don't know what this is.
Suppose I didn't know what it was. I put my finger on it. I don't know what it is; I just know its here-nowness. It's here now, it's here now. "Here" and "now" are indexicals: terms that just refer to the context of the speaker. So, here now, right? And it moves around and my mind can keep in touch, notice my language, in touch, in contact with, right, something. But that's all it's doing. It's just tracking the hereness. Well, that's really cool. Now why do we have this ability?
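A toy sketch of this kind of tracking may help make the point concrete. This is my own illustration, not Pylyshyn's code or model: the tracker keeps only each tagged object's current position, so any change to the object's color or shape is simply invisible to it:

```python
# A toy sketch of FINST-style tracking: the tracker maintains only the
# "hereness" (current position) of each tagged object. Feature attributes
# (color, shape) are never stored, so a tracked item can change from a red X
# to a blue square without the tracker noticing. Entirely illustrative.

def make_finsts(objects):
    """Tag objects by index, remembering positions only (not features)."""
    return {i: obj["pos"] for i, obj in enumerate(objects)}

def update(finsts, new_positions):
    """Each frame, update each FINST to its object's new position."""
    return {i: new_positions[i] for i in finsts}

scene = [{"pos": (0, 0), "color": "red", "shape": "X"},
         {"pos": (5, 5), "color": "green", "shape": "O"}]
finsts = make_finsts(scene)

# Object 0 moves; it could also change color and shape, unnoticed:
finsts = update(finsts, {0: (1, 2), 1: (5, 4)})
print(finsts)  # {0: (1, 2), 1: (5, 4)} -- positions only, no features
```

Notice that `finsts` never contains `"color"` or `"shape"`; content properties are discarded at tagging time, mirroring what subjects lose as tracking load increases.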


Salience Tagging (22:20)

Well, first of all, I'm going to propose a way of thinking about this. He doesn't use this language, but I think it'll be helpful, and I don't think it's in any way inconsistent. This ability is like "salience tagging." When I touch this, I'm making this hereness salient to me. This here-now is salient to me. Not the bottle, not even the flat surface, because remember, I lose all of those particular qualities.


Demonstrative Reference (22:51)

All I have is that the here-nowness is salient to me. And we do this with demonstrative terms like "this." Notice the word "this" is not like the word "cat." "Cat" refers you to a specific thing. "Meow, meow!" The animal that pretends to love you. Actually, I know some cats now that I'm actually convinced do love me, so I have to amend my usual comments about cats. But "this" isn't like "cat." "This" can go, watch: this. This. This. This. Okay? It doesn't refer to a specific thing. It doesn't make some thing salient; it just makes some hereness and nowness, sorry for talking about it this way, but this is how we have to talk, salient to you. Now, I want you to pick up on something I just said with "this." Terms like "this" and "here" and "now," but especially "this," are linguistic terms. And they do what is called demonstrative reference. They do not refer to a particular thing. They do not refer to the bottle or to the marker or to the wall. But this, this, this, this. Okay? All they do is salience tagging. This and that. Now why is that important? Well, Pylyshyn wants you to understand FINSTing, and FINSTing is obviously not a linguistic phenomenon. I'm not speaking in my head when I'm doing this. In fact, if you try to speak in your head, you're going to mess yourself up. So he's using demonstrative reference as a linguistic analogy for something you enact. So I'm going to try and draw that out by calling it "enactive demonstrative reference" rather than linguistic demonstrative reference.


Enactive Demonstrative Reference (24:52)

So, enactive demonstrative reference, which I've tried to explain to you with this notion of the salience tagging of hereness and nowness. Why is this so important? Well, here's where the analogy can help me. I need demonstrative reference, I need enactive demonstrative reference, before I can do any categorization. Look, if I'm going to categorize things, I need to mentally group them together. This is mental grouping: this, this, this, this, this. That's what mental grouping is. Mental grouping is to salience-tag things and bind them together in a salience tagging. So what am I trying to show you? Any categorization you have depends on enactive demonstrative reference, and enactive demonstrative reference is only about the salience tagging of hereness and nowness. You see, all of your concepts are categorical. That whole conceptual, representational, categorical, pictorial level, all of that depends on this. But this depends on something that is pre-categorical, pre-conceptual. And you say, but you're using concepts to talk about it. Don't confuse properties of the theory with properties of what the theory is about. Of course I have to use words to talk about it. I have to use words to talk about atoms. That doesn't mean that atoms are made out of words or dependent on words. I have to use words to talk about anything. And I don't want properties of my theory and properties of the phenomena of the theory to be confused. I want a theory about, for example, vagueness to itself be clear. I want a theory about illogicality to itself be logical. I want a theory about irrationality to itself be rational. Do not confuse properties of the theory with properties of the thing being referred to. Yes, I have to use language and concepts to talk about it, but that does not mean that the thing itself is made out of, or dependent on, concepts and categorization.
I've given you an argument and I've given you empirical evidence towards this claim, and they massively converge together. Notice this is a fundamental connectedness to reality you're getting with the FINSTing, the enactive demonstrative reference, when you're getting that initial salience tagging. Because it's like the mind being in contact with the world. That's why Pylyshyn even uses the metaphor of contact. All right, so the representational level is not going to give us what we're looking for. In fact, we need to pursue something that is sub-representational. In cog sci, the representational level is called the semantic level, because this is the level at which words have meaning, or, by analogy, at which representations have representational meaning. We have to go sub-semantic, we have to go sub-categorical, we have to go sub-conceptual. Now, is that such a bizarre claim? We saw in higher states of consciousness that people claim to have the most profound sense of meaning, and it is precisely ineffable. They reliably, across traditions, across historical contexts, claim that it is not conceptual.


Representational Level (29:23)

It can't be grasped categorically. And they use the language of hereness and nowness to describe it. It's fully present. It's like, you know, an eternal hereness and nowness. So this is actually not a bizarre claim to consider. Now, it's difficult for us, because we habitually identify, that's our ego structure, I would say, we tend to identify with the way in which we are running representations in our mind: inner pictures, inner speech, etc. All right, so perhaps we could consider the computational level as the level at which we could explain relevance realization, because we have found that the semantic level of representations is inadequate. This is often called the syntactic level. Semantics is about how your terms refer to the world. Syntax is about how your various terms have to be coordinated together within some system. So, for example, you know that there are grammatical rules in English about how you can put certain things together. That's the syntax. So in computation, what we're usually doing is thinking about the relationships between our symbols. I don't mean symbol in the religious sense; I just mean the things that we're using within, for example, the code of a program or something like that. We're talking about the relationships between them. Now, there have been a lot of issues around this, and I want to point to a core argument by one of the strongest defenders, one of the originators and defenders, of the computational theory of mind. So this is a tradition, and you remember it goes back to Hobbes: the idea that cognition is computation, the manipulation of an abstract symbolic system, generally a logical or mathematical symbolic system. The manipulation of that is what it is to think; to think is to do computation. Now, Fodor has pointed out, and I think these are arguments in many ways analogous to Wittgenstein's, and you have to remember, he's a defender of the computational theory of mind.
He's considered one of its founding figures within cognitive science. So when he criticizes it, we have to first of all do two things. He died not that long ago, and we have to commend him on his honesty as a researcher. The capacity for self-criticism is, for me, a demonstrative measure of how good a researcher is.


Self-Criticism (32:25)

If you're finding people that are incapable of self-criticism in their intellectual pursuits, then I suggest you give them quite a wide berth and limit how much confidence you place in their work. So the fact that he does that is important, and the fact that he launches into that self-criticism means he's not being motivated by his own particular theoretical bias. All that being said, what's the nature of the criticism? Well, the nature of the criticism is that you have to make a distinction, ultimately, between implication and inference. People sometimes confuse these. So implication is a logical relationship based on syntactic structures and rules, a logical relationship between propositions. Here's an abstract example: if I have "A and B," and I know that's true, I can conclude that B is true. I don't know what B is. See, I don't have any semantic content; it is purely syntactic, but I can derive that. Now, when we try to think about implications, what we have to remember is that an inference is when you're actually using an implication relation to change your beliefs. The thing about beliefs is that they have content, right? So when I'm making an inference, I'm not just making an implication. I'm using implication relations in order to alter belief, to change belief.
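The purely syntactic character of implication can be shown with a minimal sketch. This is illustrative code of my own, not anything from the lecture: conjunction elimination operates on the form ("and", A, B) without ever inspecting what A or B mean:

```python
# Conjunction elimination as pure syntax: from the truth of "A and B" we can
# derive both A and B without knowing what they mean. Implication is a
# relation between propositions; inference is using such a relation to
# change beliefs, which is where content (and relevance) comes in.

def conj_elim(premise):
    """Given a premise of the form ('and', A, B) held true, return the
    derivable conjuncts. A and B are opaque symbols; no semantics needed."""
    op, a, b = premise
    if op != "and":
        raise ValueError("conjunction elimination needs an 'and' premise")
    return [a, b]

print(conj_elim(("and", "A", "B")))  # ['A', 'B']
```

The rule never asks what "A" or "B" stand for; that contentlessness is exactly why implication alone cannot tell you which beliefs are worth changing.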


Understanding Relevance Realization

Implication and Inference (34:22)

Okay, you say, well, why does that matter? Because changing beliefs brings up an important issue right away: which beliefs should I be changing? Let me try and show you what I mean. Any proposition, technically, is defined, in terms of its logical, syntactic structure, by all of its implication relations. And, I mean, logicians can get very technical here about whether or not negation and implication are interdefinable, blah, blah, blah. I'm just going to speak very broadly here, because that's all I need. So a proposition's logical, computational identity is defined by all of its implication relations to other propositions. So, for example, part of the identity of this, "A and B," is that it implies B. It also implies A, and all kinds of things. Now, the issue that we have, and this is a point that was also made independently by Cherniak, is that the number of implications, of logical relations between any proposition and all the other propositions, is combinatorially explosive. You cannot ever make use of all of the implications of any proposition; we talked about this when we discussed how you can't be comprehensively logical. You cannot be completely logical, ever. What you do is, out of all of the implications, you select which ones are going to be used in an inference. Right? Fodor and Cherniak both independently talk about this as a kind of cognitive commitment: which of the implications are you going to commit to? And this matters to you, because commitment is an act that makes use of your precious and limited resources of attention, memory, time, and metabolic energy. You cannot afford to spend them on all possible implications.
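Cherniak's point can be given rough arithmetic. This sketch is illustrative and deliberately conservative (the counting scheme is mine): even counting only the nonempty conjunctions formable from n atomic beliefs, each of which trivially implies its conjuncts, the candidates explode:

```python
# Rough, deliberately conservative arithmetic for the explosion of implication
# relations: from n atomic beliefs you can form 2**n - 1 nonempty conjunctions,
# each trivially implying each of its conjuncts. An agent that tried to check
# them all would exhaust its resources; it must commit only to the relevant ones.

def nonempty_conjunctions(n: int) -> int:
    """Number of nonempty conjunctions formable from n atomic propositions."""
    return 2 ** n - 1

for n in (10, 20, 30):
    print(n, nonempty_conjunctions(n))
# 30 atomic beliefs already yield over a billion candidate conjunctions,
# and real belief sets are vastly larger than 30 propositions.
```

And this deliberately undercounts: it ignores disjunctions, negations, conditionals, and every other connective, all of which add further implication relations.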
You cannot even afford to spend them on inferences that are not, and here's what you knew I was going to say, relevant to the context. Which beliefs do I need to change, and that can mean strengthen, by the way, which beliefs do I need to change in this context? So notice: out of all of these, what am I doing? I'm choosing, and this is what Cherniak specifically argues; this is his term, not mine. What makes somebody rational, according to Cherniak, and we'll come back to whether or not this is a good definition of rationality, but at least what makes you intelligent as a cognitive agent, is that you select, out of all the possible implications, the relevant ones: the ones that are relevant to the context, because they're going to affect the beliefs that you've already done relevance realization on as applying to this situation, or representing this situation well. So inference massively presupposes relevance realization. Now, you may think, well, I can get around that, because logic isn't just implications; it's the rules governing the implications, and maybe all I need to talk about is the rules. And then here's the argument that comes from Wittgenstein, but I think ultimately goes back to Aristotle, about how rules work. And this is an argument that Brown and others have made very, very clear. Rules are, obviously, propositions. But they're not just propositions; they're propositions that, and this is perhaps why you're considering them, tell you where to commit your resources. Now, the problem with that is that every rule requires an interpretation. Every rule requires a specification in its application.
I assume that many of you have this rule: be kind. Which means, in a situation, I will use inferences to derive actions and changes of belief, and those will fit together in a certain way that will result in me achieving kindness towards others. So I have this rule; it tells me which implications to pay attention to, which beliefs I should make salient, etc. Now, what's the issue with this? Well, think about being kind. What do I mean by this problem of interpretation, of specifying the application of the rule? The way I'm kind to my son Spencer, what it means to be kind to Spencer, should I use that in how I'm trying to be kind to my partner, Sarah? No. That would be inappropriate. It could be condescending, it could be patronizing. Now, I want to be kind to both of them. In fact, I love both of them deeply, but I'm not going to be kind to them in the same way most of the time. Well, what about how I'm kind to a friend? Should I be kind to a friend the way I'm kind to either Spencer or Sarah? That doesn't seem right either. What about how I'm kind to my students? Should it be like how I'm kind to a friend? No. How I'm kind to Spencer? No. How I'm kind to Sarah? No. What about how I'm kind to a stranger? Should it be like how I'm kind to my students? No. How about when I'm kind to myself? Should it be like any of those? So here's the thing, and this is bound up with the fact that we always have to convey more than we can say; you can probably see that. I cannot specify all the conditions of application of the rule within the rule, because the rule always has to convey much more than it can say. If I try to specify them in the rule, the rule will become unwieldy, because it will become combinatorially explosively large. It will no longer serve. Well, you say, what you might do is put in a rule on how to use this rule, a higher-order rule. That's not going to work, because the same problem is going to happen there.
And this was Wittgenstein's point. You can't ultimately get an explanation of how you follow rules in terms of just the rules. Your ability to follow rules is actually based on something else. Brown calls this in his book on rationality in 1988. The skill of judgment. Notice what we've moved here. We've moved out of the propositional language of a rule. And we've moved into the procedural language of a skill. The skill knowing how to judge what is relevant pertinent in this situation. Now, again, notice how we can't even maintain the two things that are supposed to be central to computation. We can't use inference because it presupposes relevance. We can't use rules because what is this procedural skill of being able to determine what's appropriate or what fits in the context, what fits the people or the situation, what fits the problem or task at hand. Well, that's the skill of relevance realization. So we're seeing that the computational level isn't going to do it for us. I want to stop here before we go to this modularity issue and point out something really interesting. Notice what we got with Fodor and Wittgenstein, and like I said, I think this ultimately goes back to Aristotle. Notice how the propositional, and this is one of Wittgenstein's famous arguments, ultimately depends on the procedural. One of my favorite quotes from Wittgenstein has to do exactly with this. He said, "Even if lions could talk, we would not understand them.


Modularity (47:47)

What about modularity? What about modularity? Well, the idea would be something like this, right? And to be fair, this comes up a lot. The idea, you know, here's the mind or the brain, right? And here's something like, here's the central executive or something like that. It's weird we use a business term for an aspect of our cognition. This is used in psychology. And the idea is that central executive is making all kinds of important decisions. Well, maybe the central executive is responsible for relevance realization. And a lot of people, and I know this because I interact with psychologists, they're like, "Oh, yeah, that's it. That's the answer." Well, it's not an answer. It's not an answer at all. Because if it's right, it's ridiculously homuncular. Because what does the central executive have to possess? Inside the central executive is a capacity for relevance realization. I haven't explained it. I've just pointed to a place. The problem is you shouldn't... Okay, so first of all, I haven't explained it. It's a moncular. And secondly, you shouldn't point to a place. Look, relevance realization can't be in any one place. It has to simultaneously. You know this. We've talked about this with how attention works. Remember, you know, that you're always going from feature to gestalt and from gestalt to feature. Attention has to be moving out towards the gestalt and down to the features. Relevance realization has to be happening both at the feature level and the gestalt level in a highly integrated interactive fashion. You can't point to one place and say that's where relevance realization is going on. Because relevance realization has to be happening at multiple levels of cognition in a simultaneous self-organizing fashion. That's why it can lead to insight. And as I said, pointing to any one thing and then labeling it is not an explanation. It is a homuncular divergent. That's all it is. Okay. Let's try and draw this all together. What are we learning? 
Well, I'm trying to show you we're already learning something very interesting about meaning making. But we're learning what we need, the kinds of properties and processes we need in order to explain relevance realization. First of all, our account of relevance realization and bear with me on this because there's an important way in which I'm going to modify this. But our account of relevance realization has to be completely internal. Now what do I mean by that? It has to work in terms of goals that at least initially are internal to the brain and emerge developmentally from it. Look, any goal in which the brain is representing or referring to something in the world can't be the place where we can generate an explanation of relevance. Because in so far as I'm representing a goal to myself, I've already got the capacity for relevance realization. The goals that are the originating source of relevance realization have to be internal to the relevance realization process.


What blocks us from understanding (44:16)

Even if they could use all of our words, we would not understand them, because their skills of what is relevant or important or central to them are very different from ours." He called this a form of life. Their form of life, the way they exercise across many contexts the skill of making judgments about what's relevant, what's salient and important to them, is fundamentally different from ours because they're lions rather than humans, and therefore even if they spoke, we would not understand them. We see that the propositional actually depends on the procedural. But notice, and this is really important: if I'm exercising a skill, say I'm going to throw this, or do a martial arts block or something, that depends on what's called situational awareness. If I'm a good martial artist, I don't just have my skills and apply them mechanically. It's a telling thing if you spar with somebody who's fighting mechanically, because they don't have situational awareness. So when I'm exercising a skill, it depends on my situational awareness. What is situational awareness? Well, you already know what it is; we've already talked about it. It's your perspectival knowing. It's your ability to do salience landscaping, your ability to foreground and background, to formulate the problem. It's all perspectival.


How we should think about relevance realization (45:45)

My situational awareness is how my salience landscaping is foregrounding what's most relevant to the task, and backgrounding what's irrelevant. How is it adjusting as the situation changes, so that the way I'm applying my skill is more adaptive and more fitted to the situation? So your procedural knowing depends on your perspectival knowing. And you know where I'm going with this: your perspectival knowing ultimately depends on how well the agent and arena fit together and generate affordances of action and affordances of intelligibility. The agent and the arena need to be in a conformity relationship; they need to be well fitted together. You've seen lots of arguments that this is needed in order for my salience landscaping to function appropriately. So the perspectival ultimately depends on the participatory. Now, of course, it also goes the other way; they affect each other in multiple interactions. So I was not originally drawing the arrow of causal interaction, I just did that; what I was trying to draw originally was the arrow of dependence, asymmetric dependence. This depends on this, and this depends on this, and this ultimately depends on this. So we're getting a lot about how we should think about relevance realization, where we should look for it, and notice it's starting to give us a way of connecting and thinking about the four kinds of knowing.


Inference (34:22)

Okay, you say, well, why does that matter? Because changing beliefs brings up an important issue right away: what beliefs should I be changing? Let me try and show you what I mean. Any proposition, technically, is defined in terms of its logical syntactic structure by all of its implication relations. Logicians can get very technical here about whether or not negation and implication are identical, and so on; I'm just going to speak very broadly, because that's all I need. So a proposition's logical, its computational, identity is defined by all of its implication relations to other propositions. So, for example, part of the identity of this, A and B, is that it implies B. It also implies A, and all kinds of other things. Now, the issue that we have, and this is a point that was also made independently by Cherniak, is that the number of implications, of logical relations between any proposition and all the other propositions, is combinatorially explosive. You cannot ever make use of all of the implications of any proposition, and we talked about this in how you can't be comprehensively logical. You cannot be completely logical, ever. What you do is, out of all of the implications, you decide which ones you select, which ones are going to be used in an inference. Fodor and Cherniak both independently talk about this as a kind of cognitive commitment: which of the implications are you going to commit to? And this matters to you, because commitment is an act that makes use of your precious and limited resources of attention, memory, time, and metabolic energy. You cannot afford to spend them on all possible ones.
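The combinatorial point can be made concrete with a small sketch. This is a toy illustration of my own, not something from the lecture, and the function name `implied_conjunctions` is invented for it: even if we count only the implications a conjunction has by conjunction elimination, ignoring every other inference rule, the count doubles with each added conjunct.

```python
from itertools import combinations

def implied_conjunctions(atoms):
    """A conjunction of n atoms implies, by conjunction elimination alone,
    the conjunction of every non-empty subset of those atoms."""
    implications = []
    for size in range(1, len(atoms) + 1):
        for subset in combinations(atoms, size):
            implications.append(" & ".join(subset))
    return implications

# 2^n - 1 implications from a single n-atom conjunction,
# before we even consider any other inference rules
for n in [2, 4, 8, 16]:
    atoms = [f"p{i}" for i in range(n)]
    print(n, len(implied_conjunctions(atoms)))
```

A real belief system has far more than one proposition and far more than one inference rule, so the space of available inferences explodes much faster than this toy suggests; the point is only that exhaustively committing to all implications is not an option for a finite agent.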
You cannot even afford to spend them on inferences that are not, and here's what you knew I was going to say, relevant to the context. Which beliefs do I need to change, and that can mean change in strength, by the way. Which beliefs do I need to change in this context? So notice what I'm doing: out of all of these, I'm choosing, and this is what Cherniak specifically argues. This is his term, not mine. What makes somebody rational, according to Cherniak, and we'll come back to whether or not this is a good definition of rationality, but at least what makes you intelligent as a cognitive agent, is that you select, out of all the possible implications, the relevant ones. Those ones are relevant to the context because they're going to affect the beliefs that you've already done relevance realization on, as applying to this situation, or representing this situation well. So inference massively presupposes relevance realization. Now, you may think, well, I can get around that, because logic isn't just implications, it's the rules governing the implications, and maybe all I need to talk about is the rules. And here's the argument, which comes from Wittgenstein but I think ultimately goes back to Aristotle, about how rules work. This is an argument that Brown and others have made very, very clear. Rules are obviously propositions. But they're not just propositions; they're propositions that, and this is perhaps why you're considering them, tell you where to commit your resources. Now, the problem with that is that, of course, every rule requires an interpretation. Every rule requires a specification in its application.
I assume that many of you have this rule, "be kind," which means that in a situation, I will use inferences to derive actions and changes of belief, and those will fit together in a certain way that will result in me achieving kindness towards others. So I have this rule; it tells me which implications to pay attention to, which beliefs I should make salient, etc. Now, what's the issue with this? Well, think about being kind. What do I mean by this problem of interpretation, of specifying the application of the rule? The way I'm kind to my son Spencer, what it means to be kind to Spencer, should I use that in how I'm trying to be kind to my partner, Sarah? No. That would be inappropriate. It could be condescending, it could be patronizing. Now, I want to be kind to both of them. In fact, I love both of them deeply, but I'm not going to be kind to them in the same way most of the time. Well, what about how I'm kind to a friend? Should I be kind to a friend the way I'm kind to either Spencer or Sarah? That doesn't seem right either. What about how I'm kind to my student? Should it be like how I'm kind to a friend? No. Like how I'm kind to Spencer? No. Like how I'm kind to Sarah? No. What about how I'm kind to a stranger? Should it be like how I'm kind to my students? No. How about when I'm kind to myself? Should it be like any of those? So here's the thing, and this is bound up with the fact that we always have to convey more than we can say; you can probably see that. I cannot specify all the conditions of application of the rule in the rule, because the rule always has to convey much more than it can say. If I try to specify them in the rule, the rule will become unwieldy, because it will become combinatorially explosively large. It will no longer serve. You say, well, what you might do is put in a rule on how to use this rule, a higher-order rule. That's not going to work, because the same problem is going to happen there.
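The regress can also be sketched computationally. This is my own toy construction, not the lecture's; the function `fully_specify` and the context list are invented for illustration. Trying to pack the conditions of application into the rule itself just multiplies rules, and every new rule still needs its own conditions of application:

```python
def fully_specify(rule, contexts, depth):
    """Attempt to make a rule self-applying by spelling out a version of it
    for every context. Each context-specific sub-rule again needs its own
    conditions of application, so the specification never bottoms out."""
    if depth == 0:
        return [rule]
    expanded = []
    for context in contexts:
        sub_rule = f"{rule} [when with {context}]"
        expanded.extend(fully_specify(sub_rule, contexts, depth - 1))
    return expanded

contexts = ["my son", "my partner", "a friend", "a student", "a stranger", "myself"]
for depth in range(1, 5):
    # 6, 36, 216, 1296 rules after each round of specification,
    # and the rules at the bottom are no more self-applying than "be kind" was
    print(depth, len(fully_specify("be kind", contexts, depth)))
```

Each round of specification multiplies the rule set by the number of contexts, and the leaves of the expansion still face exactly the interpretation problem the expansion was supposed to solve.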
And this was Wittgenstein's point. You can't ultimately get an explanation of how you follow rules in terms of just the rules. Your ability to follow rules is actually based on something else. Brown, in his 1988 book on rationality, calls this the skill of judgment. Notice where we've moved here. We've moved out of the propositional language of a rule, and we've moved into the procedural language of a skill: the skill of knowing how to judge what is relevant, pertinent, in this situation. Now, again, notice how we can't even maintain the two things that are supposed to be central to computation. We can't use inference, because it presupposes relevance. And we can't use rules, because they depend on this procedural skill of being able to determine what's appropriate, what fits in the context, what fits the people or the situation, what fits the problem or task at hand. Well, that's the skill of relevance realization. So we're seeing that the computational level isn't going to do it for us. I want to stop here, before we go to the modularity issue, and point out something really interesting. Notice what we got with Fodor and Wittgenstein, and like I said, I think this ultimately goes back to Aristotle. Notice how the propositional, and this is one of Wittgenstein's famous arguments, ultimately depends on the procedural. One of my favorite quotes from Wittgenstein has to do exactly with this. He said, "Even if lions could talk, we would not understand them.


Modularity (47:47)

What about modularity? Well, the idea would be something like this, and to be fair, this comes up a lot. Here's the mind or the brain, and here's something like the central executive. It's weird that we use a business term for an aspect of our cognition, but this is used in psychology. The idea is that the central executive is making all kinds of important decisions. Well, maybe the central executive is responsible for relevance realization. And a lot of people, and I know this because I interact with psychologists, say, "Oh, yeah, that's it. That's the answer." Well, it's not an answer. It's not an answer at all. Because if it's right, it's ridiculously homuncular. What does the central executive have to possess? Inside the central executive is a capacity for relevance realization. I haven't explained it; I've just pointed to a place. So first of all, I haven't explained it; it's homuncular. And secondly, you shouldn't point to a place. Look, relevance realization can't be in any one place. It has to be happening at multiple places simultaneously. You know this. We've talked about this with how attention works. Remember that you're always going from feature to gestalt and from gestalt to feature. Attention has to be moving out towards the gestalt and down to the features. Relevance realization has to be happening both at the feature level and the gestalt level in a highly integrated, interactive fashion. You can't point to one place and say that's where relevance realization is going on, because relevance realization has to be happening at multiple levels of cognition in a simultaneous, self-organizing fashion. That's why it can lead to insight. And as I said, pointing to any one thing and then labeling it is not an explanation. It is a homuncular diversion. That's all it is. Okay. Let's try and draw this all together. What are we learning?
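The structural point here, that relevance realization happens at multiple levels at once rather than in one executive place, can be given a deliberately crude sketch. This is a toy model of my own, not anything from the lecture, and `co_adjust` is an invented name: feature-level saliences and a gestalt-level salience repeatedly adjust each other, and the system settles into a fit without either level acting as a controller.

```python
def co_adjust(features, steps=100, rate=0.3):
    """Feature-level saliences and a gestalt-level salience update each
    other iteratively; neither level dictates to the other, yet the
    system settles into a mutually fitted state."""
    gestalt = sum(features) / len(features)
    for _ in range(steps):
        # features are pulled toward the current gestalt...
        features = [f + rate * (gestalt - f) for f in features]
        # ...and the gestalt is re-formed out of the updated features
        gestalt = sum(features) / len(features)
    return features, gestalt

features, gestalt = co_adjust([0.9, 0.2, 0.4, 0.1])
print(features, gestalt)
```

This captures none of the richness of self-organization or insight, of course; it only illustrates that a stable pattern can emerge from bidirectional interaction between levels rather than being issued from a single place.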
Well, I'm trying to show you that we're already learning something very interesting about meaning making. We're learning what we need, the kinds of properties and processes we need, in order to explain relevance realization. First of all (and bear with me on this, because there's an important way in which I'm going to modify it), our account of relevance realization has to be completely internal. Now what do I mean by that? It has to work in terms of goals that, at least initially, are internal to the brain and emerge developmentally from it. Look, any goal in which the brain is representing or referring to something in the world can't be the place where we can generate an explanation of relevance, because insofar as I'm representing a goal to myself, I've already got the capacity for relevance realization. The goals that are the originating source of relevance realization have to be internal to the relevance realization process.


Constitutive Goals (51:37)

Now what does that mean? The goals have to be goals that are constitutive. What are constitutive goals? Constitutive goals are goals that a system or process has that help to constitute it as being what it is. And this is especially the case for autopoietic systems. We've talked about this. Living things are not only self-organizing; living things are self-organized because they have the constitutive goal of preserving their own self-organization. To be alive is to have, or maybe even better, to be the goal of preserving the self-organization that is giving rise to you. That's a constitutive goal. Autopoietic things are self-organized such that they can protect and promote, they're constituted to protect and promote, their own self-organization. Which means we should see that there's going to be a deep connection between your ability to do relevance realization and being an autopoietic thing.


Relevance Realization (52:41)

Because relevance realization ultimately has to work in terms of autopoietic systems: systems that have goals that are completely internal in the constitutive sense. Now that's important, because it means there's going to be a deep connection between doing relevance realization and being a living thing. So when I say internal, I mean autopoietically internal. Next, our theory of relevance realization has to talk in terms of processes that are scale invariant. Relevance realization has to act simultaneously at multiple levels, local and global, feature and gestalt. And it has to do so in a self-organizing fashion, such that it is capable of insight, of self-correction. And that means, of course, and this ties in again with being autopoietic, that the relevance realization process has to be fundamentally self-organizing in nature. Okay. Now we hit a problem here. And it's a problem that might derail the whole project.

Romantic Fatalism

A Fatalism in Terms of Romanticism (54:07)

It might make it sound like the attempt to give a scientific explanation of relevance realization is impossible. Now notice I've been playing between two things and treating them as synonymous: a theory of relevance and a theory of relevance realization. That's ultimately because I've been dodging an issue, because I'm going to argue that we can't identify them. Here is what I want to argue. At least, I'm going to state what the argument is going to be, and then we're going to pick it up in the next video. I'm going to argue that we cannot have a scientific theory of relevance. I'm going to try and argue that that tells us something very deep about the nature of relevance, and therefore something deep about the nature of meaning and our attempts to explain, articulate and celebrate our meaning-making capacities. But I'm going to ultimately argue that that is no reason for despair. Because what I'm going to argue is that the fact that we can't have a theory of relevance doesn't preclude us from having a theory of relevance realization. In fact it will give us a good understanding of what a theory of relevance realization is. And that will help us, because we will realize, pun intended, that all we ever needed was a theory of relevance realization.


Relevance Realization (55:44)

Thank you very much for your time and attention.



