Ep. 32 - Awakening from the Meaning Crisis - RR in the Brain, Insight, and Consciousness | Transcription

Transcription for the video titled "Ep. 32 - Awakening from the Meaning Crisis - RR in the Brain, Insight, and Consciousness".



Introduction

Intro (00:00)

Welcome back to Awakening from the Meaning Crisis. So last time we were taking a look at the centrality of relevance realization: how many processes central to our intelligence, and possibly also to at least the functionality of our consciousness, presuppose, require, are dependent upon relevance realization. So we had gotten to a point where we saw how many things fed into this, and then made the argument that it is probably, at some fundamental level, a unified phenomenon, because it comports well with the phenomenon of general intelligence, which is a very robust and reliable finding about human beings. And then I proposed to you that what we need to do is two things.


Deep Dives Into Various Concepts

Naturalizing Relevance Realization and Explaining Spirituality (00:58)

We need to try and give a naturalistic account of this, and then show if we have naturalized this, can we then use it in an elegant manner to explain a lot of the central features of human spirituality. And I already indicated in the last lecture how some of that was already being strongly suggested.


Self-Transcendence Through Complexification (01:32)

We got an account of self-transcendence that comes out of dynamic emergence that is being created by the ongoing complexification, and this has to do with the very nature of relevance realization as this ongoing, evolving fittedness of your sensory motor loop to its environment under the virtual engineering of bioeconomic, logistical constraints: efficiency, which tends to compress, integrate and assimilate, and resiliency, which tends to particularize and differentiate. And when those are happening in such a dynamically coupled and integrated fashion within an ongoing opponent processing, then you get complexification that produces self-transcendence.


An Exemplary Argument, Not a Comprehensive One (02:19)

But of course much more is needed. Now, I would like to proceed to address this. Now, I can't do this comprehensively, not in a way that would satisfy everybody who is potentially watching this. This is very difficult because there are aspects of this argument that would get incredibly technical. Also, to make the argument comprehensive is beyond what I think I have time to do here today. I'll put up notes for things that you can read; I'll point you to them if you want to go into it more deeply. What I want to do is try to give an exemplary argument, an argument by example, of how you could try and bring about a plausible naturalistic account of relevance realization.


Why Talk About the Brain Without Simplistic Reductionism (03:05)

Now we've gone a long way towards doing that, because we've already got this worked out in terms of information processing. But could we see these processes potentially realized in the brain? And one more time I want to advertise for the brain. I understand why people want to resist the pull of a simplistic reductionism, that human beings are nothing but their brain. That's a very bad way of talking. That's like saying a table is nothing but its atoms. That doesn't ultimately make any sense. It's also the structural functional organization of the atoms.


The Brain Is Dynamic, Self-Creating, Plastic, and Capable of Very Significant Qualitative Development (03:57)

It's the way that structural functional organization interacts with the world, how it unfolds through time. So simplistic reductionism should definitely be questioned. On the other hand, we also have to appreciate how incredibly complex, dynamic, self-creating, plastic, and capable of very significant qualitative development the brain actually is. So, I propose to you that one aspect of relevance realization, the aspect that has to do with trading between being able to generalize and specialize, is, as I've argued, a system going through compression, remember, that's something like what you're doing with a line of best fit, and particularization, when your function is more tightly fitted to the contextually specific data set. And again, this gives you efficiency, this gives you resiliency; this tends to integrate and assimilate, this tends to differentiate and accommodate. Okay, so try to keep that in mind.
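A minimal sketch (my own illustration, not from the lecture, assuming NumPy is available) of this compression/particularization trade-off: a low-degree fit compresses and generalizes across contexts, while a high-degree fit particularizes to the specific data set and transfers more poorly.

```python
# Sketch: "compression" as a low-degree fit that generalizes, vs.
# "particularization" as a high-degree fit that tracks this data set tightly
# but generalizes worse. Not the lecture's own example.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)       # fit a polynomial "frame"
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # Low degree = compressed and general (efficient); high degree = particularized
    # to this contextually specific data set.
    print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")
```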


Trying To Show That There Is A Plausible Naturalistic Account Of Relevance Realization (05:02)

Now what I want to try and do is argue that there is suggestive evidence. It's by no means definitive, and I want it to be clearly understood that I am not proposing to prove anything here. That's not my endeavor. My endeavor is to show that there is suggestive evidence for something, and all I need is that that makes it plausible that there will be a way to empirically explain relevance realization. So, let's talk about what this looks like. There's increasing evidence that when neurons fire in synchrony together, they're doing something like compression. So if you give, for example, somebody a picture that they can't quite make out, and you're looking at how the brain is firing, the areas of the visual cortex, for example, if it's a visual picture, are firing sort of asynchronously. And then when the person gets it and goes aha, you get large areas of the brain firing in synchrony together.
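A toy illustration (mine, not the cited experiments) of why synchronous firing behaves like compression: when many units share one phase, a single component summarizes almost all of the population's activity, whereas asynchronous activity needs many components to describe.

```python
# Sketch: synchrony as compression. Synchronous oscillators can be summarized
# by one principal component; asynchronous ones cannot.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
n_units = 50

def population(synchronous: bool) -> np.ndarray:
    if synchronous:
        return np.stack([np.sin(2 * np.pi * 10 * t) for _ in range(n_units)])
    freqs = rng.uniform(5, 40, n_units)
    phases = rng.uniform(0, 2 * np.pi, n_units)
    return np.stack([np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases)])

for label, sync in (("synchronous", True), ("asynchronous", False)):
    X = population(sync)
    # Fraction of total variance captured by the single best component (PCA via SVD).
    _, s, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
    explained = s[0] ** 2 / np.sum(s ** 2)
    print(f"{label}: first component explains {explained:.0%} of the variance")
```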


When People Are Cooperating In Joint Attention And Joint Activity, There's This Process Happening (06:01)

Interestingly, there's even increasing evidence that when human beings are cooperating in joint attention and joint activity, their brains are getting into patterns of synchrony. So this is very largely scale invariant; what that means is that at many levels of analysis you will see this process happening. Why is that important? Well, if you remember, relevance realization has to be something that's happening very locally and very globally. It has to be happening pervasively throughout all of your cognitive processing. So the fact that this process I'm describing is also scale invariant in the brain is impressive: it suggests that it could be implementing relevance realization. Okay, now what happens is, at many levels of analysis, you have this pattern where neurons are firing in synchrony and then they become asynchronous, and then they fire in synchrony and then they become asynchronous, and they're doing this in a rapidly oscillating manner. So this is an instance of what's called self-organizing criticality. It's a particular kind of opponent processing, a particular kind of self-organization. So we're getting more precision in our account of the self-organizing nature, potentially, of relevance realization. Okay, so let's talk a little bit about this first and then we'll come back to its particular instantiation in the brain. Self-organizing criticality goes originally back to the work of Per Bak.


When Grains of Sand Fall Under the Forces of Gravity and Friction (08:10)

So, let's say you have grains of sand falling, like in an hourglass, and initially it's random, well, random from our point of view: within a zone, individual grains will end up somewhere in that zone. We don't know where, because they'll bounce and all that. But over time, what happens, because there's a virtual engine there, friction and gravity, but also the bounce: the bouncing introduces variation, the friction and the gravity put on constraint. And what happens is the sand grains self-organize. There's no little elf that runs in and shapes the sand into a mound; it self-organizes into a mound like that. And it keeps doing this and keeps doing this. Now, at some point, it enters a critical phase. Criticality means the system is close to potentially breaking down. See, when it's self-organized like this, it demonstrates a high degree of order. Order means that as this mound takes shape, the position of any one grain of sand gives me a lot of information about where the other grains are likely to be; because they're so tightly organized, it's highly ordered. But then what happens is that order breaks down, and you get an avalanche. It avalanches down. And if this is too great, if the criticality becomes too great, the system will collapse. And so there are people that argue that civilizations collapse due to what's called general systems failure, which is that these entropic forces are actually overwhelming the structure of the system and the system just collapses.
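A minimal sketch of the classic model behind Per Bak's self-organizing criticality, the Bak-Tang-Wiesenfeld sandpile (my own toy implementation, not anything shown in the lecture): grains are dropped one at a time, and when a cell exceeds a threshold it topples onto its neighbours, sometimes triggering avalanches of wildly different sizes.

```python
# Sketch: Bak-Tang-Wiesenfeld sandpile. Dropping grains (variation) under a
# toppling rule (constraint) drives the system toward a critical state with
# avalanches of many sizes.
import numpy as np

rng = np.random.default_rng(2)
size, threshold = 20, 4
grid = np.zeros((size, size), dtype=int)
avalanche_sizes = []

for _ in range(5000):
    i, j = rng.integers(0, size, 2)   # drop one grain at a random site
    grid[i, j] += 1
    topples = 0
    # Relax: any site at or over threshold sheds grains to its four neighbours.
    while (unstable := np.argwhere(grid >= threshold)).size:
        for x, y in unstable:
            grid[x, y] -= threshold
            topples += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx_, ny_ = x + dx, y + dy
                if 0 <= nx_ < size and 0 <= ny_ < size:
                    grid[nx_, ny_] += 1   # grains at the edge simply fall off
    avalanche_sizes.append(topples)

# Near criticality, avalanche sizes span many scales (a heavy-tailed distribution).
print("largest avalanche:", max(avalanche_sizes), "topples;",
      "median:", int(np.median(avalanche_sizes)))
```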


Criticality Can Overwhelm The System At Any Point And What You See Within The Brain Is That They Self-Organize And Reorganize (09:50)

So collapse is a possibility with criticality. However, what can happen is the following. The sand spreads out due to the avalanche, and that introduces variation, important changes in the structural functional organization of the sand mound. Because now what happens is there's a bigger base, and what that means is a new mound forms, and it can go much higher than the previous mound. It has an emergent capacity that didn't exist in the previous system. And then it cycles like this. It cycles like this. Now, at any point, again, there's no telos to this; at any point, the criticality can overwhelm the system and it can collapse. At any point, the criticality within you can overwhelm the system and you can die, right? But what you see is the brain cycling in this manner, self-organizing criticality. The neurons synchronize together, that's like the mound forming, and then they go asynchronous. This is sometimes even called a neural avalanche, right? And then they reconfigure into a new synchrony, and then they go asynchronous. So do you see what's happening here? The brain is oscillating like this, and what it's doing with self-organizing criticality is data compression, and then it does a neural avalanche, which opens up, introduces variation into the system, which allows a new structure to reconfigure, one that is momentarily fitted to the situation.


Fittedness, Evolving Moment by Moment (11:47)

It breaks up, right? Now, do you see what it's doing? It's constantly, moment by moment, and this is happening in milliseconds, evolving its fittedness. It's complexifying its structural functional organization. It is doing compression and particularization, which means it's constantly, moment by moment, evolving its sensory motor fittedness to the environment. It's doing relevance realization, I would argue. Now, what does that mean? Well, one thing we should be careful of. When I'm doing this, again, I'm using words and gestures to convey it, and that makes sense. But what you have to understand is that this is happening at a myriad of levels. There's this self-organizing criticality doing this fittedness at this level, and it's interacting with another one doing it at this level, all the way up to the whole brain, all the way down to individual sets of neurons. So this is a highly recursive, highly complex, very dynamic, evolving fittedness. And I would argue that it is thereby implementing relevance realization. There is some evidence to support this. So Thatcher et al. did some important work in 2008 and 2009 pointing towards it. So here's the argument I'm making: I'm making the argument that RR can be implemented by this. It's not completely identical to it, because you remember there's also exploration and exploitation, but it can be implemented by this. And I've also, last time, made the argument that relevance realization is your general intelligence. If this is correct, then we should see measurable relationships between these two. Of course, we've known how to measure general intelligence psychometrically for a very long time, and now we're getting ways of measuring this in the brain. And what Thatcher et al. found was exactly that: a strong relationship between measures of self-organization and how intelligent you are. Specifically, what they found was the more flexibility there is in this, the more intelligent you are. The more it demonstrates a kind of dynamic evolvability, the more intelligent you are. Is this a conclusive thing? No. There's lots of controversy around this, and I don't want to misrepresent that. However, I would point out that there was a very good article by Hesse and Gross in 2014 doing a comprehensive review of the application of self-organized criticality as a fundamental property of neural systems. And they, I think, made a very good case that it's highly plausible that self-organizing criticality is functional in the brain in a fundamental way, and that lines up, it's convergent, with this. So what we've got is the possibility, and I'm hesitant here, because by drawing out the implications I don't want to thereby say that this has been proven; I'm not saying that. But remember the if: if this is right, it has important implications. It says that we may be able to move from psychometric measures of intelligence to direct measures in the brain, much more, in that sense, objective measures. Secondly, if this is on the right track, it will feed back, remember, a lot of these ideas were derived from emerging features of artificial intelligence.


What network theory can explain (15:50)

If this is right, it may help feed back into this and help develop artificial intelligence. So there's a lot of potential here, for both good and ill. I'm hoping, if you'll allow me a brief aside, I'm hoping by this project that I'm engaged in, to link, as much as I can, and the people that I work with can, you know, my lab and my colleagues, this emergent scientific understanding very tightly to the spiritual project of addressing the meaning crisis, rather than letting it just run rampant willy-nilly. All right, so if you'll allow me, that's a way in which we could give a naturalistic account of RR in terms of how neurons are firing. These are firing patterns. Now, I need another scale invariant thing, but I need it to deal with not how neurons are firing, but how they're wiring: what kinds of networks they're forming. I'm not particularly happy with the wiring metaphor, but it has become pervasive in our culture, and it's mnemonically useful because firing and wiring rhyme. So, again, there is a fairly new way of thinking about how we can look at networks. It's called graph theory or network theory. It's gotten very complex in a very short amount of time, so I want to do just the core basic idea with you: that there are three kinds of networks. All right? And this is neutral. This doesn't mean just networks in the brain. It can mean networks like how the internet is a network. It could mean how an airline is a network, a railway system, et cetera. This analysis, this theoretical machinery, is applicable to all kinds of networks, which is part of its power. So, you want to talk about nodes, these are things that are connected, and then you have connections. So, I'm drawing two connections here; this isn't a single thick one, these are two individual connections. And there's roughly the same number of connections and nodes for each network. So, there are three kinds of networks. This is called a regular network. It's regular because all of the connections are short-distance connections. And you'll notice that there's a lot of redundancy in this network; everything is doubly connected. This is called a random or a chaotic network. It's a mixture of short and long connections. And then this is called a small world network. This comes from the Disney song, "It's a small world after all," because this was originally discovered by Milgram when he was studying patterns of social connectedness: it's a small world after all. All right. Now, originally people were just talking about these; they understood that these are just names for broad families of different kinds of networks that can be analyzed into many different subspecies. And I won't get into that detail because I'm just trying to make an overarching core argument. So, remember I said that this network has a lot of redundancy in it? And that's really important, because that means that this network is terrifically resilient. I can do a lot of damage to this network and no node gets isolated; nothing falls out of communication. It's tremendously resilient, very resilient. But you pay a price for all that redundancy: this is actually a very inefficient network. Now, your brain might trick you, because that looks so well ordered. It looks like a nice clean room, and clean rooms look like they're really highly ordered.
And you think, oh, this must be the most efficient, because cleanliness is orderliness and orderliness is efficiency. You can't let that mislead you. You actually measure how efficient a network is by calculating what's called the mean path distance. I calculate the number of steps between all the pairs of nodes. So how many steps do I have to go through to get from here to here? One, two. How many do I have to go through to get from here to here? One, two, three, four. I do that for all the pairs and then I take the average. And the mean path distance measures how efficient your network is at communicating information.
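A minimal sketch (my own toy example) of the procedure just described: count the steps between every pair of nodes and average them. A ring lattice with only short-distance links (a "regular" network) is compared with the same ring plus a few long-range shortcuts, which slashes the mean path distance.

```python
# Sketch: mean path distance = average shortest-path length over all node pairs.
from collections import deque
from itertools import combinations

def mean_path_distance(adj: dict[int, set[int]]) -> float:
    def bfs_dist(src: int, dst: int) -> int:
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            if node == dst:
                return d
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append((nb, d + 1))
        raise ValueError("disconnected graph")
    pairs = list(combinations(adj, 2))
    return sum(bfs_dist(a, b) for a, b in pairs) / len(pairs)

n = 20
ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}   # regular: short links only
shortcut = {i: set(nbs) for i, nbs in ring.items()}
for a, b in [(0, 10), (3, 13), (6, 16)]:                   # add a few long-range links
    shortcut[a].add(b); shortcut[b].add(a)

print("regular ring:       ", round(mean_path_distance(ring), 2))
print("ring with shortcuts:", round(mean_path_distance(shortcut), 2))
```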


Mean path distance paradox (20:39)

These regular networks have a very, very high mean path distance, so they're very inefficient. You pay a price for all that redundancy. And that's, of course, because redundancy and efficiency are in a trade-off relationship. Now, here's where your brain might trick you again: this random network looks so messy. Right? This is so messy. Well, it turns out this is actually efficient. It's actually very efficient, because it has so many long-distance connections. It has a very low mean path distance. But because they're in a trade-off relationship, it's not resilient. It's very poor in resiliency.


Network Theory & Consciousness

The trade-off in efficiency and resilience (21:23)

Right? So, notice what we're getting here. These networks are being constrained in their functionality by the trade-off in the bioeconomics of efficiency and resiliency. Markus Brede has mathematical proofs about this in his work on network configuration. Now, what about this one, the small world network? Well, it's more efficient than the regular network, but less efficient than the random network. And it's more resilient than the random network, but less resilient than the regular network. But you know what it is? It's optimal. It gets the optimal amount of both. It optimizes for efficiency and resiliency. Now, that's interesting, because that would mean that if your brain is doing relevance realization by trading between efficiency and resiliency, it's going to tend to generate small world networks. And not only that, the small world networks are going to be associated with the highest functionality in your brain. And there's increasing evidence that this is in fact the case. In fact, there was research done by Langer et al. in 2012 that did a similar thing to what Thatcher did. So here we've got this again: RR is g. And it looks like RR might be implemented by this.
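A minimal sketch (my illustration, assuming networkx is installed) of the three network types via the Watts-Strogatz model, which sweeps from a regular ring lattice (rewiring probability 0) to a random network (probability 1). Here mean path length stands in for efficiency, and clustering is used as a rough proxy for the redundancy that buys resiliency; the small world regime gets close to the best of both.

```python
# Sketch: regular vs. small world vs. random, compared on path length (efficiency)
# and clustering (a proxy for redundancy/resiliency).
import networkx as nx

for p, label in [(0.0, "regular"), (0.1, "small world"), (1.0, "random")]:
    G = nx.connected_watts_strogatz_graph(n=200, k=6, p=p, seed=42)
    path = nx.average_shortest_path_length(G)   # lower = more efficient
    clustering = nx.average_clustering(G)       # higher = more redundant/resilient
    print(f"{label:12s} p={p:.1f}  mean path {path:5.2f}  clustering {clustering:.2f}")
```

The classic result is that a small amount of rewiring already drops the mean path close to the random level while keeping most of the lattice's clustering, which is the sense in which it "optimizes for both".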


RR networks and small world networks (23:25)

This is what I'm putting here: small world networks. That's these guys, small world networks. And what they found is a relationship between these: the more your brain is wired like this, the better your intelligence. Again, is this conclusive? No. Still controversial. That's precisely why it's cutting edge. However, increasingly we're finding that these kinds of patterns of organization make sense. Remember, Markus Brede was doing work just looking at artificial networks, neural networks, and you want to optimize between these. So you're getting design arguments out of artificial intelligence, and you're starting to get these arguments emerging out of neuroscience. Interestingly, Langer et al. did a second experiment in 2013: when you put extra effort, task demands, on working memory, you see that working memory becomes even more organized like a small world network. Hilger et al. in 2016 found that there was a specific kind of small world network having to do with efficient hubs. The article is entitled "Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is correlated with general intelligence." So what seems to be going on is, again, suggestive, not conclusive: you've got the Langer work on working memory, it goes more like this, and then you've got this very sophisticated kind, a species of this, in recent research correlated with the salience network in the brain. Do you see that? As your brain is moving to a specific species of this within the salience network, you become more intelligent. And the salience network is precisely that network by which things are salient to you, stand out for you, grab your attention. One more time: is this conclusive? No. I'm presenting to you stuff that's literally happening in the last two or three years. And, as there should be, there's tremendous controversy in science. However, this is what I'm pretty confident of: that that controversy is progressive. It's ongoing. It's getting better and better. Such that it is plausible that we will be able to increasingly explain, and it will be increasingly convergent with the ongoing progress in artificial intelligence, relevance realization in terms of the firing and the wiring. Remember, the firing is self-organizing criticality and the wiring is small world networks. And here's something else that's really suggestive: the more a system fires this way, the more it wires this way.


Small world networks (26:26)

So if a system is firing in a self-organizing critical fashion, it will tend to network as a small world network. The more it wires this way, the more it is wired like a small world network, the more likely it will tend to fire in this pattern. These two things mutually reinforce each other's development. So let's try and put this all together. I want you to... really, I mean, it's hard to grok this, I get that. But remember, this is happening in a scale invariant, massively recursive, complex self-organizing fashion. This is also happening, scale invariant, in a very complex, self-organizing, recursive fashion. And the two are deeply interpenetrating and affording and affecting each other in ways that have to do directly with engineering the evolving fittedness of your salience, of your salience realization and your relevance realization, within your sensory motor interaction with the world. This is, I think, strongly suggestive that this is going to be given a completely naturalistic explanation. Okay. Notice what I'm doing here. We're getting a theoretical structural functional organization for how this can operate. So we got, the last couple of times, this strong convergence argument to this. We have a naturalistic account of this, at least the rational promise that it is going to be forthcoming. And then we're getting an idea of how we can get a structural functional organization of this in terms of firing and wiring machinery. Now, like I said, this is both very exciting and potentially scary, because it does carry with it the real potential to give a natural explanation of the fundamental guts of our intelligence. I want to go a little bit further and suggest that not only may this help to give us a naturalistic account of general intelligence, it may point towards a naturalistic account, at least of the functionality, but perhaps also of some of the phenomenology, of consciousness. This, again, is even more controversial. But, again, my endeavor here is not to convince you that this is the final account or theory. It's to make plausible the possibility of a naturalistic explanation. Okay. So let's remember a couple of things. There's a deep relationship between consciousness, remember the global workspace theory, the functionality, and that overlaps a lot with working memory. This is global workspace theory, so that should be a T, global workspace theory, working memory. And we already know that there are important overlaps in the functional areas that have to do with general intelligence, working memory, attention, salience. And also, that measures of this and measures of the functionality of this are highly correlated with each other. That's now pretty well established. We also know from Lynn Hasher's work that this is doing relevance realization. Do you remember? I also gave you the argument, when we talked about the functionality of consciousness, that many of the best accounts of the function of consciousness are that it's doing relevance realization. And so this should all hang together. This should all hang together such that the machinery of intelligence and the functionality of consciousness should be deeply integrated together in terms of relevance realization. We do know that there seem to be some important relationships between consciousness and self-organizing criticality.
This has to do with the work of Cosmelli et al. and others, ongoing. Their work was in 2004. They did what's called a binocular rivalry experiment.


Consciousness and self-organizing criticality (31:14)

Basically, you present two images to somebody, and they're positioned in such a way that they are going to the different visual fields, and they compete with each other because of their design. Say it's a triangle and a cross. What people will have experientially is: I'm seeing a cross, oh, now I'm seeing a triangle; I'm seeing a cross, and now I'm seeing a triangle. And that kind of thing isn't obscure to you. Think of the Necker cube. When you watch the Necker cube, it flips. So this can be the front and it's going back this way, or you can flip it and see it the other way: this is the front and it goes that way. So you are constantly flipping between these and you can't see them both at the same time. That's what binocular rivalry is. In the experiment, though, you do this in a more controlled way: you present the images to different visual fields, so to different areas of the brain. And what you can see is, what happens when the person is seeing the triangle? Well, one part of the brain goes into synchrony. And then, as soon as the triangle experience goes, that part goes asynchronous, and the other part of the brain, the one that's picking up on the cross, because that's a different area of the brain, it's more basic, that goes into synchrony. And what you can see is, as the person flips back and forth in experience, different areas of the brain are going into synchrony or asynchrony. So that is suggestive of a relationship between consciousness and self-organizing criticality. Again, suggestive. But we've already got independent evidence, a lot of convergent evidence, that the functionality of consciousness is to do relevance realization, which explains its strong correlation, via working memory, with measures of general intelligence. And we know that this is plausibly associated with self-organizing criticality. So again, convincing? No. Suggestively convergent? Yes. So there's another set of experiments, done by Monti et al. in 2013. And what you're basically doing is giving people a general anesthetic, and then observing their brain as they pass out of consciousness and back into consciousness. And what did they find? They found that as the brain passes out of consciousness, it loses its overall structure as a small world network and breaks down into more local networks. And then as it returns into consciousness, it goes into a small world network formation again. So consciousness seems to be strongly associated with the degree to which the brain is wiring as a small world network. Now I want to try and bring these together in a more concrete instance where you can see the intelligence, the consciousness, and this dynamic process of self-organization all at work. And I'm going to bring it back to the machinery of insight. The machinery of insight. So if you remember, we talked about this when we talked about the use of disruptive strategies, and we talked about the work of Stephen and Dixon. And remember what they found: a very sophisticated, but nevertheless very reliable, way of measuring how much entropy is in people's processing when they're trying to solve an insight problem. Remember, they were tracing through the gear figures, and what they found is that entropy goes up right before the insight and then it drops, and the brain becomes even, sorry, the behavior, that was a mistake on my part, the behavior becomes even more organized.
So that's plausibly an instance of self-organizing criticality: what's happening is you're getting the neural avalanche, it's breaking up, and then that allows a restructuring, which goes with the restructuring of the problem. Remember, you're breaking frame with the neural avalanche and then you're making frame, like the new mound, as you restructure your problem framing, and you get the insight and you get a solution to your problem.


Small world networks and insight (36:28)

Now, interestingly enough, Schilling has a mathematical model from 2005 linking insight to small world networks. She argues quite persuasively, and this is very interesting, that what you can see happening in an insight is that people's information is initially organized in a regular network. Just think about that intuitively: my information is integrated here, local organization, a regular network, local organization, so the whole thing is a regular network. But what can happen is, here's my regular network, I'll make that a little more clear, here's my regular network, and what happens is a long distance connection forms. So my regular network is suddenly altered into a small world network, which means I lose some resiliency, but I gain a massive spike in efficiency; I suddenly get more powerful. So insight is when a regular network is being converted into a small world network, because that means this is a process of optimization, right, because remember, this is more optimal than this. And you can see that in how people's information is organized in an insight. Think about how metaphor affords insight: you take two domains, "Sam is a pig," and you suddenly get this connection between them, and those two regular networks are now coalesced into a small world network. Okay, that's great.
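A minimal sketch of that idea (my own illustration, not Schilling's actual model, assuming networkx is installed): two separately organized "domains" of information, each a regular ring lattice, gain a single long-distance connection, the metaphorical "Sam is a pig" link, and the combined network's efficiency jumps.

```python
# Sketch: one long-range link coalesces two regular networks and spikes efficiency.
import networkx as nx

domain_a = nx.watts_strogatz_graph(n=30, k=4, p=0.0, seed=1)   # regular lattice
domain_b = nx.watts_strogatz_graph(n=30, k=4, p=0.0, seed=2)   # another regular lattice
G = nx.disjoint_union(domain_a, domain_b)                      # nodes 0-29 and 30-59

print("before the connection:    ", round(nx.global_efficiency(G), 3))
G.add_edge(0, 30)                                              # one long-distance link
print("after one long-range link:", round(nx.global_efficiency(G), 3))
```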


Flow is optimization, fitting, and salience (38:35)

So, in some of the work I've done with other people, I've been suggesting, because of this, the following: that what happens in insight, à la Stephen and Dixon, is you get self-organizing criticality, and that self-organizing criticality breaks up a regular network and converts it into a small world network. So what you're getting is, suddenly, a sudden enhancement, an increased optimization, of your relevance realization. And what's it accompanied with? It's accompanied with a flash of salience. Remember, and then that can be extended in the flow experience. You're getting an alteration of consciousness, an alteration of your intelligence, an optimization of your fittedness to the problem space. Okay, I'm going to say this again: I'm trying to give you stuff that makes this plausible. I'm sure that in its specifics it's going to turn out to be false, because that's how science works, but that's not what I need right now. What I've tried to show you is how progressive the project of naturalizing this is, and how much is converging towards it, such that it is plausible that this will be something that we can scientifically explain, and more than scientifically explain, that we'll be able to create, as we create autonomous artificial general intelligence. Okay, let's return. If I've at least made it plausible that there's a deep connection between relevance realization and consciousness, I want to try and point out some aspects of relevance realization and why it is creating a tremendously textured, dynamically flowing salience landscape.


Exploration Of Consciousness And Philosophical Concepts

Relevance Realization (40:47)

Okay, so remember how relevance realization is happening at multiple interacting levels? So we can think about this, right? You're just getting features that are getting picked up, remember the multiple object tracking, so basic salience assignment, right? And this is based on work originally from Matson in 1976, his book on Sentience, I've mentioned that before, and then some work that I did with Jeff Marshman and Steve Pierce, and then later work that I did with Anderson Todd and Richard Will. The featurization is feeding up into foregrounding and feeding back, right? So there's a bunch of features, and then presumably I'm foregrounded and other stuff is backgrounded, right? This then feeds up into figuration. You're configuring me together and figuring me out, think of that language, right? So that I have a structural functional organization, I'm aspectualized for you, and that's feeding back, and of course there's feedback down to here. And then that, of course, feeds up into framing, how you're framing your problems, and we've talked a lot about that, and that feeds back, right? So you've got all of this happening, and it's giving you this very dynamic and textured salience landscape. And then you have to think about how that's the core machinery of your perspectival knowing. Notice what I'm suggesting to you here. You've got the relevance realization that is the core machinery of your participatory knowing. It's how you are getting coupled to the world so that co-evolution, reciprocal realization, can occur. That's your participatory knowing. This feeds up to, and feeds back from, your salience landscaping. This is your perspectival knowing. This is what gives you your dynamic situational awareness, this textured salience landscaping. This, of course, is going to, and we'll talk more about that, open up an affordance landscape for you. Certain connections, certain affordances, are going to become obvious to you. And you say, oh man, this is so abstract. Look, this is how people are trying to wrestle with this now. Here's an article from Frontiers in Human Neuroscience: "Self-organization, free energy minimization, and optimal grip on a field of affordances." Free energy minimization, that's Friston's work, and it has to do, ultimately, with getting your processing as efficient as possible. And "optimal grip on a field of affordances," using all of this language that I'm using with you right now. That's by Bruineberg and Rietveld, Frontiers in Human Neuroscience, 2014. Just one example among many. Okay, so this is feeding up, and what it's basically giving you is affordance obviation: certain affordances are being selected and made obvious to you. That, of course, is going to be the basis of your procedural knowing, knowing how to interact. And I think there might be a way in which that more directly interacts here, maybe through kinds of implicit learning, but I'm not going to go into that. We'll come back later to how propositional knowing relates to all of this. Okay, I'm putting it aside, because this is where we do most of our talking about consciousness, with this, I think, at the core: the perspectival knowing. But it's the perspectival knowing that's grounded in our participatory knowing, and it's the perspectival knowing, look, your situational awareness that obviates affordances, that is what you need in order to train your skills. That's how you train your skills.
And we know that consciousness is about doing this higher order relevance realization, because that's what this is. This is higher order relevance realization that affords you solving your problems. Okay, so I mean all of this when I'm talking about your salience landscaping. I'm talking about it as the nexus between your relevance realization, participatory knowing, and your affordance obviation, procedural knowing, your skill development, right? And the perspectival knowing at the core, and then what's happening in here is this. If that's the case, then you can think of your salience landscape as having at least three dimensions to it. Right.


Dimensions of Saliency (46:25)

So one is pretty obvious to you, which is the aspectuality. Your salience landscape, as I said, is aspectualizing things. Okay, so the features are being foregrounded and configured, and they're being framed. So this is a marker; it is aspectualized. Remember, whenever I'm representing or categorizing it, I'm not capturing all of its properties, I'm just capturing an aspect. So this is aspectualized. Everything is aspectualized for me. Right? There's another dimension here of centrality. I'll come back to this later, but this has to do with the way relevance realization works. Relevance realization is ultimately grounded in how things are relevant to you. Literally, literally how they are important to you. You import them. How they are constitutive. At some level, the sensory motor stuff is to get stuff that you literally need to import materially. And then, at a higher level, you literally need to import information to be constitutive of your cognition. We'll come back to that transition later. But what you have is: the perspectival knowing is doing aspectuality, and then everything is centered. It's not non-valenced; it's vectored onto me. And then it has temporality, because this is a dynamic process of ongoing evolution. Timing: small differences in time make huge impacts, huge differences, in such dynamical processing. Kairos is really, really central. When you're intervening in these very complex, massively recursive, dynamically coupled systems, small variations can unexpectedly have major changes. So things have a central relevance in terms of their timing, not just their place in time. So think of your salience landscape as unfolding in these three dimensions of aspectuality, centrality, and temporality. There's an acronym here: ACT. This is an enacted kind of perspectival knowing. All right, so you've got consciousness, and what it's doing for me functionally is all of this. But what it's doing in that functionality is all of this. And what that's giving me is perspectival knowing, that's grounded in participatory knowing, that affords procedural training, and it has a salience landscape that has aspectuality, centrality, and temporality. Look at what it has. Centrality is the hereness: my consciousness is here because it is indexed on me. Of course it has nowness, because timing is central to it (yes, that was intended). And it has togetherness, unity, how everything fits together. Though I don't want to say unity, because unity makes it sound like there's a single thing; there's a oneness to your consciousness, it's all together. You have the hereness, the nowness, the togetherness, the salience, the perspectival knowing, how it is centered on you. A lot of the phenomenology of your consciousness is explained along with the functionality of your consciousness. A complete account? No. But it's a lot of what your consciousness does and is. So, I would argue that at least what that gives us is the account that we're going to need for the right hand side of the diagram: why altering states of consciousness can have such a profound effect on you, reaching down into your identity, up into your agency; why it could be linked to things like a profound sense of insight. We've talked about this before, when we talked about higher states of consciousness. How it can feel like a dramatic coupling to your environment.
That's that participatory coupling that we found in flow. This all, I think, hangs together extremely well, which means it looks like I have the machinery I need to talk about that right hand side of the diagram. Before I do that, I want to make a couple of important points, to remind you of things. Relevance realization is not cold calculation. It is always about how your body is making risky, affect-laden choices of what to do with its precious but limited cognitive and metabolic and temporal resources.


Consciousness (52:02)

Relevance realization is deeply, deeply, always, and think about how this also connects to this and to consciousness, it's always, always an aspect of caring. That's what Read Montague, the neuroscientist, argues in his book Your Brain Is (Almost) Perfect: that what makes us fundamentally different from computers, because we are in the finitary predicament, is that we care about our information processing, and care about the information processed therein. So this is always affect-laden. Things are salient; they're catching your attention. They're arousing; they're changing your level of arousal. Remember that arousal is an ongoing, evolving part of this, right? And they are constantly creating affect, motivation moving you, emotion moving you, towards action. You have to hear how, at the guts of consciousness and intelligence, there is also caring. That's very important, right? That's very important. Because that brings back, I think, a central notion, and I know many of you are wondering why I haven't spoken about him yet, but I'm going to speak about him later, right?


Heidegger's Caring (53:18)

From Heidegger: that at the core of our being in the world is a foundational kind of caring. And this connection I'm making is not far-fetched. Look at somebody deeply influenced by Heidegger who is central to the third generation of 4E cognitive science. That's the work of Dreyfus and others, and Dreyfus has a lot of important history in reminding us that our knowing is not just propositional knowing. It's also procedural and, ultimately I think, perspectival and participatory. He doesn't quite use that language, but he points towards it. He talks a lot about optimal gripping and, importantly, if you take a look at his work Being-in-the-World, on Heidegger, when he's talking about things like caring, he's invoking, in central passages, the notion of relevance.


Dreyfus on Heidegger (54:10)

When he talked about what computers can't do, and later on what computers still can't do, what they're basically lacking is this Heideggerian machinery of caring, which he explicates in Being-in-the-World in terms of the ability to find things relevant. And this, of course, points again towards Heidegger's notion of Dasein, right? That our being in the world is, to use my language, inherently transjective, because all of this machinery is inherently transjective. And it is something that we do not make.


Conclusion

We and Our World Co-Emerge (54:48)

We and our intelligible world co-emerge from it. We participate in it. And I want to look more at what that means for our spirituality next time. Thank you very much for your time and attention.

