Transcription for the video titled "Ep. 41 - Awakening from the Meaning Crisis - What is Rationality?".
Welcome back to Awakening from the Meaning Crisis. So we are pursuing the cognitive science of wisdom because wisdom has always been associated with meaning, right from the axial revolution onward. So that's a deep reason. Wisdom, of course, is also important for the cultivation of enlightenment, the response to the perennial problems. It also plays a central role in being able to interpret our scientific worldview in a way that allows us to respond to the historical forces.
And so wisdom is very important. We continued to look at McKee and Barber, and we saw their convergence argument that the core of wisdom is the systematic seeing through of illusion and into what's real. And this is very much like: as the child is to the adult, the adult is to the sage. And then two other important aspects of it: that wisdom has much more to do with how you know than what you know, which means how you come to know it and also how you interpret the knowledge. And that wisdom is, in a related fashion, deeply perspectival and participatory. And that's why wisdom can be associated with important forms of pragmatic self-contradiction. We then noted the connection with overcoming self-deception in a systematic fashion, and the emphasis in wisdom on the process rather than just the products of knowing. And both of those took us into the work of Stanovich, because he famously argues that one of the hallmarks of rationality is valuing the process, not rather than, but in addition to valuing the products of our cognition. And that took us also into the discussion of rationality.
And Stanovich is a good bridge because, for him, the notions of rationality and ameliorating foolishness overlap very strongly. And we got into this notion, which I've been comprehensively arguing throughout this course, that rationality has to do with the reliable and systematic overcoming of self-deception, and the potential affording of flourishing, by some process of optimizing how we achieve our goals. With the caveat that as we try to optimize, we often change the goals that we are pursuing, one reason being that we come to more and more appreciate the value of the process as opposed to just the end result of the process. So, in order to pursue that and to deepen our notion of rationality, and thereby deepen our notion of wisdom (and of course, wisdom has been associated with rationality from the beginning, with Socrates and Plato and Aristotle), we took a look at the rationality debate.
We saw three examples, out of many possible examples, of experimental results that seem to show, very reliably (no replication crisis on this material), two things: that people acquiesce, they acknowledge and accept the authority of certain standards, principles of how they should reason, and yet they reliably fail to meet those standards. And so one possible interpretation of that, not the only interpretation, is that most people are irrational in nature. As I pointed out, because rationality is existential, and not just abstractly theoretical, concluding that people are irrational has important implications for their moral status, their political status, their legal status, even their developmental status. This is what I mean when I keep saying that rationality is deeply existential; it is not just theoretical. OK, so we took a look at the beginning of what's called the rationality debate, and good science always has good debate in it. The argument was made by Cohen that human beings can't be comprehensively irrational, because we have to ask this very pertinent question: where do the standards come from? And the argument is that the standards have to come from us. And how do we come up with a normative theory? We have to come up with it in a way that acknowledges that we're the source of our standards. And this is the idea that at the level of my competence, I contain all the standards. What I do in order to get my normative theory is I idealize away all my performance errors. And this takes time, takes a lot of reflection, until I get at the underlying competence. And then what I'm doing when I'm proposing a normative theory is I'm basically giving people this now explicated, excavated account of the competence that they possess, and then demanding from them that they do their best to reduce the performance errors and meet that competence. So at the level of our competence, we are fundamentally rational, and all of the mistakes that people are making in these experiments, according to Cohen, are just performance errors. And just like you do not think I've lost my English because of all of my performance errors (you basically dismiss them, read through them, and attribute to me the underlying competence), Cohen is arguing that we should read through and dismiss these experimental results, because the argument shows that people must have the underlying competence. OK, so how does Stanovich reply to this?
Well, in many places, but I think the best is Stanovich and West 2000, in Behavioral and Brain Sciences, because this is, for me, a gold standard of how you do really good cognitive science. The way they integrate philosophical and psychological argumentation together, for example, is really, really impressive. So Stanovich and West say: well, if Cohen is right, then all of the errors that people are making are performance errors. And Cohen is invoking the competence-performance distinction, which, of course, goes back to Chomsky and ultimately to Piaget. Now let's remember Piaget, because we've talked about this. How do you distinguish the child's speech deficit from the drunkard's? The drunkard is making performance errors, but we think the child's competence is not sufficiently developed. Why do we think that? Why do we think, when children are making all these conservation errors, that it reflects something about their competence at that point? Well, that's because errors that reflect a defect or a deficit in competence are systematic errors. That's precisely how Piaget did his work. That's precisely how you would determine, if I got brain damage and my sentences are broken, that it is not performance error: my errors would be systematic. Across different contexts, different times of day, different tasks, I'd be making these mistakes. Performance errors are not systematic. If my speech is broken because I'm rushing, it's not going to be broken when I'm not rushing. If my speech is broken when I'm tired, it's not going to be broken when I'm not tired. This is circumstantially driven, and as the circumstances change, the patterns of error will change and go all over the place. So these errors are not systematic in nature. Well, that means we have a reliable way of telling whether or not the errors that people are making in the experiments are performance errors. How do you see if errors are systematic? Errors are systematic if making this error is highly predictive that I'll make that error, and that error, and that error. So if a child is failing to show conservation in this task, that's predictive that they'll fail to show conservation in this task, and this task, and this task. This is the degree to which, and we've talked about this before, there's a positive manifold. Remember when we talked about general intelligence: the degree to which your performance on one task is predictive of how you'll do on other tasks. If the errors form a positive manifold across many different tasks, that's evidence that the errors are systematic rather than circumstantial. Okay, so that's easy enough.
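To make the positive-manifold test concrete, here is a minimal sketch in Python. The participant numbers and task names are hypothetical and purely illustrative; this is not Stanovich and West's actual dataset or analysis, just the shape of the reasoning.

```python
import numpy as np

# Hypothetical error rates (0 = no errors, 1 = all errors) for six
# participants on three reasoning tasks. Illustrative numbers only.
errors = {
    "critical_detachment": np.array([0.9, 0.8, 0.2, 0.7, 0.1, 0.3]),
    "belief_perseverance": np.array([0.8, 0.9, 0.1, 0.6, 0.2, 0.4]),
    "leaping_to_conclusions": np.array([0.7, 0.9, 0.2, 0.8, 0.1, 0.3]),
}

# If errors are systematic (a competence problem), error rates on one task
# should strongly predict error rates on the others: a positive manifold.
# If they are mere performance errors, these correlations should sit near zero.
tasks = list(errors)
for i, a in enumerate(tasks):
    for b in tasks[i + 1:]:
        r = np.corrcoef(errors[a], errors[b])[0, 1]
        print(f"{a} vs {b}: r = {r:.2f}")
```

With data like the above, all three pairwise correlations come out strongly positive, which is the signature of systematic error that the argument turns on.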
So then what we can do, and we've got all this data, is look at people doing these different experiments. In many experiments the same participant is doing multiple tasks. So what we can do is the following. We can see if, for example, when I make a failure in critical detachment, that means I'll also tend to show belief perseverance. Does it also mean that I'll tend to leap to the wrong conclusion when doing the task with the lily pads covering the pond? (That's the famous problem where a patch of lily pads doubles in size every day and covers the pond in 48 days; people leap to 24 as the answer for when it covers half the pond, when the answer is 47.) And the answer, and this is what there is overwhelming evidence for, is yes. The errors you make are systematic. And see, so now Stanovich and West say: ah. Cohen's argument predicts that the errors are performance errors; that's what he claims. That means that the errors should not be systematic. But what we find is overwhelming, convincing evidence that the errors people are making in all these tasks are systematically related together. So these are errors at the level of competence. And now you go: what? And this is what I mean about good debate; it makes something problematic. Because there's something right about Cohen's argument, and Stanovich acknowledges it. There's something right in that we have to be the source of our standards, and yet Cohen's conclusion that all the errors are performance errors is wrong. It's undeniably wrong. How do we put these together? Well, and you can see Stanovich and West doing this to varying degrees, you put this together by stepping back and looking at an assumption in Cohen's argument. It's an assumption about the competence. Cohen is assuming that the competence for rationality is a single competence.
He's assuming that the competence is static. He's also assuming that the competence is completely individualistic. Okay? I'll come back to that last one much later; I'm going to address the first two, because Stanovich really doesn't talk about the individualistic assumption. So remember it, and we'll come back to it later. Think about the platonic idea. What if I have two competences that are both working towards getting me to reliably achieve my goals and correct my own behavior, but these competences could actually conflict with each other? That would mean I would be the source of all the standards in the one and all the standards in the other. I'm the source of all the standards, but at the level of my competence I can still be generating error, because these two competences can actually be in conflict with each other. So one of the ways you start to resolve this debate, and this has become a central idea in cognitive psychology and cognitive science, is that we don't have a single competence; we have multiple competences. And what that means, and this is why, for example, uncritically assuming that you could reduce rationality to, and identify it with, the single competence of syllogistic reasoning is just fundamentally wrong. It's not paying attention to the science. Here's another issue. Cohen is assuming that your competence is full blown, it's finished, it's static, it's done. And I sort of slipped this in; notice the examples I used of small children. Their competence, for example, in English, or whatever language they're speaking (I happen to be speaking English, that's why I'm using it): the little girl, the two and a half year old, her competence is not fully developed. She will come to have the standards as that competence fully develops. But while that competence is immature, she can be a source of error from her competence. So we have to give up the assumption that our competence is static.
Why would it be? Your cognition is inherently dynamic and developmental. Assuming "oh no, this is just what it is," and assuming that there is a single thing we're pointing to when we point to rationality, is a mistake. See, this way, Stanovich and West can say, and it's brilliant, they can say: Cohen's argument is fundamentally right, but that specific conclusion is wrong, because the conclusion that the errors are only performance errors rests on the hidden premise that the competence is single and static. You have multiple competencies, and they're in ongoing development. Okay, so we've learned something very interesting. So what's happening is, and Stanovich is an advocate of this, you have dual processing models. This is the idea that we have different ways of processing information. Think of how platonic this is; I told you this a long time ago. We have different styles, ways of processing information, that are good for different kinds of problem solving. Neither one of these competencies can be exclusively right or sufficient for us; they have, ultimately, a complementary relationship, a relationship, I would argue, of opponent processing. Stanovich has a different view, and we'll come back to that a little bit later. Okay, second person in the debate, and you've heard me mention him and talk about him with serious respect, and he is given serious respect by Stanovich and West: Cherniak. Cherniak has a much different approach to the rationality debate. He agrees with Stanovich and West that the difficulty is not at the level of our performance but at the level of our competence. But he has a different move to make, a move that you saw me invoke last time: ought implies can. So the question is whether or not we're applying the right normative theory to people in these experiments when we're judging them to be irrational. Cherniak invokes something that you saw me invoke multiple times when we talked about relevance realization: we're in the finitary predicament. This is actually his term. We cannot, because it is combinatorially explosive, derive all the implications. We cannot consider all of our assumptions. We cannot go back and recreate all of the ways in which we've represented something. This is combinatorially explosive. So what do we do? Do we just arbitrarily choose whatever implication we want? No. So here's the point: we can't be purely algorithmic; we can't use standards that work in terms of certainty and completeness. Because, for example, if I tried to be comprehensively deductively logical, then I would fall very rapidly into combinatorial explosion for any problem that I'm trying to solve, and I would then have committed cognitive suicide. It cannot be a normative standard of rationality if trying to follow it would commit me to cognitive suicide, which would undermine any attempt to satisfy any of my goals. So see what Cherniak is saying: you can't do this, but of course you don't just arbitrarily choose whatever representation you want, choose whatever inference you want, choose to check whatever contradiction you want. So you can't check them all, and you can't just choose arbitrarily. Well, what's the answer? You saw the answer before, and it's one that I have talked about.
You do relevance. You pick the relevant implications.
You check the relevant contradictions. You check which aspects of your representation you consider relevant, et cetera. You do relevance realization. And Stanovich is like: yeah, that's right, we're not gonna argue with that. And that's part of what I've been arguing throughout. There's a consensus on how central this ability is, or at least an emerging consensus. Herbert Simon, of Newell and Simon, talks about bounded rationality: that we can't be purely computational, purely algorithmic, et cetera, for all of the reasons we've already explored.
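To see how fast "check everything" blows up, here is a back-of-the-envelope sketch in the spirit of Cherniak's finitary-predicament argument. The checking speed is an arbitrary assumption; the point is the growth of 2^n.

```python
AGE_OF_UNIVERSE_YEARS = 1.4e10
checks_per_second = 1e12  # assume a machine checking a trillion truth-table rows per second

# Verifying the joint consistency of n independent beliefs by brute force
# means examining 2**n assignments of truth values.
for n in (20, 50, 100, 138):
    rows = 2 ** n
    years = rows / checks_per_second / (60 * 60 * 24 * 365)
    flag = " (dwarfs the age of the universe)" if years > AGE_OF_UNIVERSE_YEARS else ""
    print(f"{n:>3} beliefs -> 2^{n} rows, ~{years:.2e} years{flag}")
```

At 20 beliefs the check is instantaneous; by 100 beliefs it already exceeds the age of the universe. Exhaustive deductive checking is not a standard any finite agent could follow.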
And then what Cherniak says is: but look what people are being measured against in the experiments. They're using formal logic. They're using formal probability theory. They're using all these formal, purely algorithmic systems. By the way, you might say: what, probability theory doesn't work in terms of certainty. Yes, it does. It gives you certainty about probabilities. That's what makes it a formal theory. That's why it has axioms and theorems, et cetera. Don't confuse properties of the theory with properties of what the theory is about. So what Cherniak says is: the scientists in the experiment are using all these formal theories that can only be applied in very limited contexts. If I try to apply them comprehensively in my life, and that's where rationality matters, 'cause rationality is an existential issue, if I try to apply them comprehensively in my life from within the finitary predicament, I am doomed to fail. And then you're laying an ought on me that I cannot possibly meet, which means you, scientist, have the wrong normative theory. Because if you're laying a normative theory on me that I cannot possibly meet, that's evidence that it's the wrong normative theory. And that's Cherniak's argument. It's a powerful argument, a good argument, an argument I take very seriously. Still, Stanovich and West go: yes, all of this is right. However, Cherniak thinks he's talking about one thing, and he's actually talking about another. And this is such a clever response, and it's gonna tell us something really, really important. Okay, first of all, this gives you another argument for why you can't equate rationality with merely being logical, merely using probability theory. Does that mean I can be arbitrary and ignore logic? No. It's the much more difficult issue, and notice how this starts to overlap with wisdom, of knowing when, where, and to what degree I should use logic and probability in a formal manner, et cetera. Okay, so how do Stanovich and West reply? Again, I think it's brilliant.
They say: all of this is right, but it's not about rationality. All of this stuff, and they tend to describe it negatively, using the phrase "computational limitations" instead of the positive term "relevance realization," although the two are inter-defined; all this stuff that Cherniak is talking about in terms of computational limitations is actually not about rationality. It's actually about intelligence. And this is gonna be a brilliant move. And what it's also gonna show us is that there is, and I said this a while ago, a deep difference between being intelligent and being rational. In fact, Stanovich is gonna argue that what makes you foolish is being highly intelligent and highly irrational. And that is going to make sense of what we've already argued: that the very processes that make me adaptively intelligent are the very processes that also subject me to self-deception. How do they do this? Well, they basically argue, and you saw an analogous argument before: how can we test to see how well people deal with computational limitations? We test their ability to zero in on relevant information. What Stanovich basically argues, and again, myself and Leo Ferraro argued this in a convergent fashion, as have other people, is that what we're testing when we're testing people's intelligence is their capacity to deal with computational limitations. And this lines up with something else you've heard me mention: that measures of intelligence correlate with measures of working memory, and so on; I've given you all that argument. Stanovich is giving you an additional argument here. He's saying: okay, so we understand intelligence as the capacity to deal with computational limitations. That's a negative way of putting it; I would say it's the capacity to do relevance realization, but let's keep going. We therefore have a way of measuring people's capacity to deal with computational limitations. Cherniak is saying people fail in the experiments because of computational limitations, because they're in the finitary predicament, and we have a way of measuring how well people deal with computational limitations: intelligence. We have a reliable, robust way of measuring g. So now notice what we can do. Again, so brilliant. We have reliable ways of measuring g. Remember what Stanovich and West showed in answer to Cohen: that all of the reasoning tasks also form a strong positive manifold. They don't label it, but I'm gonna call it gR, a general factor of reasoning, because the reasoning tasks form a strong positive manifold. Okay, so we can measure gR. And now we can do something very, very basic. If what's happening in the experiments is a measure of rationality, and rationality is equivalent to dealing with combinatorial explosion, with computational limitations, then these two should approach parity. Intelligence and rationality should be identical, right? So notice what's going on here.
If Cherniak is right, then rationality and intelligence would be identical, and there would be a strong relationship between how intelligent you are and how well you do on these experiments. And this is, again, good science: reliable, robust, well replicated, lots of clever experiments over decades. The relationship here is at best 0.3. The correlation coefficient goes from zero to one in magnitude, where zero is no correlation and one is a perfect correlation; this is 0.3. What this clearly shows is that intelligence is necessary, but nowhere near sufficient, for being rational. Okay, so here's two things that are insufficient for making you rational: just being very intelligent, and just being able to use logic. The science is actually clear on this. So notice how a lot of the ways our culture has tried to understand rationality are now coming into question. Oh, rationality is equivalent to logicality? Think of Descartes; now you see why Descartes is wrong. That turns out to be false. Oh, rationality is the same thing as being really smart, really intelligent? Nope, that turns out to be false. Well, what is it then?
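As a quick worked calculation (standard statistics, not anything specific to Stanovich): the proportion of variance one measure accounts for in another is the square of their correlation.

$$ r = 0.3 \quad \Rightarrow \quad r^{2} = (0.3)^{2} = 0.09 $$

So even taking the correlation at its best, intelligence accounts for only about 9 percent of the variance in performance on these rationality tasks, leaving roughly 91 percent to be explained by something other than g.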
So now we're starting to do good science. We're starting to get away from common sense. We're starting to have some humility. It's now a real problem: well, what is it? And it gives us a way of talking about what we've been talking about throughout this course: the very processes that make you intelligent can actually cause you to be irrational. There is no contradiction in saying you're highly intelligent and highly irrational. Not at all. So now you have to ask yourselves: well, what's the missing piece? So let's remember that. We've got an important question that we need to ask and answer. What's the missing piece for rationality? What's missing? Something else is going on. And that missing piece is gonna tell us quite a bit, I think, about the overlap between wisdom and rationality. Now there's a third argument. And it's an important argument, 'cause it's also gonna connect to the issue of understanding, which is, again, a crucial feature of wisdom. Now, Stanovich and West talk about this argument, but they don't cite the individual who actually came up with the argument explicitly. I don't think there's any fraud here; I think they just didn't read it, because the person I'm gonna talk about is Smedslund, and the article is from the Scandinavian Journal of Psychology in 1970. It's impossible to read everyone everywhere at all times; again, we are in the finitary predicament. But Smedslund pointed out something that Stanovich and West do take seriously, and because Smedslund makes an explicit and clear argument for it, we should pay attention to what he says. So this is the third response: we've had Cohen, we've had Cherniak, and now a response that I will attribute to Smedslund. Stanovich and West don't, but they should; again, no crime on their part. Okay, Smedslund says: well, there's a difficulty with interpreting the experiments. Again, that's the issue, always the issue of interpretation. And you can't do an experiment to decide the interpretation, because then you have to interpret that experiment. You can't experiment your way out of interpretation. Interpretation is always going to be needed in science, and that means theoretical debate is always going to be needed. Okay, so back to the theoretical debate. Smedslund says that interpreting these experiments relies on a distinction between a fallacy and a misunderstanding, because there are two ways in which I can give the wrong answer. One is: I interpret the problem correctly, I understand it, but then I reason incorrectly, and that's why I get the wrong answer. The fault in a fallacy, in fallacious reasoning, is in the reasoning; that's where the error comes in: I reason incorrectly. But there's another way in which I can give the wrong answer, get to the same wrong conclusion. What is it? I actually reason correctly, in a normative fashion, but I've understood the problem incorrectly.
And that's a misunderstanding. When somebody misunderstands us, the error comes in because they've interpreted the problem, they've understood the problem, incorrectly; but once you grant them that incorrect interpretation, there's nothing wrong with their reasoning. Okay, great. So there are two equally good explanations for why we produce wrong answers. One is: we reason incorrectly, and that's a fallacy, because we've got a correct interpretation. The other is: we are reasoning correctly, there's nothing wrong with our reasoning, but we've understood the problem incorrectly, and that's a misunderstanding. Okay, so this distinction is really crucial. Keeping these apart is really crucial, because if we want to conclude that people are irrational, we have to be attributing to them fallacious cognition, not some kind of distortion in the communication, not that they've misunderstood us. One of the ways in which people typically avoid self-criticism, avoid the possibility that they might have reasoned incorrectly, is to always claim that they have merely been misunderstood. Look for that in somebody. Look for somebody who never says "ah, my argument is wrong, I did it wrong." Look for somebody who always says "no, no, I've been misunderstood." Because they're basically trading on this distinction in an equivocal fashion; they're bullshitting you in a way. Now, sometimes they should say "I've been misunderstood," totally, sometimes. But sometimes they should say "I reasoned wrong." Okay, let's get back to this. So far so good, right? But then Smedslund says: but this is difficult, because these things aren't independent the way we need them to be. What do I mean? The attribution of fallacy and the attribution of misunderstanding are not independent the way we need them to be in order to cleanly interpret the experimental results as showing that people are largely engaging in fallacious reasoning. Well, why? So Smedslund does something, and of course I think it's preliminary, and we'll have to come back to it: he gives a preliminary account of understanding.
And he basically says: well, what is it to understand something? To understand x, we ask people to give us something that's identical to x. We ask them to give us something that contradicts x. We ask them to give us things that x implies. And these are all, of course, related, because identity is a kind of implication, and contradiction is a failure of implication. And then we also ask them to give us things that are relevant to x. There it is again. I would also add, by the way, because when you look at further research on understanding, people talk not only about what is relevant to x, but also about what x is relevant to. I don't think Smedslund would object to that. But here's relevance again, of course. Okay, so what's the problem?
Now, put relevance aside; Smedslund just sort of puts it aside in his argument, and of course that's something I'm not going to let sit. He puts this aside and says: well, ignoring that, look at the other three. The way we determine if somebody has understood us is we determine if they have drawn the identities we would draw, drawn the contradictions we would draw, drawn the implications we would draw. So somebody understands us if they reason the way that we do.
So, what Smedslund says is: this is what the scientists are assuming. The scientists are assuming that the participants in the experiment have understood the problem correctly and then reasoned incorrectly. But notice how this is a pragmatic contradiction. Because if they've understood the problem correctly, then they reason the way the scientist does in this very difficult task of interpreting a problem, but then they fail to reason the way the scientist does when they're actually trying to solve the problem. And that's problematic. In fact, couldn't I say this: couldn't I say that the fact that the participants in the experiment are consistently producing the wrong answer is good evidence that they are misunderstanding the problem? People are reliably misunderstanding these problems. Well, that can't be, because the scientists made the problems. What, scientists can't be making mistakes? Scientists can't miscommunicate? What are you attributing to scientists, god-like authority? No, stick with the argument here. You can conclude that they're committing the fallacy, but then you have to say that, for some reason, on this really difficult problem of interpreting what I'm saying, they're reasoning very correctly, and then when they go to solve the problem, they're reasoning poorly. Or you can say they're reasoning correctly but they've misunderstood the problem; but since understanding was just defined in terms of reasoning, that also means that they're reasoning poorly. Or maybe, or maybe, I've misrepresented or miscommunicated the problem. See, now it's much more problematic. Okay, so two important things to note here. First of all, we've got to come back to this, because there's an opening here. Stanovich and West, because they haven't read Smedslund and because they don't have this so clearly explicated, can't quite pick up on that. So I'm not criticizing them for not seeing this. But they do come up with a very important point, and this is convergent with their argument against Cohen. They argue that, in order to break this impasse, we need a normativity on construal. We need a normativity on how people interpret, make sense of, size up the situation of the problem; that's what construal means. Basically, we need a normativity on how they formulate the problem. And this has to be an independent normativity. Independent of what? Independent of inferential norms. If I try to use good inference as my standard for doing this, I'll fall into this circle. I have to be able to evaluate construal independently of evaluating how people make inferences. That's the only way I'm going to break out of this. But that's okay, because that means, and Stanovich doesn't take this as deeply as he should, that there's a non-inferential aspect to rationality that is essential. There's an aspect of rationality that has to do with understanding, with construal, that is non-inferential in nature. And that, of course, points back to relevance, because relevance is pre-inferential. The way you formulate your problems, remember, has to do with relevance realization, and that is something that is pre-inferential in nature. So we can actually put this together very cleanly, I would argue.
See, what Stanovich and West say is: okay, we need this normative theory of construal that has to be independent of our inferential normativity. And then they go: oh, we don't know what this is. What could it be? What would that normative theory of construal look like? And it's like: granted. But here's a proposal. I think it's a proposal that is clearly presented to us from a lot of the arguments we've already considered. We do have a normativity on construal. We have standards of what a good problem formulation is versus a bad problem formulation. Where do we study that normativity in psychology? Well, we study it in insight problem solving. We know what a bad problem formulation is. A bad problem formulation is one that puts you into a combinatorially explosive search space. A bad problem formulation is one that does not turn your ill-defined problem into a well-defined problem. A bad problem formulation is one in which you are not paying careful attention to how salience is misleading you. So that's telling us something really interesting: that in addition to inference being crucial to being rational, insight is crucial to being rational. Because if what we mean by being insightful is being good at formulating problems, avoiding combinatorial explosion, avoiding ill-definedness, avoiding salience misdirection, then being insightful is going to be central to being rational. It's not something that comes up out of the irrational aspects of the psyche; it comes in from the non-inferential. And why should we be identifying, nobody should be identifying, rationality with just a pure logical normativity on inference? So what I propose to you is that we need to understand the role of both insight and inference in rationality. And that's much more problematic, but we need it, because we need to integrate rationality and understanding together in an integrated account. And notice how, now, rationality and wisdom are starting to overlap for us in a serious way. Because if I get rationality and understanding, inference and insight, meshed together in a systematic and reliable way, then of course I'm starting to talk more about what we mean when we talk about how people are wise. If we give up thinking of rationality as being like Mr. Spock or Mr. Data, and we give up the idea that being rational is just being really smart, then we start to get into the problematic notion of rationality. And we need the idea of multiple competencies. We need the distinction between being logical and being rational, between being intelligent and being rational. And we have to understand that there's an important component of rationality that is non-inferential in nature. It has to do with a normativity on construal and the generation of insight, not a normativity on argumentation and the generation of inference. And that's important. That's really important. So, the issue of construal is acknowledged but not in any way resolved by Stanovich. It's not going to play, although it should, given his own arguments, a significant role in his theory of rationality. What is that theory? What does it look like? What is the missing piece according to Stanovich?
So, we said that intelligence is not predictive, or only weakly predictive, of rationality. These are not equivalent. And along the way we got yet another argument for intelligence being relevance realization. Okay. That's all good. What is the missing piece then? The relationship here is only 0.3. What accounts for most of the variance then, as a scientist would say? So, Stanovich argues very clearly for what he calls a cognitive style. This term is a bit equivocal; it's used in slightly different ways in psychology for different things. And he also invokes the notion of bad mindware, inappropriate psychotech. So that's also in there. So there's cognitive style and psychotech, and both can be part of the missing variance. One part is the psychotech you're using. You can have poor, what he calls, mindware, which is like software; he's picking up on the psychotech idea here. And then the other, and this is what often gets given more priority because it accounts for a lot, is an appropriate cognitive style. The difference between these is not as clear, I think, as Stanovich seems to think it is, so we'll have to come back to that when we come back to the relationship between psychotechnology and wisdom. So, what's a cognitive style? A cognitive style is something you can learn, at least as Stanovich is using the term: a set of sensitivities and skills. Notice the procedural and the perspectival in here; it's implicit, it's in the background. But what is the cognitive style that's most predictive of doing well on the reasoning tests? He gets this from Jonathan Baron: it's active open-mindedness. And when you see this, you're going to see a lot of Stoicism here, and this overlaps a lot with the cognitive behavioral therapy that is derived from Stoicism. And again, that's convergent; that's not by design or deliberate. And that tells you something. Something crucial is being seen here. What is active open-mindedness? Active open-mindedness is to train yourself to look for these patterns of self-deception, to look for biases. Okay? So, here's a bias you've heard me mention. You've heard me mention some of these.
Confirmation bias, right? Or the essentialism bias, or the representativeness heuristic, or misusing the availability heuristic. Notice that a bias is just a heuristic misused; we've talked about this. So confirmation bias is: I tend to only look for information that confirms my beliefs. The essentialism bias is: I tend to treat any category as pointing to an essence shared by all the members. And maybe we should give up an essentialism of sacredness, at least in terms of its content; I've already suggested that to you. Misused, the availability heuristic becomes the availability bias: I judge something's probability by how easily I can remember it or imagine it happening. We've talked about all of these, and there are many of these. So, what do I do? First of all, I have to do the science. I learn about all of these. This comes from Baron, and I want to point out something that Stanovich doesn't say as clearly as Baron does. What I do is I learn about these and I sensitize myself, and this is like a virtue, because I have to care about the process, not just the results. I sensitize myself to looking for these biases in my day to day cognition, and then I actively counteract them. I actively say: no, no, no, I'm doing confirmation bias; I need to look for potential information that will disconfirm my belief. And here's where you can now begin to also give up the individualistic assumption about competence. Part of the way in which I can be rationally competent is I can ask you to help me overcome my confirmation bias, because it's very hard for me to look for information that disconfirms my beliefs; it's much easier for you. And then, as I practice with you, I can start to internalize you and get better at catching my own instances of falling prey to the confirmation bias. So I now actively counteract those. That's sort of where Stanovich leaves it. Baron points out that you shouldn't overdo this, because if you overdo it, you will start to choke on the tsunami of combinatorial explosion that will overwhelm you. So again, you have to do this, and you have to do it to the right degree. And that becomes much more nebulous, and again, we're starting to shade over into wisdom. So what you should then ask is: what predicts it? If intelligence predicted rationality, then being intelligent would predict how well you've cultivated active open-mindedness. But of course it doesn't. So what is it about people that predicts how well they will cultivate active open-mindedness? And this is the degree to which people have a need for cognition.
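Here is a toy sketch of why actively seeking disconfirmation matters. The scenario is illustrative, not from Baron or Stanovich: an agent testing the hypothesis "this coin is biased toward heads" who attends only to confirming flips becomes certain of a falsehood, while an agent who deliberately records every flip converges on the truth.

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(1000)]  # a genuinely fair coin

# Confirmation-biased agent: notices and records only the confirming cases
# (heads), so every remembered observation supports the hypothesis.
confirming_only = [f for f in flips if f]

# Actively open-minded agent: deliberately records disconfirming cases too.
all_observations = flips

def estimated_heads_rate(sample):
    return sum(sample) / len(sample)

print(f"biased agent's evidence for 'biased toward heads': {estimated_heads_rate(confirming_only):.2f}")
print(f"open-minded agent's estimate of P(heads):          {estimated_heads_rate(all_observations):.2f}")
```

The biased agent's sample yields an estimate of 1.00; the open-minded agent's yields roughly 0.50. The error comes entirely from which evidence was sought, not from any flaw in the inference over the evidence collected.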
This is people who problematize things. They create problems, they look for problems, they go out and on their own try to learn. I would add that there are two aspects to need for cognition. There is a curiosity in which I need to have more information so that I can manipulate things more effectively, and that's important: people who are in that sense more curious and want to solve problems, not just gather facts, but solve problems, because that's what need for cognition points to. That's important. But also think about how important wonder is: how much it opens you up to putting into question your entire worldview, your deeper sense of identity. That's a deep need for cognition. And that's relevant too, because rationality is ultimately an existential issue, not just a theoretical, inferential, logical issue. So what we're going to need to do is come back and look more at Stanovich's account of rationality and some criticisms of it, and then, on the basis of that, because we're already overlapping with it so much, take a look at some of the key theories of the nature of wisdom and try to draw that together into a viable account of wisdom, which we can then integrate with the account of enlightenment. Thank you very much for your time and attention.