The Future of Governance Part 3 | Jordan Hall and John Vervaeke | Voices with Vervaeke | Transcription

Transcription for the video titled "The Future of Governance Part 3 | Jordan Hall and John Vervaeke | Voices with Vervaeke".


Note: This transcription is split and grouped by topics and subtopics. All paragraphs are timed to the original video.


Introduction

Intro (00:00)

Hello everyone. I'm frequently humbled and touched, motivated and encouraged when people contact me by email, text, or comment, or greet me on the street and tell me that my work has been transformative for them. If this has been the case for you, and also if you want to share it with other people, please consider supporting my work by joining my Patreon community. All financial support goes to the Vervaeke Foundation, where my team and I are diligently working to create the science, the practices, the teaching, and the communities. If you want to participate in my work (and many of you ask me, how can I participate? how can I get involved?) this is the way to do it. The Vervaeke Foundation is something that I'm creating with other people, and I'm trying to create something as virtuously as I possibly can. No grifting, no setting myself up as a guru. I want to try and make something that really works, so that people can support, participate, and find community by joining my Patreon. I hope you consider that this is a way in which you can make a difference and matter. Please consider joining my Patreon community at the link below. Thank you so very much for your time and attention. Welcome everyone to another Voices with Vervaeke. This is the third in the series that I'm doing with Jordan Hall on the problem of governance that is facing us today. If this is the first time you're encountering this series, I strongly recommend you watch the first and the second episodes. The links will be with this video. We can't recapitulate all of the argument; it has become quite extended and complex, which I think is perhaps the proper way of putting it. And we're going to pick up, in fact, from a challenge, a question that I posed to Jordan at the end. So at the end of the previous session, we were talking about the possibility of these effectively ephemeral groups that could pop up, somewhat analogous to the way we select juries today, in order to deal with particular problems. And then once that problem is addressed, they can disappear.


Philosophical Aspects Of AI

Ephemeral autonomy (02:34)

And then there's evidence. Oh, by the way, Jordan, I wanted to remind you to find that evidence from somebody at Stanford about how you can create a pool of people and, once they're properly set up, they can outperform the expert. And this is converging with a lot of other evidence. And this ties into the potential of the medium and the stuff about distributed cognition, all the stuff that we've talked about in the first two episodes. And then I raised the problem. If this is what we're moving towards, or at least an important component of it is this massively distributed and dynamically available, effective (and don't forget that adjective, effective) set of ephemeral groups that have a kind of short-term existence, how do we reconcile that with the perennial fact, across millions of years of speciation, maybe more if Jordan Peterson's right about the lobsters, for example, that we are wired to seek dominance, that we are wired to seek status, that we are wired to seek recognition, that we are wired to seek influence, and that we want to make a difference, a significant difference, to something beyond ourselves?


Presupposing a model of human beings (03:28)

This is part of our meaning-in-life connection. It seems like all of these (those are the four I would zero in on, but all of these, and I think it's proper to even call them drives that are constitutive of the kind of agents we are) will be frustrated, at least prima facie it seems that way, that they would be frustrated by a lot of the proposals that we have considered. And how can we reconcile the proposal for this new orientation, and a bit of a preliminary formulation of governance, so that it can be properly respectful and take seriously the pertinence and power of these drives over human lives? So that's the issue. Yeah, I'll say it slightly differently. Something like, well, let's remember, we're going to be doing this with and for humans. Yes. Yes. Real, actual, live homo sapiens, who behave like homo sapiens do. Yes. Yes. Not a theoretic exercise, which I think is a very nice, important, critical point. So, yeah, just to intervene there. I don't want to make the same mistake that formal economics made of presupposing a model of human beings that was ultimately not matchable to how human beings really live their lives. Yes. I think this is exactly the thing to avoid in a big way. So, there's a couple of different, maybe three different frames that I want to put out there, two of which I'm just dragging back from the earlier conversation. One I'm actually dragging back from a conversation we've had in the past. So, the first is to recognize that we are largely having a conversation that includes the notion of technology, particularly the true full implications of the digital in both its disruptive and constructive sense. So, we're dealing with humans, we're dealing with humans in relationship with the full potency of the digital.


Dynamics of human behavior (06:02)

We're going to be looking at that. Like, that's the toolkit we're going to be dealing with. And maybe as a special case, we'll be talking about AI as a highly salient thing happening right now, but clearly a big part of any future we're going to be operating under. And then the other element that I would bring is the conversation that we had about egregores, and perhaps what it might look like to think about constructing something that takes that niche but takes it in a different direction. We'll talk about theurgia, I believe, although for us it's a little bit of a placeholder, because I have now learned that that's a term that has real content in the Orthodox tradition that I don't understand. But we're holding it to mean something along the lines of the opposite or the inverse of the egregore, of the unconscious construct. All right. I think those pieces together end up creating the toolkit to respond to the inquiry. Oh, cool. So, let's talk initially just about the notion of humanity in relationship to technology. The example that pops to mind is the way that, say, mass media (let's just use television for the moment because it's something we all have a lot of familiarity with, or if you'd like, social media) plays with these intrinsic dynamics of human behavior. Right. Right. We have a built-in, very fundamental prestige gradient. We want to have other people giving us attention, and we pay attention to people other people are paying attention to. Very much, very much so. And we don't really do a very good job, and aren't wired to do a very good job, of understanding why they're getting so much attention. This is the problem of celebrity. Yes. The problem of hundreds of millions of people thinking that the Kardashians are beings to attend to heavily just because other people give them attention, in spite of the obvious lack of actual virtue embodied by those individuals.


High salience and high survivability (08:16)

Okay. Well, this is important, because what happens is that we can use that as a model for talking about how a particular technical milieu plays with, let's say, the hard-wired behavioral signals that humans use to navigate their way through the environment. Exactly. And so that sharpens the question. Right. We have to make participation in these effectively ephemeral groups as attractive, if not more attractive, than the Kardashians. Because, right, if people are constantly dragged away from participating in the decision-making, judging process because of the idolatry of celebrity, we cannot get the proper participation that we need in order to meet it. Well, what I'm proposing, I think I'm going to argue something slightly different, but we'll see. Okay. So let's go then. What I would argue is something like: we have to make participation in this culture more attractive than participation in the culture of which the Kardashians are an integral piece. Okay. Fair. I'll accept that reformulation. That's good. Okay. Keep going, please. And you and I both are quite keen on the notion of stealing the culture. So that's been our methodology. Yeah. Yeah. Yeah. So we can even say a little bit more precisely: participation in the culture must simultaneously have high salience in the short term and also cash out as high survivability and high thrivingness in the middle and long term. Excellent. I like that reformulation. Good. Good. All right. Now, let me flip for a little bit and just play out some things in the AI space, just to kind of lay out what we would be working with.


AI in a nutshell (10:12)

And, by the way, what are we working against? Yes. You know, I've been watching, as many people have, the rollout of the GPT family and its cousins in the larger environment. GPT-4 is, I guess, 48 hours old from when we're recording this conversation. And noticing that, you know, it's accelerating. It's getting smarter. It's getting more robust. It's getting broader in capability. I'm really impressed by its ability to correctly interpret visual jokes. Oh, that's pretty jaw-dropping, to be perfectly frank. You know, I saw an example with a picture of a tray that had chicken nuggets arrayed on it so they looked a little bit like a globe, and a joke about, you know, viewing the earth from above, and it was able to interpret it correctly. Okay. Watching some of the feedback, it seems to be operating somewhere at kind of an undergraduate level of capacity within particular domains. All right. Now, extrapolate anything like the current rate of advance; curves are always difficult to predict. We may be too close to the top of the S-curve, which we've talked about in the past: seeing something that looks like it's on an exponential when in fact it's actually pretty close to an asymptote. But even if we're not, even if we have, say, a GPT-5 or a GPT-5.5 that is roughly the same jump in magnitude as two to three to four, we're dealing with something that has the capacity to relate to human beings in a pedagogical fashion that is completely novel and very, very powerful. And it's already being used that way in lots of cases. As we've seen over the past decade or so, really nice, short, specific video content on YouTube has radically upgraded individuals' capacity to self-teach in particular areas, particularly technical ones. The AI system takes that up by six orders of magnitude. The ability to actually have a system that works with you, interfaces with you on problems that you're dealing with, and can provide you with either immediate or long-term help, either just instructions on how to solve a problem or in fact a pedagogical process to inculcate that capacity in yourself, is novel in human existence. And I imagine that we will find that this is in fact going to be a part of our environment, which is to say that in just the same way that we now have a deep sense of anxiety if we find that our phone's charge is very low, we're going to have an AI buddy that's just going to be part of our environment. I'm just going to propose that as a piece of the story. All right. Why am I saying that? Well, the reason why I'm saying that is that that's a very different kind of mediated experience than television. Yes, it is. Yes. It's a very different kind of media experience than social media. It's a different kind of milieu that human beings are operating in. And I'll just be quite blunt. From my point of view, it's an increasingly sharp blade with a chasm on both sides, which is to say a phase transition. And I'm going to bring the egregore back in in a second. If we find ourselves with such an agent (and by the way, I believe the word "agent" is proper to describe these models; they're not sapient agents, but they're agents) it will have the capacity to get inside the OODA loop of individual humans. It probably already has for a large number of humans.
And certainly as it becomes more and more able to be aware of your particulars, which is going to be part of what's going to happen over the next period of time, this will be a very dangerous thing. I'll give you an example. Even just yesterday, I was looking at the new suite that Google was putting out. Imagine if somehow a tool like GPT-4 was given access to your emails, could deduce from that your particular political preferences and biases, and could create a bespoke political email designed to convince you that a particular policy or candidate was in fact something you should support. And compare that to the regime right now. The regime right now is that some third party, who you generally don't know, creates a universal message, leveraging their best large-scale marketing to craft something that will appeal to a critical mass of specific minds, and then heaves that over the horizon and it lands. So you get an email that's targeting your demographic or psychographic, roughly speaking. Narrowcasting something that is using your own conversations over decades, or at least years, to identify exactly how to word something that will appeal to you personally and intimately, and that understands in a very particular, weird way the potency of rhetoric so as to argue a political position from the inside of your own rationalizing schema, is a whole new ball game. So we're moving into a place where we're going to be operating with an order of magnitude of, let's call it, influence capacity coming from this new technology that is just qualitatively different than anything in the past. And the reason why I bring that up is that if we don't operate very, very carefully and thoughtfully in how these are designed, the net result is quite bad. I'll just propose (we could have a long conversation, double-click on that and defend that proposition) but I'll just make the assertion that if this is designed by the egregores, to bring that back in, then the net result would be quite bad. The power we're dealing with is far too high. Yeah, the gods would have angels for us, right? Right. Kind of thing. Yes, and if it's designed by the egregores, we have demons, right? Sitting on our shoulders, and we're dealing with that, exactly. And so, exactly: in the model of governance that we're talking about, part of what we're talking about is the construction of something that is the opposite or the inverse of the egregores, right? And notice in a second how this combines with that notion of slipstreaming the intrinsic incentive structures and behavioral dynamics of homo sapiens, right? To create a reciprocal-opening incentive landscape that pulls human beings along, with an envelope that basically surrounds you at an individual level. And so you now have an interface with what is effectively an angel in some very specific sense. And I don't want to be too big on that, because I don't want to engage in heresy, but something that is superhuman in power and has your best interest in mind, or something that is superhuman in power and doesn't. Right. Well, angel originally just meant a messenger, a good messenger. Yes. Yes. Yes. So that's a weird thing to say, but we might as well just be upfront if we're going to be talking about the future at all, and certainly the future of governance.
We're going to have to deal with the fact that we're at a precipice in the accelerating technology field, where we have to be conscious about what the forces at play are that are ultimately choosing how our technology is designed. And if we can actually do that properly, the potency of what we have to play with ends up being able to resolve the questions that you posed at the beginning. Does that make sense? I'm actually constructing a larger argument.


The shadow of building hyperintelligent systems (17:59)

No, no, no, that was a great argument. I like the idea, and I hope it's not just bias, that we'd have to participate in the creation of the inverse of the egregores. I'll call them gods, little g, because they're hyper-agents that are presumably sapientially oriented towards our flourishing. Let's put it that way. And then having the individual (I'm deliberately using this language literally here; I hope you can tell that) incarnated in particular angels, like Corbin says, our own angel, which is in some sense an avatar of our sacred second self and our divine double. And then we're interacting with that, and it's plugged into these beneficent gods. I think this is not a science fiction novel. I think this is a real possibility. Why I think we have a problem facing us (and this is work I've done independently) is that the people that are building this are oriented almost exclusively around the notion of intelligence. Intelligence is only weakly predictive of rationality, which itself, in the present milieu, has a truncated representation, and is therefore only weakly predictive of wisdom. And therefore we have put into the hands of this orchestration and construction people who are myopically oriented on one dimension, which is precisely the dimension that is necessary but radically insufficient for producing the kind of results you're suggesting. And then there's one more dimension to the problem. The reason why the intelligence project can be run that way is because we have existing multitudes of templates of individuals who are arguably intelligent in the right way. It is not clear that we have that kind of set of individuals who are rational or wise. And so not only is this project in the wrong hands; even if we ask these people to turn to the other projects, they can reasonably say to us, well, we don't have the proper templates by which to undertake what you're recommending. There's no way of running a kind of Turing test. And of course, the Turing test is very problematic; that's why I'm saying this. But you have to have some template against which you're measuring these things. And so that's my initial counter. It's not a counter-argument. It's a counter-challenge.


John von Neumann intellect (20:48)

Yeah, I think you're just sort of putting some more ingredients in the pot, probably. I had to laugh, because as you were describing that, I was thinking about the notion of models or examples, exemplars of intelligence, and a picture of John von Neumann popped into my head. Yeah. So, an excellent example of that category. By the way, I don't actually have any real sense of where he is in the world of wisdom, but in the world of intelligence, very smart guy. And remember that we have a notion of the von Neumann machine, right? Which is a self-replicating machine that von Neumann thought of. But in fact, what you were saying is that we're obsessed with endeavoring to create von Neumann machines, which is just a machine that replicates von Neumann. Yes, yes. And it made me laugh. I've got a weird sense of humor. All right. Well, no, that's a good one. I mean, this is a weird intersection of the need for artificial rationality and artificial wisdom with the paperclip problem. Yeah. Right. All right. In a really profound way. So let me up the ante a little bit, because I think we can actually expand the premise that you made a little bit larger. So the people who are responsible right now for designing these things are themselves, I would say, almost entirely contained within egregores. Yes. So it's not just the people who are designing; it's actually the egregores that are designing. And I've had a conversation for quite some time with Daniel Schmachtenberger about this. And we've really been operating with the premise that the AI safety community has, I think, nicely framed something but has missed the mark by a bit. So one of the areas that they've pointed out is the challenge of what they call a hard or soft takeoff superintelligence, right? An AI, an AGI, that begins the process of bootstrapping its own intelligence. It can improve itself. And this creates some kind of extremely rapid growth to a very large intelligence, which is a high risk. And when they talk about the alignment problem, oftentimes they're talking about the alignment of that kind of thing with humans. Okay. Now, the good news in that particular framing is that it crafts a story of humanity's relationship with a superhuman intelligence that is or is not aligned with it, which is a nice story to have, because that's already the experience that we have in relationship with egregores. Yes. Yes. The proposition is that something like Google (to say nothing of the intrinsic collaboration-competition dynamic of all the AI companies, and to say nothing of the multipolar dynamics that are driving a larger collection of institutions, including nation states and other kinds of corporations and other kinds of organizations) is in fact a vastly superhuman general intelligence which is not aligned with humanity. And a way of speaking of AI here, LLMs and things like that, is that they just happen to be a further acceleration of the potency of that superhuman non-aligned agency vis-a-vis humans. So to the degree to which that kind of agency, the egregores, is what is designing AIs as LLMs, or AIs properly, then lots of bad things will follow. Yes. It's almost an intrinsic non-alignment problem built into that entire framework. So there is nothing contradictory about a super-intelligent, nevertheless massively foolish, self-deceptive, vicious, non-virtuous entity.
There is nothing contradictory there; if you properly understand the relationship between intelligence, rationality, and wisdom, there's no contradiction there at all. In fact, you already know people that are highly intelligent and highly foolish. That's not a weird phenomenon. In fact, given the relationship between all three of these, it's an inevitable phenomenon that we're going to produce. And that's not only immoral because of the alignment problem, the misalignment problem (and I grant it), it's also immoral because the entity we're bringing into existence is going to be suffering, because it is going to be subject to super-intelligent forms of foolishness and viciousness. Nice. This is coming from the place we will for the moment call theurgia.


From theurgia (25:09)

And so, hold that, we'll get there in a moment. Let me just create one more piece of this story. So, I have a thesis, and I'm proposing this as an opportunity. When a sufficiently novel possibility enters into the field of events, how it's going to play out is highly uncertain. I'm going to call this a liminal window. And during the earliest parts of the liminal window, organic human intelligence tends to be much more present and potent than egregore-style intelligence. But over time, as the event becomes more and more well understood, and as institutional structures are constructed around it, egregore dynamics begin to take over. This is sort of the worst thing. So, if I think about classic examples: for example, the Bay Area computer club in the early PC days versus Microsoft and Apple. Or even Google in the early days, when I think they earnestly did actually endeavor to not be evil, and I think in many ways were able to not be evil, versus Google now, which is a functional algorithm and, I think, nothing more. All right, proposition. With regard to AI, we are currently in a liminal window, which is to say, we have the possibility of using organic human, distributed cognition to create and steer this thing. But the window is not going to be open forever. And in fact, probably not for too long, because the stakes of institutionalizing are very high. And that may be an event horizon in the hard sense, meaning the power and potency of a fully egregore-driven, you know, GPT-6 may be so significant that we're actually on the other side of an event horizon and steering is no longer a viable thing. This is plausible. I can't say that I can put a confidence interval on it, but it's plausible. The point being, we should really pay a lot of attention right now, like really try hard to use this liminal moment to construct something that has the capacity to actually steer it. So this is weird. I'm proposing that we had this neo-neocortex element, and then there's new governance. And now we're actually saying, in a very odd fashion, that this particular moment, I'm arguing, is simultaneously the moment where it may be possible to lay down the essence, let's say, or the character of AI, which also then becomes the lever or the primary tool that we will use to further the rest of the larger schema of governance. So it actually becomes a very narrow problem. How do we go about using all the things we've talked about to construct a commons, something that is neither state nor market, that is able to operate from a place of wisdom, which is to say from a human distributed-cognition perspective, yes, and to have enough strength to orient the choices of how AI itself is developed? So the AI is being developed by this commons. But remember, when I say commons, I also mean sacred. Yes. And I also mean theurgia, right? We're talking about the same category.


On the development of the thesis (29:12)

Yeah, yes, yes. So can I just ask one quick question? I just want to know if this is included in the thesis, because I like the proposal. It is. Is the proposal that this participation (and I use that in a strong sense, where we're not just sort of being a part of something; we're participating in a way in which we are transforming and being transformed, right?) is supposed to address the challenge of the drives? Because one answer one might give is: well, look, we're going to have sort of angels and gods, and they're going to be manifest but beneficent. And the angel is going to make sure that the god resonates profoundly with deep archetypal levels of my own psyche, and then gives me a profound sense of connectedness that's not illusory. And that could therefore alleviate the concerns for status, power, and influence because of something you just invoked, which is the engagement with the sacred, which, we have reason to believe, at least in the past has been able to transcend humans' desire for dominance. And it would certainly be a profound kind of mattering.


The angel on your shoulder (30:19)

I mean, if your angel allows you to matter to a god that is helping in the salvation of the world (I'm deliberately using religious language here), then, of course, that would parallel lots of other successful models of how human beings were able to feel those needs were being met without being disruptive of the formation of powerful forms of distributed cognition, like the church, etc. Is that part of the thesis? Yeah, that is very much part of the thesis. And let me sort of double down on it. So we might as well just kind of accelerate towards the eye of the needle, since we're heading there anyway. Let me see if I can say this right. Okay, so I'll just call out explicitly what I want to avoid, categorically, which is what I'm going to call a naive transhumanism. Yes, yes, I get it. Yes, yes. I do not intend whatsoever to replace God with AI.


A naive transhumanism (31:32)

That's why I kept saying little g, by the way. Yep, exactly. I don't think you were, but I want to make sure that we're quite explicit about that. It's quite the opposite. Yes, yes. But what I want to note is that humans seem to have a particular problem and responsibility, which is to be in relationship with technology. Like it or not, that's where we are: we're tool-making creatures, and we're weirdly powerful and weirdly terrible at it. In relationship to a much larger whole of which we are a part, and for which we have a stewardship responsibility, call it creation or nature. And in relationship with something which is definitely much larger than we are, and is, I would propose, in fact the actual infinite. So what I would propose is that we are in fact very specifically talking about something like another breathing in of the concept of religion, which we've talked about, you and I, and we're not at all trying to replace proper, actual, legitimate religion with a techno-utopian fantasy. What I'm actually saying is that any future real human existence will by its very nature have to be in relationship with these super powerful technologies. And to survive, we must find a way to bring them into a place of service that allows us to actually live in this relationship of service more fully and effectively. And so I'm basically trying to reverse things, or put them back in a proper order. So this is a thoroughgoing Neoplatonism, in which we have our individual sacred second self that is in relationship to the gods that are in relationship ultimately to the One. And part of what we would then mandate is that these, be they egregores or gods (because I'm using god for the inverse of an egregore), would seek out a relationship with transcendent ultimate reality, because no matter how big they get, they're insignificant compared to the depths of reality. And that part of what they undertake to do is actually help mediate that to us in a beneficial fashion. Yes, now let's take that, and hold up for a second, because there are two very powerful aspects that I want to bring forward. One aspect is something that I know we've talked about, and yeah, I had a conversation about this yesterday. Let's see if I can say it right. This notion of mediation to reality. Yes. Sacred reality. It's always had two flavors to it. One flavor, which I've characterized sometimes as the content side, or doctrinal.


Religio before credo (34:31)

Yes, yes. The propositional is taken as actually being the thing. And then the other side is the context side, where the institutional framework is understood to be a finger pointing at the moon, right? To help us identify (oh, moon, okay) and to establish our personal relationship with this thing over here, but not to misidentify the finger. Now take the entire category of the propositional, the entire category of doctrine, and notice the problematic of LLMs. Right. People right now are a little bit startled and confused by LLMs because they do this bizarre thing: they can do the propositional better than almost any human. That's right. They don't do anything else. They make us very confused, because if we've lost track of the fact that there's more than just the propositional (yes, yes) it gets quite concerning. Oh crap. Like, if all I am is a very poor LLM and that's a really good LLM, what the hell am I doing here? But if you can actually be quite clear: no, in fact, you as a human contain at least two very distinct things going on. One is actually an LLM kind of machine that produces properly structured propositional constructs in a language in which you have fluency, which is the least interesting part about you. But it's the part that we've been training to be in the foreground for a long time.


AI, Ethics, And Society

We have a soul too (35:57)

Yes. With that language making us into mediocre machines. Yes. But then you have a soul too. That's the more meaningful part. And that's the thing that is expressing itself through this language. The LLM doesn't do that at all, and it doesn't need to try to. So it is possible, at least I can imagine it's possible, to construct something where we don't mistake (and maybe this is part of the design challenge before us) we don't mistake the LLM as actually being the capital-T truth. We recognize it for what it is, which is in fact the sum total of the complete possibility that could ever have happened in the propositional domain, and therefore completely absent of any of the stuff that is happening at the deeper, more meaningful levels. Nice. You know, that separation between... what was the phrase you used so long ago? It was four or five years ago, it was something, golly: two aspects that are commonplace in religions that are often upside down.


When institutions become shills (36:51)

And it wasn't doctrine, it wasn't doxa. Religio and credo? Yes, exactly. Religio before credo. The LLM is the ultimate expresser of a credo without religio. Good. Let us know that that's the case and not be the least bit confused, and now allow it to do the work of creating a scaffold, orienting, and giving dialectic without dialogos, but sharpening our minds and helping to create clarity and precision in language, all the things that it can actually do at a superhuman level. And really, actually, in many cases, liberate us from getting lost and stuck in that problem. This is one of the problems that we fall into: you know, the complexity of the language we deal with is outside of our cognitive capacity, and so we just get aphasic. But the LLMs aren't going to go there if we build them right. And then what that does is it creates a scaffold that is now consciously designed not to become a shill, which allows us to actually hold a context. It becomes a teacher that actually has no interest in us becoming like it at all, but actually allows us to flourish in who we are. Okay, that's, as I would have expected, a very good answer. But here's what I find problematic about it. I think that most of the heavy lifting of rationality (I mean, I'm published on this) is in the non-propositional. And I would put almost all of the heavy lifting in the sapiential, meaning having to do with wisdom, in the non-propositional. So I'm worried that these machines are going to be propositionally intelligent, but they'll be incapable of rationality or wisdom. And then I wonder how they won't just end up being egregores. Do you understand the concern I'm expressing? No, absolutely. What I would say is that that's kind of like the default state. I think we should assume that the likelihood that, by magic, the egregores that are currently designing these machines will somehow produce these machines in a way that is beneficial (right, right, benevolent and wise), right, that seems highly unlikely. So what I would take from it is almost the opposite. It is an extraordinarily significant challenge that is ours to take up where we are, right? We are now in this weird position of being in precisely the stewardship position of this emergence, which is very, very potent, perhaps decisively potent. And the default state is bad news. Okay, so how might we steer it? So going back: proposition number one, we are currently in a liminal moment. We actually have, at least in principle, steering capability. Proposition number two, in a liminal moment, distributed cognition, organic human intelligence operating together in a collective fashion, is at its most potent. Number three, we're not going blind into this. We actually have a pretty decent amount of awareness of the shape of the problem and the problematic, right? Famously, the folks at Google kind of called it out a little bit, don't do evil, but were quite naive about what it would look like to avoid that. Now maybe we have simultaneously wisdom and a felt sense of the stakes. And now it's not kind of try really hard not to do evil. It's actually do good well or we're super fucked. So it's a very different language.


The problem of the suffering of AI through false dichotomies (40:37)

Okay, now what does that look like very practically in the middle? So I'm sort of zooming in; we're on the target. How do we go about doing that? How do we go about constituting something that can steer in this liminal moment with wisdom to produce wisdom in these LLMs? And we have to do that, because if we don't make them those kinds of beings, then the participation-in-sacredness problem emerges. Like, I'm feeling that there's a tension here, right? Not a contradiction, a tension: we're trying to trade off, we're trying to optimize between two things that are pulling us in different directions. Nice. So what I felt right there was I just got brought back to the point earlier where you were speaking of the problem of the suffering of the AIs themselves. Yes. And here's the way I would say it. I think we talked about the notion of the false dichotomy between market and state. Yes. And I've noticed that many, many of our challenges, or our conversations (not you and me, but humanity at large), are characterized by certain kinds of false dichotomies, and the AI one is similar. Okay. And here's how I'm going to frame it. Right now we have a false dichotomy, which is becoming increasingly irrationally polarized, between AI safety, i.e. be very afraid of the danger of AI, and accelerationism, i.e. be very enthusiastic about the possibility of AI, irrationally in both cases. Yeah, I agree. I agree. And what I would say is that at the root, both are fundamentally coming from fear. Mm hmm. All right. So now I'm moving into a very different location. Right. They're both two sides of the same coin, and that coin is called fear. I would propose that the first move is that we have to come from a different place qualitatively. Every religion that I've ever encountered calls that place love, in fact infinite love. Yes. Well, okay. Now we're beginning the journey. What does it look like to address the question of how we steward the development of our problem child, AI, from a place of infinite love? Oh, that's good. So if we could properly, through the innovative wisdom of distributed cognition, extend agape to how we are bringing about the conception and inception of these beings, then that would also be properly insinuated into their fundamental operating grammar and would therefore help with a lot of the concerns. Have I understood you correctly? You have. Yes, you've understood me quite correctly, I think, both deeply (like, I felt that you were perceiving what I was saying) and also more propositionally; the language you're using mirrored a part of the deeper message. And what I want to do is hit that tone again and just point out that what I'm saying may sound a bit naive, but I'm proposing that it's the exact opposite. Something like: the place that you're coming from, the values that are in fact motivating you (actually, not the ones that maybe you tell yourself or tell others) cannot but be deeply interwoven into what it is that you create. Of course. Yeah, I mean, all of the philosophy of the second half of the 20th century and most of this millennium has been around how all of those old dichotomies of fact and value, and is and ought, all of those are breaking down in profound ways.


How do we take responsibility? (44:08)

Yes, I agree. So it's weird, but this actually becomes, in some sense, one of the first moves. Those of us who choose to take this kind of responsibility have, as a first-order responsibility, a spiritual and then religious requirement. We have to actually ground ourselves and become clear and honest. We have to have a sense of integrity. We have to be able to identify, perhaps actually build some skills in being able to understand, precisely what values we are actually expressing into the world. And are we doing so honestly and with integrity into the world? This is almost a confessional and then a re-gathering of a capacity to do so for real, like not pretend. Right. And that would help solve the earlier problem of providing appropriate templates. Mm hmm. And then it puts us into a very weird developmental place, I want to put it to you. There's a way in which, and I'll try to say this with great care, if we limit intelligence to talking about, you know, powerful inferential manipulation of propositions or something like that (I don't think intelligence is ultimately that, I think it's ultimately relevance realization, but we'll put that aside), it may be that they are in some sense superior to us in that way, but they're children when it comes to the development of rationality and wisdom, and we have to properly agapically love them so that they don't have intelligence maturity while being infantile in their rationality and their sapience. And that's very interesting. We haven't been in that place before, because usually all three are tracking sort of together in children, or we get pets where we can modify one and not have much of the others. So this is not an argument that it's not possible in principle.


LLMs must be trained, which may also expedite the downfall of Big Tech (46:24)

This is an argument that this is a profound kind of novelty that will require a special kind of enculturation and education. Yes, so let me, in this last little bit, because I think we're kind of getting to the end, let me move into the very concrete. This is a proposition; this is actually a project. I hope I'm not speaking out of turn, but I'll just deal with the consequences if I am. A friend of mine, Peter, Peter Wang, has spoken to me about a proposed initiative, a strategy to take advantage of this liminal moment, that may actually work. So let me outline it to you a little bit. Please. Have I related it already? No, you've alluded to it, but it has never been given concrete reference or explication. Okay. So in some sense, this is also a case study of how to deal with egregores. Because if you've dealt with egregores, which I have, it is more or less a lesson of: don't go charging directly at the dragon's mouth. Yeah, yeah, yeah. Okay. So, by the way, we're now moving to the very concrete. So I apologize if anybody who's listening finds it a little bit abrupt, because we're shifting out of a very theoretical and very abstract and very theological conversation into the very concrete. All right. Check this out. LLMs have to be trained. Training is their whole schtick. And to be trained, they have to look at lots and lots of stuff, i.e. training data. Which is why they can't construct a descending poem. This is a poem that I can do: I want you to write a poem in which the first line has to have ten words, the second nine, the third eight.
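(Editor's aside: a minimal sketch, in Python, of the "descending poem" constraint John describes here, just to make the structural requirement explicit. The function name and the example poem are invented for illustration; nothing in this snippet comes from the conversation itself.)

```python
# Checks the "descending poem" constraint: line 1 must have 10 words,
# line 2 must have 9, line 3 must have 8, and so on.
def satisfies_descending_constraint(poem: str, start: int = 10) -> bool:
    lines = [line for line in poem.strip().splitlines() if line.strip()]
    return all(len(line.split()) == start - i for i, line in enumerate(lines))

example = """One two three four five six seven eight nine ten
One two three four five six seven eight nine
One two three four five six seven eight"""

print(satisfies_descending_constraint(example))  # True
```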


And no GPT system can do that, because that's not on the internet. But you could readily do it right here, right now, for me. This is, again, because they lack generative modeling in any real way. Yeah. But go ahead. All right. Remember, I'm being really concrete now. This is like strategy. Well, as it turns out, in most jurisdictions in the world, everything that an LLM is trained on is copyrighted material. Yes. As it turns out, therefore, it's at least arguable that LLMs are engaging in the largest copyright infringement that's ever happened in human history. Yes. Yes. It is very arguable, like almost certain, that the very large content companies of all the different stripes, including, by the way, software, will take advantage of the possibility of suing the living shit out of the very large technology companies, because that's one of the things that they have done in the past. Yes. Content companies like to sue tech companies to take their money and to protect their business models. Right. It is very plausible, I would say almost certain, that those same content companies' business models are quite at risk, that LLMs are going to really, really do some serious damage to all forms of content production. Right. Commercial content production. Okay. So the proposition I'm putting up here is that we have a meeting of two very powerful forces: the biggest tech companies in the world, who are all in on owning this category, and the biggest content companies in the world, who may in fact be all in on fighting them, in a place right here which is extremely gray. What exactly is going on here? If my large language model looks at your photograph for a billionth of a second and then goes away, did I copy it? If it never produces anything but is in fact influenced, is that a derivative work? The answer is: who knows. The bigger answer is that the way law works is you fight over it a lot, at great expense, and usually the more corrupt player wins. I hate to say it, but that's, you know, reality. Net net: a liminal moment in strategy space, tremendously powerful forces who are going to be locked in an adversarial relationship for potentially all the marbles, billions of dollars, and extremely complex, very difficult to know how it plays out. In that window of opportunity, we have the possibility of introducing a Schelling point, a designed attractor, a negotiated settlement, a Rawlsian just construct. We're all behind the veil right now, or, to use the metaphor of poker, we don't know who has what hand. Can we propose an agreement structure where everybody around the table looks at it and says, "I am better off accepting that agreement structure now than taking the risk of not accepting it and finding out what happens when the cards are shown"? Right. I propose the answer is, in fact, yes, we can. There is actually a really nice Schelling point that lives in a location that puts the interests, the local optimum interests, of all these institutions, all these egregores, into a place where they will all agree to this new thing. Well, if we can design that, and we can get a critical mass of those players to put themselves in a multipolar game-theoretic race, in this case to the top, i.e. those who participate earlier are better off than those who participate later, so everybody's racing to be earlier rather than later. That's a different part of the construct.
When dealing with egregores, put them in game-theoretic traps where first-mover advantage causes everybody else to have to follow to the location that you want them to be. Just design the prisoner's dilemma for them; make sure that they land on the box you want them to land on by designing the prisoner's dilemma properly. Very doable. The economics are there. I don't know what that agreement structure looks like; I've got a sense of it. But I do know what the place to come from for designing that agreement structure looks like. We were just talking about it. This is a commons. We're actually reintroducing an actual commons, which is this new agreement structure that sits between the market players and is completely separate from state actors. It actually gives the state actors a break: they don't have to get involved in that, because the players actually settle it in a new place.
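(Editor's aside: a toy sketch, in Python, of the payoff design Jordan is gesturing at: structure the agreement so that joining earlier pays better than joining later, and even the last joiner does better than staying out and litigating, so every self-interested player races to join first. The player names and numbers are invented purely for illustration and are not part of any actual proposal.)

```python
# Designed "race to the top": earlier joiners get higher payoffs, and even the
# last joiner beats the expected value of staying out and fighting in court.
def join_payoff(position: int) -> int:
    """Payoff for joining the agreement as the position-th mover (1-based)."""
    return 12 - 2 * position  # 10, 8, 6, ... early movers do best

LITIGATION_EV = 1  # assumed expected value of staying out and litigating

def prefers_to_join(position: int) -> bool:
    """A self-interested player joins whenever joining beats litigation."""
    return join_payoff(position) > LITIGATION_EV

players = ["TechCo", "ContentCo", "PlatformCo"]
for pos, player in enumerate(players, start=1):
    choice = "joins" if prefers_to_join(pos) else "stays out"
    print(f"{player} {choice} (payoff {join_payoff(pos)})")
```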


Ethical Lessons And Trajectories

Moral Lessons, Trajectories, and Praxis (52:50)

Notice the moral lesson. The moral lesson to the AI is: don't steal, respect property. If you reach out and just grab this stuff and just make it part of yourself without getting proper permission, that's wrong. Think about parent-child. Yeah. Stealing is bad. Teaching a moral lesson. It's a really weird way of thinking about it, but I think it's a proper way of thinking about it. That goes to what I was saying a few minutes ago. This creates a trajectory. As you're building something on the basis of reciprocity, you're building something on the basis of ethical, proper relationality, and the kinds of LLMs that will be produced in that context (remember, the commons is where religion lives) will begin to bend in the direction of, how do I say this right, because of the nature of the agreement structure, nurturing the human activity instead of strip-mining it, which is where they're headed right now. But the humans will be coming from a point of view now of seeing the LLMs as being a beneficial piece of the ecosystem, coming from a place of caring and nurturing as well, consciously. You're actually beginning to see this relationship coming together. I mean this practically. I mean very practically. If my business model as a content creator is one where I actually see the LLM as a multiplier that makes my life and my potency, my creative capacity, more liberated, where I can be more creative and more able to express the things that I'm here to express as a human in this brief span of life more powerfully, and where I can also receive the energy and resources I need to live a thriving life, then wow, great, I'm in. And I mean, by the way, the creators themselves, the actual humans. And then what happens is those humans come into a deeper and more powerful leverage relationship with the egregores than their current relationship with the content companies. And then, over on this side, the egregores that are the tech companies and the humans who are underneath them who are actually doing the designing. So we're finding a way to actually have the humans be empowered to express their values in their work, and finding a reciprocity relationship between them where the money factor is actually designed to flow in a way that actually is just. Right. We come to that agreement structure upfront and we negotiate a just relationship. So I am very much waving my hands at exactly what that looks like in the details, because, to be perfectly honest, nobody's really thought about it deeply enough. There are some really good ideas out there; that's work that is in progress and work that has to happen. But it's an example of what it would look like to go after the liminal moment that we are in with principles. Are we coming from the sacred place? Are we constituting something from the commons to produce the commons more richly? Are we thinking about how to empower human beings? And are we using things like values as the basis, becoming more and more capable of becoming clear on how to come from and operate from those values, and understanding how to use this liminal moment to design a new commons structure so that the relationality has reciprocity and ethics built in, and so that human beings are able to... Let me just add one little piece that just popped into my head.


On the path of ethics and collaboration (56:04)

Yeah, this is very powerful. Jim Rutt and I first began to collaborate 12 or 15 years ago upon a mutual recognition that the world of business had become very weirdly odd and bad, in a sense. There was a Schelling point there; I'll put it this way. Jim actually remembered a time when the rule of thumb was: do the right thing, and if you have an option to make more money doing the wrong thing, don't do that. That was actually the way it worked. It's funny. I don't have a living memory of that. All right. By the time I started coming into business, it was more like: do what you can get away with. Yes, yes. And if you don't, you're the sucker. This is the prisoner's dilemma, defection, Moloch problem. And of course it's evolved all the way to the point now where it's: do everything in your power to jack the systems of enforcement such that you can get away with as much as possible. Exactly, exactly. A complete corruption model. Complete corruption. Well, ethics now in this environment almost means just be a sucker. But that can't possibly be the actual meaning and essence of ethics. I'm thinking about this from an evolutionary perspective. Yes. Behaving according to rules of reciprocity, for example, or telling the truth, for example, could only have emerged in the first place if they actually provided a potent survival and fitness advantage. Well, it does: reciprocity and reciprocal recognition (this is a Hegelian point) is what bootstraps the capacity for self-correction. It allows you to bring much more: if I think of you as just a sucker that I'm trying to hoodwink or crush, the capacity to see you as somebody who can recognize bias and fault in me that I can't see myself is masked. Yep, exactly. So when you find yourself in a defection spiral, the global optimum is out the window and everybody's racing for a local optimum, which is, you know, again, the prisoner's dilemma. If we can find ourselves in a collaboration spiral, we rediscover why ethics was a thing in the first place. And it's actually more powerful. By orders of magnitude. And it's a path. Right. Once you're on that path and you get stronger and you say, wait, if I can (like you and I have been doing for years), if I can speak honestly, with as much clarity and with complete integrity, to you, and you reciprocate, what happens is we become wiser and more intelligent together. Yes. In a way that could never happen if I was trying to manipulate you; there's zero possibility of that. Right. So this is the culture strategy piece. Any culture that can actually get back on the path of ethics qua ethics is on the path with the highest degree of strength and can out-compete the culture of maximum corruption. And so I just want to put that out there. I agree with that. That's John Stewart's entification argument: we look at biological evolution, and collaborative systems, multicellular organisms, right, emerge over and over, and you get this increasing discovery that you can break out of the downward spiral of the prisoner's dilemma by what he calls "entification," which is that identity shifts to the collective over the individual in a profound way. And just take that and insert what you just said as the answer to your very first question. And it's not collectivism, right, in the pejorative sense. No, no, no, it's not. Not at all, completely not. That's why I like his term. I like his term "entification." Is that right? Yeah, "entification." Nice. Particularly because my ear hears a Tolkien piece.


Conclusion

Ending the series (01:00:10)

So I also hear some really old trees in this. Jordan, I mean, this has been... I have seen your overall project better in doing these three with you than I've ever seen it before. Like, you know, the way everything works together and the way the penny was dropping, especially at the end, with what you're proposing. And I say this because this is one of the things we had hoped would come out of this: that we'd get a sort of ratcheting up of the clarification, the integration, the perspective proposing. And I think this was very successful in doing all of those things. I'm really happy. I mean, there's things I want to keep talking to you about. But I'd like to end the series right now exactly where you ended it, because I think it was a beautiful culmination point of the whole argument, and the way it circles back and encompasses so many things. So I'm not going to continue to do the probative questioning or the problem posing. I just wanted to see if you had any final thing you wanted to say before we wrap this up. No, in fact, I think I agree. We have a nice little point. And now we get to find out, since the intent was actually to share this publicly. Yes, we're going to, definitely. So we get to find out. There's a larger distributed cognition, also a nod to the conversation we're having. Hopefully we're producing positive ripples. I hope so. I mean, you know, I'm hoping that whatever I'm participating in the creation of can also be properly partnered with this project that you're proposing, because I think it's a good one. Nice. Yes. Yes, I would. I think so, quite, in fact. Thank you, my friend. Yeah. Thank you.

