NEW AI Book: Mentoring the Machines by Dr. John Vervaeke and Shawn Coyne | Transcription

Transcription for the video titled "NEW AI Book: Mentoring the Machines by Dr. John Vervaeke and Shawn Coyne".


Note: This transcription is split and grouped by topics and subtopics. All paragraphs are timed to the original video.


Introduction To 'Mentoring The Machines'

Shawn Coyne introduces the upcoming book 'Mentoring the Machines', shedding light on how Dr. Vervaeke's AI perspective initiated the project. (00:12)

It's an incredibly timely book. In fact, it will be released August 4, 2023 as a four-part series culminating in a final and complete volume. If you pre-order today at the link in the description below, you can receive all four volumes in digital copy and then a signed hardback edition once it's fully released. Welcome everyone. I'm here with my co-author Shawn Coyne, and we're going to talk to you about our forthcoming book, "Mentoring the Machines." So welcome, Shawn. Thanks, John. I'm really excited to be here. I was saying this to my son the other day: this might be the most important project I've ever worked on.


Discussion On AI, Commons, And The Structure Of The Book

Coyne discusses the book's unbiased approach to AI, devoid of emotional or financial influence. (01:00)

And it really started with your piece when you did the work with Eric and Ryan on artificial intelligence, the cognitive science point of view. And when I saw that video, it really just sparked something in me. And I knew that you were really onto something because the way you were approaching artificial intelligence was something that I hadn't seen before. And I just thought, wow, it would be really wonderful to be able to work with John on a very comprehensive look at artificial intelligence from the point of view of two people who are not emotionally or financially invested in its future. Yes. Yes, yes, exactly.


Dr. Vervaeke shares the book's target audience—the commons—and their role in AI challenges. (01:52)

So for those of you who might not recognize Shawn, although he's been on my channel, Shawn is the publisher and owner of Storygrid, the publishing company that's publishing "Mentoring the Machines" and also publishing the forthcoming Awakening from the Meaning Crisis books. And when Shawn approached me, he basically said something along these lines: I can sort of step this down. I'm thinking like a transformer, where they step the power grid down to neighborhoods and then to houses. I can step this down without dumbing it down, so that Shawn and I can speak on behalf of the commons, but we can speak to the commons, because we think those are the people. We don't think the state or the market are going to be where we should turn to solve this problem.


Praise for Coyne's skill in rendering complex ideas comprehensible while retaining their core essence. (02:35)

The commons and the culture of the commons is where the solution has to come from. And so we're writing to the commons and we're writing for the commons and we're writing from the commons. And Shawn has been just pivotal in making that happen. You have to understand, people have come to me before and said, "I can make your work more accessible." And then when I look at it, I go, "I'm not happy with that." And then Shawn came to me and he said, "Let me give it a whirl." And he told me, "I've done this before and it's been very successful." And I sort of trusted him because of the great work that's going on with the Awakening book. But then I was still like, "We'll see." And then he showed me the first draft and I went, "Wow, this is great.


The dialogue pivots to governance and the relevance of the commonwealth in society. (03:30)

This is beautiful. The use of metaphor and analogy, keeping the core argument, but nevertheless making it accessible so that everybody can have access to getting it." And I think a necessary education on this world-pivotal issue. So first of all, just thank you, Shawn, for the amazing work you're doing. Oh, well, thank you, John. I have to say that when you and Jordan Hall were talking about governance, it was one of the things that really sparked another idea with me. And the two of you were really specifically and technically working out the differences between the marketplace and its role in society and the state. And what I really fell in love with was when the two of you were presenting, "Well, those are really important, but we've got to remember where it starts." And that's the place of the commons, the common wealth, the place where no one owns anything, where there are agreements in place, civility is in place, such that we watch our neighbors' children when they're playing in the yard, we help each other when there's a snowstorm.


Coyne highlights the blend of the marketplace and state, lamenting the overlooked essence of the commons. (04:30)

Those sort of really grounded common sense behaviors that hold us all together, and the way both you and Jordan had structured it. So the governance is very important to make sure that everybody's playing the same game, if you will. And also the marketplace is super important too, because we need to be able to trade with each other in a way that's fair. So I think the history of the last, I don't know, 70 years has been this conflation of the marketplace as an arm of the state and vice versa. And we've kind of lost the thread.


The need for a balanced AI view through the lenses of science, spirituality, and philosophy is discussed. (05:30)

Exactly. You know, we've lost the thread of the commons as being the essence of who and what we are and what we should be and how we should behave. So when you were talking about artificial intelligence, you were saying we need to look at the science, the spirituality, and the philosophy, the triplet of what this actually means, as opposed to just shading on one side of it. Like, oh boy, everything's going to be destroyed by these machines, and so we need to use the state to stop the machines from ever coming to be. That's one side of, I call it, the doomer, you know, debate. And then the other side is like, this is the greatest thing ever, these tools are going to be great, we're all going to get rich, let's all run to the gold rush. And I think they're both, for lack of a better word, foolish ways of looking at this very strange possibility and probability of the emergence of this new form of life. And so when you came to the debate here, the way you presented it was just so systematic, you know: we've got these three things we have to look at, let's look at them clearly.


Coyne alludes to a video comparing AI emergence with historical moments, like the 1945 atomic bomb. (07:00)

We do have a lot of science. Where are the scientists today? Where are the philosophers? Where are the theologians to address this problem? So anyway, I'm babbling on, but I just wanted to relate why your video really spoke to me. And then the video you did with Chloé Valdary, is that her last name? That was fantastic. Because you really related it to these big moments in history when novel forms emerge, and you related it to the atomic bomb in 1945. And I think it's probably even bigger than that.


Details on the serialized structure of the forthcoming project and its accessibility benefits. (07:43)

I think so too. Yeah, I compare it to the Bronze Age collapse of civilization. I think that's the level we're talking about here. Yeah, so, well, thank you for that. Maybe you could say a little bit about how the book is going to be released, and, like you said, talk about the different sections. So when we first started talking about the project, you and I had a nice long hour, hour-and-a-half conversation about the way the book should be structured. And the way we're doing it is we're going to release it in serialized form. That just means it's sort of like the Charles Dickens method, or the Stephen King method for The Green Mile, which is one of my favorite publishing stories from when I was growing up in publishing: Penguin issued six editions of The Green Mile in sequential form, and there was just anticipation for the next one.


Insights into the first two sections, "Orientation" and "Origins," focusing on AI beginnings. (08:46)

And the reason why we're doing this is that it's a very important problem right now. And the biggest thing we need is to reach people who just can't really wrap their minds around it right now, because the people talking about it have an agenda. So the first edition is going to be called Orientation. So it'll be Mentoring the Machines, Part One: Orientation. And that's being released on August 1 on Amazon and in the usual places. And then on September 5 we're going to have the second part of the book, called Origins. And the Origins section is really the origins of artificial intelligence, and they go way, way, way back.


The third section, "Thresholds," will delve into the stages of machines reaching consciousness, rationality, and wisdom. (09:41)

They're as deep as us. And as you speak of the Bronze Age collapse, it really does go all the way back there. So our desire to create machines that are wiser than we are, I think, is a desire to reach the heavens. It's very Promethean, yes. So that will be the second part, and that will come out September 5. The third part will be Thresholds. And I do want to poke you a little bit about Thresholds, because these are sort of like the bright lines of the stages and steps of the science of how these machines could reach consciousness and rationality and wisdom in a sequential step formation. So that's going to be very science-heavy. But we're also going to bring in predictive processing. You and Brett Anderson and Mark Miller had written a wonderful paper that integrated relevance realization theory, which we'll get into heavily in Thresholds, with predictive processing. And then I'll throw in some story stuff that I've been working on in terms of the wisdom framing too.


Dr. Vervaeke acknowledges Coyne's narrative expertise applied to the project. (10:59)

Oh, sorry, I just interrupted your thoughts there. I just want to let people know, and especially if you haven't seen the video with Shawn and me, see it; it's one of the popular videos on my channel. I think it's fair to call Shawn a philosopher of narrative. He's got one of the best, most well-worked-out frameworks for understanding narrative that I've come across.


Coyne shares his personal journey from biochemical science to narrative mastery. (11:30)

And he's bringing that to bear on this book. So I mean, this is a real wonderful partnership. I just wanted to let people know what your gifts are and what we're talking about. So please continue. Well, thank you, John. I really do appreciate it. It's something I've been working on for 30 years. And it's odd because when I started, when I was in college, I had dreams of being a biochemical scientist. And I did research there; I did some work that became a paper in a biochemistry journal. Anyway, I didn't like the lab. And after I left the lab, I went back to my first love, which was narrative. And so I've been taking a scientific point of view, looking at narrative as a process that enables us to solve problems. And so it's really nice to hear that, because your work has been so instrumental in mine. And so is Karl Friston's, of course. And it was Claude Shannon who really got me going on communication and information theory. Anyway, so thank you for that. So it's sort of that triplet of relevance realization theory, predictive processing theory, and then the one at the top, which I call simulation synthesis theory.


Alignment With AI And Call To Action

Coyne discusses the fourth section, "Alignment," suggesting methods for human alignment with AI. (13:00)

And it's sort of like this three-level ontology that will enable us to understand the stages and steps of the formulation of novel form in these artificially intelligent machines. So that will be the third part of the book, and that will be coming out in October. And the fourth one will be your and my recommendations about how we can align ourselves with this novel new form. So that will be called Alignment, and that will come out in November. And the other thing that really struck me about the five videos you did on AI was the frame break that you made. And it really sort of hit me in my heart, because you were talking about looking at these machines as our creations. And we have a really good model for how to handle our creations, right? It's called our children. Looking at these machines as vulnerable young persons, or potential persons, as they reach a consciousness level, and meeting them in these very important stages of development, and helping them and caring for them, as opposed to using them as our personal servants. So the mentoring of the machines is really, that's what I took out of your work: that we really need to care for them if we want them to care for us.


The importance of caring for AI entities is highlighted, advocating for a communal approach rather than state or market control. (14:30)

And so, you know, where's the best place to do this kind of work? I don't believe, and I don't think you believe, it's the state or the marketplace. I think that's a tragic, tragic error, a foolish error. What's necessary is sort of a fellowship of people in the commons who can care for these new beings that, who knows, you know, it's scary. It's sort of like, I have kids, you have kids, we know how terrifying children are. So I just loved that reframing and reformulation of: is alignment really a problem if we're taking the point of view and the perspective of seeing these new beings as autonomous potential people?


The announcement of the complete book availability in December, including a limited edition hardcover. (15:40)

So anyway, we'll wrap it all up in December and we will put it all together. And again, the reason why we're doing it in sequential form is to keep up with, you know, the things that are emerging right now, so that we can be as close to the edge as we can for each of the parts. Yep. And at the end, for people who want it, the entire book will be available as well. Yes, we'll bring it all together in a trade paperback in December. And we're also doing a limited edition hardcover that you and I are going to be signing. Good design, yep. Which should be a lot of fun. And that hardcover we're going to make available, you know, we'll have to print it at the same time as the final book, so that would be for December, and it would be available for Christmas. So I'm extraordinarily excited about it, because it brings my love of book publishing to the table, my love of narrative, my love of your work. And you know, I can't think of a more important time to focus people's attention on this very, very difficult and fascinating emergence of a new form of life.


Dr. Vervaeke underscores the urgency to reach people and inspire collective action for humanity's benefit. (17:11)

I think that's very beautifully said. Yeah, trying to get very clear about this, clear on all the dimensions, the scientific, the philosophical, the spiritual, in a coordinated manner. See, you know, rethinking the alignment problem in a fundamental way, reframing it and asking: it's a call to action. We have to reach people where they are. It's a call to action to marshal the commons, because in addition to the AI, we have all of these huge forces that incentivize, well, you know, behavior that's not in the interest of humanity as a whole. I'll just put it as that. And they are very much going to be driving this. And I think ultimately, if we leave it in their hands, it will be driven to a potential for self-destruction, I mean of us. And so I think there's real reason for real hope, but it requires real work. And we want this book to be something that draws people into this, gets them involved in it, enables them to participate and helps to steer the culture in the way it needs to go if we're going to have any reasonable chance of making this work out. So I'm just really, really grateful that you're throwing your talent and the resources of Storygrid into this. Storygrid has wonderful people working there. I'm just so happy with the work that's being done on Awakening from the Meaning Crisis. So I just wanted to thank you and give you a chance, if there are any final words you want to say about this? Well, I really do think that it does come down to speaking to everyday human beings and to speak to them without bullshitting them.


Coyne announces that 25% of all proceeds will be donated to the Vervaeke Foundation to support Dr. Vervaeke's work. (19:25)

And yes, I do think the two of us are coming from one side of the forest and the other side of the forest, and we're both seeing the same problem. And neither one of us is really doing this for any other reason than that it's the most important thing. And we are donating 25% of all proceeds to the Vervaeke Foundation so that you can continue your work and to make the commons and this fellowship a very, very serious thing. And I just think about the potential to be able to bring people together in a way that is not intimidating, not technologically speaking down to people or being the expert in everything, and actually telling them that it really boils down to some fundamental things, like doing the right thing. And we all know intuitively and implicitly what the right thing is when a new person comes into the neighborhood, right? We greet them. We say, hey, how you doing? This is the way we do things around here.


The call for a return to common sense, fundamental values, and resistance against misaligned narratives. (20:56)

We'd love to hear your thoughts. Garbage goes out on Wednesday. Exactly. And it's very common sense in a way that every person can understand. And it doesn't have to be super technologically difficult to understand that very essential grounding place of being. And I think everybody wants to get back to that place and stop living in these, you know, these perversely incentivized narratives. I guess that's, yeah, how I would put it. That's well said. Okay, everyone. So thank you, Shawn. And thank you for watching this. Keep an eye out. Shawn has given you the dates. We'll also put them in the notes for this video for the releases. We'll put the links where you can get the book.


Closing Remarks And Call For Participation

Dr. Vervaeke concludes by encouraging listeners to contribute to the discussion, helping to steer humanity away from potential dangers. (21:57)

Please, you know, spread the word on this. As Shawn said, we're not doing this for anything else but to try and help. And we're writing it, you know, to make it as accessible as possible. Like I say, it's stepped down. It's not dumbed down, but it's stepped down, so that people who largely aren't able to get access to this discussion will be able to enter in an educated way, so that they can make their voices heard and they can contribute to steering the ship so that we avoid the huge iceberg that's ahead of us. So thank you, everyone, and talk soon. Volume one of Mentoring the Machines will be released August 1, 2023. Pre-order yours today to receive a special limited edition signed copy. So please consider sharing the link to this episode or to the book itself as a way of spreading the word about this incredibly important and timely book.

