22. Emergence and Complexity

Transcription for the video titled "22. Emergence and Complexity".


Note: This transcription is split and grouped by topics and subtopics, and all paragraphs are timed to the original video (e.g., 01:53).


Intro (00:00)

Stanford University. >> Okay, so we will pick up on the one topic that was not covered from two days ago, because you guys needed to go play around with those cellular automata programs. So I will work with the assumption that everybody here has now spent 48 hours playing with those, but presumably, because of the sleep deprivation, you have forgotten much of it by now, so we will cover some of it.

Complexity And Systems Science

Butterfly Effects (00:35)

Okay, back to that issue of fractals and butterfly effects. And that whole business that when you look at chaotic systems, ones that are deterministic but aperiodic, where the lines seem to be crossing, getting back into the same spot, look closely enough and they're not actually touching. And the centerpiece of why that matters was that whole business of: these two both appear to be the same, but take them out a gazillion decimal places and they're actually different. And the entire rationale for thinking in that way is the notion that a very small difference here can make a difference one step over. And a million decimal places out, a small difference will make a difference just the same. In a scale-free way, a fractal way, a difference a million decimal places out is just as likely to have consequences one step over as this one here. Fractal, scale-free, all of that. But the critical thing that is encompassed in this is the notion that tiny little differences can have consequences that magnify, and magnify and amplify into a butterfly effect.
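As a rough numerical sketch of what "tiny differences magnify" means, here is a toy simulation. The logistic map is my own illustrative choice, not something from the lecture: two starting values differing in the ninth decimal place track each other for a while and then diverge completely.

```python
# Toy illustration of a butterfly effect (my example, not the lecture's):
# iterate the chaotic logistic map x -> r*x*(1-x) from two nearly
# identical seeds and watch the difference amplify.
def logistic_trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)  # differs in the 9th decimal place

# Early on the two trajectories are indistinguishable...
print(abs(a[5] - b[5]))    # still a microscopic difference
# ...but far enough out, that difference has amplified to the scale
# of the whole system.
print(abs(a[50] - b[50]))
```

The same deterministic rule, applied to starting states that agree to eight decimal places, ends up producing completely different trajectories.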

Emergent Patterns (02:00)

So cellular automata are a great way of seeing this principle, along with a number of others that are relevant to all of this. Okay, so we start off with the very first one, and this is the one where you no doubt first discovered a pattern, which made you deeply happy, if you followed the rules. Which way is this facing? Okay, so starting at the bottom. And what you see is these very simple rules, and out of them emerges a whole complex pattern. And we'll be seeing shortly the features of this that perfectly match the requirements for emergent complexity. What we'll see as the elements are lots of constituents, lots of building blocks. Building blocks being very simple: they're binary, either they are filled or not filled. Extremely simple rules as to how the next generation gets formed. And the extremely simple rules have nothing to do with anything other than the immediate neighbors: it is all local rules, built around what the neighborhood is like for each one of these. So we put it together and out come these very structured patterns like these.

And this is great, this is very exciting, except this isn't what you usually get with most of these cellular automata systems, where you start off with an initial condition and a simple set of local neighbor rules for how you get reproduction into the next generations. In most cases, the patterns stop after a while. In the vast majority, they stop, they hit a wall, they go extinct. Aha, two terms that I've already stuck in here that are biological metaphors, and they start to seem less metaphorical after a while. First off, the notion that going from here to here to here represents each next generation. And the notion, as we just saw, that the vast majority of these cellular automata systems go extinct. They fail after a while. So it's a very small subset that takes off.
What you then also see, and in some ways this is the critical point in this whole business, is that the relatively small number of starting states that succeed produce a remarkably small number of mature states that all look very similar to each other. In other words, you can start with a whole bunch of different conditions, and you will wind up with a much smaller number of stereotypical patterns. Half of the cellular automata that wind up taking off look something like this, with this pattern. What are we seeing? Convergence. Convergence, the notion that you can start with different forms and they will converge over time. What does this prove? You look at the mature form and you can't know the starting state. The other thing is, starting at the beginning, just looking at this first line, there is no way you can tell what it's going to look like 20 generations from now. You've got to march through it. In other words, the starting state gives you no predictive power about the mature state.
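A minimal version of such a system can be written in a few lines. The specific rule below is Wolfram's elementary Rule 90, an assumption on my part; the lecture doesn't name the rule, but Rule 90 is the classic one that grows a structured triangular pattern from a single filled cell using purely local, binary rules.

```python
# A minimal one-dimensional cellular automaton. Each cell is binary
# (filled or not), and each cell's next state depends ONLY on itself
# and its two immediate neighbors -- purely local rules.
def step(cells, rule=90):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right   # neighborhood as 3 bits
        out.append((rule >> idx) & 1)               # look up next state
    return out

def run(cells, generations):
    history = [cells]
    for _ in range(generations):
        history.append(step(history[-1]))
    return history

# Single filled cell in the middle -> a structured triangular pattern.
width = 31
start = [0] * width
start[width // 2] = 1
for row in run(start, 15):
    print(''.join('#' if c else '.' for c in row))
```

Nothing in the rule refers to the global pattern, yet a highly structured, self-similar triangle emerges generation by generation.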

Cellular Automata (05:32)

This is a nonlinear system. The cellular automata encapsulate this business that most of these go extinct. Only a relatively small number of mature forms exist. They show convergence: very different starting states can converge into the same sorts of patterns. Minor differences in the starting state can expand into very different consequences. They show, in other words, butterfly effects.

Appreciating this a bit, we then went to example number two, where we changed the starting state just a little bit here; we shifted around some of the boxes. What you see is something that looks roughly the same. It's not exactly the same, but it's got the same general feel to it. That's great. Then we started on an exercise of starting off with the initial boxes evenly spaced, with one space between them. Apply the rules from there, and this is what you get: totally boring, static, inorganic, inanimate. This is what it does for the rest of time. What this exercise then did, going into number four, was ask: what if we now space two boxes between each one of these? Here we have an extinction. This is one of those where it hits a wall, and all the next lines are empty. How about three boxes in between the starting states? Another form of extinction. How about four boxes between the starting states? Suddenly, something very dynamic takes off. Applying the same rules, and all you've done is change the spacing between the starting states. Look at, for one thing, how close this was to going extinct up there on top, how asymmetrical the pattern is that comes out. This particular one will stay asymmetrical forever. Look at the ways in which that generated something very unexpected. There is no way you could sit there a priori and say, "Hmm, one box in between generates something that looks inanimate. Two boxes, not going to work. Three boxes, not going to work. Somewhere around four boxes in between, that's when a dynamic system suddenly takes off."

There is no way to have known that beforehand without marching through this and actually seeing. The starting state tells you nothing about the mature state. Then we space it even further, and what we get is something similar. Again, this one is symmetrical.
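The spacing experiment can be mimicked in code. This is a sketch under an assumption: the lecture doesn't say which rule the exercise used, so I use Rule 90 again, where the same qualitative effect shows up. Some spacings between the starting cells go extinct, others survive, and you cannot tell which without marching through the generations.

```python
# Same local rule (Rule 90, an illustrative assumption), different
# spacing between the starting cells, very different fates.
def rule90_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def fate(spacing, width=60, generations=40):
    # one filled cell, then `spacing` empty cells, repeated across the row
    cells = [1 if i % (spacing + 1) == 0 else 0 for i in range(width)]
    for _ in range(generations):
        cells = rule90_step(cells)
        if not any(cells):
            return "extinct"        # hit a wall: all later lines empty
    return "still alive"

for s in range(1, 6):
    print(s, "spaces between cells ->", fate(s))
```

There is no obvious a-priori pattern to which spacings live and which die; you have to run it and see.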

Beginnings of Asymmetry (08:05)

It is somewhat different from the previous one, but it's the same sorts of patterns that come up over and over. What we've seen here is: minor differences in starting state, big divergence between going extinct versus being a viable pattern. Minor differences in starting state, big divergence between symmetrical and asymmetrical patterns. Tiny differences, butterfly effects. Next, looking at the consequences here of introducing some asymmetry from the very beginning. The one on top on the left has four boxes and four boxes; it has eight boxes. The one on top on the right just adds in one extra box on one side, so it's four and five, adding a little asymmetry, and what you see is a very different pattern. One of the things you tend to see in these pseudo-animate, living-pattern systems is that starting states with asymmetry produce more dynamic systems, more dynamic patterns, than symmetrical ones. That's one of the only rules that comes out of there. We're seeing now minor differences producing major consequences, divergences, butterfly effects. Now, showing this in a different way: what we've got here are four different starting-state conditions. The one on the far left is the one from the previous example, the four and four. Four different starting-state conditions, where the first one is not enormously related to the other three, but the other three have only minor differences among them. The amazing thing is, two of these are identical after the first 20 generations or so: this one and this one. The two of them are identical, and for the rest of the universe they will produce the same identical pattern. And looking at the mature state, you show up on the scene somewhere halfway down and you could never, ever know what the starting state was. Did it start like this, or did it start like this? A convergence here. In this case it's another one of those rules.
Knowing the starting state doesn't allow you to predict the mature form. Knowing the mature form, you don't know which particular starting state brought it about. The only way to figure it out is to stepwise go through the whole process, because you can't just iterate by a blueprint. There's no blueprint. Finally, the last one, instead of giving you different starting boxes in each case with the same reproduction rule, gave you the same starting pattern of boxes with slightly different reproductive rules. What you see here are totally different outcomes depending on which variant you have; the beloved one is on the top left. You see here, by slightly changing the nearest-neighbor rules, if and only if there's one neighbor with this property, if and only if there are two neighbors, and working through that way, you see remarkably divergent outcomes. The one thing you see is that the majority of them produce something very boring: either boring extinct, or boring repetitive in a very undynamic way. Only a small subset produce lively, animated, living-looking systems. We're seeing a whole bunch of biological metaphors here, over and over: you can't know the mature state from the starting state; there are very simple rules for going from one generation to the next. What we see also is that the vast majority either go extinct or produce some repetitive, very boring, crystallized type of structure. A small subset, a tiny subset, produce instead dynamic patterns, and knowing what the starting state is is not going to give you any predictability whatsoever as to whether it is going to produce a dynamic pattern or not. Nor does it allow you to look at a bunch of the starting states and say those two are going to produce the same mature pattern. These are all properties of the evolution of different living systems. You begin to see, in cellular automata, some of these principles.
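The last exercise, same starting state but slightly different reproduction rules, can be sketched the same way. The particular rules compared below are illustrative choices of mine, not the ones from the handout; the point is just that most rule variants give extinction or boring repetition, and only some give sustained dynamics.

```python
# Same single-cell starting state, slightly different local rules,
# very different fates. Rule numbers are my illustrative picks.
def step(cells, rule):
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1 for i in range(n)]

def classify(rule, width=63, generations=126):
    cells = [0] * width
    cells[width // 2] = 1          # identical starting state each time
    seen = set()
    for _ in range(generations):
        cells = step(cells, rule)
        if not any(cells):
            return "extinct"
        key = tuple(cells)
        if key in seen:
            return "repetitive"    # locked into a boring cycle
        seen.add(key)
    return "dynamic"               # still producing novel rows

for rule in (0, 4, 90, 110, 254):
    print(f"rule {rule:3d}: {classify(rule)}")
```

Nothing about the rule number tells you in advance which category you will land in; you have to march through the generations.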
The simplest level out there in the natural world is looking at all sorts of shells, sea shells and tortoise shells, by the sea shore and wherever else. They all have patterning on them that is derived from something like that first cellular automata rule, producing patterns that look a whole lot like these. Go online and look for them, because I didn't get around to it in time, but these rules produce all sorts of patterns in nature, very common ones. What does that tell you? Very simple rules generating the same complex patterns from different starting states: cellular automata properties here. Another thing in a living biological system begins to suggest this. I do this research in East Africa, and every now and then over the years I have gone to this mountain called Mount Kenya, which is on the equator. It is about 17,000 feet. It has got glaciers up on top; this is an equatorial glacial mountain. You go up to about the 15,000-foot zone and there is moorland. Almost everything is dead up there from the cold, and there are basically only like four or five different types of plants up there. Already, only a very small number survive in that environment, and each one of them is very bizarre and distinctive looking. There is one of them that looks like a little rosebud thing, except it is about five feet across. There is another one that has sort of a sprouty thing like this, and a big central cactus-looking thing that isn't really a cactus. There are a few of these really distinctive, bizarre-looking plants. In some way or other, that is what it takes to survive up there. I have this friend who does research up in the Andes, and he does botany stuff up there. He goes into this one range there that is on the equator and high enough that there are glaciers up there: a glacial equatorial mountain on the other side of the globe. One day I am sitting around and looking at some of his pictures there, and suddenly I look and say, that is the exact same plant.
That is the big rosebud plant, as on Mount Kenya. That is the tall sprouty one. I say, it is the exact same plant. How can that plant be over there? We go rummage around in his botany taxonomy stuff, and they are completely unrelated plants. They are taxonomically of no connection whatsoever, but what they have done is converged onto the same shape. In some mysterious way, if you are going to be a plant growing on the equator at about 15,000 feet, there are only about four or five different ways of appearing. There is massive convergence, and there are only four or five ways that you can survive an environment like that. You get organisms in very dry environments, and there are only four or five ways that you can go about being an organism that is super efficient at retaining water. Those are the only ones you see among them. Desert animals, completely unrelated ones, have converged onto some of the same solutions. There are only a very finite number of ways to do legs and locomotion: two is good, four, wings for weirdo things that fly, six, creepy things with eight. You do not find seven. You do not find three. You find that some of the solutions here are from immensely different starting states that have converged. What we see in these living systems, over and over, is stuff that looks like cellular automata: where slight differences magnify enormously, butterfly effects, where you are modeling living systems in a very real way, where most of them go extinct, divergence, convergence, and where in each of these you get a small number of outcomes, reflecting the fact that there are only a limited number of ways. There are only a limited number of ways of doing rain forest, of doing temperate-zone rain forest in the Pacific Northwest. There are only a limited number of ways of doing tundra. There are only a limited number of ways in all of these: convergence, always reflecting that these behave like cellular automata.
Okay, so hopefully you are now regretting whatever you spent the last few days doing instead of playing with these, because these are so heartwarming. If you want to read a book that nobody in their right mind should read, it is a book by this guy named Stephen Wolfram, who is one of the gods of computers and math, and was one of the sort of people who first developed cellular automata. And by all reports, probably one of the largest egos on the planet. And he published the book, self-published it, a few years ago, which he can do because he is grotesquely wealthy from some of his computer programs. And just showing what a low-key sort of humble guy he is, he called the book A New Kind of Science, just showing that he wasn't coming up with some little piddly new way of viewing the world; here was his new type of science. And the book is about 1,200 pages, and I suspect not even his mother has read the thing. It is so impenetrable, and it sold a gazillion copies, and almost all of them are sitting in people's garages now, weighing down drain pipes, because no one could actually read this thing. But an awful lot of what the book is about are patterns in nature, coded for by very simple local rules. And the simple fact of it is, you've got a lot of very smart people doing the cellular automata stuff, and they can't come up with rules where you could look at something beforehand, a priori, and know the outcome.

The Ways of Coding Life (18:34)

This one is going to survive, this one is going to go extinct, those two are going to turn out the same, these two that differ by a slight smidgen are going to turn out to be enormously different: there are no rules for it. And the book has all sorts of cool pictures of the cellular automata-looking things out in nature, so go buy it for somebody's birthday and see if they're not grateful for the rest of their lives. But his whole argument there is, these show ways in which you can code for a lot of the complexity in the natural world with small numbers of simple rules. This whole business of emergence. This sets us up now for beginning to look at some of the ways in which we hit a wall the other day, ways in which the reductive model of understanding the universe stops working after a while. One version being the problem of not having enough numbers of things, not having enough neurons to do grandmother neurons beyond the Jennifer Aniston ones, that whole business that you simply don't have enough neurons to do that beyond just the rare ones now and then. And what has the solution turned out to be? This field that people focus on now called neural networks. And the point of neural networks is that information, again, is not coded in a single molecule, single synapse, single neuron; it is not that this neuron knows one thing and one thing only, which is when there's a dot there.

Example (20:03)

Instead, information is coded in networks and patterns of neural activation. And just to give an example, and this is one that's in the Zebra book, I do this one because at one point I learned the names of three impressionist painters, except they're not coming to mind right now. Okay, so you've got two layers; here's what a neural network would look like, a two-layer one. These neurons on the bottom are boring, simple Hubel and Wiesel-type neurons from the other day, where each neuron knows one thing and one thing only. This one knows how to recognize Gauguin paintings, this one recognizes Van Gogh, and this one Monet. Obviously there's no Hubel and Wiesel-style visual neuron on Earth that's like that, but just for our purposes. They now project up to this next layer. Note this neuron projects to one, two and three; this one to two, three and four; this one to three, four and five. So what does this neuron know about? This one knows how to recognize Gauguin; it's only getting information from this neuron; it's another one of those Hubel and Wiesel visual types, I-know-one-fact-and-one-fact-only. This one here is another one of those. What does this neuron in the middle know about? That's the neuron that knows how to recognize impressionist paintings. That's the one that says, I can't tell you who the artist is, but it's one of those impressionists; it's not one of those Dutch masters, it's an impressionist painting. And this one does it because it is getting information that is not available to these guys; it is getting information at the intersection of all these specific examples. These ones, number two and four, those are ones that recognize impressionist paintings also, but they're not as accurate as number three, because they've got fewer examples to work off of. This is how a network would work, and what that suddenly begins to explain is something about the human brain versus a computer.
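The two-layer arrangement can be sketched as a toy program. Everything here, the painter names, the fan-out of three, the firing threshold, is taken from or invented for the blackboard example; it is not a real neural-network model.

```python
# Toy sketch of the two-layer network described above. Each bottom-layer
# "detector" neuron projects to three second-layer neurons, so middle
# neurons sit at the intersection of several inputs.
bottom = ["Gauguin", "Van Gogh", "Monet"]

# second-layer neuron i receives input from bottom neurons i-1, i, i+1
def second_layer_inputs(i):
    return [bottom[j] for j in (i - 1, i, i + 1) if 0 <= j < len(bottom)]

# A second-layer neuron fires if enough of its inputs are active.
def fires(i, active_painters, threshold=2):
    inputs = second_layer_inputs(i)
    return sum(p in active_painters for p in inputs) >= threshold

# The middle neuron (i=1) hears from all three painters: it is the
# "impressionist" neuron -- it fires on the pattern, not on any one artist.
print(second_layer_inputs(1))           # all three painters
print(fires(1, {"Van Gogh", "Monet"}))  # True: convergent evidence
print(fires(0, {"Monet"}))              # False: too little input
```

The middle neuron cannot tell you which artist it is, but it responds to the intersection of the examples, which is exactly the "impressionist, not Dutch master" behavior described above.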
Computers are amazing at doing sequential analytical stuff; you get calculators inside cereal boxes that can do more things than the human brain can do computationally. But what we can do is parallel processing. What we can do is patterns, resemblances, similarities, metaphorical similarities, physical similarities, and that's why you need networks like these. You don't need neurons that know one fact and one fact only; you need neurons where each one of them is at the intersection of a whole bunch of other inputs. Okay, example. So now suppose you've got a network; there's one neuron which fires, and there's a whole bunch of neurons sort of sending projections into it. And this is the neuron for remembering the name of that guy. What was the name of that guy, that guy, he was that impressionist painter. So suddenly your impressionist-painter network is activating and firing at this neuron. So now you've got your whole impressionist network activating. What was the name of that guy? He was an impressionist painter, he painted women dancers a lot of the time, so the people-who-painted-dancers network, but it wasn't Degas. Okay, so it's not Degas, but you're getting in there. And what was that guy's name? God, I had that seventh-grade art teacher who loved this guy's work; if I could remember her name, I would remember his name. Oh, remember the time I was at the museum? And there was that really cute person who seemed to like it, and I had to pretend I liked this guy also, and it didn't work out, nonetheless. And going through and, oh, what's the name? There's that stupid pun about the guy being really short, and something about the tracks being too loose. Ah, too loose, the tracks: Toulouse-Lautrec. And suddenly it pops out there, once you've got enough of these inputs coming in there. And this is tip-of-the-tongue wiring.
This is how you may not be able to just remember the guy's name, but wait, he's the short guy with a beard who hung out in Parisian bars, and there was that time in seventh grade, and with enough of these inputs, suddenly out pops the information. What this begins to tell you is: these are ways of getting similarities. These are ways of getting things that vaguely remind you. This is a world where humans can now do stuff like have a piece of music remind them of a certain artist, because they both have similar coloration. And that's something that makes sense to us. That's something that can work, because what you then begin to see is that every one of these neurons, this one, for example, the impressionist neuron, may also be at the intersection of another network that's going this way: a network of French guys from the last century. And it may be part of another network of people whose names are hard to pronounce, so you're anxious about saying them in a lecture. And each one of these is going to be at the intersection of a whole bunch of these networks. What does that do? That's what you can do that a computer can't. You see similarities, similes, metaphors. And somewhere in there you get something really important, which is that the networks with wider expanses, the ones that connect a broader number of neurons, in a very simple, artificial, idiotic way, that's kind of what creativity would have to be. Networks that are spreading far wider than in some other individual: it is literally making connections that the neurons in another individual do not. And suddenly you have a world where everyone knows this one is a face, and it was only a limited number of people who ever decided that this one's a face. And at some level Picasso had a different network, a broader one, as to what could constitute a face. A broader network in some way is going to have to be wiring that is more divergent.
And at the intersection of a bunch of networks that are acting in a convergent way. So what's some of the evidence that it actually does work this way? You go and you stick electrodes into neurons in the cortex, and if the world was entirely made up of neurons that know one piece of knowledge only, what you would see is neurons where each one responds to one single thing, all these grandmother neurons. And instead, what you see by the time you get to the interesting part of the cortex, past the first three layers of the visual cortex and the first three layers of the auditory cortex, once you get into the 90% that's called the associational cortex, called that because nobody really knows what it does, what you see are neurons that are multimodal in their responses; all sorts of things stimulate them. And here we have a neuron that's being stimulated by a type of painting, by the knowledge of French guys, by something phonetic; they're multi-responsive. That's what you wind up seeing: the majority of cortical neurons, when you record from them with the electrode, they're not grandmother neurons, they're at the intersection of a bunch of nets. More evidence for this. This was one of the grand pooh-bahs of neuroscience around the 1940s or so, a guy named Karl Lashley, and obviously a very different time in terms of thinking about specification of brain function. What he did was a very systematic attempt to show where in the brain individual facts were stored. And the term at the time, this jargony term, was engrams. He was searching for the engram for different facts, and what he would do was destroy parts of the cortex in an experimental animal, and he couldn't make the information disappear.
He would have to destroy broader and broader areas, and some of the knowledge, some of the memory, was still in there. And he concluded, in his famous paper "In Search of the Engram," that according to all the science he knew, there could be no such thing as memory. And the reason why was that he was working with a model of a single neuron holding each memory, where, if I could ablate it, I should be able to show that that rat has just lost the name of its kindergarten teacher. And instead, you see networks going on. You see the same thing clinically in something like people with Alzheimer's disease. Early on in Alzheimer's, in these networks, you'll lose a neuron here, you'll lose a neuron there, when you're just beginning to lose neurons. And what you see clinically in people with Alzheimer's, early on, is that it's not that they forget things, it's not that memory is gone; it's just harder to get to. And you show this with all sorts of neuropsychological testing where you try to give the person cues to pull it out. For example, you're testing somebody, potentially with dementia; there's a classic orientation test, and you ask them, "Okay, do you know the name of the president?"

Networks (29:22)

"Okay, they managed to get that. Do you know the name of the last president?" No idea. So now you give them a little bit of keeling. Okay, let me help you a little bit. It's a one-syllable word. Still not there, even though you've now activated the one-syllable word network, obviously artificial, still saying, "Okay, let's make it a little bit easier. It's things you could find in a park, in a city park." So you're activating that one, no, still not coming out. And then you give even more explicit priming there. You give them a forced choice paradigm. It's one of those. Okay, so is it President Trie or President Trump or President Benj or President Bush? Bush, Bush, the kid with the father also, it's still in there. It was still in there. It just takes more work to pull it out. And what you're seeing there is not the death of individual memories. You're seeing a weakening of a network, a network that is now taking stronger priming to pull it out of there. And just to show how subtle network stuff can be, here's something that would work with a lot of individuals with early-stage dimensions.

Priming (30:31)

What you do is another type of priming. You're eventually going to ask them the name of the previous president, and when they first come in you say, "Oh, great to see you. Come on in. What a beautiful day. I walked here by way of the park. The bushes were so beautiful this morning. In the park there are some of them that flower, some of them that don't, but bushes are so nice to look at when you're walking through a park; bushes are one of my favorite forms of botany." And then five minutes later, they are more likely to remember the name Bush, out of a whole different realm of more subtle networks you're tapping into.

Scale-free (31:03)

So all of this is the beginning of a way of solving the problem we had the other day of not having enough neurons for them to be grandmother neurons. More solutions. We then went to our next realm of trouble, which was the problem that there are not enough genes, not enough genes in that specific realm of explaining bifurcations. There can't be a gene that specifies, okay, this is where you bifurcate if you're this particular blood vessel, and a different gene for this particular bronchiole, and a different gene for this branch of a dendrite. It simply can't work that way; there are not enough genes. What this introduces is the idea of there being fractal genes: genes whose instructions are scale-free. What do I mean by this? Okay, here's what a fractal gene might do. So we've got a tube, and remember this is a tube that's going to be part of a blood vessel or a dendrite or a lung or whatever. We've got a tube. And the fractal rule here is: grow this tube in distance, grow it until it is five times longer than it is wide at the opening. That's the simple rule. And the rule is, when it's grown five times longer, bifurcate. So what's going to happen: it's just grown five times longer, and it bifurcates at that point. And now, because this has split, the cross section is going to be shorter. But you apply the same rule. Now, with the shorter cross section, you have the same rule: grow to five times the length of that cross section, then split. And what you wind up getting is one simple fractal rule that will generate the tree patterns.
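That scale-free rule can be sketched in code. The factor of five and the halving of the cross-section come straight from the blackboard example; the stopping width is my own addition so the recursion terminates.

```python
# Sketch of the "fractal gene" rule: grow a tube until it is five times
# longer than it is wide, then bifurcate into two tubes of half the
# cross-section, and apply the SAME rule to each daughter tube.
def grow_tree(width, min_width=0.1):
    """Return a list of (length, width) segments, generated recursively."""
    if width < min_width:
        return []                   # stopping condition (my addition)
    length = 5 * width              # the single scale-free rule
    segment = [(length, width)]
    # bifurcate: two daughter tubes, each with half the cross-section
    return segment + 2 * grow_tree(width / 2, min_width)

tree = grow_tree(width=1.0)
for length, width in tree[:4]:
    print(f"length={length:.3f}  width={width:.3f}")
```

One rule, stated once, generates every level of the tree: branch lengths shrink in step with the cross-sections, giving the tree-like pattern of blood vessels, bronchioles, or dendrites, with no per-branch instructions anywhere.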

Protein (32:53)

The branchings get shorter and shorter. The distances between the branch points get shorter and shorter, because the cross sections are getting smaller, all from one simple rule. And you can generate a circulatory system, a pulmonary system, and a dendritic tree by giving a fractal instruction, in this case one that is scale free, that is independent of what the unit is here; this could work within a single neuron or within an entire circulatory system. So all of that's great, but that's totally hypothetical. Ooh, fractal genes. We know by now that's got to translate into a protein in some way or other. How might this actually look in a real system? So suppose we have a gene coding for a protein. This is one copy of the protein. This is another. And another. They bind to each other in a way such that they form a tube. And they bind to each other in a way that's just pure mechanical reality; these are not bits of information, these are actual proteins. So the tube is going up there. And suppose the forces are such that as the tube goes up, it gets more and more unstable.

Math and probability in temporospatial relationships (34:04)

And when the tube is high enough, it gets unstable enough that these bonds between the proteins begin to weaken, and it begins to split. The splitting there is a function of the length of the tube. So it splits. And now the next tube has half the number of proteins of this one, and thus it's that much weaker, so you only have to go a shorter distance before it begins to split. Now, this protein doesn't exist; there's no way it's actually like this. But what you could begin to see is how you could turn a scale-free set of instructions into what it would actually look like in mortar and bricks, in terms of proteins, how it might actually work. Now, the notion of fractal genetics, of fractal genes and fractal instructions, begins to solve another problem, and this is that space problem of how much stuff you can jam into a space. Here's the challenge, in terms of how dense things are. In the body, amazing factoid: there is no cell in your body that is more than five cells away from a blood vessel. Okay, you could see why you would want to do that, but that is not an easy thing to pull off. How do you do that with the circulatory system? An amazing other factoid to factor in with that: the circulatory system comprises less than 5% of your body mass. How can this be? You've got this system that's everywhere, it's within five cells of every cell out there, but it's taking up almost no space, less than 5% of the body. And, okay, forget it, I didn't want to put that up, but okay, you convinced me, so let's do this. What you begin to do is transition to a world of fractal geometry. You've got your Euclidean world with its nice smooth straight lines, this whole world of shapes that are constrained by classical Cartesian geometry and all of that. And what fractal geometry generates are objects that simply cannot exist.
Here, up on top, you see the first example of this. And this is out of the Chaos book. And this is the Cantor set. What you do is you start with a line, and you cut out the middle third. Now, for those remaining two lines, you cut out the middle third. For those remaining four, you cut out the middle third. And there it is. And you just keep doing this over and over and over again. And what happens when you take it out to infinity? What you have generated is a set with an infinitely large number of objects — line segments — that take up an infinitely small amount of space. It shouldn't be possible for that to work, yet as you go more and more in that direction, you get this impossible phenomenon of something approaching an infinite number of places where something appears, while taking up an almost infinitely small amount of space. And what this winds up being, at the bottom, is not quite a line anymore, but it's kind of more than a dot. It's somewhere between zero and one dimensions. It's a fractal. Its dimension is zero point something or other. It is somewhere between a dot and a line, and it does this impossible thing, which is it's everywhere without taking up any space. Or you could then push the same thing into the next dimension. And this is the Koch snowflake, and it's the same sort of rule. You start with the triangle there, and the rule is you take the middle third of each side and you push a little triangle out of it. And then you take the middle third of that and push a little triangle out, and the middle third of that — you just keep doing it forever and ever and ever. And you wind up with something that is impossible, which is an object that has an infinite amount of perimeter while enclosing a finite area. That's impossible, but it begins to approach this. And what do you see here? This is a way of just iterating over and over and over to jam a huge amount of perimeter into a tiny space.
And thus the boundary is something different — sort of like a line, but by then heading toward being a plane — and it's got a fractal dimension somewhere between one and two, one point something or other. It's an impossible object that solves, in another version, this same problem: having perimeter everywhere without taking up any space, all within a finite area.
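The Cantor construction described above is simple enough to sketch in a few lines of code (a toy sketch for illustration, not from the lecture): each step doubles the number of pieces while the total length covered shrinks toward zero, and the self-similarity gives the fractional dimension.

```python
from math import log

def cantor(intervals):
    """One step of the Cantor construction: remove the middle third
    of every interval."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out += [(a, a + third), (b - third, b)]
    return out

segs = [(0.0, 1.0)]
for _ in range(4):
    segs = cantor(segs)

print(len(segs))                      # 2^4 = 16 pieces...
print(sum(b - a for a, b in segs))    # ...covering only (2/3)^4 of the line
# Self-similarity dimension: 2 copies, each scaled down by 1/3
print(log(2) / log(3))                # ~0.63 -- between a dot and a line
```

The same bookkeeping works for the Koch snowflake (4 copies at scale 1/3, dimension log 4 / log 3 ≈ 1.26, between a line and a plane).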

Fractal geometry (38:53)

Next, finally, the Menger sponge, which is the same exact concept again. You start with the box up there, and you take out the middle third of each of those segments, and then you take out the middle third of each of those, over and over. And if you're doing this with what starts off as a three-dimensional cube, eventually you get something that cannot exist, which is an object that has an infinitely large amount of surface area while having no volume. That's what it produces at the extreme, and we've got something here that's between two and three dimensions — a fractal again. And what you see is that this is how the body solves the packing problem, because all you need to do is make the circulatory system some version of this — some version of splitting the ends of the capillaries over and over and over, or making the lungs, with their surface area for exchanging oxygen, look something like this — and this is how you generate a system that is everywhere while taking up virtually no space. Obviously, it's not taken out to infinity, but this is how you can have a circulatory system that's within five cells of every cell in the body, yet takes up less than 5% of the body. This is a fractal solution. All you do to generate these is iterate some simple step over and over and over, and you can begin to produce absolutely bizarre, impossible things in terms of surface area and perimeter and volume and all of that. This is how you can use a fractal system to solve the packing problem. Now, of course, as soon as you come up with the notion of something like fractal genes, you have to consider the possibility of there being fractal mutations. What would a fractal mutation look like? Again, most geneticists and molecular people do not think about this in these terms, but there are people who do, who actually talk about things like fractal gene mutations. What would it look like?
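The sponge's numbers — surface area exploding while volume vanishes — can be checked in a toy calculation. In the standard Menger construction, each iteration divides every cube into 27 subcubes and keeps 20 of them (removing the center and the six face centers):

```python
from math import log

# Menger sponge sketch: each iteration keeps 20 of 27 subcubes.
def sponge_stats(n):
    cubes = 20 ** n                 # number of solid subcubes after n steps
    volume = (20 / 27) ** n         # fraction of the original volume left
    return cubes, volume

for n in range(5):
    print(n, sponge_stats(n))       # cube count explodes, volume -> 0

# Self-similarity dimension: 20 copies, each scaled down by 1/3
print(log(20) / log(3))             # ~2.73 -- between a plane and a solid
```

The volume shrinks geometrically toward zero while the surface area grows without bound, which is exactly the impossible-object property the lecture describes.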

Emergence And Evolution

Fractal gene mutations (41:05)

Suppose you've got a mutation and it produces a protein that's slightly different, and as a result it's got bonds that are slightly weaker between the different proteins. So on a mechanical level, what have we just defined? This is a tube that's going to go a shorter distance before it begins to split, because the bonds between the proteins are not as strong. Or there's a mutation where, instead of growing to five times the cross-section before splitting, maybe it's growing to 4.9 times the cross-section. And thanks to that mutation, the entire branching system is going to be compacted a bit. It's not going to reach the target cells. And these would be catastrophic mutations, where the pulmonary system doesn't develop, the circulatory system doesn't develop. And what you would see in those cases is a mutation whose consequences are scale-free. Another hint that there may be fractal gene mutations is a small number of diseases where what's wrong is the spatial relationships in the body. For example, there's a disease called Kallmann syndrome where things go wrong with midline structures in the body: something is wrong with the septum between the nostrils, something is wrong in the hypothalamus, something is wrong in the septum of the heart. This is not three different mutations. This is some sort of fractal mutation messing up how that embryo does symmetry, how the embryo does midline structures. So you begin to see ways here, within a biological metaphor, in which you could begin to get solutions for these problems, and also mutations that can put you up the creek. Okay, so that is another realm for beginning to solve this. Another domain.
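The mechanical consequence of such a mutation can be sketched as a toy model (not real biology — the rule and all numbers are invented for illustration): a tube splits once its length reaches k times the number of protein subunits in its cross-section, and each daughter inherits half the subunits, so it splits sooner. Nudge the single scale-free parameter k from 5 to 4.9 and every level of the tree compacts at once.

```python
# Toy model (not real biology): a tube splits once its length reaches
# k times the number of protein subunits in its cross-section; each
# daughter inherits half the subunits, so it splits sooner.
def branch_lengths(subunits, k):
    length = k * subunits
    if subunits // 2 < 1:
        return [length]             # too few subunits left to split again
    return [length] + branch_lengths(subunits // 2, k)

normal = branch_lengths(64, k=5.0)
mutant = branch_lengths(64, k=4.9)   # one scale-free parameter tweaked

print(normal)                        # [320.0, 160.0, 80.0, ...]
print(sum(normal), sum(mutant))      # the whole tree is 2% shorter --
                                     # every level is affected at once
```

A single-parameter change propagating through every scale of the structure is what makes the hypothetical mutation "fractal" rather than local.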

Emergence Driven by Biophysical Properties (42:57)

And here we begin to move into the realm of emergence — emergent complexity — at which we will first take a couple of crude passes. First, emergence driven by biophysical properties. Do not freak out if you don't know what I mean by that, because I will explain it in a more accessible way. This was something that was explained endlessly by a guy who used to be in the bio department, a developmental botanist named Paul Green, who died about ten years ago, way too young, from cancer — he was a really good guy. He would give this famous lecture where he would start off describing some sort of disc, where the point is that the material inside the disc was softer than the material on the perimeter, and he would be putting up math at this point that I didn't understand. And then he would show what happens if you heat the system — what happens if you put heat on a disc like this — and what he would wind up showing, going through agonizing amounts of math, is that the only solution for a system that's trying to respond to the heat, but in different ways on the perimeter versus the inside, is to come up with a double saddle shape. The math proved this, and I had no idea what he was talking about. And then what he says is: so that's how you get a potato chip.
You take a slice of potato, where there's more resistance on the perimeter and less on the inside, and you heat it, and the only solution to that problem is to come up with a double-saddled potato chip shape. And if you change the outside — if you take one of those, like, great organic, give-you-the-runs type potato chips where the skin is left on the outside — it's going to be a somewhat differently shaped double saddle, because there's only one solution mathematically to that. And then you sit there and you deal with a very simple, important fact, which is that that slice of potato knows no biophysics. There's no gene that instructs potatoes to respond to heat in this way. This was the inevitable outcome of the biophysical properties of a slice of potato. And what he then shows is that in plant system after plant system, they develop where two sheets come out this way, and a little higher up two sheets this way, and two this way, and two this way — they're all double saddles — and this winds up being a mathematical solution to a packing problem when plants are growing their stems. There are no genes specifying it; you don't need genetic instructions. It is an emergent property of the physical constraints of the system.
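As a gloss (my addition — Green's actual math was far more involved), the simplest saddle surface, the familiar potato chip shape, is the hyperbolic paraboloid:

```latex
% The lowest-order saddle: curves up along one axis and down along the
% other. The constant c sets how deep the saddle is; a stiffer rim
% (skin left on) changes c, and with it the shape of the chip.
z(x, y) = c\,(x^{2} - y^{2})
```

The point of the lecture stands either way: the shape is forced by the boundary conditions, not specified by any instruction.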

Proto-Emergence & Wisdom in the Crowd (45:42)

Another example here is sort of proto-emergence, a somewhat simpler version: this phenomenon of wisdom of the crowd. This one was first identified by Francis Galton, who was a relative of Darwin, a famous statistician, and the founder of eugenics — bad news in that regard. Being an Englishman somewhere in the 19th century, he spent huge amounts of time going to state fairs and county fairs or whatever, and he was at this fair one day where they had an ox up there, and they were having a contest: if you could guess the exact weight of the ox, you would get to milk it or something — I don't know what the prize would be. And there were hundreds of farmers around, filling out little pieces of paper with their guesses, and what he discovered at the end was that nobody got the answer right. Good, so the owners get off easy without having to give up any of their ox milk. But he then did something interesting: he collected all the little slips of paper and averaged all of them, and it came out to the correct weight, within an ounce. In other words, no individual in that group had enough knowledge to be able to truly, accurately tell what this thing weighed, but put them together in a crowd, and out comes the right answer. Another version of this — and this one is deeply important in terms of Western intellectual tradition — is that program Who Wants to Be a Millionaire. Does that still exist? In reruns? Okay, so this one is the same: they give you questions, and if you answer them they give you money, and it's great. And at various points, if you're stumped, you've got three things you can do. One is they can eliminate — you've got four choices — they can eliminate two of them to make it a little bit easier for you. Another is you have this expert who you can call up. And the third option is to ask the audience which they think is the right answer.
The audience there has these little buttons so they can choose A, B, C, or D of the multiple choice. And the logic is supposed to be: cut it down to two, and your chances are better if you have to guess; or talk to your wise expert sitting by on the phone, who will hopefully be able to answer this question; or ask a whole bunch of people, and they all vote. And any smart contestant would choose whatever the audience chose, because when the audience was asked, the majority of people voted for the right answer. This is more wisdom of the crowd, and it was a much better hit rate than whoever the expert was on the other side of the phone. One person could be extremely expert, but they're not going to be as expert as a whole bunch of somewhat decent experts thrown together. This is the notion behind a field called prediction markets, where what you do is try to predict some event. For example, the Pentagon is very interested in using prediction markets to try to predict where the next terrorist attack might be. What you do is you get a whole bunch of experts, and you ask each of them to think about whatever the parameters are and take a guess as to how long it will be before the next one occurs, and then you average them up, assuming there's a wisdom-of-the-crowd thing going on, and that will give you lots of information.
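The core averaging effect — from Galton's ox to prediction markets — can be sketched as a toy simulation (all numbers invented): give every "farmer" an unbiased but very noisy guess, and the crowd's mean lands far closer to the truth than a typical individual does.

```python
import random

random.seed(42)
true_weight = 1198        # hypothetical ox weight in pounds (made up)

# Each "farmer" is somewhat expert: an unbiased guess, but very noisy.
guesses = [true_weight + random.gauss(0, 100) for _ in range(800)]

crowd = sum(guesses) / len(guesses)
crowd_error = abs(crowd - true_weight)

# A typical individual is off by roughly the noise level.
typical_error = sum(abs(g - true_weight) for g in guesses) / len(guesses)

print(round(typical_error))   # on the order of 80 pounds
print(round(crowd_error, 1))  # the crowd average: off by only a few pounds
```

This also shows the conditions the lecture lists: the guessers must be unbiased (or biased in randomly scattered directions), or the errors won't cancel when averaged.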
A great case of this: a few years ago, there was some submarine or something that sank somewhere out in the ocean, and nobody knew where it was, but they kind of knew where the last sighting, the last recording, was from it. So what they did was get a whole bunch of naval experts and have all of them bone up on the knowledge — what was the water temperature, the wind speeds, where the last sighting was, what was on TV that day, or whatever. They got all the information, and each one made a guess as to where it would be on the map. Put together, they had guesses covering hundreds of square miles of ocean floor, and when they averaged it all, they came up within 300 yards of the right location. So what we have over and over here is this business of: put a lot of somewhat decent experts together on a problem, and they will be more accurate than almost any one single amazing expert at it. Under a few conditions: the collection of these partial experts can't be biased — or if they are, they all have to be biased in a random scattering of directions — and they really do need to be somewhat expert. If you get a whole bunch of people off the subway in New York and ask them to guess the weight of the ox, they are not going to wisdom-of-the-crowd their way into being able to milk the thing afterward; you've got to have people who have some experience with it. And you wind up seeing wisdom-of-the-crowd stuff going on in all sorts of living systems. For example, here is an ant colony, and here is a dead ant, and they are trying to get the dead ant back to the colony. When you look at these things — they get some dead beetle or something to eat, and a whole bunch of ants push it back to their colony — does each one of them know exactly where they should be pushing?
No. What you have instead is each ant has somewhat of the right idea as to where they should be going — there are more ants that have a reasonably accurate notion, a smaller number that are somewhat off, a really small number that are way out of whack — because, in general, ants are kind of experts at finding ant colonies; they are pretty informed. And what you do is put them all together and do this vector geometry stuff, and the thing moves perfectly in that direction, and no single ant knows exactly where the colony is. You've got a wisdom-of-the-crowd thing going on. Okay, five minute break. If you have a chance, can you email me that website so we can post it? Great. Okay, picking up. So now we are ready to take some of those building blocks — wisdom-of-the-crowd stuff, biophysical potato chips — and begin to see them more formally in this field of emergent complexity. What is that about? As we've already alluded to, it's systems where you have a very small number of rules for how very large numbers of simple participants interact. What's that about? Here's what emergence is about. You take an ant and you put it on a tabletop and you watch what it's doing, and it makes no sense whatsoever. You take ten ants and do it, and none of them make any sense. You put in a hundred and they're all scattering around, and somewhere around, I don't know, a thousand ants or so, they suddenly start making sense. And you put in ten thousand or a hundred thousand or whatever it is, and suddenly, instead of some little thing wandering around aimlessly, you have a colony that can grow fungi and regulate the temperature of the colony and all these things. And suddenly, out of these ants emerges an incredibly complex adaptive system.
And the critical point there is that no single ant knows what the temperature should be in the colony, or whether this is the time to go out foraging in this direction instead of that direction; it all emerges out of the nature of ant interactions. You've got very simple constituent parts — an ant, much like one filled-in box in a cellular automaton. You've got very simple rules for how they interact with each other; ants have, I don't know, maybe three and a half rules. Don't tell Deborah Gordon in the department, who's an ant obsessive, that I may be inadvertently dissing the ants, but they have a small number of rules as to how they interact: if you bump into an ant and it does this with the pheromones, you go this way — and I'm just making that up.

Emergent Complexity (54:01)

They have a small number of rules, and as long as you've got a lot of ants doing this, out of it can emerge hugely complex adaptive patterns. And this is what an emergent system is about.

Two Different Examples (54:14)

Simple players, huge numbers of them, simple nearest-neighbor rules — you throw them all together, and out comes patterning. And there is no single ant that knows what the blueprint is; there is no blueprint. There is no plan anywhere that says what the mature form of the colony should look like. There are no instructions. It is bottom-up organization rather than top-down. And you see all sorts of versions of emergent complexity built around, again, lots of elements with a small number of very simple rules about how neighbors interact with each other. You need the board. Okay. Here we have two, four, six, eight different cities, or eight different places where an ant could find good food, or eight different something-or-others — eight different locales.

The Traveling Salesman Problem (55:05)

And you are trying to do something efficient. You need to go to each one of them to sell your product, or to see if there's good food there or not. You need to go to all eight of them, and you want to do it as efficiently as possible: you want to find the shortest possible path that visits all of these places. And this is the classic traveling salesman problem. And nobody at this point can solve it — there's no formal mathematical solution — and the number of possible routes explodes; add more locales and soon there are, I don't know, hundreds of billions of different ways you can do it. So you can't come up with the perfect solution, but you could come up with maybe a kind of good, decent one. There are two ways you could do it. First is to have an unbelievably good computer that just by sheer force cranks out a bazillion different routes and, in each case, measures how long the route is. And you can get something close to an optimal answer.
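For eight locales, the brute-force approach is still feasible and can be sketched directly (the coordinates below are invented for illustration): fix a starting city and try every ordering of the remaining seven — 7! = 5,040 permutations — keeping the shortest round trip. The factorial growth is why this stops working as the number of locales climbs.

```python
from itertools import permutations
from math import dist, inf

# Eight hypothetical locales as (x, y) coordinates.
locales = [(0, 0), (3, 1), (6, 0), (7, 4), (5, 6), (2, 7), (0, 4), (4, 3)]

def tour_length(order):
    # total distance visiting the locales in this order, returning home
    return sum(dist(locales[order[i]], locales[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Brute force: fix the starting city, try every ordering of the rest.
best, best_len = None, inf
for perm in permutations(range(1, len(locales))):
    order = (0,) + perm
    length = tour_length(order)
    if length < best_len:
        best, best_len = order, length

print(best, round(best_len, 2))
```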

Swarm Intelligence And Decision Making

The traveling salesmen problem, swarm logic (56:09)

The other way of doing it is to have yourself some virtual ants, in something that is now called swarm intelligence. Here's what you do. You need to have two generations of ants. The first generation, you stick them all down — different numbers of them — and they all start off in these different cities, these different locales. And the rule is each one of them goes to another destination. But here's the following rule: the ants are leaving a pheromone trail. They stick their rear end down — what is it, the thorax? the abdomen? — they stick their abdomen down, and they've got a gland at the bottom there which releases a pheromone, makes a track, a scent track of the pheromone. And there's a very simple rule: they have a finite amount of pheromone in there to expend on the entire path they're making. In other words, the shorter the path, the thicker the pheromone trail is going to be. Now add the fact that the pheromones dissipate after a while. They evaporate, and thus the thicker the trail, the longer it's going to be there. You now take a second generation of virtual ants, and you throw them in there. Their rule is: they wander around randomly, and any time they hit a pheromone trail, they join the trail, one way or the other, and they lay down a pheromone trail of their own; with their abdomens, they reinforce the markings on that trail. Let 10,000 virtual ants do that for a couple of hundred or thousand rounds of generations, and they solve the traveling salesman problem for you. Because it winds up being that the short paths, the more efficient ways of connecting locales, will leave larger, thicker trails, which are more likely to last longer, and thus increase the odds that an ant wandering around randomly will bump into one and reinforce it.
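Those two rules can be sketched in a minimal simulation (a toy sketch, not a faithful ant-colony-optimization implementation; the coordinates, evaporation rate, and pheromone budget are all invented): each ant spreads a fixed pheromone budget over its whole tour, so shorter tours get thicker trails; pheromone evaporates; and later ants preferentially follow, and reinforce, the thicker trails.

```python
import random
from math import dist

random.seed(0)

# Eight invented locales; pheromone lives on each directed edge.
locales = [(0, 0), (3, 1), (6, 0), (7, 4), (5, 6), (2, 7), (0, 4), (4, 3)]
n = len(locales)
pher = {(i, j): 1.0 for i in range(n) for j in range(n) if i != j}

def tour_length(tour):
    return sum(dist(locales[tour[i]], locales[tour[(i + 1) % n]])
               for i in range(n))

def build_tour():
    # an ant starts at locale 0 and picks each next stop with
    # probability proportional to the pheromone on that edge
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        choices = sorted(unvisited)
        weights = [pher[(tour[-1], c)] for c in choices]
        tour.append(random.choices(choices, weights)[0])
        unvisited.discard(tour[-1])
    return tour

for generation in range(300):
    for edge in pher:
        pher[edge] *= 0.95                    # rule: pheromone evaporates
    for ant in range(20):
        tour = build_tour()
        deposit = 100.0 / tour_length(tour)   # rule: a fixed budget spread
        for i in range(n):                    # over the tour, so shorter
            a, b = tour[i], tour[(i + 1) % n] # tours leave thicker trails
            pher[(a, b)] += deposit
            pher[(b, a)] += deposit

best = min((build_tour() for _ in range(50)), key=tour_length)
print(round(tour_length(best), 2))
```

No ant ever sees the whole map or compares tours; the short routes simply accumulate pheromone faster than evaporation removes it, and the long routes fade.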

How do bees pick a new nesting site (58:21)

What you see is, initially, there will be every possible path, and as you run this over and over, the inefficient ones will begin to fade out, and out will emerge the more efficient ones. You can optimize the outcome doing it this way, just by asking virtual ants to do it for you. And this is exactly how ants do it out in the real world. When they're foraging in different places, there is a first wave of them that comes out, and they go to locales leaving scent trails, and then there are the wanderers that come in, and when they hit a trail, they join it. There are now telecommunications companies that use swarm intelligence to figure out the shortest length of cable they need to connect up, you know, eight different states' worth of telecommunication towers, whatever they're called. They can sit there and do math till the end of the universe trying to figure out the cheapest way to wire them up, or they can use swarm intelligence. And that's what a lot of them do at this point. It works. What are the features of it? This is not wisdom of the crowd. This is not that every ant knows a solution to the traveling salesman problem, just not the perfect one, and you put them all together and they all get to vote, and out comes the answer. The ants don't know from traveling salesman problems.

Simple Rules (59:31)

The ant knows nothing about trying to optimize this. All the ant knows is one of two different rules: if I'm walking from one of these to one of these, the longer I walk, the thinner the pheromone trail; or, rule number two, if I stumble into one of these trails, I join it and put down my markings there. Two simple rules, one very simple sort of unit of information, and they're ants. And all you need to do is make sure there are enough of them, and they solve the problem for you. This winds up explaining another thing: how do bees pick a new nesting site? Every now and then the bees need to leave and pick a new place to live, and how do they figure out the good place? There are all sorts of criteria — food, nutrients — so all sorts of bees go out there, and what they do is look for food sources; they look for a place that will have a lot of food, because maybe that's a place to move the colony to. So, we know already, the bee will go out and find its food source, and will come back in. And here's the colony in cross-section, and what you wind up having is this ring of bees — here's the entry — and you have the bee dancing going on that we've heard about, in the middle, on the dance floor there. And we've already heard it's this pattern of a figure eight while shaking the rear end, and we know what the information is: the angle tells the direction to go out there, and the extent to which it's wiggling its rear end is how long you're supposed to fly for. But the final variable is: the better the resource, the longer you do the dance. So you've got bees coming in from all over the place — ones that have found good resources, ones that have found so-so ones, all of that. And so there are bees doing all this dancing stuff here, of different durations, and the ones who have found the good solution to 'where do we want to live' are dancing longer.
The ones who have found the best site are leaving their message longer. So now you bring in your second generation, and the rule among bees is: if you happen to bump into a bee that's doing a dance, you respond and go where it tells you to go. So a bee may randomly bump into one of these dancers, and then off it goes — actually, I'm sure it's more complicated than this, but it's along these lines — there are now random interactions. If one of the peripheral bees bumps into one of these bees that has information, it joins that bee's group, and then goes and finds the food resource and comes back with the information. Thus, by definition, if you have found a great food source, you're going to be dancing longer, which increases the odds of other bees randomly bumping into you, which causes them to go and find the same great food source and come back and dance longer. And the ones with lousy sources are coming in and dancing very briefly, and thus there are hardly any odds of somebody bumping into them. And what you begin to do is optimize where the hive is supposed to go.
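The recruitment dynamic can be sketched as a toy simulation (the site names and quality numbers are invented): dancers advertising a better site dance longer, so uncommitted bees are more likely to bump into them — and since each recruit then dances for the same site, the feedback snowballs toward the best option.

```python
import random

random.seed(1)

# Invented candidate sites; higher quality -> longer dance.
quality = {"meadow": 9.0, "orchard": 6.0, "parking lot": 1.0}

# One scout starts out committed to (dancing for) each site.
dancers = list(quality)

for _ in range(200):
    # An uncommitted bee bumps into a dancer at random; the chance of
    # bumping into a given dancer is proportional to how long that
    # dancer dances, i.e. to the quality of its site.
    weights = [quality[site] for site in dancers]
    recruit = random.choices(dancers, weights)[0]
    dancers.append(recruit)          # the recruit now dances for it too

counts = {site: dancers.count(site) for site in quality}
print(counts)
```

No bee compares sites; the good site simply recruits faster than the bad ones, which is the same reinforcement loop as the pheromone trails.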

Elements Factors and Simple Rules (01:02:45)

Again, it's not wisdom of the crowd. It is an emergent feature of one generation with information based on some very simple rules, and a second generation with a random element — and out comes an ideal solution. More versions of this: another domain where, out of some very simple rules, emerges something very complex and adaptive. Okay, so the themes so far are: two generations; the more adaptive the signal, the stronger it is and the longer it lasts; and then the randomization element. Another theme that comes through in a lot of emergence is to have your elements in there — your ants, your bees, your traveling salesmen, whatever the constituents are — where the rules are simple rules of attraction and repulsion. Which is to say, some of the elements are attracted to each other, and some of the elements are repulsed by each other; some are pulled together, some are pushed apart — like, for example, magnets. Magnets are polarized in the sense that magnets have only two ways of interacting with each other, simple nearest-neighbor rules: they're either attracting or repelling, depending on the orientation. So here's what you do now. You take a system and do something very simple. You've got some simulated SimCity sort of thing where you're letting a system run to design a city. You want to do your urban planning in this city you're going to construct, and you can sit there and study millions of laws about zoning and economics and all of that to decide something very simple — where are you going to put the commercial districts, and where are the residential districts going to be? — or you can have just a small number of simple rules. For example: if a market appears someplace, it attracts a Starbucks, and it also attracts a clothing store, or some such thing. So, a bunch of attraction rules. But then you have repulsion rules, which is: if you have a Starbucks, it will repulse any other Starbucks.
So the nearest other Starbucks has to be at least this far away. If you have a competitor's market, it can't get any closer than this. That sort of thing — these simple attraction and repulsion rules. And what you get when you run these simulations are commercial districts in a city: clusters of commercial places that are balanced by attraction and repulsion, with thoroughfares connecting them — and the more elements there are in two neighboring commercial centers, the bigger the connection is going to be, the bigger the street, the more lanes, the more powerful the signal coming through there. You throw it in, and out pops an urban plan that looks exactly like the sort the best urban planners come up with. And all you need to do instead is run these simulations with some very simple attraction and repulsion rules. So you do that, and it winds up producing stuff that looks like cities. Now you do that with a bunch of neurons. You take a Petri dish, and you throw in a whole bunch of individual neurons, and they have very simple rules. They secrete factors which attract some types of neurons, and they secrete factors which repel other types of neurons, and all of them are following some very simple rules: if I encounter this, I grow projections toward where it's coming from; if I encounter that, I grow projections in the opposite direction — simple attraction and repulsion. So you throw a whole bunch of neurons into a Petri dish, and at the beginning they're all scattered evenly all over the place, and you come back two days later, and it looks just like this.
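The Starbucks-style rules can be sketched as a toy placement simulation (the market locations, distances, and shop kinds are all invented): shops are attracted to markets but repelled by shops of their own kind, and clusters pop out with nothing resembling a blueprint.

```python
import random

random.seed(3)

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Invented markets on a 15 x 15 map, plus the two rules from the text.
markets = [(2, 2), (12, 3), (7, 11)]
NEAR_MARKET = 3.0      # attraction: a shop must sit near some market
MIN_SPACING = 4.0      # repulsion: same-kind shops keep their distance

def allowed(pos, kind, placed):
    attracted = any(dist2(pos, m) <= NEAR_MARKET ** 2 for m in markets)
    repelled = any(dist2(pos, p) < MIN_SPACING ** 2
                   for p, k in placed if k == kind)
    return attracted and not repelled

placed = []
for _ in range(500):   # propose random sites; keep the ones the rules allow
    pos = (random.uniform(0, 15), random.uniform(0, 15))
    kind = random.choice(["coffee", "clothing"])
    if allowed(pos, kind, placed):
        placed.append((pos, kind))

print(len(placed))     # a handful of shops, clustered around the markets
```

Nobody specifies where the commercial districts go; the clusters are just the set of positions the attraction/repulsion rules leave open.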

Attraction and Repulsion (01:07:01)

You have clusters of neurons sending projections, and you have all these empty residential areas in between, and if you just mark this in a schematic way, looking from above, you're not going to be able to tell: are these the commercial districts in a big city, or are these neurons growing in a dish? You get areas of nuclei, of cell bodies, and areas of projections, and it winds up looking exactly like that. And amazingly, there was a paper in Science earlier this year looking at one of these versions again — in this case, attraction and repulsion rules with ant colonies setting up foraging paths — and they explicitly compared the efficiency of the ant colony's network to a subway system. And what they showed was very similar solutions, but the ants had gotten a more optimal one. And the subway system had people sitting there, being paid a salary, to figure out the best way to do it. All the ants had were very simple rules — if it's someone from the other colony, I stay away; if it's someone from mine, I'm attracted — simple attraction and repulsion, and out comes something that looks like this as well. So here you see that happening with a remarkably small number of rules. Now you put this into a really interesting context, which is something we bumped into back when first introducing proteins and DNA — sequence equals shape equals function, all of that. Molecules have charges on them. Some of them are positively charged, some of them negatively — whoa, attraction and repulsion. Positively charged molecules are attracted to negatively charged ones; same-charged ones repulse. Here we have a system with very simple attraction and repulsion rules. And that's the logic behind — when one thinks about it — one of the all-time important experiments, something that was done in the 1950s by a pair of scientists at the University of Chicago, Urey and Miller. Here's what they did. They took big vats of organic soup stuff; they just had all sorts of simple molecules in there.
Little fragments — carbon-to-carbon bonds, little fragments of all sorts of simple inorganic molecules — floating around in this organic soup. And what they did was pass electricity through it, and they did this vast numbers of times. And eventually, what they saw when they came back to check was that this random distribution of little fragments had begun to form amino acids. Whoa — metaphor: the organic soup, an evenly distributed world of potentially organic molecules, in a world in which electricity passes through as lightning. Had these guys just come up with some kitchen-sink experiment on the origins of life? And what people have done subsequently is show you don't even need the catalyst. There's a whole world of researchers who study the origin of life, and the basic notion is: you put in enough simple molecules that have attraction and repulsion rules, you get perturbations in the spatial distributions in certain ways, and they will begin to form orderly structures after a while. Here's another version of this, and I used to do this in class, except I could never pull it off and it just became chaotic. It's a kids' toy: you've got these magnets, like that, and then you have little metal balls that can go onto the magnets, and you've got vast numbers of them, and you can piece them together. Whoa, this is starting to look kind of familiar here. So we have these constituents with very simple rules — the magnets attract and repel each other, and they bind the metal balls. And here's what you would do, here's what I would attempt to do: first off, I would get somebody to show me how to get the video thing on here to project it, but you would put up a whole bunch of these magnets in rows, not too close to each other, nice and symmetrical, and then you take a handful of the metal balls and fling them in there.
And if you do that four or five hundred times, eventually, they will bounce around and, amid all the pieces flying, you are going to get a pyramidal structure like this. One of those, just like that, three-dimensional; you are going to get one of those that will simply pop out of this, because that's the nature of potato chips solving their math problem with double saddles. That's the nature of throwing in a whole bunch of elements with simple attraction and repulsion rules: give them enough chances, throw in enough perturbations there, and structures will begin to emerge. And it's the same exact principle, the same ones over and over. So we've got some very simple versions where you get emergent complexity. One is this version where a first generation does its searches, and the intensity of the signal that it leaves afterward is a function of how good a search they've done, followed by the random wanderers. Then you have the attraction-repulsion world of putting these together, lots of elements, and you begin to get structures out of it. Next, a version of this, or the next domain where you begin to see the fact that these rules are underlying an awful lot of things.
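As an editorial aside, the attraction-and-repulsion idea can be sketched as a toy simulation (everything here is an arbitrary illustration, not anything from the lecture: the particle count, the force constants, the one-dimensional world). Particles that repel when too close and weakly attract otherwise condense from a random scatter into a spaced-out cluster: structure out of nothing but local rules plus noise.

```python
import random

def step(positions, attract=0.05, repel=0.5, min_dist=1.0):
    """One update of a toy 1-D particle system: particles closer than
    min_dist push each other apart; more distant ones weakly attract."""
    new = []
    for i, x in enumerate(positions):
        force = 0.0
        for j, y in enumerate(positions):
            if i == j:
                continue
            d = y - x
            direction = 1 if d > 0 else -1
            if abs(d) < min_dist:
                force -= repel * direction    # too close: repulsion
            else:
                force += attract * direction  # far apart: weak attraction
        new.append(x + force + random.uniform(-0.01, 0.01))  # a bit of noise
    return new

random.seed(0)
pts = [random.uniform(0, 100) for _ in range(20)]
initial_spread = max(pts) - min(pts)
for _ in range(200):
    pts = step(pts)
final_spread = max(pts) - min(pts)  # the random scatter has condensed into a cluster
```

Run repeatedly with different seeds and the same thing happens every time: the scatter condenses, with spacing set by the repulsion rule, just as the magnet-and-ball demo settles into a structure.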
Suppose here you were studying earthquakes, and apparently there are just, like, little earthquakes going on, you know, twenty times an hour or so, all down on the Richter scale of, you know, one quarter, or who knows what, but you get enough of these, you get a huge database, and you can begin to graph the frequency of Richter 1 earthquakes, and how often you get the Richter 2 and Richter 3 and all of that, and you graph it, and it's going to look something like this, a distribution like that, where obviously there's a huge number in the number-1 category, and it drops off until the extremely rare at this end. There's a distribution which mathematically can be described by something called a power law distribution, with a certain angle to it, and okay, so here's the relationship between how often you get little teensy earthquakes and the big ones. Now instead, what you do is something completely different from that, which is you look at 50,000 people, and you look at their phone calls over the course of the year, and you keep track of how far the phone call was, how distant the person is that they called, and now you map the distance, the very shortest calls, the very longest, and the frequency, and it's the exact same curve. It's the same power law distribution. Next version of it: this was a study that was done, which was, I don't quite know how these guys did it, I always get lost in the math on these, but in this one, what they did was they took a whole bunch of marked dollar bills, and they started in the middle of, I don't know where, I think it was in Columbia, something, and they were somehow able to keep track of how far the bills had traveled a week later, asking, okay, how many of the bills had traveled no more than a mile, how many five miles, and it was the exact same curve, and people now have been showing this same power law distribution. Here's some of the things that have been shown.
The number of links that websites have to other websites, from the number that have only one link on out: a power law distribution. Proteins: the number of proteins showing certain degrees of complexity, with the numbers dropping off with the same power law. Here's one, which is the number of emails somebody sends over the course of the year.
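To make "power law distribution" concrete: on a log-log plot, frequency against size falls on a straight line whose slope is the (negative) exponent. A minimal sketch, with an arbitrary exponent of 2.0 chosen purely for illustration, generates ideal power-law frequencies and recovers the slope by least squares:

```python
import math

alpha = 2.0                      # hypothetical exponent: frequency ~ size**(-alpha)
sizes = range(1, 101)            # event "sizes" (Richter categories, call distances...)
freqs = [s ** (-alpha) for s in sizes]

# On log-log axes these points lie on a line of slope -alpha;
# a least-squares fit of log(freq) against log(size) recovers it.
xs = [math.log(s) for s in sizes]
ys = [math.log(f) for f in freqs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

That recovered slope is the "certain angle" of the distribution: whether the data are earthquakes, phone calls, or dollar bills, the same straight line on log-log axes is what identifies the same underlying power law.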

Six Degrees of Separation (01:15:11)

This is the one that was done at Columbia; they got access to everybody's email records, I don't understand how they could have done this, but it was a couple of million over the course of the year, and what they showed was the frequency, how many people were making the small number of emails at one end, the same power law. And then there's this totally crazy one, which is, okay, do you guys know the Kevin Bacon, six degrees of separation thing there? Okay, someone went and did a study about this: they got, like, every actor that they could find who was in a film in the last two years, and they got all of their filmographies, and they generated their Kevin Bacon degrees of freedom, degrees of -- I'm going to mix those up, okay -- and they figured out the number for each individual, and then they graphed it. How many people were six degrees of separation away, how many were five, and so on, and it's the same pattern. And this power law business keeps popping up, and what you see intrinsic in that is it's a fractal, because some of the time you're talking about what's happening with the tectonic plates on Earth, and some of the time you're talking about phone calls, and some of the time you're talking about how molecules interact with each other. There's something emergent that goes on there, which is an outcome of some of these simple attraction-repulsion rules, an outcome of the simple pioneer-generation, random-movement ones, and out come structures like these. This winds up being applicable in a very interesting domain biologically. Okay, so now we go back to the traveling salesman problem, and we're having now a cellular version of it in terms of networks. You've got a whole bunch of nodes here, and the choice that each node has to make, in effect, is how many connections it will make in the network to other nodes, and how far should those connections be. Should it only connect with ones right nearby? Should it connect with ones way at the other end? What does it want to do?
That's nonsense in terms of optimizing a system. What do you want your distribution of connections of nodes in a network to be? What do you want to optimize? You want to get a system that has very stable, solid interactions amongst clusters of nodes, but nonetheless occasionally has the capacity to make long distance connections. What you wind up seeing is if you generate a power law distribution in terms of projection distance, so that the vast majority of the nodes in the network have very local connections, but there's still the possibility of the very long ones, you get a system that is the most optimal for solving problems most cheaply -- cheaply in whatever the terms are there -- and this solves it for you. And then you look at brain development. So you've got neurons forming in the cortex, in a fetal cortex. You've got all these nodes, and they have to figure out how to wire up with each other, and how to wire up in a way that is most efficient -- what's most efficient in order to be able to do the sorts of things the cortex specializes in -- and you now begin to look at the distribution of projections, and it's a power law relationship. Most neurons in the cortex have the vast majority of their projections very local, but then you have ones now and then that have moderate ones, even rare ones that have extremely long ones, and you look, and this is how the cortex is wired up. It follows a power law distribution, and what this allows you to do is have clusters of stable functional interactions, but every now and then you can talk to somebody way over at the other end of the cortex to see what's happening. Interesting finding: autism. People have been looking for what's up biologically in autism, and the initial assumption would be there are not going to be enough neurons in some part of the brain, or maybe too many in another.
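The wiring trade-off described here, mostly local links plus the rare long one, can be sketched with a toy network (the node count and the heavy-tail exponent are arbitrary choices, not from the lecture): a ring of nodes gets one extra link each, with the link's distance drawn from a heavy-tailed distribution, and the average path length collapses compared with purely local wiring.

```python
import random
from collections import deque

random.seed(1)
N = 200
adj = {i: set() for i in range(N)}
for i in range(N):
    # local backbone: everyone talks to the neighbor next door
    adj[i].add((i + 1) % N)
    adj[(i + 1) % N].add(i)
    # one extra link whose length is heavy-tailed: usually short, rarely very long
    d = int(random.paretovariate(1.5)) % (N // 2) + 1
    j = (i + d) % N
    adj[i].add(j)
    adj[j].add(i)

def path_len(src, dst):
    """Shortest path length by breadth-first search."""
    seen = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return seen[u]
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return None

# With next-door neighbors only, the average distance from node 0 would be
# about N/4 = 50 hops; the occasional long link shrinks it dramatically.
avg_path = sum(path_len(0, t) for t in range(1, N)) / (N - 1)
```

Almost all the links are cheap and local, yet any node can reach any other in a handful of hops: that is the sense in which a power-law mix of connection lengths buys global reach at close to local cost.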

Human And Animal Behaviour

Cortex of Autistic Individuals (01:19:32)

What appears to be the case so far is there's a relatively normal number of neurons in the cortex, but then some people started studying the projection profiles of neurons in the cortex of individuals with autism postmortem -- very rare to get these -- and you see a power law distribution, but it's a different one. It's a steeper one. What does that mean? In the cortex of autistic individuals, way more of the connections are little local ones. There's far fewer of the long distance ones. What does that produce? Little pockets, little modules of function that are isolated from other ones, and that in some ways is what's going on functionally in someone with autism: there's a lack of integration of a whole bunch of these different functions there, and that's what happens when you have maybe a mutation, or maybe some epigenetic something-or-other prenatally, that changes the shape of the power law distribution. Interesting. There's a gender difference in the power law distribution of wiring in the cortex, which is: in the typical female brain, if this is the power law distribution, in the male brain it's a little steeper. Male brains are more modular in their wiring. What's the biggest pathway in the brain? Okay, we're running out of space here. There it is. There's the brain in cross-section, and you've got cortex here and cortex there, and famously, here's all the cell bodies, and when projections are going from one hemisphere to the other, they go across this huge bundle of axons called the corpus callosum. The corpus callosum is thicker in women than in men. On average, it is thicker in females than in males because the power law pattern is such that there are more long distance connections in female networks, and thus a thicker corpus callosum. The same thing is playing out with connections like this and this, but this is the big honker one. You get a thinner corpus callosum in men.
You get an even thinner corpus callosum in people with autism. Again, that hyper-male-brain notion there of Baron-Cohen's. What you have here is a perfectly normal number of neurons, probably even a perfectly normal number of connections between neurons, but they're more local. They're more isolated in the autistic cortex. There's less integration of function. It's more isolated islands of function there. Okay, more examples of where you can get patterns coming out. Another version of it, which is bottom-up quality control. You start a website, you are selling some product, you are selling books or whatever, and you're asking people to rate the books, and you have a board of experts who read all your books -- they're editors -- and they write your book reviews and recommend which ones should be bought and which ones not. And you get this very successful business going, so that you're selling more and more different kinds of books, and as a result, you need to hire more and more of these experts to read the books and produce their ratings. And eventually that just becomes too top-heavy. And what do you do? The whole world that we completely take for granted now: you have bottom-up evaluations, everybody rates things, and that's the world where you punch in a book on Amazon, or you look at something on Netflix, and when you return it, it will tell you, people who liked this movie tend to like these things as well. There are no professional critics sitting there doing top-down evaluations. This is another realm of expressing attraction and repulsion rules -- I liked this, I didn't like this -- and all you need to do then is throw in elements of randomization, and you've got bottom-up quality control. And that's a completely different way of doing these things. What's the greatest example out there of bottom-up systems with quality control? Wikipedia.
Wikipedia does not have gray-bearded silverback elders there, writing up the Wikipedia knowledge and sending it on down to everyone else. It is a bottom-up, self-correcting system. It is very easy to make fun of some of the stuff that winds up in Wikipedia, which is, like, wildly and insanely wrong. But when you get into areas that are fairly hard-nosed: very interesting study about five years ago that Nature commissioned, which was getting a bunch of experts to look at Wikipedia and to look at the Encyclopedia Britannica, and look at the hard-nosed facts in there about the physical sciences, the life sciences. And what you got was Wikipedia was within hailing distance of the Encyclopedia Britannica's level of accuracy. And that was five years ago, and it has had five years of self-organized correction since then. This is amazing. The Encyclopedia Britannica is, like, written by these 30 elderly stuffy British scholars that they've locked in a room for years, who produce the encyclopedia. And these are the law-givers and the knowledge. And you just let a whole bunch of people loose with somewhat differing opinions about whether Madonna was born in 1994 or 1987 or whatever it is, you throw them all together and you do wisdom-of-the-crowd stuff, and out comes a self-correcting, accurate, adaptive system with no blueprint, just with some very simple local rules. Very local, simple ones: looking for similar patterns shared between different individuals, and self-correcting. Where you get even more efficient versions of that is with a lot of websites where not only does everybody get to put in their opinion, but people whose opinions are better rated have more of a voice in evaluating somebody else. You're putting weighted wisdom-of-the-crowd type functions in there, and out comes incredible accuracy. These are great.
There's one drawback to those systems, though, which is with ones like Netflix, where it tells you you're going to like this if you liked that, that sort of thing. It's a system that's very biased towards conformity. It's not good at spotting outliers of taste and such. What you really want in those systems is: of the movies that are out right now, here are the ones where 10% of the people think it's the greatest movie they've ever seen and 10% think it's the worst movie. That's an interesting movie to see. That's where you want a way of getting bottom-up information about the extremes, movies that run the risk of being controversial.
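The "controversial movie" point lends itself to a small sketch (the ratings below are made-up data for two hypothetical films): two films with similar averages, where only the spread of the bottom-up ratings reveals the interesting, polarizing one.

```python
from statistics import mean, pstdev

# Made-up 1-to-10 viewer ratings for two hypothetical films.
ratings = {
    "crowd_pleaser": [7, 8, 7, 8, 7, 8, 7, 8],
    "love_it_or_hate_it": [10, 1, 10, 1, 10, 1, 10, 1],
}

# A plain average hides the difference; the standard deviation flags
# the film that some viewers love and some hate (here, half and half).
scores = {name: (mean(rs), pstdev(rs)) for name, rs in ratings.items()}
controversial = [name for name, (m, sd) in scores.items() if sd > 3]
```

A recommender that only reports means pushes everyone toward the crowd-pleaser; surfacing the high-variance items is one simple way to break that conformity bias.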

How Do You Wire Some Of These Up? (01:26:40)

Everybody's going to love whatever it is, and that doesn't take a whole lot. This is a way to break the potential for conformity in these bottom-up systems. Nonetheless, overall, it winds up solving a problem without professional critics, without a blueprint, without top-down control. How do you wire some of these up? Back to the cortex: the adult cortex has these power law distributions, and they're great because they optimize. They've got lots of stable local communication, but there's still the ability to do creative long distance connections. That's great. But how do you get that? How does the nervous system wire up this way? It does swarm intelligence. The developing cortex does a swarm intelligence solution. When the cortex is first developing, what you will have is a first generation, a pioneer generation of cells that basically grow processes up toward the cortex surface, like these. And these are called radial glial cells. What they are, they're the ants, the first generation setting down the trail here. They're the first bees coming in. And what you then have: the neurons are the second-generation random wanderers. And what they do is they come in, and as they begin to develop, they have rules that when they hit a radial glia, they grow up along it. They migrate along it, they throw up connections. And you do that with enough of the cortex, which is hundreds of millions of neurons in there, and you get optimal power law distributions.
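The pioneer-then-wanderer scheme can be caricatured in a few lines (the guide positions, step counts, and one-dimensional world are all invented for illustration, nothing like real corticogenesis): "wanderer" cells take random steps until they bump into a fixed "pioneer" scaffold, and nearly all of them end up attached to a guide. Order out of random motion plus one local rule.

```python
import random

random.seed(2)
GUIDES = {10, 25, 40, 55, 70, 85}   # fixed "pioneer" scaffold positions on a line

def migrate(start, steps=500):
    """A wanderer takes random +/-1 steps until it bumps into a guide,
    then attaches there (a stand-in for climbing a radial glial cell)."""
    x = start
    for _ in range(steps):
        if x in GUIDES:
            return x
        x += random.choice([-1, 1])
        x = max(0, min(99, x))       # stay on the line
    return None                      # never found a scaffold

landed = [migrate(random.randrange(100)) for _ in range(50)]
attached = [p for p in landed if p is not None]
```

No wanderer is told where to go; the only rule is "if you hit a scaffold, stop and climb," yet almost every cell ends up organized onto a guide.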

Difficulties of Applying Chaos Theory to Humans (01:28:21)

All you need are some very simple local rules, and out of that emerges an optimally wired cortex. And it's the same simple emergent stuff going on. Okay, so how do we begin to really apply this stuff to humans? Because it winds up being very pertinent in making sense of some of the most interesting complex things about us. So, what's the difference between humans and every other species? Nothing all that exciting. From the neurobiological standpoint, you've got this real challenge, which is: you look at a neuron from a fruit fly under a microscope and you look at one from us, and it's going to look kind of the same. Looking at a single neuron, you can't tell what species it came from. We have the same kinds of neurotransmitters that a worm uses in its nervous system. We've got the same kinds of ion channels, the same sort of excitability, the same action potentials; you know, minor details are different. We have not become humans by inventing new types of brain cells and new types of chemical messengers. We have the same basic off-the-rack neuron that a fly does. So we have very similar basic building blocks. What's the difference, of course, is we've got 100 million of them for every neuron that you find in a fly brain. And out of that come emergent properties. Great story. Garry Kasparov -- Kasparov, I never remember which syllable to emphasize. Grandmaster, Russian chess grandmaster in the '90s, and apparently he's rated as one of the strongest of all time. And he was the person who wound up participating in this really major event, which was this tournament with this chess-playing computer that IBM had built, called Deep Blue or Big Blue or Old Yellow or -- what was it called? Deep Blue. Deep Blue. And they played against each other, and apparently what happened was Kasparov won the first game, perhaps, and the computer was able to modify its strategy, and then proceeded to mop the floor with him. And this was a landmark event in computer science.
This was the first time that a computer had beaten a chess grandmaster. Amazing event. Not surprisingly, afterward Kasparov was all bummed out and depressed, and his friends were trying to make him feel better. And they go to him and they say, "Look, all you got done in by is quantity. All you got done in by is the fact that that computer could do a whole lot more computations than you could in a set amount of time." I'm told apparently chess grandmaster types can see five, six moves ahead, and they can intuit where the interesting ones are. And Deep Blue could calculate every single possible outcome, like, seven, eight moves in advance, and every time it would simply pick the one that was the best outcome. It was like generating solutions to the traveling salesman problem. Kasparov didn't have a chance, because the computer could simply generate enough solutions to pick the right one. So all of them are saying to him, "You should not be depressed, because all that computer had going for it was quantity." And what he said in response was, "Yeah, but with enough quantity you invent quality." And that's the exact equivalent of one ant making no sense and 10,000 making lots of sense. With enough of these elements here, you optimize. We do not have fancy neurons that are different than any other species'. We've just got more of them. And simple nearest-neighbor rules, and you throw a million of them together and you get a fruit fly, and you throw a hundred billion of them together and you get poetry and you get symphonies and you get theology and you get all of that. And it's the same building blocks; with enough quantity you invent quality. And this is the punchline that came out of really important work a few years ago. Okay, we're now, what, 10 years I think into having the human genome sequenced. And about five years ago they sequenced the chimp genome. The soundbite everybody learned from whenever back when is that humans and chimps share 98% of their DNA.
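Kasparov's "with enough quantity you invent quality" can be illustrated with brute-force game search, using the toy game of Nim rather than chess, purely for illustration: exhaustively checking every line of play, with no cleverness at all, rediscovers the game's known theory.

```python
from functools import lru_cache

# Toy game (Nim): players alternately take 1-3 sticks; taking the last stick wins.
@lru_cache(maxsize=None)
def wins(sticks):
    """True if the player to move can force a win, found by exhaustively
    searching every possible continuation (quantity, not insight)."""
    return any(not wins(sticks - take) for take in (1, 2, 3) if take <= sticks)

# Brute force rediscovers the classical result:
# positions that are multiples of 4 are losses for the player to move.
results = {n: wins(n) for n in range(1, 21)}
```

Nothing in `wins` knows any theory of the game; perfect play simply falls out of checking every continuation, which is the Deep Blue point in miniature.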

Human/Chimp DNA Comparisons (01:32:46)

So finally you had these two gigantic rolls of printout -- here's the entire human genome and here's the entire chimp one -- and somebody could finally sit there and compare them, and see, indeed, is it 98% shared? And that winds up being the answer, even though what that number actually means is debatable. But that brings up the question, of course: what's the 2%? What's the 2% that differs? And what has come out of that have been some very interesting findings. Some were mentioned earlier on, which is that they are disproportionately in genes coding for transcription factors and splicing enzymes -- okay, that amplification-of-networks stuff. And the differences preferentially sit in non-coding regions -- all the stuff from back in that lecture about getting macro-evolutionary changes. That's how you get a different species coming out. But how about other types of genes? What were some of the key differences? Here was one big difference: we have about a thousand fewer genes for olfactory receptors than chimps do. They've been inactivated in us. They're called pseudogenes in us. They don't express, and that's about half of the difference in the genome between humans and chimps. If you want to turn a chimp into a human, you're halfway there if you just give it a lousy sense of smell. That's half the genetic differences. What other differences are there? There were ones having to do with morphology, bone development -- probably bipedalism versus being a partial quadruped. There were ones having to do with hair development, which is why chimps have all the hair on them, and only those disturbing people with the hair on their shoulders have that much hair. So that's that. There are differences in some reproduction-related genes -- you don't want to mate with them, all of that. And then you say, where are the genes having to do with the brain? Are there any differences there? And there turned out to be very, very few. And they turned out to be very, very logical.
The handful that differ seem to have something to do with cell division -- with how many rounds of cell division these cells go through. And what you have is: the human versions go through more rounds. And calculations have been done looking at the average number of neurons that each progenitor cell generates, say, during cortical development. And if you start with the number of neurons that you find in a rhesus monkey brain and add in three or four more rounds of cell division, you get a human brain in terms of the numbers. Qualitatively, it's the exact same neurons; all the difference is quantity. And you put enough of these together and you go from tools meant to get little termites out of a mound to human technology.
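The arithmetic behind "a few more rounds of cell division" is worth making explicit. Under a deliberately toy assumption of symmetric division (each round doubles the pool; real corticogenesis is far messier, and the round counts here are invented), three or four extra rounds multiply the output 8- to 16-fold:

```python
def final_count(progenitors, rounds):
    """Toy model: every round of symmetric division doubles the pool."""
    return progenitors * 2 ** rounds

base = final_count(1, 30)    # some hypothetical number of division rounds
extra = final_count(1, 34)   # the same program run for four more rounds
fold = extra / base          # 2**4 = 16-fold more neurons from 4 extra rounds
```

Because the growth is exponential, a tiny tweak to a stopping rule, not any new kind of neuron, is enough to change the final count by an order of magnitude.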

Genetic influences (01:35:32)

The difference between us and them is one of quantity: throw enough neurons in there, and out begin emerging all these distinctively human things. So, what does that do? For one thing, that begins to underline what the main genetic differences in the brain between us and, say, chimps are about: genes that free you from genetic influences. Because those genes are not specifying what sort of cells you generate in larger numbers in the brain. They're not specifying connections. They're just specifying larger quantity, and all this stuff goes to work, and out comes a human brain instead of a chimp one. Okay, so what does this whole subject get us -- the chaos stuff, the complexity and emergence stuff? What are some of the themes that come through with all of it? The first one is this emphasis on quantity. You want to get a very, very fancy system? You don't necessarily have to invent a new type of ant or a new type of zero or one in a binary system or a new type of neuron. You can do it with quantity. You get quality, you get excellence, you get complexity, you get adaptive optimization with huge numbers of elements with very simple rules. What's the next theme that comes out of it? One that is totally counterintuitive -- once again, like this whole subject, one that shoots reductionism down the drain -- totally counterintuitive: the simpler the constituent parts, the better. Fancy, complicated ants that are specialized and have all sorts of different rules are not going to generate swarm intelligence as effectively as do systems with simpler elements. The more simple the building blocks are, the better. Something else that is intrinsic to all of this, which runs counter to all sorts of rational intuitions: more random interactions make for better, more adaptive networks. You want lots of random noise thrown in there, because that's how you stumble onto optimal solutions. Randomness is a good thing.
Remember, right at the time that we're making new neurons in the cortex, that's when you induce the transposable events in the genome; that's when you juggle the DNA, producing randomness there. Randomness is a good thing. Randomness adds to the excellence of networks. What else? The next thing that comes out of it is the power of gradients of information. Things that guide you -- you a cell, you an ant, you a commercial district. Things that can guide you towards things, things that can repel you: gradients of attraction and repulsion. And that's exactly what's going on. There is a gradient in magnets when they're this close, and the pull that they have drops off as they move apart. Gradients provide a lot of the optimization in these systems. Very, very important as well are nearest-neighbor interactions. These are not rules about how you're interacting with somebody off in Chicago; these are all about how you interact with another ant, another bee when you bump into it, a glial cell -- local interactions with simple rules. Something else -- another one that runs totally counter to intuition -- which is: generalists work better in these systems than specialists do. Generalists are more likely to come up with these adaptive outcomes. Okay, so what does all of this mean on a larger level? What I think is going on is that this is where the complexity of human brains and human behaviors comes from: these emergent properties. And this is now a generation or two into people thinking about this stuff. And it is incredibly hard to think about. And most of the work I do and my peers do is reductive stuff that's very limited. And, like, I don't understand how to think about it in this other way. And the odds are you guys are not going to be good enough at it either.
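The claim that randomness helps you stumble onto optimal solutions can be sketched with a hand-made toy landscape (the numbers below are arbitrary): a purely greedy hill-climber gets stuck on the first local peak, while throwing in random restarts, pure noise, reliably finds much higher ones.

```python
import random

random.seed(3)
# A hand-made rugged landscape: several local peaks, one global peak (value 9).
landscape = [1, 3, 2, 5, 4, 6, 3, 7, 2, 9, 1, 4, 2, 6, 3]

def hill_climb(start):
    """Purely greedy: step to the better neighbor until no neighbor is better."""
    x = start
    while True:
        nbrs = [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]
        best = max(nbrs, key=lambda n: landscape[n])
        if landscape[best] <= landscape[x]:
            return landscape[x]      # stuck on a (possibly local) peak
        x = best

one_try = hill_climb(0)              # deterministic start: stuck at height 3
# Random restarts: noise alone lets the system find the high peaks.
restarts = max(hill_climb(random.randrange(len(landscape))) for _ in range(50))
```

The greedy rule never changes; all that's added is random starting noise, and the best outcome found jumps from a poor local peak toward the global one.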
You're good enough that you were the first generation growing up that knows, if you want to find out if you're going to like a movie or not, you don't need to have somebody with expertise and a label on their forehead and a blueprint and top-down control. You don't need critics anymore. You have bottom-up systems. You guys are the first generation growing up thinking in that way. What's the consequence of that? You're beginning to get better at this stuff. And my guess is it's not until, like, your grandkids that you're going to have people thinking so much in terms of emergent systems that we're finally going to be able to figure out what the brain is doing. And where you see that, there are all sorts of things that can happen. If there had been more bottom-up communication in the trenches in World War I, they would have stopped the war. All these emergent things, bottom-up. We've now had revolutions -- when Marcos was overthrown in the Philippines back when, that was basically bloodless. When the Czech revolution occurred, it was called the Velvet Revolution because there was no violence.

Chaos and emergent properties (01:40:40)

All they had to do was get enough people in the town square in the capital and paralyze the country and they took it over. I will predict that within our lifetime there is going to be a revolution in some country at some point where nobody leaves their living rooms.

Chaos And Attractors

Strange attractors (01:41:00)

All they do is do something online with some emergent bottom-up thing, and they collapse the government and do it in, and no one will have to leave their living room. Because it will be all emergent things coming up. The final couple of points here. First one is all that chaotic strange attractor stuff. All of us spend a lot of time thinking about how we're not quite up to the ideal this or that. We're not at the ideal appearance. We're not at the ideal intelligence. We're not at the ideal choice of perfumes. We're not at the ideal anything. What strange attractors and chaos show you is that the notion that there is an ideal -- that there's an essentialist optimum of whatever -- is a myth. We are all deviating from the optima, because the optima is just an emergent, imaginary thing. The other final point is something that you guys are going to be much better at than any previous generation: if you grow up thinking, when I want to find out if a movie is good or not, I do bottom-up stuff, you are growing up with a mindset that you don't need blueprints. You don't need top-down blueprints. And implicit in that, when you look at how you can get complex, adaptive, optimized systems without blueprints, is the fact that if you don't need blueprints, you don't need somebody who makes the blueprints. And it will be a lot easier to comprehend that as being the case. You don't have to have a source of top-down instruction if you don't need a blueprint. Okay, so I don't know. I'm talking about something.
