Hunger for Wholeness
Story matters. Our lives are shaped around immersive, powerful stories that thrive at the heart of our religious traditions, scientific inquiries, and cultural landscapes. As Albert Einstein claimed, science without religion is lame and religion without science is blind. This podcast will hear from speakers in interdisciplinary fields of science and religion who are finding answers for how to live wholistic lives. This podcast is made possible by funding from the Fetzer Institute. We are very grateful for their generosity and support. (Image credit: X-ray: NASA/CXC; Ultraviolet: NASA/JPL-Caltech/SSC; Optical: NASA/STScI [M. Meixner]/ESA/NRAO [T.A. Rector]; Infrared: NASA/JPL-Caltech/K.)
How to "Fall Up" into a Collective Future with Terrence Deacon (Part 2)
In this episode of Hunger for Wholeness, Sr. Ilia Delio continues her conversation with neuroanthropologist Terrence Deacon. Together, they explore how we might “fall up” into a collective future—probing the limits of machines and minds, and why complexity in nature often emerges not by work, but by play.
What happens when we mistake computation for thinking? When our tools become mirrors and we fall for our own reflection? From Plato’s worry about writing to today’s large language models, Deacon asks how outsourcing our capacities can both diminish autonomy and unlock astonishing, shared intelligence—raising new questions about freedom, dependence, and human flourishing.
Later in the episode, Sr. Ilia and Deacon trace “falling up” through biology—hemoglobin’s elegant duplications, altruism, and the rule of play—before turning to the noosphere: a future where persons remain distinct yet increasingly interdependent. Along the way they weigh the risks of “queen bee” totalisms against the promise of a freer, more cooperative species life.
ABOUT TERRENCE DEACON
“Almost everything we do is with respect to something that doesn’t yet exist… All of our actions… are really about that absence. I actually think that this is the essence of what it means for something to be alive.”
Professor Terrence Deacon is Distinguished Professor emeritus at the University of California, Berkeley, and has previously held faculty positions at Harvard University, Harvard Medical School, and Boston University. His research in comparative and developmental neuroanatomy has focused on the human brain, using physiological, quantitative, and cross-species methods. He is the author of The Symbolic Species: The Coevolution of Language and the Brain (1997), which explores how language and the human brain evolved together, and Incomplete Nature: How Mind Emerged from Matter (2012), which examines how thermodynamic, self-organizing, semiotic, and evolutionary processes gave rise to life and mind. He is currently working on a new book, Falling Up: How Inverse Darwinism Catalyzes Evolution, which explores how the relaxation of natural selection and subsequent degenerative processes have paradoxically contributed to the evolution of increasing biological complexity.
Whether you're enjoying Hunger for Wholeness or see ways we can improve, we’d genuinely value your feedback. Your insights help us serve our listening community with greater depth and clarity. Visit christogenesis.org/feedback to share your thoughts. Thanks for being part of the journey.
A huge thank you to all of you who subscribe and support our show!
Support for A Hunger for Wholeness comes from the Fetzer Institute. Fetzer supports a movement of organizations who are applying spiritual solutions to society's toughest problems. Get involved at fetzer.org.
Visit the Center for Christogenesis' website at christogenesis.org/podcast to browse all Hunger for Wholeness episodes and read more from Ilia Delio. Follow us on Facebook and Instagram for episode releases and other updates.
Robert: Welcome back to Ilia Delio’s conversation with Terrence Deacon. In the last episode, they explored the role of abstraction in human evolution. Today, the focus shifts from abstraction to technology, as Ilia asks whether machine life—computers and AI—is an inevitable next step for humanity. Terry shares insights from his latest work on evolution and how, as he puts it, we are “falling up.” Later, they discuss what child-rearing reveals about biological complexity and the role of play in evolution.
Ilia: There's an erroneous understanding of AI, which I've called biological extended intelligence. What we are, in a sense, we're reproducing or simulating in another medium. I mean, that's about as simple as it gets, you know. It doesn't have a mind of its own. It doesn't have consciousness. It doesn't have "soul" or whatever. But the fact is we are treating it as if it does.
Terry: That's right.
Ilia: And so it is a question. It goes back to that notion of abstracting things, you know. It becomes another abstracted idea or concept or mechanism. And I am concerned about, and maybe this is my question to you, whether we can evolve ourselves out of biological existence. I have two questions, actually. My own question is about the capacity of the human brain: have we maximized it? Are we at the end point of what we can be biologically, and therefore we have to reproduce ourselves in these machines? Or are the machines stifling our as yet infinite capacity to biologically emerge into something higher?
Terry: Terribly important questions. We probably don't have the answers. I think one of the things we can recognize is that, of course, human brains are limited. What we can do, how we work, and in fact, a piece that I've been working on and will turn into a later book if I ever get done with this one, the current one, one of the subtitles is "Why Computers Don't Think and Brains Don't Compute."
Ilia: Oh, that's a great title.
Terry: In fact, the story is that these are very different processes. They work in totally different ways. But what that means is that if we confuse them as doing the same thing, we won't figure out how to use them together, how to meld them synergistically. Our capacities are not these machine capacities. Now, most of the tools we've built are nice prostheses; they extend some capacity of mine because I can't do that myself. Even my shovel: my hands aren't stiff enough to push through the dirt, but a shovel is, because it has features that I don't have. And of course, that's the way it's been with the abacus and calculators and writing, for that matter, and so on. We're doing that with computing now, but we've got to recognize the difference in order to understand how the symbiosis is going to develop. It's not an extension of us.
I like your use of the term mirror. I think we're in a narcissist's dilemma. Narcissus dies looking at his own reflection, thinking, "That's another person like me, and I'm going to fall in love with this other image." The image, of course, is this totally superficial surface of what a living being is. But Narcissus is so convinced by it that he stares and stares and stares until he dies. That's a possibility for us. The Narcissus future is one of the possible scenarios for how we will interact with AI. If we become too fascinated with it as though it's a reflection of us, we'll miss the fact that it is just a reflection. A reflection has no depth, has no life, but it's a useful tool. We're really good at using mirrors these days. What do we use them for? To take care of our own bodies. But we don't confuse them with ourselves.
Ilia: There's something that's driving AI at this unprecedented rate of development. As Ray Kurzweil and others have said, it's on this exponential trajectory. And if you just look at what's going on in the chip industry and the cloud industry, it's frenetic almost. I know it's a power thing, definitely it's about power. We get all that. But the fact is, as soon as it comes out, we're right there, we have to have it. There's something that's creating this kind of existential need: without it, we're not going to make it, we're not going to be more powerful, we're not going to be smarter, we're just going to fall behind and die off the earth.
So I am interested in what is driving this. And yeah, it could be purely Darwinian in the sense of like the strongest survive or the smartest survive or this type thing. But there's something you know, you talked about the narcissistic aspect or the mirror, like we're seeing ourselves. I've also thought that AI is sort of like I mean, cyberspace is sort of like the collective unconscious, you know? It's sort of like your psyche before you with all the possibilities, all the archetypes that Jung might've named, they're right there, whatever you want to be. You know, you could be a doctor, lawyer, Indian chief, you name it. You can be on the light web, the dark web, the medium web. And so it's got an infinite capacity to meet our immediate needs and desires. And that's problematic. You know, if absence is what creates form, ultimately in the sense of what you're arguing for, AI is just the opposite of absence. It's everything, in a sense, potentially present at the touch of a button.
Terry: You know, I begin the current book I'm working on now with this bit from Plato's Phaedrus, in which Socrates worries that the advent of writing will make people stupid.
Ilia: For the sake of our listeners, tell us a little bit about the book you're working on.
Terry: It's a nice segue from what we've just been talking about. In part because the story he's telling is, of course, that people in those times could memorize an entire Iliad and repeat it. They had this incredible capacity to learn and remember and to pass things on. Well, if you offload this onto writing, you don't have to remember all that stuff. So Socrates is worried that people are just going to get stupider and stupider and stupider because they've offloaded this.
The point I make at the beginning of this new book is that, yeah, that's true. Those of us today who are so based upon book learning probably can't make it in the open world, in the world itself, as well as people in Socrates' day could. They probably knew more about how to find food, how to find shelter, how to do all of that. But there's something different, and that is, I know what Socrates was thinking. I know what Aristotle was thinking. I know what Newton realized. That wasn't lost, just because it was written down.
Now, this is, of course, what is now incorporated into these large language models. Everything that's been written in English and in many now, many more languages, but certainly in English, is pretty much there. It's the collective unconscious, if you will, in some sense, with all the good and bad that comes with a collective unconscious. But what's happened is, yes, we gave up some autonomy with writing, but we gained something by virtue of the collective relationship that developed.
I then go on to say, look, it's the same with the cell phone. How many of us still remember the phone numbers of our family members, the phone numbers of our best friends? If our cell phones are suddenly no longer available to us, we're bereft. We can't even make it in the social world any longer. Or if the internet goes down, what happens to us? We're now totally dependent upon this larger thing, but these capacities have made so many other things possible. When I think of the James Webb Space Telescope, it's probably based upon millions of minds working on it.
The accumulation of technology over the last century has made this possible, that we're now able to peer back almost to the beginning of the universe. This is a prosthesis beyond anything anyone could have imagined even a decade ago. So although we do lose autonomy, we give up some of our capacities. One of the things that happens when we get new tools that can do things we don't do so well is that we just sort of say, "Okay, do it for me." It's the natural laziness of life. It's not just humans, but all life: whenever something is available, take advantage of it. Well, that's what we're doing with all of this stuff, and certainly with AI. But one of the things about that process is it makes you more dependent and more interdependent, or more codependent, where we have to work together. We have to work together to keep the cell phone towers working. We have to pay a little extra money to do this. But the capacity that's come with it is incredible.
Robert: Evolution is not a linear story. And our current place as a technological species is anything but straightforward. Next, Ilia asks Terry about biological complexity and what it reveals about evolution's meandering, even playful, path. Later, they explore the noosphere and consider how humanity might still evolve into a free, unified, and convergent species, despite the warning signs of our time.
Ilia: Where does complexity fit into this? You know, there's a lot of talk about complexity these days, and some have posited it as a fourth factor in Darwinian evolution. So, where do you see complexity here?
Terry: It's what we've just been talking about. The main title of my new book is "Falling Up," and it's about how complexity happens not by work but by play. So what is complexity in biology? It's lots of parts working together synergistically: the more combinations of parts working together, the more capacities they produce. But how do you get the synergy? How do you get things to work together without planning ahead? Well, one way to do it is to duplicate things, allow a little play, and that play will allow you to select among the best interactions, not just the best functions, but how these things work together.
Now, I'll give you a simple example. And I think it's one of those that was, one of the earliest ones discovered, which was the hemoglobin in your blood, in my blood. Hemoglobin is, in our blood, the result of two genes. One produces alpha hemoglobin, and the other produces beta hemoglobin. The hemoglobin in your blood is the result of four proteins, two alpha proteins and two beta proteins, each of them produced by a different gene, that come together in a four-part molecule. We call it a tetramer. They just get stuck together, but they get stuck together by what we call hydrogen bonds. They're not strongly held together, but by virtue of having four of them, we can carry more oxygen, and it's held a little more loosely, and we give it up to our cells a little more efficiently.
So having a four-part system like this is really useful. Well, it turns out that alpha and beta, and we look back at the genetics, there was originally just one hemoglobin. And at some point, early in mammalian evolution, you get it duplicated. But in fact, if you go back farther, hemoglobin was actually a duplicate variant of myoglobin. Myoglobin is the thing that makes your muscles red. Now, the redness is the result of oxygen and basically rust, because each of these molecules holds an iron atom that carries oxygen. And so the redness of blood in your muscles is basically a rust effect. But what happened is that myoglobin worked fine if you could get oxygen just by diffusion. If you're small and you're in water and you can just sort of diffuse oxygen in, you just need it in your cells. But as you get bigger, you can't just diffuse it in, so you gotta have it flowing through your blood, through some canal system that carries it from place to place. But you can't use myoglobin to do that. You need something that dissolves in blood.
So myoglobin duplicated, and it began to vary in its functions. The variant that became more soluble was something that could be carried in blood. Then as organisms get bigger, you get more problems, and so you get the hemoglobin duplicates. Now it turns out that in mammals, in placental mammals in particular, you and I being examples, the beta version of hemoglobin also began to diversify. It duplicated a few times. There's a new problem that this made it possible to solve: placentation and gestation. Why? Well, because as an embryo and a fetus develop in the womb, they've got to get oxygen from where? From mother's hemoglobin. But mother's hemoglobin wants to hold on to its oxygen and give it to mom. The key is that for the fetus to get oxygen, its hemoglobin has to have a little stronger affinity for oxygen.
So what happens is the beta hemoglobin goes through three different stages during gestation. Early on it has one affinity, a different one later on, and then you're born and you give those up and go back to standard beta hemoglobin. So the evolutionary history of hemoglobin is a history of progressive duplications and then, you might say, experimenting with the combinatorial relationships: the differences in solubility, where it is in the body, and finally the strength of its oxygen affinity. What's happened is that duplication has allowed new synergies to be discovered.
The hemoglobin story is one of progressive complexification by falling up. It's not that it was designed to do all these things, but by allowing it some degree of error tolerance, you could begin to explore these combinations and find the best ones. That's how complexity gets developed. Now, the key is, you asked me early on in this discussion, might this apply to social processes? I think absolutely, because this is a question that evolutionary biologists have been struggling with for a while: the problem of altruism, of pro-social behavior. How does it come about? How do we avoid warring factions between people that believe different things or that want other things?
The answer is that biology figured out how to do this by playing. So by giving up autonomy, by saying, "Oh, hey, I don't need to do everything myself, but I'll need you to stay with me, for you to do your part and me to do my part." And that means we have to maintain the synergy together. In biology it produces larger and larger complexes, bigger and bigger organisms: societies that have come together to be individuals. And in human history, I think it has the same possibility, again, using this logic of play, this logic that's actually based upon a degenerative capacity. We want to give up capacities if we can take advantage of them in an easier way, because that gives us more possibilities. But as soon as we do that, we need each other. We have to cooperate. We haven't quite figured out how to do that at a larger level.
Ilia: I think your paradigm, at the biological level, speaks about hope built into life itself. In other words, if there is this capacity to duplicate and to move on to new ways of doing things, even in hemoglobin or whatever we're talking about; as a scientist, you focus on particular pathways, but bringing that up a notch or two, we are saying that we're not conditioned or destined or determined by historical circumstances alone, and that even on the larger levels of complex human life, the same principles must hold.
That playfulness and cooperativity and synergism are the means by which we, in a sense, rebuild and look toward new levels of life. Which is why, going back to our AI discussion, because I do want to end by talking a little bit about the noosphere, the global superorganism concept that Clément Vidal and Francis Heylighen and yourself and others have been talking about makes a lot of sense as our hopeful next step: we are actually more together than we are alone. And that's just a simple biological fact, from the smallest amoeba up to the most advanced primate.
So cooperativity, or relationality, frees us. So it's about freedom in nature as well: something is able to be itself more fully if it doesn't have to vie for sheer survival in the harsh conditions of an environment. So your principle of inverse Darwinism and falling upward; that's a term, by the way, that was used by Ken Wilber and then Richard Rohr to talk about moving into higher levels of consciousness, what the mystics would do in moving to different levels. And that's kind of interesting; that's another discussion. But you know, there is something about being relieved of the pressures of survivability and being able to focus on, or be more aware of, our wider wholeness.
Let's bring this—because you have been involved in the Human Energy Project and there's been a lot of discussion on the noosphere, that term that Teilhard coined to speak of this new level of mind now, especially with artificial intelligence. Where do you see this human energy on the level of the noosphere? Where do you see it, pros and cons, and where might we be going in this new level of life?
Terry: Well, as a biologist, of course, I have lots of examples in evolution to look at where this happened. Sometimes it happens in ways that we might not feel comfortable with. So for example, in multi-celled organisms like you and me, my cells don't have a lot of autonomy. They pretty much have to do what they have to do. And in developing embryos, there's something we call apoptosis: cells just sort of spontaneously decide, "Okay, now it's time for me to die, to give myself up so that we can produce something else." In the social insect models, like ants and bees, a bunch of them give up their autonomy.
So the worker bee gives up her autonomy to reproduce, leaving reproduction to the queen, and even more than that, is willing to kill herself by stinging a potential predator, as if to say, "I'm going to sacrifice myself willingly. I want to do this. I feel compelled to do this." Of course, she's not thinking that that's what's going on, but nevertheless, these are models of how evolution has coalesced into higher-order systems. Not all of them are going to be attractive. So the first question you want to ask is, which ones are attractive and which are not, and how did they come about? That's what we need if we're going to really understand how to move forward into a realm in which human flourishing is advanced, as opposed to just complexification and consolidation.
What might we think about? Well, there are a lot of multicelled systems that are not locked in like I just described. Plants, animals, and many fungi that are multicellular oftentimes can't any longer exist as single-celled organisms. For us, a good example: we go through a single-celled stage, the fertilized ovum. That's a single-celled stage, but it's totally parasitic; it can't make it on its own. But there are about twenty other variants of multicellularity in evolution that have a phase in which the individuals can make it on their own at times, and other times when they become organized into larger multicellular units in which they can take advantage of their cooperativity to produce something.
My sense is that that's probably a bit more like what we would like the future of a noosphere to be: one in which human beings have not lost all autonomy, are not just in service of some central queen bee, and in fact retain the capacity to maintain some. What I do, then, is look at how these things have evolved, to understand how these differences came about and how the falling up story gets played out differently under these circumstances. I think it's a great set of clues for understanding how we might want to tweak this process of moving forward into a larger collective.
Ilia: I actually think though instead of becoming autonomous we're going to become more free. I think we're going to become freer as persons. Like I think the greater the level of relationality or interdependency, the freer we are. Because in a sense, we have the opportunity then to participate in something more than ourselves. And that actually will help us become more of ourselves. Because it's just that who you are and who I am, that's our distinctiveness.
And yet we're part of something that we're connected to. When you're by yourself and not connected, you have the responsibility to do everything. You have to feed yourself and clothe yourself and educate yourself and whatever it is. That aloneness, that oneness that the autonomous subject connoted, actually was not helpful. It's not a world built for one. It's a world built for the many, I think, in the sense that it is meant for the flourishing of life.
Terry: Well, and this is, again, I think the lesson we learned from Plato. Yes, we have given stuff up. To use cell phones, we have given stuff up. But we haven't given up everything; we've gained something because of what we gave up. The key is to understand that it's not autonomy or slavery. There's a trade-off: certain aspects we will give up, as all of us in the West have given up a lot of the capacity to, as you say, feed and clothe ourselves. We depend on each other to do this. And in fact, when that breaks down, it's bad news. We just don't have that capacity; it's completely degraded, for the most part. The other problem here also has to do with the third world, and that is that the psychologist Abraham Maslow created this idea of the hierarchy of needs.
Basically, if your needs aren't met, and you have to be competent and autonomous because you have to do everything for yourself, you pretty much don't have the capacity to cooperate. You don't have the capacity to think about others. You really have to just focus in; maybe you think about your extended family or your immediate friends, and that's it. And what happens is that we become more, you might say, cooperative, more synergistic, willing to be part of a larger group, the more we don't have to be autonomous. And so the real problem is that if we keep the world in such a way that a large fraction of people aren't able to play, don't have error tolerance, then there is no possibility to fall up into a better, more interesting situation. So to get there, to do it right, we're forced into this other, more egalitarian move. Now we have the ability to see this, to analyze how this happens in the rest of the world.
It didn't require cognition for evolution to figure this out, or prescience of the future, anticipation. It falls out of this process spontaneously. But that's in part because to be alive, you have to stay ahead of the second law of thermodynamics. And that means you can't just keep up; you have to produce extra. You have to produce redundancy. You have to have a little bit extra. That's what made it possible. We are now in a world in which we can make extra, in which we can produce more than we need, where we can now play a little bit. Not everybody in the world can do that. So to some extent, to move in the direction we're thinking, a direction for human flourishing at a larger level, you need first of all to be able to provide that for most of the world. That's sort of a prerequisite to move in this direction. Otherwise, it's gonna be queen bees. And those queen bees are gonna be totalitarian, and they're gonna be big corporations, those with money and influence.
Ilia: Yeah, that is not a bright future. So I do think, and what you're saying is, there's a capacity in the noosphere, in this new level of AI-complexified life. But I do think that as we become freer, those of us who have these opportunities also have a moral obligation to use our freedom wisely.
Terry: Absolutely true.
Ilia: To be attentive to the needs of the world. I think of what's going on in Gaza today, where there are people literally starving to death who can't even function just to get up in the morning and do some work. So yeah, all is not well, but we have the capacity to improve it. And that's why our freedom matters: not only in terms of the creative gifts we use, but politically, in the decisions we make and how we work collectively for the good of the whole.
Robert: Next time, I'll have the pleasure of speaking with process philosopher and musician Donald Viney. Thanks to Terry Deacon, and a special thanks to the Fetzer Institute. I'm Robert Nicastro. Thanks for listening.