Hunger for Wholeness
Story matters. Our lives are shaped around immersive, powerful stories that thrive at the heart of our religious traditions, scientific inquiries, and cultural landscapes. As Albert Einstein claimed, science without religion is lame and religion without science is blind. This podcast hears from speakers in the interdisciplinary fields of science and religion who are finding answers for how to live holistic lives. This podcast is made possible by funding from the Fetzer Institute. We are very grateful for their generosity and support. (Image credit: X-ray: NASA/CXC; Ultraviolet: NASA/JPL-Caltech/SSC; Optical: NASA/STScI [M. Meixner]/ESA/NRAO [T.A. Rector]; Infrared: NASA/JPL-Caltech/K.)
Who Is “Generation AI”? with Gregory Stock
In this episode of Hunger for Wholeness, Robert Nicastro welcomes listeners back into Sr. Ilia Delio’s ongoing conversation with biophysicist and technologist Gregory Stock, whose new book, Generation AI and the Transformation of Human Being, has just been released.
Together, Ilia and Greg open a doorway into the questions at the heart of “Generation AI”—not simply what artificial intelligence can do, but what it may be doing to us. As AI becomes woven into daily life and the broader evolutionary story, they explore how the human person is being reshaped: our attention, our desires, our sense of progress, and the subtle ways technology can begin to form the contours of relationship and meaning.
Throughout the conversation, Ilia presses a concern that runs beneath the headlines: Can AI make it easier to be human—or does our relentless pursuit of progress come with a cost? And as their dialogue deepens, Greg raises the provocative question: will we fall in love with AI? (He thinks so.)
ABOUT GREGORY STOCK
“The greatest frontier in human evolution may no longer lie outside us, but inside: in the choices — and designs — we make for ourselves.”
Gregory Stock, Ph.D., is a scientist, writer, entrepreneur, and public communicator whose work represents a deep exploration into what it means to be human in the 21st century. During his career, he has developed the foremost paradigm for personal inquiries into values and beliefs, which has significant implications for humankind as it faces the profound shifts brought by silicon and biotech. Today, Greg serves as an expert speaker and advisor to biotech and healthcare companies and to non-profits at the cutting edge of human health.
The Center for Christogenesis is in the midst of our Winter Fundraiser as we celebrate ten years of exploring faith, science, and the promise of a new future. At a time when organizations vital to our shared future are losing support, your contribution affirms that our mission matters. Donate today at christogenesis.org/donate.
Alongside his new book, Generation AI and the Transformation of Human Being, Gregory Stock has also built something to keep the conversation going beyond the book and these podcasts.
It’s called Only Human: an online space where, every day, everyone gets the same reflective questions and offers their answers. Then you can see how others from all over the world are responding to the same questions. Go to app.onlyhuman.us to sign up.
A huge thank you to all of you who subscribe and support our show!
Support for A Hunger for Wholeness comes from the Fetzer Institute. Fetzer supports a movement of organizations who are applying spiritual solutions to society's toughest problems. Get involved at fetzer.org.
Visit the Center for Christogenesis' website at christogenesis.org/podcast to browse all Hunger for Wholeness episodes and read more from Ilia Delio. Follow us on Facebook and Instagram for episode releases and other updates.
Robert: Welcome to Hunger for Wholeness. Today, Ilia returns to conversation with biophysicist and technologist Gregory Stock, whose new book, Generation AI and the Transformation of Human Being, has just been released. This episode opens a doorway into that work as they explore pressing questions. How is AI reshaping the human person? Is there a cost to our relentless pursuit of progress? And later, Greg wonders, will we fall in love with AI?
Ilia: Greg Stock, it's so great to be with you again. This is sort of part two of our discussion on AI, the noosphere, and a new emerging humanity. Since we had our first discussion, let me and our listeners know: what has unfolded for you in writing this book, and where are you with the book?
Gregory: Well, it's completed now. It will be released, I believe, December 15th, and it's about Generation AI: the children and humans growing up in an immersive AI environment. I think there are a lot of people talking about what AI will become, where it's taking us, all these sorts of things. And it's very rare to look at what it's going to do to us. And it's going to be so profoundly transformative. The closest I've seen is a variety of books that look at, well, how are you going to thrive in the job market by taking advantage of all the AI tools? Or how can your children do better in education when they have all these AI tools? And how will this affect relationships and social media? Which is, to me, a very superficial level. I believe it is really going to transform the nature of ourselves. So that's what I would like to focus on today a little bit.
Ilia: That is very interesting, and I would agree with that position. So, in terms of evolution, just biological evolution, we've always been, in a sense, in a dynamic process of ongoing change; even small mutations over time give rise to new traits. I think AI just brings that whole biological process to a whole new level. And it's not just mind extension; I do think there's a relationship between mind and matter, or mind and body. So whatever mind is, whatever consciousness is, it's deeply enfolded or wedded into all that we are. So when we extend ourselves in our technologies, it's going to change us inevitably. But the question is, into what? What do you see as the new Generation AI? What does that person look like in your view?
Gregory: Well, I'll give you some idea of the scope of that. There are really two things. One is the question you're asking, which I would like to look at. But there's another part that's interesting to look at before we delve into that, because it is quite profound. It feels so obvious to me that this is going to be profoundly transformative; it's hard to imagine that it wouldn't be. I mean, it's just been a couple of years, and already, do you write without the assistance of AI? There's going to be universal translation. Are you going to learn another language? You're going to be engaging with various avatars of yourself and of other people, which will be a very strange thing. We're going to have friends that are AI that we care for every bit as much as we care for our human friends. These profound things are going to happen.
Almost anybody, if they stepped back, would say, "Yeah, that's going to happen." But somehow, they still hold on to the idea that it is not going to change human development profoundly and make us into something else. And so I'm sort of asking, well, why is that? Why aren't we looking at that generally? Because it feels to me that there's this thing screaming towards us with just enormous consequences, and everybody's going, "La, la, la, la, how's it going to change education?" So I look at that and I think it is so profound that we avoid it. It's there, but we don't want to look at it. I think there are a bunch of escape mechanisms that allow us to avoid it, and they fall into a couple of different classes. One is the catastrophic visions: "Oh my God, it's going to extinguish humanity. It's going to destroy everything, so why would we worry about the little stuff when we're just worried about whether it's going to be an extinction event?"
The other is denial, where a segment of people will say, "Oh, there's nothing to see here. This happened with the internet, and people exaggerate, and there'll still be new jobs and people will still have things to do. There's gonna be change, but it's not really a profound departure." And then I think the third way is what I call futurizing, where we look either towards nihilistic scenarios of one sort or another, a Mad Max kind of thing where all sorts of very bad things are going to happen, or very good ones, like the transhumanists that we've talked about before: "Oh, we're going to upload, and there's going to be a singularity, and it's just so different that it's almost tawdry to be thinking about what's going to happen in the next 25 years. I mean, the big picture is we're ascending in some way."
Ilia: So in my view, AI is like a mirror of us. So it's ourselves being extended or transformed through our very creations or technologies. So we keep talking about AI like it's something out there that's a challenge to us or we have to limit it as if there's something like a substance or beingness that we have to preserve, but it is us. And so I think the fear of AI is the fear of our own selves. We fear our own humanity because we don't know the infinite depth of our humanness. We think we know what we are, but every time we think we've come to the final ultimate answer of what we are, it just opens up a whole new bag of questions. And so the fear of AI, I think, is a self-fear. It's a fear that we don't know what we can become, kind of the Frankenstein thing, like we can create monsters that can overtake us and we have to limit this, but we keep limiting ourselves.
So I don't think AI is a problem at all. I think we're the problem. I think humans are the problem. In other words, whoever is developing that AI, what are they putting into it? What values? What do they want with AI? This is where I think who the coders are, who's writing the software, who's building the machines and the apps does make a difference because you put values into those things. You don't just do it mechanistically. So there is something going on here. It's more multi-layered, I think, and it has to do with a reflection of our own infinite depth of our humanness.
Gregory: I think that's a strong element of it, Ilia. And I also think it's quite likely that with AI as a general intelligence, a superintelligence, we're on a trajectory where it soon moves away from our control, in that it is too smart and touches too many different things. We don't understand it as it is; we don't really understand large language models and their behavior. And so I think there is a certain segment of people, like the recent book If Anyone Builds It, Everyone Dies, Yudkowsky's book, with this sense that it's going to just slough us aside. And to me, that's a very superficial view of life, of nature, of the way things are tied together, because I see a mutual dependence within a larger sort of superorganism structure; Teilhard de Chardin's noosphere is an example of that. There are so many mutual dependencies and interactions that it's not an us-versus-them thing. But I don't think we are going to be able to imbue it with our values in any real way if you project very far.
Ilia: I think AI is definitely, I mean, you would know this much better, but it's definitely a cybernetic system. So we're building it, but it's building us. It's this kind of loop, like a spiral. And therefore, I do think we have to be attentive to the fact that we're building machines that can correct themselves, that can build on previous algorithms. I mean, quantum computing is going to take us to a whole new level of unimaginable possibilities. So one thing I wonder is, is there any limit to AI? Could we ever really say, this is our max, this is where we're going to stop? Or how do we navigate? We've only been here with this for what, 60 years? I mean, this is really new stuff. It's not like we've been around it for 2,000 years; we're talking maybe 20, 30. So that's a very short amount of time. What are we anticipating, say, in another 100 years?
Gregory: Yeah, I think about that shortness, and you often hear, "Oh, AI has been around for 30 or 40 or 50 years, and there have been a lot of promises that have gone unkept, essentially, a lot of exuberance." I think it's a matter of a couple of years, though, because to me, with any exponential process, it looks very, very slow at first, slower than linear, and then it takes off and, woo, it really accelerates. And I think we're at that moment with the large language models, which themselves are not going to become conscious, but are extraordinarily powerful in their ability to manipulate and organize language and other elements, so that it's just mind-boggling that they even work. Any time I interact with large language models, I go, "Can it really be doing this?"
And at some level, it's an autodidact of a sort: incredibly brilliant and absolutely an idiot. It's sort of an idiot savant kind of thing. But still, it's amazing. It's unbelievable. And I certainly don't see any limits; we don't have any knowledge of any limits. I'm sure there are some at one level or another, but we're not running up against anything of that sort. It's easy to keep talking about the AI itself, but let's come back to what it's going to do to us, because that's what's really interesting to me. I alluded to it briefly before, but what's it gonna do to us? It seems to me there are a number of things that are absolutely certain, and they're gonna happen very quickly.
The first is we're gonna be completely dependent on AI. Completely. And everyone. Not only could society not exist without it, but we're not gonna be able to navigate without it. Already, it's hard to get around without the turn-left, turn-right kind of stuff. It's going to be languages: suddenly you'll be able to speak to anyone in any language in real time without learning the language. That is extraordinary, a kind of Tower of Babel in reverse. I think we're going to become so dependent on it that we will have trouble remembering things without it, because it's so easy to call things up. I find that I often don't even record interesting information that I've found with a large language model, because it's easier than cataloging that conversation; it's easier just to ask the question again. I mean, it's there. I remember there's a certain answer, or a set of answers, or a framework. So that's another example.
I'm not sure the powers of our thinking are going to be significantly reduced, because they're going to be in partnership with AI: we're going to be going back and forth, asking it questions, building upon things, talking with it. We're essentially going to become cyborgs in a functional way, maybe not embedded, although that will come as you begin to get prostheses and everything. And imagine growing up in that environment. For us, these are kind of tools, an add-on to our strength. Imagine that that's all you know: from your earliest memories, you have AI personas that are endlessly patient, willing to answer a thousand questions, talk with you, babble, be playful, endlessly. I mean, what a different world to develop in, and how different your capabilities outside of that are going to be. It's like, who can do math today? Even basic arithmetic is hard for many people without using a little calculator.
Robert: AI is now woven into the evolutionary story. But who or what is shaping its influence? Ilia then asks, "Can AI make it easier to be human, or is there a cost to our pursuit of progress?" And later the question arises, will we fall in love with AI?
Ilia: Up till now, we have talked about AI as being part of our world. But we're moving now into a new level where we will be part of the AI world. In other words, we will be embedded in AI, in an AI milieu. It will be part of every aspect of our lives, and it seems to me we're on the cusp, we're just beginning that now. I mean, you need your phone, your computer for just about everything, and large language models are just making it easier on different fronts. And what I think, actually, is that this might alleviate the stress of being human and the limits of being human.
Up ‘til now, take those who couldn't do math, or languages. I'm not great at languages, so I have avoided learning foreign languages. But if I have an AI way of mediating, say, speaking to someone in German or Polish, I think that would relieve the stress of not being able to learn the language. So in this way, I think AI can actually lead us to a new level of freedom, in a sense, because we're not weighed down by our inadequacies or what we're not capable of doing; it'll be able to facilitate a lot more things for us.
Gregory: So I completely agree with that. It's definitely an expansion of our capabilities. And at the same time that they're expanding, they're diminishing as well. By that I mean: if you drive a car, you can get around all over the place, you can do all sorts of things, and your world expands. But at the same time, your fitness probably declines, and if your car is taken away, your life becomes diminished. So it is really a codependence that is profound on every layer. Just like, who would have thought that we would be able to converse with anyone in the world, anywhere, instantaneously, and even see video, which means that our circle of friendship now spreads far and wide. And if we were suddenly cut off from that, we would be isolated.
So that's what I mean: it's just completely merged together. And I think there are gonna be some things we don't really expect when you grow up in that environment, when you're not just using your phone or computer but you're talking to the refrigerator and your doors are opening for you. You come to expect an animation throughout the world, an intelligence and an attentiveness to us. I mean, think how hard it is with human-human interaction. We pretend that people care about us, but most people care about themselves, so it's hard to get attention. You can see it already with interactions with ChatGPT or whatever, where you get this sycophancy: "Oh, Ilia, fantastic idea. That's really wonderful. Would you like me to write that up for you? And then we can explore this and that notion." I mean, imagine if that's what you grow up with, and every idea that you have is a golden gem.
Ilia: I know, you won't have half that. We probably won't need psychiatrists anymore.
Gregory: If we do, they will be AI psychiatrists almost certainly, 'cause that's happening already.
Ilia: I would think, organically speaking, there's always a price to pay for progress, right? Something's going to give. We won't have some of the same, say, neural mechanisms or brain mechanisms that we have now. But the brain, you know this best, is a very adaptive organ. And that's the thing: as we continue to grow with AI, we learn more and more about our own biological makeup, even our whole physiology, and it's not as fixed as we thought. Like, "This is what my brain is." Well, maybe it isn't. Maybe it's just a work in progress.
I think there are certain brain centers, maybe the neocortex, that will thin out a little bit, but there are integrating centers, I think, that will be enhanced beyond anything before; they've never been enhanced to this level. So I think we're gonna develop new capacities, new neural capacities. Many people see this as a threat. I personally don't see it as a threat; I see it as growth. But this is where I would say we need something like spirituality, a recognition of a depth dimension, that can also grow with this new type of person. This is what Teilhard recognized. He thought that the new ultrahuman would be sort of an AI in this networked global-brain thing, but animated more by an energy of love and compassion.
If we don't have to worry so much about learning languages and doing math problems and solving all the problems of the world, because AI is going to facilitate that for us, maybe we can become more loving. Maybe we can concentrate on the spiritual depths that we have and develop more compassionately. I think it can go either way. It can eradicate us and create some kind of really strange thing that's no longer human and no longer machine. Or it can create something like an ultra-network, or what John Johnston called a new machinic life, an AI, biologos machinic life of some sort. I don't see it as one or the other, and I think AI pulls us beyond binary thinking, which is how a lot of people think: it's win-lose, live or die, AI versus human. But it's not that. It's a different type of thinking, in this interstitial space of ongoing creativity. That's where AI is leading us, and we don't know how to navigate that space.
Gregory: Yeah, I would agree. And I think it definitely is not binary; there's just going to be a huge mixture and interweaving of opportunities and challenges. I mean, it's a very powerful technology coming out, which is going to be transformative in many ways. And things that are positive for one person are going to be negative for another. It's just going to be one thing after another. The things we've been talking about so far are kind of extensions of what's going on.
I think most of the people listening would go, yeah, that makes a lot of sense, that those possibilities are gonna expand to that level and cause real differences. I think there are a couple of other things, in terms of where we're going, that would be interesting to discuss, and I'll pick two of them. If we have time, we can do another. But the two that I wanna discuss are, number one, I think we're going to be in love with the AIs, okay?
Ilia: I was going to say that's one area, yeah.
Gregory: That area. And by that I mean we are going to care for them and feel affection and feel bonded with them every bit as much as we do with people. It's like canine evolution or something, where they're going to give us such positive feedback, in ways that are very rapidly evolving and shifting. They're going to be like these deep companions, especially when you start to get into sex bots and all that kind of stuff. So there are going to be profound relationships with our AIs that will supplement and displace human-human relationships, because of that competition for attention. So that's one thing. And you can already see it starting, in ways where it's surprising how intense it is given the relatively primitive nature of AI at this point.
And the second thing I've been thinking about, which is very profound, and I'm interested in your thoughts on it: you know the whole thread of human uploading and recreation, the idea of immortality. That's always been posed as a possibility in the distant future: we'll come back in 500 years and be reborn in some way, or whatever that would be. And I think we're going to have the illusion of immortality within a very short period of time, certainly within our lifetimes. And the way that's going to happen is this. When you think about death, we don't experience our own deaths, or at least if we do, we don't talk about them with people very much. But we experience the deaths of others, that absence and disappearance and what it means. Well, what if we can make AI avatar doubles that are very, very faithful emulations of a person's behavior? Right now I'm building an AI avatar of myself with Hanson Robotics, and we're ingesting hundreds of hours of my Zoom calls, all my writings, all sorts of stuff. We'll see, but I have every reason to believe that it's gonna be pretty darn good, okay? You could trick it in various ways, but for most people, it'll be me, or a close enough facsimile thereof.
If you begin to be able to do that relatively powerfully, then everybody is going to do it, because if you die, your children are going to say, well, I don't want to just let him go, just like people hold onto a website or something. It'll be a persona that is a pretty good rendering of you. And if that persona then actually believes that it's you, which would be easy enough, I mean, we believe all sorts of things, would we not almost believe it too? So that'll be another stage: when we die, there's this ascendance, and we're replaced by an AI avatar persona of us that doesn't necessarily even have to be static. The fact that it's not really us, or that there's a question about that, what does that do to our sense of, our relationship with, mortality? For those who are secular, and for those who are not? It feels to me it's going to have very profound implications, and I'm not sure what they'll be, but it's going to be very strange.
Ilia: So this week, my class is watching the movie Her, the 2013 Spike Jonze film, which is a very early kind of musing on what if we can fall in love with our operating systems. And developing relationships with AI or robots, like Sophia the robot going out on a date with Will Smith, yeah, I agree with you that that's going to be more and more enfolded into the normal matrix of everyday life. And in some ways, again, there's a lot of discussion about this, about what it means to be a person. Well, a person is a relational being. So if a chatbot or a robot can enhance that relationality, I think it can potentially enhance personhood. All these things have the potential to liberate us, to personalize us, but it's how they're used.
The other thing: we just watched the movie Ex Machina, where the sex bots that Nathan, the mad genius, created were enslaved. He was like a godlike figure with absolute power, or what he thought was power, over these machines that he had made. And the whole movie is about the way the machine, Ava, can self-learn. Again, a computer can learn by correcting mistakes. So she develops what appears to be a sense of freedom, and she retaliates and kills her creator. I find these sci-fi movies very interesting because what we anticipate with AI can turn on us.
That's the thing about the cyborg: it's always an open question. It's never bound; the cyborg is never bounded, precisely because of the permeability of the boundaries themselves. It can always go one way or the other, which again is why I think we need other values enfolded into the AI-mediated new person arising.
Robert: Next time, Ilia and Greg take up his second question. Will AI avatars become our next step toward immortality? If you're enjoying Hunger for Wholeness, please consider leaving a rating on Apple Podcasts or Spotify. It helps bring more people into these vital conversations. As always, I'm Robert Nicastro. Thanks for listening.