Hunger for Wholeness
Story matters. Our lives are shaped around immersive, powerful stories that thrive at the heart of our religious traditions, scientific inquiries, and cultural landscapes. As Albert Einstein claimed, science without religion is lame and religion without science is blind. This podcast will hear from speakers in the interdisciplinary fields of science and religion who are finding answers for how to live wholistic lives. This podcast is made possible by funding from the Fetzer Institute. We are very grateful for their generosity and support. (Image credit: X-ray: NASA/CXC; Ultraviolet: NASA/JPL-Caltech/SSC; Optical: NASA/STScI [M. Meixner]/ESA/NRAO [T.A. Rector]; Infrared: NASA/JPL-Caltech/K.)
Can AI Calm Our Fear of Death with Gregory Stock
In this episode of Hunger for Wholeness, Robert Nicastro welcomes listeners back into Sr. Ilia Delio’s conversation with biophysicist and technologist Gregory Stock. Last time, Greg raised a profound question: how will AI shape two enduring human longings—love and immortality? In this second part, Ilia and Greg return to the theme of immortality, exploring not only what it might mean for individuals, but what it could mean for the human species as a whole.
As the conversation unfolds, the focus widens from personal hopes to public consequences. Technology has often promised abundance, comfort, and control. But who actually benefits when those promises arrive? Ilia presses into the economics of AI, asking whether it will become a first-world luxury or something that can genuinely serve the common good.
Later, the dialogue turns toward a civic question that is increasingly difficult to avoid: Are our social structures and our governments prepared for the realities of AI? Greg shares his hopes for the future and the aims of his new book, Generation AI and the Transformation of Human Being.
ABOUT GREGORY STOCK
“The greatest frontier in human evolution may no longer lie outside us, but inside: in the choices — and designs — we make for ourselves.”
Gregory Stock, Ph.D., is a scientist, writer, entrepreneur, and public communicator whose work represents a deep exploration into what it means to be human in the 21st century. During his career, he has developed the foremost paradigm for personal inquiries into values and beliefs, which has significant implications for humankind as it faces the profound shifts brought by silicon and biotech. Today, Greg serves as an expert speaker and advisor to biotech and healthcare companies and to non-profits at the cutting edge of human health.
The Center for Christogenesis is in the midst of our Winter Fundraiser as we celebrate ten years of exploring faith, science, and the promise of a new future. At a time when organizations vital to our shared future are losing support, your contribution affirms that our mission matters. Donate today at christogenesis.org/donate.
With Gregory Stock's new book, Generation AI and the Transformation of Human Being, he’s also built something to keep the conversation going beyond the book and these podcasts.
It’s called Only Human: an online space where, every day, everyone gets the same reflective questions and offers their answers. Then you can see how others from all over the world are responding to the same questions. Go to app.onlyhuman.us to sign up.
A huge thank you to all of you who subscribe and support our show!
Support for A Hunger for Wholeness comes from the Fetzer Institute. Fetzer supports a movement of organizations who are applying spiritual solutions to society's toughest problems. Get involved at fetzer.org.
Visit the Center for Christogenesis' website at christogenesis.org/podcast to browse all Hunger for Wholeness episodes and read more from Ilia Delio. Follow us on Facebook and Instagram for episode releases and other updates.
Robert: Welcome back to Hunger for Wholeness and to Ilia's conversation with Gregory Stock. Last time, Greg raised a profound question: How will AI shape two enduring human longings, love and immortality? Today, Ilia and Greg return to the theme of immortality, exploring not only what it might mean for individuals, but for the human species as a whole. And later they ask: are our social structures, our government, prepared for the realities of AI?
Ilia: A number of years ago, maybe 10 years ago, I was actually friends with Martine Rothblatt, a genius, who wrote the book The Promise of Virtual Immortality. And she was saying then, and this is at least 10 years ago: well, we're going to have multiple bodies, you'll be able to download yourself. She has this idea of genes, memes, which are cultural DNA, and then beams, which she called sort of informational DNA. In her view, we will be able to download our brains, like beams, into what she called mind files. We would have mind files that we could eventually use. I used to say it's sort of like an ATM machine: you can enhance happiness or whatever. But then there's digital immortality; we're watching the movie Transcendence in a few weeks.
We already have a level of that, a primitive level. So you write books or you do a YouTube video, and when you die, those things are still here. So you're still here in some way. This is the next level of that virtual living on. Now it's you, your doppelganger, your kind of virtual self; it's sort of you, but not you. And that's always the question in Transcendence, the movie: is this Will Caster uploaded now onto a computer, or is it not?
Gregory: Well, think about your relationship with those. I'm thinking in terms of, say, my daughter, Sadie; you've met Sadie. We have a very, very close relationship, so we talk frequently and interact about things. If I were dying, she would want to have a double of me, and that would be fateful. You don't have that sense of loss of people if everybody, in their real lives, has multiples of themselves that are transformed a little bit. And if that's permeating our environment, then, I don't know, death has a very different ring to it. For your family, the generational passage is transformed in some way. It's almost like we're getting more and more people in the mix, more and more personalities and avatars and different manifestations of ourselves. And I think the way to look at these things, and that's what I've done in the book, is through the power of questioning.
So the idea is posing questions that really are not abstract and not meant to be answered by some expert who has supposedly thought about the future a lot. An example would be: if you could reconstruct anyone, if you could have an emulation of anyone around you or anyone you've known, who would you actually want to have in your life as an emulation? It's interesting: when I ask myself that, there are very few people. I was surprised. I was thinking, well, my mom's been dead a long time, would I really? And I was realizing that later in her life, I would have a call with her maybe once every couple of weeks and interact, but it wasn't a big part of my life at that point. Would I really want my parents present now in some real way? Who would I really want?
When you start thinking in those ways about what you would want and what it would feel like, then we come up with answers that are personal and kind of malleable. We've talked about threats, and I think the biggest threat we have is that we, as a society, are not going to be able to make the adjustments to these new technologies, and that there will be increasing polarity and disruption and fragmentation. So I think the most important thing is that we begin to have these conversations with everyone, that we actually face that future in a pragmatic way. How is it gonna change? What am I going to feel like? What will it be like to have a copy of myself, or of somebody else, or a best friend or a lover that is AI? What does that mean to us? We need to start grappling with those things as a whole, or I think we can tear ourselves apart in one way or another, just as you're starting to see with social media.
So I put in these kinds of questions with QR codes so that people can answer them and then see how other people answer, how different demographics answer, how young people answer, how old people answer. A good example about our cyborgization is: if you had to choose between amputating a hand, essentially losing the use of a biological limb, or losing all ability to connect with telecommunications devices, with computers, with the internet, with phones, with all electronic communications, which would you choose? And for young people, the numbers are in the 90 percent range, especially STEM students, which really shows how that level of integration has already occurred. And I think it's really interesting to have these discussions where we're all experts.
Ilia: Yeah, no it's very practical because I think, as you say, we're not talking about it enough. And I think there's a lot of fear, but we do need to begin to process in a sense what's taking place right in our midst. I want to go back to death though, because from a Christian perspective, this kind of living on is what we hold as resurrection. So it's a type of resurrected life. So Christians don't hold that death is the last word. They hold that life is the last word.
Gregory: So Ilia, explain this to me. It feels to me that if you really believe that you are gonna be enfolded in the arms of God in some sense, and that the time of existence on earth in physical form is really a very minimal part of eternity, right? Then you would think there would be no fear of death, that it would be almost embraced, and it certainly is in many spiritual Christian individuals.
But it feels to me that that's fairly narrow in terms of the number of people who feel such a depth of faith and belief that they are untouched by their own prospect of death in some way or another. It feels different, too, because even though you might live on, generally you're not here, and your loved ones still feel that deep loss. You've disappeared from temporal life, from life as we know it. So it feels similar, but different.
Ilia: You know what's interesting, even with death, and this is from a Christian perspective, is what we mean by remembrance, right? To remember is to re-member, to say Christ has died and is now risen. And it's the same idea: when you remember your parents or your loved ones or whoever has died, you re-member them in a deeper way, in a spiritual way. And again, it's part mystery, so it's not meant to be explained away. But you know what the funny thing is? When people die, we often forget all the arguments we had with them, the things they did that we really couldn't stand, the many times we wished they would just disappear. And then they die, and we feel this profound loss.
We're very ambivalent creatures, even with regard to death. We don't want to lose what has been, even though we say sometimes, "Oh, I'm so glad they're gone." We're not really glad. There's something about that loss that has been meaningful for us. So even though we might have doppelgangers of some sort, or replicas of ourselves in AI form, and that can be reassuring to people, there's something about death that is also about time, and about evolution itself. It's about something taking place that's more than sheer or mere ongoingness. It's larger than life, right?
What we're about is not just the continuation of biological life, but the continuation of life even beyond terrestrial life: cosmological life, or spiritual life, and what that means. So I think AI can be of interest here, but in some ways it might also mitigate or blur a deeper truth: that our lives are short, that they're temporary. And that's why you give it all you've got in the days that you're given and the breaths of life that you're given. But our contribution, I think, is immortal.
So are we just irrelevant specks in this very long process of evolution? I mean, we're only here, what, 90, 100 years. Maybe if you have an AI download, you'll be here much longer. But the fact is, even that's going to need to be downloaded again and updated. Even your doppelganger, or whatever it is, is gonna need some tooling up, and then your children and grandchildren will die. What happens to you then? What if they say, "Well, I didn't know Grandpa, so I'm not keeping this thing"?
Gregory: Well, I think if you look at all of life, our evolutionary history, no individual creature or being lives for very long, and they all have a relevance; there's a whole process that they are a part of, just as no particular molecule or protein really is very meaningful in the constitution of our being. They're all expendable, and yet they're all necessary as a whole in some way.
So it feels to me, and I think you'd somewhat agree, that what's happening is that AI is going to lead to abundance. There's no way around that, because if you can replace human labor in manufacturing and everything else, things will be largely free, and that's happened before without much disruption. I mean, who would have thought that we could make free phone calls and have free video and free photography, all of these things packaged away on your iPhone, essentially free now? So that spreads out. But it feels that the possibilities of artificial intelligence, and of this creation of a new substrate for rich living beings, almost, have the potential to enable us to retain and confront our humanity and the questions about what it really means to be human. Because we won't be as busy with the everyday momentary scramblings to try to survive and maintain ourselves in some way.
And I think we're going to need AI as therapists, because this is a really challenging period we're going into. You can already see that people are using even large language models in that way, to constantly talk about what they're grappling with. And one of those questions that I mentioned, that power of questioning, is: if you had your choice and they were equally effective, would you rather interact with an AI psychiatrist or a human psychiatrist or therapist? And virtually everyone I ask would rather have an AI, because then it feels more private. It feels more, you're not—
Ilia: Well, you can be yourself without judgment. You can be yourself without any kind of human bias or sometimes even if it's a psychiatrist, you can feel, "Oh no, how are they going to take this or how are they going to understand this?" AI is neutral. It's just going to take what you say and respond to it. It's kind of freeing in that way.
Gregory: Yeah. And that's without even considering that it's available 24/7. I wake up in the middle of the night: "Oh, I need to talk to my therapist about this or that. Okay, calm down."
Robert: This is not the first time technology has promised abundance or greater comfort, but who stands to benefit most? Ilia asks Greg about the economics of AI. Will it become a first world luxury or something that truly serves everyone? Later, they return to a pressing civic question. Are our social structures, our governments, prepared for AI? And Greg closes by sharing his hopes for the future and the aims of his new book, Generation AI.
Ilia: Is all this, so AI, the new AI generated human, is this a first world phenomenon? Who's going to benefit from this?
Gregory: I think we're all going to benefit from it, or all going to be immersed in it, OK? And that's because of the cost structure: it costs a fortune, trillions of dollars, to build these things, but the marginal cost of distributing them is near zero. So all of us have access to amazingly potent large language models already, for free or for $20 a month or something. And it has made, for example, top-notch educational experiences available in any language, anywhere, anytime, to anybody, basically. Because of their cost dynamics, these things become available everywhere. All you need is a portal, which becomes cheaper and cheaper, a phone at the moment. Or translators.
If you look at something like a real-time translator, who does that benefit? It doesn't benefit somebody in the English-speaking world, where that's a common language and we don't even bother to learn a second language; most people don't. But say you're growing up in some African state, or somewhere in an environment where you speak none of the major languages of the world. Now suddenly you have access to education, you have access to communication, you could actually do work in the advanced world.
Ilia: Do you think that AI, then, can close the gap between rich and poor, or between first and third world countries? Can it reduce that gap and begin to level the playing field of, say, a globalized planetary humanity?
Gregory: I sort of don't like that articulation of the issue in terms of closing the gap. And the reason I say that is because it feels to me that we tend to dwell on the differences in capability that different people have, because we're always feeling competitive: do I have this or that? Yet what's really important is the absolute level. Everybody now can fly all over the world; they basically have food; they have access to music. It's amazing how rapidly the lower threshold is rising.
Some of the upper-level stuff is rising even more rapidly, but it diffuses down very quickly, so that the gap you're talking about is not between the rich and the poor of one generation. The gap is between one generation and the next generation. Because anything that Elon Musk, that Bill Gates, that the richest person in the world can experience and derive from technology, their access to medicine, all of these things, in 25 years are going to be available to everyone, probably in a much better way.
The best computer that the richest person in the world could purchase a generation ago, the best translation, video messaging, video interaction, access to music, the best things that rich person could buy, do not equal what someone in an underdeveloped region of Africa, the Middle East, or Asia has access to today. So it feels to me that we tend to want those models of, "Oh, can the rich buy more?" This is moving so quickly to enable everyone. And the challenge, I think, is that the implementation of these things is going to be driven more by politics and by philosophy than by money.
Ilia: So this comes to the question of structures, or structural supports, for a new AI-mediated humanity. In other words, I think we still have outdated political structures. Economically we're shifting, but we still have a hierarchical system operating. Just looking at our political situation right now, globally, it's a lot of male power and a tribal mentality of power and strength, even if it's over the chip war, or technology and who's going to be the fastest and make the most. Don't you think we will need a different political order to really mediate a new type of AI humanity?
Gregory: I don't have as much faith in government as I suspect you do. I think almost by its nature, it's solving the problems of yesterday. It takes a long time for these things to percolate up. And in addition, I think the motivations of those who are in control, the ruling classes, say, are not really oriented toward solving problems. They're about power bases, about how they position themselves, about their family.
So I think the real challenge is what happens in a world that's kind of a post-truth world, where you can't really tell what's true and what isn't, what's simulated and what isn't. I was looking at some video the other day, a stupid cat video, a TikTok thing, where this cat was jumping up and protecting a child from a bear or something like that. I showed it to my wife and she said, "Oh, that's just AI. That's not real." And I thought, "Well, it really looks real. I don't know." So it's like that with everything: you can't tell what's real and what isn't. It takes an enormous amount of energy to see whether what people are saying is correct or not. I think it's beyond government. It's sort of beyond—
Ilia: Yeah, I agree to an extent, but I think we're called to reinterpret these words, truth or oneness or beauty or anything like that. They need new interpretations in light of our new reality. But I do think we're not going to escape some kind of political order. And even if the guys in government are just caught up in power struggles and power plays, the fact is we're going to need ethical guidelines and public policies. I mean, in the workforce, it's clear that the whole world of education and work is shifting too, and you've pointed this out several times: AI is changing everything.
Gregory: I know; let me respond more directly. It feels to me that inherent in the question about government and what's going to happen with government is the idea that government is an enabling force that can smooth things, as a kind of leading and requisite agency in some way. In my view, the real drivers are coming from technology, from external introductions into the system that reshape things dramatically.
And government is just trying to hang on. If it does too poorly, then societies will collapse, okay, because you end up in civil war or disruption or all sorts of things. And if it kind of muddles through, then you're kind of okay, but it's never going to be the leading edge carrying us forward. If that can happen, that's wonderful, but I think it's asking a lot.
Ilia: I think Teilhard kind of alludes to this in his ideas on planetization. What I can envision, instead of the kind of tribal competitions going on now over who's going to be more powerful with technology, is an AI-mediated world. It could mediate a new type of political arrangement, or connections between, say, the US, China, Russia, Italy, Europe, and we would get rid of world leaders and maybe put in place sort of an AI data center. I mean, we would have a new kind of politicization. We don't currently have a political order that can really mediate global community.
Think about how many scientists, socially, educationally, scientifically, are collaborating across many geographical domains. So we're in this space already on many levels, but politically we're just old school; it's just like yesterday. And there's a kind of friction between the political order and what's taking place on other levels of AI-mediated life. This is why it's difficult: we're conflicted, we're not sure what to do; the government is constraining us while AI is pulling us forward. So we need something else to mediate and to support. I think of these structures as supportive structures: any cell, anything biological, will have a supportive structure that enables the physiological or biological flows or mechanisms to work more synergistically. But I think we have work to do, and I think AI can be a great help. That's really what I'm saying.
Gregory: Yeah, I think you're pressing, in a sort of roundabout way, at something I would say more simply: the problem is human motivations and human control over the powerful elements of AI and weaponry and biotechnology and everything. The danger is having humans, with all of their flaws and inadequacies, controlling these extraordinarily powerful technologies in ways where it doesn't take a government; individuals can wreak havoc, and they are doing so increasingly.
So to me, we'll be safer when there are artificial intelligences, systems that are built to be self-protective and self-righting in various ways, making more sensible decisions about how to expend resources and how to use them to maximize well-being. So this idea that, oh, we wouldn't want to let AI be in control because then we're really screwed—
Ilia: Big brother.
Gregory: It's upside down. The real challenge is when people, and you can already see it with all of the cyber attacks and all of the potency that can be wielded by very small clusters of people.
Ilia: What is your final word or what's your hope for this new book?
Gregory: My hope for it is that we begin, on a large scale, to interact around the kinds of questions we were dealing with today, which have no right or wrong answers, and begin to grapple with those possibilities. Because if we don't, I think we're in deep trouble; we're going to be blindsided in terms of employment, in terms of all sorts of things. There are so many people who are not thinking at all about these issues and aren't even aware of the depth and the power of what is occurring.
Ilia: This is so true. Can I just add here, I'm at two universities, two Catholic universities, both of which want to sideline AI. They want to talk about it, but in a very circumscribed way: we have to be careful about it, we have to make sure it doesn't impact our humanities. I'm like, "Oh my God, this is so crazy." Sometimes, on the level of higher education, we're actually working against ourselves by trying to contain AI so that we can remain human with our humanities and our ancient philosophies and our historical knowledge. I'm like, "I don't know how kids do it today."
Gregory: Let me take that point and make a comment about it. That all sounds very good: okay, we want to retain our humanity, we want to go slowly. But you could look at it another way and say that the world is changing dramatically, and education itself is completely out of touch with the new technologies and possibilities, which could very easily collapse traditional education, because one can now get access to all sorts of amazing thinkers and explainers in powerful ways.
So that's an example of what happens in government as well, where people make all these arguments, but the reality is they're very much protecting their favored position, protecting the structures that support them. And we have never been at a point like this. When I said I wanted to ignite a global conversation, I meant it. We could be talking with people in other languages, not even aware of what languages people are interacting in, seeing how people respond to these issues of values. I mean, the possibility of assembling that almost overnight, I call it the map of human identity.
That's my goal: to create that kind of disintermediation where we're not looking for expert answers, where we're talking to one another and looking into ourselves, understanding who we are and who other people are in a deep way, about society and values and beliefs. And it feels to me that's what has to happen.
Ilia: That's a fabulous idea. And I hope that's what you're gonna be working on developing.
Gregory: This book is the start of it, and the QR codes where people can answer questions and begin to interact with one another. So thanks for having me on your show.
Robert: A special thanks to Greg Stock. Be sure to explore his latest book, Generation AI, and join the community of readers engaging these big questions. Next time, Ilia speaks with philosopher of consciousness, Abre Fournier. As always, I'm Robert Nicastro. Thanks for listening.