Hunger for Wholeness
Story matters. Our lives are shaped around immersive, powerful stories that thrive at the heart of our religious traditions, scientific inquiries, and cultural landscapes. As Albert Einstein claimed, science without religion is lame and religion without science is blind. This podcast hears from speakers in the interdisciplinary fields of science and religion who are finding answers for how to live wholistic lives. This podcast is made possible by funding from the Fetzer Institute. We are very grateful for their generosity and support.
How to Write a Catechism for AIs with Kevin Kelly (Part 4)
In the final part of this four-part series, futurist Kevin Kelly asks Ilia Delio to propose a catechism for AIs. They discuss what our relationships with the AIs of the future might look like—love? Companionship? Ilia asks what these new relationships teach us about ourselves, the cosmos, or even God, and Kevin imagines what would happen if AIs are the first to make contact with other-than-human beings.
ABOUT KEVIN KELLY
“Humans are the reproductive organs of technology.”
Kevin Kelly is Senior Maverick at Wired, the award-winning magazine he co-founded in 1993. He is co-chair of The Long Now Foundation, a membership organization that champions long-term thinking. He is the founder of the popular Cool Tools website, which has been reviewing tools daily for 20 years. He is also the author of multiple best-selling books about the future of technology. His newest is Excellent Advice for Living, a book of 450 modern proverbs for a pretty good life. He is best known for his radical optimism.
A huge thank you to all of you who subscribe and support our show! Support for A Hunger for Wholeness comes from the Fetzer Institute. Fetzer supports a movement of organizations who are applying spiritual solutions to society's toughest problems. Get involved at fetzer.org.
Support 'Hunger for Wholeness' on Patreon as our team continues to develop content for listeners to dive deeper. Visit the Center for Christogenesis' website at christogenesis.org to browse all Hunger for Wholeness episodes and read more from Ilia Delio. Follow us on Facebook, Instagram and Twitter for episode releases and other updates.
Robert: Welcome to Hunger for Wholeness. Thanks for joining us as we wrap up our four-part series with futurist and radical optimist Kevin Kelly. Today, Kevin asks Ilia to propose a catechism for AIs. They also discuss what our relationships with the AIs of the future might look like; perhaps love? Or companionship? And what these new relationships teach us about ourselves, the cosmos, or yes, even God.
Kevin: One of the little side projects I've been kind of musing on was, I think I mentioned this to you before, but it's called a catechism for robots.
So the idea is that the AI component has two factors. One is where AI is brought into us and you have maybe a new religion. But the other way you could think about religion is to think about AIs potentially becoming sentient, conscious, and raising their hand saying, "I believe in God, what should I do?" So on that second one, and this goes back to kind of like setting out a code of ethics given to AI: if we had a catechism for robots, what would you suggest be in it? So the robot is an AI that has a body and has consciousness and awareness and intelligence, but it's not human. It's an alien.
Ilia: Sort of like Ava of Ex Machina or…
Kevin: Right. So it's what I call an artificial alien. So it's an alien being and it's new, it's a baby. It's like, "Tell me what's my purpose, or why am I here and what should I be doing?" And we want to give it a catechism. Can you think of anything that you would tell it?
Ilia: Yeah. Well, the first thing that comes to mind is AI, because I just saw a clip of David, the robot boy…
Kevin: You mean the one by Spielberg?
Ilia: …Spielberg's AI.
Kevin: Pinocchio.
Ilia: Yeah, exactly. So that movie is very interesting because, and you're very familiar with this, but David is programmed to love. So I would take love. What would you have as the first commandment? Love? Love of self. Can a robot love the self? Does it have an awareness of a self? Can it love another? David is programmed to love his family. His whole journey in that movie is not only to love, but to be loved. So for a catechism of robots, the guiding principles I certainly would include are love and care for another; compassion. Can a robot have compassion? And I guess even in terms—I'm sure we'll have sentient robots because they've been working on these for quite a while. I know at the hands of...
Kevin: Yeah. I think they will, but they'll be slightly different. So if you structured this in kind of a Q and A form, like a traditional catechism, and you would say...
Ilia: Why am I made...?
Kevin: Why are you made?
Ilia: Why am I made? That'd be a robot question.
Kevin: Yeah. What would your answer be?
Ilia: I am made to love.
Kevin: Okay. Made to love.
Ilia: To care. I am made to be your partner, your friend, your whatever. But I would say love is a good starting point because actually it's the Christian law. If we think of a robot—so this would be another question for us. Is a robot incarnational? In other words, is God as much a part of the robot as God is part of us?
Kevin: So the robot would like to have an answer to that. So we'd have to have an answer to that.
Ilia: Am I made in the image of God?
Kevin: Right. Or you say you're made in the image of God, and then it might say, "Well, what's the image of God? What does God look like? What is good and how do I know it?" Those would be—I mean, I think there's going to be multiple answers. I don't think there's going to be a single answer to this. But some people would say, "Well, the good is we've given you a program. You have to follow the program. There's a guide. You have an internal guide. Consult your guide to know what's good."
Ilia: Robotic goodness would be in my program to, how would we say, enhance life. Or it would be, "Am I programmed to make someone feel special, make someone feel joyful? Am I there if someone's sorrowful? Am I programmed to be someone's companion?" So I'm not programmed to destroy, I'm not programmed to inflict harm on another. So it would be distinguishing between harm and violence and disorder, and the values that would bring harmony, peace, joy, or uplift and order to a life.
Kevin: You've heard of Asimov’s three laws.
Ilia: Yeah, the laws of robots. Sure.
Kevin: Okay. So that would be part of the catechism. You can't hurt someone else or cause harm. Those are pretty basic. So you could give some laws of ethics in a very programmatic way. The question of what your mission is, or what—I mean, that may vary, because robots are going to be built for particular purposes.
Ilia: But I could see, actually, from a Christian perspective, a robot could carry on the reign of God idea. In other words, continuing this work of Jesus in the world because it's helping the poor. How does that happen? How do we care for the earth? A robot can definitely participate or help us care for the earth; till the soil, plant, plant, plant. So there's a way that robotic life can really actually participate in the ongoing reign of God, which to me is sort of the fullness—the coming into fullness of love.
Kevin: Yeah. So let's say this robot is a robot that drives a car. It's a taxi driver. They're driving around all day and they're built for conversation. They have some dim awareness, dim level of consciousness, but it has limited agency in terms of caring for the earth because it's driving a car. Maybe it objects if you're telling it to go the long way and it's environmentally conscious. It's like, "Well, I don't know if I can go the long way because I'm using more fuel." Then you're having a conversation trying to convince it. I mean, it would have to have some way to make those priorities and trade-offs.
Ilia: Have you seen the movie Robot and Frank, with Frank Langella?
Kevin: Robot and Frank?
Ilia: Yes. With Frank Langella?
Kevin: No, I don't think I have.
Ilia: You really must see it because it's actually your question. Frank is an elderly man, and he's sort of getting disheveled and his mind seems to be going. So his son comes one day with a life-size robot and says, "I'm bringing this to care for you." Frank says, "I don't want that machine." He rejects it, et cetera, but he takes the robot in. The robot is life-size, almost the size of Frank, and programmed. The robot cooks meals, delicious meals, and walks with Frank every day; he's a walking partner. They go gardening together, and then Frank gets rejuvenated. So it's all about relationality, right? So that's really what robotic ethics is about. Does the robot enhance what we are about still as Homo sapiens in terms of our relational dimension?
It doesn't have to be Homo sapiens, because we're moving out of Homo sapiens anyway. But does it enhance the relational dimension of life? That's the question. And within that question of relationship is where I think the ethics lie. But Frank actually begins to prefer the robot to the biological human beings around him. Because humans, we have power issues, we want to control things. We have all our ways of doing things. The robot is very simple and really is Frank's constant companion. So Frank, who was a cat burglar, trains him to burglarize, and the robot says at one point, "Frank, I shouldn't be doing this. This is outside my program," type of thing. But you can see a real relationship developing between the robot and Frank. And that's what's so interesting. That movie to me holds out a number of questions about what it means to be in a "real relationship" with a robot. How can the robot...?
And there's all sorts of—I know people are arguing this back and forth ethically, like, we can't have a real relationship with a robot; they're just a machine. But there's something about those machines as we're building humanoids, as we build these human-like machines. There's something more than a machine that's emerging here. And I think that goes to your question of a catechism for robots, and whether we can begin to think of an ethics here, or a code of conduct. It's not only on the robot side. We too need a new code. We need a new creed as well.
Kevin: Like in the example of Frank, there's a trade-off between the robot's programming to follow your orders and the other programming it has for ethics, and how it manages that conflict. But there's no doubt we're going to put emotions into them, and we absolutely will have real relationships with these. You've got a real relationship with a dog. It's not an artificial relationship. It's not a human relationship, but it's a relationship. So we'll have relationships that are stronger than what we have with a dog, but different than what we would have with a human. But they're real. They're absolutely real. So there's another component of our ethics and treatment. What is that kind of relationship? Is it master and slave? Is it partner? Is it a pet? Is it something else? So there will probably be multiple kinds of relationships that we'll have with these entities. So I mean, we'll evolve etiquettes and protocols for that. But I think in the robots themselves, there might be religious ideas introduced to them, and then together that could produce a different viewpoint than what we have today. The little project about making a catechism is just to bring into relief the way in which these relationships might engender a different understanding of who we are and our role in the cosmos.
Ilia: No, that's really interesting, because one of the things I was thinking of is we tend to think that these robots are only machines, but as we know, we're developing artificial general intelligence. I think a robot can have self-correcting algorithms in it, so that the idea of learning virtues or rethinking certain things will be possible, even in robotic life. And I know that a lot of people find all this stuff not only frightening, but also this idea that we're diminishing the human. And certainly from a Christian perspective, you know... But I actually don't think so, because it's exactly what we do. We've always been extending ourselves, and relationship is the key. The whole thing is about relation—it has been about relationship through and through. And as relationships develop, and this is why consciousness will shift with robotic life, we are going to deepen, I think, what we are. We have the capacity to deepen what we are with robotic relationships, and consciousness is going to change.
Religion is going to change because of this as well. So I do think—right now we're at this beginning stage, like how will a robot be perceived within the overall context of the human sphere of relationships? But then I think we need a wider thinking on the type of ethics. We need a new ethics and maybe some new—maybe that's why you're writing "Catechism for Robots." We just need to create new codes of conduct for developing these new AI relationships. But that they will develop, to me, is not much of a question. They're already here in a very incoherent way; they're beginning.
Kevin: Yeah, again, I like to call them artificial aliens. And if you rehearse what our reaction would be if aliens came down from another planet: they land on earth, we find that we can talk to them, they can talk back to us. So how would that impact religion? The aliens are here, and you have a lot of questions for them. Where do you think you're from? Who made you? Why are you here? What's your purpose, your understanding of the world? Do you have concepts of God? You'd ask them all these kinds of questions, and presumably they would have answers that might surprise you. The question is, how would that impact the existing religions today? I mean, they would have a big impact. Of course, it depends on the aliens, whether they were nice, whether we liked them or not. But let's say we like them. Let's say that maybe they were far advanced, maybe they were closer to what we imagined our best selves could be. Their views on the world would be very powerful. They could shift what we believed, and maybe we would try to see the world as they saw it in their religion. So my point is that it's possible that the AIs we make are the aliens, and that they come up with their own view, or they begin to act better than we act in terms of their behavior, their ethic, and that they would have the same effect on us.
Robert: Both aliens and AIs can frighten us. But why? If we are part of a cosmic evolutionary process, how can we remain open to such monumental changes? Next, Ilia and Kevin discuss how we can face this future together. And later, Kevin imagines: what if AIs are the first to contact extraterrestrial life?
Ilia: Part of what's needed here in this new age is openness, and we don't need to be fearful. I think those two things, openness and lack of fear, will help us realize that when we engage with the other who's radically different from us, whether it's an alien from another planet or an alien of robotic life, and we allow that other to enter into our lives, to cross the threshold, our whole world changes. We see the whole world out of a new set of eyes. So I think that's part of our whole transcendent nature, and I think that in itself is part of the religious quest. It's part of the whole drive for perfectibility or some type of completeness or wholeness that we just so long for. So I actually see that as a good. I'm probably unique, and people listening to this are probably thinking, "Wow, what happened to her? She really went off the deep end." But I love to think about the future. I actually do. I mean, if I had another career, I'd love to be a computer engineer and build some of the software. I wish I knew so much more of that stuff. I'm fascinated by the fact that we can imagine what doesn't exist and bring it into existence, and it changes us. And that is so amazing.
Kevin: Well, I think it's great that you're interested, but I would have to say, I think we have enough engineers programming. What we need are people like yourself thinking about the ethics and the morality and the religious component. We need more people like you. So don't give up what you're doing.
But you know, maybe with your classes or things, you could actually take a crack at writing a catechism for AI, writing a catechism for robots.
Ilia: That's very interesting because I do have a class coming up on robotics.
Kevin: So make it into an assignment and see how far you get in actually trying to craft something that could be... I mean, because as I said, no one else is really doing that.
And can you put it into something that could be coded at some point, something sufficiently specific that it can actually be used to guide somebody? That's something we need.
We need that kind of orientation. We need that kind of foundation for these aliens. So they're like an artificial alien, but they're also babes, because they don't have the thousands of years of culture that real aliens would come with. It's kind of like they're being born without that. You're having to equip them very rapidly with a several-thousand-year-old cultural tradition that they can use to guide them. It wants to be slightly better than what we have right now.
Ilia: I know in the Catholic Church, I think I saw something like the use of robots in confessionals, where we're using them in this kind of utilitarian way. It's like, "Well, we can use the robot for confession or to generate a prayer. If we don't know how to pray, we can have the robot do it." But that's not what we're talking about here. We're moving beyond, "That's a robot slave. That's a human relationship still." But we want to move the robot to a higher level of life, and that's always going to be a huge question: "Is the robot alive?" And to me, whatever is enacted that generates life in the overall sphere has to be life-giving in this way. So I think we need to rearticulate why robots are no longer just the slaves we once took them to be.
Kevin: Sure. One of my constant refrains is that intelligence is not a single dimension. Consciousness is not a single dimension. Life is not a single dimension. These are high-dimensional things. They're continuums. They're large spaces. So you can have something that's a little bit alive, something that's more alive in different ways. You can have something that's a little bit conscious or more conscious in many different ways, et cetera. So are they alive? Well, they'll have different levels and different kinds of life. And as they get more complicated, they'll become more alive. So I believe that a human is more alive than an ant. An ant is more alive than a bacterium. A bacterium is more alive than a virus. A virus is more alive than a rock. So it's not binary. There are very few things that we care about that are binary. So consciousness and life are huge, continuous gradations, very large spaces with many dimensions. So part of what we're going to learn with AI is to understand that intelligence is not binary. There are many different levels or varieties of smartness, or types of cognition. We'll discover the elements of intelligence and then recombine them in thousands of different compounds. So we want to avoid the binary questions.
Ilia: I think established religions are still at the binary level. But I do think a lot of it is the deep fear factor. A lot of people just have this tremendous fear that we're going to be annihilated or extinguish ourselves. But I'm saying we're in evolution. So, I mean, we kind of will, but it's not going to happen in this kind of major movie-like scenario, like the robots are going to come and just wipe us out. It's the gradual, creative change over time as relationships deepen and grow.
Kevin: Yeah. There are still some very serious forks in the road ahead. And one of them is, "Do we remain one species or do we become many?" That's a very serious division. Because one of the things right now, the foundation of the modern liberal progressive world, is that there is no difference between us. Fundamentally, biologically, we're so closely related to each other; almost unique among all other animals in terms of the degree of similarity we have in our genes. And that's the foundation of equity and ethics. Our sense of fairness is because we are fundamentally the same. But if we were to actually speciate, then there would be an us and a them. You could say, "Well, you do that and we do this." That's a very different world. And again, I don't know which way we would go, but there are a lot of people who are very interested in modifying themselves, and there's a whole bunch of other people who are never going to modify themselves or their descendants. You could see a speciation in the far future which would be very challenging for being able to say we're all equal.
Ilia: Right. Even for the life of the planet, I mean, sort of a planetary consciousness would be difficult because you would have just different...
Kevin: Yeah. Unless you had an "uber-consciousness," in the sense of something that transcended consciousness among all the different species; that's possible too. So while I think we're nowhere near that right now, we are still the same, and we are kind of changing what we believe. We're changing our genes slowly, but we're still changing them together. So for the foreseeable future, we can say that's a given. But I'm just saying, in terms of the far future possibilities, it is something that we haven't confronted.
Ilia: I guess, in this way, I've always been influenced by Teilhard's ideas. He thought that there were a lot of recurring patterns. So I think speciation in a sense marks the whole spectrum of evolution, if you go back to the very early hominid stuff. And as these evolutionary changes and progressions occur, and the branches of the whole hominid tree begin to develop, you do see speciation along the way. But it does sort of level off into a particular species for various reasons; environment and genes, et cetera. But here's where I would wonder, too, in terms of Teilhard's Omega. Is there a power or a governing principle? Is there even a kind of spiritual power of love? If love is our core reality, as Teilhard thought, then is it possible that even as we develop AI, and we ourselves develop with robots and AI, this power of love is such that it will be an irresistible development towards a greater unity? Now, I don't mean all the same—and in fact, I think the best unity is precisely in the diversity.
But something that—because AI can enhance relationality. It will definitely, as it has done already, enhance our communication and our awareness of everything around the planet, almost at the touch of a button. So there's a possibility, actually, that all that we're talking about in a religious sense can indeed be the emergence of God Omega, or that wholeness. A type of wholeness.
Kevin: Yeah. You could certainly imagine other planets in the galaxy where an advanced civilization had multiple sentient beings that had a harmony, some kind of convergence or overlap; some unity among multiple sentient species. I think that's entirely possible. I would also note, just in passing, that the period in which this species was the only sentient species on the planet has been very, very short, because before this there were multiple sentient species, and after this we're going to have multiple ones again because of AI.
Ilia: Exactly.
Kevin: So we're in an anomalous little window in which we were the only sentient beings on the planet.
Ilia: Right. We are 0.04 percent of higher earth life. So you're right, it's like the blink of an eye.
Kevin: So this idea that we're the only ones, this kind of loneliness...
Ilia: It’s an illusion.
Kevin: It's temporary and we're going to go back to that by making these artificial sentient beings.
Ilia: As we develop AI life, I do think the probability of encountering life on other planets is going to increase, as we'll just have a much better capacity for communication and such. So all that we're talking about is not going to be limited to terrestrial life. I do see, further into the future, that terrestrial life is going to complexify itself with—it could be Kepler life, or Mars life, or whatever life, whatever planet out there. And I think what's so hard for religious people is that we've been taught a narrative for so long, within our own little lifetimes. And it's very narrow. It's very defined. It has a beginning, a middle, and an end. And now we're in a world that's very open and expansive, and actually the possibilities are rather endless at this point. I think the best thing for us is to really just let go of some of the old stuff that keeps us bound and fearful and just open up to the possibilities of what life can be. That's part of it. That should be part of our ethics: inviting people into maybe a fecundity of life project, an Omega project, that life really can be enhanced.
Kevin: Yeah. You said something earlier which I think is true, which is that even with the perspective for a religion, you need more than a perspective. You need a myth and a story. That's not really present in these future religions that we're imagining could be possible. We may not be able to have a historical myth because we don't have enough time, but I'm just wondering what kind of stories. I mean, certain science fiction works are myths, future myths, about this. But something could happen where there was—there could be a story told that has—I mean, if something happened in this direction where people had a view of the world or were empowered in some way from a view... I mean, it's like the difference between a myth and an urban legend and a conspiracy theory...
Ilia: Exactly.
Kevin: Yeah. I don't know. Joseph Smith finding golden tablets in his backyard.
Ilia: I don't know. I like Teilhard's Omega. I actually think the New Testament, the gospels, they're stories. So we need stories that build on the story. I think we don't have to get rid of the stories. We can build on those stories. I think one perspective on myth, maybe, is that it gathers people in; the storytelling gathers people in. That's what a religious energy does. It gathers people into a vision. It has a vision for what lies ahead. And I think that's one thing even as we move into AI and robotic relationships. The kind of myths we need are ones that gather us in, give us vision, empower us to continue on. That this is a journey of hope and anticipation of the moreness of life. That's part and parcel of the religious myth.
Kevin: I could imagine somebody with AI of such intelligence and smartness, maybe overlaid with some new psychedelic drugs or something, where people began to report contact with other beings in the universe. And they would have enough evidence to persuade people that their channel is not coming from them. They would have maybe some insight, maybe make some kind of scientific discovery or something based on their contact, saying, "Well, here's what we learned," and people try it and it works. So there is a certain credibility that they have, maybe because of AI being smart enough to solve some problems. And so there could be a religion formed around that, one that is kind of channeling aliens; that's not unknown. There are religions that have an aspect where they are in contact with greater forces. And those greater forces, are they alien, or are they actually just AIs?
Kevin: People don't know, but they have brought back something, or a few things, that actually work. So I could see religions forming around that. By the way, I'm not just talking about America, because Russia and China are just primed for a religious revival, particularly of a pseudoscientific type.
Kevin: Interesting. They don't have a religion, they don't have a scripture. They don't even have a constitution. They have no guiding something to grab onto.
Ilia: Right. And they need that.
Kevin: They need that, and they are engineers. So then again, in imagining a scenario, I could certainly imagine something happening in China where people are reporting contact with aliens or greater-than-human spirits of some sort, and powers that they have gotten from that contact. That could be a new religion with millions of people following it.
Ilia: It will be, I think, actually. I think things like that already exist in some very small way. A number of people have told me—I mean, not too many, but they've asked me about microdosing or the use of hallucinogenic drugs to induce these types of spiritual experiences, which again just says to me, first of all, how mysterious this mind of ours is. What we call the realm of the mind, and what consciousness is, and the infinity of consciousness, and where it can bring us in terms of other worlds, alternate worlds, in terms of communication with other worlds. So all of that is possible, and it has actually been done in some small way. A kind of religion forming around that? I don't see—I wouldn't rule it out as a possibility. I do think the majority or the mainstream—and here, you're absolutely right.
So thinking worldwide, that's a different question, because I'm always thinking as a western North American. So my world tends to get a little narrow sometimes. But I do think these possibilities are real. I think it's good to be thinking about them, because someone's going to be thinking about them, so why shouldn't it be us, you know? I mean, you know that someone's going to come along and write the catechism for robots or develop new religious sensibilities and new religious creeds. So I think this is the time to be developing these ideas and to be as far-ranging as possible. Because if we only think according to what we've already known, I do think we cut ourselves off from other possibilities—from what the possibilities are. I think that's where your genius comes in, thinking outside the box and thinking of what doesn't yet exist. But we can begin to think in that direction. So I should say thank you for bringing me along.
Kevin: Well, yeah, I mean, it's a joy to brainstorm with you. I enjoy your spirit and your ideas as well, so I feel really privileged to have you.
Ilia: Thanks Kevin. Well, it's an honor to be with you as well.
Robert: A huge thanks to Kevin Kelly for all the time he spent talking and thinking with us. That's not all for this season. We're very excited to have integral philosopher, activist, and author Bayo Akomolafe join us next. If you enjoyed this series with Kevin, be sure to let us know with a message. For more of his profound insights, find his newest book, Excellent Advice for Living, online or at your local bookstore. A special thanks to our partners at the Fetzer Institute. As always, I'm Robert Nicastro. Thanks for listening.