Hunger for Wholeness

How to Face the Future with Kevin Kelly (Part 2)

Center for Christogenesis Season 5 Episode 8

In part two of this four-part series, Ilia Delio and Kevin Kelly discuss the potential social and economic impacts of AI: whether our fears are founded, and where there is hope that it will enhance our global unity. Plus, Kevin answers why he helped found Wired magazine, and how he's learned to face the future.

ABOUT KEVIN KELLY

“Humans are the reproductive organs of technology.”

Kevin Kelly is Senior Maverick at Wired, an award-winning magazine he co-founded in 1993. He is co-chair of The Long Now Foundation, a membership organization that champions long-term thinking. He is founder of the popular Cool Tools website, which has been reviewing tools daily for 20 years. He is also the author of multiple best-selling books about the future of technology. His newest is Excellent Advice for Living, a book of 450 modern proverbs for a pretty good life. He is best known for his radical optimism.

Support the show

A huge thank you to all of you who subscribe and support our show! Support for A Hunger for Wholeness comes from the Fetzer Institute. Fetzer supports a movement of organizations who are applying spiritual solutions to society's toughest problems. Get involved at fetzer.org.

Support Hunger for Wholeness on Patreon as our team continues to develop content for listeners to dive deeper. Visit the Center for Christogenesis' website at christogenesis.org to browse all Hunger for Wholeness episodes and read more from Ilia Delio. Follow us on Facebook, Instagram and Twitter for episode releases and other updates.

Robert: Welcome to Hunger for Wholeness. Today, we pick up with part two of a four-part conversation with futurist Kevin Kelly about technology, human life, and our future. Ilia asks Kevin some of the more practical questions, such as: can technology merge with human biology? How will it impact our economic and social futures? And later, Kevin tells us about his part in founding Wired magazine, how he learned to face the future, and his optimism about global unity.

Ilia: What do you think of the transhumanist move toward brain downloading, toward wanting to create mind?

Kevin: So the idea is, if you could understand the workings of a human mind, then you could kind of emulate that on silicon, and you could move a mind from a human there. I am incredibly skeptical of it for this reason: from my observation and everything that I know, the substrate that a mind runs on, the brain, shapes in some ways what can be thought. Another way of saying that is, there's a famous conjecture in computer science called the Church-Turing hypothesis. Turing and Church together made this hypothesis, which said that all universal computation is equivalent. In other words, if you have a computer that can do the basic functions of universal computation, it's equivalent to any other computer. So the computer doesn't matter. But it doesn't matter with a caveat, which is that you have to have infinite time and space, and real minds are not infinite in time and space; they're limited. And because they're limited, it does matter what you're going to run on.

So, I make a conjecture that to have truly, absolutely human-like thinking, you have to have human-like tissue to run it on. If you're running it on another kind of substrate, it will shape what it's thinking. So it might be very much like a human, but it won't be exactly like one, because it's running on something different. And for simple things, it doesn't really matter, because you effectively have infinite time and space, and the differences in timing don't really matter. But when you get very, very complicated, those differences of timing and whatnot make a difference. That's Gregory Bateson's idea of a difference that makes a difference. And so, my conjecture is that if you really want to make a human-like thing, you will have to make something that's wet, tissue-like, and neuron-like to really have that experience. So you could download a human mind into this other thing, but it's unlikely that it will feel the same, or that it will feel identical to what it felt like in the other brain. And so the pitch for doing that becomes, like you said: well, we can download you, but you won't feel the same, you'll feel a little different. Some people say, that's fine; other people say, no thanks.

Ilia: But I mean, the efforts now with, oh, who is it? Rodney Brooks's lab, and Sophia, building androids that literally mimic humans. We can build robots that look like us, and then we can perhaps, not so much download, but perhaps reproduce certain patterns of our choices or algorithms in these.

Kevin: Yeah, yeah. There's a reason to do that. I mean, again, there'll be several thousand different species, and most of the AI in your life as you go forward, you're never even going to know about. It's kind of like plumbing. Most technology in the world succeeds by becoming invisible. Motors succeeded because we have them everywhere, but we don't see them unless they're not working. And so, most of the AI coming up, you'll never see; it's going to be behind the scenes doing things, and you'll not be aware of it. There'll be some forward-facing AIs, and many of those will be doing different functions, and they don't have to look like humans. But there are going to be some AIs and robots that we're going to interface with on a daily basis, and we want those, because that's human scale. The face is an incredibly compact, powerful messaging interface. So they'll have faces, because faces work so well on us and with us, and they'll be at human scale, and they'll have other things like us, and so there'll be a reason to make some of them like that. But they'll be a tiny minority of all the ones that we make. There will be some benefits to making ones that are our scale, our size, like us, because we'll find it really easy to interface and work with them, and so those are going to be made like us.

Ilia: Do you think we should worry about those which are made like us, the kind of humanoids? For example, in Japan, they use a lot of humanoids now for receptionists, for dental assistants, and they are beginning to replace human jobs. So should we be concerned about that? Or should we just accept that this is part of our AI evolution and we just need to get better jobs or better skills?

Kevin: Yeah. Most of the jobs that robots are replacing are jobs that I don't believe any human should be doing. I think we'll be embarrassed that we paid humans to flip burgers.

Ilia: Yeah. Like in Texas, there's a McDonald's that is completely automated, roboticized, no human person.

Kevin: I think that's a waste of a human life to be spent flipping burgers, so I'm eager to replace those kinds of jobs that humans are not really looking forward to doing every day. And the more of those we replace, fine. However, there are lots of things that people would love doing that don't pay well now, that may pay better in the future. Like sitting with somebody who's sick; that's something that a lot of humans could do without much training.

Ilia: Right?

Kevin: A lot of people would love to do that, to be paid to do that, and I think we'll end up paying humans to do that because they're human. Teaching is another thing: we're going to have incredible teachers with AI, but there's also that kind of human oversight, the human bonding. People will come to understand how valuable that is and pay people better to do it. So again, the employment thing, I'm not concerned about, because there are so many jobs I want to eliminate for humans. They shouldn't be doing them unless they look forward to doing it every day.

Ilia: Yeah. Or they don't have other skills, or they don't have the education for some of the jobs, so I think we also need to... yeah.

Kevin: You know, as I said, small talk, most humans can do that. And I think that becomes incredibly valuable, being present, even just someone who is overseeing the bots. So, the current AI LLMs are pretty good at answering questions and providing help. Some of the few people who have lost their jobs are help desk people in the Philippines, because there are now bots that can answer questions and give help when you have a technical help desk. And what's interesting is, not all of them have lost their jobs, because of what's happening: being able to afford 24-hour help support was something that only a very few big companies could do, because of all the humans you need running 24 hours. It's a big, big operation. It's a lot of money. But if you have an AI help desk, that means that the tiniest little business can afford to have 24-hour help.

Ilia: That's true.

Kevin: And then with the big companies, what happens is that they find out it increases the level of quality, because at some point you have to escalate to where the human comes in. So the general level of quality rises, and the practice spreads. And because it's spreading, people are now willing to pay for that human oversight. So the people who lost their jobs are actually now more in demand than ever. And so what you get out of it is increasing quality, and it spreads because it's now affordable. And once people have it, they expect it, because everything has 24-hour support. And because they expect it, humans are needed at the highest level, and that actually increases the number of people that are hired. So like lots of things, we're probably going to see an increase in the number of humans doing help and an increase in the quality of the help all around, because people who had never even thought about having tech support 24 hours a day can now have it.

Ilia: Recently the historian Yuval Harari has been speaking of the ills of AI. His recent article is in The Economist. In his view, it's about massive corporations, this technopoly that we are all now subject to. And there's this kind of digital colonialism where big corporations are managing the algorithms that really create the biases in our societies. The gap between rich and poor, in his view, is just widening exponentially. And in his view, we're on the cusp of an imminent collapse. Do you think that is accurate? I mean, listening to you, I feel much more, not just optimistic, but I think it's more balanced, looking at AI in the context of human evolution. How might you see Harari's deep concerns about AI, and especially generative AI?

Kevin: So I'm going to put it this way: most critiques of AI are really critiques of capitalism and corporatization, basically.

Ilia: They're pretty tied up together.

Kevin: They are. I think it's true even outside of AI that global corporations play a larger and larger role in our lives, and the AI part is just one piece of that, so that's a larger concern. And my concern about that, the corporatization of it: the limit to that is the coming global population collapse, the decreasing number of people that we'll have on the planet every year after, say, 2070, and the fact that every country, even developing countries, is going to go below replacement level. We have no experience with capitalism where there is a shrinking market every year. The whole thing about capitalism has worked because we've been growing. You can have that kind of growth because every year there are more and more people on the planet. If you have a planet where there are fewer and fewer people, that kind of growth does not work. There's no way. So we're going to have to do something different because of that. There's a limiting factor just in this shift, as we turn the demographics around and every year begin to have fewer and fewer people, like Japan is already experiencing. Japan and Korea are kind of out in front, but it will be all of us, every country. There's not been a single country that's gone below replacement level that has ever managed to go back up, no matter what it's done. Korea is now spending $75,000 per child, and that still doesn't work. We don't know how to change that. So for me, that's the limit, and there has to be something else, another kind of growth besides literal growth, because that's going to be impossible.

Robert: Technology has been shaping our economies and societies since long before computers. Next, Ilia and Kevin address more fears about the impact of these tools, particularly who has the power and influence. And later, why Kevin helped found Wired magazine.

Ilia: I think what people fear today is that the lack of any kind of ethics or constraints or boundaries or norms around AI, in the hands of the wrong people, could really land us in a very difficult situation. And it is a question of ethics, insofar as: is there a right way for AI to develop or not? I agree, I like open-source AI; it does give a lot of freedom for creatively expressing ourselves and developing ourselves. But at the same time, there is a concern about how we make choices, and what kind of choices we're making with it. So people are concerned, and I think this is one of Harari's concerns, and I've seen it with others, that we're on the cusp of something imminent, for example, the threat of nuclear war, just at the touch of a button. With AI, we're so wired together, and yet, who's behind the controls? Who's managing the data, so to speak? Who's pushing the button? That's what makes us very fearful and nervous.

Kevin: Yeah. People are fearful, but when people act on their fear, they always do stupid stuff. I mean, all the stupid things in the world come from people acting on fear. So yes, there's fear, but you don't want to act on the fear, because that's going to make you do something stupid. And I would say this: AI will produce the biggest, weirdest, most difficult problems we'll have ever seen as a society. I don't know what they are. I don't think the ones that he's worried about are actually going to be the really big ones. But I think it's inevitable that they will be really, really big and problematic. The reason why I'm optimistic is that, so far in the past three or four hundred years, technology has expanded our ability to solve problems faster than it has expanded the problems.

I'm optimistic not because I think our problems are smaller than we thought, but because our ability to solve them is larger than we thought. And so, AI will continue to give us more powers to solve the problems that it will create. And there will be some problems that we have not encountered before. On the issues of ethics: ethics are actually pretty easy to give to the AIs. It's just code. The problem is we don't have a set of ethics that works for humans. Our own ethics right now are very shallow, very inconsistent, and when we try to code them, we realize that they don't work. And so, all the AI companies are bending over backwards to try to make this ethical. In many ways, they're constraining what can be done. That doesn't mean that somebody who is a rogue or uncivilized party couldn't take and misuse those. That is a real concern.

It's not so much the AIs themselves, but what happens when bad people get access to them. I think that's a legitimate concern. And we have the same question elsewhere: what do you do about bad people? It's all the reasons why we can't have nice things, right? Because one or two bad people make it really tough for the rest of the world. But that's what society and civilization are about. It's about dealing with those bad characters, who are a minority, and asking how you encourage most of the people in the world to be good citizens.

Ilia: You kind of pointed to it before when you were talking about how we can reshape the workforce with AI. I think one thing is, AI is an invitation to know ourselves in a deeper way. So I think meditation, yoga, or centering, or just slowing down and coming to an inner awareness of this ground of being that's within us, this creative ground, you know? That very thing that allows us to create something that doesn't exist, a kind of magic. So I think the more we're in touch with ourselves, the more we're at home with AI and its many possibilities to build a better world. Second, I think about how we use AI, and how we ourselves, instead of alienating ourselves... I mean, Sherry Turkle is very, very concerned that we're alienating ourselves, right? We need to talk more, she says, to one another when we're alone together.

And you know, who's alone? Are we alone within ourselves, and therefore using technology to satisfy our deep need for relationship or conversation? When in fact, I think it's not AI, it's us. Very similar to you, I don't think our problems are AI; I think the problems are within us. And I think we need to do some soul searching. Like you said before, what are we about? Because we're creating this stuff. We're putting into it the very things we want. We want it to improve, we want things done better. So, yeah, I'm really happy to hear this, because I think sometimes we overreact to AI, and especially when the media gets hold of this stuff, we set up these tremendous fears. And then everyone's on this cusp of existential threat, and AI becomes an enemy, everyone becomes an enemy, and we have a very fractured world. So, yeah.

Kevin: Yeah. No, I mean, one of the things about AI right now is that the frontier is moving very fast, but adoption is going to be very, very slow. It's going to be decades before the current models are embedded into society and make any difference at all. I mean, how many people do you know who've been fired from their jobs because of AI? It's like zero. But people have the fear of it. And part of that fear will go away as people become used to it; they'll realize that these systems can't really do all the things people are imagining, that they're kind of limited in terms of what they can actually do day to day. So just getting familiar with it would help the fear. But I do agree that we have the opportunity to improve ourselves, both collectively and individually, to use these tools to transcend the things that have bound us in the past.

And again, I think there is a divine component to these things, allowing us to align ourselves with the good and the better and the true, and that gives us another tool to do that. And to not have to spend as much time working to make food and shelter, so that we can elevate our own horizons and be much more about actualizing who we really are and what we can really do. So that to me is a boon. That's a benefit to us. There will be bad uses of it. We know that. We will continue to work on minimizing those, but I think the positive benefits outweigh the negatives, like they have in the past, for the past couple hundred years. It's possible that could all stop tomorrow, but it's very highly unlikely. It's likely that this will continue going in that direction.

Ilia: I agree. Yeah. I am completely with you, because I think this is what it means to be part of nature and part of nature's evolution. And God is part of this as well, so I have no problem looking at this in a kind of cosmic context. What led you to found Wired magazine? It's really a great magazine, and it's really current. And I'd actually forgotten that you are a founding father.

Kevin: Yeah. So there was another couple who were really the true founders of it, who brought it to me in the idea phase.

Ilia: Is that the Whole Earth movement?

Kevin: No. This was when I was editing Whole Earth, and they were looking for an editor to run the magazine. So the idea was that there was going to be this technological thing happening that was going to be really big. At that time, tech people were not cool. They were the geeks. They were the nerds, and people beat up on them and made fun of them. And part of the agenda was to make it cool, to really focus on the people who were making this and say: what are their dreams? What do these people who are making this really want? And where is it going to go? And we were optimistic. That was the other thing: it was a place for us to be optimistic about this, because everybody else in the nineties was pretty pessimistic about it. Kind of similar to AI now. People thought... I mean, I had a list of authors who said they would never, ever use a word processor.

Ilia: Really?

Kevin: Because it was going to destroy writing. Oh, yeah. They would never use a word processor because it would destroy writing. Well, I mean, everybody uses one now. So part of it was to report on this, to make technology central. Technology was kind of like an afterthought; there were no technology columns. I mean, it's hard to remember how marginal this technology stuff was. And we felt it was going to become the major, central thing, which it did. I can't go to a single dinner party now where AI isn't being talked about. It's at the center of everything. And so that was the premise of Wired. Louis and Jane were the couple, and Louis convinced me. Louis said, I want to make a magazine that feels like it's been mailed back to you from the future.

Ilia: Oh, wow.

Kevin: And so, as I described it to the writers, my job was, well, this is Bill Gibson's quote: the future's happening, it's just unevenly distributed. It's erupting in these places, and we wanted to go to the places where the future was erupting and report back and say, here's what's happening. That was the model of it. I think as things seem to happen faster and faster, people realize that thinking about the future is a life skill. You have to be concerned a little bit about where things are going. A hundred years ago, two hundred years ago, it didn't really matter. Things didn't change that much. That's not true anymore. You really do need to think about the general trajectory of where things are headed. And so everybody has kind of gotten the message that you have to face the future a little bit more than you did in the past.

Ilia: You began with your own story, from New Jersey to Asia. So I want to ask: some people say we're in this chip race, this kind of AI race between Asia, or China, and the US and Russia. Do you think AI is actually fueling a global race between nations? Or do you think it can actually unify us into a planetary type of community?

Kevin: That's a really good question. I'm generally a total globalist. I think right now what's happening is that we are becoming a planetary society, reluctantly, but it is nonetheless happening. We have planetary-scale problems, like climate, for which we need planetary-scale cooperation and government. I'm a total world-government fan. However, right at this moment in 2024, there's a lot of saber rattling happening in the US against China, and I'm a little worried about that. You know, there's this idea of keeping the chips from China. From my perspective, this is going to accelerate the speed at which China makes these chips. They're going to make their own chips down to the five-nanometer scale, and they'll make really good AI chips. We're not going to prevent them from doing it, and I would say that's a good thing for China; it's good for them.

So in some ways, I think this is shortsighted. I think the tariffs are shortsighted. There's a shortsightedness to it. I think we want to think about China as a very able competitor, but not as an enemy. The idea of competition is actually good, and to compete with China would be great; making them into an enemy is a step backwards. Right now, AI is being used to turn China into an enemy, but I don't think that's inevitable, and I don't think it's optimal either. But I think AI itself wants to be pretty global. I think there'll be benefits to having AI try to manage some global things, because that is beyond the level of a human, or even a bunch of humans, or all humans. So I think there are some of these planetary systems that should have AI governing them, and we'll have to trust the AI to do that, and we can. So, I think AI is an inherent factor in the true globalization of the planet.

Robert: We're halfway through our series with Kevin Kelly. In the next two episodes, Ilia and Kevin focus on religion. In particular, they ask, can religion help shape AIs? Does it still have the power to answer questions about our humanity? A special thanks to our partners at the Fetzer Institute and our team at the Center for Christogenesis. I'm Robert Nicastro. Thanks for listening.