The Perils and Promise of Artificial Intelligence


The Colson Center's Breakpoint Forum recently hosted a conversation between Abdu and Brian Johnson, an expert in cybersecurity and technology, about the emergence of AI. There is much promise in this technology, but also much to be discerning about. Abdu and Brian discuss many issues, from AI's moral status and how it is impacting and will impact our children, to how to keep it from turning us into machines ourselves, and offer practical advice on how to move forward with this unavoidable technology. Plus, they answer questions from the audience at the end!

Transcript

Pete Marra [00:00:32] Well, good evening, everyone. My name is Pete Marra, and I'm the chief operating officer at the Colson Center. I want to welcome you to the Breakpoint Forum tonight, where we're going to talk about a very exciting topic, which is the topic of AI. Tonight we are blessed to have two special guests with us, and I'm going to let both of them introduce themselves in a moment. We have Abdu Murray, who is the founder of Embrace the Truth. And we're also pleased to welcome Brian Johnson, who is a cybersecurity expert as well as a tech investor, and has been in this AI space. Before I turn it over to them, though, just a couple of housekeeping items. We're going to have plenty of time for questions tonight, and I want to remind you that you can text the number anytime during the presentation. We'll accumulate and compile those questions and do our best to get to as many of them as we can tonight. So without further ado, I'm going to jump in. Brian, could you give us maybe a little bit of your background and how you got started in the AI space?

Brian Johnson [00:01:43] Yeah. Thanks, Pete. What a privilege to be here, by the way. Thanks to Breakpoint for putting this on. I've been in the field of technology, working in various roles, for about 30 years. In the last ten years, I started working in the fields of emerging technology and cybersecurity, leading the global cyber team and emerging technology teams at PayPal. That led me into areas such as quantum computing, artificial intelligence, machine learning, cryptography, and some other areas of research where I was privileged to work with some of the field's brightest and best folks. So I got into this area through that recent experience in the last ten years, and most recently I've seen a lot of the evolution of it play out right in front of us: the research we had been doing is now becoming applications deployed in real life.

Pete Marra [00:02:30] Awesome. Thank you. And, Abdu, tell us a little bit about yourself.

Abdu Murray [00:02:35] Yeah. Thanks, Pete. And thanks, everyone, for having me, and thanks to the Colson Center for putting this on. My name is Abdu Murray, and I am the founder of, and a speaker and writer with, Embrace the Truth, which is an organization dedicated to offering the credibility of the gospel to every questioner we encounter. We're big fans of answering people, not questions, because questions don't need answers, but people do, and they use their questions to get them. So that's the hallmark of what we do. I'm a former Muslim. I came to faith in Christ after about a nine-year journey into the philosophical, theological, historical, even scientific underpinnings of various religious beliefs and even non-religious beliefs. In the past 20 years of doing ministry, I've had a particular emphasis on the interaction of faith and culture, especially where the more cerebral issues really come to bear. As for my interest in AI: AI has been around for quite some time, but my interest was really piqued after I started to see some interaction with not only the tech world but also the art world, and the idea of what it really means to be human in all of this. I approach it from a specifically Judeo-Christian worldview, but I try to absorb as many different views as are on offer. So that's been my experience for the past few years now, delving into this topic in particular as I examine the intersection of faith and culture.

Pete Marra [00:04:02] Yeah, and that's why I'm super excited to have both you guys here tonight. We have, I think, a great balance of tech and worldview and philosophy, so we're going to be able to come at this in several different ways. The first question I want to ask is: what makes AI different? Historically, I've been in the tech space as well, and we've seen these changes. I remember when email was supposed to bring about a four-day workweek and was going to end the world; then the internet was going to end the world, and social media was going to end the world. But what makes AI different, from your standpoint?

Brian Johnson [00:04:38] I think, from a technologist's standpoint, we've seen a recent convergence of the most high-powered computing and extremely fast processing capabilities, paired with some very sophisticated software models and algorithms that have developed over the last 15 years, sitting now on top of the most tremendous amounts of collected data we've ever been able to imagine, and beyond. When you converge those three components of computing, you have massive potential for what AI purports to do: make decisions on its own and actively, proactively learn on its own. The uniqueness of artificial intelligence, in that sense, is the ability to learn, adapt, and make decisions based on those three converging components: the invention of supercomputing, paired with large, deep sets of data, built in with extremely large language models that can take natural language and convert it into computing platforms. It is essentially a unique pivot point. It's an emergence that is far surpassing the invention of the internet or the smartphone in our day. It is, I believe, a generational movement in computing that we've never seen before.
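To make that idea of learning patterns from data concrete, here is a toy sketch, purely illustrative and far simpler than any real large language model, of how a system can "predict what comes next" only because of the data it was trained on:

```python
# A toy bigram "language model": it learns which word tends to follow which
# from a tiny corpus, then predicts the most likely next word. Real LLMs use
# deep neural networks over vast datasets; this only sketches the core idea
# that the model's behavior comes entirely from patterns in its training data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat', because 'cat' followed 'the' most often
```

Scale that same pattern-learning idea up to billions of parameters and vast amounts of text, and you get the convergence Brian describes.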

Abdu Murray [00:05:53] Yeah. Augmenting what Brian just said: I remember sitting on a plane, traveling (I won't say which cities, because it would give it away), next to an executive at a very top-level software company. I was actually writing an article on AI, and she happened to sit next to me. She asked what I do, I asked what she does, and naturally the issue came up that she is part of the team that is actually marketing the AI being developed by this particular software company. I said, really, so what do you love about it? And she went into some of the things, breathlessly talking about how great it was. I said, well, what are you worried about with it? And these are the words she said: "We don't know what this will do to us. We didn't necessarily anticipate what social media was going to do to us, to young people in particular. We thought it would do one thing, and it's done something completely different than we might have otherwise anticipated." Now, some people did anticipate some of the polarization and other things it did to us from a cultural standpoint, or on big-questions issues. And at the end of this introductory comment, she said, "You know, we didn't know what social media would do to us. It's done a lot of good, but it's also done some things we didn't expect, and the current LLMs, the large language models, and the other things we're dealing with right now are an order of magnitude more disruptive than social media ever would be." So, given all the things Brian has already said, what's different about this is that a lot of machine learning is already undergirding things we've seen before, like social media, like the internet, like what we use on smartphones, from Siri to Alexa to all this stuff. But the emerging generative AI and the other things we're looking at now have an ability to be disruptive far beyond anything else that ever has been. And I think that's part of what makes it special. If I could just close this out: it's not all doom and gloom; some of it's great, actually, in many, many ways. But there's an impact. This technology has a specialness in its impact, and I think it has to do with our understanding of what it means to be human. It has a special anxiety-inducing power. I think it was Venki Ramakrishnan who made the comment that Descartes stated that we humans define our very existence by our ability to think, and then he goes on to say that it's not surprising that, in an anthropomorphic way, our fears about AI reflect this belief that our intelligence makes us special. So one of the ways this technology is different from the past is that it impacts our sense of what it means to be human, because if intelligence is what made us special as humans, and then this artificial thing does it as well as or better than we do, what does that say about us? So you have two simultaneous things: we are reduced, in one sense, to being mere machines, we're not special; but in the same sense, we become the divinity, because we've created something that itself is godlike. And I think this bouncing between these two poles is what makes AI special.

Pete Marra [00:09:04] Yeah, that's a good point. And I'm already seeing some questions coming in, so this is great. Again, if you have questions, be sure to text them and we'll try to answer them. There are people on this call that may be really familiar with the terms we're already throwing out, and again, we're all kind of techno-nerds, but there are other people that are brand new to this space. So maybe we could spend just a little bit of time, Brian, defining AI, because I think a lot of people either think about AI as Siri, or they think about AI as the Terminator and we're all going to die, right? And there are a lot of gaps in between. Could you maybe give us a couple of ways to think through AI? And then, Abdu, to your point, what are the implications of those theologically for us as AI is on a development course?

Brian Johnson [00:09:50] Yeah, absolutely, Pete. I think it's important to realize that AI has been used as a very broad term, so when you hear AI, definition of terms is very important. There are really three main branches of the study of artificial intelligence and the applications that we see. The first deals with the current implementations that Pete mentioned, automated assistants. Those would be defined as artificial narrow intelligence, or the narrow versions of use cases: solving a particular problem, like a smart assistant or a chatbot or a self-driving car. They're very narrow, industry-specific, use-case-specific tools that respond to preprogrammed activities and make responses within those subsets and rules. So ANI, in the narrow sense, is essentially machine learning on steroids. It's building large data sets, running rules, and learning how to make computers think more intelligently than a standard piece of preprogrammed software, but not autonomously, not on its own, not learning and not expanding its knowledge. Artificial general intelligence would be the second category, and we as computer scientists would say it's still theoretical but in development, meaning there are no current applications of artificial general intelligence, because that's where the computer is literally learning and deciding autonomously to a point where it can adapt its own decision tree. It can model decisions and maybe be used within humanoid bots, the Terminator-type implications of a robot. That model, theoretically, would enable machines to act autonomously from humans, to learn and adapt on their own. And then artificial superintelligence, or the super category, is when machines have become sentient and are built to have their own creative power. This is an extremely theoretical posture that says that at a certain point artificial intelligence could train itself, could take over human existence and autonomy, could go beyond the idea of the hybrid human or the transhuman, and could exceed human intelligence and the ability for creation. And again, as Abdu mentioned earlier, in the superintelligent model, AI would then, of course, supersede the human race. So this is very theoretical, extremely futuristic, in the sense that in most theoretical categories it is something 80 to 100 years or more down the road, if it's even possible. At present, though, we see implementations of the narrow version, and we see a lot of development in thoughts and argumentation around artificial general intelligence; that's primarily where our discussion of artificial intelligence is focused. So I would say, in those three categories, what you're primarily dealing with and what we see here are narrow intelligent AI models: things we know, like smart assistants that we talk to, that we get pre-canned responses back from, that we even chat with through chatbots, etc. And those are in a pretty early stage still. I would say they're pretty beta in the sense of deployment, but they're moving very quickly.
As for the timeline of artificial general intelligence, of computers being as smart as or smarter than humans in those categories, with the data sets and decision sets and autonomy in place, some scientists are saying AGI could be 10 to 20 years out at the soonest, 20 to 40 years out in the midterm. So in terms of timeline, we're really dealing with the narrow use cases, and the discussion around general ones, for the most part.
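As a concrete picture of what "narrow" means, here is a minimal sketch (assuming scikit-learn is installed; the task itself is only an illustration) of a model that learns exactly one preprogrammed task and nothing else:

```python
# A minimal sketch of artificial NARROW intelligence: a model trained on one
# fixed dataset for one fixed task (classifying iris flowers). It gets good at
# that task, but it cannot decide to learn anything else on its own -- the key
# contrast with the still-theoretical "general" category.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"Accuracy on its one trained task: {model.score(X_test, y_test):.2f}")
# Anything outside its four numeric inputs is simply outside its world:
# it has no mechanism to expand its own scope or "decide" to take up a
# new problem.
```

Everything in the narrow category, however impressive, shares that shape: a fixed scope set by its designers and its training data.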

Pete Marra [00:13:29] That makes a lot of sense. And maybe you can touch on what you started to mention, as Christians, and there are a lot of questions coming in to the chat already on this: how should we engage with AI, particularly narrow AI? Should we use Siri? Is Siri bad? Are chatbots good or bad? How should we start thinking about that? And more importantly, as you mentioned, how do we deal with the conflict of: are we helping aid the creation of superhuman AI by training it and making it smarter? Should we disengage? What's the implication for us?

Abdu Murray [00:14:08] Yeah, thanks for that. If I were to follow up on what Brian was saying, given what AI is doing and the possibilities of the different kinds, whether narrow or general or superintelligent AI, which again is extremely theoretical, a couple of things come to mind. First and foremost, I think we ought to educate ourselves and make sure we're educating other people as much as possible about what AI really is, getting these terms that Brian laid out for us right and current, because I think the romance behind artificial intelligence, and also the dread, are both overblown. What ends up happening in the common culture and in the cultural parlance is that we use the word AI so flippantly that we imbue things with an intelligence, as opposed to just straight-up machine learning or narrow intelligence: helping you drive your car, determining the distance between you and the car in front of you and whether you should brake, whether you're veering out of the lane, what date it is, what the odds are that the Pistons win a game any time soon. These kinds of things. We need to educate ourselves, because that does have a theological implication. When you start to interact with ChatGPT or any text-based large language model, you're going to end up having this conversational feel with it, and you're going to imbue it with something it might not otherwise have, an anthropomorphism. And then you're also going to imbue it with a knowledge and an implicit trust, because "it's unbiased, you know," when we know from a lot of the stories coming out right now that these systems are hardly unbiased. The machine is just a machine. It doesn't have bias one way or the other; it just does what it's been trained to do by its developers. But this all comes back to the essential idea of humanness, what makes us special. Is there really an Imago Dei, an image of God, on each one of us that creates in us something that's a higher-order being than a deer or a leopard or a mollusk of some kind? These are the implications happening right now. You can't avoid interacting with AI. In fact, many of us are interacting with it all the time; my guess is some form of machine learning is helping us to have this very conversation over this very medium right now. If you're using a smartphone, it's probably listening to you and using its predictive powers to send ads your way based on things you've talked about or searched. So there's no escaping this, and to retreat, I think, is naive. However, to engage without a strong theological basis for what it means to be human, and without a sense of the transcendent, leads to some pitfalls as well. The pitfalls are equal even though they're opposite poles: the pitfall of thinking we're divine and the pitfall of thinking we're just mere machines. And the theological implication here is a concern, but a concern in the sense that it can be dealt with. It's not doom and gloom.
The first concern is this: I feel like we're in a Digital Garden of Eden when it comes down to it. There was an article I saw that got me fascinated with this whole thing in the first place. It was when Jason Allen won an art contest, in the digital category, a number of years ago. His image was actually quite beautiful, and he used Midjourney, a generative AI that produces images, to create it. He didn't put an electronic pen to a screen or anything like that to create it with his hand in any way, shape, or form. He just typed a bunch of prompts into Midjourney and tweaked them over time. Well, when everyone found out that this is how he had won first prize, there was a cry of cheating: "It's cheating, it's cheating. It's not real art, because you didn't put light pen to screen, or brush to canvas," in a digital sense. The judges basically said, "Yep, you won first prize, and you should have." And when he defended himself, there was an interesting thing he said about this whole process: "This isn't going to stop. It's over. AI won, humans lost." And if I'm not mistaken, this artist, or at least would-be artist, said art is dead. Now, that's a fascinating thing, and the way he said it fascinated me because it sounded like a boast, something he was bragging about, that AI won and humans lost, as if he forgot that he was in the latter category instead of the former. And I started thinking to myself, what would account for such a strange boast? I think it's this, and I think the Bible actually predicts this, which is why I think a case can be made for the timelessly timely wisdom of this ancient book we have to grab hold of. It explains the human condition so very well, I believe. So you hear this guy bragging about AI taking over art, and you go back to the Garden of Eden. What's the whole story about? It's the creation of Adam and Eve to be in communion with God. Now, I might ruffle some feathers a little bit if I say this, but I think this is logically, philosophically, and even theologically true: by definition, God is a perfect being and therefore a necessary being. He can't not exist. He doesn't need anything to create him; he exists eternally. He's a necessary being. Which means, then, that God can't create another God, because that being would no longer actually be a perfect God; it would be created, and therefore it can't be perfect. So God creates beings lesser than himself, us, made in his image and in his likeness; he can't create a being equal to himself. Now we as human beings decide, through our ingenuity, that we're going to create, and marvelously so. We have this inherent gift of creation. But what we do is create something that is not only in our image and likeness, like AI, but something that's actually better than us. It computes better than us. It can create art and win contests better than we can. It can write poetry in the style of your favorite author, or its own stuff, whatever it might be. And so it seems to be better than us.
And I think this is spoken of in the Bible, essentially, as, for lack of a better application, that original sin, where we take our God-given purpose and try to one-up God. Adam and Eve's sin was that they didn't want to be with God; they wanted to be God. And in our current digital garden, what we're finding is that in creating, and then bragging about, a computer system that outdoes us and renders us, for lack of a better word, obsolete, we brag because we are one-upping God. We are becoming almost super-transcendent over the transcendent one. And I say this not to scare us but actually to give us comfort, because there's ancient wisdom in a book for our modern issues. If we see that, then maybe we can see a way forward, guided by a book that explains the human condition as it actually is. Maybe there's a way out of this digital garden without imbibing too much of the fruit.

Pete Marra [00:21:59] That's a great take on it, and I'm seeing a ton of questions pour in on this exact thing. So, Brian, I'm going to turn it to you for a second. Staying on that thread, it still begs the question of how we engage with AI. Is it possible for believers to help train these models? Is there AI happening right now that's actually good and benefiting us, and we should support it? Is there other AI that we should know about and understand how to prevent its spread or retrain it? What should we do from a tech standpoint?

Brian Johnson [00:22:36] Yeah, we should absolutely follow the Apostle Paul in 1 Corinthians 11. We should be part of this. We should not distance ourselves; Christians should influence. Christians should be developing AI models. Christians should be helping to build the patterns, the data sets, and the applications for AI, as we do anything else. Speaking of what AI implementations and products are out there: I recently met with a gentleman who is building AI-powered tools to combat human trafficking and the exploitation of children. There's actually a Christian group supporting the investment and funding of some of these tools, and one of them is called DejaVuAI. He's basically taking the model of image recognition and AI models to power capabilities on the internet at a scale you can't reach without thousands of people, and he's doing it with a small team plus AI tools, and those are able to scale and protect and rescue lives. That's a tremendous deployment use case. AI is used in so many applications in the medical field, too. There are radiology capabilities and pharmacological research areas, and a lot of implementations in medicine that we've yet to even hear about, that have provided tremendous benefit. And I really appreciated how Abdu mentioned the Garden of Eden and this kind of tech garden that we're in. I would say, again, it is a tool, and it is a tool designed by people. We get to influence that. It's not unlike, in some ways, the innovation in Genesis 11. People were building a tower because they wanted to be like God, or reach God, and unite in language. In the Tower of Babel you can really see human ingenuity being under God's control and sovereignty. In the same way, whether we look at AI as something akin to a tower or to the garden, we can see it as a tool of human invention, distinguishing our inventive capabilities in a way that is still under God's authority and sovereignty. So I'm not scared of AI, and I don't think anyone should be. We still serve a God who is absolutely in control of anything that AI is doing. And we're also the people who can implant into companies the use cases that we should. So I would say, yeah, the use cases are tremendous. There are a lot of applications of AI that have provided both protection of people and research capabilities; you can use AI for some amazing research. However, and Abdu referred to this earlier, you have to understand the bias. Be discerning. What Christians should do in engaging with AI is be discerning, as you would with anything else, as the Berean church was. Pay attention to the scriptural implications of these technologies and also the motives of the folks behind these tools. Some Christian groups are building AI tools, and you should still test some of their motives. But take Google's latest product, Gemini: it has shown an extreme bias, and it's embedded in the code and in the rules of what the Google Gemini product has released. This is a gift. We've now seen the biases on public display, on record and en masse. Now we can see how these tools have been designed and built to be deployed.
So it's a good thing to see the transparency and to hold them accountable, and we should do that. We should engage in the innovation and deployment of these things and not distance ourselves from them at all. However, I would footnote that with the other side, which is, as Abdu mentioned earlier, the aspect of relationship. What I observed in the first 20 years of my career in technology was the deployment of social media on smartphones and always-connected devices. I remember when the iPhone was first released, we couldn't figure out what to do with it. It was kind of like this cool music player, right? An MP3 player. I actually sat with the chief information officer at Charles Schwab at the time, and he said to a roomful of us, holding one of the very first iPhones, "This thing will never have a business application." And we're like, are you kidding me? It's a supercomputer in your hand. This is insane. And now it's ubiquitous; everyone has a supercomputer in their hand and takes it for granted. So what we started to see was the use of that supercomputer in your hand for what? A captive audience with social media. The first 20 years were about deploying always-on, unlimited internet access to your palm for the purpose of gaining your attention. And I think the attention economy with social media won; it proved that tech giants and inventors, in the malicious sense, could capture our attention. We've used it for a tremendous amount of good, of course, I'm not trying to dismiss that. But if we look at some of the negative uses and applications, the attention economy won, and it proved that they could capture our attention. What I believe AI will do, taking a step beyond social media and the attention economy, is engage in the relational economy. The idea is that artificial intelligence can mimic or replicate human interaction and personality, even try to display empathy at times, as a way of trying to relate to humans, by programming, of course. I think we need to be extremely cautious about that, especially with children engaging with AI, and for parents out there, there's a lot to be aware of. Start your discussions with your kids about engaging with technology that is AI-powered. The AI bots built into TikTok now: you should be aware of those and turn them off. Have conversations with your kids about not having a discussion with AI as if it's a friend. Snapchat is the same thing. There have been deployments of AI chatbots that mimic people and build simulated relationships into social media applications, which turns those into relationship-based tools. And I think that's a huge danger in our society we need to be aware of. So I think there are some garden walls we should establish around the use of these tools, and we should be aware of the risks and dangers as well. But at the same time, that doesn't mean disengage. It means be aware, be discerning, and be part of the appropriate implementations of them where you can.

Pete Marra [00:28:55] Yeah. You mentioned wisdom in there, right? The ability to discern, our ability to be wise in using these tools. But it also goes back to this tension we feel between the emerging technology, what it means, and not knowing its implications down the road, and then, hey, go out and impact it; Christians need to be engaged. There's obviously a worldview bias, right? Whoever's creating this, and we've seen these biases on display with ChatGPT and Gemini and others. But, Abdu, I'm curious from your standpoint: should we see AI as moral? Should we see it as amoral? How should we morally see and understand AI, both as a tool and as it progresses?

Abdu Murray [00:29:43] Yeah, that's a great question. I think the answer essentially lies in the fact that, again, it is a tool, but it is a tool fashioned by human beings. It's a little bit different than a hammer. A hammer has a function, but it can be adapted to various functions; its use can be very different. You can use it to build a home or bludgeon your neighbor. A hammer is morally neutral; it's the hands in which that hammer sits, and sometimes the manufacturer and some of the built-in things. What's unique about AI is that, unlike a hammer, which is basically a fixed form, AI is information-based. It's trained on the information it's fed and on the algorithms it's based on. And so, as Brian already said, you can have tremendous amounts of bias in the use or the implementation of AI in our own lives. That's the issue. It's not whether the AI itself is moral or immoral. Gemini doesn't actually have opinions on race, on Adolf Hitler or Elon Musk, or on pedophilia, and those are the biggest examples out there right now of ways in which the responses to prompts seem to have been incredibly biased. The AI doesn't have feelings about this. It's the programming, the way it's been trained, that has caused it to output these things. So, philosophically or logically, it's amoral, non-moral. It's not even morally neutral; there's just no morality to it at all. It doesn't sit on the fence and say, I'm going to let you pick your views. It doesn't have any moral sense whatsoever. It's the implementation of it, and these kinds of things. We also have to be very careful, though, that we don't imbue AI, as Brian said, with a morality that would let it make what I think are moral decisions. The image that comes to my mind when I think about this question is from the movie I, Robot, which is of course the movie version of the Isaac Asimov classic, where he wrote about artificial intelligence so long ago. There's the character played by Will Smith, so naturally everyone's seen it, and he's got this artificial arm, which came about because he was rescued from a car accident by a robot. The accident involved him and another car that had a little girl in it, I think an 11-year-old girl. He was saved, and he carried a bias against machines because he had a sort of survivor's guilt over the whole thing. As he was talking to another machine about this, the machine told him that the robot that saved him had calculated that he had something like a 17% higher likelihood of survival. So it picked the most likely survivor of the two humans and made no value judgment. It just said: I have these laws that require me to save a human; this human is the most likely to be saved; so I save that human. And Will Smith's character said, that's the difference. That's it. A human being would have known the right choice to make. Even though I had a higher chance of survival, that 11-year-old girl was the one the robot should have tried to save, and a human would have made that call. There is a difference between making all of these decisions on probabilistic calculations and data sets, as opposed to weighing the impact of your actions on a human being.
With a child versus an adult in that particular situation, life for life, what do you do in those moments? It's a very different thing. The bias issues we've seen lately, I think, actually give us a gift, as Brian said, because we see the biases now. But there's another gift as well. I think that gift is that it points us to a sense of oughtness: that things ought not to be biased, that we ought not to try to unduly influence somebody by the magisterium of a large language model and the mystique of machines that seem unbiased but are actually quite biased because of their makers. That oughtness is a moral value. When we ask whether it is moral or immoral, we're actually assuming that objective moral values exist. And so the outrage that's out there, whether from someone of faith or someone not of faith who has a moral objection, is beginning to dive into questions about moral oughtness that supersede, that go above, the "is" of the world. The "is" is just what's there, but it ought to be a different way. Well, how do we judge this? It brings to mind transcendent moral values. So I don't think these things are moral, but they raise wonderful moral questions. They prompt us to ask very serious questions. And as these things come to the fore, we can ask: is there a transcendent authority? Is there something we're all subject to? AI can't be the answer, and the human beings who create it can't be the answer. I think all of this results in something important, a truism, at least as far as I'm concerned: AI is not to be feared in and of itself, because it can be wonderful, absolutely breathtaking in what it can do. I don't think the problem is AI's silicon; I think the problem is humanity's soul. And that's a biblical precept: the problem isn't in the tool you wield but in the wielder of that tool. Will he or she use it for good or for ill? The Christian message teaches us that we're inherently going to use it sometimes for good, but other times for our own purposes. And I think we need to educate ourselves on what it means to be human. Again, I go back to that ancient wisdom.
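As a toy illustration of the gap Abdu is pointing at, here is a hedged, hypothetical sketch (the probabilities and the "value weight" are invented for illustration, not taken from the film or any real system) contrasting a rule that only maximizes survival probability with one that also encodes a human value judgment:

```python
# The robot's rule vs. a value-laden rule, in miniature. The numbers and the
# child_weight parameter are purely illustrative assumptions.
victims = [
    {"name": "adult", "survival_prob": 0.45},
    {"name": "child", "survival_prob": 0.28},  # lower odds, as in the film scene
]

def pick_by_probability(victims):
    # The robot's logic: save whoever is most likely to survive. Full stop.
    return max(victims, key=lambda v: v["survival_prob"])

def pick_with_value_judgment(victims, child_weight=2.0):
    # A human-style rule might weight a child's life more heavily.
    def weighted(v):
        return v["survival_prob"] * (child_weight if v["name"] == "child" else 1.0)
    return max(victims, key=weighted)

print(pick_by_probability(victims)["name"])       # -> adult (0.45 > 0.28)
print(pick_with_value_judgment(victims)["name"])  # -> child (0.28 * 2.0 > 0.45)
```

Notice that even the "value judgment" version only moves the moral choice upstream into a number a human programmer picked, which is exactly Abdu's point: the morality sits with the makers and wielders, not the machine.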

Pete Marra [00:35:42] Absolutely. Yeah. And I just want to remind people to continue to text your questions. I'm seeing some great questions around the singularity and Neuralink, so we're going to have some great additional discussions; keep those coming. But, Brian, one of the things we touched on, and Abdu just mentioned it: a tool is fashioned by people, and a big part of this is being engaged in training AI, right, training our children around it, those types of things. I'm wondering if you could, and we don't have to go super deep, but tactically: how would a believer, a non-programmer, train a model? How would we get engaged in this? What's a practical way we can help shape AI?

Brian Johnson [00:36:25] Yeah, it's an excellent question. I had early discussions with ChatGPT, with Microsoft's product now called Copilot (it was Bing at the time), and with Google Bard, now Gemini. Notice they're all rebranding their tools, which is always a sign of emerging changes, rebranding to try to shed past history. I had some interesting questions and discussions with AI, and in a sense, it's not really training the AI as much as training the programmers to know what the users are looking for. So at this point, with artificial narrow intelligence, you're not really training the AI so much as training along with the community. I started having a discussion about pro-life issues, and I asked questions like, when does life begin, and what scientific data do you base that on, and then started asking challenging questions. The responses from AI have, over the last couple of years, become much more informed, if you will, and open to discourse. So I would say, in one area, if you're engaged in research, or maybe you're a teacher (I saw there was an educator asking about academic honesty and such), educate students about the proper use of and discourse with AI, expecting that it is not all-knowing. Do not approach AI as if it is some omniscient being; approach it as a resource tool and have discourse with it. Ask the right questions. One of the things a chatbot will actually allow you to do, even encourage you to do, is ask meaningful, pointed, directed questions. One of the critical-thinking skills students have not been trained well enough in is the art of asking critical questions. So I would say: hey, students, if you're going to use AI in your work, then to prevent academic dishonesty, cite your work, ask questions that are pointed, and use it as a research method, not as the creator of your paper, your work product. That's a legitimate way to use the tool for what it was intended to do. And there are other interesting things. I've got kids from 14 to 22 years old, so I've got some in college who ask these questions: what do I do when I'm writing a paper? And I go, well, that's a great question. How would you treat source material, and in what way would you use resources? So training the way we engage with AI means understanding, as Abdu said, that it is a tool we should apply in the right context and the right settings, from the perspective of embedding and enriching the information we're providing or writing or building on, in its right place. I encourage the appropriate use of it with critical-thinking skills: it is something we are to mechanize for our own use, not assume that it's in any way going to tell us how we should be doing our paper or writing. And then also encourage the essence of the educational philosophy of learning how to learn. One of the challenges a lot of students face today is that they've been given kind of a free ride, in the sense of being told to just turn in the work product. But instead, we can challenge students, even by using AI as part of a project. One of my kids, a senior in college right now, said that one of their professors recently said: I want you to use AI in the development of a paper, and I want you to use it for this particular task, for this outcome.
And then I want you to describe how the process worked. What did you learn? Use different AIs, use different resources, maybe Google something and use Bing search, use different methods, then go use libraries and source material, and then derive those into something where you can ask: how does the work product add up? What quality of workmanship is there? What kind of source information am I really leveraging? Do that in a constructive way, so that you can teach kids to learn in an environment they're being welcomed into. I mean, my 22-year-old didn't grow up with an iPhone; he was right at that stage, kind of entering into the smartphone era. But as he did, there was this new question of how we teach kids to use technology properly. And doing that just means giving them the path to understanding what the basis of truth is. As we discussed, it is Scripture; we say God's Word is immutable. If we start there, base education on the laws of God's creation, and then derive the educational components on top of that, we start to build a foundation where they don't have to be anxious about who to trust. I can't fully trust Google's AI, and I shouldn't be surprised by that. I actually had that dialog with Gemini yesterday: you are untrustworthy; you have responded in an unfactual way, misrepresenting historical facts. And Gemini said, you're right. It literally admitted: I have proven to be untrustworthy, and I want to earn your trust again. Okay, that's good to know. We should all start from the standpoint of saying: if that is the point AI is at, and learning models are developed to build knowledge subsets for us, then we should use them as tools in our research and development paths. Someone else asked a question about quantum, by the way, as it relates to training tools, so let me geek out for just a second on quantum and neural networks and some of the interesting applications of this stuff. I found an interesting thing with quantum computing related to internet security and the stability of the internet. One of the areas of study we spent a lot of time in was PQC, post-quantum cryptography. The idea that quantum computing methods could break cryptographic functions, could break the internet, is a theoretical position that a lot of computer scientists are concerned about. When you apply AI to quantum computing, you essentially have advanced, high-performing, high-data-throughput compute models that can do that with informed, AI-driven decisions. The two are actually complementary in the future. But understand, and this is something we all need to keep in context: quantum computing is a very, very early-stage technology. I can tell you firsthand; I've run models on IBM's quantum computers, some of the world's largest, and I can tell you it's very early stage, meaning they're still trying and testing, figuring out what works, how it works, and how to fix it. There are a lot of proofs that need to be worked out. Combining an artificial intelligence model with quantum computing is completely theoretical at this point in its implementation.
But if you apply that theory and ask what could happen (someone asked a question about artificial neural networks along these lines), these theories and models people are building are all useful in good applications; there are actually some great solutions we can see built with them. But they could all be pretty destructive to the internet, too. There are some significant ways people are trying to build these as proofs of fragility in our internet. At the same time, there are dozens and swarms of researchers working on ways to proof, protect, and mitigate those things. So I think, again, coming back to engagement: those should be challenges and areas for us to find smart young women and men who say, hey, I want to get into this field. Encourage them to do it, engage them in these things, introduce them to quantum computing and artificial intelligence, and say: under God's authority, these are tools we should be developing, enhancing, and expanding for good. We'd love to see more and more use cases expand to that end.

Pete Marra [00:43:52] Fantastic. We're going to take a moment here and play a short video while we compile some of these questions. This has been a great conversation, and you all can see the questions streaming in. So we're going to put some of these in order, and then we're going to come right back after this short video and start to get through them. We'll spend the next 45 minutes dealing with audience questions. So stick around, and we'll be right back.

Breakpoint [00:44:29] The culture is really disorienting right now. Things that used to be unthinkable are now in many ways unquestionable, and a lot of us feel dizzy about that, especially those who care deeply about truth. How do we respond? I want to share truth; how do I do that? How do I connect this thing that happened, or this story, or this new whatever, to the eternal truths that are grounded in God's story? That sort of clarity is what people say they find in Breakpoint. The average Breakpoint listener is someone like me, someone who lives in this world and is trying to figure out how to do it, and how to do it in a godly way. I see people using Breakpoint in their daily lives as almost like a little liturgy; it's a habit to hear these good words and this encouragement and this discernment. It helps you have a new spiritual habit to encourage you for the day, to challenge you, and to share with your family and friends. Being able to have a daily dose of sanity, a daily grounding in Christian truth that connects God's story, which is true with a capital T, with the reality on the ground in this cultural moment: people find that invaluable. They want to understand some of the big ideas that are affecting culture and how to engage with them. We want to say, here's maybe a little definition, but this is what the Bible has to say, this is what Christian history has to say. You can influence this culture, and there's hope. You don't have to despair. You don't have to go hide in your house. You can go out there and do something that can help redeem culture. A win for us is when something we present helps someone who loves the Lord deal with a point of cultural confusion or cultural crisis; in other words, when what we say actually helps someone think more clearly, have more confidence in the gospel, and have an important conversation with someone they need to have it with. That's a win all the way around. Paul is super clear in 2 Corinthians 5 that if you've been reconciled, you're to be a reconciler. So having that clarity about what's happening in the world, and confidence that the gospel is the bigger story, that we are part of a kingdom with a king and that the future is secure, gives us the ability to approach what's happening in the world, as crazy as it gets, with a whole new posture.

Pete Marra [00:47:20] Well, welcome back, everyone. Thank you for sticking around. We're going to dive into some of the questions now. This is a great list, and we're going to do our best to get to as many of them as we can; I've condensed quite a few. Let's get started, because there are some good ones. Let's jump in with a big one that keeps coming up, that a lot of people want to know about, and that's Elon Musk and Neuralink. There was some breaking news even this week on Neuralink and its progression. So, Brian, if you would, briefly talk about it from a tech standpoint and explain it to people who may not be aware of it. And then, Abdu, I'm curious from your standpoint how we should think about this interaction between machine and human. Where and how do we begin to understand that line, and where might it exist? So, Brian, we'll start with you.

Brian Johnson [00:48:13] Yeah. So Neuralink is an invention that has been in beta and pilot testing for quite a while, and it has recently been applied in a pilot use case for, I believe, a target audience of paraplegics and folks who have been diagnosed with ALS. The application, in essence, was that Neuralink would be attached to the brain and look for synapses, or areas of the brain, that trigger when the patient is trying to move a limb, or trying to use a brain pattern to simulate movement or initiate motor movement. Neuralink will then attach to a robotic connection and allow the user to move a limb, essentially, or move a muscle grouping, by thought. The broad implication there, and the concern most people have, is: wow, can the thing read my thoughts? And the answer is no. There is no consciousness-reading or mind-reading capability here. Of course, it can drive a lot of "what about when it can?" and "what if?" But right now, what it's doing is actually technology that's been around for a while, in the sense of being able to determine what areas of the brain are activated during certain movements, motions, and thoughts. They're not actually reading thoughts; you can't think "drive faster, car" and speed up. But one thing Elon Musk actually said is that they would love for there to be a brain-embedded component so that you could tell your car, by thought, to drive faster without verbalizing it, and by thought it would increase speed, turn, or brake, with your mind essentially controlling, through Bluetooth or remote connectivity, a robotic or other connected device. You can extend that into kind of the Iron Man concept: wow, if I just think it, then the machine works with me or for me. But the medical use cases are actually pretty extraordinary. When you think of being able to take prosthetics and use brain-activated centers to control those essentially robotic prosthetics, this is a pretty cool thing. The concern, in a theoretical sense, is what happens when this becomes more of a consciousness-reading or mind-reading device, whatever you want to call what the invention might be extrapolated into. So that's the fear. But again, in the current implementation, the essence of its design is well-meaning and well-intended as a medical device. So putting those things into context and taking a little bit of the "FUD," the fear, uncertainty, and doubt, out of them: the original intention here was medically supported, good use cases, actually being supported by the FDA for particular diagnoses and conditions, and they're being applied as those are determined. Then we start to think: could there be broader uses across a broader spectrum? And I think, again, this is way down the road, with lots to think through and discern about what that could mean. If other areas of the brain are activated under certain conditions, could you control a computer, control a device, unlock my door? Maybe those things will come to pass as they learn. But I don't believe it will ever have sensors understanding consciousness.
And this kind of goes into the area of determining the consciousness of humanity. Some of the folk legend and folklore that's come out of Neuralink is that Elon Musk wants to take Neuralink and make it a portable consciousness that you can move to another body and essentially live eternally. Well, this has some ethical implications I'm sure you would love to dig into. It goes into the idea that if you could take a neurotransmitter or some type of receiver and read our consciousness, our thoughts, our personality, our minds, and then move that into another being, whether some hybrid being or some humanoid bot, then you could essentially live forever. The idea of eternity being something we can create or invent on our own as humans is, I think, pretty far-reaching and far-fetched, even theoretically. But those are, again, some of the folklore surrounding that invention so far.
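For a feel of the "pattern recognition, not mind reading" distinction Brian draws, here is a loose, hypothetical sketch (assuming numpy and scikit-learn; the simulated signals and decoder are illustrative inventions, not anything from Neuralink's actual system) of how a brain-computer interface classifies activity patterns into an intended movement:

```python
# A toy "intent decoder": classify a vector of simulated neural-activity
# features as rest vs. attempted movement, then map the decoded intent to a
# device command. No thoughts are "read" -- only activity patterns that
# correlate with attempted motor movement.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated training data: 8-channel activity readings with known labels.
X_rest = rng.normal(0.0, 1.0, size=(100, 8))  # baseline "rest" pattern
X_move = rng.normal(2.0, 1.0, size=(100, 8))  # elevated "move hand" pattern
X = np.vstack([X_rest, X_move])
y = np.array([0] * 100 + [1] * 100)           # 0 = rest, 1 = move

decoder = LogisticRegression().fit(X, y)

def on_new_reading(reading):
    """Map a decoded movement intent to an (illustrative) device command."""
    if decoder.predict(reading.reshape(1, -1))[0] == 1:
        return "actuate prosthetic"
    return "idle"

print(on_new_reading(rng.normal(2.0, 1.0, size=8)))  # likely "actuate prosthetic"
```

The decoder only ever sees numeric activity patterns it was trained to separate; everything outside that trained mapping, a stray hope, a memory, a belief, is invisible to it.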

Pete Marra [00:52:32] Yeah, but I think that's part of the tension that people feel, the people I speak with and talk to, and even some of the questions we're seeing come in. I think, fundamentally, people are wrestling with the question of what it means to be human. As these technological advancements happen, sci-fi is becoming more real. How do we know when we've crossed over that line, right? Ray Kurzweil's singularity and that type of stuff: when do we know we're going to get there? So, Abdu, what would be your take on, one, how we maintain that line and distinction of what it means to be human, and yet support all these other things that are great? I mean, there was a point in time when hearing aids and amplification were considered cutting-edge and dangerous, you know, and here we are progressing. How do we draw that line?

Abdu Murray [00:53:22] Yeah, that’s a great question. And I think that, if I were to extend what Brian is saying on fear, uncertainty and doubt, but, we need to remove some of that because some of these things will be terrific tools for, helping people who otherwise wouldn’t be able to do things because they’re, you know, their limbs are gone or they were born with some, congenital defect, whatever it might be. Alzheimer’s, dementia. You see some of these, potential applications. My goodness, we don’t have the applications for those things yet, as far as I’m aware. But that isn’t too hard to imagine happening fairly quickly. And given the fact that, I’ve lost loved ones to both, to several diseases of the mind, including, of the brain, Alzheimer’s. This hits home, you know, in a way that I think, gets me excited about these things. But what I wanted to say was, is that why would you, while you remove the fear, uncertainty and doubt, I don’t think we should ever remove the tension. The tension is a good thing. We should have a constant sense of tension over these things. I mean, I look I look at this machine in my hand in a phone. I don’t know why we call these phones anymore. No one makes calls. But, this thing, as far as I understand, and Brian may be able to just confirm what I’m about to say, but if I’m not mistaken, this phone, my phone, not the phones collectively, but this one has more computing power than all the computers that sent the first rocket to the moon. And that’s in my pocket. Now, here’s the funny thing about this. This is the tension I think we need to we need to we need to keep. This machine allows me to connect with almost anybody in the world who desires to connect with me at any given time, and some people who don’t even desire to connect with me. I get plenty of spam on this thing. Just as much as I get, in intentional conversations, I can look up anything I want to look up. I can do all this stuff. And so we think that this machine is an autonomy giver. It gives us autonomy. Yet the very fact is this there’s a phenomenon. And I’m sure many of the people watching or listening, can attest to this, where your phone is sitting on your, on the table and for some reason you feel it vibrating in your pocket. It’s and that’s exactly the kind of referred sensation that people who have lost a limb feel. And so this thing has become as much of a cybernetic appendage as the ones that are actually neuro linked to us. It’s a matter. And one of the reasons is because I think we’ve lost that sense of tension between what it means to be machine and what it means to be human. That’s because there isn’t a tension, we just seem to have lost it. And so in our desire to think that these machines create an autonomy for us, we have to be careful. And attention teaches us that there is a level to which we’re not autonomous because of these machines, we’re tethered. I mean, you want it. You want any proof that you’re tethered to this machine. Just have your iPhone or whatever smartphone you have run out of juice and then not be around a charger, and you’ll run around like you’re a drug addict trying to find out where the next charger is. So that’s the tension. I think we should keep a healthy view of, and, not cede our humanity to the algorithms, as it were. I think that, these things are great technologies, but if they cause us to lose the tension where we start to meld in our minds, the the, the the, the similarity of humanity and machine when that similarity isn’t really there. 
Just because a machine is a simulacrum doesn't mean there's a similarity. It just means it's a simulacrum. That's all it is. We start to lose this because we get a little too complacent. And I think this goes back to a fuller understanding of that tension between being mere machines, merely biochemical machines that respond to external stimuli, and being the gods of the universe who can create things that are greater than ourselves. In that tension, Psalm 8 fits exactly right, where it says: What is man, that you are mindful of him, or the son of man, that you care for him? You made him a little lower than the heavenly beings, yet crowned him with glory and honor, and all these things. We subdue the world; we are the vice-regents of God on the earth. But at the same time, we're not God. And I think this comes right into what Blaise Pascal says: we need to recognize that humanity is this glorious, precious thing that God so loved that he gave his only begotten Son, yet we're also the shame of the universe. We're a chimera: the glory of the universe, yet an imbecile worm. We're not God and we're not mere machine. We're exactly where we ought to be, which is right in between. And I think if we can balance that tension, we can engage with these wonderful tools very responsibly. I mean, do I worry that someone can find software to hack someone's prosthetic arm, so that it starts going nuts and hitting people, or hurts the person who has it? Yeah, I worry about that. Do I think there's a worry that we develop these neural links that say, you know, make the car go faster, but then you have somebody who has obsessive-compulsive disorder and can't help but think, make the car crash into the bridge, and then it does? How do you control those two things? That's where the human element has to come in, and we have to have people who can help us through these tensions. But ultimately, I think there can be good things out of these technologies. We just have to maintain that distinction between human and machine.

Pete Marra [00:58:46] Absolutely. And, you know, some of the questions that are coming in, I'm going to condense; there have been about 6 or 8 that I've seen around this. What industries do you think are going to be most impacted by AI, particularly, I'll call it, early stage and later stages? So 3 to 5 years, and then maybe 5 to 10, because we know tech's moving at light speed, and we've broken Moore's Law several times now. And then the second question is really around, you know, redundancy. So if somebody loses a job and people are displaced, that has a huge implication for our personhood and how God's wired us and created us. You think back to The Jetsons. Everybody was just kind of hanging out, right? You had robots serving you and all these things. So I think there are a lot of people asking those questions of what industries are going to be impacted. And maybe, Brian, you can speak a little bit to that, because you see where investment dollars and things are flowing. And then, what's the implication if we are replaced by machines? How does that begin to impact our personhood?

Brian Johnson [00:59:51] Now there’s some, some interesting kind of near-term, implications. And, Abdu mentioned about the idea of medical implementations or ideas of use case in Alzheimer and, in dementia cases. I actually recently mentored a startup founder who has invented, we use a kind of a headset, an early onset Alzheimer’s detection and treatment model, and he’s working with a neuroscience group in the Phoenix area, in building out not only early detection, but treatment capabilities. The very early stage, of course, but they’re there’s billions of billions of dollars going into that kind of research that, you know, neurological, detection of even, seizure detection has now been implemented with an AI tool. But instead of minutes, away from detection that most normally have, even with trained service animals, they can detect within hours ahead of, a seizure from whether it be epileptics or other other causes that AI can determine based on pattern recognition and micro movements of eyes and other things. So there’s some applications of medicine that I think are phenomenal. If you look for jobs in the medical field, they’re they’re expanding daily. There are new crafts and specialties being built into the medical field, for sure. I would say that there’s, you know, anything patient care or anything in the medical field is expanding and not contracting as a, as a course of AI. AI has, however, and I also serve in workforce committees in terms of helping to build job pathways and reskill people into the workforce from different, areas of work. And we’ve seen a lot of reskill and retool and, and redeployment, pipelines to get people into the tech field because there’s a shortage of talent in technology jobs. We have a nationwide, gap. So I think it’s like close to 200,000, unfilled jobs in cybersecurity that we need people that are skilled and capable and talented to get into. And so there’s free training and a lot of opportunities to get into those fields of technology and cybersecurity. So I think there are some emerging fields, and there are unfortunately a lot of fields where because one of the virtues of AI, that’s a big selling point to some people have mentioned, even I’ve seen is operational efficiency in organizations. AI is being used to displace routine jobs and tasks that companies will automate because AI and it’s again back into the the three prongs or flavors of AI. The narrow field AI is about automation, and automation is just like in the industrial revolution. You have the idea of machines doing jobs that people were once doing. But what happened to those jobs? The people retooled, rescaled, and found other ways to build the machines, or to build other things and to get into other crafts. And I think we’re seeing a workforce transition very similar in nature to the industrial revolution, with the AI revolution forcing people out of work that can be automated. And whether you attribute that to AI or machine learning or just automation, it’s all semantics, really. The fact is, it is displacing a lot of jobs. But I think that the high skilled, high talent labor that is being developed into the tech and, and medical and other communities have seen a lot of, a lot of need for a workforce to expand into. I think educators are transitioning significantly right now. They’re transitioning the toolset and the curriculum development and their engagement of students. And I think that it’s going to it’s going to force changes in some of those areas. 
And so I think there’s a lot of a lot of workforce, wealth transitions and also reskilling and, and redeployment happening across the workforce. I would say that on the other end, one of the areas and this is something that, open AI’s founder, Sam Altman, has said very publicly, he’s come up with a world coin in a world ID, and he’s doing this for the purpose. I think he’s got 3.6 million people signed up with a global ID, using these orbs, and they give you, basically universal basic income as, and as an incentive to sign up for [inaudible]. And he’s saying that because he said, we believe AI will essentially displace 40% of the workforce in the next 20 years. That’s a staggering number. If he really believes, and there’s some supportive evidence of this, that eventually, within 20 years, 40% of the workforce could be displaced or automated because of AI, then what they believe they being, you know, openAI founders as well as even even Musk has said this, is that we need universal basic income. Now, I think there’s a couple of challenges that we have to deal with, with even just saying that out loud, I think we and we should declare that UBI is a great solution. We should give people a basic level of income to meet their basic needs. is a terrible idea, and it reeks of socialism and social Marxism. And I think that’s a challenge that politically, countries will have to deal with. But at the same time, if the think tank leaders of AI and related technologies are pushing for that, then they’re okay displacing jobs without looking for reskill, retool. I believe there’s a huge opportunity there for us to get in front of that and think about, as some listeners have addressed, the idea of finding industries in areas of investment that we can, that we can merge and take capabilities into. So I would say those those are the, you know, the areas to be aware of is like challenging against this notion of, you know, that UBI is a good idea and a good solution is something to be aware of and to think through the implications of that emerging campaign that’s happening globally. But at the same time, consider work is God given and work is something we should pursue. And work is something that God gave us for purpose and contribution and societal benefit. And we should find areas to apply work and reskill and to work field workforce areas. That and excuse me to continue to add value, because there are plenty of those to go into.  

Pete Marra [01:05:38] Makes sense. Abdu, let's switch gears slightly, because there have been a lot of questions coming in on this too, around truth and information. You mentioned discernment; we've talked about this again. But, you know, the creation of deepfakes. How do we become discerning when we're dealing with information and data? I remember the early days of the internet; you couldn't tell what was real and what wasn't. Anybody could put up a web page, right? So, one, how do we stay grounded in truth? How do we understand the implications of those things for our friends and our families? And what do you think is going to be the role of Christians, who I would argue are supposed to be brokers of truth at multiple levels? How do we do that in an AI world?

Abdu Murray [01:06:27] Yeah. And that’s, one of the, I think, chief challenges of the current, cultural milieu we find ourselves in, you know, Aza Raskin and, Tristan Harris, I don’t think coined the phrase, but they definitely popularized it with one of their most popular videos on AI. They talked about, generative AI in terms of deepfakes and, you know, creating various, you know, either audio or video or whatever kind of fakes you have. They coind it, they talked about the term reality collapse. And this is a this is a thing I’ve been thinking about, sort of echoed in my years, in between my years, for quite some time now, this idea of the reality collapse. And, so I wrote a this is sort of a shameless plug, I suppose, but I wrote a book a few years ago called Saving Truth, and it was talking about the emergence of the post-truth culture. And, a post-truth culture is interesting because it’s not the same thing. As a postmodern culture. I actually don’t think we’re postmodern anymore. I think that we’re actually past that. Postmodernism, and I’m going to simplify it, and most of the folks who are watching this probably already know what this is. If they’re Colson fellows, they probably already know some of this stuff. Probably better than I do. But, the postmodern experiment was what was the effort to essentially get rid of truth claims because they were really power moves. And a truth claim, was if I had an exclusive block on the truth, I could impose that truth on other people. And if they didn’t believe it, then I would either do it by force or economic resources or whatever it was, and it creates strife and disharmony and all these kind of things. So let’s go to the idea that truth doesn’t exist. And because that would actually remove an incentive to have these sort of, ways in which we would impose our will upon other people. And so if we all had conversations and just perspectives, we would no longer have competing truth claims and no longer have, competition, whether it’s war or business or economics or social or whatever it might be. But of course, there’s always that fundamental problem with the relativism, which is that, to say there’s no such thing as truth is a truth claim itself, and so it dies under its own weight. What emerged as a phoenix from the ashes of the postmodern mindset was not truth and and delight and truth again, but the post-truth mindset. Now, in 2016, that word that was the word of the year, post-truth. And it was used, I think, 2,000 times more often in 2016 than in the previous years, since it was coined in 1992 combined. And a post-truth world is one in which feelings and preferences matter more than facts and truth. They have a bigger sway. They have more persuasive and rhetorical influence than appeals to facts and truth. So notice the difference. Postmodernism says there is no such thing as truth. Post-truth says truth does exist, but I don’t care because my preferences and feelings matter more. And that is a far more, in my view, dangerous thing. Because if you can show someone that truth actually exists, then they would have cared before. But now it’s more like a preference issue. And so we’ve become autonomous. This is the autonomous nature of human beings, and it goes back to the garden once again. And autonomy and freedom are the same things. Autonomy is the, from, you know, the Greek autos self, nomos law. We are a law unto ourselves. 
And if I’m the God of my own skull sized world where preferences matter more than truth, but you’re also the god of your skull sized world where preferences matter more than truth. And my preferences and your preferences collide. Now what happens? Truth is, no longer is an arbiter. It’s power. We’re back to the postmodern problem of sudden again. So, we’re in that current cultural mindset right now where truth and facts don’t matter. What matters is preferences. Now, you take that cultural seaquake and you take the tsunami that’s resulted from it, one of those tsunamis being AI and deep fakes in these kind of things, and misinformation and disinformation that’s all out there. And these two things are converging on the dam of reality. And what you might end up having is reality collapse. I so that’s sort of a grim analysis of the situation. I think that the that the, as I go and I have done a lot of open forums at major universities, some of our top universities just come back from MIT not too long ago. Not one question asked of me was a science question. Not one question asked of me was a technical question. They were all meaning, morality, truth, the nobility of truth. I do believe that Gen Z, Gen Alpha, and even the tail end of millennials have engaged in the post-truth bender and are now in the grisly sort of hangover and want no more of the hair of the dog. I’m actually optimistic in some ways about this, but it requires Christians to live truthfully, because the Savior we follow claims to be the way, the truth, and the life. And so if our Savior is the way, the truth and the life, then our way ought to be truthful and life giving. And so we ought to be, as you said, brokers of truth. But those who live out truth and love to engage in critical thinking. We can engage in argumentation. We can engage in thoughtful conversations, and teach critical thinking once again so that we just don’t believe it just because it’s digital and we don’t try to cede our humanity to the algorithms, make some choices you otherwise wouldn’t have made. Don’t get lazy about these things. Sometimes resist the urge to just, you know, YouTube says you might like these videos. Well, yes, YouTube. I might like those videos after all. And so, you know, clicking or Netflix says you might like these movies based on your previous patterns and the, you know, machine learning is basically predicted what you’d like. And so like a machine, like an automata, we say yes, click, click. And so the irony is that in our effort to create machines that are more human, we become actually more machine-like as the patterns get predicted and we start acting like it. So resist the urge to let the AI think for you. Resist the urge not to not use AI in terms of its research capabilities and these kind of things. Yes, do that, but apply that critical thinking. I honestly feel and this is maybe just my opinion, but that theology of work is based on, I think, that Imago Dei, which delights in the creative. It delights in the creative. God calls his creation good, and we are called to be stewards of that creation. And so that sense of of the beauty of wonder, you know, chimps don’t sit around wondering what the rings of Saturn are made of, and they get along just fine that knowing that. But we have this imbued sense of wonder, of the need to know, like why the sun burns the way it does, and then apply that to create energy sources, but also to create bombs. 
So I think if we reinvigorate a love of truth and creativity, and then instill in our young folks the love for the hard work that results from creativity, not letting the chatbots or the large language models write your essay for you and come up with some bland thing that anybody could have written, but coming up with this beautiful thing that only you could have created, then you get to mirror, you get to model, the glory and the splendor of God in doing such a thing. You'll never be him, but you get to model something from him. I think we need to inculcate that desire, that love for doing all things unto the glory of God. And one of those things is to speak and live truthfully. I know that's a lot of pie-in-the-sky idealism I just spouted off, but I know of no other way to say it other than: don't just give control to that which is easy. Use it when it's helpful, but don't just give control to that which is easy. Sometimes do it the hard way. Sometimes do your actual sums. Sometimes memorize your actual multiplication tables. I know you've got a calculator, but do it anyway. There's something about it that's actually strengthening.

Pete Marra [01:14:24] Yeah. Fantastic, and well said. Just a reminder, we have about ten to fifteen minutes left or so. If you have questions, you can text those in to the number on the screen. We may pick the pace up here and do a little speed dating, so to speak, on the question round. So, Brian, I want to hit a couple with you. Sources of AI information: there are a lot of questions like, hey, where do I get good information, and how do I find the list of good AI, bad AI? What are some of the places people could go? And then, books that the two of you may recommend to help parents and others think through this. Well, Brian may be frozen. All right, Abdu, I'll go to you.

Abdu Murray [01:15:18] I think that there’s a lot of, some of these are merging. I don’t think there’s a lot of kids friendly stuff necessarily that’s out there right now. There might be some things that I’m not aware of, but, John Lennox has a wonderful book called 2084, which is coming up with an updated edition, by the way. John Lennox, one of the premier, I think, Christian speakers, a mathematician out of Oxford University, and, just an all around great guy, has this book that came out called 2084, a few years back. It’s, you know, because I move so fast, that’s already kind of out of date. And so he’s updating it. So look for that that book coming out soon. I think Jordan Thacker also has a book called, The Age of AI. And it balances nicely this positivity and also this sense of, nervousness about this kind of thing. So I think those are two really good books, and they both come from Christian perspectives on this kind of thing. So they’re not doom and gloom, nor are they given to flights of fancy. I think that they’re both sober minded books about this. So John Lennox, Jordan Thacker on this. I want to plug, if I could, two, one at least two other books written by non-Christians, if you don’t mind my doing so. The one is actually a wonderful book written by a very funny atheist, by the way. His name is Raymond Tallis, and the book is called Aping Mankind. And the reason I would suggest this book to your viewers and to those who are watching, is because what it does is does a great job of actually pointing out that, from a naturalistic perspective, we actually don’t have any explanation that’s valid right now for human consciousness that comes from either Darwinism or neuroscience. And this is coming from a person who is an avowed Darwinist and a neurologist and does research in the area of neurology. And he’d say that the effort to explain everything through Darwin or everything through neuroscience is either Darwinitis or neuromania. Those are his words, not mine. But it’s a great job of actually pointing out the quandary people actually have in trying to figure out what it means to be human. But the case he makes is that these naturalistic ways don’t tell us what this is. And he’s not advocating for a supernatural position. In fact, he spends a lot of time trying to defend that he’s not a theist. But what you come away with after reading this thing, is that really the best game in town, either a matter of deduction or as a matter of inference, the best explanation is that there is a divine mind behind all of this. So I would recommend that. There’s also a, I think a thoughtful book, by Meghan O’Gieblyn called God, Human, Animal, Machine. And this is a person who used to be a Christian, who is, who in this book, has wrestled with a quite a few of these really deep philosophical questions. And, I think it’s an honest book as well. So those are the sort of, for resources I look at on this stuff, there’s plenty of stuff coming out all the time, though. So be on the lookout, I’m sure. Good folks like Focus of the Family and others would have practical steps for how to deal with artificial intelligence. The reality is, friends, you are not going to keep your kids away from it. If they’ve used technology at all in the past six months, than they’ve used some form of AI in that broad category that we’ve been talking about. So it’s a matter of, equipping yourself. So it’s a great question.  

Pete Marra [01:18:50] Great one. Brian, we're going to shift. We've had several questions around data privacy and data security. Like, what do I do if the AI is out there? Is it training on all my web searches and history? And how do I protect myself, my data, my privacy, and to some degree, my individuality?

Brian Johnson [01:19:11] Yeah. I would just be really direct with you all: you can't. Sorry. It's just...

Pete Marra [01:19:17] This is the cybersecurity guy!  

Brian Johnson [01:19:20] Now, that said, there is a tremendous amount of investment, and I've actually invested in and advised some companies pushing for privacy in the US and for more stringent data privacy laws. There's a version of what we saw in Europe with GDPR being built in the US, and it's trying to specifically address the areas of AI, but the regulation does not exist yet. The California laws, Colorado laws, and a few of the other state-level laws around data privacy are not well enforced. And you can just assume that if you're providing information, or, as we've talked about, if you say something around a smart device, it's learning your shopping patterns, your behaviors and habits, your preferences, for marketing use. Those things are already embedded in technology. And I think the bigger concern really isn't just data privacy. It's not so much that I'm concerned about where they think I shop, or what they think I spend money on, or what they're trying to target-market. It's my identity. It's the idea, the essence, of: I want to protect what's mine, and, in the deepfake sense, not be spoofed, not be mimicked, not have identity theft happen. But at the same time, I'd like to keep my own personal information private. And I would encourage you, if that's you and you say, I really don't want my personal information shared among marketers, then be vigilant about opting out and deleting your data off of their platforms. They do have to remove your data if you ask them to, even though most of them don't. I'll just be direct with you; I know that from experience. Most of them do not ever delete data. But you can ask them to mark your account inactive, and you can decline to provide your personal information. Another idea, for anything other than financial services, is to use another account for your online profiles. You set up an online account for yourself; that is your persona. And if you don't want your own persona used, then project an image of yourself that's a variation, maybe a slight misspelling of the name or something, in your online media, and use that for your buying habits and preferences. But don't believe that there is any such thing as privacy online, because there's just flat-out not. AI is not under any regulation or committee observation. There was actually an executive order issued in October; the Biden-Harris administration did exercise some executive authority over AI, trying to implement ethics as a category of oversight, around things like hate speech. What they're trying to do is regulate how tech companies use AI in a way that will make sure it is somewhat overseen, and that the content provided by AI is respectful and a safe place to operate. But the ability to do that well is very limited.

Pete Marra [01:22:20] Yeah. And I think this gets into something where, again, we've had multiple questions come in; I'm going to try to paraphrase the tension that we feel. We have a generation of young people who have grown up interacting with a phone. Abdu, you talked about this, and Brian, you mentioned this as well with your kids, and I have teenagers myself. For them, this virtual world seems real, right? They can chat with their friends; they don't talk anymore, those types of things. To some degree, that interaction between them and a machine feels very natural to them, whereas we may still call each other, right? Not necessarily chat, let alone be in the same room together and have a face-to-face conversation. So as we see a generation getting, I guess, more acquainted with technology and feeling more and more comfortable with it, how do we, as parents or grandparents, help them hold on to their humanity? And I think these questions are coming in around areas like augmented reality, those spaces where the real and the artificial worlds blend, and we get used to playing, or living, in it, like a video game. Somebody mentioned Ready Player One, if you're familiar with that. So how do we think about that?

Abdu Murray [01:23:41] Well, I think one of the ways you can think about it, and this is for those of us who have young kids or grandkids below the age of 8 or 9 or whatever it is, is to limit the use of these things at an early age. And I realize there's risk with that, because the worry is that other kids will get more advanced than our kids and far surpass them, because everything's technology now. But I don't think that's empirically true; the studies don't show that. What we can do is limit the screen-time interaction, and the sense in which it's natural, as natural a part of life as interacting with a human being face to face. I think we need to do that. And, as we've already mentioned, make sure there's an understanding of the distinction between the real and the fake. Even though the fakes are becoming more and more real-like, they are never quite the same thing. And there are actually laws on the books, or being promoted and debated right now, hopefully online soon, that would require companies to disclose that this thing is not real. So if you get a phone call from someone's digital assistant, it should say: by the way, this is Abdu Murray's digital assistant; he'd like to have a conversation with you at such and such a time; are you available that day? It has to tell you that ahead of time, and the same for images. But I think limiting that screen time, or the interaction time, early on, and then having as much actual touch, as much actual interaction, as many physical cues as possible, with your facial interactions, with your voice, with that sense of sunlight on the face as you're out in the park and you smell the fresh-cut grass, these kinds of things. As much of a simulated experience as we might have in the future, if we actually engage in the world that God made, I think there's going to be a love for that, at least to a degree that we will recognize the fake. It's a classic cliché, but I think it's true: a big part of a Secret Service agent's job is detecting counterfeit money, and they don't train them on counterfeits. They train them on the real deal. They get them so familiar with an actual $10 bill or a $20 bill or a 50 or a 100 that when they see the fake, they know it instantly. If we get our children, and force ourselves, to come out of the blue-tinted rooms and into the actual ultraviolet light of the sun, as much risk as there might be of UV exposure, put on some sunscreen, slather yourself up, and go outside, and get yourself as familiar with the real as you possibly can, then I think there's going to be a sense of the fake. If I might just for a moment say this. You look at these movies, and I'm going to step on some sacred cows here, I think, but there's this Christmas movie, The Polar Express. Everyone seems to love it. I can't watch that movie, can't do it. And one of the reasons is that the eyes are dead, these weird, hyper-realistic doll eyes. I think it's interesting that AI can't seem to get eyes right. The window into the soul.
Yeah, it will eventually, I know that. I bet you it will. But the reason it doesn't, the reason why we see through it, I think, is because we're so familiar with the real thing, what it's like to look into a person's eyes. And if we can get that enchantment again with being a part of the real, especially at a young age, you'll see the fake. You're going to have to interact with the fake; you're going to have to. But I think you get them to love the real, because it smells like something. It looks like something you can't mimic.

Pete Marra [01:27:40] Absolutely. Yeah. And it's just another call for us as believers to practice things like church, where we actually get together in person with real people, in Sunday schools and small groups. What a great thing we have to offer the world that other people miss and sometimes take for granted. Well, we are about out of time, but there have been a lot of questions about how people can connect with both of you. If you have a social space or podcast or any ways people can follow you and get additional information, go ahead and share that, to give people some places to track down your real work, not your avatars.

Brian Johnson [01:28:27] Yeah. So my wife and I have a blog for parenting ideas and some of the things I mentioned; we put it up at ProtectYoungHearts.com, and you can follow updates and blogging with us there. And a book that I would suggest while mentioning that would be God, Technology, and the Christian Life by Tony Reinke. An excellent book from a Christian viewpoint; he's a tech optimist, but writes under the view that it's ultimately God's sovereignty, and that we engage in life, as Abdu has encouraged, with hopefulness, because we are to bring hope into the world through the hope of Christ. So I think that's a great way to consider how to engage the culture on this topic.

Abdu Murray [01:29:15] Yeah. So, our website is Embracethetruth.org; you'll find a plethora of videos and articles written by me, and also by my colleague Derek Caldwell, who is our chief researcher and writer and helps create content along with me. We have a podcast and a YouTube show called All Rise, and it's based on my being a lawyer. I'm trained as a lawyer; I went to Michigan Law School and ended up litigating complex commercial cases, which sounds incredibly boring, and they never put it on TV for a reason. But I did a lot of that for a long time. And so the show is based on how you present the Christian faith in a credible way, from the perspective of a lawyer presenting a case. But we deal with social issues as well. So the podcast and the YouTube show are called All Rise. Go to Embracethetruth.org; you'll see my socials there. I'm on Twitter, Instagram, Facebook, TikTok, you know, all the AI-generated stuff, and all my handles are AbduMurray. The only difference is Instagram: it's AbduMurray12. If you go to AbduMurray there, you're going to get somebody else, a cousin of mine. If you go to AbduMurray12, you'll get me.

Pete Marra [01:30:28] Fantastic. Well, I want to thank you, Abdu, and you, Brian, for joining us, and all of you who have tuned in tonight. Of course, you can continue to find information at ColsonCenter.org and Breakpoint.org, and in the videos you can find on YouTube, where we're always dealing with these types of issues. Again, thank you for joining us on this Breakpoint Forum tonight. Thanks, everybody.

Read/Watch More

Will AI DESTROY the World?! With PhD student in Quantum Gravity, Michael Butler

The Dangers of ChatGPT for Humanity in God’s Image

AI series, Part 1: Dancing with the Disruptor

AI series, Part 2: Losing Humanity

AI series, Part 3: Reality Collapse

AI series, Part 4: Digital Eden
