How Should Christians Respond to AI? Its Virtue and a Response to Vervaeke

James Spencer

President of The D. L. Moody Center
Updated Nov 08, 2023

No matter what capacities AI models develop, humans will always occupy a unique place in God’s creation because God destined us to reflect him. It is not a specific capacity but a unique design that makes us human.


On an episode of the Through Conversations Podcast, cognitive psychologist John Vervaeke offers a uniquely informed perspective on Artificial Intelligence (AI).

He begins his comments by highlighting two “dismissive strategies” used to downplay the significance of AI: identifying AI as just another tool and suggesting that these machines cannot be “x” (rational, wise, etc.) because “only…human beings have some sort of secret sauce inside of them that no machine or material thing could ever make happen.”

He goes on to advocate for a stealing of the culture that would entail a renewed human commitment to virtue. This brief article represents my initial response to Vervaeke’s challenge to rethink these dismissive strategies and offers a theological perspective on his proposed solution.

Dismissive Strategy 1: AI Is Just Another Tool

Regarding the first dismissive strategy, he suggests that, while AI may be a tool right now, to assert that AI is a tool like any other neglects the real potential for AI to become more than a tool. I tend to agree with this point.

AI is, at the very least, a different sort of tool than we are accustomed to using. We should not assume that our ability to navigate other tools will allow us to navigate AI.

While any tool can be used for less-than-noble purposes, there is a real difference between the sort of influence exerted by a tool like a screwdriver or a hammer and that exerted by digital technologies, including, but not limited to, AI.

At various points, either in print pieces or in interviews, I am quite sure that I have referred to AI as a tool.

In some sense, I do believe AI remains a tool until it develops an embodied cognition including, but not limited to:

  • Survival instincts that are responsive to a set of senses guiding AI’s determination of relevant or irrelevant factors in a given situation.
  • Social awareness concerned with maximizing benefit and/or minimizing harm for individuals within a pre-defined circle (or tribe) if not the whole of creation.
  • Agency by which AI becomes a relatively autonomous actor capable of making decisions and accepting the upsides and downsides of those decisions.

Until AI reaches a point when at least these three elements are possible, it seems most accurate to say that AI models are tools being developed to serve the interests of entities (e.g., companies) and individuals (e.g., investors, creative developers, end users with particular needs, bad actors interested in destroying the world).

I agree with Vervaeke, then, that labeling AI as a “tool” trivializes the positive and negative potential AI has now and will develop in the future.

However, we need to retain a perception of AI as a tool (minus the term’s trivializing connotations) even as we consider a time when AI might become something more than a tool.

Retaining the perception of AI as a tool is important because it points behind AI to the human agents (individuals and collectives) and human agency driving AI forward.

Attending to the aims and motivations of the human actors advancing AI is, at this point, highly relevant because the pursuit of those aims is likely to produce unintended negative consequences.

At the same time, while I am quite sure some human agents are seeking to be altruistic in the development of AI, it is likely that others are willing to pursue their own upside, even if that means a real downside for others.

As Gideon Lichfield, editor of Wired magazine, is quoted as saying of the AI race in MIT Technology Review, “Satya Nadella of Microsoft said he wanted to make Google dance. And Google danced. We are now, all of us, jumping into the void holding our noses because these two companies are out there trying to beat each other.”

AI is still a tool, though it is a tool with great potential. While it may develop beyond being a tool, we should not look to that future moment alone as the singular existential threat.

The human agency driving AI development and use with seemingly little regard for its potentially negative consequences is our current problem.

Dismissive Strategy 2: AI Will Never Be Able To…

Vervaeke challenges certain religious perspectives, especially with his critique of the second dismissive strategy.

Essentially, his concern is to push religious thinkers beyond certain previous understandings of what it means to be human and toward new understandings that will differentiate humans from AI when AI models begin to develop rationality, wisdom, agency, and/or consciousness.

He urges religious thinkers not to dismiss AI by saying that AI will never be rational or wise because, at some undetermined future point, such assertions may be proven wrong.

I agree with Vervaeke on this point. The danger of assessing a technology based on its current capacity, or on an assumed limit to that capacity, is that it roots what it means to be human in the exercise of specific characteristics.

Such an understanding is problematic when we consider, for instance, the humanity of a fetus or the humanity of an individual without the mental capacity to think critically or rationally due to some defect.

It may well be true that “being human” is often associated with rational and/or cognitive capacities; however, it isn’t clear that such capacities define what it means to be human.

From a Christian theological perspective, I would suggest that what it means to be truly human is embodied in Jesus Christ. To be human involves dedicating all we are and have (regardless of capacity) to the Lord and, to the extent we are able, loving our neighbors as ourselves.

True humanity is not found in abstract moral codes or specific powers or capacities but in a willingness to recognize the authority of the Father and surrender all we are and have to him. Fundamental to what it means to exhibit true humanity is relating rightly to God.

There is a great deal of conversation about the necessity of religion and the need for humans to connect to something beyond ourselves. Such conversations gesture toward the notion that to be human is to be a reflector of God’s glory.

No matter what capacities AI models develop, humans will always occupy a unique place in God’s creation because God destined us to reflect him. It is not a specific capacity but a unique design that makes us human. We are “most” human when we conform to that design by imitating Christ.

Opting into Virtue

During the podcast, Vervaeke notes that we need a cross-cultural phenomenon to unite people against the common threat of AI. He goes on to suggest that religion is perhaps the only phenomenon that provides the historical potential to connect individuals in such a manner.

Because AI is a reflection of us (humans), Vervaeke insists, “We have to make meaning and wisdom and virtue the core of the culture again because that’s our only hope.”

He goes on to suggest that we need to “ask them [religions] to not demand that their religious agenda gets the priority in what’s happening here. Ask them at least can you wait until we get some handle on this before we keep doing…pushing that. We’ve got to make this work.”

Again, I don’t disagree with Vervaeke here if we are thinking in terms of the restraint of evil. Political leaders and everyday citizens are right to be concerned about minimizing threats to human dignity, if not human existence.

While I agree with Vervaeke that a world governed by a particular sort of meaning, wisdom, and virtue is aligned with the function of political authorities who are appointed by God to enact justice, I believe he is settling for a substandard mode of existence. He is fighting to preserve a certain level of brokenness in the world.

His comments on religion are important. To some degree, I do believe that Christians can and should participate in restraining evil; however, restraining evil is always a secondary priority (at best) for Christians. Our first priority is to proclaim the gospel in word and deed.

We can’t put Jesus at the margins of our life and thought and expect anything more than a short-term solution to a particular sort of brokenness. Proclaiming Christ will not make the world a better place. We do not anticipate a slightly better world but a new creation (Revelation 21).

As such, Christians must recognize that Vervaeke’s proposal to address the threat of AI through a return to virtue is, at best, a half-measure incapable of solving the problem of evil in the long term.

Why Does This Matter?

Vervaeke’s expertise is undeniable. His commitment to dialogue and his willingness to entertain perspectives with which he does not agree are exemplary.

Yet, from a Christian perspective, Vervaeke’s advice is limited because he fails to recognize sin as the core challenge facing humanity. This basic insight is necessary for an ultimate solution to the world’s brokenness.

Still, I hope I have demonstrated above that, despite my disagreement with Vervaeke regarding the necessity of Christian convictions for addressing the world’s problems, the insights he offers with regard to AI are valuable and important to a Christian community that not only looks ahead to a new creation but also lives in this one.

For further reading:

How Should Christians Respond to AI? Bias, Decision-Making, and Data Processing

How Should Christians Approach Progress in Technology?

How Can We Read the Bible as Culture Changes?

Check Out James Spencer's FREE Podcast: Thinking Christian!

Christians shouldn’t just think. They should think Christian. Join Dr. James Spencer and guests for calm, thoughtful, theological discussions about a variety of topics Christians face every day. The Thinking Christian Podcast will help you grow spiritually and learn theology as you seek to be faithful in a world that is becoming increasingly proficient at telling stories that deny Christ.

Want more thoughts on A.I. from Dr. Spencer? Listen to his episode on A.I. and whether or not it will make us less human.



James Spencer earned his Ph.D. in Theological Studies from Trinity Evangelical Divinity School. He believes discipleship will open up opportunities beyond anything God’s people could accomplish through their own wisdom. James has published multiple works, including Christian Resistance: Learning to Defy the World and Follow Christ, Useful to God: Eight Lessons from the Life of D. L. Moody, Thinking Christian: Essays on Testimony, Accountability, and the Christian Mind, and Trajectories: A Gospel-Centered Introduction to Old Testament Theology to help believers look with eyes that see and listen with ears that hear as they consider, question, and revise assumptions hindering Christians from conforming more closely to the image of Christ. In addition to serving as the president of the D. L. Moody Center, James is the host of “Useful to God,” a weekly radio broadcast and podcast, a member of the faculty at Right On Mission, and an adjunct instructor with the Wheaton College Graduate School. Listen and subscribe to James's podcast, Thinking Christian, on Apple Podcasts, Spotify or LifeAudio!

This article originally appeared on Christianity.com. For more faith-building resources, visit Christianity.com.