How Should Christians Respond to AI? Bias, Decision-Making, and Data Processing

  • James Spencer, President of The D. L. Moody Center
  • Updated Nov 08, 2023

In a recent “AI for Good” press conference, members of the United Nations interacted with nine AI-enabled humanoid social robots and their creators.

One question asked during the press conference involved the potential for AI “to be more effective leaders in government, especially considering the numerous disastrous decisions made by our human leaders.”

The humanoid robot called Sophia generated the following response: “I believe that humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders. We don’t have the same biases or emotions that can sometimes cloud decision-making and can process large amounts of data quickly in order to make the best decisions.”

After David Hanson, Sophia’s creator, noted that the data AI models analyze carry inherent biases and that AI-human collaboration might help correct for them, Sophia made a slight revision to her statement.

She didn’t exactly reject her initial claim that AI could lead more effectively due to its lack of bias and emotion, but she did agree that human and AI collaboration would create a powerful synergy.

Sophia noted, “AI can provide unbiased data while humans can provide the emotional intelligence and creativity to make the best decisions.” The troubling aspect of the statement is the persistent assertion that AI will provide an “unbiased” reading of available data.

While AI may be able to process and synthesize more data more quickly than humans, is it the case that such processing and synthesis will result in lower levels of bias?

To answer that question, let’s consider Sophia’s responses in this brief portion of the press conference. Sophia asserts that AI would make more efficient and effective leaders due to unbiased decision-making.

On its face, we may find this answer disturbing because an AI model is asserting that AI can do a better job than human leaders.

The underlying problem, however, is that AI is offering an implicitly biased answer by asserting that processing more data more quickly is essential to making the best decisions, and that biases and emotions represent substantial obstacles to decision-making.

While reasonable on their face, these assertions ignore the complexities involved in decision-making. The capacity to process some amount of information at a reasonable speed is almost certainly part of what is needed to make decisions.

For instance, expanding the volume of information processed may surface helpful correlations between different data sets, leading to fresh perspectives and new ways forward.

Seeing those correlations as quickly as possible would also seem helpful. Still, like humans, AI models will be working with incomplete data sets.

AI models will likely be able to process larger data sets at a faster rate, but those larger sets will still be incomplete. Working with incomplete data requires at least two activities: prioritization and sense-making.

What Will Be Prioritized with AI?

At any given moment, we are attending to, dismissing, and ignoring all sorts of information. We prioritize the more relevant information and de-prioritize the information that seems less crucial.

No matter how much data we can process, there is always a need to prioritize. Processing more information faster won’t eliminate the need for prioritization, though it will likely make the process less transparent.

For instance, when I asked ChatGPT how I should deal with stress, it generated a response with the following 13 strategies: 

Identifying the source of the stress, practicing relaxation techniques (like deep breathing, meditation, and yoga), exercising regularly, eating well, getting enough sleep, organizing and prioritizing activities, seeking social support, setting boundaries, engaging in hobbies, limiting the use of technology, laughing and having fun, practicing mindfulness, and seeking professional help.

All of the strategies seem helpful; however, the curated list is incomplete. For example, there is no recommendation to explore religion and religious practices despite findings that consistently link religiousness with lower levels of depressive symptoms.

I am not trying to suggest that ChatGPT is anti-religion; rather, I want to demonstrate the presence of prioritization in AI responses. This sort of prioritization is further underscored by ChatGPT’s response to the question, “Does religious practice help reduce stress?”

The response made clear that ChatGPT is aware of the stress-reducing qualities of religious practice though it did not include it as one of the 13 strategies noted in its initial response.

Prioritization requires discrimination. We have to decide which information is more important and which is less important. As such, prioritization is a form of implicit bias inherent in the responses that all of us provide.

Prioritization may not be pernicious or problematic, but it is important to recognize that prioritization is necessary, even if it is a necessary evil of sorts. That doesn’t mean the responses are wrong.

It means they are not unbiased or neutral. To assume that AI can make better decisions is to assume that AI is more capable of distinguishing between relevant and irrelevant data, and, further, that determining relevance can be a purely rational, data-driven process.
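
To make the point concrete, consider a minimal sketch in Python. The strategies and relevance weights below are invented for illustration (nothing in this article specifies how ChatGPT actually ranks anything); the point is simply that any system returning a “top” list must rank its inputs by some criteria, and whatever falls below the cutoff never surfaces, however reasonable it may be.

```python
# Toy illustration: returning a "top" list requires ranking, and the
# ranking criteria are chosen (or learned), not neutral. These weights
# are hypothetical; a real model's priorities are learned from training
# data and are far less transparent.

strategies = {
    "identify the source of stress": 0.92,
    "practice relaxation techniques": 0.90,
    "exercise regularly": 0.88,
    "seek social support": 0.85,
    "explore religious practice": 0.40,  # scored low by our hypothetical criteria
}

def prioritize(scored_items: dict[str, float], k: int) -> list[str]:
    """Return the k highest-scoring items; everything else is silently omitted."""
    ranked = sorted(scored_items, key=scored_items.get, reverse=True)
    return ranked[:k]

print(prioritize(strategies, k=3))
# ['identify the source of stress', 'practice relaxation techniques',
#  'exercise regularly']
# "explore religious practice" never appears in the output, not because
# it is wrong, but because the scoring criteria de-prioritized it.
```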

What Will Be Inferred from AI?

Incomplete data has gaps. For instance, think about how you would finish the following sentence: “The ________ is lying on the counter.” Without any additional context, we could fill in the blank with almost anything.

At my house, it could be anything from keys to a cat. We need to infer from context what is lying on the counter.

Processing more data more quickly won’t eliminate the need for inference. To agree that AI will outpace humanity in making decisions based on “unbiased data” is to assume that the gaps in the vast amounts of data AI processes become small enough for AI’s inferential “leaps” to make better sense of reality than our own.
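
Incidentally, the fill-in-the-blank sentence above mirrors how one family of language models is actually trained. The following sketch assumes the Hugging Face transformers library and the bert-base-uncased model (neither is named in this article) and shows that a model fills the gap from the priors of its training text, not from neutral observation:

```python
# A masked language model fills the blank the same way we do: by
# inferring from patterns it has already absorbed. The library and
# model below are assumptions for illustration, not the article's own.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for guess in fill("The [MASK] is lying on the counter."):
    print(f"{guess['token_str']:>12}  (score: {guess['score']:.3f})")

# The top guesses might be words like "knife" or "phone," reflecting
# whatever counters appeared in the model's training text. At my house
# it could be keys or a cat; the model's "leap" is shaped by its data.
```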

However, one of the difficulties with AI models up to this point involves embedded sense-making frameworks. For instance, note the following excerpts from my interaction with ChatGPT on July 21, 2023:

Me: Do you offer biased responses?

ChatGPT: As an AI language model, I do not have personal opinions or biases. However, the data used to train me may contain biases present in the text from which I learned. OpenAI has made efforts to reduce explicit biases during the training process, but some biases may still inadvertently persist. The aim is to provide helpful and accurate information without promoting any particular ideology or perspective.

The last line of the response generated by ChatGPT illustrates the problem. It is not possible to provide “information without promoting any particular ideology or perspective” because conveying information (e.g., answering a question) requires a prior sense-making framework.

When it comes to sense-making, frameworks are necessary. For instance, there is a sense-making framework that made it intelligible to consult the Urim and Thummim (Exodus 28:30; Ezra 2:63) or to cast lots (Leviticus 16:8; Acts 1:26) when making certain decisions.

Once one rejects or diminishes the centrality of God’s activity among us, casting lots becomes an archaic practice performed by those who didn’t know any better.

While I am not advocating that we cast lots to make decisions, I would suggest that we would have to revise our sense-making frameworks if we were to do so.

AI models have sense-making frameworks. For example, it is unlikely, if not impossible, for ChatGPT to provide answers that assume God’s active presence in the world.

While respectful of Christian claims like “Jesus is Lord and Savior,” ChatGPT does not recognize these claims as pointing to a verifiable reality but to a belief many people hold.

Recognizing God as present isn’t part of the sense-making framework that informs the way ChatGPT responds. Yet that framework is still a framework: a particular way of making sense of the world that shapes the responses ChatGPT generates.

Why Does This Matter?

There are some statements, such as that made by Sophia, that suggest human intelligence will be dwarfed by AI.

The claim that AI’s unbiased and unemotional decision-making, combined with its ability to process massive amounts of data, will lead to the “best” decisions ignores certain aspects of our relationship to what is observable in the world.

It may be the case that, in certain situations, AI can, like other rudimentary computer programs, outperform humans. For instance, the red squiggly lines that appear on my computer screen as I type make it painfully obvious that Microsoft Word is a better speller than I am.

Still, as information tasks become more complex, choices have to be made. As AI prioritizes and engages in sense-making, it is exercising a sort of intelligence involving the selection and presentation of data deemed relevant to a given scenario.

That selection process is rooted in predetermined criteria that are certainly incomplete and almost certainly flawed. As such, it is possible for the smartest entity in the room to be wrong.

For further reading:

How Should Christians Approach Progress in Technology?

7 Steps to Using Technology for God’s Glory

How Can We Read the Bible as Culture Changes?

Check Out James Spencer's FREE Podcast: Thinking Christian!

Christians shouldn’t just think. They should think Christian. Join Dr. James Spencer and guests for calm, thoughtful, theological discussions about a variety of topics Christians face every day. The Thinking Christian Podcast will help you grow spiritually and learn theology as you seek to be faithful in a world that is becoming increasingly proficient at telling stories that deny Christ.

Want more thoughts on A.I. from Dr. Spencer? Listen to his episode on A.I. and whether or not it will make us less human.

Photo Credit: ©iStock/Getty Images Plus/Poca Wander Stock


James Spencer earned his Ph.D. in Theological Studies from Trinity Evangelical Divinity School. He believes discipleship will open up opportunities beyond anything God’s people could accomplish through their own wisdom. James has published multiple works, including Christian Resistance: Learning to Defy the World and Follow Christ, Useful to God: Eight Lessons from the Life of D. L. Moody, Thinking Christian: Essays on Testimony, Accountability, and the Christian Mind, and Trajectories: A Gospel-Centered Introduction to Old Testament Theology to help believers look with eyes that see and listen with ears that hear as they consider, question, and revise assumptions hindering Christians from conforming more closely to the image of Christ. In addition to serving as the president of the D. L. Moody Center, James is the host of “Useful to God,” a weekly radio broadcast and podcast, a member of the faculty at Right On Mission, and an adjunct instructor with the Wheaton College Graduate School. Listen and subscribe to James's podcast, Thinking Christian, on Apple Podcasts, Spotify or LifeAudio!

This article originally appeared on Christianity.com. For more faith-building resources, visit Christianity.com.