
Gretchen Huizinga, a research fellow at the nonprofit AI and Faith, writes on topics at the intersection of AI, ethics, and Christianity.

Posts By This Author

AI: Humanity's New Grab for Divine Wisdom

by Gretchen Huizinga 09-26-2023
Can ChatGPT be righteous?
Image: A robotic hand holding a Bible on a tan/gold background (Jun/iStock)

AMONG TECHNOLOGICAL INNOVATIONS today, perhaps none is imbued with more hope—or more hype—than artificial intelligence (AI). Its proponents, such as billionaire technologist Marc Andreessen, claim it will literally “save” the world. Critics (see Kate Crawford’s Atlas of AI) claim it is, in many ways, built on misunderstanding, exploitation, and deceit. But nearly everyone agrees that AI is a powerful tool that presents us with profound, and profoundly moral, challenges.

While Christianity offers a wealth of wisdom concerning moral and ethical behavior, materialist perspectives (rooted in the philosophy that all facts are reducible to physical processes), which function as “articles of faith” in modern technical circles, have become the acceptable rhetorical scaffolding for “ethical” AI. For many, these perspectives deny the existence of God and any idea of eternal consequences, yet they seek to compel people—and their technologies—to behave ethically nonetheless.

While a strongly worded “what” is a good start, only a robust “why” can compel humans to want to be good, and only a robust “how” can enable them to do so. This is where materialism begins to falter and where Christianity can enter the debate with authority. The Christian faith acknowledges God as the originator, motivator, and sustainer of righteousness, asserting that moral behavior is the fruit, not the root, of a righteous life. It challenges us to look beyond a humanistic idea of ethics and toward a creative and abundant notion of goodness that cannot be accomplished by our own will or power.

As AI has grown increasingly powerful and its applications have proliferated, particularly large language models achieving nearly “human-level” performance, some tech leaders, perhaps sensing the difficulty of controlling their own creations, have called for “a pause on giant AI experiments.” Academic literature is rife with serious concerns about racism in AI development, theft of creative content, the development of autonomous weapons, and more. At least one tech leader, Microsoft’s Brad Smith, perhaps mindful that AI ethics is too heavy a lift for technologists alone, has invited religious voices into the conversation.