Technology and ‘moral discernment’

Fresno Bee, November 16, 2025

Pope Leo XIV’s important warning on ethics of AI and new technology

It’s a long way from Silicon Valley to the Vatican, but the journey may be enlightening. Recently, Pope Leo XIV addressed a conference on artificial intelligence in Rome, where he emphasized the need for deeper consideration of the “ethical and spiritual weight” of new technologies. The pontiff said, “Every design choice expresses a vision of humanity,” and called upon technologists “to cultivate moral discernment as a fundamental part of their work — to develop systems that reflect justice, solidarity and a genuine reverence for life.”

Some tech-wizards responded to this pontificating (pun intended) with a disdainful shrug. Engineers and entrepreneurs are focused on building cool stuff, and some don’t think it is their responsibility to worry about ethics or spirituality.

A sophisticated way of saying this is to claim that technology is morally neutral or “value-free.” A version of this idea is found in the motto, “guns don’t kill people, people do.” Defenders of this approach to technology point out that tools do not have a fixed meaning or purpose. Rockets and airplanes can kill people, or we can use them for peaceful purposes. Moral judgment, from this perspective, should focus on what people do with their tools — not on the tools themselves.

A different conception views tools as “value-laden.” From this perspective, technological innovation expresses some set of values. Machines reflect the values of their creators — individuals who build them, after all, with some purpose or function in mind. Guns are made for killing, as are nuclear weapons.

The value-laden conception of technology suggests that new technologies reflect or embody the web of cultural and economic values that supports their creation. New technologies also create new forms of culture, as we are witnessing in the era of social media and artificial intelligence.

Some critics of technology reject the whole modern world. So-called “primitivists” worry that we are stuck in a technology-driven doom loop involving fossil fuels, nuclear weapons, advanced biotech and super-intelligent machines. In response, “techno-optimists” argue that technological development has allowed humanity to thrive in previously unimagined ways.

Furthermore, advocates of technological “acceleration” suggest that the solution to technological problems is more advanced technology — they hope that smarter machines will solve the problems created by the previous generation of tools.

We have just scratched the surface here with regard to the complex issues discussed in the philosophy of technology. This begins with the insight that human beings are tool-using animals. Tools extend and amplify our operational power, and they can also either enhance or undermine who we are and what we care about.

Whether we are enhancing or undermining our humanity ought to be the focus of moral reflection on technology.

This is a crucial question in the AI era. The AI revolution should lead us to ask fundamental questions about the ethical and spiritual side of technological development. AI is already changing how we think about intellectual work, such as teaching and learning. Human beings are already interacting with artificial systems that provide medical, legal, psychological and even spiritual advice. Are we prepared for all of this morally, culturally and spiritually?

Our tools influence how we understand ourselves and the world. Before telescopes and microscopes, we had no idea of the vastness of the cosmos or the wonders of cellular life. Before the printing press, only elites had access to written knowledge. And the cyber-era has changed how we think about friendship, information and entertainment.

The idea of value-free technology ignores all this. It seems fairly obvious that tools express and influence what we value. That’s why we must employ critical moral judgment, what the pope called “moral discernment,” as we develop new technologies. At the dawn of the age of artificial intelligence, we need a corresponding new dawn of critical moral judgment.

Now is the time for philosophers, theologians and ordinary citizens to think deeply about the philosophy of technology and the values expressed or embodied in our tools. It will be exciting to see what the wizards of Silicon Valley will come up with next. But wizardry without wisdom is dangerous.

Read more at: https://www.fresnobee.com/opinion/readers-opinion/article312903757.html#storylink=cpy

What Artificial Intelligence Cannot Do

Self Reflection

Fresno Bee, April 5, 2025

Artificial intelligence is already changing the world. But will it change our humanity?

Bill Gates recently predicted that AI will soon be widely employed to supplement and even replace a lot of labor that currently requires human experts. This may include accountants, teachers, doctors and computer programmers. Any profession that requires repetitive information processing and rule-following expertise can be supplemented or replaced by AI.

This may free up human intellect to engage in more creative and imaginative tasks. It may also leave humans with more time to focus on interpersonal and relationship-based work. But there are also AI “therapists” and “friends” available online. AI companions are always available. The AI friend, Replika, touts itself as “always here to listen and talk. Always on your side.”

The convenience and efficiency of AI will lead to its widespread use. Unlike real human companions, AI never sleeps — it never tires, becomes fed up or grows impatient.

As AI development increases, it will be used to create even more powerful technology. This technological acceleration has led some experts to predict that artificial general intelligence will soon be created (something akin to human thinking but faster, tireless and not prone to laziness, procrastination or daydreaming). Others think the creation of artificial general intelligence is decades off; some say it is impossible.

As AI transforms into artificial general intelligence, it could be applied (or apply itself) to generating even more intelligent machinery. Some fear the creation of artificial super intelligence, a fear fueled by fictional sci-fi dystopias in which artificial super intelligence takes over and kills or enslaves humans.

Leaving that nightmare aside, there is no doubt that AI is already changing the meaning of a variety of human tasks. This will continue to happen as the technology becomes so efficient that resistance is futile. This may sound ominous, but it happens all the time as technologies improve.

The inexorable efficiency of technology explains why we prefer to ride rather than walk. It’s why we send texts instead of writing old-fashioned letters. The efficiency imperative will likely lead us to replace inefficient human beings with efficient AI in many parts of life. Why bother to write a report if AI can do it for you faster and better? Why bother to wake a real friend in a crisis in the middle of the night when AI is there to chat?

Of course, some people still write letters or walk. And there is a kind of pleasure to be found in completing your own tax form, or in writing computer code. But those quaint human activities are now a matter of choice. They represent a kind of boutique curiosity, chosen not for efficiency but for some other reason.

This is where the human element returns. Many things are valuable not because they are efficient, but because they are good, beautiful, intellectually challenging or uniquely human.

Friendship is like that: An AI companion may be more efficient at giving advice in difficult times, or at keeping us entertained. But real human friendship is valuable for other reasons. Human friendship is not simply a one-sided exchange in which we use the other person for our benefit. Rather, friends make demands upon us. Their impatience reminds us to slow down. Their needs give us reason to look beyond our own.

The demands that other humans make upon us are infinitely more valuable than the cult of efficiency can imagine. Other human beings are part of who we are. When a friend or family member triumphs, we swell with pride for them. When they suffer, we suffer with them. And when they die, they take a part of us away with them.

AI will never replace the deeply inefficient existential reality of love, suffering and mortality. AI is fast, convenient and always available. But it cannot supplant the difficult experiences and troublesome relationships that make us fully human. Efficiency is a machine-based good. But human life is not mechanical. The wonder of existence is found in the tragic and often beautiful mess that is human nature.

To be human is not to be efficient. Rather, it is to love, suffer and die. And that’s what no machine can ever do.

Read more at: https://www.fresnobee.com/opinion/readers-opinion/article303416756.html#storylink=cpy

Deep fakes, AI, and the need for ethical supervision

Fresno Bee, March 19, 2023

In the era of deep-fake videos, tech companies must not dismantle their ethics teams

Someone forwarded me a story about Microsoft laying off its ethics team. My first thought was “fake news.” It’s surprising to learn that Microsoft even had an ethics department. It’s even stranger to hear that the group has been disbanded at a time when technological innovation is getting wild.

These are the days of deep-fake videos, internet trolls, and artificial intelligence (AI). And so, in chasing down this story, I used my best internet skills. I checked multiple sources. I refused to believe websites I had never heard of. Eventually I found a report on Popular Science. A reporter there named Andrew Paul explained, “This month saw the surprise dissolution of Microsoft’s entire Ethics & Society team — the latest casualty in the company’s ongoing layoffs affecting 10,000 employees.”

The article explains that the Ethics and Society team once had 30 members. It was reduced to seven people in 2022. And now it is gone. The article notes that Microsoft still has a department of “Responsible AI.” That led me to search Microsoft’s website for the Responsible AI department. There I discovered a number of documents and reports based on the following six principles: fairness, inclusiveness, reliability and safety, privacy and security, transparency, and accountability. It’s reassuring to see that Microsoft has this guidance in place. But one wonders how humans are administering this, as personnel are being cut.

Anyway, I recount how I tracked down this story as an example of online critical literacy. You need to actively search for information, rather than letting it flow into your feed. You should check multiple sources, rather than relying on the first click. Double check URLs to make sure they’re not phony. Seek legitimate sources in mainstream or legacy media. Corporate documents, policy statements, and legal filings are also useful. And legitimate sources of information typically include an author’s name.

Of course, it requires effort and experience to sort things out. It helps to understand that the internet, in all of its tainted glory, is as much about making dollars as it is about making sense. Websites want clicks. They entice with spicy stories and sexy pictures. Algorithms force-feed us stories and images. Search engines profit when we click.

There is money and mayhem to be made online. So, you should enter that space with a suspicious mind. Don’t take anything at face value.

This is especially true as AI and deep fakes become better. I discussed the challenge of AI in a previous column. Here, let’s consider deep fakes.

Two recent deep-fake stories are worth considering. In one, students made a deep-fake video of a school principal uttering a racist rant that included threats of violence. In another, actress Emma Watson’s face was turned into a sexualized ad for an app that could be used to, you guessed it, make deep fakes.

In the first case, it is easy to see how deep fakes could be weaponized, as a fake video could be used to discredit an enemy. In the second case, the goal appears to be to allow for customized pornography, where any face could be “swapped” into a porn video. In the first case, yikes. In the second case, yuck.

One solution to this problem takes us back to the ethics teams at big tech corporations. Now is the time to build these teams up — not tear them down. These groups should be monitoring content and establishing norms and guidelines for the use of technology. Beyond that, we need a full-fledged movement for better education about media literacy, critical internet usage, and respectful community standards for the online world. And lawyers and legislators need to regulate and litigate.

Someone said recently that the internet broke our democracy. It is also possible to imagine how deep-fake technology can break people’s hearts. But this kind of damage can be prevented with ethical guidance, wise legislation, and human ingenuity.

I look forward to reading future stories about the expansion of ethics teams at tech companies. Maybe someday there will be college majors and high school classes in critical thinking and the internet. Of course, when I run across these stories, I’ll double and triple check them to make sure they are not fake news.

Read more at: https://www.fresnobee.com/opinion/readers-opinion/article273252600.html#storylink=cpy