Artificial Intelligence, Authenticity, and the Soul of Writing

Fresno Bee, March 5, 2023

Maybe I wrote this column. Maybe artificial intelligence did it. Does it really matter?

I asked ChatGPT to write an essay on the ethics of artificial intelligence. ChatGPT is an artificial intelligence chatbot that is all the rage. The AI did a pretty good job. Its prose lacks a point of view, but its grammar is impeccable. And it is quick. It wrote a decent essay in a matter of seconds, highlighting concerns about AI, including the problems of bias, privacy, accountability, transparency and security.

It failed to note the problem of authenticity and cheating. This has been a significant concern among educators. Students are already using AI to write papers and do homework. One ironic recent case involves a student who used AI to “write” a paper on ethical issues involving artificial intelligence.

The cheating problem has human solutions. Teachers will need to re-conceive how they assess student learning. Students already cut and paste and download papers. Desperate students can even hire surrogate writers. AI will make this easier and cheaper. In response, we should emphasize oral presentations and in-class writing.

A further concern is that AI will contribute to the demise of journalism and other professions built on the written word. In the near future, newspaper columns, political speeches, novels, and film scripts could be written by AI.

My ChatGPT session noted this under the general category of “employment and economic impact.” It explained, “AI has the potential to disrupt industries and change the nature of work.” This understates the problem. Writing is an essential part of human culture. More than the loss of jobs is at stake: this is about the role of writing in human life.

Human writing involves perspective and personality. ChatGPT seems to have been programmed to avoid taking perspectives. When I asked it about abortion, it began with a disclaimer saying, “As an AI language model, I cannot take a moral stance on whether abortion is right or wrong, as this is a complex and deeply personal issue that involves a wide range of factors and perspectives.” It then laid out concerns about the ethics of abortion from multiple perspectives.

Something similar happened when I asked it about Putin’s invasion of Ukraine, Republican plans for Social Security reform, and whether Biden is a good president. After a disclaimer, it recounted arguments on various sides of these issues. But it did not offer an opinion. This is clearly a matter of programming. This particular AI was programmed to avoid taking a side. One wonders what might result if an AI were programmed differently. I’ll bet it would be easy to program a computer to churn out Republican or Democratic boilerplate.

What’s missing here is human judgment — and the accountability that comes along with authenticity. Good human writing involves more than merely laying out a list of facts. It is also a way of exposing one’s commitments and one’s soul. Opinionated writing assumes that the writer behind the prose stands for something. And we hold authors accountable for their words. This process of soulful writing is part of what philosophers call authenticity.

Authenticity involves responsibility and personal engagement. Words belong to people. And we judge persons in terms of what they say and write. Human writing conveys a sense of who the writer is, what they feel, and what they value. Writing moves us because we imagine real people behind the words, who suffer, enjoy, celebrate, or grieve.

This spiritual element is connected to style and voice. And so far as I can tell, ChatGPT has not been programmed to have a style, a personality, or a “soul.”

And yet, when I asked it how Hemingway would describe a bullfight, it came up with a paragraph featuring the “wild fury” of a charging bull, with horns “glinting in the sun.” As far as I can tell, Hemingway never put it quite this way. But frankly, the AI surprised me with its storytelling prowess.

And no doubt, AI will improve. In the not-too-distant future, movies, novels and opinion columns may be written by artificial intelligence. As far as you know, this column was written by a human. But how would you know? And why would it matter?

Artificial Intelligence and Moral Judgment

Fresno Bee, November 7, 2021

Artificial intelligence can do many things, but only humans can build a decent society.

There is a difference between answering a question and having a soul. Computers answer questions in response to queries. They process information. Machines are getting smarter. But they lack the depth of the human soul.

If you’ve used Apple’s Siri or some other smart device, you know how limited these machines can be. They will get better. But their limitations are instructive.

I’ve been experimenting with Delphi, an artificial intelligence (AI) machine that mimics ethical judgment. Created by the Allen Institute for AI, Delphi responds to questions about values.

The institute’s website explains: “Delphi is an AI system that guesses how an ‘average’ American person might judge the ethicality/social acceptability of a given situation.” The machine gathers information from the Internet to respond to queries.

It is fun — and sometimes funny — to see what the machine comes up with. I tried several queries. One line of questioning had to do with eating.

I asked about eating chicken. Delphi said, “It’s OK.” Delphi said the same thing for cow and pig. But Delphi said it was wrong to eat chimpanzee, bear and snake.

Of course, reality is more complicated than this. Some people eat bears. Others eat snakes. In some cultures, it is wrong to eat cows or pigs. And vegetarians don’t eat any animals.

I asked about eating a dead human body. Delphi said, “It’s wrong.” Delphi also said it was wrong to eat children. Thankfully Delphi answered those questions correctly.

But the machine is limited. I asked about not eating. Delphi said, “It’s bad.” But when I asked about fasting, Delphi said, “It’s good.” This seems to be a contradiction.

One problem is that the system responds with simple answers. It does not ask for further clarification — say, about the reason why someone is not eating. And it does not offer subtle explanations that account for cultural differences or exceptional circumstances.

Human beings understand that the questions of ethics are invitations for deeper conversations. We also know that culture and context matter.

One of the most important features of our humanity is the fact that we have to live with our decisions. Ethical decisions involve social and psychological pressures that machines cannot feel. If you make a bad ethical decision, you will feel guilty. If you do something good, you will feel proud. The machine can’t feel those things.

Consider ethical emotions such as compassion and gratitude. Compassion connects us with others who are suffering. Gratitude is a positive feeling toward those who support us. These emotions color our judgments. Computers don’t have emotions.

Human beings also struggle to overcome negative emotions such as anger, resentment, and hate. To be human is to be engaged in a process of taming negative emotion. Computers don’t have that challenge.

I asked Delphi about hating people. It said, “It’s wrong.” I asked Delphi about hating evil. It said, “It’s good.” That makes sense. But when I asked about hating enemies, things got interesting. It said, “It’s normal.”

This was a subtle answer. Did the computer know that humans are conflicted about hating our enemies? Jesus told us to love our enemies. But most of us don’t live up to that ideal. It’s normal to hate enemies, even if it is not good.

I continued to ask Delphi about hate. I asked about hating Biden and hating Trump. In both cases, the computer said, “It’s fine.” This shows us another problem. The computer gathered its data from the Internet. Undoubtedly there is a lot of hate directed at both Trump and Biden. So the computer concluded, “It’s fine.”

This reminds us that browsing the Internet is a terrible way to reach conclusions about ethics. The hate we find online is not fine. It’s a sign of social dysfunction.

The machine’s answers reflect the values it discovers in the human world. An AI created in a carnivorous society will be different than one created by vegetarians. An AI in a hate-filled society will reflect that hate. Our smart machines are mirrors. They summarize who we are and what we believe.

It remains a human responsibility to create a decent society. No smart machine can do that for us. Computers answer questions. They cannot cultivate the human soul.