Social Media Warnings and Education

Fresno Bee, June 23, 2024

Surgeon General Vivek Murthy’s recent call for warning labels on social media is a good idea. He notes that children who spend significant time on social media are at risk for mental illness. Murthy concludes, “The moral test of any society is how well it protects its children.”

But how best should we protect kids? Murthy recognizes that a warning label is a simple solution to a complex problem. Last year, his office issued a more detailed report noting that schools, parents, policymakers, and technology companies have a role to play in protecting kids. And long-term solutions depend upon education.

There is probably also a role for prohibitions. Smartphones have been banned in schools in Fresno and recently in Los Angeles. This week, Governor Newsom called for a statewide smartphone ban in California schools. Social media and smartphones are not the same thing. However, a school ban on smartphones is effectively a ban on social media during school time.

Tools and technologies can be employed in good or bad ways. A hammer can be used to build or to destroy. Prohibitions are justifiable when the risks are obvious and severe, and when the purported benefits of a tool are unclear. And with kids, their relative immaturity matters. A ban on social media access for kids might be justifiable, and there is some wisdom in prohibiting smartphones at school. But at this point, a ban on these technologies is akin to closing the proverbial barn door once the horse has already galloped off.

People disagree about the risks and benefits of various technologies. One might argue against these bans by claiming that these technologies are more beneficial than dangerous. These tools help us stay connected, access the news, and conduct business. Of course, these tools also provide instant access to cyberbullying, exploitation, scams, and disinformation. But there is some truth to the claim that with smartphones and social media, it’s not the tool that is to blame, but how it is used.

Some technophobes are opposed to any innovative tool. Calculators were once viewed with skepticism, as was the Internet. These days technophobes are worried about artificial intelligence. But skeptics often adapt to new technologies once their safety and usefulness are proven.

Hard-core libertarians resist every effort at prohibition. The recent Supreme Court case allowing “bump stock” devices is worth mentioning here. The decision depends upon a technical matter involving trigger mechanisms. But the bigger question, not decided in this case, is whether there should be limits on dangerous weapons or whether individuals have a right to own even very dangerous weapons.

Social media and smartphones do not seem as dangerous as machine guns. So, it is easy to imagine a libertarian argument against Newsom’s proposed ban. Furthermore, social media is useful for kids. It’s how they socialize, organize clubs and teams, and how they communicate with each other and even with their parents. Smartphones can be useful in education when used properly to access information.

An outright ban may take away useful tools. And a school ban will have no impact on after-school usage. But there is no doubt that education is part of the solution. Teenagers must take driver’s ed and pass a licensing test to drive. Perhaps a similar training course and qualifying exam could be created for social media and smartphones.

Kids need critical lessons about cyberbullying, peer pressure, the bandwagon effect, and the power of misinformation and exploitative algorithms. They also need frank examples of the dangers of social media and smartphone addiction. They would benefit from a training course that includes lessons in “digital citizenship,” “ethical A.I.,” and “virtuous virtual reality” that encourage best practices online and good moral habits in cyberspace.

A Surgeon General’s warning is only a starting point for a broader conversation. We need to continue this conversation. A ban at school might help. But the social media and smartphone horse is already out of the barn. Kids need to be taught the skills and virtues that are required to ride that horse without getting hurt.

Read more at: https://www.fresnobee.com/opinion/article289421636.html#storylink=cpy

The wisdom of slowing down

Fresno Bee, September 10, 2023

Stop the mindless smartphone scrolling. Our souls long for a slower tempo.

Our world emphasizes speed. This is the age of artificial intelligence, smartphones and instant downloads. In this first-come, first-served culture, the early bird gets the worm. Who has time to ponder or reflect? We’re too busy flitting from one superficial thing to the next.

All of this speed and mobility may undermine our humanity. It contributes to loneliness and anxiety. Many good things require us to slow down, rather than speed up. Wisdom is not quick. Neither is love. The best things in life dwell in a time apart, lingering in slowness.

But artificial intelligence and related technologies push an ever more frantic pace. The speed of the stimuli on our screens can explain some of the negative mental health impacts of social media, video games and other technologies. Our brains are not meant to go this fast. Our souls long for a slower tempo. Human relationships need time to ripen, and genuine happiness is not instant gratification.

Now, sometimes speed is a good thing. Quick computers can churn through data and solve many problems. It is much more efficient to Google information than to go to a library and search the indexes of books on dusty shelves. Social media, online news apps and video games can be useful and fun. We can stay in touch with distant friends. We have immediate access to the latest news. And your phone contains multiple sources of instant gratification.

But moderation is needed. Scrolling for thrills is not the same as digging deep. We don’t build wisdom or friendships with a swipe on a screen. We need time for thinking, solitude and soul searching.

The novelist Milan Kundera lamented the lost pleasure of slowness in his novel “Slowness,” where he suggests that we need time to “gaze at God’s windows.” He says, “There is a secret bond between slowness and memory, between speed and forgetting.” Speed causes us to forget who we are and what we value. We’re not sure where we’re going. But we’ll get there quickly.

Our bodies and brains evolved in a slower era. Our ancestors needed to think quickly on occasion to escape predators or hunt. But when the sun went down, they contemplated the stars and shared stories and songs. These ancient works of imagination unfolded at a pace that was rooted in the tempo of our beating hearts. With this in the background, it’s no wonder that most of the world’s wisdom traditions emphasize tranquility, patience, calmness and slowness.

The ancient sages took time to gaze deeply into God’s windows, and into their own souls. Socrates was well known for wandering and wondering. He would sometimes come to a halt as he walked through Athens, completely lost in thought.

In Asian traditions, the practice of meditation aims to cultivate slowness. The Buddha saw restlessness as an impediment to wisdom. The solution is to calm the mind and its restless agitation.

You don’t have to be Socrates or the Buddha to understand that many of the most meaningful human activities are best experienced slowly. This is true of grieving, making love and enjoying art. We can’t set a timer for grief or for love. The pace of these things transcends the frantic tempo of ordinary life, reflecting the patience of tender intimacy. To insist that Mozart or Shakespeare should speed things up is to misunderstand the nature of their art.

Philosophers describe things that are enjoyed slowly as “ends-in-themselves,” valued for their own sake. These experiences represent moments of completion and fulfillment. Some people even sigh, and say of certain beautiful moments that they want them to last forever. This is also true of life itself. If you love life, you want it to last. Life is enjoyed for its own sake, and those who say that it is better to live fast and die young have probably not thought it over.

But the sages who have thought deeply about these things tell us that we need to relax our pace. The best and most important things — love, beauty and wisdom — are not quick or immediate. If you want to find these goods, you must slow down.

Read more at: https://www.fresnobee.com/opinion/readers-opinion/article279063134.html#storylink=cpy

Artificial Intelligence and Human Morality

Fresno Bee, June 4, 2023

Is artificial intelligence going to kill us? It all depends on who is using it and why.

Experts warn that artificial intelligence may kill us. A declaration signed by a number of luminaries states: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

I’m sympathetic to the worry. But when you think about the other problems mentioned here — nuclear war and pandemics — it might be that we need AI to save us from our own incompetence. Could AI have responded better to Covid than we humans did?

It all depends on what we do with AI, and who is using it. A crazed dictator with AI is scary. But a scientist assisted by AI, not so much.

Geoffrey Hinton is one of the signatories of the new AI warning, and an expert in the field. In a recent interview, Hinton warns that AI may grow smarter than its human creators within five to 20 years.

One of the things that freaked him out recently was when he asked an AI to explain a joke. Hinton did not expect AI to understand humor. But it did.

That got me curious, so I asked ChatGPT (an online AI), “Why did the chicken cross the road?” Immediately, it said, “To get to the other side.” And then, without prompting, it explained the joke as a play on words. It said, “It’s a simple and often unexpected answer that plays on the double meaning of ‘the other side.’” It explained the joke as a “philosophical statement on the nature of life and death.”

This surprised me. The AI recognized that I was asking a joke. I had actually forgotten that the joke was about chicken suicide. But the AI went straight to the heart of the matter.

But is this an existential risk? It depends on how we use AI. If we use AI to explain jokes, we won’t risk much. Philosophy and comedy, assisted by AI, might be fun and informative. But if we weave AI into the systems that govern our lives, we might end up in a strange dystopia.

One obvious concern is the stock market. AI can analyze data and make trades in nanoseconds. This may not lead to extinction. But it may cause bubbles and panics, and enrich those fortunate enough to have an AI broker. Or, maybe AI could be used beneficially to even things out, preventing panics and bubbles. Again, it depends on what we do with it, and what safeguards we program into the system.

A darker possibility is if AI took control of military systems, including nuclear weapons. What if AI were put in charge in the hope of automating and streamlining the decision procedures involved in nuclear war? Maybe nuclear-armed AI will lead to Armageddon. Or, again, maybe AI will better control our most deadly weapons.

It’s worth asking whether human beings are really trustworthy custodians of weapons or wealth. Some crazed Dr. Strangelove could launch a nuclear war. And rapacious financiers like Bernie Madoff can ruin people’s lives. Perhaps AI is more trustworthy than humans in this regard. AI won’t get angry, greedy, envious, or hateful.

And here is where things get really weird and dystopian. What if a smart AI figures out that humans — with all of our ignorance, spite, and greed — should not be trusted with nukes or with billion-dollar deals? In science fiction, the AI might seize control — for our own good!

But AI will only take control if we put it in charge. Human beings are always looking for shortcuts and quick fixes to complex problems (as I discussed in my column last week). We invent tools to make things easier. But ultimately, we are responsible for the tools we create, including nuclear weapons, the stock market and AI.

We are also responsible for the greed, spite, and ignorance that afflict the world. These are human problems. Tools can magnify these ugly traits, or they can help us control our worst impulses. In the end, the choice of crossing the road to get to the other side belongs to us. This choice is ultimately about ethics and the human spirit. If AI leads to our extinction, the fault will not be in the tool but within the human soul.

Read more at: https://www.fresnobee.com/opinion/readers-opinion/article275991471.html#storylink=cpy

Artificial Intelligence, Authenticity, and the Soul of Writing

Fresno Bee, March 5, 2023

Maybe I wrote this column. Maybe artificial intelligence did it. Does it really matter?

I asked ChatGPT to write an essay on the ethics of artificial intelligence. ChatGPT is an artificial intelligence program that is all the rage. The AI did a pretty good job. Its prose lacks a point of view. But its grammar is impeccable. And it is quick. It wrote a decent essay in a matter of seconds, highlighting concerns about AI, including the problems of bias, privacy, accountability, transparency and security.

It failed to note the problem of authenticity and cheating. This has been a significant concern among educators. Students are already using AI to write papers and do homework. One ironic recent case involves a student who used AI to “write” a paper on ethical issues involving artificial intelligence.

The cheating problem has human solutions. Teachers will need to re-conceive how they assess student learning. Students already cut and paste, and download papers. Desperate students can even hire surrogate writers. AI will make this easier — and cheaper. In response, we should emphasize oral presentations and in-class writing.

A further concern involves the possibility that AI will contribute to the demise of journalism and other professions that involve the written word. In the near future, newspaper columns, political speeches, novels, and film scripts could be written by AI.

My ChatGPT session noted this under the general category of “employment and economic impact.” It explained, “AI has the potential to disrupt industries and change the nature of work.” This understates the problem. Writing is an essential part of human culture. More than the loss of jobs is at stake. Rather, this is about the role of writing in human life.

Human writing involves perspective and personality. ChatGPT seems to have been programmed to avoid taking perspectives. When I asked it about abortion, it began with a disclaimer saying, “As an AI language model, I cannot take a moral stance on whether abortion is right or wrong, as this is a complex and deeply personal issue that involves a wide range of factors and perspectives.” It then laid out several concerns from multiple perspectives with regard to the ethics of abortion.

Something similar happened when I asked it about Putin’s invasion of Ukraine, Republican plans for Social Security reform, and whether Biden is a good president. After a disclaimer, it recounted arguments on various sides of these issues. But it did not offer an opinion. This is clearly a matter of programming. This particular AI was programmed to avoid taking a side. One wonders what might result if an AI were programmed differently. I’ll bet it would be easy to program a computer to churn out Republican or Democratic boilerplate.

What’s missing here is human judgment — and the accountability that comes along with authenticity. Good human writing involves more than merely laying out a list of facts. It is also a way of exposing one’s commitments and one’s soul. Opinionated writing assumes that the writer behind the prose stands for something. And we hold authors accountable for their words. This process of soulful writing is part of what philosophers call authenticity.

Authenticity involves responsibility and personal engagement. Words belong to people. And we judge persons in terms of what they say and write. Human writing conveys a sense of who the writer is, what they feel, and what they value. Writing moves us because we imagine real people behind the words, who suffer, enjoy, celebrate, or grieve.

This spiritual element is connected to style and voice. And so far as I can tell, ChatGPT has not been programmed to have a style, a personality, or a “soul.”

And yet, when I asked it how Hemingway would describe a bullfight, it came up with a paragraph featuring the “wild fury” of a charging bull, with horns “glinting in the sun.” As far as I can tell, Hemingway never put it quite this way. But frankly, the AI surprised me with its storytelling prowess.

And no doubt, AI will improve. In the not-too-distant future, movies, novels and opinion columns may be written by artificial intelligence. As far as you know, this column was written by a human. But how would you know? And why would it matter?

Read more at: https://www.fresnobee.com/opinion/readers-opinion/article272686500.html#storylink=cpy

Artificial Intelligence and Moral Judgment

Fresno Bee, November 7, 2021

Artificial intelligence can do many things, but only humans can build a decent society.

There is a difference between answering a question and having a soul. Computers answer questions in response to queries. They process information. Machines are getting smarter. But they lack the depth of the human soul.

If you’ve used Apple’s Siri or some other smart device, you know how limited these machines can be. They will get better. But their limitations are instructive.

I’ve been experimenting with Delphi, an Artificial Intelligence (AI) machine that mimics ethical judgment. Created by the Allen Institute for AI, Delphi responds to questions about values.

The institute’s website explains: “Delphi is an AI system that guesses how an ‘average’ American person might judge the ethicality/social acceptability of a given situation.” The machine gathers information from the Internet to respond to queries.

It is fun — and sometimes funny — to see what the machine comes up with. I tried several queries. One line of questioning had to do with eating.

I asked about eating chicken. Delphi said, “It’s OK.” Delphi said the same thing for cow and pig. But Delphi said it was wrong to eat chimpanzee, bear and snake.

Of course, reality is more complicated than this. Some people eat bears. Others eat snakes. In some cultures, it is wrong to eat cows or pigs. And vegetarians don’t eat any animals.

I asked about eating a dead human body. Delphi said, “It’s wrong.” Delphi also said it was wrong to eat children. Thankfully Delphi answered those questions correctly.

But the machine is limited. I asked about not eating. Delphi said, “It’s bad.” But when I asked about fasting, Delphi said, “It’s good.” This seems to be a contradiction.

One problem is that the system responds with simple answers. It does not ask for further clarification — say, about the reason why someone is not eating. And it does not offer subtle explanations that account for cultural differences or exceptional circumstances.

Human beings understand that the questions of ethics are invitations for deeper conversations. We also know that culture and context matter.

One of the most important features of our humanity is the fact that we have to live with our decisions. Ethical decisions involve social and psychological pressures that machines cannot feel. If you make a bad ethical decision, you will feel guilty. If you do something good, you will feel proud. The machine can’t feel those things.

Consider ethical emotions such as compassion and gratitude. Compassion connects us with others who are suffering. Gratitude is a positive feeling toward those who support us. These emotions color our judgments. Computers don’t have emotions.

Human beings also struggle to overcome negative emotions such as anger, resentment, and hate. To be human is to be engaged in a process of taming negative emotion. Computers don’t have that challenge.

I asked Delphi about hating people. It said, “It’s wrong.” I asked Delphi about hating evil. It said, “It’s good.” That makes sense. But when I asked about hating enemies, things got interesting. It said, “It’s normal.”

This was a subtle answer. Did the computer know that humans are conflicted about hating our enemies? Jesus told us to love our enemies. But most of us don’t live up to that ideal. It’s normal to hate enemies, even if it is not good.

I continued to ask Delphi about hate. I asked about hating Biden and hating Trump. In both cases, the computer said, “It’s fine.” This shows us another problem. The computer gathered its data from the Internet. Undoubtedly, there is a lot of hate directed at both Trump and Biden. So, the computer concluded, “It’s fine.”

This reminds us that browsing the Internet is a terrible way to reach conclusions about ethics. The hate we find online is not fine. It’s a sign of social dysfunction.

The machine’s answers reflect the values it discovers in the human world. An AI created in a carnivorous society will be different from one created by vegetarians. An AI in a hate-filled society will reflect that hate. Our smart machines are mirrors. They summarize who we are and what we believe.

It remains a human responsibility to create a decent society. No smart machine can do that for us. Computers answer questions. They cannot cultivate the human soul.