Is artificial intelligence going to kill us? It all depends on who is using it and why
Experts warn that artificial intelligence may kill us. A declaration signed by a number of luminaries states: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
I’m sympathetic to the worry. But when you think about the other problems mentioned here — nuclear war and pandemics — it might be that we need AI to save us from our own incompetence. Could AI have responded better to Covid than we humans did?
It all depends on what we do with AI, and who is using it. A crazed dictator with AI is scary. But a scientist assisted by AI, not so much.
Geoffrey Hinton is one of the signatories of the new AI warning, and an expert in the field. In a recent interview, Hinton warns that AI may grow smarter than its human creators within five to 20 years.
One of the things that freaked him out recently was asking an AI to explain a joke. Hinton did not expect AI to understand humor. But it did.
That got me curious, so I asked ChatGPT (an online AI), “Why did the chicken cross the road?” Immediately, it said, “To get to the other side.” And then, without prompting, it explained the joke as a play on words: “It’s a simple and often unexpected answer that plays on the double meaning of ‘the other side.’” It even described the joke as a “philosophical statement on the nature of life and death.”
This surprised me. The AI recognized that I was asking a joke. I had actually forgotten that the joke was about chicken suicide. But the AI went straight to the heart of the matter.
But is this an existential risk? It depends on how we use AI. If we use AI to explain jokes, we won’t risk much. Philosophy and comedy assisted by AI might be fun and informative. But if we weave AI into the systems that govern our lives, we might end up in a strange dystopia.
One obvious concern is the stock market. AI can analyze data and make trades in nanoseconds. This may not lead to extinction. But it may cause bubbles and panics, and enrich those fortunate enough to have an AI broker. Or, maybe AI could be used beneficially to even things out, preventing panics and bubbles. Again, it depends on what we do with it, and what safeguards we program into the system.
A darker possibility is that AI could take control of military systems, including nuclear weapons. What if AI were put in charge in the hope of automating and streamlining the decision procedures involved in nuclear war? Maybe nuclear-armed AI will lead to Armageddon. Or, again, maybe AI will control our deadliest weapons better than we do.
It’s worth asking whether human beings are really trustworthy custodians of weapons, or wealth. Some crazed Dr. Strangelove could launch a nuclear war. And rapacious financiers like Bernie Madoff can ruin people’s lives. Perhaps AI is more trustworthy than humans in this regard. AI won’t get angry, greedy, envious, or hateful.
And here is where things get really weird and dystopian. What if a smart AI figures out that humans — with all of our ignorance, spite, and greed — should not be trusted with nukes or with billion-dollar deals? In science fiction, the AI might seize control — for our own good!
But AI will only take control if we put it in charge. Human beings are always looking for shortcuts and quick fixes to complex problems (as I discussed in my column last week). We invent tools to make things easier. But ultimately, we are responsible for the tools we create, including nuclear weapons, the stock market and AI.
We are also responsible for the greed, spite, and ignorance that afflict the world. These are human problems. Tools can magnify these ugly traits, or they can help us control our worst impulses. In the end, the choice of crossing the road to get to the other side belongs to us. This choice is ultimately about ethics and the human spirit. If AI leads to our extinction, the fault will not be in the tool but within the human soul.