Not long ago, the artificial intelligence (AI) bot ChatGPT sent me, as a "courtesy," a copy of my abbreviated biography, which it had written.
ChatGPT, developed by the San Francisco firm OpenAI, got my birth date and birthplace wrong. It listed the wrong college as my alma mater. It credited me with awards I never won while ignoring those I actually did. Yet it got enough facts right to make clear this was no mere phishing expedition but a version of the real thing.
My attempts at correction were ignored. All along, I knew it could be dicey to provide information that, had it been used for corrections, could also have enabled identity theft or, worse, directed criminals to my door.
The experience recalled the science fiction stories and novels of Isaac Asimov, who prophetically devised a now widely recognized set of laws governing intelligent robots in his imagined future. Asimov first set out these three laws in his 1942 short story "Runaround," and they became staples of his later works:
"A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
Like the U.S. Constitution, these fictitious laws proved open to constant reinterpretation: new questions arose over what constitutes harm and whether sentient robots should be condemned to perpetual second-class servant status.
It took more than 30 years, but eventually others tried to improve on Asimov’s laws. Altogether, four authors proposed more such “laws” between 1974 and 2013. All sought ways to prevent robots from conspiring to dominate or eliminate the human race.
The same threat was perceived this past May by more than 100 technology leaders, corporate chief executives and scientists who warned that AI poses an existential threat to humanity. Their 22-word statement said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." President Biden joined in during a California trip, calling for AI safety regulations.
Given how difficult international cooperation has been against those other serious threats, pandemics and nuclear weapons, no one can assume AI will ever be regulated worldwide. Yet worldwide regulation is the only way to make such rules or laws effective.
That makes a pause, not a permanent halt, in AI's advancement necessary right now. AI has already permeated essentials of human society: it is now used in college admissions, hiring decisions and police work, in generating fake literature and art, and in driving cars and trucks.
An old truism suggests that “anything we can conceive of is probably occurring right now someplace in the universe.” The AI corollary may be that if anyone can imagine an AI robot doing something, then someday a robot will do it.
So without active prevention, someone somewhere will create a machine capable of murdering humans at its own whim. It also means that someday, absent regulation, robots able to conspire against human dominance on Earth could be built, perhaps by other robots.
Asimov, of course, imagined all this. His novels featured a few renegade robots but also noble ones like R. Daneel Olivaw, who quietly guided and nurtured a benevolent Galactic Empire.
In part, Asimov was reacting to events of his day, which saw humans exterminate other humans on a huge, industrial scale. He witnessed the rise and fall of vicious dictatorships more despotic than any of today's. Postulating that robots would advance far beyond even today's AI, he conceived a system in which they would coexist peacefully with humans on a large scale.
No one is controlling AI development now, though, leaving it free to go in any direction, good or evil. Human survival demands limits, as Asimov foresaw. If we don't impose them today, not even a modern Asimov could predict the consequences.
Thomas Elias can be reached at [email protected], and more of his columns are online at californiafocus.net.