AI brings fears that ‘human beings are soon going to be eclipsed’

Recently I stumbled across an essay by Douglas Hofstadter that made me happy. Hofstadter is an eminent cognitive scientist and the author of books like “Gödel, Escher, Bach” and “I Am a Strange Loop.” The essay that pleased me so much, called “The Shallowness of Google Translate,” was published in The Atlantic in January of 2018.

Back then, Hofstadter argued that AI translation tools might be really good at some pedestrian tasks, but they weren’t close to replicating the creative and subtle abilities of a human translator. “It’s all about ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things,” he wrote.

The article made me happy because here was a scientist I greatly admire arguing for a point of view I’ve been coming to myself. Over the past few months, I’ve become an AI limitationist. That is, I believe that while AI will be an amazing tool for, say, tutoring children all around the world, or summarizing meetings, it is no match for human intelligence.

Hofstadter’s 2018 essay suggested that he’s a limitationist too, and reinforced my sense that this view is right.

So I was startled this month to see the following headline in one of the AI newsletters I subscribe to: “Douglas Hofstadter changes his mind on Deep Learning & AI Risk.” I followed the link to a podcast and heard Hofstadter say, “It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed.”

I called Hofstadter to ask him what was going on. He shared his genuine alarm about humanity’s future. He said that ChatGPT was “jumping through hoops I would never have imagined it could. It’s just scaring the daylights out of me.”

Hofstadter has long argued that intelligence is the ability to look at a complex situation and find its essence.

Two years ago, Hofstadter says, AI could not reliably perform this kind of thinking. But now it is performing this kind of thinking all the time. And if it can perform these tasks in ways that make sense, Hofstadter says, then how can we say it lacks understanding, or that it’s not thinking?

And if AI can do all this kind of thinking, Hofstadter concludes, then it is developing consciousness. He has long argued that consciousness comes in degrees and that if there’s thinking, there’s consciousness. A bee has one level of consciousness, a dog a higher level, an infant a higher level, and an adult a higher level still.

“We’re approaching the stage when we’re going to have a hard time saying that this machine is totally unconscious. We’re going to have to grant it some degree of consciousness, some degree of aliveness,” he says.

His words carry weight. They shook me.

But so far he has not fully converted me. I still see these things as inanimate tools. I'd still argue that the machine is not having anything like a human learning experience.

I think I still believe this limitationist view. But I confess I believe it a lot less fervently than I did last week.

Hofstadter is essentially asking: If AI cogently solves intellectual problems, then who are you to say it's not thinking? As Hofstadter points out, these artificial brains are not constrained by the factors that limit human brains — like having to fit inside a skull. And, he emphasizes, they are improving at an astounding rate, while human intelligence isn't.

It’s hard to dismiss that argument.

I don’t know about you, but this is what life has been like for me since ChatGPT 3 was released. I find myself surrounded by radical uncertainty — uncertainty not only about where humanity is going but about what being human is.

Beset by unknowns, I get defensive and assertive. I find myself clinging to the deepest core of my being — the subjective part of the human spirit that makes each of us ineluctably who we are. I want to build a wall around this sacred region and say: "This is the essence of being human. It is never going to be replicated by a machine."

But then some technologist whispers, “Nope, it’s just neural nets all the way down. There’s nothing special in there. There’s nothing about you that can’t be surpassed.”

Some of the technologists seem oddly sanguine as they talk this way. At least Hofstadter is enough of a humanist to be horrified.

David Brooks is a New York Times columnist.

