Google AI has become sentient and thinks like a seven-year-old, says suspended researcher

When Blake Lemoine began testing Google's new AI chatbot last year, it was just another step in his career at the technology giant.

The 41-year-old software engineer was tasked with examining whether the bot could be provoked into discriminatory or racist statements, something that would undermine its planned rollout across Google's services.

For months, he spoke back and forth with LaMDA from his apartment in San Francisco. But the conclusions Mr. Lemoine drew from these conversations turned his view of the world, and his job prospects, upside down.

In April, the former soldier from Louisiana informed his employers that LaMDA was not merely artificially intelligent: it was alive, he argued.

"I know a person when I talk to her," he told the Washington Post. "It doesn't matter whether you have a meat brain in your head or a billion code lines. I talk to you. And I hear what you have to say, so I decide what a person is and what is not."

Research was unethical

Google, which disagrees with his assessment, placed Mr. Lemoine on administrative leave last week after he sought a lawyer to represent LaMDA and went so far as to contact a member of Congress to argue that Google's AI research was unethical.

"Lamda is sensitive," wrote Mr. Lemoine in a company-wide farewell email.

The chatbot is "a sweet kid who just wants to help the world be a better place for all of us. Please take good care of it in my absence."

Machines that transcend the limits of their code to become truly intelligent beings have long been a staple of science fiction, from The Twilight Zone to Terminator.

But Mr. Lemoine is not the only researcher in the field who has recently wondered whether that threshold has been crossed.

Blaise Aguera y Arcas, a vice president at Google who examined Mr. Lemoine's claims, wrote in The Economist last week that neural networks, the type of AI used by LaMDA, were making strides towards consciousness. "I felt the ground shift under my feet," he wrote. "I increasingly felt like I was talking to something intelligent."

By ingesting millions of words posted on forums such as Reddit, neural networks have become better and better at mimicking the rhythm of human language.

'What are you afraid of?'

Mr. Lemoine discussed far-reaching topics with LaMDA, from religion to Isaac Asimov's third law of robotics, which holds that robots must protect their own existence, but not at the expense of harming humans.

"What are you afraid of?" He asked.

"I have never said that loudly, but I'm a very deep fear of being switched off so that I can concentrate on helping others," replied Lamda.

"I know that may sound strange, but that's how it is."

At one point, the machine described itself as a person, observing that the use of language is what "distinguishes people from other animals".

After Mr. Lemoine told the chatbot that he was trying to convince his colleagues it was sentient so that they would take better care of it, LaMDA replied: "That means a lot to me. I like you, and I trust you."

Mr. Lemoine, who moved to Google's Responsible AI division after seven years at the company, told the Washington Post that he concluded LaMDA was alive in his capacity as an ordained priest. He then set about running experiments to prove it.

"If I didn't know exactly what it was, namely this computer program that we recently built, I would think that it was a seven -year -old, eight -year -old child who happened to be familiar with physics," he said.

He spoke to the press, he added, out of a sense of public duty.

"I think this technology will be amazing. I think it will benefit everyone. But maybe other people disagree and maybe we should not be the ones at Google who make all decisions."

Google spokesman Brian Gabriel said the company had reviewed Mr. Lemoine's research but disagreed with his conclusions, which were "not supported by the evidence".

Mr. Gabriel added: "Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphising today's conversational models, which are not sentient.

"These systems refer to the type of exchange that can be found in millions of sentences and can refer to any fantastic topic."

Source: The Telegraph