Tuesday, November 5, 2024

AI fears grow as ‘godfather’ Hinton quits Google

Geoffrey Hinton says he left Google so he could speak freely about the risks of artificial intelligence, rather than because of a desire to criticize Google specifically.

Geoffrey Hinton, a pioneer of artificial intelligence, has left his position at Google to become a critic of the technology he helped build.

Hinton, often referred to as the "Godfather of AI," confirmed his exit on Monday, citing concerns that the blossoming technology poses potential dangers to humanity.

Hinton and two of his graduate students at the University of Toronto created the intellectual foundation for generative AI systems, which power popular chatbots such as ChatGPT. However, Hinton has joined a growing number of industry insiders who worry that they have released something dangerous into the world.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told the New York Times.

“I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

“We remain committed to a responsible approach to AI,” Dean said in a statement. “We’re continually learning to understand emerging risks while also innovating boldly.”
– Profound risks

Hinton's decision to quit and speak out against the unchecked spread and growing sophistication of AI comes amid fears that a new crop of AI-powered chatbots could be used to spread misinformation and displace jobs.

The launch of ChatGPT last year sparked a race among tech companies to build comparable AI tools into their offerings, with no clear end in sight. OpenAI, Microsoft, and Google lead the trend, while other major players such as IBM, Amazon, Baidu, and Tencent are developing similar technologies.

In March, prominent figures in tech signed a letter calling for artificial intelligence labs to pause the development of the most powerful AI systems for at least six months, citing "profound risks to society and humanity."

The letter came just two weeks after OpenAI announced GPT-4, a more advanced version of the technology behind ChatGPT that can draft lawsuits, pass standardized exams, and build a working website from a hand-drawn sketch.

Hinton believes generative AI is already a tool for misinformation, will soon be a risk to jobs, and could eventually pose a risk to humanity itself.

– Long service to AI

Hinton, a 75-year-old British expatriate, became passionate about the development of AI in the early 1970s. He embraced the idea of the neural network, a mathematical system that learns skills by analyzing data, at a time when very few researchers believed in it, and it became his life's work.

In the 1980s, Hinton, then a computer science professor at Carnegie Mellon University, left for Canada because he refused to accept funding from the U.S. military. At the time, most AI research in the United States was funded by the Defense Department, and Hinton opposed the use of AI on the battlefield.

In 2012, Hinton and his students Ilya Sutskever and Alex Krizhevsky developed a neural network that could identify common objects in images, such as flowers, dogs, and cars.

Google acquired the company the three founded for $44 million. The system they built led to the creation of increasingly powerful technologies, including new chatbots such as ChatGPT and Google Bard. Sutskever went on to become chief scientist at OpenAI.

Hinton, along with his two long-time collaborators, received the Turing Award, widely known as the “Nobel Prize of computing,” in 2018 for their work on neural networks.

Until last year, Hinton believed that neural networks were inferior to the human brain at processing language. He has since changed his mind: "Maybe what is going on in these systems," he said, "is actually a lot better than what is going on in the brain."

However, as companies continue to improve their AI systems, he now believes they pose an increasingly significant threat.

In an interview with The New York Times, Hinton expressed his concerns about AI, saying, “We should be very careful about using these large language models to generate anything that’s going to be viewed by large numbers of people. The risk of bias is high, the risk of misuse is high, and you don’t get the feedback loop you need for accountability if things go wrong.”
