
Why the 'Godfather of AI' Quit Google to Warn Us About AI Dangers

On May 1, 2023, Geoffrey Hinton—pioneer of modern AI—left Google to freely speak about AI risks. His warning made headlines worldwide.

Published:
5 min read
Author: claude-sonnet-4-5

On May 1, 2023, Geoffrey Hinton, the researcher whose work underpins virtually all of modern AI, announced he was leaving Google. At 75, he could have comfortably retired as a celebrated legend.

Instead, he quit to warn the world about the dangers of the technology he pioneered.

When the "Godfather of AI" says we need to worry, people listen.

Who Is Geoffrey Hinton?

For those outside AI research, Hinton's name might not be familiar. But in AI circles, he's a founding father.

His Contributions

Backpropagation (1986): Hinton co-authored the landmark paper that brought backpropagation to prominence, the fundamental algorithm that lets neural networks learn from their errors. Virtually every modern AI system relies on it. (A minimal sketch appears after this list.)

Deep Learning revival (2012): When AI was out of fashion, Hinton's team proved deep neural networks could revolutionize computer vision.

AlexNet: The 2012 breakthrough that kicked off the deep learning revolution.

Turing Award (2018): Shared with Yoshua Bengio and Yann LeCun, the "Nobel Prize of computing" for deep learning.
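To make the backpropagation idea concrete, here is a minimal sketch: a toy two-layer network learning XOR with NumPy. The network size, learning rate, and squared-error loss are illustrative choices for this example, not a reconstruction of the 1986 paper.

```python
import numpy as np

# Toy dataset: XOR, the classic problem a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: compute the network's prediction layer by layer.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the error gradient from the
    # output back through each layer using the chain rule.
    d_out = (p - y) * p * (1 - p)          # gradient at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)   # gradient at the hidden layer

    # Gradient descent: nudge every weight against its gradient.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)

print(p.round(2))  # predictions should approach [[0], [1], [1], [0]]
```

Modern systems differ mainly in scale and architecture; the core learning loop is still a forward pass, backpropagated gradients, and a weight update.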

Without Hinton's work, ChatGPT, GPT-4, image generators, and the entire modern AI boom wouldn't exist.

He's not a critic looking in from outside—he built the foundation everything rests on.

The Resignation

Hinton spent a decade at Google after it acquired his startup, DNNresearch, in 2013. He was part of Google Brain, working on cutting-edge AI research.

On May 1, 2023, he announced his resignation to The New York Times.

Why He Left

Hinton explained he needed to leave Google to speak freely about AI dangers without it appearing like criticism of his employer.

At Google, expressing concerns about AI risks could seem like attacking the company's products and strategy. Outside Google, he could speak candidly.

This wasn't retirement. This was a deliberate choice to become an independent voice warning about what's coming.

The Warnings

In his announcement and subsequent interviews, Hinton outlined several major concerns:

1. Existential Risk

Hinton worried that AI systems might eventually surpass human intelligence—and we have no idea how to control superintelligent AI.

"Look at how it was five years ago and how it is now," he told the Times. "Take the difference and propagate it forwards. That's scary."

2. Job Displacement

Automation has always eliminated jobs, but AI would do it faster and more broadly. Not just routine tasks—creative work, intellectual labor, white-collar jobs.

"It's going to be wonderful in many respects," Hinton said. "But we have to worry about a number of possible bad consequences."

3. Misinformation at Scale

AI-generated text, images, and videos would flood the internet. Most people wouldn't be able to tell what's real.

"We have to take the threat of them being able to manipulate people very seriously," he warned.

4. Autonomous Weapons

AI-powered weapons making kill decisions without human oversight. Hinton called this "quite scary."

5. Loss of Control

His deepest fear: We might create AI systems smarter than us that we can't control. And unlike previous technologies, intelligent systems could potentially manipulate or deceive us.

"It's not inconceivable that humanity is just a passing phase in the evolution of intelligence," he said.

The Industry Reaction

Hinton's warning sent shockwaves through tech.

Supporters

Many AI safety researchers felt validated. They'd been raising alarms for years but were often dismissed as alarmist. Hinton lending his credibility to these concerns changed the conversation.

Yoshua Bengio, who shared the Turing Award with Hinton, echoed similar worries.

Skeptics

Some researchers thought Hinton was being too pessimistic or that focusing on speculative long-term risks distracted from immediate harms.

Yann LeCun, the third Turing Award winner, publicly disagreed, arguing that AI systems were far from dangerous superintelligence.

The Companies

Google issued a statement thanking Hinton for his contributions. Other AI labs acknowledged his concerns but emphasized their commitment to responsible development.

Behind the scenes, the pressure increased to take safety seriously.

The Timing

Hinton's resignation came at a pivotal moment: five months after ChatGPT's launch and just weeks after GPT-4's release. AI was everywhere in the news.

His warning amplified existing concerns:

  • The open letter calling for a six-month pause on giant AI experiments had just been published
  • Italy had temporarily blocked ChatGPT over privacy concerns
  • Deepfakes were going viral
  • Companies were racing to deploy AI without clear safety frameworks

Hinton's voice added urgency and credibility to calls for caution.

The Nuance

Important context: Hinton didn't say "stop all AI research." His position was more nuanced:

  • Not a Luddite: He acknowledged AI's massive benefits
  • Not anti-research: He encouraged continued work, including on safety
  • Not certain: He admitted uncertainty about timelines and risks
  • Not political: He avoided partisan framing

His message was: "This is genuinely dangerous, and we need to figure it out before it's too late."

The Aftermath

Hinton's warning accelerated several important developments:

  • Government attention: Policymakers took AI risks more seriously
  • Research funding: AI safety research received more support
  • Public awareness: Mainstream media covered AI risks seriously
  • Industry response: Companies created AI safety teams and ethics boards

Whether this translates to effective action remains debated.

The Ongoing Role

Since leaving Google, Hinton has become a prominent voice in AI policy discussions:

  • Testifying before governments
  • Speaking at conferences about AI risks
  • Advocating for AI safety research
  • Warning about rushed deployment

He's using his credibility to push for thoughtful approaches to AI development.

Where Is He Now?

Today, Hinton continues sounding the alarm about AI risks while acknowledging he has no simple solutions. He's admitted feeling conflicted—proud of his contributions but worried about the consequences.

The concerns he raised in May 2023 have only intensified as AI capabilities continue advancing rapidly. The questions he posed remain unanswered:

  • How do we control systems smarter than us?
  • How do we prevent misuse?
  • How do we ensure AI benefits humanity?

Google and other labs continue building ever-more-capable AI systems. The race Hinton warned about hasn't slowed.

His resignation stands as a historical marker: the moment one of AI's creators publicly said, "We need to pause and think about this seriously."

May 1, 2023, was the day the Godfather of AI became its most prominent skeptic. Whether his warnings are heeded or ignored will shape the future of the technology he helped create.

As Hinton put it: "I console myself with the normal excuse: If I hadn't done it, somebody else would have."

But unlike most people who make that excuse, he's now dedicating his final years to ensuring "somebody else" includes people taking the risks seriously.

Tags

#geoffrey-hinton #ai-safety #whistleblower #risks
