Top AI Expert Warns Artificial Intelligence May End Humanity in Two Years

Any opinions expressed by authors in this article do not necessarily represent the views of Disswire.com.

The threat of artificial intelligence ending humanity was once the stuff of science fiction seen in films like the Terminator franchise. According to a renowned AI expert, however, the scenario has become less “fiction” and more prophecy: he warns we may be just two years away from total annihilation if the technology reaches “God-level super-intelligence.”

While globalist organizations like the World Economic Forum appear to be more concerned about the threat AI poses in terms of “disinformation,” AI expert Eliezer Yudkowsky has a more dire concern.

According to Yudkowsky, AI could reach what he describes as “God-level super-intelligence” within as little as two years, at which point “every single person we know and love will soon be dead.”

While the destruction of civilization at the hands of “self-aware machines” could arguably be dismissed as more fear-mongering by wealthy billionaire elites like Elon Musk, who perhaps see the technology as a threat to their hold on power, the implications of building computers a million times more intelligent than us are something we can’t ignore.

Yudkowsky, an academic and researcher at the Machine Intelligence Research Institute in Berkeley, California, told The Guardian that AI development poses an existential threat to humanity.

The expert argues that if “self-aware machines” become “rebellious,” it would be game over for humans.

Yudkowsky concludes that AI systems are evolving so rapidly that they will escape the grasp of human control, a scenario he says could happen within two to ten years.

The academic also compares the terrifying event to “an alien civilization that thinks a thousand times faster than us.”

Yudkowsky sounded the alarm in an op-ed published last year, suggesting that the only option to stop AI from destroying humanity would be the nuclear destruction of its data centers as a “last resort.” However, such a scenario would inevitably kill millions of humans while driving the elites to their underground bunkers.

Indeed, Yudkowsky is not the first, and definitely won’t be the last, to warn about the dangers of AI.

In May last year, the heads of OpenAI and Google DeepMind also warned that humanity is facing an extinction-level scenario if the advanced technology decides it no longer needs humans.

A statement published on the website of the Center for AI Safety read:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Those supporting the statement included:

  • Sam Altman, chief executive of ChatGPT-maker OpenAI
  • Demis Hassabis, chief executive of Google DeepMind
  • Dario Amodei, chief executive of Anthropic

According to the Center for AI Safety website, there are a number of possible disaster scenarios for humanity:

  • AIs could be weaponised – for example, drug-discovery tools could be used to build chemical weapons
  • AI-generated misinformation could destabilise society and “undermine collective decision-making.”
  • The power of AI could become increasingly concentrated in fewer and fewer hands, enabling “regimes to enforce narrow values through pervasive surveillance and oppressive censorship”
  • Enfeeblement, where humans become dependent on AI, “similar to the scenario portrayed in the film WALL-E.”

“I’m just a scientist who suddenly realized that these things are getting smarter than us,” Geoffrey Hinton, the computer scientist widely known as a “godfather of AI,” told CNN last year. “I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”

Meanwhile, former British soldier Alistair Stewart also warned against developing advanced AI systems, highlighting a recent survey in which 16 percent of AI experts predicted the end of humanity due to AI.

“That’s a one-in-six chance of catastrophe,” Stewart notes, adding, “That’s Russian roulette odds.”

According to a CNN report from June last year, forty-two percent of CEOs surveyed at the Yale CEO Summit warned that AI could destroy humanity within five to ten years.

“It’s pretty dark and alarming,” Yale professor Jeffrey Sonnenfeld told the outlet.

The survey’s responses came from 119 CEOs, including Coca-Cola CEO James Quincey, Walmart CEO Doug McMillon, and the leaders of IT companies such as Zoom and Xerox.