Science

AI Development Could Determine the Fate of the Universe, Expert Warns

Published February 12, 2024

Artificial intelligence (AI) is progressing at a swift pace, sometimes outstripping our ability to fully grasp its implications or manage its advancement. In a stark cautionary message, AI expert Dr. Roman Yampolskiy argues that we lack evidence that this evolving technology can be controlled, and that we should therefore reconsider our approach to its development.

AI Control: An Unsolvable Problem?

Through a comprehensive analysis of AI software, Yampolskiy argues that advances in AI will not always benefit society. He warns that the risk of AI causing an existential crisis is 'almost guaranteed.' 'The future could bring about abundance or extinction, and the fate of the entire universe may be at stake,' he says, emphasizing the severity of the risks involved.

Understanding AI Safety Issues

Dr. Yampolskiy asserts that our ability to create intelligent software has outpaced our capacity to control it, and that no advanced AI system has ever been fully controllable. He challenges the prevailing assumption that the AI control problem can or will be solved. The reality, he says, is that as AI grows more intelligent, countless safety concerns arise, too numerous and unpredictable for us to foresee or prevent.

In addition, he points to a disconnect between AI decision-making and human comprehension: an AI system may be unable to explain its actions, or its explanations may be beyond our understanding. This gap can lead to serious consequences if we become too reliant on AI without demanding accountability.

Navigating the Future of AI

As AI systems gain autonomy, human control diminishes, and with it safety, cautions Yampolskiy. In his new book, AI: Unexplainable, Unpredictable, Uncontrollable, he argues that a superintelligent AI is not merely potentially rebellious; it is inherently uncontrollable.

To mitigate these risks, Dr. Yampolskiy suggests that people may need to accept less capable AI systems equipped with simple 'undo' features. He frames the choice facing humanity starkly: embrace the comfort of being looked after by an AI guardian and lose our autonomy, or retain our freedom and control by forgoing a protective overseer.
