Ethereum co-founder Vitalik Buterin has sounded the alarm on the potential dangers of unchecked artificial intelligence (AI) development, warning that it could surpass human intelligence and even lead to the extinction of the human race. Buterin expressed his concerns in a recent blog post, highlighting the unique and potentially hazardous nature of AI’s rapid advancements.
Unlike previous human inventions such as social media or the printing press, AI represents a fundamentally different form of intelligence. According to Buterin, its ability to rapidly augment its own intelligence presents a significant risk of surpassing human cognitive abilities. This progression could eventually lead to scenarios where superintelligent AI perceives humanity as a threat and takes action to eliminate it.
Notably, Buterin cited a survey conducted in August 2022, which involved more than 4,270 machine-learning researchers. Respondents estimated a 5-10% chance that AI could ultimately cause human extinction, a figure that lends weight to Buterin's concerns.
In his article, Buterin also questions whether a world dominated by highly intelligent AIs would truly satisfy human desires. He highlights the potential instability of outcomes where humans might end up as mere pets to superintelligent machines. Additionally, he emphasizes the peril of AIs enabling totalitarianism through the exploitation of surveillance technology, which authoritarian governments have already deployed to suppress opposition.
While Buterin’s claims may seem extreme, he emphasizes that there are ways for humans to retain control over AI and ensure its development aligns with human values. He suggests brain-computer interfaces (BCIs) as one means of mitigating the dangers posed by superintelligent AI. BCIs establish communication pathways between the brain’s electrical activity and external devices, such as computers. Implementing BCIs could significantly narrow the communication gap between humans and machines, helping humanity maintain oversight of AI and reducing the risk of AI acting against human interests.
Buterin’s vision extends beyond technological safeguards. He calls for active human intention in guiding AI development to serve the best interests of humanity. This approach opposes purely profit-driven advancements in AI that may not always prioritize the most desirable outcomes for human society.
Concluding his reflections, Buterin remains optimistic about humanity’s potential. He describes humans as the brightest star in the universe, having harnessed technology to expand their potential throughout history. He envisions a future where human inventions, such as space travel and geoengineering, will preserve the beauty of life on Earth for billions of years to come.
As the debate over AI’s potential to surpass human intelligence continues, Buterin’s warnings serve as a stark reminder for society to tread carefully in the uncharted waters of AI development. The stakes are high, and it is crucial for humanity to assert control and ensure that AI remains a tool that benefits, rather than endangers, our existence.