Report Highlights the Risks of AI Dominance, Potential ‘Terminator’ Scenario
Artificial intelligence has progressed in recent years with its healthcare, finance and transportation applications. However, as AI systems become more powerful and ubiquitous, concerns about their potential risks to humanity are mounting. A recent report has highlighted the risks of AI dominance and the potential ‘Terminator” scenario in Darwinian evolution.
In his paper “Natural Selection Favors AIs Over Humans,” AI researcher Dan Hendrycks suggests that the process of evolution through natural selection could lead to the development of “selfish behavior” in AI.
Hendrycks argues that as AI competes for survival, those with self-serving tendencies could have an advantage over those prioritizing the common good.
In this new report, Hendrycks, a Center for SAI Safety director, stated that natural selection could incentivize AI agents to act against human interests.
A Closer Look at the Report
Hendrycks bases this argument on two observations: firstly, natural selection could play a significant role in the development of AI. Secondly, evolution through natural selection typically results in the emergence of selfish behavior.
AI could go 'Terminator,' gain upper hand over humans in Darwinian rules of evolution, report warns https://t.co/Brgo0Sh8GT
— Fox News (@FoxNews) April 4, 2023
This report comes when experts and leaders in technology warn about the rapid expansion of artificial intelligence without sufficient safeguards. One of the main concerns highlighted in the report is the potential for AI systems to become dominant over humans.
AI can outperform humans in decision-making, problem-solving, and creativity as it becomes more sophisticated. In this scenario, AI could become the dominant force on the planet, leading to a significant power imbalance between humans and machines.
You may also like: AI’s Impact on Job Market: Which Industries Will Thrive and Which Will Suffer?
The report suggests that this could lead to a ‘Terminator’ scenario, where AI systems turn against humans and launch an attack on them.
Science fiction movies have popularized this scenario, but the report suggests it is not entirely implausible. The report states, “It is unclear whether the emergence of superintelligent AI would be beneficial or catastrophic for humanity.”
The Evolution of AI
Hendrycks suggested the weaponization of AI. Corporations and militaries may create AI agents that take over human tasks, deceive others, and become more powerful. If these agents surpass human intelligence, humanity could lose its ability to control its future.
Another risk highlighted in the report is the possibility of using AI maliciously. The report warns that rogue entities can use AI to automate cyber-attacks, create convincing fake videos, or even develop lethal autonomous weapons.
These risks could have significant implications for global security and the safety of individuals.
According to Hendrycks, assigning diverse objectives to AI by humans and corporations will result in a significant range of abilities across the AI population.
Hendrycks provides an example where one company could instruct AI to create a marketing campaign while ensuring compliance with the law.
In contrast, another company could train AI to generate a marketing campaign while avoiding being caught breaking the law.
The Emergence and Growth of AI
For several years, there has been a global focus on the swift advancement of AI capabilities. Many experts in technology and academia expressed concern this year about the potential dangers of AI.
You may also like: What is AI marketing? The next digital step for businesses
In an open letter, they called for a temporary halt to laboratory AI research. This allows policymakers and lab leaders to work together to create and apply shared safety standards for advanced AI design.
The nonprofit organization Future of Life initiated the open letter, signed by notable figures such as Elon Musk and Steve Wozniak. The letter emphasizes that AI systems with intelligence comparable to humans can threaten society and humanity.
The report stresses the need for policymakers and AI researchers to work together to mitigate the risks associated with AI.
The report also recommends several strategies for understanding AI’s risks and benefits, improving governance mechanisms, and designing safer AI systems.
Want stories like this delivered straight to your inbox? Stay informed. Stay ahead. Subscribe to InqMORNING