Ethereum co-founder Vitalik Buterin has raised concerns regarding superintelligent AI. The creator of the second-largest cryptocurrency by market capitalization argues that the uncontrolled development of artificial intelligence (AI) models could cause irreversible harm to humanity.
In a blog post published on Jan. 5, 2025, Buterin argued that there may be only half a decade to go before the world experiences superintelligent AI systems. He also champions 'd/acc', or defensive acceleration, as the approach to building these systems.
Buterin’s Defensive Acceleration Plan
Building on his opinions in a November 2023 post on artificial general intelligence (AGI) and superintelligence, Buterin lays out clear strategies to prevent the adverse effects of uncontrolled AI. He views superintelligent AI (AI beyond human intelligence) as a double-edged sword, with the potential to affect humanity in both good and bad ways.
Because of this, he believes that we cannot focus only on accelerating the good side of artificial intelligence; we must also have a way to slow down the bad. “…if we don’t want the world to be destroyed or otherwise fall into an irreversible trap, we can’t just accelerate the good, we also have to slow down the bad,” Buterin stated.
With his defensive acceleration (d/acc) idea, Buterin proposes viable strategies to reduce the adverse effects of superintelligence. These include using liability rules to hold users, deployers, and developers responsible for harm caused by AI models, and fitting “soft pause” buttons on industrial-scale AI hardware.
Liability Rules As a Way to Reduce Misuse of AI
According to Buterin, the introduction of liability rules could be a major way of reducing the dangers associated with AI. Putting liability on users, for instance, would push them to use artificial intelligence for its intended purpose. Buterin wrote, “liability on users creates a strong pressure to do AI in what I consider the right way.”
Deployers and developers, on the other hand, have direct influence over how secure an AI model is. Because of this, making them liable for any harm caused by AI systems could lead to the development and deployment of safer artificial intelligence.
“Soft Pause” Buttons in Fighting Uncontrolled AI
In scenarios where liability rules are not effective enough, Buterin proposes a “soft pause” button for industrial-scale AI hardware as a more ‘muscular’ way to contain AI. This mechanism would slow the development of artificial intelligence models by reducing global computing power by 90–99%.
Implementing a “soft pause” would buy humankind more time to prepare for the risks posed by superintelligent AI. Ways to implement this mechanism include cryptographic trickery and regulatory frameworks.
Under the cryptographic approach, industrial-scale AI hardware would be fitted with a trusted chip. The chip would require 3/3 signatures once a week from international bodies (including at least one non-military body) for the hardware to keep running.
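The following Python sketch illustrates how such a weekly 3-of-3 sign-off could gate hardware operation. It is only a minimal illustration of the idea as described above: the body names, message format, weekly epoch scheme, and the use of Ed25519 signatures via the `cryptography` package are assumptions for the example, not details from Buterin’s proposal.

```python
# Hypothetical sketch: a trusted chip keeps running only if ALL three
# international bodies have signed off on the current week's epoch.
from datetime import date

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def current_epoch() -> bytes:
    """Identify the signing period as an ISO year-week string, e.g. b'2025-W01'."""
    year, week, _ = date.today().isocalendar()
    return f"{year}-W{week:02d}".encode()


def hardware_may_run(
    signer_keys: dict[str, ed25519.Ed25519PublicKey],
    signatures: dict[str, bytes],
) -> bool:
    """Return True only if every body (3/3) has signed this week's epoch."""
    message = b"ALLOW|" + current_epoch()
    for body, public_key in signer_keys.items():
        signature = signatures.get(body)
        if signature is None:
            return False  # missing sign-off: the chip refuses to run
        try:
            public_key.verify(signature, message)
        except InvalidSignature:
            return False  # invalid sign-off: the chip refuses to run
    return True


if __name__ == "__main__":
    # Demo with locally generated keys standing in for the three bodies
    # (one of which, per the proposal, would be non-military).
    bodies = ["body_a", "body_b", "body_c_non_military"]
    private_keys = {b: ed25519.Ed25519PrivateKey.generate() for b in bodies}
    public_keys = {b: k.public_key() for b, k in private_keys.items()}

    message = b"ALLOW|" + current_epoch()
    signatures = {b: k.sign(message) for b, k in private_keys.items()}
    print(hardware_may_run(public_keys, signatures))   # True: all 3 signed

    signatures.pop("body_b")                           # any missing signature
    print(hardware_may_run(public_keys, signatures))   # False: hardware idles
```

Because the check requires all three signatures rather than a majority, any single body withholding its weekly sign-off would be enough to pause the hardware.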
The regulatory framework (which is already under consideration), on the other hand, would allow for hardware export controls, thereby slowing the development of ‘destructive’ AI systems. However, Buterin notes that this strategy could be risky, as it could be used exploitatively against some countries.
Gaps in Buterin’s d/acc Strategies
While Buterin’s proposed strategies could help advance defensive acceleration, they have gaps. “Both of these strategies (liability and the hardware pause button) have holes in them, and it’s clear that they are only temporary stopgaps,” admitted Buterin.
What this means is that the strategies are not permanent solutions, but mechanisms that could buy humanity more time to address the real problems posed by superintelligent AI. In Buterin’s view, this sets d/acc apart from effective accelerationism (e/acc), which focuses on the adoption of AI without considering the dangers it poses to humanity.