Pope Francis called for an international treaty on the regulation of AI in the Vatican’s annual World Day of Peace Message on Dec. 14.
The pope stated in his message:
“The global scale of artificial intelligence makes it clear that … international organizations can play a decisive role in reaching multilateral agreements … I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms.”
He noted that this regulation should not merely restrict harmful practices involving AI but should also encourage best practices and stimulate new developments.
Pope Francis added that new rules and guidance around artificial intelligence should take ethics into account along with the needs of all stakeholders, including the “poor, the powerless, and others who often go unheard.”
Elsewhere in the message, the pope described a balance “between promise and risk” and called science and technology “brilliant products of [human intelligence’s] creative potential.” He added that AI specifically offers freedom from drudgery (i.e., menial or unfulfilling work), greater efficiency in manufacturing, improvements to transportation and markets, and better data management.
However, the pontiff also spoke to the limitations of AI. He noted that there is no single definition of AI and asserted that all forms of AI are “fragmentary,” capable of performing only certain human intelligence functions in limited contexts. He also emphasized that AI models are known to hallucinate, which can reduce accuracy, and to carry biases.
Pope Francis also acknowledged specific areas of concern, such as the use of AI and automated technologies in surveillance and social credit systems. He additionally addressed concerns about AI’s use in war and weapons development and in education and communications, as well as the possibility of job losses resulting from AI.
International AI regulations are in early stages
The pope’s call for AI regulation comes just days after EU lawmakers agreed to a law restricting harmful AI practices. That law will ban manipulative applications of AI and AI-powered facial recognition in public places, among other things.
Individual countries have also taken steps to regulate artificial intelligence, some of which include a partial focus on international regulation and collaboration.
The U.S. issued an executive order on AI in late October, which in part addresses national security and the creation of international frameworks. The UK, meanwhile, hosted an international AI Safety Summit at Bletchley Park in early November and described international efforts in its AI policy.