The United States, the European Union, and the United Kingdom are expected on Thursday to sign a groundbreaking treaty emphasizing human rights and democratic values in regulating both public and private AI systems.
Called the Council of Europe Framework Convention on Artificial Intelligence, it is the first legally binding international treaty on AI. Its main goal is to hold countries accountable for any harm or discrimination caused by AI systems, especially those that violate citizens' rights to equality and privacy.
The convention will offer legal recourse for victims of AI-related violations and ensure that the outputs of AI models are consistent with basic human rights principles. Peter Kyle, the U.K. minister for science, innovation, and technology, referred to the treaty as a significant global step forward.
According to Kyle, nations have demonstrated a global commitment to addressing AI challenges through this decision. The convention drew contributions from more than 50 countries, including Canada, Japan, Israel, and Australia.
Over the summer, the European Union (EU) became the first jurisdiction to implement sweeping rules governing the development and deployment of AI models, focused largely on advanced models that require large amounts of computing power.
The European Commission first proposed the EU regulatory framework for AI, saying that "AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation."
Some developers argued, however, that the EU's stringent regulations could stifle innovation. Companies like Meta halted the rollout of products such as Llama2 in the EU following these restrictions. Several tech firms also wrote letters to EU leaders requesting more time to comply with the new guidelines.
The United States, on the other hand, has yet to implement a nationwide framework for AI regulation, although the current administration has already started creating task forces and committees to ensure AI safety. California has taken the lead in drafting AI rules, with the state assembly passing two bills to date: one addressing the creation of unauthorized AI-generated replicas of deceased persons, and the other requiring safety testing for advanced AI models, including provisions for a "kill switch."
These regulatory actions in California are pivotal since the state is home to AI developers and houses tech companies like Apple, Meta, Google, and OpenAI.
