The United States, the European Union, and the United Kingdom are expected on Thursday to sign a groundbreaking treaty emphasizing human rights and democratic values in regulating both public and private AI systems.
The agreement, the Council of Europe Framework Convention on Artificial Intelligence, is the first legally binding international treaty on artificial intelligence (AI). Its main goal is to hold countries accountable for harm or discrimination caused by AI systems, especially those that violate citizens’ rights to equality and privacy.
The convention will offer legal recourse to victims of AI-related violations and ensure that the outputs of AI models are consistent with basic human rights principles. Peter Kyle, the U.K. minister for science, innovation, and technology, called the treaty a significant global step forward.
According to Kyle, the decision demonstrates a shared global commitment to addressing the challenges posed by AI. The convention was drafted with contributions from more than 50 countries, including Canada, Japan, Israel, and Australia.
Earlier this summer, the European Union (EU) became the first jurisdiction to implement sweeping rules governing the development and deployment of AI models, with the strictest obligations falling on high-capability models trained using large amounts of computing power.
The European Commission first proposed the EU regulatory framework for AI, saying that “AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.”
Some developers, however, argued that the EU’s stringent regulations could stifle innovation. Meta, for example, has withheld the rollout of certain products, such as its multimodal Llama models, in the EU following these restrictions, and several tech firms wrote to EU leaders requesting more time to comply with the new guidelines.
The United States, by contrast, has yet to implement a nationwide framework for AI regulation, although the current administration has already created task forces and committees focused on AI safety. California has taken the lead in drafting AI rules, with the state assembly passing two bills to date: one restricting unauthorized AI-generated replicas of deceased persons, and another requiring safety testing for advanced AI models, including a “kill switch” provision.
These regulatory actions in California are pivotal because the state is home to leading AI developers and houses tech companies such as Apple, Meta, Google, and OpenAI.
Gairika holds positions in BTC. This article is provided for informational purposes only and should not be construed as financial advice. The Shib Magazine and The Shib Daily are the official media and publications of the Shiba Inu cryptocurrency project. Readers are encouraged to conduct their own research and consult with a qualified financial adviser before making any investment decisions.