Here’s How the EU Will Regulate Advanced AI Models Like ChatGPT
The European Union (EU) has taken a significant step towards regulating advanced AI models such as OpenAI’s GPT-4 with the release of a new document outlining the proposed rules. According to the document, models deemed to pose a systemic risk would be subject to additional obligations, with the determining factor being the amount of computing power used to train the model. The EU’s threshold is cumulative training compute of more than 10 trillion trillion (10^25) floating-point operations: a total for the entire training run, not a per-second rate.
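To make the compute threshold concrete, a widely used rule of thumb estimates the training cost of a dense transformer as roughly 6 × parameters × training tokens floating-point operations. The sketch below applies that approximation to the EU's 10^25-FLOP threshold; the model sizes used are hypothetical illustrations, not disclosed figures for any real system.

```python
# Illustrative sketch: comparing an estimated training-compute budget
# against the EU's 10^25-FLOP systemic-risk threshold, using the common
# approximation total_flops ≈ 6 * parameters * training_tokens.
# All model figures below are hypothetical, not disclosed values.

THRESHOLD_FLOPS = 10 ** 25  # EU systemic-risk compute threshold

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * parameters * training_tokens

def exceeds_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training run crosses the EU threshold."""
    return estimate_training_flops(parameters, training_tokens) > THRESHOLD_FLOPS

# A hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
flops = estimate_training_flops(1e12, 10e12)
print(f"{flops:.1e} FLOPs")                 # → 6.0e+25 FLOPs
print(exceeds_threshold(1e12, 10e12))       # → True
```

Under this approximation, any training run whose parameter-token product exceeds roughly 1.7 × 10^24 would cross the line, which shows why only the very largest current models qualify automatically.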
Experts suggest that GPT-4 is currently the only model that automatically meets this threshold. However, the document states that the EU’s executive arm has the authority to designate other models as posing a systemic risk based on various criteria, including the size of the training data set, the presence of at least 10,000 registered business users in the EU, the number of registered end-users, and other potential metrics.
To ensure accountability and compliance, providers of highly capable AI models are expected to sign a code of conduct while the European Commission develops more comprehensive and harmonized controls. Providers that decline to sign will need to demonstrate to the Commission that they are adhering to the provisions of the AI Act. Notably, the exemption for open-source models does not extend to models classified as posing a systemic risk.
The proposed regulations aim to strike a balance between fostering innovation and ensuring the responsible use of AI technologies. By addressing potential risks associated with advanced AI models, the EU seeks to protect the privacy and rights of its citizens while promoting the beneficial aspects of AI.
The implementation of these regulations reflects a growing global concern regarding the ethical and transparent development of AI technologies. The EU’s groundbreaking efforts in regulation will likely influence similar initiatives around the world, ultimately contributing to the development of a cohesive and globally accepted framework for AI governance.
As the EU continues to navigate the complexities surrounding AI regulation, stakeholders from various sectors are closely monitoring the process. Industry leaders, ethicists, and policymakers are actively engaged in discussions to provide valuable insights into the development of effective and fair AI regulations.
Given the pace of technological advancement, the EU’s efforts to regulate advanced AI models like ChatGPT are both timely and crucial. As AI continues to evolve, the need for responsible and accountable development and deployment becomes increasingly apparent, and the proposed regulations could shape the future of AI governance in ways that align with societal values and safeguard the interests of individuals and businesses alike.
In the coming months, stakeholders will await further details on implementation. As the EU develops more comprehensive guidelines, experts and interested parties will continue to engage in dialogue aimed at a balanced and effective approach to AI regulation.
Europe’s groundbreaking pact to regulate AI sends a clear signal that the EU is committed to addressing the potential risks and challenges associated with advanced AI models. By taking this first step, the EU strengthens its position as a global leader in AI governance, inspiring other nations to prioritize the development and implementation of responsible and transparent AI regulations.
Read more: Europe Puts Stake in the Ground With First Pact to Regulate AI