Generative artificial intelligence raises major regulatory issues, particularly with regard to copyright and model transparency. Faced with these challenges, the world's major powers are adopting very different approaches.
The European Union: a pioneer in regulation
The European Union is the first jurisdiction to have established a comprehensive legal framework for generative AI. In March 2024, it adopted the AI Act, a regulation that classifies AI systems according to their level of risk. The aim is to protect users while promoting innovation.
Providers of models such as ChatGPT and Midjourney must now comply with strict transparency rules: they are required to document how their models work and the data used to train them. The AI Act, which entered into force in August 2024, provides for gradual implementation over several years.
When it comes to copyright, the EU insists on respect for the works used to train these models, the idea being to ensure fair remuneration for creators. Companies developing these technologies are required to publish a summary of the training data so that rights holders can better monitor how their works are used.
The United States: regulation still unclear
In the United States, AI regulation remains less structured. In October 2023, a presidential executive order set out guidelines on AI, emphasizing security, data protection, and respect for civil rights. However, an executive order does not carry the force of legislation, leaving much of the sector to regulate itself.
On copyright, US courts have upheld a traditional approach: only works created by humans can be protected. Images or texts generated entirely by AI therefore do not enjoy the same protection. This position has sparked debate, particularly regarding the use of existing works to train these models.
China: strict and centralized regulation
China, meanwhile, has opted for strict regulation. Since August 2023, specific regulations have required generative AI providers to uphold "core socialist values" and to avoid any content that could undermine national security.
Chinese regulations also require transparency regarding the algorithms and datasets used. In terms of copyright, companies must obtain authorization before using protected works to train their models. This approach aims to protect local creators while strengthening state control over AI.
Three visions, three strategies
Faced with the rapid rise of generative AI, Europe, the United States, and China are adopting very different strategies:
- Europe is focusing on detailed regulation and strong protection for rights holders.
- The United States favors a more flexible approach, with recommendations rather than obligations.
- China, for its part, imposes strict restrictions to maintain total control over these technologies.
The African position: the African Union
Aware of the challenges associated with generative AI, Africa is adopting measures to regulate its development while protecting copyright and promoting the responsible use of these technologies.
Continental initiatives: towards a unified strategy
In June 2024, African ministers of Information and Communication Technology approved a continental strategy on AI. This initiative aims to harmonize AI policies across the continent, with a focus on training young people, supporting innovators, and creating an ethical framework for AI development. The goal is to position Africa as a major player in the global AI landscape.
Copyright protection: a major concern
Faced with the rise of generative AI applications, African collective copyright management organizations have voiced their concerns. In September 2024, they published a joint statement calling for laws adapted to technological developments and for an ethical charter on the use of AI. The statement stresses the protection of literary and artistic works and fair remuneration for creators in a constantly changing digital environment.
Specific case: South Africa in search of regulation
In October 2024, South Africa unveiled a roadmap for regulating the use of AI. The initiative follows the entry into force of the European AI Act in August of the same year. The South African government recognizes the importance of realistic regulation that exploits the opportunities offered by AI while protecting citizens' rights. Public consultations have been launched to develop inclusive policies adapted to the local context.
Challenges and prospects
Despite these advances, Africa faces several challenges, particularly in terms of digital infrastructure, specialized training, and data protection. Collaboration between member states, the private sector, and civil society is essential to create an ecosystem conducive to innovation while ensuring that creators' rights are respected. The recently launched African Observatory for Responsible AI aims to position the continent as an influential voice in global debates on AI and to promote evidence-based policies.
