[Image source: Midjourney]
European Union (EU) lawmakers on Friday enacted the world’s first comprehensive law regulating artificial intelligence (AI), as AI-driven major tech companies in the United States multiply rapidly.
The AI Act “aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field,” the European Parliament said in a statement. “The rules establish obligations for AI based on its potential risks and level of impact.”
The AI law focuses on prohibiting AI that could threaten citizens’ rights and democracy, with exceptions for law enforcement agencies, setting guardrails for general purpose AI (GPAI), and supporting innovation for small and mid-size enterprises.
The guardrails for GPAI in particular are expected to act as significant constraints on big tech, imposing transparency obligations on providers.
“These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training,” the European Parliament said.
For high-impact GPAI models with systemic risks, the EU demanded even stronger compliance measures, requiring adherence to model evaluation, system evaluation, risk mitigation measures and security testing, on top of reporting serious incidents to the EU Commission, ensuring cybersecurity, and reporting on energy efficiency.
For its part, the EU promised extensive support for small AI companies operating in Europe.
It “wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain,” and “to this end, the agreement promotes so-called regulatory sandboxes and real-world-testing, established by national authorities to develop and train innovative AI before placement on the market.”
In the meantime, the law will affect big tech companies, which will likely face significant challenges to their businesses.
Companies like Microsoft Corp., which is developing its Copilot service by integrating OpenAI’s GPT, and Google LLC, which is enhancing its AI-based services with the release of Gemini, may need to create separate services tailored to the EU, incurring additional development costs.
The United States and the United Kingdom are investigating Microsoft and OpenAI for potential antitrust violations.
OpenAI, the developer of leading AI models including ChatGPT, had been outside the regulatory scope until now, but the recent ousting and return of Sam Altman as CEO has shifted perceptions, placing the company under regulatory scrutiny.
The European and U.S. governments have already announced investigations into potential antitrust law violations related to Microsoft’s investment in OpenAI.
U.S. President Joe Biden signed an executive order in October 2023 that addresses AI training, reporting requirements to the federal government, and guidelines for watermarking AI-generated content.
Korea is viewed as lagging behind the United States but ahead of the EU technologically, and it aims to establish only the minimal regulations necessary without impeding growth. However, legislation to promote AI industries and ensure safety has faced challenges at the National Assembly.
The content industry demands immediate protection of intellectual property rights, while the AI industry fears losing growth momentum.
“Korea should promote AI industry growth based on corporate self-regulation and address arising issues for improvement,” Naver Cloud head of AI Innovation Ha Jung-woo said.
Experts noted that Korea should still prioritize industry promotion although regulations for high-risk areas need to be considered.
The United Nations is actively working to come up with a broad framework for regulating AI by August 2024, according to Ko Hak-soo, chairman of the Personal Information Protection Commission and a member of the United Nations high-level advisory body on AI.
Earlier this month, Ko visited the UN headquarters in New York to attend the first in-person meeting of the UN AI body, which was established at the end of October 2023 at the proposal of UN Secretary-General António Guterres.
Ko is the only member from Korea among the 39 AI experts.
“There are recent AI regulation documents from major G7 countries, such as the Hiroshima AI Process and the White House AI Executive Order,” he said, stating that “red teaming” and “watermark” were notable regulatory proposals in both documents.
Red teaming involves AI service development companies verifying and reporting issues on their own before launch. A watermark is a mark embedded by an AI system indicating that an image was created by AI, allowing the content to be visually or algorithmically filtered.
“The term watermark has been mentioned but detailed discussions have yet to take place,” Ko said.