
Korea officially launches AI Safety Institute

매일경제

On the 27th, attendees pose for a commemorative photo at the opening ceremony of the AI Safety Institute held at the Global R&D Center in Seongnam, Gyeonggi Province. (Ministry of Science and ICT)



South Korea launched an AI Safety Institute on Wednesday to ensure the safe use of artificial intelligence (AI) technology. It is the sixth such national research institute in the world, joining counterparts in countries including the United Kingdom and the United States.

The initiative to research safe AI technologies emerged as a key agenda item at the AI Seoul Summit in May, which brought together representatives from 10 countries and elevated AI safety to a prominent global topic.

At the opening ceremony held at the Global R&D Center in Seongnam, Gyeonggi Province, Kim Myung-joo, the institute’s inaugural chief, emphasized that the institute would not function as a regulatory body but as a collaborative organization.

Its goal is to help domestic companies minimize risks that hinder their global competitiveness, he added.

The AI Safety Institute is dedicated to addressing AI-related risks, such as technical limitations, misuse, and loss of control over AI systems. Its primary objective is to preemptively identify and minimize these risks while enhancing the reliability of AI technologies.

The government aims to reduce the side effects of AI technologies, like deepfake misuse, while providing AI guidelines to the private sector.

This effort is part of a broader strategy to establish Korea as one of the top three global AI leaders. AI safety standards currently differ from country to country, creating uncertainty for businesses during R&D, experts noted.

The European Union’s General Data Protection Regulation (GDPR), whose strict data protection rules constrain how companies can develop and deploy AI, is a prime example of these challenges for global tech companies.

“There is a need for Korean companies to address regulatory and certification procedures from the R&D stage to overcome similar barriers,” Kim said. The global competition to establish AI safety norms is intensifying, with major tech companies like Microsoft and Google working closely with their respective national AI safety institutes to mitigate investment uncertainties.

The U.S. Department of Commerce launched the AI Safety Institute Consortium (AISIC) in February 2024, involving over 200 organizations, including Google, OpenAI, and Meta, as well as universities and financial institutions. OpenAI and Anthropic also signed an MoU with the U.S. AI Safety Institute in August of that year, allowing pre-evaluation of their AI models and receiving feedback on the findings.

The initiative reflects a strategic effort to evaluate potential risks of AI technologies and establish global AI standards under U.S. leadership.

With the launch of Korea’s AI Safety Institute, Korean companies are actively working to establish a secure research environment for AI technologies.

During the inauguration ceremony, a Korea AI Safety Consortium agreement was also signed, involving 24 prominent domestic industry-academia-research organizations. This drew attention to the collective efforts of Korean ICT companies, which have been conducting consistent research on AI safety to address investment uncertainties.

Naver established its Future AI Center in January to oversee and implement AI ethics and safety policies and to foster collaboration on safety research. LG AI Research Institute formed an AI Ethics Committee and an AI Ethics Office to create a structured framework for ethical AI practices, while SK Telecom, along with other telecommunications companies, has established dedicated teams to manage potential risks associated with AI advancements.