OpenAI Inc., the developer behind the generative artificial intelligence (AI) chatbot ChatGPT, plans to invest $100 billion to build the world’s largest data center in partnership with Microsoft Corp. The envisioned center will be equipped with a supercomputer to be used in developing artificial general intelligence (AGI). The investment alone is expected to be 100 times the cost of today’s largest data centers.
According to a Friday (local time) report by the information technology (IT)-focused business publication The Information, the management of both companies is working on a project named Stargate to build a data center equipped with specialized AI chips to power OpenAI services.
The core of the envisioned data center is a supercomputer that will house millions of specially designed server chips to power OpenAI’s AI models. Given that Microsoft and OpenAI are reportedly planning a data center with an initial cost of $100 billion, they could make significant further investments to secure sufficient AI computing capacity.
“This is an essential step in building AGI,” Chris Sharp, chief technology officer (CTO) at data center operator Digital Realty Trust Inc., said. “(The investment) may seem unimaginably large by current standards, but it won’t appear so once the supercomputer is actually built.”
The Stargate project is part of a five-stage AI infrastructure construction plan. OpenAI and Microsoft are currently midway through stage three and are aiming to unveil the OpenAI supercomputer at stage four in 2026, at a cost of $10 billion. A significant portion of the costs in stages four and five will be related to the purchase of AI chips, according to the publication.
OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella at OpenAI’s DevDay in San Francisco on Nov. 6, 2023. [Photo by Yonhap]
Building a $100 billion data center will require millions of semiconductors. Nvidia Corp.’s AI chips, such as graphics processing units (GPUs), are likely candidates, but Microsoft could also use its own AI chips. There could also be a surge in demand for high-bandwidth memory (HBM) from SK hynix Inc. and Samsung Electronics Co., which is used alongside GPUs, while demand for Intel or Arm-based CPUs for AI servers would also increase significantly.
“We are always planning for the next generation of infrastructure innovation necessary to expand AI capabilities,” a Microsoft spokesperson said in response to a Reuters request for comment.
OpenAI also unveiled Voice Engine, an AI tool that learns human speech and generates imitation voices, on Friday. In preliminary experimental results shared on its blog, OpenAI showed that the tool can generate a voice closely resembling the original speaker’s from a 15-second voice sample.
However, given the potential risks of the tool, the company said it will carefully consider whether to release it to the public in full.