Google develops next-generation AI chip with enhanced performance

Maeil Business Newspaper
Google’s 7th-generation AI accelerator chip, ‘Ironwood’. (Google)


Google LLC unveiled a new artificial intelligence (AI) model and AI semiconductor on Wednesday, a move aimed at outpacing competitors and reducing its reliance on Nvidia Corp. by strengthening its own chips.

Google Cloud, the company’s cloud computing division, held its annual conference, Next 2025, in Las Vegas that day.

At the event, the company introduced Gemini 2.5 Flash, a more accessible version of its latest large language model (LLM), Gemini 2.5, which was unveiled in March 2025.

According to Google, Gemini 2.5 Flash automatically adjusts processing time based on the complexity of the question, enabling quicker responses for simpler queries. This allows for fast service at a lower cost.
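Google has not published how this adaptive behavior works internally; the toy sketch below merely illustrates the idea of scaling a model's reasoning budget with query complexity, using an invented heuristic and invented function names.

```python
# Illustrative only: a toy "adaptive thinking budget" heuristic.
# This is NOT Gemini 2.5 Flash's actual mechanism, just the general idea
# of spending less compute on simple queries and more on complex ones.

def estimate_complexity(query: str) -> int:
    """Crude proxy: longer, multi-clause, analytical questions score higher."""
    score = len(query.split())
    score += 5 * query.count("?")  # multiple questions in one prompt
    score += 10 * sum(w in query.lower() for w in ("prove", "derive", "compare", "why"))
    return score

def thinking_budget(query: str) -> int:
    """Map a complexity score to a token budget for intermediate reasoning."""
    c = estimate_complexity(query)
    if c < 10:
        return 0      # answer directly, minimal latency and cost
    elif c < 30:
        return 256    # short reasoning trace
    return 1024       # extended reasoning for hard queries

# A short factual query gets no reasoning budget; an analytical one gets more.
simple = thinking_budget("What time is it?")
hard = thinking_budget("Compare MoE and dense transformers and explain why inference cost differs.")
```

In production APIs this trade-off is typically exposed as a configurable budget rather than a hard-coded heuristic; the thresholds above are arbitrary.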

Google described Gemini 2.5 Flash as a balanced model suited for large-scale scenarios like customer service or real-time information processing and ideal for virtual assistants where efficiency at scale is critical.

Google also unveiled its next-generation AI accelerator chip, Ironwood, which is its seventh-generation Tensor Processing Unit (TPU).


Ironwood is optimized for inference tasks and designed to power LLM services for a large customer base. It effectively supports recent LLM features such as Mixture of Experts (MoE) and advanced reasoning capabilities.
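Mixture of Experts, mentioned above, is the technique of routing each token to only a few "expert" subnetworks instead of running the whole model. A minimal sketch of the standard top-k gating step (sizes and values are arbitrary, not Ironwood-specific):

```python
# Minimal sketch of Mixture-of-Experts (MoE) top-k routing.
# Only the k selected experts run for a given token, which is why MoE
# models activate just a fraction of their parameters per inference step.
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def top_k_route(gate_logits, k=2):
    """Pick the k experts with the highest gate scores for one token.

    Returns (expert_indices, renormalized_weights); the weights are used
    to combine the selected experts' outputs.
    """
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return top, [probs[i] / total for i in top]

# One token, four experts: route to the two highest-scoring experts.
experts, weights = top_k_route([0.1, 2.0, -1.0, 1.5], k=2)
```

Because memory traffic for the inactive experts' weights still matters at serving time, large HBM capacity of the kind Ironwood offers is particularly useful for MoE inference.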

According to Google, Ironwood delivers over 10 times the performance of its predecessor, the TPU v5p, which was released in 2023.

It is equipped with 192GB of high-bandwidth memory (HBM), allowing it to handle larger models and datasets, reducing the need for frequent data transfers and enhancing performance.


Samsung is reportedly supplying the HBM to Google via Broadcom, Google’s chip design partner.

With supercomputers based on Ironwood, Gemini 2.5 Flash is expected to deliver inference services at competitive costs.

As Nvidia, which dominates more than 80 percent of the AI accelerator market, shifts its focus from training to inference, Google appears to be strategically developing inference-optimized chips to reduce its dependence on Nvidia, according to sources.


Google also introduced Agent2Agent (A2A), a new communication protocol for interactions between AI agents.

It also announced support for the increasingly popular open-source Model Context Protocol (MCP).
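Both protocols are reported to build on JSON-RPC-style message passing. The fragment below is a hypothetical illustration of one agent delegating a task to another; the method and field names are invented for illustration and are not taken from either specification.

```json
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "method": "tasks/send",
  "params": {
    "task": {
      "id": "task-42",
      "message": {
        "role": "user",
        "parts": [
          { "type": "text", "text": "Summarize today's Google Cloud Next announcements." }
        ]
      }
    }
  }
}
```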

“We shipped more than 3,000 product advances across Google Cloud and Workspace in 2024,” Google Cloud CEO Thomas Kurian said.

He added that AI adoption is accelerating and more than 4 million developers now use Gemini.
