The Santa Clara, California-based company said the new chip, which carries up to 192 gigabytes of on-board memory, will hit the market in the third quarter, with large-scale production to begin three months later.
AMD CEO Lisa Su said the new product could help technology companies manage costs when providing services similar to ChatGPT.
“The more memory you have, the larger the model the chip can handle,” Su said. “We saw workloads run faster, and that made a difference.”
However, AMD did not say which major customers were ready to pay for the MI300X, the company’s latest AI graphics chip, nor did it disclose pricing or explain how it plans to boost sales.
AMD shares have doubled since the start of the year, hitting a 16-month high on June 13, but closed down 3.6% after the company's presentation on its AI strategy.
“The lack of any major customers confirming MI300A/X may have disappointed Wall Street investors,” said Kevin Krewell, principal analyst at TIRIAS Research.
Nvidia is the first chipmaker with a market capitalization of more than a trillion dollars, and it dominates the AI computing market by a wide margin. Although Intel and startups such as Cerebras Systems and SambaNova Systems have launched competing products, the biggest threat to Nvidia’s sales comes from in-house chip efforts at Alphabet’s Google and Amazon.
Beyond the AI market, AMD said it has begun high-volume shipments of its “Bergamo” server central processing chips to companies such as Meta Platforms.
On the same day, June 13, the chipmaker announced an update to its ROCm software to compete with Nvidia's CUDA software platform.
(According to Reuters)