Meta Platforms (NASDAQ:META) on Wednesday introduced four internally developed chips designed to handle artificial intelligence workloads as part of its broader data center growth strategy.
The processors belong to the company’s Meta Training and Inference Accelerator (MTIA) family, first introduced in 2023 and followed by a second-generation model in 2024. Meta said it plans to design and roll out four additional generations of MTIA chips over the next two years to support ranking systems, recommendation engines and generative AI tasks—an unusually rapid development cycle compared with the typical semiconductor timeline.
Yee Jiun Song told CNBC that designing proprietary chips, manufactured by Taiwan Semiconductor Manufacturing Company, allows Meta to improve price-to-performance efficiency across its data center infrastructure rather than relying solely on external suppliers.
“This also provides us with more diversity in terms of silicon supply, and insulates us from price changes to some extent,” Song said. “This is a little bit more leverage.”
The first of the new processors, MTIA 300, was deployed several weeks ago. Song said it is designed to help train smaller AI models that power the key ranking and recommendation systems determining which posts, videos and advertisements appear across Meta’s platforms, such as Facebook and Instagram.
The next generation of chips will focus on advanced inference tasks related to generative AI, including creating images and videos from user prompts. However, Song noted that the processors will not be used to train very large language models.
In a blog post, Meta said testing for the MTIA 400 chip has been completed and that the company is “on the path to deploying it in our data centers.” The remaining two chips are expected to become operational in 2027.
“It’s unusual for any silicon company or team to be releasing a new chip every six months. It’s a very quick cadence,” Song said. “And the big reason for this is that we find ourselves building out capacity so quickly at the moment, and spending so much on capex, that at any given time we want to have the state-of-the-art chip to deploy.”
Song added that the chips are expected to have a “standard five-plus years of useful lifetime.”
Meta’s growing AI infrastructure investments include a new data center in Louisiana and additional facilities in Ohio and Indiana. According to Bloomberg, the company is also exploring leasing capacity at the Stargate AI data center site in Texas after OpenAI and Oracle abandoned plans to expand the project.
