Meta has begun testing its first in-house chip designed for AI training, according to Reuters. The move marks a shift towards custom silicon, aimed at reducing the company's reliance on external suppliers such as Nvidia and curbing rising AI infrastructure costs.
The in-house chip is a dedicated accelerator built for AI-specific tasks, making it more energy-efficient than general-purpose graphics processing units (GPUs). Meta is working with Taiwan-based TSMC to manufacture it. Having completed its first tape-out, Meta has begun a limited deployment of the chip and plans to ramp up production if the initial trials prove successful.
This endeavour forms part of Meta’s strategy to manage costs while heavily investing in AI tools to drive future growth. The company has projected expenses of $114 billion to $119 billion for 2025, with up to $65 billion earmarked for capital expenditures, primarily directed towards AI infrastructure. By designing its own AI chips, Meta aims to optimise performance, improve power efficiency, and gain tighter control over its AI hardware needs as it scales its AI ambitions.
Meta’s path to custom chips has not been smooth: an earlier chip was scrapped during development. Despite that setback, Meta successfully deployed an MTIA chip last year for inference—the process of running AI models as users interact with them. That chip now powers the recommendation systems that determine which content appears in Facebook and Instagram feeds.
Looking ahead, Meta executives aim to use in-house chips for training by 2026. The initial focus will be on recommendation systems, expanding later to generative AI products like Meta’s chatbot.
“We’re working on how we would do training for recommender systems and then eventually how we think about training and inference for gen AI,” said Meta’s chief product officer, Chris Cox, at the Morgan Stanley Technology, Media & Telecom Conference last week.
Companies increasingly seek to optimise AI performance rather than rely solely on more computing power. Designing its own silicon lets Meta customise the instruction set without paying royalties to third parties, and tune die size, power consumption, and performance to its specific workloads. As AI models grow more sophisticated, so does the demand for specialised hardware that can deliver greater efficiency and performance.