Today, we’re announcing a multi-year agreement with AMD to power our AI infrastructure with up to 6GW of AMD Instinct GPUs, the accelerator silicon that supports modern AI models.
At Meta, we’re working to build the next generation of AI and enable personal superintelligence for all. To do this, we need massive, scalable compute power that can handle the growing demands of our AI workloads. Our partnership with AMD, which builds on our existing collaboration, will help us meet those needs.
Working With an Industry Leader
Under our new agreement, we will also work with AMD to align our roadmaps across silicon, systems, and software, enabling vertical integration across our infrastructure stack. This collaboration across both software and hardware will enable us to innovate quickly and at scale.
“We are proud to expand our strategic partnership with Meta as they push the boundaries of AI at unprecedented scale,” said Dr. Lisa Su, chair and CEO, AMD. “This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta’s workloads, accelerating one of the industry’s largest AI deployments and placing AMD at the center of the global AI buildout.”
Shipments to support the first GPU deployments will begin in the second half of 2026. These deployments will be built on the Helios rack-scale architecture, which we developed in collaboration with AMD and announced at last year’s Open Compute Project Global Summit.
“We’re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence,” said Mark Zuckerberg, Founder and CEO of Meta. “This is an important step for Meta as we diversify our compute. I expect AMD to be an important partner for many years to come.”
Our Portfolio-Based Approach
Our agreement with AMD is part of our Meta Compute initiative, an effort to massively scale our infrastructure for the era of personal superintelligence, future-proofing our leadership in AI. By diversifying our partnerships and technology stack, we’re building a more resilient and flexible infrastructure. We’re combining hardware sourced from a range of partners with our own rapidly advancing Meta Training and Inference Accelerator (MTIA) silicon program.
We believe this portfolio approach will enable us to advance and innovate at an unmatched pace, rolling out powerful, efficient new hardware co-designed with our software stack to handle massive growth. We look forward to working with AMD to power our AI innovations and secure our ability to deliver world-class AI experiences to billions of people globally.
This post contains forward-looking statements, including about Meta’s business. You should not rely on these statements as predictions of future events. Additional information regarding potential risks and uncertainties can be found in our most recent Form 10-K filed with the Securities and Exchange Commission. Meta undertakes no obligation to update these statements as a result of new information or future events.