Meta signs agreement with AWS to power agentic AI on Amazon's Graviton chips

Meta signed a deal with AWS to deploy tens of millions of Graviton cores for agentic AI workloads, making Meta one of AWS's largest Graviton customers. The partnership focuses on CPU-intensive AI tasks such as real-time reasoning and code generation.
This signals deeper AI infrastructure investment by Amazon that could eventually power enhanced seller tools, search algorithms, and automated customer service. Sellers should monitor for new AI-powered features in Seller Central and advertising platforms over the next 6-12 months.
Amazon's infrastructure partnerships position it to accelerate AI-powered marketplace features, potentially widening the gap between Amazon's capabilities and competitors like Walmart and Target.
Watch for AI-powered seller tool announcements in Seller Central; Amazon's expanded AI infrastructure may enable new automation features.
Monitor search ranking patterns for unusual changes as Amazon may deploy enhanced AI algorithms powered by this infrastructure.
Bottom Line
Meta-AWS AI deal signals coming seller tool upgrades.
Source Lens
Official Platform Update
Direct platform communication. Highest-value for policy, product, and operational changes.
Impact Level
Medium
Key Stat / Trigger
tens of millions of Graviton cores deployed
Focus on the operational implication, not just the headline.
Full Coverage
Key takeaways
- The deployment starts with tens of millions of Graviton cores, with the potential to expand.
- Meta is now one of the largest Graviton customers in the world.
- The deal builds on Meta's long-standing AWS relationship and use of Amazon Bedrock at scale to support its next generation of AI.
Meta has signed an agreement to deploy AWS Graviton processors at scale. The deal marks a significant expansion of a long-standing partnership between the two companies as Meta builds its next generation of AI. The deployment starts with tens of millions of Graviton cores, with the flexibility to expand as Meta's AI capabilities grow.
The deal reflects a shift in how AI infrastructure gets built: while GPUs remain essential for training large models, the rise of agentic AI is creating massive demand for CPU-intensive workloads—real-time reasoning, code generation, search, and orchestrating multi-step tasks.
Graviton5 is purpose-built for these workloads, giving Meta the processing power to run them efficiently at scale. The chips will power a range of workloads across Meta in support of the company's AI efforts.
That work requires infrastructure that can handle billions of interactions while coordinating complex, multi-step agent workflows—exactly the kind of CPU-intensive work Graviton is designed for.
What is AWS Graviton?
The custom chip powering applications for 90,000 customers
Amazon designed Graviton chips to make cloud computing faster, cheaper, and more energy efficient.
AWS Graviton chips powering AI workloads
As organizations increasingly adopt agentic AI—autonomous systems that can reason, plan, and complete complex tasks—the demand for high-performance, energy-efficient compute infrastructure has never been greater.
Meta is building at the forefront of agentic AI, and its broad Graviton deployment reflects a simple reality: agentic workloads like code generation, real-time reasoning, and frontier model training are CPU-intensive, and purpose-built chips are the most efficient way to power them.
The Graviton5 chip features 192 cores and a cache five times larger than the previous generation's, cutting core-to-core communication latency by up to 33%.
That means faster data processing with greater bandwidth—key requirements for agentic AI systems that need to continuously reason through and execute multi-step tasks. Graviton is built on the AWS Nitro System, which uses dedicated hardware and software to deliver high performance, high availability, and high security.
The Nitro System enables bare-metal instances for direct access to the hardware while providing the same familiar Elastic Network Adapter (ENA) and Amazon Elastic Block Store (Amazon EBS) devices that allow Meta to run its own virtual machines without performance compromises.
The range of Graviton5 instances also supports the Elastic Fabric Adapter (EFA), enabling low-latency, high-bandwidth communication between instances. This is essential for Meta’s agentic AI workloads, where large-scale tasks need to be distributed across many processors working in coordination.
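The pattern described here—splitting a CPU-intensive, multi-step workload across many cores working in coordination—can be sketched in a few lines. This is a generic illustration only, not Meta's or AWS's actual code; `reason_step` is a hypothetical stand-in for one unit of agent work, and real agentic systems would distribute such steps across many machines, not just local processes.

```python
# Minimal sketch: fan a CPU-bound workload out across multiple cores.
# reason_step is a hypothetical placeholder for one agent reasoning step.
from concurrent.futures import ProcessPoolExecutor


def reason_step(n: int) -> int:
    """Stand-in for a CPU-intensive step (here: a sum of squares)."""
    return sum(i * i for i in range(n))


def run_parallel(items: list[int], workers: int = 4) -> list[int]:
    """Distribute independent steps across worker processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(reason_step, items))


if __name__ == "__main__":
    # Each item is processed on a separate core when available.
    results = run_parallel([10_000, 20_000, 30_000])
    print(results)
```

At scale, the same fan-out idea relies on fast interconnects such as EFA so that coordination overhead between processors stays low.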
As a longtime AWS customer, Meta has relied on AWS's highly scalable and secure cloud infrastructure to power its global businesses.
“This isn't just about chips; it's about giving customers the infrastructure foundation, as well as data and inference services, to build AI that understands, anticipates, and scales efficiently to billions of people worldwide,” said Nafea Bshara, vice president and distinguished engineer, Amazon.
“Meta's expanded partnership, deploying tens of millions of Graviton cores, shows what happens when you combine purpose-built silicon with the full AWS AI stack to power the next generation of agentic AI.”
“As we scale the infrastructure behind Meta's AI ambitions, diversifying our compute sources is a strategic imperative.
AWS has been a trusted cloud partner for years, and expanding to Graviton allows us to run the CPU-intensive workloads behind agentic AI with the performance and efficiency we need at our scale,” said Santosh Janardhan, head of infrastructure, Meta.
Energy efficiency benefits of Graviton
AWS Graviton5 is built on 3-nanometer chip technology—a manufacturing process that produces smaller, more efficient processors.
Because AWS designs its chips from the ground up and controls the full process from chip design through server architecture, it can optimize performance and efficiency in ways that off-the-shelf processors can't match.
The result is infrastructure that delivers stronger performance while maintaining leading energy efficiency, helping Meta pursue ambitious AI goals while staying on track with sustainability targets.
Original Source
This briefing is based on reporting from About Amazon. Use the original post for full primary-source context.