AI chips
AMD Lands $60 Billion, 6 GW Deal With Meta — and Cracks Nvidia's Grip
Advanced Micro Devices closed the largest single AI infrastructure deal in its history on 24 February 2026: a multi-year, six-gigawatt agreement with Meta Platforms, valued at roughly $60 billion over five years. Under the agreement, Meta will deploy AMD's next-generation Instinct MI450 GPUs and sixth-generation EPYC "Venice" CPUs at scale, starting in the second half of 2026.
Why six gigawatts is the number that matters
In the AI build-out, capacity is increasingly measured in power, not chips. Six gigawatts of dedicated AI compute is comparable to the peak electricity demand of a mid-sized European country. It implies multiple new data-centre campuses, dedicated transmission upgrades and, in many cases, behind-the-meter generation. Meta's broader commitment — Nvidia GPUs at expanded scale plus the AMD deal — places the company on a trajectory that is hard to distinguish from a sovereign-scale infrastructure programme.
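To make the scale concrete, the power figure can be converted into a rough accelerator count. The sketch below is a back-of-envelope estimate only: the per-accelerator draw and the PUE (power usage effectiveness) figure are illustrative assumptions, not published MI450 specifications.

```python
# Back-of-envelope: how many accelerators fit inside a 6 GW power envelope?
# The per-unit figures are illustrative assumptions, not MI450 specs.

FACILITY_POWER_W = 6e9      # 6 GW total campus power (from the deal)
POWER_PER_GPU_W = 1_400     # assumed draw per accelerator, incl. host share
PUE = 1.2                   # assumed power usage effectiveness (cooling etc.)

usable_it_power_w = FACILITY_POWER_W / PUE
gpu_count = usable_it_power_w / POWER_PER_GPU_W

print(f"IT power: {usable_it_power_w / 1e9:.2f} GW")
print(f"Implied accelerators: {gpu_count / 1e6:.1f} million")
```

Under these assumptions the envelope works out to roughly 3.6 million accelerators — the order of magnitude, not the exact figure, is the point: this is fleet-scale deployment, not a cluster.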
What the MI450 changes
AMD has been credible at the silicon level for two generations, with the MI300X and MI325X. What it lacked was customer-scale deployment by a hyperscaler making a long-term commitment, which is the only way to amortise the software-stack work and validate at production scale. Meta's order does both. AMD now has an anchor design partner whose internal ML platforms — PyTorch, MTIA bridge, Llama family — drive the kind of optimisation that historically only Nvidia got from its CUDA monopoly.
What it does not change
Nvidia. The same week as the Meta-AMD announcement, Meta also expanded its commitments to deploy millions more Nvidia GPUs. The story is not displacement — it is multi-vendor optionality at hyperscaler scale, which has been the missing ingredient in the AI compute market. Nvidia's market capitalisation continues to set records; AMD's gains do not come at Nvidia's expense in absolute terms in 2026.
The follow-on
Watch three things. Whether other hyperscalers — Microsoft, Google, Amazon — announce comparable AMD commitments in 2026. Whether AMD's ROCm software stack converges with hyperscaler ML platforms in a way that lowers switching costs. And whether the European hyperscaler footprint — emerging through Schwarz Group, OVH and the EuroHPC consortium — adopts a similar two-vendor stance.
Why it lands in Europe
The European AI compute story is entering its first build phase. Decisions about which silicon to anchor on are being made now and will compound for a decade. Meta's two-vendor stance is a useful reference point. So is the Brussels Economic Forum's parallel focus on the EU's strategic role in AI: there is no European AI sovereignty without European decisions about chip vendors, and the Meta-AMD precedent makes those decisions more interesting.
Frequently asked
- When does deployment start?
- Second half of 2026, at scale, with 6 GW of capacity built out over the five-year term.
- Does this displace Nvidia?
- No. Meta expanded Nvidia commitments the same week. The picture is multi-vendor scaling at hyperscaler level.
- What about Europe?
- The two-vendor precedent matters as European hyperscalers and EuroHPC consortia make their own silicon-anchor decisions.