AI News Bureau
Written by: CDO Magazine Bureau
Updated 5:21 PM UTC, Wed November 5, 2025

Amazon Web Services and OpenAI have signed a multi-year, $38 billion strategic partnership that will see OpenAI run and scale its core AI workloads on AWS infrastructure, the companies announced on November 3.
Under the deal — which begins immediately and will expand over the next seven years — OpenAI will tap into hundreds of thousands of NVIDIA GPUs on AWS, with the ability to expand to tens of millions of CPUs as demand for agentic and other frontier AI workloads accelerates. All targeted capacity is expected to be deployed before the end of 2026, with room to grow further through 2027 and beyond.
“Scaling frontier AI requires massive, reliable compute,” said Sam Altman, OpenAI co-founder and CEO. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
AWS CEO Matt Garman said, “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions.”
AWS says the specialized GPU clusters — built on NVIDIA GB200 and GB300 GPUs and linked through Amazon EC2 UltraServers — are architected to support workloads ranging from serving ChatGPT inference to training future generations of models.