The companies said the agreement would enable OpenAI to expand to tens of millions of CPUs, allowing rapid scaling of agentic workloads.
Amazon Web Services (AWS) and OpenAI announced on Monday a multi-year strategic partnership, effective immediately, under which the ChatGPT maker will run and scale its core artificial intelligence (AI) workloads on AWS infrastructure.

The companies said that under the $38 billion agreement, which is expected to grow over the next seven years, OpenAI gains access to AWS compute comprising hundreds of thousands of state-of-the-art Nvidia (NVDA) graphics processing units (GPUs).
The deal also gives OpenAI the ability to expand to tens of millions of CPUs to rapidly scale agentic workloads. “Clustering the NVIDIA GPUs—both GB200s and GB300s—via Amazon EC2 UltraServers on the same network enables low-latency performance across interconnected systems, allowing OpenAI to efficiently run workloads with optimal performance,” Amazon said.
