Big cloud deal brings cutting-edge GPUs to an ambitious AI lab
TechCrunch has exclusively learned that Mira Murati’s Thinking Machines Lab has signed a multi-billion-dollar agreement with Google Cloud to run its AI workloads on infrastructure built around Nvidia’s latest GB300 chips. The deal pairs a prominent AI lab with a major cloud provider and state-of-the-art accelerators, giving the startup a platform for training and serving its next-generation models.
Access to Nvidia’s GB300 accelerators on Google Cloud gives Thinking Machines denser, faster compute for large-scale training and inference. That level of performance can shorten experiment cycles, support larger models, and cut time-to-result for both research and commercial deployments, advantages that compound for teams working at the frontier of AI.
Beyond raw speed, the deal reflects a broader pattern in the AI ecosystem: major cloud and hardware spending directed at well-funded labs lowers the barrier to high-end infrastructure and shortens the path from research to products and services. For Thinking Machines, the arrangement should make it easier to iterate, scale, and collaborate.
Overall, the partnership signals that industry players are continuing to invest heavily in compute and infrastructure, an ingredient widely seen as necessary for sustaining rapid progress across AI research and real-world applications.