
Amazon's Latest AI Data Center: Suppliers and Builders Complete Guide


The race to build AI infrastructure has never been more intense. With Amazon's massive expansion into artificial intelligence, its data center projects represent some of the most ambitious construction endeavors in tech history. Project Rainier in New Carlisle, Indiana ($11 billion) and the Mississippi facility ($16 billion) showcase the sheer scale of investment flowing into AI infrastructure.

These aren't just buildings – they're the physical foundation of the AI revolution. Behind every server rack and cooling system lies a complex web of specialized contractors, chip manufacturers, and infrastructure providers working together to create the computing powerhouses that will shape our digital future.

Key Suppliers and Builders: A Comprehensive Overview

| Category | Supplier/Builder | Role & Products | Project Location |
| --- | --- | --- | --- |
| General Contractor | Yates Construction | Primary construction contractor | Mississippi (Madison County) |
| General Contractor | Gray Construction | Primary construction contractor | Mississippi (Madison County) |
| General Contractor | The Haskell Company | Primary construction contractor | Mississippi (Madison County) |
| General Contractor | Holder Construction | Primary construction contractor | Indiana (Fort Wayne) |
| Electrical Contractor | MMR Group | Power systems installation & configuration | Mississippi |
| AI Chips | AWS Trainium2 | Custom AI training chips (500,000 chips) | Indiana (Project Rainier) |
| AI Chips | AWS Trainium3 | Next-gen AI chips (expected late 2025) | Multiple data centers |
| Chip Design | Annapurna Labs | AWS proprietary chip design division | Global data centers |
| GPU Supplier | NVIDIA GB200 NVL72 | High-performance AI GPU systems | Multiple data centers |
| Cooling Systems | AWS In-Row Heat Exchanger (IRHX) | Custom liquid cooling system, 46% energy savings | Multiple data centers |
| Power Supply | Indiana Michigan Power | Data center power supply (2.2 GW) | Indiana |
| Server Components | Multiple suppliers | CPU, GPU, HDD, SSD, motherboards | Global supply chain |
| Network Equipment | Multiple suppliers | Switches, optical modules, DWDM, routers | Global supply chain |
| Power & Cooling | Multiple suppliers | Medium voltage transformers, HVAC units, generators | Global supply chain |
| Warehousing & Logistics | Multiple suppliers | Storage, contract manufacturing, transport, reverse logistics | Global supply chain |

Project Rainier: Engineering at Unprecedented Scale

Project Rainier in Indiana stands as a testament to what's possible when ambition meets execution. Spanning 1,200 acres, this facility houses nearly 500,000 AWS Trainium2 chips specifically designed to train and run Anthropic's Claude AI models. The timeline alone is remarkable – from groundbreaking to operational in just over a year, with seven buildings already running and approximately 30 buildings planned for completion.

The power requirements tell their own story: 2.2 GW of electricity, enough to power a small city. This massive energy demand underscores why AWS is investing heavily in alternative power sources, particularly nuclear energy through partnerships with Talen Energy.
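A quick back-of-envelope calculation puts these figures in perspective. Using the article's numbers (2.2 GW of facility power, roughly 500,000 Trainium2 chips), the per-chip result below is a facility-wide average that folds in cooling, networking, and other overhead; it is an illustration, not an AWS-published specification.

```python
# Back-of-envelope check on Project Rainier's power budget,
# using the article's figures. The per-chip number is a
# facility-wide average, not a chip spec.
total_power_w = 2.2e9      # 2.2 GW facility demand
chips = 500_000            # Trainium2 chips at Project Rainier

watts_per_chip = total_power_w / chips
print(f"{watts_per_chip:,.0f} W of facility power per chip")  # → 4,400 W
```

At roughly 4.4 kW of facility power per chip, the arithmetic makes clear why power procurement, not just chip supply, gates how fast these campuses can grow.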

Mississippi Project: Scaling Up Ambitions

The Mississippi investment grew by 60% from its original $10 billion to $16 billion, driven by the escalating costs of AI infrastructure, advanced servers, and cutting-edge technology components. This expansion reflects the reality that AI infrastructure costs are accelerating faster than initially projected. The facility is expected to begin operations in 2027, creating at least 1,000 high-paying jobs in the region.

AWS's Chip Strategy: Breaking the Status Quo

AWS is making a bold play to reduce dependence on traditional GPU suppliers through its custom chip development. The Trainium3 chip promises four times the performance of Trainium2 with 40% better energy efficiency. AWS plans to deploy up to one million Trainium chips across its data centers, building what they call UltraClusters – massive AI infrastructure designed to compete directly with established players in the AI chip market.
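The fleet-scale implication of those claims can be sketched in a few lines. The figures (up to one million Trainium chips, Trainium3 at four times Trainium2's performance) are the article's; the "Trainium2-equivalent" framing is our simplification for illustration.

```python
# Sketch of the fleet-scale implication of the roadmap figures
# quoted above. "Trainium2-equivalents" is our own framing.
planned_chips = 1_000_000      # "up to one million Trainium chips"
trn3_vs_trn2_perf = 4.0        # claimed Trainium3 performance ratio

# If the whole planned fleet were Trainium3, its aggregate compute
# would match this many Trainium2 chips:
trn2_equivalents = int(planned_chips * trn3_vs_trn2_perf)
print(f"{trn2_equivalents:,} Trainium2-equivalents")  # → 4,000,000
```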

This vertical integration strategy gives AWS control over the entire AI infrastructure stack, from networking to training to inference. The company can offer customers more economical alternatives while maintaining performance standards that meet the demands of cutting-edge AI models.

Revolutionary Cooling Technology

The energy challenge in AI data centers goes beyond simply having enough power: managing heat is equally critical. AWS developed the In-Row Heat Exchanger (IRHX) liquid cooling system, which reduces mechanical energy consumption by up to 46% under peak cooling conditions without requiring additional water. This innovation is essential for cooling high-power AI systems like the GB200 NVL72, which integrates 72 high-performance GPUs in a single rack.

Traditional air cooling simply can't handle the thermal output of modern AI chips. The density of compute power in today's data centers demands liquid cooling solutions that can efficiently transfer heat away from components operating at maximum capacity.
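To see what a 46% cut in mechanical cooling energy could mean at this scale, consider a rough estimate. The 46% figure and the 2.2 GW load are the article's; the assumption that cooling draws about 20% of facility power is ours, chosen only to make the arithmetic concrete.

```python
# Illustrative savings from the IRHX's claimed 46% reduction in
# mechanical cooling energy. The 20% cooling share is an assumed
# figure for illustration, not an AWS-published one.
facility_power_mw = 2200      # Project Rainier scale (2.2 GW)
cooling_fraction = 0.20       # assumed share of power spent on cooling
irhx_reduction = 0.46         # claimed reduction at peak cooling load

saved_mw = facility_power_mw * cooling_fraction * irhx_reduction
print(f"~{saved_mw:.0f} MW saved at peak cooling conditions")
```

Under those assumptions, the savings land in the low hundreds of megawatts at peak, which is the scale of a mid-sized power plant.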

Global AI Infrastructure Expansion

AWS's ambitions extend far beyond the United States. The company operates 467 data centers across 48 regions worldwide and continues expanding in Australia (A$20 billion), Taiwan, New Zealand, and Spain's Aragon region (€15.7 billion). This global footprint ensures low-latency, high-reliability cloud services while positioning AWS closer to both customers and energy resources.

The strategic placement of these facilities reflects careful consideration of local energy availability, regulatory environments, tax incentives, proximity to fiber optic infrastructure, and access to a skilled technical workforce. Each location is chosen to balance operational costs against service quality.

Nuclear Power: The Carbon-Neutral Solution

AI data centers consume staggering amounts of electricity. Project Rainier alone requires 2.2 GW – comparable to a nuclear power plant's output. AWS has invested over $1 billion in nuclear energy partnerships in the past year to meet this demand while achieving carbon neutrality goals.

The partnership with Talen Energy provides 1.9 GW of carbon-free electricity from Pennsylvania's Susquehanna nuclear power plant through a long-term power purchase agreement extending to 2042. Both companies are exploring the construction of new Small Modular Reactors (SMRs) within Pennsylvania and upgrading existing nuclear facilities to expand energy output.

AWS has set an ambitious target: deploying 5 GW of nuclear capacity by 2039 across the United States. The company is investing in Washington State's SMR facilities and collaborating with Idaho National Laboratory to develop digital twins of SMRs using AWS cloud computing and AI capabilities, accelerating the deployment of autonomous nuclear reactors.
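Mapping the quoted figures against each other gives a sense of how much of the roadmap remains. The numbers (a 1.9 GW Talen PPA, a 5 GW target by 2039) are the article's; breaking the target down this way is our framing, not AWS's.

```python
# Quick arithmetic on the nuclear roadmap figures quoted above.
# Comparing the Talen PPA against the 2039 target is our framing.
target_gw = 5.0        # nuclear capacity target by 2039
talen_ppa_gw = 1.9     # Susquehanna power purchase agreement

remaining_gw = target_gw - talen_ppa_gw
share = talen_ppa_gw / target_gw
print(f"Talen PPA covers {share:.0%}; {remaining_gw:.1f} GW still needed")
```

By this reading, the Talen agreement covers under two fifths of the target, which is why new SMR construction features so prominently in the plans above.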

Strategic AI Partnerships: The Competitive Edge

AWS's partnerships with Anthropic and OpenAI define its AI strategy. The $8 billion investment in Anthropic makes AWS the primary cloud provider for training and deploying Claude foundation models using AWS custom chips – Trainium for training and Inferentia for inference. This partnership ensures AWS has guaranteed demand for its proprietary chips while providing customers with access to cutting-edge AI models through Amazon Bedrock.

The landmark $38 billion multi-year partnership with OpenAI (announced November 2025) marks a significant shift in the AI infrastructure landscape. OpenAI will run advanced AI workloads on AWS infrastructure with immediate access to GPU resources, while OpenAI's new open-weight models are available on Amazon Bedrock and Amazon SageMaker. This diversification away from exclusive cloud partnerships reflects the massive infrastructure demands of frontier AI model development.

Network Infrastructure: The Invisible Foundation

Lumen Technologies plays a critical role in connecting AWS data centers globally through its Private Connectivity Fabric solution. Lumen provides dedicated private fiber connections between AWS Regions and Local Zones, enabling high-bandwidth AI workload transfers. The company is deploying 400G routed optical networks specifically to support data center interconnect and enterprise AI workloads.

This mutual partnership benefits both companies – Lumen gains a major customer for its fiber infrastructure, while simultaneously using AWS AI, machine learning, and security technologies to modernize its own applications and systems. The relationship exemplifies how cloud infrastructure providers and telecommunications companies are becoming increasingly interdependent.

Energy Partnerships: Powering the AI Future

Beyond nuclear, AWS is diversifying its energy portfolio through multiple partnerships. The Talen Energy agreement is just one piece of a broader strategy to secure reliable, carbon-free power. The challenge is that AI data centers can't simply plug into the existing grid – they require dedicated power infrastructure with reliability guarantees that standard utility connections can't provide.

AWS and Talen are jointly exploring building new Small Modular Reactors within Pennsylvania, representing a long-term commitment to nuclear power as the foundation of AI infrastructure. The appeal of nuclear energy is clear: it provides consistent baseload power without the intermittency challenges of wind and solar, and it produces zero carbon emissions during operation.

Partner Ecosystem: Delivering at Scale

AWS maintains a vast global partner network of over 140,000 partners across more than 200 countries and territories. The company created a dedicated AWS Generative AI Competency Partner program to certify partners with deep technical expertise in deploying AI solutions using Amazon Bedrock, Amazon SageMaker, AWS Trainium, and AWS Inferentia chips.

Notable partners include LTIMindtree, recognized as the global ISG CX Star Performer for 2025 among AWS Ecosystem Partners. These certified partners help enterprises navigate complex AI implementation, from strategic planning to deployment, enabling businesses to move beyond AI pilots into production-scale AI agents and applications.

The partner ecosystem is structured through multiple specialization tracks: AWS Managed Service Providers (MSPs) who deliver end-to-end AWS solutions; AWS Competency Partners who specialize in specific capabilities like generative AI, migration, or managed services; and AWS Services Partners who deliver consulting, professional services, or have validated software products.

Supply Chain and Procurement Philosophy

AWS Infrastructure Supply Chain & Procurement Vice President Jens Gruenkemeier emphasizes that AWS is committed to operating within fair, equitable, safe, and sustainable supply chains. The company supports disadvantaged businesses through supplier development and influence programs.

This commitment extends beyond rhetoric into practical vendor selection criteria. AWS evaluates suppliers not just on cost and technical capability, but on their environmental practices, labor standards, and long-term sustainability. As data center construction accelerates, maintaining ethical supply chains becomes increasingly challenging – but also increasingly important for companies seeking to maintain their social license to operate.

The Economics of Scale

According to ISG's 2025 report on AWS Ecosystem Partners, leading U.S. enterprises are scaling production-ready AI agents and transforming their cloud consumption patterns with AWS and its partner ecosystem rather than experimenting with pilots. AWS's partner strategy creates a flywheel effect: AWS invests in infrastructure and custom chips, which attracts major AI companies like Anthropic and OpenAI, which then generates massive demand that justifies further investments in data centers, energy partnerships, and network infrastructure.

This integrated ecosystem approach – combining AWS's own infrastructure, custom chips (Trainium and Inferentia), energy partnerships, network connectivity, and a global network of implementation partners – enables AWS to offer customers not just cloud compute, but an entire production-ready AI platform with economies of scale that competitors struggle to match.

The Future of AI Infrastructure

The data center construction boom shows no signs of slowing. AWS's investments represent just one piece of a larger industry trend where cloud providers, AI companies, and infrastructure specialists are racing to build the physical foundation for artificial intelligence. The competition is driving innovation in chip design, cooling technology, power management, and construction methodology.

What makes AWS's approach distinctive is the level of vertical integration – from designing custom AI chips to securing nuclear power agreements to building proprietary cooling systems. This comprehensive control over the infrastructure stack provides competitive advantages in cost, performance, and reliability that will be difficult for competitors to replicate.

The projects underway today will determine which companies can deliver AI services at scale for the next decade. AWS's multi-billion dollar bets on custom chips, nuclear power, and strategic partnerships represent a clear vision: whoever controls the infrastructure will control the AI future.


Accelerate Your AI and Cloud Infrastructure Strategy

The rapid evolution of AI infrastructure demands strategic expertise and technical precision. Whether you're planning cloud migration, optimizing AI workloads, or building scalable digital solutions, understanding these complex ecosystems is crucial for success.

At Tenten, we specialize in helping businesses navigate the intersection of AI, cloud infrastructure, and digital transformation. Our team stays at the forefront of technological developments, translating industry insights into actionable strategies that drive measurable results. From architecting cloud-native applications to implementing AI-powered solutions, we bring deep technical expertise and strategic thinking to every engagement.

Ready to explore how these infrastructure trends can accelerate your digital initiatives? Book a meeting with our team to discuss your specific challenges and opportunities. Let's build something remarkable together.


About the Author

Jensen Lee is a technology analyst at Tenten who specializes in cloud infrastructure, data center technologies, and AI implementation strategies. Having followed the evolution of cloud computing from its early days, Jensen brings a unique perspective on how infrastructure investments shape the competitive landscape of emerging technologies.

"What fascinates me about projects like AWS's Project Rainier isn't just the scale – though $11 billion for a single facility is staggering – it's the strategic implications," Jensen reflects. "When you look at AWS investing in custom chips, securing nuclear power agreements, and building proprietary cooling systems, you realize this isn't just about having more compute capacity. It's about vertical integration that creates competitive moats."

"The shift toward custom silicon particularly interests me. For years, the AI chip market was essentially a monopoly. Now, with AWS deploying a million Trainium chips and proving that frontier models like Claude can be trained on non-traditional hardware, we're seeing the beginning of real competition in AI infrastructure. That competition will ultimately benefit everyone building AI-powered products."

Jensen regularly writes about cloud infrastructure trends, AI deployment strategies, and the intersection of technology and business strategy. You can explore more of his insights at Tenten Learning.
