Nvidia kicked off the Data Center World 2025 event this week in Washington, DC, with a bold vision for the future of AI infrastructure.
In his keynote address, Wade Vinson, Nvidia’s chief data center distinguished engineer, introduced the concept of AI-scale data centers: massive, energy-efficient facilities built to meet the growing demand for accelerated computing. Nvidia envisions “AI factories” powered by Blackwell GPUs and DGX SuperPods, supported by advanced cooling and power systems from Vertiv and Schneider Electric.
“There is no doubt that AI factories are an important trend in the data center world,” Vinson said.
Completion of phase one of an AI factory in Texas
Vinson pointed to the Lancium Clean Campus that Crusoe Energy Systems is building near Abilene, Texas. As he explained:
- The first phase of this AI factory is largely complete: 200 MW across two buildings.
- The second phase will expand it to 1.2 GW and is scheduled for completion in mid-2026.
- The design includes direct-to-chip liquid cooling, rear-door heat exchangers, and air cooling.
- It will add six more buildings, bringing the campus to four million square feet.
- Ten gas turbines will be installed to provide power on site.
In addition, each building can operate up to 50,000 Nvidia GB200 NVL72 GPUs on a single integrated network fabric, pushing the limits of data center design and scale for AI training and inference workloads.
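As a rough sanity check on those figures, the back-of-envelope sketch below works out what 50,000 GPUs per building implies for power draw. The 72-GPU rack size, the ~120 kW per-rack load, and the PUE overhead are assumptions chosen for illustration, not figures from Vinson’s talk.

```python
# Back-of-envelope estimate of per-building power for an AI factory.
# Assumptions (not from Vinson's talk): 72 GPUs per GB200 NVL72 rack,
# ~120 kW of IT load per rack, and a PUE overhead for cooling and
# power distribution.

GPUS_PER_BUILDING = 50_000      # figure cited in the article
GPUS_PER_RACK = 72              # one GB200 NVL72 rack (assumed)
RACK_POWER_KW = 120             # assumed IT load per NVL72-class rack
PUE = 1.2                       # assumed power usage effectiveness

racks = GPUS_PER_BUILDING / GPUS_PER_RACK
it_load_mw = racks * RACK_POWER_KW / 1_000
facility_load_mw = it_load_mw * PUE

print(f"Racks per building:        {racks:,.0f}")
print(f"IT load per building:      {it_load_mw:,.0f} MW")
print(f"Facility load (PUE {PUE}):   {facility_load_mw:,.0f} MW")
# ~694 racks and ~83 MW of IT load, roughly 100 MW with overhead --
# consistent with the 200 MW cited for the first two buildings.
```

Under these assumptions, each building lands at roughly 100 MW, which lines up with the 200 MW quoted for the two phase-one buildings.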
Vinson said some AI factories will rely on on-site power generation, while others will benefit from sites where power is already available. He pointed to old mills, manufacturing sites, and retail facilities already connected to the grid.
For example, an old shopping center in San Francisco could be converted into an AI factory in months, rather than the years required to complete new construction and secure utility connections and permits. Such sites often have large roofs that can be used for solar arrays.
Reconfiguring existing data centers into AI factories
What about existing data centers? Aging facilities can struggle to accommodate Nvidia gear and AI applications. Vinson believes many colocation facilities (colos) are well positioned to be converted into AI factories.
“Any colo built in the last 10 years has enough power and cooling to become an AI factory,” he said. “AI factories must be viewed as a revenue opportunity rather than an expense.”
He estimates that AI could increase business and personal productivity by 10% or more and add $100 trillion to the global economy.
“It represents a bigger productivity shift than the wave of electrification that swept the world starting about 100 years ago,” Vinson said.
Planning is the key to AI factory success
Vinson cautioned those interested in building or operating their own AI factories about the importance of planning. Many factors must be weighed, and modeling is crucial.
He touted Nvidia’s Omniverse simulation tool as a way to plan an AI factory properly. It uses digital twin technology to enable comprehensive modeling of data center infrastructure and design optimization. Failing to model in advance and simulate many possible scenarios can lead to inefficiencies in areas such as energy consumption and can stretch construction timelines.
“Simulations empower data centers to improve operational efficiency through holistic energy management,” Vinson said.
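Omniverse is a full digital twin platform, and nothing below uses its actual API; this is only a minimal, hypothetical sketch of the kind of what-if energy modeling Vinson describes, with illustrative load and PUE values assumed for the comparison.

```python
# Hypothetical what-if model comparing cooling scenarios for an AI factory.
# This is NOT the Omniverse API -- just an illustration of the kind of
# scenario sweep a digital twin makes cheap to run before construction.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    it_load_mw: float   # total IT (compute) load, assumed
    pue: float          # assumed power usage effectiveness for this design

def annual_energy_gwh(s: Scenario) -> float:
    """Total facility energy over a year, in GWh."""
    hours_per_year = 8_760
    return s.it_load_mw * s.pue * hours_per_year / 1_000

scenarios = [
    Scenario("Air cooling only",              it_load_mw=80, pue=1.5),
    Scenario("Rear-door heat exchangers",     it_load_mw=80, pue=1.3),
    Scenario("Direct-to-chip liquid cooling", it_load_mw=80, pue=1.15),
]

baseline = annual_energy_gwh(scenarios[0])
for s in scenarios:
    energy = annual_energy_gwh(s)
    savings = 100 * (baseline - energy) / baseline
    print(f"{s.name:32s} {energy:7.0f} GWh/yr  ({savings:4.1f}% vs. baseline)")
```

Running the sweep before breaking ground is the point: a designer can compare cooling options, power topologies, or site constraints on paper instead of discovering the inefficiencies after the facility is built.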
SEE: Data centers can cut energy consumption by up to 30% with a change of roughly 30 lines of code
For example, many data center veterans may find it challenging to shift from traditional concepts of racks, aisles, and servers to GPU gear surrounded by liquid cooling and backed by sufficient power and power distribution equipment.
AI factory designs will contain far more power and cooling equipment relative to server racks, so layouts will be radically different. After all, GPU-driven SuperPods generate far more heat than typical data center equipment.
“Expect significant consolidation of racks,” Vinson said. “Eight old racks may well become one future rack with GPUs inside. It is important to develop a streamlined power and cooling configuration for the racks inside AI factories, as these will be very different from what most data centers are used to.”
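To make the “eight racks become one” consolidation concrete, here is a small hedged calculation; the per-rack power figures are assumptions chosen for illustration, not numbers Vinson cited.

```python
# Illustrative rack-consolidation arithmetic. The power densities below are
# assumed for illustration, not figures cited in the article.

TRADITIONAL_RACK_KW = 15    # assumed typical air-cooled enterprise rack
GPU_RACK_KW = 120           # assumed liquid-cooled GPU rack (NVL72 class)

consolidation_ratio = GPU_RACK_KW / TRADITIONAL_RACK_KW
print(f"One GPU rack draws as much power as ~{consolidation_ratio:.0f} traditional racks")
# -> ~8 traditional racks per GPU rack, matching the "eight old racks may
#    well become one future rack" consolidation Vinson describes, and
#    explaining why power and cooling layouts must be redesigned around it.
```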