Why do AI servers need high-voltage DC (HVDC)? A major upgrade of data center power architecture is underway

The rapid growth of AI demand is reshaping how data centers operate, especially their power supply systems. Take NVIDIA's latest-generation Blackwell GPU as an example: the power consumption of a single GPU has risen from a few hundred watts to more than 1,000 watts, and a fully loaded server rack now exceeds 100 kW. The future Rubin Ultra platform will push a single rack above 600 kW. This unprecedented power density is pushing the traditional power supply architecture to its limits.
High-voltage direct current (HVDC) is regarded as the power solution for the next generation of data centers, and NVIDIA, Meta, and Google are all actively promoting it.
What is high-voltage DC? Why is it better suited to AI servers?

In the current architecture, most data center servers are powered through a low-voltage DC busbar (typically 48 V or 54 V). The lower the voltage, the larger the current needed to deliver the same power, which leads to two problems:
- Thicker copper conductors are needed to carry the power, which increases space requirements and cost.
- Transmission losses (commonly called line losses) rise, because the larger the current, the more heat is dissipated in the conductors.

The "busbar" here refers to the conductive system that distributes power inside a server rack, usually copper bars or thick cables. They act like power highways, delivering stable voltage and current to the various components and server modules.
In contrast, high-voltage DC raises the busbar voltage to 400 V or even 800 V, so the current can be reduced accordingly, line losses fall, and overall power efficiency improves significantly. This architecture has long been used in long-distance transmission, such as offshore wind farms and cross-border transmission lines, and is now being introduced into AI servers and data centers.
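A minimal numerical sketch of that relationship is below. The 120 kW rack load and 0.5 mΩ busbar resistance are illustrative assumptions, not figures from the article; the point is only how current and resistive loss scale with voltage.

```python
# Illustrative comparison of busbar current and resistive loss at 48 V vs 800 V.
# The rack power (120 kW) and busbar resistance (0.5 milliohm) are assumed
# example values chosen to show the trend; they are not from the report.

def busbar_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> tuple[float, float]:
    """Return (current in A, resistive loss in W) for a given load and busbar."""
    current = power_w / voltage_v          # I = P / V
    loss = current ** 2 * resistance_ohm   # P_loss = I^2 * R
    return current, loss

for v in (48, 800):
    i, loss = busbar_loss(power_w=120_000, voltage_v=v, resistance_ohm=0.0005)
    print(f"{v:>4} V busbar: {i:7.0f} A, resistive loss ~{loss/1000:5.2f} kW")

# 48 V: ~2,500 A and ~3.1 kW lost in the busbar; 800 V: 150 A and ~0.01 kW.
# Raising the voltage ~16.7x cuts the current ~16.7x and the I^2*R loss ~278x
# for the same conductor.
```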
Power consumption evolution of data centers: from kW to MW

According to TrendForce's latest report, "Transformation of Power Supply Architecture and Future Trends," NVIDIA's AI server rack power consumption has risen from 10~30 kW in the H100 era to 600 kW on the new-generation Rubin Ultra platform, and racks may eventually reach the MW (megawatt) level.
This level of power consumption is forcing the industry to rethink the overall power distribution architecture; otherwise, no matter how many servers are stacked, deployments will be throttled by power delivery and heat dissipation limits.
How do traditional and HVDC architectures differ?

Before comparing traditional and next-generation data center power supply solutions, it helps to first understand the role of the uninterruptible power supply (UPS) in a data center.
According to Delta Electronics' official website, data centers are critical to many organizations' daily operations. Downtime caused by the failure of critical loads is expensive, typically costing $4,000 to $6,000 per minute and sometimes more, so a UPS system is key to keeping a data center running continuously.
A UPS automatically switches the load to its built-in battery when an outage occurs or the supply becomes unstable, keeping equipment running for a short period. Because a UPS also stabilizes voltage, it protects electronic equipment and prevents internal components from being damaged by unstable power. With that background, let's return to the power supply system of the data center itself.
▲ The traditional AC data center power architecture presented by Delta Electronics at COMPUTEX 2025
In the traditional AC data center power architecture (see the figure above), utility power is first stepped down by a transformer, the UPS maintains 400/480 V AC distribution (red circle), and power then passes through the power distribution unit and the rack power modules to the server side, where DC-DC conversion steps the roughly 50 V busbar down to 0.65 V at the chip (orange circle in the figure above). Because this chain involves many conversion stages and low-voltage, high-current conductors, it not only consumes more copper but also delivers an end-to-end efficiency of only 87.6%.
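One way to see why so many stages hurt is that end-to-end efficiency is the product of the per-stage efficiencies. The sketch below uses stage values that are my own illustrative assumptions, not Delta's actual figures, chosen only so the product lands near the quoted 87.6%.

```python
# End-to-end efficiency of a multi-stage power chain is the product of the
# per-stage efficiencies. The stage values below are illustrative assumptions
# (not figures from the presentation), picked so the product lands near the
# quoted 87.6% for the traditional AC chain.

from math import prod

traditional_ac_stages = {
    "UPS (double conversion)":   0.96,
    "PDU / AC distribution":     0.985,
    "Rack PSU (AC -> 50 V DC)":  0.955,
    "DC-DC (50 V -> 0.65 V)":    0.97,
}

end_to_end = prod(traditional_ac_stages.values())
print(f"End-to-end efficiency: {end_to_end:.1%}")   # -> ~87.6%
```

Removing or consolidating any one of these stages raises the product, which is exactly what the HVDC schemes described next try to do.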
Next, let's look at the new power architecture: the high-voltage DC (HVDC) data center. According to Delta Electronics' COMPUTEX presentation, current HVDC solutions come in two forms.
▲ An HVDC scheme that still retains the UPS system; a transitional approach
In the first approach, the front-end UPS stage is left unchanged: power is converted to 800 V HVDC distribution in a separate power cabinet (red circle in the figure above) and then delivered to the servers; that is, 800 V DC is distributed through a DC power distribution unit at the back end and stepped down to about 50 V at the rack (orange circle in the figure above). Because this approach still passes through the UPS's multiple conversion stages, it remains a transitional HVDC solution, with an end-to-end efficiency of 89.1%, 1.5 percentage points higher than the traditional architecture's 87.6%.

▲ An HVDC scheme that adopts a solid-state transformer (SST); the most energy-efficient solution
The second solution uses a solid-state transformer (SST, red circle above) to rectify incoming power directly to 800 V DC, replacing the UPS's multiple conversion stages; the 800 V is then fed straight to the rack, where it is stepped down to roughly 50 V. This reduces both the number of conversion stages and the conductor losses, pushing end-to-end efficiency above 92%, which can significantly cut electricity and cooling costs over the long run.
Taking a 100 MW data center as an example, HVDC can save more than 43 million kWh of electricity per year, equivalent to roughly $3.6 million in electricity costs, while also significantly reducing spending on cooling and wiring materials.
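A rough sanity check of that figure is sketched below. The operating assumptions are mine, not the report's: the 100 MW is taken as IT load running at about 90% utilization year-round, efficiency improves from 87.6% (traditional AC) to 92% (HVDC with SST), and electricity costs about $0.084/kWh.

```python
# Rough sanity check of the quoted savings under assumed operating conditions
# (IT load, utilization, efficiencies, and electricity price are assumptions).

IT_LOAD_MW   = 100
UTILIZATION  = 0.90
HOURS_PER_YR = 8760
EFF_AC       = 0.876     # traditional AC chain
EFF_HVDC     = 0.92      # HVDC with SST
USD_PER_KWH  = 0.084

avg_load_mw   = IT_LOAD_MW * UTILIZATION
input_ac_mw   = avg_load_mw / EFF_AC      # grid power drawn by the AC chain
input_hvdc_mw = avg_load_mw / EFF_HVDC    # grid power drawn by the HVDC chain

saved_mwh = (input_ac_mw - input_hvdc_mw) * HOURS_PER_YR
saved_usd = saved_mwh * 1000 * USD_PER_KWH

print(f"Energy saved: ~{saved_mwh/1000:.1f} GWh/year")   # ~43 GWh
print(f"Cost saved:   ~${saved_usd/1e6:.1f} M/year")     # ~$3.6 M
```

Under these assumptions the arithmetic lands in the same ballpark as the article's 43 million kWh and $3.6 million; the exact number depends on load factor and electricity price.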
Next step: distributed backup power systems

Beyond high-voltage DC distribution, AI servers' demand for stable power is also driving upgrades to the backup architecture.
BBU (battery backup unit): a battery module built into each server rack that detects voltage changes in real time and takes over the load within milliseconds, replacing traditional UPS backup.
Supercapacitor: handles microsecond-level power fluctuations, smoothing the supply when a GPU suddenly draws a large amount of power or drops its load in an instant.

Together, these components form a layered line of defense spanning microseconds to minutes, effectively ensuring the high availability of AI server clusters.
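A toy sketch of that layered idea follows. The tiers, response windows, and hold-up times are assumptions made for illustration, not a specification from the article.

```python
# Toy illustration of the layered backup concept: each power disturbance is
# handled by the fastest tier whose hold-up time covers it. Tier names and
# hold-up times below are assumed for illustration only.

TIERS = [
    # (name, responds within [s], can carry the load for [s])
    ("supercapacitor",            1e-6, 0.1),
    ("BBU (rack battery)",        1e-3, 300.0),
    ("generator / grid restore", 10.0,  float("inf")),
]

def covering_tier(outage_seconds: float) -> str:
    """Return the first (fastest) tier whose hold-up time covers the disturbance."""
    for name, _response, holdup in TIERS:
        if outage_seconds <= holdup:
            return name
    return "uncovered"

for outage in (50e-6, 0.02, 60.0, 1800.0):
    print(f"{outage:>12.6f} s disturbance -> {covering_tier(outage)}")
```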
A fundamental transformation: from power supply logic to the industry landscape

The rise of generative AI is accelerating the transformation of data centers' energy logic and architecture. High-voltage DC combined with distributed backup power systems offers a more efficient and scalable power solution.
Although HVDC requires higher up-front capital expenditure and DC safety regulations are strict, as AI server power consumption heads toward the MW level, HVDC's advantages in energy efficiency, space utilization, and operating cost control will become increasingly clear.
With chip designers, cloud service providers, and system manufacturers all investing in it, this "data center power supply revolution" is expected to take full shape within the next few years. (First image source: Hitachi Energy)