Power Deviations and Breakers for Data Centers

Data Center Power Requirements

Power requirements for today’s data centers and server rooms have never been greater. Demand for power continues to grow dramatically as more companies take advantage of the latest server technologies, and IT professionals are deploying power management technologies in servers and communications equipment to measure and control power utilization. Ultimately, all this new technology means IT professionals must learn to better manage what they have and ensure that all that equipment stays online safely and remains available for use.

Power Deviations

The evolution of the laptop computer is a perfect example of how vendors work to extend power to equipment. Laptop design is all about extending battery run time, and according to data collected by power protection company ADP, power management technology enabled the power consumption of laptop processors to be reduced by up to 90 percent when “lightly loaded.” Over time, this technology was adapted for use in server design, resulting in new server technologies whose total power consumption can vary with changes in workload.

While this new technology can save power, other challenges can develop as the envelope is pushed. One issue relates to the design and management of data centers and server rooms, coupled with how to properly manage the current that flows through them. As power fluctuates, breakers for data centers can trip unexpectedly and systems can overheat, ultimately creating a potential loss of redundancy across a variety of systems. All these new power issues create new and evolving challenges for data center designers.

Dynamic Power Use and Breakers for Data Centers

Over the last two decades, server rooms have shifted away from drawing a fairly consistent amount of power, varying only for automated events such as disk drives spinning up or changing speed and fans turning on to cool equipment. Now, power consumption can vary greatly based on the events taking place in the BIOS, chipset, processor and OS of the server, resulting in a “power managed system.”

For example, whenever a processor is at less than 100 percent utilization, the OS will execute an “idle thread,” which causes the processor to enter a “low power state.” The specific power savings associated with the low power state will vary based on the equipment’s specifications. Additionally, the techniques used to trigger lower power states can vary greatly from vendor to vendor and with the type of processor in use. Some of the most common techniques to reach a low power state include reducing the voltages applied to part of a processor, chipset or memory.
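The effect of idle-time low power states on average draw can be sketched with a simple model. The wattage figures below are hypothetical, chosen only for illustration; real values depend on the vendor, the processor and which low power state is actually reached.

```python
# Illustrative model of OS-driven low power states (hypothetical figures).
# When utilization drops below 100 percent, the idle thread parks the
# processor in a low power state for the remaining fraction of time.

def average_power(utilization, active_watts=95.0, idle_watts=12.0):
    """Estimate average processor power draw for a utilization of 0.0-1.0.

    active_watts and idle_watts are hypothetical per-state figures;
    actual numbers vary by vendor and processor.
    """
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be between 0.0 and 1.0")
    return utilization * active_watts + (1.0 - utilization) * idle_watts

print(average_power(1.0))   # fully loaded
print(average_power(0.25))  # lightly loaded: idle thread runs 75% of the time
```

Even this crude model shows why data center power draw is no longer constant: the same server can swing between very different wattages as its workload rises and falls.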

Recently, processor vendors have introduced different ways to conserve power while the CPU is still actively at work. This can involve changing the frequency of the clocks and the magnitude of the voltages applied to the processor.
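The reason changing both voltage and frequency pays off can be seen from the standard approximation that dynamic processor power scales with switched capacitance times voltage squared times clock frequency. A minimal sketch, using hypothetical scaling factors:

```python
# Dynamic processor power scales roughly as C * V^2 * f (switched
# capacitance, supply voltage squared, clock frequency), so lowering
# voltage and frequency together saves power superlinearly.

def dynamic_power_ratio(v_scale, f_scale):
    """Power at a scaled voltage/frequency relative to nominal (C cancels out)."""
    return (v_scale ** 2) * f_scale

# Hypothetical operating point: 80% of nominal voltage, 70% of nominal frequency.
ratio = dynamic_power_ratio(0.8, 0.7)
print(f"{ratio:.3f}")  # 0.448 -> more than half the dynamic power saved
```

Because voltage enters as a square, even a modest voltage reduction dominates the savings, which is why vendors adjust it alongside clock frequency rather than frequency alone.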

As equipment vendors continue to work to conserve power, those charged with keeping equipment online must stay up to date on how these hardware design changes affect not only the different types of equipment housed within a server room, but also how they interact with the power distribution and safety systems, including the breakers for data centers. It is important to understand the power limitations of a data center so equipment can function safely within its power grid. It is also important to know, and periodically test, the breakers for data centers that are in use so they can safely handle any overcurrent that may occur.
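A basic headroom check against breaker ratings can be sketched as follows. It assumes the common practice of limiting continuous loads to 80 percent of a breaker's rating (as in NEC rules for continuous loads); actual requirements depend on local codes and the breaker's specifications, and the 30 A / 22 A figures are purely illustrative.

```python
# Sketch of a continuous-load headroom check for a branch circuit breaker,
# assuming the common 80% derating for continuous loads. Local electrical
# codes and breaker specifications govern the real limit.

def breaker_headroom(breaker_amps, load_amps, derate=0.8):
    """Return remaining continuous-load capacity in amps (negative = overloaded)."""
    return breaker_amps * derate - load_amps

# Hypothetical example: a 30 A branch circuit feeding a rack drawing 22 A.
print(breaker_headroom(30, 22))  # 2.0 A of headroom left
```

A check like this only covers steady-state load; with power managed servers, peak draw under full workload is what matters, which is one more reason to periodically test the breakers actually in service.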