Using a Power Supply Manufacturer to Increase Energy Efficiency of Data Centers within Computing ‘Clouds’

Power supply manufacturers have a growing stake in cloud computing, one of the latest technological advances in the Information Technology (IT) industry. Cloud computing is based around the concept of user access to applications or data via an information “Cloud”, which may be achieved regardless of the user’s location. Essentially, clouds are a series of data centers in various geographical locations, made up of basic physical components such as servers and storage devices, and as such they are susceptible to the same issues faced by other data center installations.

East Coast Power Systems: a Power Supply Manufacturer

One of the main problems currently plaguing IT infrastructures, including cloud computing facilities, is elevated energy consumption, which results in undesirably high energy costs. IT facilities in the United States alone are believed to face annual energy costs well in excess of 4.5 billion dollars. Cloud installations are therefore frequently forced to balance profitability against environmental sustainability (usually measured by their carbon emissions).

Numerous methods are currently being developed to make cloud computing facilities more energy efficient whilst maintaining their profitability. This is, however, new ground, as traditional techniques for improving data center energy efficiency have focused upon cutting energy consumption at one particular site, and have therefore pinpointed center-specific design and component changes.

In contrast to conventional methods, any processes utilized for cloud computing must be applied differently across the various data centers making up the cloud, even down to which power supply manufacturer provides each site's electrical systems. This is due to site-specific factors: energy costs, carbon emission rates, cooling technology, local environmental temperature fluctuations, workload, and CPU power efficiency (Garg et al., 2009). This makes the concept of data center energy efficiency within a cloud system extremely complex. Cloud computing providers are, however, developing new ways of improving energy efficiency across their various data centers whilst maintaining productivity.
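To illustrate why these site-specific factors matter, the toy model below estimates the hourly cost and carbon output of running the same workload at two hypothetical sites. All site names, prices, and efficiency figures are illustrative assumptions, not data from any real provider or from Garg et al. (2009):

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    """Site-specific parameters shaping a data center's energy profile.

    Every field value used below is an invented, illustrative number.
    """
    name: str
    energy_price: float      # dollars per kWh at this site
    carbon_rate: float       # kg CO2 emitted per kWh of local grid power
    pue: float               # power usage effectiveness (cooling overhead)
    watts_per_gflop: float   # CPU power efficiency at this site

def hourly_cost_and_carbon(dc: DataCenter, gflops: float):
    """Estimate hourly cost ($) and carbon (kg CO2) for a compute load."""
    it_kw = gflops * dc.watts_per_gflop / 1000.0  # IT equipment power
    total_kwh = it_kw * dc.pue                    # add cooling overhead
    return total_kwh * dc.energy_price, total_kwh * dc.carbon_rate

# The same workload can differ sharply in cost and emissions by site.
cool_site = DataCenter("north", 0.08, 0.20, 1.2, 3.0)
warm_site = DataCenter("south", 0.12, 0.55, 1.7, 3.0)
print(hourly_cost_and_carbon(cool_site, 1000.0))
print(hourly_cost_and_carbon(warm_site, 1000.0))
```

Even with identical hardware, the warmer, dirtier-grid site costs roughly twice as much per hour and emits several times the carbon, which is exactly the kind of asymmetry a cloud-wide efficiency strategy must exploit.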

The Role of the Power Supply Manufacturer

The first method being explored by cloud computing facilities is Dynamic Voltage Scaling (DVS). DVS involves, as the name suggests, software-controlled scaling of the voltage of the Central Processing Units (CPUs) within the cloud's data centers in relation to the computational workload present. This is beneficial regarding energy consumption, as it also reduces the temperature of the components involved, and therefore the amount of cooling they require.
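A minimal sketch of the DVS idea follows, assuming a hypothetical table of voltage/frequency operating points (real CPUs expose similar tables, but these numbers are invented): a governor picks the lowest point whose frequency still covers the current load, and dynamic power then falls roughly in proportion to V²f.

```python
# Hypothetical DVS governor. Operating points are illustrative and do
# not come from any real CPU datasheet.

# (voltage in volts, frequency in GHz) pairs, lowest first
OPERATING_POINTS = [(0.8, 1.0), (1.0, 1.8), (1.2, 2.6), (1.3, 3.0)]
MAX_FREQ = OPERATING_POINTS[-1][1]

def select_operating_point(utilization: float):
    """Return the lowest (V, f) point whose frequency covers the load.

    `utilization` is the fraction of peak compute demanded (0.0-1.0).
    """
    demand = utilization * MAX_FREQ
    for volts, freq in OPERATING_POINTS:
        if freq >= demand:
            return volts, freq
    return OPERATING_POINTS[-1]

def dynamic_power(volts: float, freq_ghz: float, capacitance: float = 1.0):
    """Dynamic CPU power scales with C * V^2 * f (arbitrary units)."""
    return capacitance * volts ** 2 * freq_ghz

# A lightly loaded CPU can drop to a much lower-power operating point.
low = select_operating_point(0.25)   # -> (0.8, 1.0)
high = select_operating_point(0.95)  # -> (1.3, 3.0)
print(dynamic_power(*low), dynamic_power(*high))
```

Because power scales with the square of voltage, dropping both voltage and frequency at low utilization yields disproportionate savings, which is what makes DVS attractive for cloud workloads with idle periods.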

The second process being tested by some cloud installations involves the manipulation of the meta-scheduler, the software which allocates the computational workload across the range of a cloud's data centers. Garg et al. (2009) outline a method of meta-scheduler manipulation by which centers can maximize their profitability and efficiency. These processes, however, are still in their infancy and in most cases involve an inherent loss of data center performance, except where the power supply manufacturer has developed means to compensate.
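The flavor of meta-scheduling can be sketched with a deliberately simplified greedy allocator: each job goes to whichever data center currently offers the lowest estimated energy cost and still has spare capacity. This is not the algorithm of Garg et al. (2009), and the site parameters are illustrative assumptions only:

```python
import copy

# Hypothetical site table; cost_per_unit folds together energy price
# and cooling overhead into one $/work-unit figure (invented numbers).
SITES = [
    {"name": "east", "cost_per_unit": 0.10, "capacity": 3},
    {"name": "west", "cost_per_unit": 0.07, "capacity": 2},
]

def schedule(jobs, sites=None):
    """Assign each job (a work size) to the cheapest site with room left."""
    sites = copy.deepcopy(sites if sites is not None else SITES)
    placements = []
    for job in jobs:
        open_sites = [s for s in sites if s["capacity"] > 0]
        if not open_sites:
            raise RuntimeError("all data centers are at capacity")
        best = min(open_sites, key=lambda s: s["cost_per_unit"] * job)
        best["capacity"] -= 1
        placements.append((job, best["name"]))
    return placements

# The cheap western site fills up first; overflow falls back to the east.
print(schedule([5, 5, 5]))  # [(5, 'west'), (5, 'west'), (5, 'east')]
```

A real meta-scheduler would weigh carbon rates, deadlines, and data locality alongside cost, but even this greedy sketch shows the core trade-off: routing work toward cheaper sites shifts load, and potentially performance, across the cloud.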

In conclusion, it seems that the best-case scenario for cloud computing installations would be the combination of conventional center-specific alterations with newer approaches such as DVS and meta-scheduler manipulation. It is clear, though, that such installations will constantly face new challenges in reducing their energy consumption. Advances in technology, combined with new out-of-the-box approaches, are nevertheless predicted to help cloud centers find a happy medium between efficiency and profitability.