Minosyants, Darya (University of Illinois-Chicago)

About data centers:

When it comes to data centers, most people seem to be uninterested in or simply unaware of them. Part of the reason is that data centers are usually located in remote areas where energy prices are relatively low. They also typically carry no architectural appeal, since their design is concerned only with performance and stability of operations.

What people do not realize, however, is how much impact these energy-hungry “server farms” have on our environment. In the United States alone there are thousands of data centers holding roughly 10 million servers, with those numbers predicted to increase by 50% in the next four years. These servers operate 24 hours a day, seven days a week. This relatively new but constantly growing industry consumes about 20-30 billion kilowatt-hours of electricity annually, which is enough to power the entire state of Utah. A large data center draws on average about 10 megawatts of electricity, while Google alone draws more than 100 megawatts continuously. At the rate data centers are growing, it will not be long before they become one of the major energy consumers. Fortunately, there are many ways to reduce energy consumption in data centers, as demonstrated in the component-by-component comparison below.
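To put these figures in perspective, here is a minimal back-of-the-envelope sketch (in Python, with assumed inputs) that converts a continuous power draw in megawatts into annual energy in kilowatt-hours:

    HOURS_PER_YEAR = 24 * 365  # servers run around the clock

    def annual_kwh(average_megawatts):
        """Annual energy (kWh) for a facility drawing `average_megawatts` continuously."""
        return average_megawatts * 1_000 * HOURS_PER_YEAR  # MW -> kW, then kW x hours

    large_center = annual_kwh(10)    # a large ~10 MW data center
    google_fleet = annual_kwh(100)   # a >100 MW continuous draw
    print(f"10 MW data center: {large_center / 1e6:.1f} million kWh/year")   # ~87.6
    print(f"100 MW fleet:      {google_fleet / 1e9:.2f} billion kWh/year")   # ~0.88

At roughly 87.6 million kWh per year for a single 10 MW facility, a few hundred such centers already account for the tens of billions of kilowatt-hours cited above.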

One interesting fact is that more than half of the energy used in a data center (over 60%) is consumed by power supply and distribution equipment and by cooling systems. IT equipment consumes only about 30% of total data center energy. In the end, virtually all of the energy consumed becomes excess heat that has to be removed from the data center.
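This breakdown is commonly summarized as power usage effectiveness (PUE): total facility energy divided by the energy delivered to IT equipment. The sketch below simply recomputes that ratio from the rough percentages quoted above; the exact split is an assumption for illustration.

    # Minimal sketch: power usage effectiveness (PUE) from the rough split above.
    # The percentages are illustrative, not measured data.
    energy_share = {
        "power_and_cooling": 0.60,  # power distribution + cooling systems
        "it_equipment": 0.30,       # servers, storage, networking
        "other": 0.10,              # lighting and remaining loads
    }

    total = sum(energy_share.values())          # the whole facility (= 1.0)
    pue = total / energy_share["it_equipment"]  # lower is better; 1.0 is the ideal
    print(f"PUE for this breakdown: {pue:.1f}")  # about 3.3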


Data Center Operations Management:

Traditional Operations:

  • Bad initial decisions by a data center's operating crew can result in increased energy consumption over the lifetime of the facility. Since all other components rely on operations management, it is essential to the overall efficiency of a data center.

  • In traditional data centers the operations crew does not deal directly with energy bills, so it seldom becomes aware of how efficient the equipment is.

  • In traditional data centers the roles are clearly defined: IT staff does not usually get involved in facility operation. As a result, lack of knowledge, division of power, and miscommunication between staff make energy-efficient facility upgrades a very painful and lengthy process.

  • Vendors do not show initiative to sell more energy-efficient technologies to their clients.

  • Vendors do not understand the needs of data centers and how they function.

New Operations:

  • Energy-conscious decisions and awareness of life-cycle costs versus initial costs can cut energy consumption in half.

  • In energy-smart data centers, the operations crew is fully aware of its energy expenses and of the energy efficiency of the equipment (performance per watt). The crew can identify shortcomings and either resolve them or turn to other specialists.

  • Data center personnel understand both IT systems and physical infrastructure systems (no separation of roles as in traditional data centers). Therefore, a new group of specialists under IT is created, who directly address issues specific to hardware planning, electrical systems, heat removal, and physical data center monitoring. This would prepare a data center for a much faster and smoother upgrade.

  • Vendors should also be involved in the process and should educate the owner about energy savings and life-time cost benefits versus initial costs. A good example of that would be Hewlett-Packard's air conditioning sensors, which adjust to specific data center needs.

  • Vendors should become systems and server specialists and develop a full understanding of data center needs and functions.


Server Processors and IT Technology (including processors, memory, and in-box power supplies):

Traditional Servers and Standards:

  • Traditional IT equipment uses up to about 30% of total data center energy.

  • Standard horizontal boxes, each of which needs its own power supply and networking connections.

  • Traditional servers ran only one application per server, which used only about 5% of total server capacity.

  • Previously there were no common specifications for servers shared between manufacturers. Each manufacturer provided its own specifications, which made it hard to compare equipment.

Upgraded Servers and Standards:

  • Improved servers can increase energy efficiency anywhere from 25% to 80% over traditional servers (depending on specific conditions).

  • Blade servers, or vertically stacked boxes, greatly reduce space requirements (and therefore power and electrical costs). Blade servers also plug into a common “backplane” structure, where they share power supplies and networking connections.

  • “Virtualization” software allows a single server to be divided into several, which means that multiple different applications can be run simultaneously. In other words, one server can do the work of several, which raises utilization to about 60%-80% of server capacity (and increases energy efficiency); see the sketch after this list. The number of servers can also be greatly reduced, saving space and lowering cooling loads.

  • Recently the EPA started comparing servers using Energy Star efficiency standards. This makes it easy to compare servers across various manufacturers.

  • Implementation of low-voltage processors.

  • Smaller low-voltage chips made with advanced materials.
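A rough consolidation estimate, using the utilization figures quoted in this list (assumed numbers, not measured data), shows why virtualization saves so much energy:

    # Illustrative sketch: how many physical hosts remain after consolidating
    # lightly loaded single-application servers onto virtualized machines.
    # Utilization figures are the rough ones quoted above.
    import math

    def consolidated_hosts(num_servers, current_util=0.05, target_util=0.70):
        """Physical hosts needed to carry the same total workload after virtualization."""
        total_work = num_servers * current_util      # aggregate demand, in "server units"
        return math.ceil(total_work / target_util)   # hosts needed at the target utilization

    before = 1000
    after = consolidated_hosts(before)
    print(f"{before} servers at 5% utilization -> about {after} virtualized hosts at 70%")
    # Roughly 72 hosts carry the same work, which is where the space,
    # power, and cooling savings come from.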


Power Supply and Distribution (UPS, switchgear, generators, PDUs, and batteries):

Traditional Power Supply and Distribution:

  • Traditional components combined consume about 23% of total data center energy.

  • Traditional components are also only about 65% efficient.

  • Power supplies are traditionally a huge source of waste heat that needs to be removed.

  • Power distribution units operate well below their full load capacity (for fear of exceeding the actual capacity).

  • Underutilization of components leads to low-load losses: fixed losses dominate when equipment runs well below capacity.

  • Back-up power generators and uninterruptible power supply (UPS) systems are oversized as a safety factor, which decreases UPS efficiency when it is run at low loads.

  • Power supplies carry no common rating, so they cannot be compared across manufacturers.

Improved Power Supply and Distribution:

  • Energy-smart components can increase efficiency to about 90% (roughly a 25% reduction in energy consumption); see the sketch after this list.

  • Reducing power losses in power supplies will also cause efficiency improvements in distribution and cooling systems (chain reaction).

  • Power supplies that comply with the 80 PLUS performance specification operate at 80% efficiency or better at 20, 50, and 100 percent of rated capacity (no need to worry about underutilization of components). Compliance with a common rating system also makes it easy to compare power supply and distribution components across manufacturers.
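The sketch below (assumed figures taken from the comparison above) shows how raising power-path efficiency from roughly 65% to 90% changes both the electricity drawn and the waste heat the cooling system must then remove, for the same IT load:

    # Illustrative only: input power and waste heat for a fixed IT load at two
    # power-path efficiencies (the ~65% and ~90% figures quoted above).
    def power_path(it_load_kw, efficiency):
        """Return (input power drawn, power lost as heat) at a given efficiency."""
        input_kw = it_load_kw / efficiency
        return input_kw, input_kw - it_load_kw

    it_load_kw = 1000.0  # a hypothetical 1 MW of IT equipment
    for label, eff in [("traditional ~65%", 0.65), ("energy-smart ~90%", 0.90)]:
        drawn, lost = power_path(it_load_kw, eff)
        print(f"{label}: draws {drawn:.0f} kW, wastes {lost:.0f} kW as heat")
    # ~1538 kW vs ~1111 kW drawn -- roughly the 25% reduction cited above,
    # before counting the knock-on savings in the cooling system.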


Cooling Technologies (including air conditioners, fans, humidifiers, pumps, chillers, and cooling towers) and Floor Layout:

Common (Traditional) Cooling:

  • Traditional cooling consumes about half (45%) of all data center energy.

  • If factors like plenum height, flow rates of computer-room air handlers (CRAHs), and possible obstructions (underfloor blockage) are not checked with the proper software, the cooling system has to work harder to deliver the needed cooling and therefore consumes extra power.

  • Installing too few or too many vented floor tiles, or placing them incorrectly, draws extra energy.

  • Not taking advantage of air-conditioning economizer modes, even though they already exist on the equipment.

  • Common hot aisle/cold aisle arrangements are usually not very efficient because they still allow some mixing of hot and cold air.

  • Traditional data centers that move to high-density server implementations typically try to modify existing infrastructure through additional construction. They then attempt to cool equipment by flooding the room with as much cool air as possible, using systems that drive air at high pressure over long distances and therefore consume a great deal of extra power.

  • Oversizing cooling systems beyond what the IT load requires creates fixed losses that become an even larger part of the electrical bill.

Alternative (Efficient) Cooling:

  • Improved cooling systems can reduce energy consumption by 25% in real-world applications.

  • Computational fluid dynamics (CFD) modeling is performed to predict airflow and possible obstructions. It can also inform plenum height, optimize the positioning of conduits, and predict hot spots.

  • CFD can also guide the “tuning” of vent tiles by varying their location, regulating the number of vents open at any time, and optimizing the positioning of CRAH units.

  • Utilization of air conditioner economizer modes helps save electricity dramatically.

  • In hot aisle/cold aisle configurations, the ceiling plenum can be used as a return path to reduce the possibility of hot air recirculating into the space. All cable and other openings also have to be sealed to reduce the possibility of hot and cold air mixing.

  • Closely coupled cooling systems, such as fan-powered cabinets, can be used to draw cold air directly from the raised floor and discharge it into the hot aisle. When a data center expands, targeting the specific areas that need cooling is preferable to adding more cooling aisles. As a rule of thumb, shorter air travel paths mean less fan power and therefore less energy consumed.

  • Liquid cooling (water or bio-oil) suits servers with high power densities and large accumulations of heat. Compared to air cooling systems, chilled-water systems are about 30% more efficient.

  • Dynamic smart cooling: sensors measure the temperature of servers in relation to their workload and send the measurements to a central computer. The central computer then adjusts the air-conditioning units to supply cooling only to the servers that require it (see the control-loop sketch after this list). This system cuts electricity use by 25 to 40 percent.

  • Proper sizing (or rightsizing) of cooling systems to the IT load has the potential to cut the electrical load by up to 50% in real-world installations. The move to modular, scalable infrastructure is greatly beneficial to energy reduction, since capacity is added only when growth in the IT load demands it (“data center in a box”).
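As a rough illustration of the dynamic smart cooling idea described in this list, here is a minimal control-loop sketch. All names, thresholds, and the zone layout are hypothetical; this is not any vendor's actual interface, only the shape of the feedback loop.

    # Hypothetical sketch of a dynamic smart cooling feedback loop: sensor
    # readings per cooling zone decide whether each CRAH unit ramps up, backs
    # off, or holds. Thresholds and zone names are made up for illustration.
    TARGET_INLET_C = 25.0   # desired server inlet temperature
    DEADBAND_C = 1.5        # tolerance before the controller reacts

    def adjust_cooling(zone_temps_c):
        """Decide, per cooling zone, whether its CRAH unit should ramp up, back off, or hold."""
        actions = {}
        for zone, temp in zone_temps_c.items():
            if temp > TARGET_INLET_C + DEADBAND_C:
                actions[zone] = "increase airflow"   # hot spot: send more cold air here
            elif temp < TARGET_INLET_C - DEADBAND_C:
                actions[zone] = "decrease airflow"   # overcooled: save fan power
            else:
                actions[zone] = "hold"
        return actions

    # One pass of the loop with made-up sensor readings:
    readings = {"aisle_1": 27.8, "aisle_2": 24.6, "aisle_3": 21.9}
    print(adjust_cooling(readings))
    # {'aisle_1': 'increase airflow', 'aisle_2': 'hold', 'aisle_3': 'decrease airflow'}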


Advanced Technologies (Present and Future):

Excess heat-to-energy conversion:

  • Virtually all of the energy used by servers, power supply and distribution systems, uninterruptible power supplies, and lighting becomes excess heat that has to be removed from the data center. This puts a huge load on cooling systems. One way to reduce the cooling load is to find ways to reuse excess heat within data centers or related operations.

  • In 2001, MIT scientists claimed to have developed a device that would convert heat directly into electricity (without the moving parts of a generator), using vacuum thermionics-based technology. This device would be silent, vibration-free and have low maintenance costs. The only downside so far is that it is not very efficient.

  • In 2005, University of Utah physics professors started working on a device that, as they claim, could turn heat into sound and then into electricity (the Thermal Acoustic Piezo Energy Conversion project). The project is targeted for completion by 2010.


Other Components:

Lighting:

  • Can account for about 2% of total data center energy usage.

  • Energy-efficient lighting controlled by timers or motion activation can greatly increase energy efficiency and reduce cost.

  • If not controlled, lighting produces excess heat that in turn must be removed by the cooling system, effectively doubling its cost.


Challenges to Data Center Efficiency:

  • Data center owners and managers might not be aware of how wasteful their data centers are. In most instances the data center's electrical bill is part of a larger electrical bill, so they cannot monitor efficiency directly.

  • Energy remains a relatively small cost for data center clients, so they are not forced to address it.

  • Owners and managers are skeptical of new, inventive technologies that do not have sufficient testing and data to back them up. Liquid cooling, for example, may result in major equipment damage if leakage occurs. By contrast, they feel safe with the traditional systems they have used before.

  • Owners and managers do not see the potential benefits as worth the risk. They need information and data from manufacturers to convince them to take it.

  • Owners rely on their peers when making decisions. If they know of another data center owner who tried a certain technology and it did not work, they are unlikely to take the chance.

  • Collaboration and interaction between owners, financial managers, data center operators, architects, manufacturers and vendors is the best way to increase awareness of energy issues and provide a sufficient and reliable solution which will not hinder the operations of a data center.

