Expert Column

Critical Cooling Solutions in a Data Center

With cooling consuming close to 40% of a data center's power, directly impacting the overall PUE, and with data centers growing at a 25% CAGR, it is very important to design energy efficient and sustainable cooling units.

A data center can be a building, a dedicated space within a building, or a group of buildings that houses computer systems and their associated components, such as telecommunications and storage systems.

The Indian data center industry began with the Internet boom of the 1990s. During those years, data storage and processing via the Internet was becoming more crucial; the earliest storage device had been a 5 MB disk housed in a unit as large as 1.5 sq m. Ever since, data storage has evolved from large disks to floppy and CD drives, to USB drives, and today to data stored on the Cloud in a data center. As per Global Web Trends, around 65% of people use the Internet daily, spending an average of 2.5 hours on social media platforms; there are 1 billion registered Aadhaar users, 1.05 billion phone users, 0.8 million users on Disha and 1.9 million on the MyGov platform. The Digital India scheme and the recent COVID-19 pandemic have further raised the number of people connected on devices.

These data centers are classified into the categories below (a simple illustration follows the list):
Telecom / Edge facilities – up to 100 kW
IT server room – 100 kW to 500 kW
Medium and large data center – 500 kW to 1 MW
Hyperscale data center – 1 MW onward
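As a simple illustration of this classification, using the thresholds from the list above (the function name is hypothetical):

```python
def classify_data_center(it_load_kw: float) -> str:
    """Map a facility's IT load in kW to the size categories listed above."""
    if it_load_kw <= 100:
        return "Telecom / Edge facility"
    if it_load_kw <= 500:
        return "IT server room"
    if it_load_kw <= 1000:
        return "Medium or large data center"
    return "Hyperscale data center"

print(classify_data_center(750))  # Medium or large data center
```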

Data Centre Ratings
The Telecommunications Industry Association's ANSI/TIA-942 standard publishes ratings (Rated-1 to Rated-4) for data centers covering electrical, mechanical, telecom and network cabling. The Uptime Institute created the Tier classification system (Tier I to Tier IV) as a means to evaluate data centre infrastructure (electrical and mechanical). Tier I / Rated-1, the lowest rating with no redundancy, allows downtime of 28.8 hours per year, while Tier IV / Rated-4, the highest rating with 2N+1 redundancy, allows downtime of only 26.3 minutes per year.
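As a quick cross-check of these downtime figures, the sketch below converts the availability percentages commonly associated with the lowest and highest tiers (99.671% and 99.995%, figures assumed here rather than stated above) into annual downtime:

```python
HOURS_PER_YEAR = 8760  # 24 hours x 365 days

def annual_downtime_hours(availability_pct: float) -> float:
    """Allowed downtime per year for a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

# Assumed availability figures for the lowest and highest tiers
print(f"Tier I / R1 : {annual_downtime_hours(99.671):.1f} hours/year")         # ~28.8 hours
print(f"Tier IV / R4: {annual_downtime_hours(99.995) * 60:.1f} minutes/year")  # ~26.3 minutes
```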

Where there were once only around 500,000 data centers worldwide, today there are more than 8 million. According to IDC, by 2030 data centers around the world could be consuming 10 to 20% of the world's electricity production, compared to about 3% at present. These projections make it evident that keeping a check on power consumption is crucial.

So how do we measure energy efficiency in a data center?
The answer is the Power Usage Effectiveness (PUE) metric.
PUE = Total facility power / IT equipment power
A typical power breakdown is: IT equipment 50%, cooling 40%, electrical losses 7%, lighting 3%.
A PUE of 3.0 is very inefficient and 1.0 is the ideal; as per the Uptime Institute's 2020 survey, the average PUE globally and in Asia is around 1.59.
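A minimal sketch of the PUE arithmetic, using a hypothetical 1 MW facility split per the breakdown above:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical split: IT 500 kW, cooling 400 kW, electrical 70 kW, lighting 30 kW
it_kw = 500.0
total_kw = 500.0 + 400.0 + 70.0 + 30.0
print(f"PUE = {pue(total_kw, it_kw):.2f}")  # 2.00 for this illustrative split
```

For this illustrative split the PUE works out to 2.0, well above the 1.59 survey average, which shows how heavily the cooling share weighs on the metric.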

With cooling taking up 40% of the power behind the PUE, it is crucial to design the right cooling for the data center space. What happens if temperature and humidity levels are not precisely controlled in a data center?
High & Low Temperature: Temperature variations can alter the electrical and physical characteristics of electronic chips and other board components, causing faulty operation or failure.
High Humidity: High humidity can result in condensation and corrosion at the server rack level.
Low Humidity: Low humidity greatly increases the possibility of static electric discharges, which can corrupt data and damage hardware.

Precise cooling is achieved by Close Control Units (CCUs). These systems are designed to control temperature precisely within (±) 1 °C and relative humidity within (±) 5%. To meet the critical 24/7, 365-day, high sensible cooling requirement of applications such as data centers, CCUs are equipped with advanced components and intelligent controls that monitor, operate and sequence the units automatically. A CCU works on the typical vapour compression cycle and performs five processes: cooling, heating, humidification, de-humidification and filtration, keeping in view reliability and the sensitivity of the data center to changes in temperature and RH. CCUs operate at a high Sensible Heat Ratio (SHR) of 0.9 to 0.99, where Total Cooling = Sensible + Latent and SHR = Sensible Cooling / Total Cooling. In a technology room the cooling load is predominantly sensible and the latent load is low: the sensible load raises the air temperature, while the latent load arises from the change of phase of moisture from liquid to vapour.
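As a quick illustration of the SHR arithmetic (the load figures below are hypothetical):

```python
def sensible_heat_ratio(sensible_kw: float, latent_kw: float) -> float:
    """SHR = sensible cooling / (sensible + latent) total cooling."""
    return sensible_kw / (sensible_kw + latent_kw)

# Hypothetical technology-room load: almost entirely sensible, very little latent
print(f"SHR = {sensible_heat_ratio(sensible_kw=95.0, latent_kw=5.0):.2f}")  # 0.95
```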

CCUs are designed to meet these high sensible heat ratios. In contrast, a typical comfort AC has an SHR of 0.65 to 0.70 and hence requires a 30 to 40% higher capacity unit if used for a technology room. Its excess latent cooling means that too much moisture is continually being removed from the air, so an energy-expensive humidifier is required to replace that moisture. CCUs are designed with a high air flow of 500 to 600 CFM/TR, which improves air distribution and reduces hot spots; they are built for 24/7 operation (8,760 hours a year), equipped with advanced controls and operating methods, supplied as integrated packaged units with in-built redundancy, and able to work with longer refrigerant piping runs (up to 100 m).
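The capacity penalty of a low SHR can be sketched as below, assuming a hypothetical 100 kW sensible load, an SHR of 0.95 for the CCU and 0.70 for the comfort AC:

```python
def required_total_capacity_kw(sensible_load_kw: float, shr: float) -> float:
    """Total cooling capacity needed to deliver a given sensible load at a given SHR."""
    return sensible_load_kw / shr

sensible_load = 100.0  # hypothetical sensible IT load in kW
ccu_kw = required_total_capacity_kw(sensible_load, shr=0.95)      # high-SHR CCU
comfort_kw = required_total_capacity_kw(sensible_load, shr=0.70)  # typical comfort AC
print(f"CCU capacity       : {ccu_kw:.0f} kW")      # ~105 kW
print(f"Comfort AC capacity: {comfort_kw:.0f} kW")  # ~143 kW, roughly 35% larger
```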

Components in a CCU
Compressor – fixed or variable speed type
Heat exchanger coil
Valves – thermostatic / electronic expansion valves
Filters – G4, washable G4
Refrigerant gas – R407C / R410A / R134a
Fans – electronically commutated (EC) fans, axial fans
Heater – strip type heater with stage-wise control
Humidifier – electrode / bottle type
Controllers – to control, operate, sequence and integrate with sensors; display system to indicate air flow / operating temperatures, alarms, BMS ports, event logs, etc.

Cooling Operation
The room air may be continuously filtered with MERV 8 filters as per ASHRAE Standard 127 (ASHRAE 2007), and air entering the data centre may be filtered with MERV 11 or MERV 13 filters as per ASHRAE (2009b).
Supply air, as per ASHRAE (2011/2015), should be at 18 °C to 27 °C at the server inlet, with a maximum delta T of 15 °C between the supply air at the server inlet and the return air to the CCU, to achieve the best COP and efficiency during operation (OPEX).
Server blades are placed horizontally in the rack and are designed to take air in at the front of the unit (cold aisle) and exhaust it through the back of the unit (hot aisle). The cooling unit air flow should match the server air flow (Qpac = Qsr), with the server air flow direction being front suction / rear discharge.
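The link between IT load, supply-return delta T and required air flow can be sketched as follows, assuming standard air density and specific heat (the 100 kW load is hypothetical):

```python
AIR_DENSITY = 1.2          # kg/m^3, approximate
AIR_SPECIFIC_HEAT = 1.006  # kJ/(kg*K)

def required_airflow_m3s(sensible_load_kw: float, delta_t_c: float) -> float:
    """Air flow needed to carry away a sensible load at a given supply-return delta T."""
    return sensible_load_kw / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_c)

# Hypothetical 100 kW IT load at the 15 °C maximum delta T quoted above
flow = required_airflow_m3s(100.0, 15.0)
print(f"Required air flow: {flow:.1f} m^3/s (~{flow * 2119:.0f} CFM)")  # ~5.5 m^3/s
```

A smaller delta T requires proportionally more air flow, which is consistent with the high 500 to 600 CFM/TR sizing quoted earlier for CCUs.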

Therefore, supply air from the units should be bottom throw or frontal throw to match the server air flow direction; cold air, being higher in density, stays low, while hot air, being lower in density, naturally rises and returns to the top of the CCU, ensuring efficient performance with lower OPEX.

CCU units shall be floor mounted.

CCUs are available with a choice of cooling medium: refrigerant, chilled water, combi-cool and free-cooling units. For larger hyperscale facilities, the trend is towards computer room air handlers, direct and indirect air handlers, and fan wall units, which are scalable and are in turn connected to chillers.

To conclude: with cooling consuming close to 40% of facility power and data centers growing at a 25% CAGR, designing energy efficient and sustainable cooling units is essential to keeping the overall PUE in check.


Authored by
Sandeep Vasant,
CDCP, IGBC-AP & GEM-CP
Member – ISHRAE, CII IGBC
Sales Head and Business
Development – PAC (India)
FlaktGroup India Pvt Ltd
