The most important decision you can make has nothing to do with data.
This post was submitted by TTI Inc. and written by Ryan Wade, VP Strategic Accounts Cloud, Data Center at Molex.
Although they are often hidden from sight, there is no debating that our reliance on data centers is growing exponentially. In its report "Data Age 2025," analyst firm International Data Corp. (IDC) forecasts that, within four years, society will generate 175 zettabytes of data each year, somewhere between two and three times the amount generated in 2021.
IDC’s report goes on to suggest that, by 2025, six billion connected users will interact with a data center almost 5,000 times per day—three times today’s rate, and ten times the rate just a decade earlier. The number and complexity of cloud-based services is increasing, especially via mobile devices and billions of Internet of Things (IoT) nodes, which further heightens the reliance on data centers. In the past, data centers were used primarily for storing data, and access time was not critical.
Today, however, low latency is vital in order to support high-demand applications and capabilities, such as video streaming and voice assistants. Alongside rapid changes in the data landscape will be a new definition for key attributes of the modern data center—with flexibility, adaptability, scalability and reliability among the critical needs.
Increasing Energy Efficiency
Each of these trends shines a spotlight on the need for efficiency. As the energy consumed by data centers inevitably rises, efficiency has become not just desirable but critical.
In November 2017, the International Energy Agency's "Digitalisation and Energy" report estimated that data centers consumed between 1 and 1.5 percent of the world's electricity. While efficiency gains have ensured that energy usage does not rise in direct proportion to data usage, data centers could well become even more significant energy consumers before long.
Organizations building their own data centers, or relying on partner firms to do so, are facing unprecedented energy-related challenges. Many are adopting a range of measures to regularly monitor their performance, with power usage effectiveness (PUE) being one of the most common benchmarks. PUE is the ratio of a facility's total power consumption to the power consumed by its IT equipment; the remainder goes to supporting infrastructure such as battery management, uninterruptible power supplies and cooling. Essentially, PUE challenges data centers to spend as little energy as possible on activities that don't add value.
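As a sketch of the arithmetic, PUE can be computed from two metered values; the figures below are purely hypothetical:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt reaches the IT equipment;
    the lower the value, the less energy is spent on overhead
    such as cooling and power conversion.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,500 kW total draw, of which 1,000 kW
# powers the servers, storage and networking gear.
print(round(pue(1500, 1000), 2))  # prints 1.5
```

A PUE of 1.5 means that for every watt delivered to IT equipment, another half watt goes to supporting infrastructure.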
Why Location Is a Primary Consideration for Cooling
Designing and managing a data center to meet customers’ growing data requirements while being environmentally aware and returning a profit is a challenging, multi-faceted exercise. One of the key decisions is data center location, which involves several factors: the local economy, proximity to customers, availability of sufficiently reliable and affordable electricity, sufficient networking connectivity, and the taxation landscape.
From an environmental and PUE perspective, geographical location is higher on the list than many may think. The location will largely determine the energy costs and, to an extent, whether renewable energy is available. Remember that energy costs are often a very significant proportion of the lifetime costs of a data center.
Given the benefit of a data center to the local economy, astute operators will often consider one location versus another to ultimately obtain the best terms.
However, location can also be leveraged to reduce running costs. The heat generated by the servers is a good measure of electrical losses: every watt of waste heat represents inefficiency, and removing that heat consumes yet more energy, so minimizing it pays off twice.
In a cool climate, cooling can be achieved for much of the year by filtering ambient air from outside and running it through heat exchangers, saving the not inconsequential cost of running air conditioning. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) publishes thermal guidelines showing where in the world this “free cooling” is available and for how long each year.
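The economics of "free cooling" hinge on how many hours per year the outside air is cool enough to use. A minimal sketch of that estimate, assuming a hypothetical 27 °C supply-air threshold and made-up hourly temperature data (in practice, the threshold would come from ASHRAE's guidelines and the temperatures from local weather records):

```python
# Hypothetical hourly outdoor temperatures in °C (a repeating weekly
# pattern used purely for illustration).
hourly_temps = [4, 6, 9, 12, 15, 18, 21, 24, 26, 28, 30, 25] * 14

# Assumed maximum outdoor temperature at which filtered ambient air
# can cool the facility without running chillers.
FREE_COOLING_MAX_C = 27

free_hours = sum(1 for t in hourly_temps if t <= FREE_COOLING_MAX_C)
fraction = free_hours / len(hourly_temps)
print(f"{fraction:.0%} of hours eligible for free cooling")  # prints 83%...
```

The same calculation against a full year of local weather data is one quick way to compare candidate sites before detailed engineering studies.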
Another source of “free cooling” is the use of seawater for coastal data centers. The use of environmental elements to partially or wholly cool a data center is very attractive, but moisture and other pollutants that will inevitably get inside can degrade lower quality components.
Further, the many connectors that carry data and power in data centers with ambient-assisted cooling require the highest levels of plating, plus full testing and qualification, to ensure they can operate efficiently under such conditions.
London is the largest data center hub in Europe, with 495MW of data center power capacity—more than Paris, Frankfurt and Amsterdam. London also boasts a well-established and abundant fiber network, and many colocation and cloud providers have established themselves there to be near the financial center.
However, increasing real estate prices and high energy costs mean that the hyperscale operators (Facebook, Microsoft and Amazon) have favored Sweden, Denmark and Ireland, where taxes, land and energy prices are lower. But London’s cool, temperate climate (which enables outside air cooling for most of the year) may prompt these organizations to think again.
Contrast this with Singapore, which views data centers as key to earning its status as an Asian business hub and financial center. The city-state provides 290MW of power for data centers, a figure which is growing rapidly. However, land is scarce (and therefore expensive) and the year-round hot and humid climate precludes any form of evaporative cooling, requiring permanent use of chillers. The cost implications of forced cooling are so great that, in 2021, it prompted the Singaporean government to issue a moratorium on the building of new data centers while more efficient power solutions were explored.
Benefits of Incremental Improvements
The heat generated in a data center is an excellent guide to its efficiency. Engineers have long understood that the best way to eliminate waste heat is not to generate it in the first place. To achieve this, substantial work has been carried out to increase efficiency within server power supplies, particularly in advanced topologies and new semiconductor materials. However, the opportunities for significant ongoing gains within power systems are few and far between, so designers often turn their attention to other aspects of design. Incorporating high-quality and efficient interconnect components is one way energy consumption can be reduced.
While the savings per server may be relatively small, across all the servers in a data center they add up to appreciable gains. For instance, advances in heat sink technologies are enabling highly efficient, reliable and resilient thermal management strategies that support higher density in both copper and optical connectivity. Looking ahead, connectors with strong signal integrity and low insertion loss allow designers to eliminate retimers, saving the energy those devices would otherwise have consumed.
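To see how small per-server savings compound at fleet scale, consider a back-of-the-envelope calculation. The fleet size and per-server figure below are assumptions chosen purely for illustration, not measurements from any real deployment:

```python
SERVERS = 50_000              # hypothetical fleet size
WATTS_SAVED_PER_SERVER = 5.0  # assumed saving, e.g. from eliminating a retimer
HOURS_PER_YEAR = 8760         # 24 hours x 365 days of continuous operation

# Convert watts to kilowatts (divide by 1,000) and scale to a year.
annual_kwh = SERVERS * WATTS_SAVED_PER_SERVER * HOURS_PER_YEAR / 1000
print(f"{annual_kwh:,.0f} kWh saved per year")  # prints 2,190,000 kWh saved per year
```

Even a five-watt saving per server, trivial in isolation, amounts to millions of kilowatt-hours annually at this scale, before counting the cooling energy no longer needed to remove that heat.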
A Commitment to the Environment
In our data-hungry world, the shift to digitization is only going to accelerate. As a result, flexibility, adaptability, scalability and reliability are key attributes of the modern data center. But efficient energy use remains a primary concern. Geographic location is one of the most significant single decisions to be made for a data center.
Once that is settled, partnering with innovators who are committed to environmental stewardship helps to ensure that energy-efficient design is at the heart of the data center to drive system efficiency and sustainability.