Powering down: Researchers reducing energy usage at data centers

Every time you post an update to Facebook, place an order on Amazon or stream a movie over the Internet, a computer server somewhere is processing and delivering the data you requested.

To provide that reliable, nearly instantaneous access, companies such as Google and Microsoft have invested in massive “server farms” with banks of high-powered computers to handle the growing number of requests submitted via laptops, desktop computers, tablets, smartphones and other Internet-enabled devices.

But while these data centers are necessary to keep the steady stream of information flowing, they also consume large amounts of power to run the computers and keep them cool.

“Just in the U.S. alone, these data centers consume close to 100 billion kilowatt hours of energy each year,” said Sudeep Pasricha, an associate professor in the Department of Electrical and Computer Engineering at Colorado State University. “That’s almost twice the electricity needed to power the whole state of Colorado for a year.”

Working smarter

Pasricha should know. For the past three years, he has been leading a research project funded by the National Science Foundation, as well as collaborating on projects with the U.S. Department of Defense and Oak Ridge National Laboratory, to find ways to reduce energy consumption at data centers by helping them operate smarter.

Pasricha and his team, which includes CSU professors H.J. Siegel and Tony Maciejewski as well as several graduate students and postdocs, have developed smart algorithms that evaluate data requests as they flow into a data center and intelligently route them to specific servers. The algorithms keep a data center operating in an energy-efficient manner by regulating how individual servers are used.

For example, depending on the traffic, the algorithms might distribute requests to just a few computers and keep the rest in sleep mode to reduce power consumption. In most commercial data centers, requests are assigned randomly to several machines, which can result in high energy costs.

“In many data centers today you can have 10 machines running at 10 percent capacity, which is extremely wasteful, rather than just having two machines handling all the requests and operating at 50 percent capacity, with the remaining eight machines in hibernation or low-power mode,” Pasricha said. “Our algorithms aim to find energy-optimal allocations of requests to each server, and attempt to put servers in hibernate mode whenever possible to save energy.”
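The consolidation idea behind that example can be sketched in a few lines of code. The Python snippet below is a minimal illustration, not the CSU team's actual algorithm: it uses a simple first-fit heuristic to pack request demand onto as few active servers as possible, waking a sleeping machine only when the active ones are full. The server names, the per-server capacity and the 50 percent utilization cap are all assumptions chosen to mirror the example above.

    # Illustrative sketch of energy-aware request consolidation -- a toy
    # first-fit heuristic, not the CSU team's actual algorithm.
    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        capacity: float      # requests/sec the machine can sustain
        load: float = 0.0    # requests/sec currently assigned to it

    def consolidate(demands, servers, util_cap=0.5):
        """Pack per-request demand onto as few servers as possible.

        A server accepts work until it reaches util_cap of its capacity;
        machines left with no work are flagged for hibernation.
        """
        active, asleep = [], list(servers)
        for demand in demands:
            # First fit: reuse an already-awake server when one has room.
            target = next((s for s in active
                           if s.load + demand <= util_cap * s.capacity), None)
            if target is None:
                if not asleep:
                    raise RuntimeError("demand exceeds consolidated capacity")
                target = asleep.pop(0)   # wake one more machine
                active.append(target)
            target.load += demand
        return active, asleep            # asleep -> hibernate / low-power mode

    # Ten identical machines, with total demand equal to one machine at 100%:
    farm = [Server(f"node{i}", capacity=100.0) for i in range(10)]
    active, asleep = consolidate([10.0] * 10, farm)
    print(len(active), "active,", len(asleep), "hibernating")  # 2 active, 8 hibernating

Under those assumptions the toy scheduler reproduces the scenario in the quote: two machines run at 50 percent capacity while the other eight hibernate. A production allocator would also have to weigh wake-up latency and transition energy before putting a server to sleep.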

Keeping their cool

Pasricha and his team have also considered ways to reduce the energy spent cooling data centers, which in some cases can rival the energy the servers themselves spend on computing.

They have measured airflow and temperatures in the server farm in the basement of CSU’s Engineering building to create models of its heat flow and cooling infrastructure. These studies have helped them refine their algorithms to reduce both the energy servers spend on computing and the energy needed to cool them by removing the massive amounts of heat they generate.
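To see why the cooling side matters, consider a simple back-of-the-envelope model. The Python sketch below is an illustrative stand-in, not the heat-flow models the team built from its measurements: it assumes cooling power equals compute power divided by a chiller coefficient of performance (COP), and uses a hypothetical quadratic COP curve in which efficiency improves as the cold-air supply temperature rises.

    # Back-of-the-envelope cooling cost -- an illustrative stand-in, not the
    # heat-flow models built from the CSU server-farm measurements.

    def cop(supply_temp_c: float) -> float:
        """Chiller coefficient of performance vs. supply temperature (C).

        Hypothetical quadratic fit: cooling gets more efficient as the
        room can safely be run warmer.
        """
        return 0.0068 * supply_temp_c**2 + 0.0008 * supply_temp_c + 0.458

    def total_power(it_power_kw: float, supply_temp_c: float) -> float:
        """Compute power plus the cooling power needed to remove its heat."""
        return it_power_kw + it_power_kw / cop(supply_temp_c)

    for t in (15, 20, 25):
        print(f"{t} C supply: {total_power(100.0, t):.0f} kW total for 100 kW of IT load")
    # -> 150 kW, 131 kW, 121 kW: running warmer cuts the cooling bill.

In a model like this, placement decisions that avoid hot spots let the whole room run at a higher supply temperature, which is how a scheduler can save cooling energy on top of compute energy.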

Some of these algorithms are also being tested in Oak Ridge National Laboratory’s supercomputing facilities and at undisclosed U.S. Department of Defense sites.

Initial data indicates that adding these smart algorithms to organize and distribute the workload in large-scale computing facilities can reduce their energy consumption by 30 to 40 percent.

“That’s quite substantial when you consider how much energy these data centers consume,” Pasricha said.

Demand for data

Every bit of efficiency helps. The number of data server farms is growing rapidly – and not just in the United States. Around the world, companies are adding server farms to handle rising demand for data.

In a 2013 report, Mark Mills, chief executive officer of the Digital Power Group, surveyed the electricity consumed by the global digital ecosystem. At the time, he estimated that it used about 1,500 terawatt-hours per year – roughly the combined electricity generation of Japan and Germany, and as much as was used for global illumination in 1985.

And Mills predicts demand will only continue to grow.  In his report, he stated that hourly Internet traffic will soon exceed the annual traffic from the year 2000.

Pasricha believes it. For several years he has worked to make hardware and software more energy-efficient and reliable in embedded, mobile and high-performance server computing systems, and he has watched demand for data explode as Internet-capable devices multiply and consumers become increasingly “connected” via mobile and wearable devices.

“Technology has been and will continue to advance at a relentless pace. As scientists working on the future of technology, it is our responsibility to ensure that the carbon footprint and environmental impact of this progress does not take our planet down a path that is not sustainable. Fortunately, our team of faculty and students at CSU is well equipped to make a positive impact on this area for many years to come,” Pasricha said.