Data centres supplement: Optimising facilities feature – Working up a sweat
Written by Duncan Jefferies
There is an operational and environmental cost involved in meeting the ever-increasing demand on financial institutions’ data centres, whose surging power consumption is largely driven by the rising amount of customer and regulatory information that must be kept, and by new technologies such as algorithmic trading and blade servers. Optimising data centres to keep costs down is now a key concern, especially as budgets for new builds are tight and the government’s CRC ‘carbon tax’ kicks in. Duncan Jefferies looks at the best ways to ‘sweat’ existing facilities
If recent predictions are to be believed, data centres are on course to overtake several countries in terms of overall energy usage. In March, Greenpeace estimated that data centres will use 1,963 billion kilowatt hours of electricity by 2020, more than the power currently consumed by France, Germany, Canada and Brazil combined. All of which is rather ominous for the environment, and according to Rami Rihani, global green IT head at the consultancy Accenture, makes data centres very much “the bad guy of IT.” However, as everyone with a sweet tooth well knows, the fact that something is bad for us doesn’t stop us wanting more. Some data centres have swelled to gigantic proportions as companies aim to store and supply an increasing amount of information, a trend exacerbated by cloud computing – Microsoft’s new Chicago data centre in the US, for example, will eventually house 300,000 servers.
For many financial institutions, the need to keep more regulatory, customer and company data, along with the growth in algorithmic trading, has put pressure on their existing assets. Failure to decommission old servers and poor management of the network infrastructure have also helped to fill up the racks. This need for more space can be satisfied in one of two ways: by building new, efficient data centre facilities or by optimising the assets you already have.
Ian Brooks, European head of innovation and sustainable computing at Hewlett Packard, believes companies are looking for a win/win scenario from their data centre projects: “Things that tick the right boxes in terms of carbon emissions, but also give you a fiscal gain.”
He claims CIOs are aware of the need to optimise the data centre, but have been waiting for the attention and support of the board. “It’s on the radar now in many companies; there’s interest from senior management downwards. But the degree to which people can put their plans into action depends somewhat on their financial state.” Many banks are still lacking budget post-crash, so optimising existing data centres is the only option.
Ian Blond, senior manager of business strategy, data centre solutions group, Hitachi Europe, says the IT in the data centre often makes up 40 per cent or more of its power consumption. Cooling, meanwhile, accounts for 30 per cent or more, with the remainder going on electricals, lighting and other elements. “A data centre is at its worst performance when there is no IT in it,” he says. “So as time goes on, and the data centre is filled to capacity with more recent technologies and not legacy IT servers, its efficiency will increase.” The server virtualisation argument, in other words.
Pre-existing electrical infrastructures, however, may struggle to cope with the jump in power requirements. “If you have a legacy estate,” Blond continues, “fifteen-to-twenty-year-old data centres which may have been set up to handle 4 kW per rack or so, these are now starting to go up to 8, 15 and in some cases 20 kW. In order to accommodate that you have to potentially install new electricals into the facility. You also have to provide extra cooling units to address higher heat outputs, so sometimes retrofits can be quite challenging.” Retrofits are still cheaper than new builds though, so again optimisation is the best route for those lacking large upfront investment capital.
Many institutions are keen to sweat their existing assets as much as possible anyway, to ensure the most efficient and ‘green’ operation they can. “You need to start thinking about things like information lifecycle management, data deduplication and so forth,” says HP’s Brooks. “So rather than ingesting every packet of data you ever get and storing it on ultra-high-power fast-spinning disks, you decide what is needed online, nearline and offline. By implementing policies to achieve that, you can get a much more optimal use of power.” Deleting unnecessary data with deduplication technologies also reduces the overall amount of storage required. As Accenture’s Rihani says: “There are often many different clones of data, and not all of it has to be stored in your primary data centre.”
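Brooks’ online/nearline/offline split boils down to an age-based policy. The sketch below is purely illustrative – the thresholds and tier names are assumptions for the example, not rules prescribed by HP or any bank:

```python
from datetime import datetime, timedelta

# Assumed thresholds for an information lifecycle policy: recently
# accessed data stays online on fast disk; older data drops to
# nearline (slower, lower-power disk) or offline (tape/archive).
TIERS = [
    (timedelta(days=30), "online"),
    (timedelta(days=365), "nearline"),
]

def tier_for(last_accessed, now=None):
    """Return the storage tier for a record, given its last access time."""
    now = now or datetime.utcnow()
    age = now - last_accessed
    for threshold, tier in TIERS:
        if age <= threshold:
            return tier
    return "offline"
```

Running such a policy across an estate lets the bulk of rarely touched data sit on low-power or powered-down media rather than spinning disk.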
Virtualisation technologies – which allow financial institutions to reduce the number of servers in their data centres by running several applications and services on a single physical machine – are another primary means of achieving efficiency. “I think server virtualisation is now becoming the norm, the de facto approach,” says Rihani. “If you’re not virtualising you’re really behind the game.”
Thin provisioning software, which allocates disk storage space in a flexible manner based on the amount of space required by each user at a given time, can also make for more effective usage of space. It has the added benefit of reducing power consumption and heat generation by cutting the total number of disks required. Changing the layout of a data centre can have a big impact on energy consumption too, as can improving airflow and hot/cold aisle layouts. Brooks says free air cooling in particular – say in Scotland – “can save up to 40 per cent of energy costs”.
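To see why thin provisioning cuts the number of disks needed, compare the capacity a traditional thick-provisioned array must reserve up front with what is actually in use. The volume sizes below are invented for illustration:

```python
# Hypothetical volumes, each with a requested size and actual usage (GB).
volumes = [
    {"requested": 500, "used": 120},
    {"requested": 1000, "used": 300},
    {"requested": 250, "used": 40},
]

# Thick provisioning reserves the full requested capacity up front;
# thin provisioning only backs the blocks actually written.
thick_gb = sum(v["requested"] for v in volumes)
thin_gb = sum(v["used"] for v in volumes)

print(thick_gb, thin_gb)  # 1750 GB reserved vs 460 GB actually in use
```

Fewer physical disks backing the same logical volumes means less power drawn and less heat to remove, which is where the efficiency gain comes from.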
Multiple mergers and acquisitions, a theme amongst banks during the financial crisis, can lead to inefficiency in the data centre, with applications and systems duplicated but not decommissioned. Complex network architectures make it difficult to identify disks holding redundant data, and risky to shut them down – you may not know whose screen will go blank, or what information will be lost. “In a lot of IT departments, if you point at a server, they don’t know what assets are on it,” says Rihani. “Sometimes they sit there for years like that.” If you want to sweat your assets, allowing this to continue is unacceptable. You need to do an audit, identify what can go and then launch a consolidation project.
Wil Cunningham, presently head of IT control & execution for the Lloyds Banking Group/HBOS integration programme, worked on a data centre optimisation project for a different bank the year before last, and so has experience of how to sweat existing facilities. Forecasts indicated that the well-known, top-four UK High Street bank was rapidly running out of space in its main Edinburgh data centre, but with the financial crisis in full swing, funds for new facilities were severely restricted. Instead, supplementary capacity was provided by developing new, smarter power management solutions, utilising more efficient technologies and refitting old premises.
When Cunningham began work on the project, he says there was an awareness among the data centre management team that certain systems weren’t used anymore. As a result, he mandated port audits to identify which ports were being paid for despite not being active. “And if the port is not active, and something is connected to it, that means the box is not being used either [further saving there],” he adds.
Governance procedures forced any new projects at the bank to identify what would be decommissioned in order to create space in the data centre. This helped close the loop in the information lifecycle. And Cunningham says that while disaster recovery (DR) facilities for all systems made sense years ago when the technologies available were not fit for purpose, nowadays, “you’re lucky if you invoke DR for any sort of system once every five years, and the impact of losing a very minor system is negligible.” In light of this, he advises supplying DR across the board for critical systems, “but for minor systems don’t bother. It’s absolutely pointless.” In his opinion it wastes power, and turning off these unimportant systems at the bank where he was working significantly reduced the load on the Edinburgh data centre.
Cracking the code and the CRC
The European Code of Conduct for Data Centres was launched in November 2008 in response to increasing energy consumption and the need to reduce the related environmental, economic and energy supply security impacts. The aim was to inform and stimulate data centre operators to reduce energy consumption in a cost-effective manner, without hampering the mission critical function of data centres.
Nearly two years on, has it achieved what it set out to do? HP’s Brooks believes it has been very useful for data centre professionals. “It gives people an external set of recommendations that can be used to ensure you’re learning and adopting best in class practices,” he says. “Has it been the best publicised recommendation ever? There’s probably more we could do there. Equally with the CRC carbon tax, there are a number of people even now, bearing in mind that it became law in April, who still haven’t heard of it.”
The CRC Energy Efficiency Scheme (formerly known as the Carbon Reduction Commitment) is the UK’s mandatory climate change and energy saving scheme. Organisations now have to buy allowances for each tonne of CO2 they emit, and will eventually be placed in a league table that assesses their performance. The money raised is then redistributed, with the best performers receiving the most and the worst the least.
When asked whether the CRC has made financial institutions aware of the need to make changes to their data centres, LloydsBG’s Cunningham says: “I don’t think the penny has dropped yet. Only when the league table is produced, and companies are compared – until it becomes financially unacceptable to be bottom of the league – will institutions start to do the right thing.”
Measuring which data centres are efficient and which are not is typically done via the Power Usage Effectiveness (PUE) metric. A data centre with a PUE rating of 1.5 is classed as having an energy efficient design and demonstrating year-on-year improvements in energy consumption. On the other hand, those with a rating of 2 or above are generally using inefficient legacy systems. “When PUE first came out, it raised a lot of eyebrows,” says Hitachi’s Blond. “A lot of people thought it was a fantastic tool, that it was great we had a benchmark. Then very quickly...you found you could have a hundred engineers in a room and none of them could agree on how to really report on PUE. The formula is quite straightforward, but no two data centres are alike.”
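The “quite straightforward” formula Blond refers to is simply the ratio of total facility power to IT equipment power. The 1,000 kW facility below is an assumed example, chosen to tie back to the rough 40 per cent IT share quoted earlier in the article:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    1.0 is the theoretical ideal (every watt reaches the IT kit);
    higher values mean more overhead for cooling, lighting and so on.
    """
    return total_facility_kw / it_equipment_kw

# If IT accounts for 40 per cent of an assumed 1,000 kW facility draw:
print(pue(1000, 400))  # 2.5 - squarely in "inefficient legacy" territory
```

The arguments over reporting come not from the division itself but from where you draw the meter boundaries – which loads count as “IT” and which as overhead.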
Alike or not, it seems obvious that financial institutions will need to do more work in future to optimise their data centres. But as Cunningham says, when they come to make improvements, they must bear in mind that optimisation of the data centre is not a one-dimensional process. “You can make certain inroads by replacing technologies, looking at your policies and standards, your working processes, and putting governance in place. But unless you do all that together, and make it a repeatable process, you will ultimately fail in one area or another. That will become the sole reason why the data centre keeps filling up.” Neither banks nor the planet can afford for that to happen. Good management, allied to effective technologies, is what’s needed to sweat existing data centres and stop the globe – and operational running costs – overheating.
CASE STUDY: Nationwide virtualisation project
Nationwide is collaborating with Unisys Corporation to virtualise 500 servers as part of its Data Centre Transformation Programme. The programme is a multi-year effort which aims to enhance the building society’s IT architecture, while also reducing operational and energy costs.
Nationwide estimates it will save more than £8 million over the course of the programme by removing old hardware, improving service continuity, simplifying disaster recovery and increasing hardware utilisation via server virtualisation. To date, Nationwide has achieved a 12:1 reduction in the number of physical servers. This has not only saved space within the data centre but also significantly reduced its carbon footprint through lower power and air conditioning usage, just in time for the CRC.
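Back-of-the-envelope arithmetic shows the scale of a 12:1 consolidation across the 500 servers in scope. The per-server power draw below is an assumed nominal figure for illustration, not a number from Nationwide:

```python
import math

physical_servers = 500       # servers in scope of the programme
consolidation_ratio = 12     # the quoted 12:1 reduction

# Virtualisation hosts needed to absorb the workloads.
hosts_needed = math.ceil(physical_servers / consolidation_ratio)

watts_per_server = 400       # assumed nominal draw per box
power_saved_kw = (physical_servers - hosts_needed) * watts_per_server / 1000

print(hosts_needed, power_saved_kw)  # 42 hosts; ~183 kW before cooling savings
```

Since every watt not drawn by a server is also a watt the cooling plant no longer has to remove, the real facility-level saving is larger than the raw IT figure.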
“Unisys has confirmed our belief in virtualisation and the business benefits it can bring,” says Peter Stafford, IT director at Nationwide. “Unisys worked closely with us to resolve some very complex issues and bring us closer to reaching our efficiency targets. This data centre programme will provide Nationwide with a world class server infrastructure, allowing us to respond to changing business needs more quickly.”