Gerry Demarco – Managing Director, Morgan Stanley (Chairman) (GD)
Rizwan Ahmed – Datacentre Manager, Fortis (RA)
Harvey Cobbald – Director, Technology Infrastructure, Citigroup (HC)
Rob Coupland – TelecityGroup (RC)
Wil Cunningham – Environments Manager, Lloyds Banking Group (WC)
Bernard Geoghegan – Senior International VP, Digital Realty Trust (BG)
Vijay Mistry – Executive Director, Morgan Stanley (VM)
Glenn Murphy – IT Manager, Rathbone Brothers PLC (GM)
Mike O'Toole – Managing Director, Technology, Morgan Stanley (MO)
Geoff Prudence – Chairman, CIBSE FM Group (GP)
Nick Razey – Chief Executive, Next Generation Data (NR)
Tom Williams – Service Delivery Manager, Datacentres & Disaster Recovery, Financial Services Organisation (TW)
Mark Evans – Publishing Director, FST Magazine (ME)
GD I wouldn't mind starting on virtualisation and the effects of having what seems to be a computer under the desk now being hosted within the datacentre. Is this an optimal strategy for the datacentre, or is this something being driven by cost?
GM From Rathbones’ point of view it’s very much around a well thought out migration towards the datacentre, partly moving the Citrix environment towards the Xen model. However, within the Windows environment, as a test, there have been considerable benefits to virtualisation, though we haven’t gone into the production area yet. We all know the benefits of virtualisation, but for the purposes of datacentre migrations we are keen to keep it separate from that process.
GD So when you do your performance and capacity planning analysis or your datacentre projections, do you take into account your server growth based on future uptake of virtualisation?
GM With the exception of Citrix we’ve not included growth trend analysis on that side. We have only really looked at it as a like-for-like changeover or a migration of services over the next three to five years, but on the Citrix side that’s been quite a strong area for us to progress with.
TW We virtualise by default. Virtually nothing goes in that isn’t virtual now on the Windows side.
VM Do you mean production servers?
TW Yes. We went that way four or five years ago and it saves us enormous amounts of money and capacity in the datacentre, and it generally works. We have very few problems with it.
VM Do you have many business lines?
TW Yes, a number, but we have a very centralised technology function that delivers technology to those business lines. I wouldn’t say they don’t get a say in what goes on, but they leave it to us to deliver the service to them, rather than getting too involved in whether it is physical or virtual. We will deliver that service to them.
RC Is there anything you do take to the physical environment?
TW We’ve got some older kit; I wouldn’t quite say Windows 3.1 servers, but…
RC There’s only so much of the legacy you can do it with!
TW Exactly, and we do have a fair legacy, so there are the odd things, but generally things just get virtualised by default and it has worked for us.
WC Virtualisation in our space was something that was in our technology road map, and then a problem emerged, and the problem was the datacentre had almost run out of available power/space! Therefore the virtualisation became a means to an end if you like; releasing power and space in the datacentre to effectively defer a £200 million spend. So we eventually achieved a point where we didn't install any physical servers at all.
VM What kind of consolidation are you talking?
WC We went 20:1, progressing to 40:1.
GD OK – so the strategy gets better as you upgrade.
RA We are now looking at virtualisation in a big way. We’ll be looking at virtualising a whole load of infrastructure over the next three months.
GD When I look at virtualisation and the quick jump to desktop virtualisation, it seems we are installing kit in a resource that is already constrained. We’ve gone after desktop virtualisation first. Is there any push back from your teams on desktop virtualisation? If you think about it, the computer is running under the desk, and now you’re just taking it back to the datacentre, which is already a constrained resource.
NR But you are going from 20 per cent utilisation to 90 per cent, aren’t you?
TW Or possibly 110 per cent because you're running something under the desk and running in a datacentre, so the efficiency isn’t there until you come round to your next requirement to refresh PC hardware.
NR I’ve got a question – I’m going to keep dragging this back to the facility side of the discussion, but are you going to have problems with rack densities? With virtualised racks, obviously, how much are they drawing in terms of kilowatts? Does that give you problems? Do you run out of power before you run out of space?
GD Well we want an efficient datacentre, so we’re going through a programme where we’re pulling out all the rackmounted, power-hungry kit and installing more efficient blades – and we’re starting to fill up and give some power back.
WC Virtualisation is only one facet of optimisation and efficiency. If you only do that one thing, you’ll fail ultimately, so what we had to develop was effectively a placement strategy. What should you be putting in your datacentres? Should you be putting test kit in your critical datacentres? Should you be supplying disaster recovery kit for minor systems from your critical datacentres? If you only look at one dimension then it seems nuts; if you actually get the entire picture and create a series of placement strategies, it starts to make sense in the context of the entire problem.
VM You are touching on a point that we were just talking about at Morgan Stanley. Historically we built large, highly resilient and expensive datacentres and we put all workloads in there, and we soon ran out of space; so I think there is a need for tiering within our datacentre strategy: understanding the different workloads that are going in and building datacentres that align to certain large types of workload. The other point was about trying to locate where power is cheap and designing and building our facilities there.
WC When you think about all the things you could or want to do… the reason you can’t do them is that you’ve got all the legacy kit that you don’t want to be there. For us, we had a mothballed datacentre that we could quickly convert into a global testing and development centre, so we converted that, moved the testing and development kit out and, hey presto, you’ve got space to start moving virtualisation kit in. At the same time we mandated the funding of the decommissioning of legacy kit in the life cycle, and we also mandated that you can’t put anything into our datacentres unless you take something out. It’s a huge thing. It’s not just technology – you have to change the mindset of all the people and change your processes and standards along with it to actually get what you want out of this.
TW How do you charge for your IT? Do you manage to charge people for individual services for particular business lines, particular business heads?
WC We created a charging model and said that for each individual component you're going to be charged ‘X or Y or whatever’ and we developed that as part of our lean lifecycle. So when a project came along we had components that we wanted to support that we called ‘more of the same’ and they cost considerably less than the legacy components we didn’t want to support. In a similar vein designs for more of the same component were available ‘off the shelf’ and were available faster and therefore at less cost. So we were quickly able to commoditise stuff. Decommissioning was just the reverse of that, if you see what I mean – you just commoditised the removal of it.
TW Do you lease them equipment or do you sell them equipment? I always used to sell – looking after the Windows server team, a project came along and I would say “right, you want the server? That’s three grand. How much storage do you want? That’s X per Gig. There you go. You want production – you need DR and you’re going to have pre-production, probably some DEV, so it’s going to come to that. There you go project – you’re done”. What I wanted to do was say “I'm going to lease you X bits of kit per year and every year you are going to sign off to say “I still want that” and if you don't sign off and say you want it, I’ll decommission it at the end of the year”. Because at the moment, if I sell them a bit of kit at the start of the project, when are they ever going to come to me and say “I don’t need it anymore”?
GD That’s a good segue to cloud. Is anyone creating private clouds where they are using virtualised servers, or just providing a development environment?
VM There’s always going to be a problem with cloud around data – the regulatory requirements around data and how comfortable people are going to feel about where their data is residing, whether it be a public or a private cloud.
GM Just out of interest, has anyone gone down the cloud computing route? If so, which services?
NR Everyone talks about it!
GD I totally agree with that. The barriers to entry now to start your own business are so low; you don’t need this backbone – you just buy the cycles, and that’s it! It’s fantastic!
GM Just back to the point raised about regulations and FSA requirements, data security is probably one of the biggest issues around cloud computing, especially in the area of legal jurisdictions. Has anybody looked at this area and then said ‘no’ because of potential failed security controls and where the data is going to reside, which country, and all those other types of concerns?
ME I spoke to somebody from Google about this and they were very aware of the problems, shall we say; so far the lack of interest from the financial sector has been for that very reason.
RA We face that challenge all the time. We have US clients that want to use UK banking facilities, it’s convenient for them or quicker because we’re talking market trading and we’re talking fractions of seconds, but for compliance reasons it’s out of the EU and they are not covered by those regulations. So, yes, I think that is a barrier for us.
GD I think about the number of individuals that have to support it internally for the amount of functionality that 80 per cent of people actually use. Gmail is good enough, but when you get down to brass tacks and you start to talk about email discovery and data retention in different regions, it disintegrates into something that’s just not plausible for companies that are regulated.
NR When you talk about data security the alternative is having sort of 5,000 memory sticks. You must be leaking data all the time through memory sticks and people leaving laptops in taxis, so isn’t cloud potentially safer?
TW Not if you encrypt data.
GD Containers – it’s always an interesting topic. Anyone doing it, looking into it, kicked the tyres at all?
VM We looked at it at the beginning of the year, but the cost comparison and the risk of going to this new technology made us stay with what we knew – which was building facilities. My personal view of computing containers is that you’ve also got the physical security aspects to worry about. It fits the requirements of certain industries, but not all.
WC Given the choice, would you stick it in the container? People are normally forced to look at containers on the basis that they think their datacentres are going to run out of space.
VM There’s also the M&E perspective to support the compute containers.
NR The attraction is speed to market. It’s 16 weeks or less to deliver one of these things, but you can build traditional datacentre space quite quickly. You are talking four or five months maybe, but it’s not a big deal.
VM I think what I’m hearing is the next play on that, which is that there are a number of companies now looking at modular datacentres, so it’s a similar approach to containers.
RC I think having modularity on the infrastructure absolutely makes sense, and bringing it on-site so it plugs and gets going – it’s the way we’re going.
VM That’s really attractive to me because at the moment we are using conventional and traditional approaches. It takes us anything from nine to 18 months to bring online space. The sizes we’re talking about are considerable.
WC That’s the issue, isn’t it? Your sizes – you’re building large containers. You’re building 10-20,000 feet at a time.
VM And in this climate, trying to secure Capex up front for large builds is difficult, so I think the modular approach does become very attractive. Historically we’ve built large facilities to cater for all the workloads, and because of the scale of operations we are operating on, that’s an inefficient way of installing servers. We’re going to fill those facilities up very rapidly, so we’re trying to understand the different types of workloads we have, in particular location-independent workloads. We’re looking at building datacentres where the power can be sourced cheaply.
GD What about our ‘green’ datacentre?
RC Have you worked for any standards – have you looked at things like the EU Code of Conduct?
VM It’s probably nothing different to what we’re doing. We did call in ‘energy efficiency experts’ – we called in a number of parties to help us understand our environment and we got them to train our staff at the same time, so it wasn’t just a case of them coming in and doing the work – we wanted them to teach us what they were doing. So my operators now go away and take that forward, and it becomes normal operating behaviour.
RC It’s interesting because what we did 18 months ago when the Code of Conduct launched was take it as a model, because it very pragmatically sets out all the things you are talking about, which are good basic operational practice. So it’s quite simple really. Have you got the FOs right? Have you got your racks the right way round? Have you got tiles missing where they shouldn’t be missing? All the basic stuff which happens in a real world datacentre over time, and getting that sorted out. Blanking panels were absolutely something that came on the agenda for us. We surprised our customers by investing in enough blanking panels to do the whole of our estate, and that’s something we funded.
NR Have you done that now? Because the ridiculous thing about CRCs is that you have a benchmark here. You’ve got yourself really efficient, and you’d be better off doing that stuff next year!
GD No more rabbits in the hat!
RC CRC is imperfect in our kind of environment, but actually power is very expensive, and with the very low hanging fruit we’ve taken a seven figure sum off our power bill for a lower investment, and the payback on that was around 12 months. Getting the airflows right; variable speed drives on pumps and fans; lighting; turning off kit that doesn’t need to be on, and so on, and just getting the guys thinking about it. It has paid huge dividends for us, and I think that then comes round to more investment as we come up towards the CRC.
NR Yes, but we’ve got a new datacentre; it’s built with the best kit we can get. This year we’ll do less power than we’ll do next year and in theory we’ll get hammered by the CRC because we’re apparently less efficient. It doesn’t make sense with datacentres. Here’s a question for you: if CRC is payable by you, would you treat that as part of your cost, or would it be part of your business?
GD I think the education on power needs to be driven into the organisations that are testing the servers, certifying the servers, certifying the storage. It’s starting to get there, but the ‘ooh’ and ‘aah’ about the latest piece of kit kind of outweighs how it integrates into the datacentre, so we are not there yet in driving the road map of the engineering team. Then again, by the same token, Intel is introducing power management, shutting off certain pieces of the memory, the CPU and so on – so the jury’s out.
VM It’s a bizarre one because we are trying to use projections to bring awareness of power to our management as well. We projected out on our current run rate and current growth: if we carried on, for the European datacentres alone the utility bill would be £20 million a year by 2020, so we’ve got to do something. Having a £20 million utility bill is just incredible.
WC What you said about education I think is absolutely crucial, because we took the experience of what we did with the datacentre guys and then we challenged the platforms: what are your ‘top ten’ initiatives to optimise the use of the datacentre via infrastructure efficiencies? They all came up with ideas, then we worked out which ones were going to deliver the best datacentre efficiencies and we gave them the funding. So we got space and power, and they got new technologies and got to do stuff that they’d been asking for for years.
GD So, on continuity and disaster planning: with datacentres becoming more and more expensive to run and costly to build, has that changed everyone’s approach to business continuity and recovery? Has anyone thought about low cost areas such as Iceland? The 400lb gorilla in the room, right?
NR What we’ve done is build a datacentre in Wales, and the reason we did that is because real estate is cheap and there’s plenty of power, and we feel that the London market will move further out for those reasons – productivity is cheaper. It’s interesting how many people were looking at Iceland as a location.
GP Until they couldn’t get there to look, right?
VM My personal view is that some of it is based around legacy decisions as well. You make datacentre decisions based on ten to 20 year investments, and sometimes, as attractive as they look, the financing doesn’t stack up.
NR Do you think the server hugging aspect is still important, or is that going to fade away?
GD It has to break. It has to stop. I think having high cost metro datacentres has forced us to look at workload tiering. What doesn’t have to be in metro now becomes the question. The reason why we are doing these efficiency plays is because those metro datacentres have to last. We have long leases for these buildings; they are assets on our books that we are not getting out of. We need to make them more efficient.
RC Is that tiering just about location and costs, or is it also about service level within there and standard as well?
VM It’s beyond applications as well; you need to understand how the applications connect together. I think you need to look at the problem more holistically and that may drive certain other decisions, and that’s something we’re looking at internally as well. Once you start getting into tiering, it’s understanding the whole problem, just what it is your business does. If you move all your databases into one type of facility, does that mean everything else has to move as well?
WC We were the same. We ended up developing a placement strategy. Now a placement strategy wasn’t just about what datacentre, it was about what aisle to use and then what box. So if you consolidate, you do not consolidate minor systems with your critical systems because you’ve just given the minor systems the same SLAs for recovery as the critical system. Why would you do that?
GD But are people willing to pay for that level of continuity? Even with our grids, where we’ve pushed them out to various low cost areas, how do you back up grids? Do you create another mirror facility? That’s so expensive. The great news is that you took the grid workloads out and put them into one place – that’s cheap, but you only have one. Do you create two? This workload needs to run on 5,000 computers. If you lose that site, where is it going to run? You can’t absorb those numbers back into the existing facilities. What are people’s thoughts on large scale datacentres backing each other up?
VM I think it’s a business decision, this is where a business has to put its hands up and say “I’m prepared to pay for this level of risk”, or “I’m willing to accept this level of risk”.
TW Yes, but it wouldn’t be efficient if we just said “tell us what you want”. Actually, you’ve got a disaster recovery manager who should have a strategy.
WC It’s not just delivering it, it’s actually testing it. How do you prove a thousand systems within two years to validate they will actually fail over? You could end up working every weekend until you reach a point where you say the strategy is wrong and we shouldn’t be doing this. So if you are not testing it, it soon gets out of sync – it becomes useless. If you can’t test it, don’t buy it. Save yourself some money – save yourself some space.
TW I’m actually trying to encourage a tie-up with Regus, where Regus has facilities spread out across the country with internet bandwidth at those offices, and another company we use has virtual desktops, so I can put a corporate image on them. The head of that location hates me, but why am I going to send mothers to Bristol? It takes me two hours to get there; it takes me two hours to get back. What are you going to do? And it isn’t Capex!
GD Interesting approach! Let’s face it: if there’s a disaster, most people want to stay home. They’re not going to want to venture out.
TW Generally it’s air con that’s the number one cause of evacuation. Power failure is number two. Between those two they take up something like 85 per cent of all disasters, so it’s not often a ‘building burnt down’ scenario. It’s generally not that interesting.
GP I’d just like to touch on the point of skills in the UK, especially the maintenance side and operational side. Is it good, is it growing, what are your views?
VM My personal view is that we need to be doing more as an industry to attract more talent, in particular by talking to universities. I think the skill sets we have in the industry are ageing now – no offence Gerry!
NR We actually had a conversation with the Open University about setting up a course for datacentre engineers and they were quite keen to do it.
GP Today, it’s about people who demonstrate ‘ownership’ of the associated building services and operations, not just the design element. Furthermore, retaining those people once you’ve got them is a challenge.
WC But when you have problems the first thing you do is bring in a consultant, so if you’re going to bring in a consultant, why don’t you bring in a graduate at the same time? Get the graduate to suck the consultant’s brain dry and get trained up like that. No disrespect, but datacentre guys all look and behave in the same way; the grads were completely different. When we did this, they did actually find it exciting and came up with some great new ideas.
GM Putting aside the PME skills, which are fairly developed and well known, am I right in understanding that for datacentre specialism only educational BTEC qualifications exist, and that no industry-wide practical, day-to-day qualifications or certifications are so far available? Having them would be likely to assist in meeting industry CRC requirements and strengthen awareness.
GP Yes. What we’ve been pushing for is people with operational datacentre experience in design teams, rather than just pure graduate designers.
GD The only thing I don’t think we’ve really touched on is exactly how it affects shareholder value.
WC I think if we’re getting whacked with CRC penalties then, isn’t that a part of it?
VM If you are building a new facility and it’s a large facility and it’s costing millions of pounds, then definitely.
RC I think it’s interesting because, for us, the availability of capital and willingness for investors is the point, and we’re making strong investment, so we’re sensing there’s a little bit more of a trend towards outsourcing. I guess that probably comes into some of your thinking about tiering as well.
NR Well, the interesting thing for us is the mix of Capex and Opex, or is the build versus buy question much deeper than that?
VM I think we are still debating build versus lease – that depends on how far out you are looking as well. No-one’s looking at building now, but if you go back five years, a lot of companies were building datacentres then.
NR Is that vanity because they wanted to own their own place in the centre, or was it value?
GD I think it’s also control. I think there’s a sense of having a core centre of competency in it or perceived core competency, though that’s changing.