Business Continuity: Don't stop moving...

Financial services firms rely on their IT infrastructure perhaps more than any other sector. Real-time trading, regulatory requirements to record and retain every piece of data, complete front-to-back office functionality, data security and separation are just some of the more obvious issues. It is no surprise, then, that business continuity and its downstream relation, disaster recovery, feature prominently on the list of things that simply cannot be compromised.

Neil Stephenson, chief executive at solution provider Onyx, explains: “In a real-time market you cannot afford to have any downtime, because if the market moves during an outage you effectively leave your institution massively exposed in a position that you cannot close out of. Consequently there is a very high dependency on IT systems being reliable and a low tolerance for outages. Peace of mind is essential.”

The extent of these downtime consequences is demonstrated in a report by CA Technologies, which claims that unplanned outages cost UK financial organisations in the region of £330,000 each per year. The report also finds that the UK is among the worst affected: as one of the most established financial services markets, it carries some of the oldest technology in the form of patchwork legacy systems.

Lyndon Bird, technical director at the Business Continuity Institute (BCI), believes that in 70 per cent of firms in the financial services sector, IT outages have caused real problems. But he says that the cannier IT directors will be looking at systems holistically to work out where any potential outages would be most keenly felt, and directing resources accordingly.

Risk assessment
“A firm needs to look at the amount of tolerance it has to reduced functionality and what portion of data it can afford to not have access to, or lose altogether,” he says. “The failure of technology to be available 100 per cent of the time is not a continuity issue, it’s a technology one, and the judgement call is the likelihood of an interruption and how business critical that would be. This is what you spend the money on backing up.”

Indeed, starting at the beginning, so to speak, and working out what is business critical – that which a firm absolutely needs all of the time versus that which it could afford to lose for a short period – is essential when it comes to business continuity and disaster recovery planning.

Duncan Ellis, systems engineering director at network specialist Ciena, comments: “Naturally most firms will have tiers of systems and data, and the key with business continuity is the recognition that you don’t need all-singing, all-dancing 100 per cent back-up; the equation is more a cost versus risk one. The recovery point objective is also an important part of the risk assessment. This is how much data, if any, you can afford to lose. For example, a company probably needs to keep a record of all its transactions but might be able to stomach losing a few hours’ worth of e-mails.”
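By way of illustration only, a tiering exercise of the kind Ellis describes might be sketched as below. The system names, recovery point objectives (RPO) and recovery time objectives (RTO) are hypothetical assumptions, not figures from Ciena or any firm quoted here.

```python
# Illustrative only: hypothetical systems and recovery targets, not data from the article.
from dataclasses import dataclass

@dataclass
class SystemTier:
    name: str
    rpo_minutes: int  # recovery point objective: how much data loss is tolerable
    rto_minutes: int  # recovery time objective: how quickly service must return

TIERS = [
    SystemTier("order-management", rpo_minutes=0, rto_minutes=5),      # real-time trading: no loss tolerated
    SystemTier("transaction-ledger", rpo_minutes=15, rto_minutes=60),  # regulatory record-keeping
    SystemTier("staff-email", rpo_minutes=240, rto_minutes=1440),      # a few hours' loss is survivable
]

def replication_strategy(tier: SystemTier) -> str:
    """Map a recovery point objective to a simplified replication approach."""
    if tier.rpo_minutes == 0:
        return "synchronous replication to a standby datacentre"
    if tier.rpo_minutes <= 60:
        return "disc-to-disc snapshots every 15 minutes"
    return "nightly back-up"

for tier in TIERS:
    print(f"{tier.name}: {replication_strategy(tier)}")
```

The point of the exercise is the mapping itself: the tighter the recovery point objective, the more expensive the replication arrangement it justifies.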

What then is the solution? In today’s business environment data needs to be replicated continuously, in real time, onto standby infrastructure in remote datacentres. For many firms this means a more granular disc-to-disc back-up every 15 minutes using snapshotting techniques.
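A minimal sketch of that 15-minute snapshot-and-replicate cycle follows. The take_snapshot and ship_to_remote_site functions are hypothetical stand-ins for whatever storage tooling a firm actually runs; this is not a description of any vendor’s product.

```python
# Minimal sketch of a 15-minute disc-to-disc snapshot cycle (hypothetical tooling).
import time
from datetime import datetime, timezone

SNAPSHOT_INTERVAL_SECONDS = 15 * 60  # the 15-minute recovery point described above

def take_snapshot(volume: str) -> str:
    """Pretend to take a point-in-time snapshot and return its identifier."""
    return f"{volume}@{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}"

def ship_to_remote_site(snapshot_id: str) -> None:
    """Pretend to replicate the snapshot to the standby datacentre."""
    print(f"replicated {snapshot_id} to standby site")

def replication_loop(volume: str, cycles: int) -> None:
    """Run a fixed number of snapshot-and-replicate cycles (indefinite in practice)."""
    for _ in range(cycles):
        ship_to_remote_site(take_snapshot(volume))
        time.sleep(SNAPSHOT_INTERVAL_SECONDS)

replication_loop("trading-data", cycles=1)
```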

Stephenson explains that having the right back-ups in place lends “essential agility and flexibility in the event of a disaster. Back-ups used to be via tape, which meant an outright data loss or a recovery window of hours or days, but the market has now changed and expectations are higher,” he says.

He adds that the advantage of having a physical datacentre is that the replica can be scaled up almost instantly and the data copied back, so that from the end user’s point of view nothing appears to have happened. He points out that once downtime passes a certain point, recovery almost becomes irrelevant because the damage to a firm’s reputation is irreparable. Many firms, he says, are looking at an exclusion zone model: defining what would stop a bank trading, for example, and making that a priority to protect. “Regulators are also making more demands on data that must be protected as a priority,” he says.

Datacentres, it seems, still have their place, but they are only effective if the company whose data is stored there has enough bandwidth to upscale from passive data replication to active use, should the need arise.

This comes back to having done a risk assessment and thus knowing what a firm needs its back-up datacentre to do. “The key is whether the network is capable of responding and this depends on bandwidth and having a system that is intelligent enough to respond to the parameters that the company sets it. It is easy to have business as usual, more difficult to define what would need to happen in a disaster and what you need to pay for and what you don’t,” says Ellis.

He says that having a thorough service level agreement (SLA) with both a disaster recovery provider and the telecoms company providing the actual fibre optics to supply the bandwidth is crucial. “A dynamic reallocation of bandwidth where the systems ‘talk’ to the network and rebalance available bandwidth within predefined parameters means not paying a premium for extra bandwidth and allows clients to maximise their own systems,” he says.
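To illustrate what rebalancing “within predefined parameters” could mean in practice, the sketch below shares a fixed link between two hypothetical services, guaranteeing each a minimum and splitting the remainder according to demand. It is an assumption-laden simplification, not Ciena’s implementation.

```python
# Hypothetical sketch of dynamic bandwidth reallocation within predefined parameters.
def rebalance(total_mbps: float, demands: dict, minimums: dict) -> dict:
    """Give every service its guaranteed minimum, then share what is left
    in proportion to outstanding demand, never exceeding link capacity."""
    allocation = dict(minimums)
    spare = total_mbps - sum(minimums.values())
    outstanding = {s: max(demands[s] - minimums[s], 0) for s in demands}
    total_outstanding = sum(outstanding.values())
    if spare > 0 and total_outstanding > 0:
        for service, extra in outstanding.items():
            allocation[service] += spare * extra / total_outstanding
    return allocation

# Business as usual, replication trickles along; in a disaster the recovery
# site's demand spikes and the same 1 Gbps link is rebalanced towards it.
print(rebalance(1000, {"trading": 300, "replication": 700},
                {"trading": 200, "replication": 100}))
```

The appeal Ellis describes is that the link itself never grows: only the split between services changes, so the firm avoids paying a premium for bandwidth it rarely needs.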

Into the cloud
In fact, maximising available systems seems to be the issue on everyone’s lips, in the form of the ubiquitous cloud. Essentially the cloud allows a company to access applications on an ‘as needed’ basis, rather than having to buy in a set amount of hardware and applications. The idea is that because it is unlikely that all companies within a given cloud will experience outages, and thus need to upscale, at the same time, they can share the cloud’s resources, drawing on them only when needed. The system also meets regulatory requirements, can be easily monitored and managed, and can fall back to recovery sites. This gives comfort to those reliant on instant messaging, trading platforms and the like, where the window available for recovery is very small.

Andy Brewerton, spokesperson for business continuity at CA Technologies, states the case for the cloud as the ability to “switch resources on and off. In a physical datacentre you have to scale to peak, but the cloud only ever runs at demand scale, so you pay for what you actually use, not what you might need to use.”
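A back-of-the-envelope comparison makes the point. Every figure below is hypothetical and chosen purely for illustration; none comes from CA Technologies.

```python
# Hypothetical figures only: scaling a datacentre to peak versus paying for actual use.
PEAK_SERVERS = 100      # capacity needed only during a failover or demand spike
AVERAGE_SERVERS = 20    # capacity actually in use most of the year
HOURS_PER_YEAR = 24 * 365

cost_per_owned_server = 3000.0   # assumed annual cost of an owned, always-on server (£)
cost_per_server_hour = 0.12      # assumed on-demand price per server-hour (£)

scale_to_peak = PEAK_SERVERS * cost_per_owned_server
demand_scale = AVERAGE_SERVERS * HOURS_PER_YEAR * cost_per_server_hour

print(f"scale-to-peak datacentre: £{scale_to_peak:,.0f} per year")
print(f"demand-scale cloud:       £{demand_scale:,.0f} per year")
```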

Mike Osborne, managing director at ICM, adds: “Cloud virtualisation and storage areas mean you can move between networks very easily rather than sit on a hardware bank. It also allows for instantaneous back-up via virtualisation. The beauty is that you can drag and drop applications between computers as virtualisation is icon-led. This would save a huge amount of time in a disaster and saves relying on lots of staff.”

But although the cloud may seem like an ideal solution, financial services firms have huge concerns about data security and privacy and would be loath to hand over data to a public cloud. Many may instead use a private cloud, or a mix of the two.

“There is obviously an issue over how secure data within a cloud can be, and that should be subject to rigorous service level agreements and ongoing checks on segregation. The cloud itself would also need to have a business continuity and disaster recovery plan in place,” argues Brewerton.

He thinks most organisations will end up using a hybrid system, keeping some data in-house, perhaps in a private cloud, and outsourcing less sensitive data to a public cloud.

Osborne agrees: “A private cloud connects a user into the firm’s own datacentre using virtualisation. It’s basically about replicating an external cloud using a private network between two locations.”

And, interestingly, demand for cloud ‘top-up’ services is now emerging, so that extra bandwidth can be accessed from a public cloud and applied privately.

The tipping point at which a company no longer needs to physically host a datacentre, and can instead use virtualisation and replication technology to move business continuity and disaster recovery wholly into the cloud, is clearly approaching – it is now both economically and technologically viable.

But Bird is not convinced: “It will create a single point of failure to worry about and thus create a different set of issues. There will be questions around who is managing the cloud and their back up plans. You can outsource responsibility but systems failure is the problem of the company itself and it alone will bear the reputational risk and cost should the worst happen,” he says.

Bringing it together
There are, of course, other options. “One technology that is enabling a smooth transition for businesses into this more mobile world is Unified Communications (UC),” explains Lee Shorten, managing director, UK and Ireland, Avaya. “UC helps employees become more productive by allowing mobile users to connect to company directories and databases remotely, improving collaboration and problem solving in real-time, regardless of their physical location. UC reduces business costs by enabling users to make calls via the internet while in the office. By providing universal access to any and all messages, it ensures that voice, e-mail, or fax messages can be viewed from a PC, phone, or other wireless device and managed from any location. UC reduces the need for physical presence in the office, even to sync email or delete voicemails, but it does so without compromising efficiency.”
