By Jon Leppard, Founder and Director, Future Facilities
Of all business sectors, Finance is in the most unenviable position when it comes to IT. Whether you’re stuck struggling with legacy software in Retail Banking or Insurance, or on the cutting edge of tech in the trading scene, you’re facing some unique tech challenges. Specifically, you’re probably struggling to be efficient.
There are a number of reasons why, but I’m going to focus on one aspect: scale. Finance naturally creates a lot of work for datacentres through transactions – it’s the nature of the beast. Banks and traders have invested heavily in on-premise datacentres to handle this load – and were among the fastest adopters and greatest beneficiaries of virtualization.
For traders in particular, the complexity of markets and financial instruments only grows – raising workloads (that must be processed quickly) along with it. Whilst Moore’s Law provides significant relief from this effect, the growing workloads mean that finance sector hardware needs to be replaced more often than in other sectors, with the obvious effect on running costs and up-front budget requirements.
But the major players in the Finance industry aren’t running out of hardware options, nor are they running out of potential budget (they have one of the best business cases for continual investment in IT imaginable). They are literally running out of physical space.
Scaling when you can’t just go Cloud
Traders have lived and died by a millisecond of latency for over a decade now – Cloud computing doesn’t really make sense for these critical workloads. For them, efficiency means doing as much as possible within the space they have, without anything catching fire. Running costs mean little against the potential gains of being faster.
For banking and trading, there’s also the major issue of dealing with heavily regulated financial data that is often restricted by the geographies it can be stored in. Cloud providers that could theoretically take the burden from the finance industry have struggled to meet these requirements in practice.
This means that the finance industry must stay on-premise – which heavily incentivises them to be as efficient as possible within the space they have been allocated – but many are running up against their limits. With the only other option being to build a new facility, even those with the deepest of pockets are reluctant to go down this road when there is any more juice that could be squeezed from the facilities they currently have. They’re in what we call “Compute Jail”.
With this context, it is small wonder then that we see major mergers proposed between giants such as The London Stock Exchange and Deutsche Börse. They are attempting to scale efficiently without the option of simply going Cloud.
Breaking out of Compute Jail
Relief is on the horizon: new designs for what are called “software-defined”, homogenised facilities look set to go a long way towards meeting this extraordinary need for compute efficiency in the finance industry. Like a second wave of virtualization, they promise to bring a level of intelligence to the datacentre’s resource allocation that was previously unimaginable. They will be costly, but the bigger problem for the finance industry is that software-defined datacentres are still a few years away, even with heavy investment.
That’s bitter news for finance datacentres that are running against the physical limits of the space they are working with. They need to do more, without having a vital server catch fire or fail.
We therefore need to create datacentres that can respond to these business needs now, with absolute control and visibility, to remove risk from the equation. The finance industry needs what we call the ‘Fluid Data Center’.
What’s a Fluid Data Center?
The Fluid Data Center is a concept where capacity and risk can be snapshotted quickly and accurately. A Fluid Data Center can “pour” compute resource towards either end of the capacity–risk spectrum, safe in the knowledge of the impact this will have on either the capacity or the resilience of the entire facility.
It can do this on a case-by-case basis, and importantly for the finance industry it can do this quickly.
It’s achieved by knowing exactly what is happening within a datacentre at any moment, then using advanced engineering simulation tools to map out the impact of any given change – not just in terms of power draw, but on the air flow of a room, the additional strain on a given AC unit, and so on, down to the fine detail.
What this tends to result in, aside from happier business teams who avoid the cost of a new facility, is incredibly efficient datacentres. At the moment, the only answer to not knowing precisely where a DC’s limits lie is to factor in a healthy safety margin – in simple cases, an extra AC unit or two, or one less server than could physically fit in the space. A Fluid Data Center has that server installed, safe in the knowledge that it can in fact run at X% of capacity without compromising the safety of surrounding equipment.
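The core idea – simulate a proposed change against known limits before committing to it – can be illustrated with a deliberately simplified sketch. Real Fluid Data Center tooling models airflow and thermal behaviour with full engineering simulation; this toy version only checks a power-versus-cooling budget, and every figure and name in it is invented for illustration:

```python
# Toy "what-if" capacity check. A real simulation would model airflow and
# thermal interactions room-wide; here we only ask whether a proposed server
# fits within a rack's cooling budget while preserving a safety margin.
# All numbers are hypothetical.

def can_add_server(rack_draw_kw, cooling_capacity_kw, new_server_kw,
                   safety_margin=0.10):
    """Return True if adding the server keeps total draw within the
    cooling budget, less the reserved safety margin."""
    projected = rack_draw_kw + new_server_kw          # draw after the change
    usable = cooling_capacity_kw * (1 - safety_margin)  # budget minus margin
    return projected <= usable

# A rack drawing 8.5 kW against a 12 kW cooling budget (10.8 kW usable):
print(can_add_server(8.5, 12.0, 1.8))  # 10.3 kW <= 10.8 kW -> True
print(can_add_server(8.5, 12.0, 2.5))  # 11.0 kW >  10.8 kW -> False
```

The point of the sketch is the shape of the decision, not the arithmetic: knowing the real limits precisely is what lets the margin be tightened safely, which is where the extra capacity comes from.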
A Fluid Data Center knows exactly how much juice it has, and the size of the container – and it uses this information to act faster and more safely than human predictions could ever achieve.
That sounds like Finance’s ticket out of Compute Jail – until the real cavalry arrives.