By Mike Walton, Founder and CEO at Opsview
Technology is firmly integrated into business operations, and organisations are putting more time and resources than ever into digital transformation projects. Research from Deloitte found that the average digital transformation budget increased by 25% over the last year, and that 19% of respondents planned to invest $20 million in transformation projects in 2019. Like other industries, financial institutions are expected to follow suit by modernising and enhancing their services so that they, too, can meet customer expectations and operate successfully in an increasingly digital landscape. As such, the pace of change has accelerated rapidly over the last two years: a report compiled by PwC revealed that 77% of financial institutions are increasing efforts to innovate.
However, while digital transformation can bring opportunities for innovation, it has also presented the finance industry with new challenges: specifically, the difficulty of maintaining an efficient IT infrastructure that supports digital change. Without this, any innovation is threatened with failure, customer satisfaction will plummet and business operations will be hugely disrupted. The banking industry in particular has been highlighted as one sector struggling to implement digital transformation strategies successfully. According to the recent Which? report launched last month, which follows a Financial Conduct Authority survey in November 2018, UK banking has been in meltdown. The sector was hit by IT outages on a daily basis in the last nine months of 2018, with six of the major banks suffering at least one incident every two weeks.
This comes hot on the heels of a series of dramatic IT failures over the last few years – for example, the British Airways outage that affected 75,000 passengers. One of the most catastrophic was the TSB failure, with the bank losing 12,500 customers and some £330m in the wake of its monumental IT systems migration. In this ‘always on’ world driven by consumerisation, downtime is no longer acceptable, particularly in mission-critical industries like financial services. Furthermore, the banks themselves simply cannot afford to keep suffering frequent IT outages. Downtime has a hugely detrimental impact on brand reputation, and the loss of customer trust is difficult to reverse. Just a few minutes of downtime can destroy the customer experience, and if organisations fail to deliver exceptional customer service in today’s fast-moving world, competitors will waste no time stealing customers and swallowing market share.
IT outages are also financially damaging. Gartner sent shockwaves through the industry when it estimated that IT downtime costs $300,000 per hour. While this may seem a huge amount, it is far from a theoretical risk: British Airways lost £170m off its market value during its 2017 IT outage, and a similar outage at US airline Southwest, caused by a router failure, led to over 2,000 cancelled flights and an estimated $54m to $82m in lost revenue. In the banking industry, the call for a maximum outage time of two days is a step in the right direction, though even this will likely become unacceptable in future. If they want to stay competitive, businesses must adopt new processes and tools that leverage the best systems available today, and seek to reduce the two-day maximum to a matter of minutes within the next two years, working towards a virtual zero-downtime model.
So why are established financial institutions failing to do this, and struggling to mitigate the risk of IT outages? In contrast to the picture painted above, modern and disruptive ‘challenger banks’ such as Monzo and Starling Bank have led the way by placing digital at the very heart of their operations. These businesses are able to innovate at pace, leaving more traditional companies a step behind. However, challenger banks are unique in that they inherited no legacy technology: born in the digital age, they have been able to deliver the services expected of a traditional bank while using modern technology throughout.
In comparison, organisations that struggle to implement successful digital transformation projects all have one problem in common: sprawling IT systems that are continuously patched up. Behind a new breed of innovative customer- and employee-facing digital services lies a hotchpotch of disparate, decentralised systems – virtual machines, hybrid cloud accounts, IoT endpoints, physical and virtual networks and much more. These systems don’t talk to each other, and they frequently fail. To make things worse, many of them sit outside the control of IT, adding an extra layer of opacity and complexity. In fact, a recent report from Parliament’s Public Accounts Committee revealed that the Bank of England’s IT expenditure is being inflated by the use of legacy systems – the bank is reportedly spending 33.6% more on IT than other central government departments.
Put bluntly, financial institutions simply have too much at stake to risk continued IT outages. To mitigate this risk, they need to adopt best-practice operational activities and processes, such as running regular threat and vulnerability assessments, conducting configuration reviews and including operational process validation checkpoints. By increasing visibility into the entire IT network, these measures enable IT teams to anticipate problems and deal with them quickly before they become outages, significantly reducing the chances of a systems failure.
Yet gaining that insight is a persistent challenge. Sometimes it is because the tools in use were designed to monitor the static, on-premise infrastructure of the past rather than today’s dynamic, cloud- and virtual-based digital systems. More commonly, it is because organisations are using multiple tools, producing varying versions of the truth for siloed IT teams. Research from analyst firm Enterprise Management Associates indicates that many organisations run more than ten different monitoring tools, and that it can take businesses between three and six hours to find the source of an IT performance issue. On top of this, three out of four network managers say that at least one of their network monitoring tools has failed to meet their requirements for monitoring public cloud environments. This is a perilous approach given the extent of public cloud adoption today, and clearly unsustainable.
Only by unifying IT operations and monitoring under a single pane of glass can an organisation hope to get a holistic view of what’s going on. A centralised view ensures there is only a single version of the truth, thereby bringing siloed teams together, avoiding duplication of effort and, more importantly, ensuring that monitoring finally fulfils its promise to improve service performance, availability and user experience. Outages can still occur suddenly and without warning. In such cases, it is vital to detect the failure quickly and know which systems are impacted. Once the failure is identified, organisations should have processes in place to mitigate it rapidly, reducing downtime, poor user experience and lost revenue.
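To make the detect-and-triage loop concrete, the sketch below shows how a centralised monitor might classify service checks into a single shared view of system state. This is a minimal illustration, not any vendor's implementation; the service names, latency budget and status labels are hypothetical assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    """One health-check observation for a monitored service (hypothetical schema)."""
    service: str
    ok: bool
    latency_ms: float

def triage(results, latency_budget_ms=500.0):
    """Classify each service as OK, DEGRADED or DOWN.

    A single classification step like this gives every team the same
    'version of the truth' about service state, rather than each silo
    interpreting its own tool's output.
    """
    status = {}
    for r in results:
        if not r.ok:
            status[r.service] = "DOWN"
        elif r.latency_ms > latency_budget_ms:
            status[r.service] = "DEGRADED"
        else:
            status[r.service] = "OK"
    return status

def impacted(status):
    """Return the services needing immediate attention, worst first."""
    order = {"DOWN": 0, "DEGRADED": 1}
    return sorted((s for s, st in status.items() if st != "OK"),
                  key=lambda s: order[status[s]])

# Example: three hypothetical banking services polled by a central monitor.
checks = [
    CheckResult("payments-api", ok=True, latency_ms=120.0),
    CheckResult("mobile-login", ok=True, latency_ms=900.0),
    CheckResult("card-gateway", ok=False, latency_ms=0.0),
]
state = triage(checks)
print(state)            # {'payments-api': 'OK', 'mobile-login': 'DEGRADED', 'card-gateway': 'DOWN'}
print(impacted(state))  # ['card-gateway', 'mobile-login']
```

In practice the checks would come from real probes across the estate, but the design point stands: one classification pass over all sources, so the impacted-systems list is unambiguous when an outage begins.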
The consumerisation of IT means that banks and other financial services firms are under extreme pressure to provide an exceptionally high level of IT service 24 hours a day, 7 days a week. While an IT outage may not always be the fault of IT, consumers will still be unforgiving and can easily take their business elsewhere; they simply have too much choice and flexibility to tolerate prolonged outages online. Financial institutions need to invest heavily in processes for managing outages if and when they occur, or risk losing market share.