Nick Hammond, Lead Advisor for Financial Services at World Wide Technology, discusses how financial companies can ready their systems for the challenges of 2018
The start of the year has already seen its fair share of trend pieces predicting the technology advances just on the horizon for banks and financial services firms. There is a great deal of media excitement about emerging technologies and their potential benefits – the administrative power of blockchain, the capacity of artificial intelligence programmes to automate processes, and the way augmented reality can bring a whole new interface to consumer banking.
And this excitement comes with good reason: financial services firms now have many options to adopt new technology from cloud-native vendors focused on agility and short delivery cycles. But the shift to cloud architectures changes not only how banks will adopt innovations, but also how compute, storage, security and communications infrastructures are managed.
Financial services organisations have already been given many options to build or leverage cloud resources, and the agility this creates will provide many more opportunities over the next year. But they do not have the same number of options to secure this environment. With the complexity of existing infrastructures, and a swathe of new regulations coming into force this year, banks need to find a credible methodology to assure their critical applications within this new environment.
While new technologies hold huge promise, the shift to digital systems means that data and applications within a bank or financial organisation are no longer locked down with limited access in and out of the data centre.
Legacy systems used to be relatively easy to protect, no matter how complex their internal architecture. As long as vital or sensitive data was kept inside, everything could be secured by a firewall surrounding the system perimeter.
But with the advent of cloud computing, online and mobile banking, third party data storage, and third party apps operating both inside and outside the infrastructure, this perimeter is no longer easy to define. Computing infrastructure can no longer simply be closed off from the outside. With bring-your-own-device, mobile and remote access policies, as well as the growing incidence of third party services being given access to parts of a bank's data, more and more users are interacting with a bank's systems.
Following the series of cyber attacks of the past few years, the swathe of regulations coming into effect during 2018 is shifting the mandate from basic compliance to full assurance: companies must demonstrate that they are in control of their systems and can prevent certain events from occurring.
MiFID II, which came into force in January, stringently requires that development environments for new programmes be completely separate from the working production environment, so that poorly written, untested code cannot cause problems in the main structure. Financial institutions cannot ensure this without thorough visibility into their complex infrastructure, and the right policies to ensure that communication between these environments is sealed off.
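As an illustration of the kind of check this separation implies, here is a minimal sketch in Python. The zone-based `Rule` format and the zone names are invented for illustration, not taken from any specific firewall product:

```python
# Illustrative sketch: auditing a rule set for dev-to-prod leakage.
# The Rule structure and zone names ("dev", "prod") are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    source_zone: str   # e.g. "dev", "prod", "dmz"
    dest_zone: str
    action: str        # "allow" or "deny"

def violations(rules):
    """Return allow-rules that let development traffic reach production."""
    return [r for r in rules
            if r.action == "allow"
            and r.source_zone == "dev"
            and r.dest_zone == "prod"]

rules = [
    Rule("dev", "dev", "allow"),    # dev systems may talk among themselves
    Rule("dev", "prod", "allow"),   # breaches the required separation
    Rule("prod", "prod", "allow"),
]

for r in violations(rules):
    print(f"Separation breach: {r.source_zone} -> {r.dest_zone}")
```

In practice such an audit would run against the real rule base and observed traffic, but the principle is the same: the separation must be verifiable, not assumed.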
GDPR, which reaches its compliance deadline in May, stipulates that any company handling European customer data must detect any kind of cyber breach and report it within 72 hours, and must be able to delete a customer's data in its entirety from all of its systems on request. This means financial services providers need visibility over every place data is used and sent, along with policies in place to prevent and detect a leak.
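To make that visibility requirement concrete, here is a minimal sketch, assuming the firm maintains a central data inventory mapping each system to the customer identifiers it holds. The system names, identifiers and `inventory` structure are all hypothetical:

```python
# Illustrative sketch of a right-to-erasure workflow over a data inventory.
# System names and customer identifiers are invented for illustration.

inventory = {
    "core_banking":   {"cust-001", "cust-002"},
    "crm":            {"cust-001"},
    "analytics_lake": {"cust-001", "cust-003"},
}

def locate(customer_id):
    """Every system holding data for this customer -- the visibility GDPR demands."""
    return {system for system, ids in inventory.items() if customer_id in ids}

def erase(customer_id):
    """Delete the customer from every system and return the systems touched."""
    touched = locate(customer_id)
    for system in touched:
        inventory[system].discard(customer_id)
    return touched

print(sorted(erase("cust-001")))  # ['analytics_lake', 'core_banking', 'crm']
```

The hard part in a real institution is not the deletion loop but building and maintaining the inventory itself, which is precisely the visibility problem the regulation exposes.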
With a variety of critical applications in different places, most organisations have agreed that the best way to protect them is by wrapping individual security policies around each one, and a big trend of 2018 will be the further development and implementation of these kinds of policies. But this is not always as easy as it sounds.
Because each vital application talks to others in a variety of untraced ways, implementing one policy on an app can have knock-on effects down the chain that bring everything to a grinding halt – and when it comes to vital applications such as SWIFT, this kind of disruption is something financial services organisations, and their customers, cannot afford.
So it is not enough simply to invest in new security products without a view to the functioning of the wider system. Many financial services firms run into trouble by buying security products first, only to find they cannot be successfully implemented within the working infrastructure.
For many providers, the people who designed and coded the original systems are no longer around, and the knowledge of exactly how each piece connects to the next has left the company. This leaves institutions with unwieldy, opaque systems in which a huge variety of vital applications communicate with each other in ways too complex to trace without a great deal of expertise.
To get these policies to work, IT leaders first need a real-time picture of the interdependencies between critical applications and the ways internal and external users interact with them. Security policies need to be adapted to every app, and then it is necessary to test how any policy would work within the wider system, and check that it does not affect other functions down the line. Financial institutions should only invest in security products after all this research is completed.
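That dependency-testing step can be sketched as a small graph exercise: record the flows observed between applications, then compute which apps lose reachability when a proposed block-rule is applied. The application names and flows below are invented for illustration:

```python
# Illustrative sketch: finding the knock-on effects of a proposed policy
# on a graph of observed application flows. All names are hypothetical.

from collections import defaultdict

# Observed flows between applications as (source, destination) pairs.
observed_flows = {
    ("mobile_banking", "payments_gateway"),
    ("payments_gateway", "swift_interface"),
    ("swift_interface", "settlement"),
    ("reporting", "settlement"),
}

def reachable(flows, start):
    """All applications reachable from `start` along the observed flows."""
    graph = defaultdict(set)
    for src, dst in flows:
        graph[src].add(dst)
    seen, stack = set(), [start]
    while stack:
        for nxt in graph[stack.pop()] - seen:
            seen.add(nxt)
            stack.append(nxt)
    return seen

def knock_on_effects(flows, blocked_edge):
    """Apps that lose reach to some dependency if `blocked_edge` is cut."""
    remaining = flows - {blocked_edge}
    apps = {a for flow in flows for a in flow}
    return {app: lost for app in apps
            if (lost := reachable(flows, app) - reachable(remaining, app))}

# Blocking SWIFT-to-settlement traffic cuts off three upstream applications,
# while `reporting` is unaffected because it reaches settlement directly.
effects = knock_on_effects(observed_flows, ("swift_interface", "settlement"))
print(effects)
```

A real-time dependency map plays the role of `observed_flows` here: without it, the knock-on effects of a policy change can only be discovered in production, which is exactly the disruption firms need to avoid.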
Legacy architectures, and the way they store and use data, are becoming yet more complex, and as regulatory mandates put the onus on banks to assure their systems, it is even more important for financial companies to gain a deep understanding of their infrastructures. It is only by coming from a place of infrastructure expertise that these systems can be mapped, effectively protected, and simplified in a way that allows firms to take full advantage of the agile, cloud-native technology developments that are available in 2018.