
By Richard da Silva, VP EMEA at Revelock

Account takeover (ATO) occurs when a cybercriminal poses as a genuine customer, for example by using stolen credentials or by hijacking a legitimate session already in progress, in order to make unauthorised transactions. There is a vast and growing range of techniques and technologies bad actors can leverage to take over the accounts of legitimate users and steal funds.

What’s more, advances in technology have given rise to new, more complex forms of ATO, which an increasingly sophisticated and technically experienced generation of fraudsters is exploiting. The result is an ever more lucrative landscape for cybercriminals, and an endless game of cat-and-mouse for fraud prevention teams that are often left overwhelmed with alerts. To get one step ahead of cybercriminals, and to both secure accounts and maintain the trust of legitimate customers, financial organisations need to leverage new and innovative technologies themselves, so that they can authenticate users and establish beyond doubt that they are who they say they are.

ATO techniques are becoming increasingly complex

Technologies are emerging that allow cybercriminals to impersonate other people with a startling degree of accuracy and believability. For example, MyHeritage recently launched ‘Deep Nostalgia’, which uses AI-powered software to reanimate old photographs, giving families the chance to see their relatives ‘come back to life’. This includes animating photos so the subject appears to do and say things the real person never actually did or said.

And it’s not just a person’s appearance that can be stolen: technology now exists that can convincingly clone an individual’s voice too. Again, this was created with good intentions. VocaliD’s AI-powered technology began as an extension of its founder’s clinical work, aiming to give a voice to patients who otherwise could not speak, for example after surgery.

These innovations, however, also underpin the worrying rise of ‘deepfakes’, which have garnered recent media attention due to the associated risk of political and social manipulation. A prominent example of a deepfake in the wild is a video that appears to show Mark Zuckerberg celebrating his “total control of billions of people’s stolen data”.

It’s easy to see why technology that allows a person’s image and voice to be convincingly replicated is readily exploited by bad actors looking to perpetrate complex ATO. It gives them a path to pose convincingly either as a legitimate customer when faced with a bank’s security checks, or as a representative of the customer’s bank during a phishing attack.

When fraudsters can look and sound exactly like a genuine customer and, in a post-breach world, have access to millions of sets of legitimate account details, how will financial organisations be able to tell the difference between friend and foe?

What does this mean for fraud prevention?

The increasing sophistication of the technologies bad actors use to perpetrate online fraud is causing problems for many of the authentication and fraud prevention methods financial institutions currently deploy. For example, the startling accuracy of present-day deepfakes demonstrates that solutions founded on physical biometrics, such as facial recognition software, are no longer secure in isolation.

What’s more, modern ATO techniques can also undermine fraud prevention tactics such as multi-factor authentication, for example by hijacking One-Time Passcodes (OTPs). SIM swap fraud, where bad actors manage to switch an innocent person’s mobile number onto a new device in their possession, has seen a huge uptick in popularity among threat actors in recent months. To pull off a SIM swap, cybercriminals simply need to convince the mobile provider in question that they are the legitimate customer, which is now easier than ever with advanced voice-cloning and deepfake technology. Once they have re-routed the customer’s number to their own phone, the criminals receive the OTP in the customer’s stead and so bypass the extra factor of authentication set up to stop them.
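To illustrate why an SMS-delivered OTP offers little protection once a number has been re-routed, here is a minimal Python sketch of the flow. The function names (send_sms, issue_otp, verify_otp) are hypothetical placeholders rather than any bank’s or telecom provider’s real API; the point is simply that the check only proves control of the phone number at that moment.

```python
# Minimal sketch of SMS-based OTP verification (illustrative only).
# Whoever currently controls the phone number receives the code, so a
# successful SIM swap defeats this factor entirely.
import secrets
import time

OTP_TTL_SECONDS = 300  # passcode validity window


def send_sms(phone_number: str, message: str) -> None:
    # Placeholder for a real SMS gateway call.
    print(f"SMS to {phone_number}: {message}")


def issue_otp(phone_number: str) -> tuple[str, float]:
    otp = f"{secrets.randbelow(1_000_000):06d}"  # 6-digit one-time passcode
    send_sms(phone_number, f"Your login code is {otp}")
    return otp, time.time()


def verify_otp(expected: str, issued_at: float, submitted: str) -> bool:
    # This only proves the submitter saw the SMS within the validity window;
    # if the number has been re-routed to a fraudster's SIM, it still passes.
    within_window = (time.time() - issued_at) < OTP_TTL_SECONDS
    return within_window and secrets.compare_digest(expected, submitted)
```

Nothing in verify_otp ties the passcode to the genuine account holder; possession of the phone number is the only thing being checked, and that is exactly what a SIM swap transfers to the attacker.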

Use technological innovation for good

Fortunately, just as advances in technology have led to more types of complex ATO, they have also led to innovations in fraud prevention solutions. The foremost of these is the analysis of users’ behavioural biometrics. Whereas physical biometrics can be replicated and leveraged in impersonation attacks, behavioural biometrics are unique to each and every user, a bit like a digital fingerprint.

Financial organisations can analyse thousands of parameters related to a user’s online interactions, such as typing speed, touchscreen pressure and more, in order to create unique ‘BionicIDs’ for each user that cannot be replicated. In short, an approach to online fraud prevention founded in behavioural biometrics focuses on asking each user the fundamental question: “Are you really you?” And in an age when both fraudsters and the technology they use are becoming increasingly intelligent, financial institutions’ best defence is to know their customers inside and out.
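As a rough illustration of the underlying idea (and not Revelock’s actual BionicID model, which is proprietary and far richer), the Python sketch below derives two simple typing-cadence features from key-press timestamps and checks whether a new session sits close to a user’s enrolled profile. The feature names and tolerance value are assumptions made purely for illustration.

```python
# Simplified behavioural-profiling sketch: compare a session's typing cadence
# against an enrolled profile. Real systems analyse thousands of parameters.
from statistics import mean, pstdev


def keystroke_features(press_times: list[float]) -> dict[str, float]:
    """Derive simple typing-cadence features from key-press timestamps (seconds)."""
    gaps = [b - a for a, b in zip(press_times, press_times[1:])]
    return {"mean_gap": mean(gaps), "gap_jitter": pstdev(gaps)}


def matches_profile(profile: dict[str, float], sample: dict[str, float],
                    tolerance: float = 0.35) -> bool:
    """Crude check: does the new session's cadence sit close to the enrolled profile?"""
    for name, enrolled_value in profile.items():
        if abs(sample[name] - enrolled_value) > tolerance * max(enrolled_value, 1e-6):
            return False
    return True


# Enrolment builds a profile from known-good sessions; later sessions are compared to it.
enrolled = keystroke_features([0.00, 0.18, 0.35, 0.55, 0.71, 0.90])
new_session = keystroke_features([0.00, 0.05, 0.11, 0.16, 0.22, 0.27])  # much faster, bot-like
print(matches_profile(enrolled, new_session))  # False -> flag for closer scrutiny
```

In practice a mismatch like this would typically prompt additional checks rather than an outright block.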

Automation is the technological key to effective fraud prevention

Considering the scale and sophistication of modern fraud attacks, alongside the ever-advancing nature of technology, it is no surprise that traditional prevention methods can no longer keep up – a human-powered approach alone is simply not enough.

Behavioural biometric analytics is innovative in that it can be layered with other authentication methods and deployed alongside artificial intelligence and machine learning technology. Combining behavioural biometric analysis with advanced AI means a solution becomes ever more accurate for each individual user, making a financial institution’s fraud prevention strategy increasingly effective over time.
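As a hedged sketch of how that layering might look, the example below trains an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) on a handful of behavioural features from a user’s past sessions and scores a new session against them. The feature set, sample values and choice of IsolationForest are illustrative assumptions, not a description of any vendor’s production model.

```python
# Illustrative sketch: layering a machine-learning anomaly detector on top of
# behavioural session features. Assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one historical session for a user:
# [mean key gap (s), mean touch pressure, swipe speed (px/s), session length (s)]
history = np.array([
    [0.18, 0.62, 410.0, 95.0],
    [0.17, 0.60, 425.0, 110.0],
    [0.19, 0.65, 398.0, 87.0],
    [0.18, 0.61, 405.0, 102.0],
    [0.20, 0.63, 415.0, 99.0],
    [0.19, 0.64, 420.0, 105.0],
])

model = IsolationForest(contamination="auto", random_state=0).fit(history)

# A new session with very different cadence and pressure can then be scored,
# challenged or blocked in real time rather than reviewed after the fact.
new_session = np.array([[0.05, 0.20, 900.0, 30.0]])
print(model.predict(new_session))            # -1 means flagged as anomalous, +1 means normal
print(model.decision_function(new_session))  # lower scores indicate more anomalous sessions
```

Because the model is retrained as each user’s history grows, its picture of ‘normal’ behaviour keeps improving, which is the sense in which the combined approach becomes more accurate for each user over time.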

Financial organisations should also look to the enterprise cybersecurity industry for inspiration; it is striding ahead in automating detection and response capabilities to block attempted attacks. What was once a manual alert system has become a fully automated process, one that extends across an enterprise’s entire technology stack through a single Extended Detection and Response (XDR) platform. Automation means potential threats can be dealt with in real time, blocking fraud before it occurs, while fraud analysts are freed up to focus on complex, higher-level threats such as the intricate networks of mule accounts that may already be hiding in a bank’s systems.

In an age of increasingly sophisticated attack vectors such as deepfakes, an approach to fraud prevention founded on behavioural biometrics can comprehensively establish that each legitimate customer is who they say they are, and is not being impersonated or manipulated, throughout their entire online journey.
