By Alix Melchy, Jumio VP of AI
Currently, 85% of global financial institutions are using some form of AI in their processes, with 64% expecting to become mass adopters within two years. Clearly, this technology is no longer the stuff of sci-fi films; it is a very real capability that companies are embracing across a broad range of applications.
These numbers are not surprising. We’ve seen a steady increase in AI’s usage in this sector over the last decade. However, the global COVID-19 crisis has dramatically expanded the use cases for these technologies. As remaining bank branches were forced to close, many users turned to online banking for the first time, with 30% of people in the UK downloading a mobile banking app since lockdown began. While it would be easy to dismiss this change in behaviour as driven by necessity, 25% of Brits say they will use mobile or online banking more after the pandemic than they did before. AI could therefore play an even more important role in helping financial institutions maintain and grow customer bases in this digital-first banking world.
However, without proper attention, the AI algorithms implemented could become more of a hindrance than an enhancement to the delivery of digital services. We only need to look to other sectors to see where this has been the case: for example, the UK’s A-Level exam grading error in the summer, which led to widespread havoc. Using an AI algorithm to decide grades seemed like a sensible, forward-thinking solution to the fact that students hadn’t sat exams. However, due to inbuilt biases within the algorithm, students from certain communities were disproportionately impacted and their university places put in jeopardy, showing the wide-reaching and damaging potential of misused AI.
If we translate this fiasco to the financial services sector, inbuilt biases of this kind, applied to determining credit risk or loan eligibility, could negatively impact the way millions of consumers and businesses borrow, save and manage their money.
One challenge at a time
AI is an incredible tool, but that is all it is. Despite what many believe, it cannot solve all of our problems in one fell swoop, no matter how much data we may feed it. This thinking is known as AI solutionism, and it is what can jeopardise the usability of AI. Companies must stay aware of its limitations and not expect one algorithm to resolve a whole host of problems.
The best way to make use of this tool is to give it a very specific challenge to solve. As financial institutions begin to implement AI at scale, it is important that businesses identify the exact problem they are trying to solve and then establish the right questions to ask. By identifying this problem at the very start of the process, companies can continue to come back to it throughout and ensure that the algorithm is still doing what it is supposed to.
Large data sets as the key to success
Bias in AI is rightly becoming a hotly debated topic, with companies across sectors coming under fire for not addressing the inbuilt biases that may exist in their algorithms. Part of the solution to solving this problem lies in addressing the size of the dataset. AI systems are built on sets of algorithms that learn by finding patterns on which they can base decisions. Without strong, relevant and representative data underpinning an AI model, it will never be able to produce strong, relevant and representative results.
If you look at Siri and Alexa, they’ve had issues because of how their AI algorithms were trained. Research shows that a white American male has a 92% accuracy rate when it comes to being understood by a voice-enabled assistant, a white American female has a 79% accuracy rate, and a mixed-race American woman has only a 69% chance of being understood. Biases like these are the reason Amazon has invested in thousands of staff around the world to review voice content and improve the AI algorithm to be more inclusive of all voices.
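Disparities like these only surface when accuracy is broken down by demographic group rather than averaged across all users. A minimal sketch of that kind of per-group measurement, using hypothetical evaluation data (not Amazon's or Apple's actual benchmarks), might look like this:

```python
from collections import defaultdict

def accuracy_by_group(results):
    """Compute recognition accuracy per demographic group.

    `results` is a list of (group, was_understood) pairs from a labelled
    evaluation set. The group labels and numbers below are illustrative.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, understood in results:
        totals[group] += 1
        if understood:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical results: 100 utterances per group.
sample = ([("group_a", True)] * 92 + [("group_a", False)] * 8
          + [("group_b", True)] * 69 + [("group_b", False)] * 31)
rates = accuracy_by_group(sample)
print(rates)  # group_a: 0.92, group_b: 0.69 -- a 23-point gap
```

A single headline accuracy of around 80% would hide this gap entirely; the per-group breakdown is what makes the bias visible and actionable.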
At the start of the process, companies should ensure that they have enough data to accurately represent the entire community they are trying to serve. For the financial services sector, this enables employees to treat customers fairly and allows them to maintain transparency and accountability in their decision-making processes. This by extension helps companies avoid legal claims or fines from regulators which can, in turn, cause deep reputational damage.
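One simple way to check this before training begins is to compare each group's share of the dataset against its share of the population the model will serve. The sketch below uses made-up group names and figures; real checks would draw on census or customer-base statistics:

```python
def representation_gaps(dataset_counts, population_shares):
    """Compare each group's share of the training data with its share of
    the served population. A negative gap means under-representation.

    All names and numbers here are illustrative assumptions.
    """
    total = sum(dataset_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        data_share = dataset_counts.get(group, 0) / total
        gaps[group] = data_share - pop_share
    return gaps

counts = {"group_a": 8000, "group_b": 1500, "group_c": 500}
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
print(representation_gaps(counts, population))
# group_a over-represented by ~0.20; group_b and group_c each short by ~0.10
```

Flagging these gaps at the start of the process gives teams a concrete target for additional data collection before any bias is baked into the model.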
When it comes to bias especially, the human eye, alongside systematic bias measurement frameworks, is essential to keeping AI on track. Once the AI model is set up, companies must continue to pay attention to how their AI-based models are reaching decisions and must fine-tune these models over time. By doing so, the models will become more accurate and continue to answer the original question even as the context around them changes.
Test, test, test
To ensure that financial institutions are not publicly brought down by their AI algorithms, it is essential that these algorithms are tested first. By running algorithms through a pilot testing phase, companies can assess feasibility, duration, costs and adverse effects, and can better understand why an algorithm is making a certain decision. If this scrutiny is not done thoroughly, the algorithm will not provide the right answers, which could have long-lasting and wide-reaching consequences.
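A pilot phase works best when its pass/fail criteria are written down in advance. As a minimal sketch, an institution might require both a floor on worst-group accuracy and a cap on the gap between groups; the thresholds here are illustrative assumptions, not regulatory standards:

```python
def pilot_gate(group_accuracy, min_accuracy=0.90, max_gap=0.05):
    """Decide whether a model passes its pilot phase.

    `group_accuracy` maps demographic group -> accuracy on pilot data.
    Both thresholds are hypothetical; real acceptance criteria depend on
    the institution's risk appetite and regulatory context.
    """
    worst = min(group_accuracy.values())
    gap = max(group_accuracy.values()) - worst
    passed = worst >= min_accuracy and gap <= max_gap
    return passed, {"worst_group_accuracy": worst, "group_gap": gap}

ok, report = pilot_gate({"group_a": 0.96, "group_b": 0.91})
print(ok, report)
```

Encoding the gate as an explicit check means a model that performs well on average but poorly for one community is stopped before launch rather than discovered in production.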
Building ethical AI
To stay ahead in an increasingly digital marketplace, financial services institutions will need to reap the benefits of AI but, crucially, must address the ethics of it and the distrust people have towards a tool that makes important life decisions on their behalf.
When putting AI in place, businesses will need to make sure that:
- All data has been acquired with the proper consent from users
- AI practitioners and programming teams are themselves representative of the community or undertake bias training to protect against bias
- Accurate and robust record keeping is in place to maintain transparency for users
In the financial services industry, there are many ways that AI can be leveraged. One way is in the document-centric identity proofing space, whereby an identification document is matched with a selfie of the user to confirm their identity. This will be an essential area of focus as the diminishing role of the physical branch pushes this process online. Reducing bias is essential here if identity verification online is going to be as reliable as in person. Companies know this: the 2020 Gartner Market Guide for Identity Proofing & Affirmation predicts that by 2022, 95% of RFPs in this space will contain clear requirements around minimising demographic bias.
Clearly the use cases for the power of AI in financial services are only going to increase, whether that’s quickly proving a customer is who they claim to be when opening an online account or deciding whether to provide a small business loan. While AI has the potential to be revolutionary, it’s clear that failing to address potential biases and adopting an AI-solutionism line of thinking could pose real risks to its long-term viability.
Now is the time for financial institutions to look seriously at how they are implementing AI in order to truly improve their fraud detection, user experience and digital transparency.