By Ralf Gladis, CEO, Computop
Where there is money there is crime, but whilst new technology has created opportunities for criminals, it has also enabled new and ingenious forms of counterattack. Developers of fraud-detection software claim they can detect fraud attempts with seemingly legitimate cards or accounts faster than an airbag inflates. The systems used by banks, card networks and Payment Service Providers (PSPs) may be marginally slower, but they are still running thousands of calculations within fractions of a second.
Protection in the future will be focused on substitute numbers and tokens that will flow through the Internet and make use of algorithms that evaluate all relevant information across all channels to distinguish between an honest and a fraudulent transaction. Consumers too are now being shielded by two-factor authentication (2FA) for higher-value transactions, while retailers are taking biometric authentication of customers into their own hands (delegated authentication) to achieve the best possible user experience and avoid accounts being hijacked by criminals. In the process, they are making life easier for the banks.
The fraudsters will persist, but their efforts will be met by increasingly smart defences and the use of big data and artificial intelligence by retailers and banks to learn from each individual case of fraud and identify patterns.
The problems: the changing face of fraud
When it comes to retail fraud, the scene of the crime has moved from the POS to the Internet, and the number of victims is huge. Shoplifting 2.0 is a term that has been given to the new digitally enabled way that ‘supposed’ customers steal from online stores. Some perpetrators specialize in self-print gift vouchers or try to rip off providers of online games and bets. Others are so brazen that they use click-and-collect services in the branches of omnichannel stores to pick up goods that they’ve paid for online with credit-card credentials they’ve either stolen or bought cheaply on the darknet.
How can retailers manage this problem, and defend themselves against fraudsters without scaring away potentially lucrative custom?
The rule-based systems that have been the backbone of automated fraud prevention for many years work through plausibility checks. This means that if a customer uses a credit card in Manchester, they can’t feasibly use the same card in London within 30 minutes. Today this is not sufficient. In globalized e-commerce, any ordinary consumer can go on a virtual shopping spree through the world’s cities during their lunch break. With outdated, crude filter settings, customers who are up to no good but have a good credit rating can easily slip through the net.
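A plausibility check of this kind can be sketched in a few lines. The example below is a minimal, hypothetical illustration of a velocity rule: it flags two card-present transactions whose implied travel speed between locations is impossible. The speed cut-off and the transaction fields are assumptions for the sketch, not part of any real system.

```python
import math
from datetime import datetime

# Hypothetical cut-off: no plausible ground travel exceeds this speed.
MAX_PLAUSIBLE_SPEED_KMH = 500

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def implausible_velocity(tx1, tx2):
    """Flag a pair of transactions whose implied travel speed is impossible."""
    hours = abs((tx2["time"] - tx1["time"]).total_seconds()) / 3600
    if hours == 0:
        return True  # same card, same instant, different places
    km = haversine_km(tx1["lat"], tx1["lon"], tx2["lat"], tx2["lon"])
    return km / hours > MAX_PLAUSIBLE_SPEED_KMH

# The Manchester/London example from the text, 30 minutes apart.
manchester = {"time": datetime(2023, 5, 1, 12, 0), "lat": 53.48, "lon": -2.24}
london_30m = {"time": datetime(2023, 5, 1, 12, 30), "lat": 51.51, "lon": -0.13}
print(implausible_velocity(manchester, london_30m))  # flagged
```

As the text notes, a rule this crude is no longer enough on its own: a genuine customer shopping online across several countries in one lunch break would trip it just as easily as a fraudster.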
The challenge is weighing up the value of unnecessarily rejected purchases against the cost of fraud-related losses, and this trade-off makes a strong argument for more sophisticated detection tools that can make sense of buyer behaviour.
Added to this is the problem of ‘friendly fraud’, or ‘chargeback fraud’, in which a customer claims that a mistake or misunderstanding has occurred, that they never took delivery of their order, or that a download was never initiated by them, and demands their money back. Of course, the claim can be genuine; however, estimates that only one in seven unjustified payment challenges is made in error tend to go unchallenged.
Many customers don’t think they’re doing anything wrong when they badmouth their supplier to the bank for their own benefit. Giving false information to get their money back when they’ve made the wrong decision and don’t want the hassle of a return is seen as a trivial offence, equivalent to “borrowing” party clothes from a mail-order company. Everybody’s doing it; therefore it can’t be wrong. Many merchants grudgingly accept an abuse rate and take the view that customers need to be given the benefit of the doubt, but this is costing them money.
Solutions: the changing face of prevention
Whether the crime is ‘friendly’ or not, precautions need to be taken to intercept confidence tricksters at the online checkout wherever possible. Many retailers still try to do this on their own, manually checking orders, but criminals are adept at avoiding attention and deliberately avoid the kind of patterned behaviour that manual checks could pick up.
The technical answer to this challenge is big data. In large amounts of data, modern analytical tools can detect hidden patterns that would never be noticed at the level of an individual retailer. This is where payment service providers, banks and credit card organisations come into their own. They can scan the data stream on a completely different scale and offer fraud prevention as an extra service to their customers.
As in the fight against computer viruses and hacker attacks, the good guys never run out of work, because the bad guys find the next security hole as soon as the previous one is closed. Criminals are always developing new fraud patterns, but machine learning, a method from the toolbox of artificial intelligence (AI), allows the traditional “if-then” rules to be supplemented with new insights and observations that a human would not come up with on their own. First, the self-learning system is ‘trained’ with clean, categorised data, so it can filter out deviating patterns. If recurring patterns emerge that do not correspond to expectations, this means either the habits of consumers are changing or there’s a new attack vector.
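The train-then-flag idea can be illustrated with a deliberately tiny stand-in for the real models: a baseline is learned from clean, categorised order values, and anything that deviates strongly from that baseline is flagged for review. This is a toy statistical sketch (a z-score test), not the machine-learning systems the vendors actually run; all names and thresholds are assumptions.

```python
from statistics import mean, stdev

def train_baseline(clean_amounts):
    """'Train' on clean, categorised data: record the normal range of order values."""
    return {"mean": mean(clean_amounts), "stdev": stdev(clean_amounts)}

def deviates(baseline, amount, z_cut=3.0):
    """Flag an order whose value deviates strongly from the learned pattern."""
    z = abs(amount - baseline["mean"]) / baseline["stdev"]
    return z > z_cut

# Historical, verified-clean order values for one shop segment (illustrative).
baseline = train_baseline([20, 25, 30, 22, 28, 35, 18, 26])
print(deviates(baseline, 24))   # a typical order: not flagged
print(deviates(baseline, 900))  # wildly out of pattern: flagged
```

A real system would learn over many more features than order value, but the shape is the same: expectations are learned from good data, and sustained deviations mean either changing consumer habits or a new attack vector.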
When the AI system is running and suspicious activity is detected, appropriate rules can be applied, for example, recommending a manual check or blocking the customer with a request to contact customer service. The tools used in the live monitoring of transactions also include behavioural analysis with characteristic patterns assigned to customers, merchants, user accounts and devices, for later comparison with actual behaviour in real time.
The profiles can also store information on when and how often addresses or passwords were changed or replacement cards requested. The financial data recorded covers typical shopping behaviour, including times of day and places or whether the customer mainly pays amounts in a certain price range by card.
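A behavioural profile of the kind described above can be compared against a live transaction with a simple signal-collecting check. The profile fields and device identifiers below are hypothetical placeholders; a production system would track far richer data.

```python
# Hypothetical stored profile for one customer: usual shopping hours,
# typical price range, and devices previously seen on the account.
PROFILE = {
    "usual_hours": range(8, 23),         # normally shops 08:00-22:59
    "usual_price_range": (10.0, 150.0),  # typical card amounts
    "known_devices": {"device-abc123"},
}

def risk_signals(tx, profile=PROFILE):
    """Return the ways a live transaction deviates from the behavioural profile."""
    signals = []
    if tx["hour"] not in profile["usual_hours"]:
        signals.append("unusual_time")
    lo, hi = profile["usual_price_range"]
    if not lo <= tx["amount"] <= hi:
        signals.append("unusual_amount")
    if tx["device"] not in profile["known_devices"]:
        signals.append("unknown_device")
    return signals

# A 3 a.m. order for an unusual amount from an unknown device piles up signals.
print(risk_signals({"hour": 3, "amount": 999.0, "device": "device-zzz"}))
```

Several signals at once would trigger the responses described above: a manual check, or a block with a request to contact customer service.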
The best protection for all parties involved, however, would be total customer transparency. The better the system knows the person, the harder it is for a criminal to slip into their skin. The developers of AI-based fraud detectors work with so many factors and data points that they no longer speak of a few branching decision trees, but of entire forests. There are good reasons for this level of detail and complexity. Organized crime is behind much of the fraud on the Internet and, according to experts, the attackers themselves now carry out state-of-the-art data analysis.
There are problems with the idea of the ‘transparent customer’. Comprehensive storage of all personal data, including behavioural and device data, could not be reconciled with the spirit and letter of GDPR. It would also be a reputational and security risk. These problems can be overcome if one separates the analysis from the person. Again, technology provides the fix: the data is anonymised; the person “becomes” a token.
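One common way to turn a person into a token is keyed pseudonymisation: the customer identifier is replaced by a stable, non-reversible token, so the analytics pipeline can still link a person's transactions together without ever storing who they are. The sketch below uses an HMAC for this; the key name and token length are assumptions for illustration.

```python
import hashlib
import hmac

# Hypothetical secret held by the PSP, kept apart from the transaction data.
SECRET_KEY = b"hypothetical-key-held-by-the-psp"

def tokenise(personal_id):
    """Replace a customer identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, personal_id.encode(), hashlib.sha256).hexdigest()[:16]

# The same person always maps to the same token, so behavioural patterns
# remain detectable; without the key, the token cannot be reversed.
t1 = tokenise("alice@example.com")
t2 = tokenise("alice@example.com")
print(t1 == t2)  # stable across transactions
```

Because the key stays with the operator and never travels with the data, a leak of the analysis store exposes tokens rather than identities.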
The real person behind an attack is rarely relevant, since professional criminals conceal their identity. They hide behind unsuspecting victims or invented personas, usually only for a limited time, and blend into the crowd so they can strike when there is a lot going on. But they do use the same computer to impersonate different identities, which is why there is now software that recognises a PC even if the user regularly deletes their cookies, uses the browser in incognito mode and blocks device ‘fingerprinting’.
An incentive to prevent fraud: PSD2
For retailers, arming themselves against unfriendly and friendly fraudsters alike is important within the scope of the PSD2 payment directive. The greater a retailer’s chances of being targeted by criminals, the greater their legal responsibility to prevent it. Anyone who does business with small baskets of goods can spare their customers the trouble of strong authentication, provided the fraud rate stays below 1.3 per thousand. For larger orders the threshold drops to 0.1 or even 0.05 per thousand, depending on the payment method, which is extremely demanding for retailers.
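The per-thousand thresholds described above amount to a simple ratio test. The sketch below illustrates the arithmetic only: the band names are invented for the example, and the thresholds are taken as stated in the text rather than from the directive itself.

```python
# Illustrative per-mille fraud-rate thresholds, as described in the text.
THRESHOLDS_PER_MILLE = {
    "small_basket": 1.3,
    "larger_order": 0.1,
    "strictest": 0.05,
}

def sca_exemption_possible(fraud_count, tx_count, band):
    """Is the fraud rate low enough to spare customers strong authentication?"""
    rate_per_mille = fraud_count / tx_count * 1000
    return rate_per_mille < THRESHOLDS_PER_MILLE[band]

# 9 fraudulent transactions out of 10,000 is 0.9 per thousand:
print(sca_exemption_possible(9, 10_000, "small_basket"))  # under 1.3: exemption possible
print(sca_exemption_possible(9, 10_000, "larger_order"))  # over 0.1: SCA required
```

The gap between 1.3 and 0.05 per thousand is what makes the higher bands so demanding: the same fraud performance that comfortably qualifies for small baskets falls far short for larger orders.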
In conclusion . . .
The digital economy creates opportunities for all — including criminals. The spoils are concentrated in a few large-scale attacks and organised heists a year, but the damage is spread over billions of smaller incidents. These are rarely worth the effort of consistent criminal prosecution, but they represent a significant cost to customers and businesses.
The onus is on retailers, banks and PSPs to get to grips with the problem, and developments in AI and data collection are helping to restrict the fraudsters’ opportunities.
Advances such as biometric authentication via a smartphone offer the opportunity to make payment easier, faster and more secure. For companies that can keep the fraudsters in check there’s a double advantage: a simpler and safer check-out experience for the customer and greater profits.