What it takes to fight financial crime with AI
Fighting financial fraud often feels like an uphill battle – and it is only becoming harder.
A PwC report found that almost half – 47% – of companies surveyed experienced fraud in the past two years. That comes with a huge price tag: a total of $42 billion.
And financial fraud appears to have only gotten worse in the last year and a half. The 2021 Association for Financial Professionals (AFP) Payments Fraud and Control Survey revealed that 65% of the financial professionals who experienced increased payments fraud attributed the uptick to the pandemic.
Companies are investing in technologies to combat financial fraud, but they aren’t always successful. AI and machine learning, in particular, offer many advantages when it comes to fighting fraud, but they won’t work if you don’t have the right groundwork in place first.
Machine learning and AI in the fight against fraud
The use of ML/AI to combat fraud isn’t novel. It’s been around for a while, and these technologies bring a lot of unique strengths to the table.
However, many of the AI and ML solutions being used aren’t living up to the technology’s potential. They do something useful, but most suffer from the same problem: they are fundamentally rules-based systems with some machine learning layered on top to optimise them.
This is a problem because fraudsters quickly change their behaviour in response to how these controls work, while the systems themselves – even with some ML built in – remain relatively static. They are focused on a specific task and are quite good at it, but once the fraudsters realise they are being blocked, they shift their behaviour.
For many solutions, responding to this new modus operandi (MO) requires reconfiguring the entire system – which means the banks are always several steps behind the fraudsters.
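As a purely illustrative sketch (in Python, with hypothetical thresholds not drawn from any real product), this is the kind of hard-coded rule such systems are built around: it works until fraudsters learn the limits, and changing them means reconfiguring and redeploying the system.

```python
# Illustrative only: a static, hand-configured rule of the kind many
# legacy fraud systems are built around. Thresholds are hypothetical.
SUSPICIOUS_AMOUNT = 10_000      # flag single transfers at or above this
MAX_DAILY_TRANSFERS = 5         # flag unusually many transfers in a day

def is_suspicious(amount: float, transfers_today: int) -> bool:
    # Once fraudsters learn these limits, they simply keep amounts just
    # below them; updating the limits means reconfiguring the system.
    return amount >= SUSPICIOUS_AMOUNT or transfers_today > MAX_DAILY_TRANSFERS
```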
Ingredients for success
What’s needed are more adaptive systems that don’t need to be completely reconfigured when the MO changes again.
Instead, they have more discovery and holistic monitoring baked in, so that when the fraudsters turn to a new method, the system can automatically respond to that behaviour rather than starting from scratch. This is a fairly new capability, and it has great promise.
A successful AI/ML-based fraud detection capability has five attributes:
- Data agnosticism: Organisations need to be agnostic in terms of data because no one knows what’s going to happen next year.
- Automation: You need to automate as much as possible. Most older systems are manually configured by experts, but you really need to automate the derivation of information.
- Understanding relationships: Transaction-based systems might look only at the current transaction and a few recent ones, but some types of fraud require looking at the last six months of a customer’s history. That includes relationships – who they have transacted with – and being able not only to track this but to model it (a minimal sketch follows this list).
- Rules running in parallel with ML models: You need the capability to run rules in parallel with machine learning models, under an overall alerting strategy framework that takes input from everything. Ultimate control stays with the business, so you can use different models and different rules at different times while keeping complete control over your fraud strategy. To gain the right agility, you also need to deploy and test new detection models in parallel with your production system, so that evolving the system does not depend on separate offline processes. The more you can do within the live system – standing up a new model to challenge the existing ones – the better (a second sketch follows this list).
- Algorithms that adapt and don’t overfit to past fraud MOs: This enables you to discover new fraud quickly. It requires a blend of supervised, semi-supervised and unsupervised learning models to create a holistic monitoring system.
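To make the “understanding relationships” point concrete, here is a minimal sketch in Python (using pandas) of deriving features from a customer’s last six months of history, including the counterparties they have transacted with. The column names, window and features are illustrative assumptions rather than a description of any particular product.

```python
import pandas as pd

# Hypothetical transaction log: one row per payment.
transactions = pd.DataFrame({
    "customer_id":     ["c1", "c1", "c1", "c2"],
    "counterparty_id": ["m9", "m9", "m3", "m9"],
    "amount":          [120.0, 80.0, 950.0, 40.0],
    "timestamp":       pd.to_datetime([
        "2021-01-05", "2021-03-17", "2021-06-02", "2021-05-20"]),
})

def history_features(df: pd.DataFrame, customer_id: str,
                     as_of: pd.Timestamp) -> dict:
    """Summarise a customer's last six months of activity,
    including who they transacted with."""
    window = df[(df["customer_id"] == customer_id)
                & (df["timestamp"] > as_of - pd.DateOffset(months=6))
                & (df["timestamp"] <= as_of)]
    return {
        "txn_count_6m": len(window),
        "total_amount_6m": window["amount"].sum(),
        "distinct_counterparties_6m": window["counterparty_id"].nunique(),
        # A new counterparty for this customer is often a useful signal.
        "counterparties_6m": set(window["counterparty_id"]),
    }

print(history_features(transactions, "c1", pd.Timestamp("2021-06-30")))
```

In practice, features like these – alongside graph-style features about the counterparty network – would feed the detection models rather than being inspected by hand.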
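And here is a rough sketch of rules running in parallel with ML models under a single alerting strategy, using scikit-learn: a supervised model trained on labelled fraud covers known MOs, an unsupervised anomaly model provides the more holistic monitoring described above, and a hand-written rule runs alongside both. The features, labels and thresholds are made up purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical feature matrix (e.g. amount, velocity, counterparty novelty)
# and stand-in labels from previously confirmed fraud cases.
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 2.5).astype(int)

# Supervised model: learns known fraud MOs from labelled history.
supervised = GradientBoostingClassifier().fit(X_train, y_train)

# Unsupervised model: flags transactions that simply look unusual,
# which helps surface MOs no one has labelled yet.
anomaly = IsolationForest(random_state=0).fit(X_train)

def alert_decision(x: np.ndarray) -> dict:
    """Combine a hand-written rule, a supervised score and an anomaly
    score under one alerting strategy the business controls."""
    rule_hit = x[0] > 3.0            # e.g. an "amount above threshold" rule
    fraud_prob = supervised.predict_proba(x.reshape(1, -1))[0, 1]
    anomaly_score = -anomaly.score_samples(x.reshape(1, -1))[0]
    alert = rule_hit or fraud_prob > 0.8 or anomaly_score > 0.6
    return {"rule_hit": bool(rule_hit),
            "fraud_prob": float(fraud_prob),
            "anomaly_score": float(anomaly_score),
            "alert": bool(alert)}

print(alert_decision(np.array([3.4, 1.2, -0.1])))
```

The point of keeping the pieces separate is that the business can change the strategy – tighten a threshold, retire a rule, promote a challenger model – without rebuilding the whole system.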
Laying the right foundation
It’s important to set goals for the organisation before implementing an ML/AI-based fraud detection solution – in particular, targets for responsiveness: how fast you can adapt when a new fraud MO appears.
For instance, if it currently takes you a year to adapt to a new fraud MO, set a target of being able to do it in a month and then work towards that. Accomplishing these goals requires the technology, but it also demands an evaluation of your current processes to make sure they aren’t hindering your progress. Without these steps, you’ll be playing catch-up endlessly.
Goal-oriented
Fraud costs organisations tens of billions of dollars annually. AI can play a strong part in combating fraud, but organisations need to have the right technical pieces in place to reap the benefits of fraud-detecting AI.
To finally get ahead of the fraudsters, make sure you lay a firm technology foundation. This includes a blend of supervised, semi-supervised and unsupervised learning models and a list of goals you can measure to ensure your AI is serving you well.
About the author
Dr. Stephen Moody is the Chief Innovation Officer at software company Symphony AyasdiAI. He has previously worked with Simility, ThreatMetrix, and BAE Systems.
Stephen holds a Ph.D. in Astrophysics from Cambridge University.