If a financial institution looks beyond the hype of AI and tempers its expectations, it can use AI to deliver measurable business results. That’s been the experience of Garrett Laird, Amount’s director of decision science.
Given the interest in ChatGPT and related tools, the recent buzz around AI is understandable. Like many in fintech, Laird reminds the enthusiastic that AI, in forms such as machine learning, has been around for years; Avant has used machine learning in credit underwriting for at least a decade.
“It’s not a silver bullet,” Laird said. “It does some things really, really well. But it won’t solve all your problems, especially in our space.
“Financial products are highly regulated, right? These new LLMs (large language models) are entirely unexplainable; they’re pretty much true black-box models, so they limit the applications and use cases.”
Why AI is limited, where it isn’t
Laird sees clear use cases in outlier detection and unsupervised learning. He credits the current AI fervor with igniting interest in LLMs. As businesses look for ways to deploy LLMs, they are also looking at other AI types.
Regulations prevent AI from being used everywhere in financial services. Laird cited the many protected classifications that dictate how and where advertisements and solicitations can be sent. If your AI model cannot explain why one customer got an offer while another did not, you’re asking for trouble.
“Machine learning can be used to become more compliant because you can empirically describe why you’re making the decisions you’re making,” Laird said. “When there are humans making decisions… everyone has their implicit biases, and those are hard to measure or even know what they are.
“With algorithms and machine learning, you can empirically understand if a model is biased and in what ways and then you can control for that. While there are many restrictions on one side, I think many things we’re doing with machine learning and AI benefit consumers from a discrimination and compliance perspective.”
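What “empirically understanding” bias can look like in practice is a disparity check run against a model’s decisions. The sketch below is purely illustrative (the data, group labels, and 80% threshold are assumptions, not Amount’s tooling): it compares approval rates across groups using the adverse-impact ratio often referenced in fair-lending reviews.

```python
# Minimal illustrative fairness check (hypothetical data and thresholds, not
# any institution's production tooling): compare model approval rates across
# groups using the "four-fifths" adverse-impact ratio.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Approval rate of each group divided by the best-treated group's rate."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Hypothetical scored applications: 1 = model approved, 0 = declined.
apps = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

ratios = adverse_impact_ratio(apps, "group", "approved")
print(ratios)                   # per-group ratio relative to the best-treated group
print(ratios[ratios < 0.8])     # groups falling below the common 80% rule of thumb
```

Because the check is code, it can be rerun and documented for every model version, which is the compliance advantage Laird describes.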
AI and training models
Laird said how models are trained depends on what they are used for. Fraud models must be updated quickly and often, drawing on third-party sources, historical information, and consumer data.
This is one area where machine learning operations help: they ensure proper validations are completed and prevent models from picking up discriminatory data or information tied to protected classes.
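As a rough illustration of such a validation (the field names and blocklist below are assumptions, not a description of Amount’s pipeline), a pre-training gate might refuse any feature that looks like a protected attribute:

```python
# Illustrative MLOps-style validation gate (assumed field names and blocklist):
# refuse to train or score if any feature name matches a protected attribute.
PROTECTED_ATTRIBUTES = {"race", "ethnicity", "gender", "religion", "age", "marital_status"}

def validate_features(feature_names: list[str]) -> None:
    """Raise if any candidate feature name matches a protected attribute."""
    flagged = {f for f in feature_names
               if any(p in f.lower() for p in PROTECTED_ATTRIBUTES)}
    if flagged:
        raise ValueError(f"Features blocked by compliance check: {sorted(flagged)}")

try:
    validate_features(["income", "loan_amount", "applicant_age"])
except ValueError as err:
    print(err)  # Features blocked by compliance check: ['applicant_age']
```

A name-based gate like this is only a first line of defence; pipelines also need to test for proxy variables that correlate with protected classes, which is where the disparity measurement above comes back in.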
Laird said an industry cliché is that 90% of machine learning work is data preparation. That has two parts: having relevant data, and making it accessible in real time so the model can drive valuable business decisions.
AI’s underhyped role in credit decisioning
While credit provision might not bring the same urgency as fraud, Laird advises considering how it, too, can benefit from AI. Credit models need strong governance and risk management processes, along with good data sets. Lenders also require a thorough understanding of their customers, which, in the case of mortgages, can take years to build.
“Getting access to the right data is a huge challenge, and then making sure it’s the right population,” Laird said. “That’s a trend the industry is moving in: product-specific but also customer-base-specific modelling.
“The direction we’re headed is like the democratization of machine learning for credit underwriting where you have models that are very catered to your very unique situation. That challenges many banks because it takes a lot of human capital. Having it takes a lot of data, and it’s not something you have overnight.”
AI’s role in fraud mitigation depends on the type of fraud
AI lowers the entry barrier for fraudsters by providing sophisticated tools and letting them communicate in better-quality English. Combating them also involves AI, as one of many layers.

However, AI is used differently for different fraud types. First-party fraudsters can slip past identity checks, which mainly introduce friction for legitimate customers.

Third-party fraud poses its own challenges for supervised models, which are based on learnings from previous cases: the characteristics of known fraud are identified, and models are built to recognize them. AI can help surface those patterns quickly.
However, the process is never-ending because systems must quickly adjust as fraudsters determine how to beat mitigation challenges. Laird said he focuses on that by deploying velocity checks.
“We put a lot of mental effort into identifying ways to pick up on these clusters of bad actors,” Laird said. “And there are many ways you can do that. A couple of the interesting ones that we employ are velocity checks. A lot of times, a fraud ring will exhibit similar behaviors. They might be applying from a certain geography, have the same bank they’re applying from, or have similar device data. They might use VOIP, any number of like attributes.”
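A velocity check of the kind Laird describes can be sketched as a rolling count of applications that share a linking attribute. The attribute names, window, and threshold below are illustrative assumptions, not Amount’s production rules:

```python
# Rough sketch of a velocity check (attribute names, window, and threshold are
# assumptions): count recent applications sharing a linking attribute such as a
# bank account, device fingerprint, or geography, and flag clusters.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)
THRESHOLD = 5   # more than this many linked applications in the window is suspicious

# Recent application timestamps, keyed by (attribute, value) pairs seen so far.
recent: dict[tuple[str, str], list[datetime]] = defaultdict(list)

def check_velocity(app: dict, now: datetime) -> list[str]:
    """Return the linking attributes whose rolling count exceeds the threshold."""
    alerts = []
    for attr in ("bank_routing_number", "device_fingerprint", "geo_region"):
        key = (attr, app[attr])
        # Drop timestamps outside the rolling window, then record this application.
        recent[key] = [t for t in recent[key] if now - t <= WINDOW]
        recent[key].append(now)
        if len(recent[key]) > THRESHOLD:
            alerts.append(attr)
    return alerts

# Example: the sixth application sharing a device fingerprint within 24 hours is flagged.
```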
Laird said some institutions also use unsupervised learning. They might not have specific targets, but they can detect patterns using clustering algorithms. If a population starts defaulting or claiming fraud, the algorithms can identify similar behaviors that need further scrutiny.
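As a sketch of that unsupervised approach, a density-based clustering algorithm such as DBSCAN (one common choice, not necessarily what any given institution runs) can surface a tight knot of near-identical applications inside an otherwise diffuse population:

```python
# Sketch of unsupervised clustering on application features (synthetic data;
# DBSCAN is one common algorithm choice, not a specific vendor's method).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic feature vectors, e.g. [loan_amount, stated_income, account_age_days].
population = rng.normal(loc=[10_000, 60_000, 900], scale=[3_000, 15_000, 400], size=(200, 3))
fraud_ring = rng.normal(loc=[25_000, 120_000, 5],  scale=[500, 2_000, 2],      size=(12, 3))
X = StandardScaler().fit_transform(np.vstack([population, fraud_ring]))

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
# The near-identical applications (the last 12 rows) tend to land in their own
# dense cluster, which can then be routed for manual review.
print({int(label): int((labels == label).sum()) for label in set(labels)})
```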
The coming surge in account fraud
Recent financial sector turbulence lends itself to rising deposit-related fraud. If a bank’s defences are sub-par, it could find itself vulnerable to fraud that is already underway.
“That is probably a problem that’s already starting to rear its head and will only get worse,” Laird suggested. “I think with all of the movement in deposits that happened this past spring, with SVB and all the other events, there was a mad rush of deposit opening.
“And with that, two things always happen. There’s an influx of volume. It makes it easier for fraudsters to slip through the cracks. Also, many banks saw that as an opportunity and probably either rushed solutions out or reduced some of their defences. We think there’s probably a lot of dormant, recently opened deposit accounts that are probably in the near future going to be utilized as vehicles for bust-out fraud.”
Emerging trend: Case-specific modelling
Laird returned to case-specific modelling as a significant emerging trend. FICO and Vantage are good, widely used models, but they are generic, covering everything from mortgages to credit cards and personal loans. Casting such a wide net limits accuracy, and given increased competition, more bespoke models are a must.
“I can go on Credit Karma and get 20 offers with two clicks of a button, or I can go to 100 different websites and get an offer without impacting my credit,” Laird observed. “If you’re trying to compete with that, if your pricing is just based on a FICO score or Vantage score, you’re going to get that 700 FICO customer that’s trending towards 650, whereas someone with a more advanced credit model is going to get that 700 that’s trending towards 750.”
Open data’s a modelling goldmine
Laird is eagerly watching developments following the Consumer Financial Protection Bureau’s recent announcement on open banking, which will require financial institutions to make their customers’ banking data available.
That’s a modelling goldmine, Laird said. Financial institutions had an advantage in lending to their own customer bases because only they could access that information. Once the data is shared under open banking, any financial institution can use it to make underwriting decisions. Laird said having good solutions in place is mission-critical.
Fraud, machine learning – Other AI trends
Financial institutions generally take conservative approaches to AI. Most have used generative AI for internal efficiencies rather than direct customer interactions. That time will come, but in limited capacities.

Laird reiterated his excitement about the renewed interest in machine learning. He believes these techniques are well suited to the problems the industry faces.
“I’m excited that there’s that renewed interest in investment and an appetite for starting to leverage AI for fraud,” Laird said. “It’s been there for a while.
“I think the increased focus on credit underwriting is another one that I get really excited about because… with the new open banking regulations coming out, I think financial institutions that don’t embrace it are going to get left behind. They’re going to be adversely selected; they’re not going to be able to remain competitive. It behooves everyone to start thinking about it and understanding ways to leverage that from not just the traditional fraud focuses but increasingly on the credit side.”