Artificial intelligence (AI) can significantly improve the accuracy of loan decisioning, but several factors must be considered to realize that potential. Fortunately, Zest AI chief legal officer Teddy Flo said those safeguards can be applied with some care and consideration.
Flo’s career has focused on policy, legal, and compliance issues. A consumer finance lawyer for much of his career, Flo joined Freddie Mac as the 2008 housing crisis set in.
Zest AI’s thesis is that older ways of evaluating credit are becoming less effective as the economy grows more complex. Those outdated methods also introduce biases that are unacceptable today, given the technology now available to mitigate them and their disproportionate effect on women and people of color.
The importance of locked AI models
Flo said care must be taken before deploying AI. Lenders must use a locked AI model, one that cannot change itself as it absorbs new information. That rules out generative or dynamic AI.
Locked AI systems are updated under very controlled conditions. As they are created, they are fed specific data sets, and their predictions are analyzed for fairness.
Over time, or as market conditions change, the models are paused while new data is incorporated. The output is then tested again to ensure the model is behaving fairly.
“You’re able to continually update the model to take in new data, but you do it in a controlled way,” Flo explained. “And you test it before you begin using it to make decisions for actual consumers.”
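To make the idea concrete, here is a minimal sketch of what such a controlled update gate could look like in code. Everything in it is an illustrative assumption, including the model type, the approval cutoff, and the parity threshold; it is not Zest AI’s actual pipeline.

```python
# Illustrative sketch of a locked-model update gate: a candidate model is
# retrained offline on new data, tested for fairness, and promoted only if
# it passes. The cutoff and threshold values are assumptions, not standards.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

APPROVAL_CUTOFF = 0.5    # predicted repayment probability needed for approval
PARITY_THRESHOLD = 0.8   # minimum acceptable approval-rate ratio across groups

def approval_rate(model, X, mask):
    """Share of a group's applicants whose score clears the cutoff."""
    scores = model.predict_proba(X[mask])[:, 1]
    return (scores >= APPROVAL_CUTOFF).mean()

def passes_fairness_gate(model, X, group):
    """Fail if any group's approval rate falls too far below another's."""
    rates = [approval_rate(model, X, group == g) for g in np.unique(group)]
    return min(rates) / max(rates) >= PARITY_THRESHOLD

def controlled_update(live_model, X_new, y_new, X_test, group_test):
    """Train a candidate offline; the live model only changes if it passes."""
    candidate = GradientBoostingClassifier(random_state=0).fit(X_new, y_new)
    if passes_fairness_gate(candidate, X_test, group_test):
        return candidate      # promote the newly validated, locked model
    return live_model         # keep serving the existing locked model
```

The key property is that the model serving real applicants never learns on the fly; every change passes through the offline fairness test first.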
Clear reasons needed when AI rejects loan applications
Under the Equal Credit Opportunity Act, lenders must provide specific reasons for rejecting an application, even when the decision rests on an advanced algorithm. On Sept. 19, 2023, the Consumer Financial Protection Bureau (CFPB) issued guidance on the legal requirements lenders must meet when using AI and other complex models.
Creditors cannot rely on the CFPB’s sample adverse action forms and checklists if they do not reflect the actual reason for denying the application. Those lists are not exhaustive, nor do they automatically satisfy a creditor’s legal obligations.
“Technology marketed as artificial intelligence is expanding the data used for lending decisions and growing the list of potential reasons why credit is denied,” said CFPB director Rohit Chopra. “Creditors must be able to specifically explain their reasons for denial. There is no special exemption for artificial intelligence.”
The CFPB said many algorithms are trained on data sets that can include information harvested through consumer surveillance. That could lead to rejections for reasons the consumer may not consider relevant to their finances.
“Creditors that simply select the closest factors from the checklist of sample reasons are not in compliance with the law if those reasons do not sufficiently reflect the actual reason for the action taken,” the circular states. “Creditors must disclose the specific reasons, even if consumers may be surprised, upset, or angered to learn their credit applications were being graded on data that may not intuitively relate to their finances.”
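As a contrast with checklist-driven notices, here is a hypothetical sketch of how denial reasons could be derived from the factors a model actually weighed. The feature names, contribution values, and explanation text are all invented for illustration, not drawn from any real lender’s system.

```python
# Hypothetical sketch: generate adverse action reasons from the factors the
# model actually weighed, rather than generic checklist codes. The feature
# names and contribution values below are illustrative, not real model output.

# Signed contributions of each feature to one applicant's decision
# (negative = pushed the score toward denial).
contributions = {
    "months_since_last_delinquency": -0.21,
    "utility_payment_history": -0.14,
    "credit_utilization": -0.09,
    "income_stability": 0.05,
}

# Plain-language explanations tied to each model input.
reason_text = {
    "months_since_last_delinquency": "Recent delinquency on an account",
    "utility_payment_history": "Late or missed utility payments",
    "credit_utilization": "High balances relative to credit limits",
}

# Report the factors that actually drove the denial, most influential first.
adverse = sorted((value, name) for name, value in contributions.items() if value < 0)
for value, feature in adverse:
    print(reason_text.get(feature, feature))
```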
Removing bias from AI-based lending decisions
Given the wide racial gaps in common credit measures, regulators are right to worry about how models might shape lending decisions. According to the Urban Institute, more than half of white households have a FICO score above 700; only 20.6% of Black households do. One in three Black households with credit records has insufficient credit history to generate a score, nearly double the 17.9% rate for white households. Similar disparities exist between white and Hispanic households.
This shouldn’t be happening in a society with the means to erase those faulty rationales. AI can help render them obsolete.
“There may be some difference in credit quality because of the historical racism within America, but it’s not that extreme,” Flo said. “That is an overstated difference.
“What we wanted to do as a company is try to close the gap in a way that is just as accurate at predicting credit risk but treats different groups of people much more fairly and much more similarly. And we’ve been able to do that on average as a company for our financial institution clients.”
Using explainable AI
Given the technology available, there is no reason to rely on standard codes and checklists. Flo said Zest AI uses Shapley-based explainability methods. Shapley values originate in game theory: they fairly distribute the gains and costs of a joint effort among the actors in a coalition, and they are often applied when contributions are unequal.
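Here is a minimal sketch of how Shapley attributions can be computed with the open-source shap package, producing per-applicant contributions like the hypothetical ones shown earlier. The model and synthetic data are stand-ins, not Zest AI’s proprietary tooling.

```python
# Minimal sketch of Shapley-based explainability with the open-source `shap`
# package (pip install shap); model and features are illustrative stand-ins.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["credit_utilization", "income", "account_age"]
X = rng.normal(size=(500, 3))
# Synthetic "repaid" label driven mostly by the first two features.
y = (-X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles: each value
# is a feature's fair share of the gap between this applicant's score and
# the average score.
explainer = shap.TreeExplainer(model)
contribs = explainer.shap_values(X[:1])[0]   # one applicant's attributions

# Features that pushed this applicant toward denial, strongest first --
# these, not generic checklist codes, reflect the decision's actual reasons.
for value, name in sorted(zip(contribs, features)):
    if value < 0:
        print(f"{name}: {value:+.3f}")
```

Because the attributions sum to the gap between this applicant’s score and the average, every point of the decision is accounted for by a named input, which is what makes the approach suitable for adverse action notices.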
“There are players in the market today that will use an AI model to make a decision, but then take reason codes from a credit report and give [them] back to the consumer even though that’s not what the AI model was basing its decision on,” Flo said. “We think that’s wrong; we think that’s harming consumers. And we applaud the CFPB’s efforts to stop it.”
Bias removal efforts are in the early innings
Zest AI published a whitepaper on how to maintain compliance with the CFPB’s guidance. Flo said the agency’s concerns about algorithmic bias are well-founded, given the number of examples of AI models being trained inappropriately.
With AI in its early stages, it’s only natural that efforts to eliminate bias are just as nascent. Flo said Zest AI developed and has updated a patented technique to de-bias credit models. It is an excellent early effort in a movement with a long way to go.
“It’s not always possible to close the gap completely with every underwriting model we built,” Flo admitted. “You could imagine a financial institution situated in a city, for example, with a very affluent white population and a Black population that is not as affluent.
“A simple underwriting model can’t solve the income inequality problem in America. But what it can do is make sure that folks who have comparable incomes or credit histories are approved at similar rates, regardless of their backgrounds, and that is what we do in spades.”
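To show what that claim means as a testable property, here is a hedged sketch of a within-band approval-parity check: applicants with comparable credit profiles should be approved at similar rates regardless of group. The records and the 5% tolerance are assumptions made for illustration.

```python
# Illustrative check of the property Flo describes: within each credit band,
# approval rates across groups should be close. Data and tolerance are
# synthetic assumptions, not a legal standard.
from collections import defaultdict

# (credit_band, group, approved) for a batch of decided applications.
records = [
    ("650-699", "A", True), ("650-699", "A", False), ("650-699", "A", True),
    ("650-699", "B", True), ("650-699", "B", True), ("650-699", "B", False),
    ("700-749", "A", True), ("700-749", "A", True),
    ("700-749", "B", True), ("700-749", "B", True),
]

stats = defaultdict(lambda: [0, 0])          # (band, group) -> [approved, total]
for band, group, ok in records:
    stats[(band, group)][0] += ok
    stats[(band, group)][1] += 1

# Flag any band where the approval-rate gap between groups exceeds 5%.
for band in sorted({b for b, _ in stats}):
    rates = {g: a / t for (b, g), (a, t) in stats.items() if b == band}
    gap = max(rates.values()) - min(rates.values())
    print(band, rates, "OK" if gap <= 0.05 else "GAP")
```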
Don’t over-regulate AI
Flo is bullish on AI’s potential to help people who deserve credit get it when they would not have qualified under older measures. That will change lives for the better.
Throwing that all away over addressable regulatory concerns would be a travesty. People are being helped in tangible ways that keep them out of high-interest debt.
“They’re getting their emergency bills paid,” Flo said. “They’re also not being pulled into a cycle of debt. There are so many levels of benefits that this technology creates that missing those benefits for addressable regulatory concerns would be a travesty for individuals and the economy.
“Bias is real in lending. But AI is the solution to reducing and eliminating that bias, not the problem, because if it is built thoughtfully and intentionally, that’s something that we have access to.”