We all like to think we have our credit-scoring models in check. But do we? When was the last time you reviewed the modeling data and validated the performance? I mean really reviewed it – not just checked the annual review box before audit season?
As it turns out, not keeping an eye on your models – from the data inputs to the decisions they produce – can put you on the road to fair lending risk. And whether you’re a large institution with sophisticated models built in-house or a community bank that relies on models developed by a third party, you can’t escape the risk if you aren’t paying attention and your models erode over time. A change in one part of the business strategy can steer your model away from its intent. Or perhaps your vendor’s update to the model changes the data sources. These are just a couple of scenarios that could send you into rough waters with the regulators.
Then consider a trio of fair lending cases, highlighted earlier this year by the Federal Deposit Insurance Corporation (FDIC), that were referred to the Department of Justice (DOJ) in 2020. One involved a credit-scoring model, developed for the bank by a third party, that contained egregious fair lending violations. The FDIC cited the institution for scoring on several prohibited bases, including age, sex and public assistance income. It leaves you wondering whether the Compliance team was even invited to the table when that model was implemented.
Fast forward to late 2021, and the Consumer Financial Protection Bureau (CFPB) has turned its attention to discriminatory practices among Big Tech data collectors, with regulators calling for Big Tech companies to be held accountable for discriminatory algorithms. Just as financial institutions use scoring models, Big Tech companies apply those same capabilities to the data they collect from our everyday lives – and then sell.
Now we watch as these two worlds collide: banking and Big Tech. Peering into the future of credit-score modeling, we’ll see more and more artificial intelligence leveraging data at a deeper personal level, taking the banking industry far beyond its traditional scoring models. Institutions are already looking to the large tech companies that hold massive amounts of personal behavioral data, with few guardrails on its usage. For now, this data is readily available for the asking – and for the right price.
If you’re wondering how your financial institution might source a consumer’s “behavioral data” and pull it into a model for score decisioning, consider the example of a single woman who searched for baby furniture online. The search engine’s algorithm captured that search. Along came your lending institution, which purchased the data from a Big Tech company. Your organization’s model then flagged this woman as someone who might soon be out on maternity leave, scored her as high risk, and declined the loan or offered unfavorable rates or terms. If you weren’t aware of this data point supplied by the Big Tech company, say hello to a fair lending issue.
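To make that scenario concrete, here’s a minimal sketch of the kind of proxy screen a model review could run before a purchased behavioral flag ever reaches production. It assumes you can join the vendor’s flag to the demographic data you already collect for fair lending monitoring; the flag and column names are hypothetical, not a real vendor schema.

```python
# A minimal sketch of a proxy screen. The "searched_baby_furniture" flag
# and the column names are hypothetical illustrations.
import pandas as pd

def proxy_screen(df: pd.DataFrame, feature: str, protected: str) -> pd.Series:
    """Mean of a candidate feature within each protected-class group.

    A large gap between groups suggests the feature may act as a proxy
    for a prohibited basis and deserves review before entering the model.
    """
    return df.groupby(protected)[feature].mean()

# Hypothetical joined data: a purchased flag alongside applicant sex.
applicants = pd.DataFrame({
    "searched_baby_furniture": [1, 1, 0, 0, 1, 0, 1, 0],
    "sex": ["F", "F", "M", "M", "F", "M", "F", "M"],
})
print(proxy_screen(applicants, "searched_baby_furniture", "sex"))
# sex
# F    1.0
# M    0.0
# Here the flag fires only for women, so scoring on it would effectively
# score on sex, a prohibited basis under ECOA.
```

If a screen like this shows a vendor flag tracking a protected group that closely, the feature is doing the work of a prohibited basis no matter what the vendor calls it.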
If you think this is too far out there, think again. Personally, I’ll never forget a conversation with a coworker who was getting started as a runner. As we chatted after work one evening, he mentioned his frustration with a training setback due to plantar fasciitis. I’d never spoken of that ailment before, but before he walked out of my office – guess what was on my phone? Yes, an ad for running shoes that claim to ease plantar fasciitis. The future is here and we can’t deny it.
So, what is the takeaway here? Institutions must know their scoring models inside and out. A once-a-year surface scan won’t cut it, particularly in today’s heightened, customer-first regulatory environment. Ask yourself the tough questions: What is behind your bank’s algorithms? What makes up each and every data point? Where does the data behind those points actually come from? (And don’t be afraid to ask your vendor if you’re using a third-party model.)
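One way to start answering those questions is a feature inventory that records where every model input comes from and what it was derived from, then screens that lineage against the prohibited bases. The sketch below is a hypothetical illustration – the feature names, sources, and metadata structure are all assumptions, not a standard schema.

```python
# A minimal sketch of a feature-lineage audit. All names are hypothetical;
# the point is that every model input should carry metadata you can screen
# against the ECOA prohibited bases.
PROHIBITED_BASES = {
    "race", "color", "religion", "national_origin", "sex",
    "marital_status", "age", "public_assistance_income",
}

features = [
    {"name": "debt_to_income", "source": "core banking", "derived_from": set()},
    {"name": "shopper_segment", "source": "VendorX data feed",
     "derived_from": {"age", "sex"}},  # vendor segment built on demographics
]

for f in features:
    overlap = f["derived_from"] & PROHIBITED_BASES
    if overlap:
        print(f"REVIEW {f['name']} ({f['source']}): built on {sorted(overlap)}")
    else:
        print(f"OK     {f['name']} ({f['source']})")
```

Even a spreadsheet version of this inventory forces the conversation with your vendor about what each purchased data point actually contains.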
While we’ve talked about the risks of a model’s degradation over time, don’t forget that a sound model begins with validating its function from Day One. It’s not uncommon for a bank to hire a third party to assist in some aspect of onboarding a scoring model. However, your institution must perform its own verification and model review to ensure there aren’t any hidden prohibited bases. A pre-implementation review allows you to assess the decisions the model would make before you launch it; if you identify trends where the outcomes appear skewed, pause and investigate that concern.
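What might that pre-implementation review look like in practice? One common fair lending analytic is to score historical applications with the candidate model and compare outcomes across your monitoring groups. Here’s a minimal sketch; the column names are hypothetical, and the 0.8 threshold is the familiar four-fifths rule of thumb rather than a regulatory bright line.

```python
# A minimal sketch of a pre-launch disparate impact check, assuming you can
# run the candidate model over historical applications and tag each record
# with a fair lending monitoring group. Column names are hypothetical.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str,
                         approved_col: str, reference: str) -> pd.Series:
    """Each group's approval rate divided by the reference group's rate."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates[reference]

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   1],
})
print(adverse_impact_ratio(decisions, "group", "approved", reference="A"))
# group
# A    1.000000
# B    0.666667
# A ratio well below ~0.8 is exactly the kind of skew that should make you
# pause and investigate before launch.
```

None of this replaces a full statistical fair lending analysis, but it catches the obvious problems before the model ever touches a live application.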
Finally, whether you’re launching a new model or assessing an existing one, bring your Compliance team into the process early. Compliance can be your best friend.
At Spinnaker Consulting Group, you’ll find that our Data Analytics experts team up with our Risk Management and Regulatory Compliance experts to provide you with the latest insights and resources to ensure your model leverages the right data and aligns with regulatory expectations. We bring robust front-line experience from our previous banking careers to partner with you in strengthening or testing your model – and giving you the peace of mind that you’re doing right by your customers.