MORTGAGE COMPLIANCE FAQs is part of our COMPLIANCE MATTERS® online series.

LENDERS COMPLIANCE GROUP®

Tuesday, November 18, 2025

AI Credit Score Underwriting

QUESTION 

Thank you for your recent columns on artificial intelligence in mortgage banking. I want to know how to handle credit scores using AI. I am the SVP of Operations at a large wholesale lender. We want to include AI in our underwriting. In particular, we want to use it to evaluate a borrower's creditworthiness. However, our legal department has advised us that there are huge privacy issues. 

We do not want to be dependent on the credit reporting agencies for AI information, nor do we want to outsource AI in our credit score underwriting. The AI evaluation methods we discussed with legal have been shut down due to potential privacy violations. 

What are the privacy risks in using AI to determine a borrower's credit score? 

COMPLIANCE SOLUTION 

AI Policy Program for Mortgage Banking 

A well-constructed AI Policy Program is a proactive framework designed to avoid and mitigate the risks associated with Artificial Intelligence (AI). AI risk management is a key component of the responsible development and use of AI systems. Responsible AI practices help align decisions about AI system design, development, and use with an organization's intended aims and values.

RESPONSE 

The privacy challenges associated with artificial intelligence are enormous, and the risks will only become increasingly difficult to mitigate. In our recently issued AI Policy Program for Mortgage Banking, we sought to provide a comprehensive policy framework for using AI in mortgage banking. Indeed, one of the policies in the Policy Program is titled "Artificial Intelligence Credit Underwriting Policy." 

If you need a policy framework for AI, please request information about our Policy Program. 

AI credit score underwriting is uncharted legal and regulatory territory! 

You will find that most of your legal department's concerns about AI in mortgage lending involve the collection and potential misuse of vast amounts of sensitive personal data, heightened cybersecurity vulnerabilities, and a lack of transparency that can lead to a loss of consumer trust and potential regulatory non-compliance. 

Broadening this out, the privacy risk of AI in credit score underwriting stems from the extensive collection of sensitive, alternative data; the potential for unauthorized access and data breaches; and the difficulty of ensuring transparency and consumer control over how personal information is used. 

Whatever you do, you will need to be in lockstep with your legal advisors. This "territory" is dotted with legal minefields! Let's consider these risks. 

AI models require vast amounts of data, often going beyond traditional financial information to include "alternative data" such as geolocation, social media activity, online behavior, transaction histories, and even biometric data. The sheer volume and sensitive nature of this extensive data collection increase the overall risk to consumer privacy. 

Zero in on that data! It can be collected for one purpose but might be used for other, unforeseen purposes without the consumer's explicit consent. This lack of control over how personal data is processed raises significant privacy issues. From the legal perspective, this amounts to unauthorized use and repurposing. 

The large datasets used to train AI models are attractive targets for cyber attackers. Inadequate security measures or vulnerabilities in third-party vendor systems can lead to data breaches, exposing sensitive personal and financial information and increasing the risk of identity theft or fraud. Data security must be failsafe. 

AI algorithms can analyze seemingly innocuous data to infer highly personal attributes, such as health status, political views, or ethnic origin (a "predictive harm"). From a regulatory perspective, this risk arises from the inference of sensitive information. In other words, the capability to derive sensitive insights can lead to potential discrimination and privacy infringements. 
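To make the point concrete, here is a minimal Python sketch using fabricated toy data; the feature names and the inferred attribute are hypothetical. A model trained only on seemingly innocuous spending signals ends up predicting a sensitive attribute the lender never collected:

```python
# A minimal sketch (fabricated toy data, hypothetical features): showing how
# a model can infer a sensitive attribute it was never given, purely from
# innocuous inputs -- the "predictive harm" of sensitive inference.
from sklearn.tree import DecisionTreeClassifier

# Innocuous inputs: [late-night purchases per week, pharmacy spend share]
X = [[0, 0.02], [1, 0.05], [6, 0.40], [7, 0.35], [0, 0.03], [5, 0.38]]
# Hidden sensitive attribute the lender never asked for (e.g., health status)
y = [0, 0, 1, 1, 0, 1]

clf = DecisionTreeClassifier().fit(X, y)
# The pattern in the "innocuous" data reveals the sensitive attribute anyway.
print(clf.predict([[6, 0.37]]))  # predicts the sensitive class: [1]
```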

Complex AI algorithms can be difficult to explain, even for their developers, creating a "Black Box" in which it is unclear exactly how a specific credit decision was reached. This opacity, or lack of transparency, deprives consumers of an understanding of why they were denied credit and of the ability to exercise their right to an explanation or an appeal. I have written here about the Black Box "model" or "problem". 
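By way of contrast, here is a minimal Python sketch of a transparent, scorecard-style model; the feature names, weights, and cutoff are hypothetical. Because each factor's contribution is visible, the largest negative contributors can be reported as adverse action reasons, which is exactly what a Black Box cannot do:

```python
# A minimal sketch (hypothetical feature names, weights, and cutoff): a
# transparent scorecard whose per-feature contributions can be reported as
# adverse action reasons -- the opposite of a Black Box.

WEIGHTS = {  # points per normalized feature; illustrative only
    "payment_history": 120.0,
    "utilization": -80.0,       # higher utilization lowers the score
    "credit_age_years": 6.0,
    "recent_inquiries": -15.0,
}
BASE_SCORE = 500.0
CUTOFF = 620.0

def score_applicant(features: dict[str, float]) -> tuple[float, list[str]]:
    """Return the score and, if below cutoff, the largest negative contributors."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BASE_SCORE + sum(contributions.values())
    # Rank the factors that hurt the applicant most, for an explanation/appeal.
    reasons = sorted(contributions, key=contributions.get)[:2] if score < CUTOFF else []
    return score, reasons

score, reasons = score_applicant(
    {"payment_history": 0.6, "utilization": 0.9, "credit_age_years": 2, "recent_inquiries": 3}
)
print(f"score={score:.0f}", "adverse action reasons:", reasons)
```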

Do not assume that so-called "anonymized" data effectively mitigates risk. Even when data is "anonymized," AI can sometimes de-anonymize individuals by cross-referencing various data points, compromising individual privacy.
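To see why, consider this minimal Python sketch with fabricated toy records. Joining an "anonymized" dataset to a publicly available list on a few quasi-identifiers is often enough to restore names:

```python
# A minimal sketch (fabricated toy records): re-identifying "anonymized" rows
# by joining on quasi-identifiers (ZIP code, birth year, gender) present in a
# second, publicly available dataset.
import pandas as pd

anonymized = pd.DataFrame({          # names removed, but quasi-identifiers kept
    "zip": ["11550", "11550", "90210"],
    "birth_year": [1980, 1975, 1980],
    "gender": ["F", "M", "F"],
    "credit_score": [612, 745, 688],
})
public = pd.DataFrame({              # e.g., a voter roll or marketing list
    "name": ["A. Smith", "B. Jones"],
    "zip": ["11550", "90210"],
    "birth_year": [1980, 1980],
    "gender": ["F", "F"],
})

# Cross-referencing restores identities that "anonymization" was meant to remove.
reidentified = anonymized.merge(public, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "credit_score"]])
```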

AI has exponentially increased privacy risk through a nefarious new threat: synthetic identity fraud. In this digital scheme, malicious actors combine real personal information from various sources to create realistic, entirely fake identities, which are then used to apply for loans and credit cards, leading to large-scale financial fraud. 

I could advise you to mitigate these concerns by implementing robust cybersecurity measures, adhering to "privacy by design" principles, minimizing data collection to only what is necessary, and conducting regular audits for bias and security vulnerabilities. But you must do much more! 

You need continually updated policies, procedures, and plans; extensive monitoring for potential abuse of borrowers' "digital footprint"; coding that prevents AI from using a consumer's browser, email provider, the date and time of online shopping, and so forth, as a metric for creditworthiness; and careful policy planning around the AI tool to ensure that no underrepresentation occurs, that is, that applicants without a traditional credit history are still scored fairly. 
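On the coding point, here is a minimal Python sketch, with hypothetical field names, of one way to enforce a denylist so that prohibited digital-footprint attributes never reach the underwriting model:

```python
# A minimal sketch (hypothetical field names): enforcing an explicit denylist
# of "digital footprint" attributes before any applicant record reaches the
# model, so prohibited signals can never become a creditworthiness metric.

PROHIBITED_FEATURES = {
    "browser", "email_provider", "shopping_time_of_day",
    "social_media_handle", "device_fingerprint",
}

def sanitize_for_underwriting(record: dict) -> dict:
    """Strip prohibited digital-footprint fields and log what was removed."""
    blocked = PROHIBITED_FEATURES & record.keys()
    if blocked:
        print(f"audit: blocked prohibited features {sorted(blocked)}")
    return {k: v for k, v in record.items() if k not in PROHIBITED_FEATURES}

applicant = {
    "income": 84_000, "dti": 0.31,
    "browser": "Safari", "email_provider": "gmail.com",
}
print(sanitize_for_underwriting(applicant))  # only income and dti survive
```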

In addition, you will need to ensure there is no "algorithmic bias" in the use of AI. AI algorithms are trained on historical data, which can reflect and perpetuate past discriminatory practices. This can result in "digital redlining," where historically marginalized groups are unfairly disadvantaged. Algorithms can use seemingly neutral factors, such as a ZIP code, as proxies for protected characteristics, such as race. Such a model can produce discriminatory outcomes even though it never directly considers explicitly protected data. 
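One common screen for such outcomes, offered here as an illustration rather than as the only method, is a four-fifths-rule style disparate impact check. A minimal Python sketch with fabricated approval counts:

```python
# A minimal sketch (fabricated approval counts): a four-fifths-rule style
# disparate impact check. The adverse impact ratio compares each group's
# approval rate to the highest group's rate; values below 0.8 flag possible
# digital redlining for fair-lending review.

approvals = {   # group label -> (approved, total applications); toy numbers
    "group_a": (480, 600),
    "group_b": (270, 450),
}

rates = {g: ok / total for g, (ok, total) in approvals.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    air = rate / benchmark  # adverse impact ratio
    flag = "REVIEW" if air < 0.8 else "ok"
    print(f"{group}: approval_rate={rate:.2f} AIR={air:.2f} [{flag}]")
```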

A final word of caution. Generative AI tools can produce "hallucinations," which I would label the "risk of hallucinations": confidently delivered but false information that can lead to disastrously bad decisions and legal liability in high-stakes transactions like a mortgage. 

 

This article, AI Credit Score Underwriting, published on November 18, 2025, is authored by Jonathan Foxx, PhD, MBA, the Chairman & Managing Director of Lenders Compliance Group®, the first and only full-service mortgage risk management firm in the United States, specializing exclusively in residential mortgage compliance.