LENDERS COMPLIANCE GROUP®

AARMR | ABA | ACAMS | ALTA | ARMCP | IAPP | IIA | MBA | MERSCORP | MISMO | NAMB

Wednesday, March 25, 2026

Will AI Reduce Fair Lending Violations?

YOUR COMPLIANCE QUESTION 

Our company is building an AI engine to monitor for fair lending violations. The AI system is extensive and includes chatbots. It will be integrated into our LOS and several other systems. We are a large mortgage originator and servicer, and we use one of the most well-known platforms for loan origination and servicing. The platform offers several new AI features, but when we ran our own test against the LOS, we found that our AI engine identifies more fair lending issues than the one embedded in the LOS. 

As the company's General Counsel and Chief Risk Officer, I was shocked that building our own AI system could produce better results than a highly rated, well-established LOS. Granted, our AI system is proprietary and reflects our unique compliance needs. Full disclosure: We have been a client of yours for over 15 years, and we have discussed these and other AI findings with your team in order to mitigate compliance risk. 

I wonder if a one-size-fits-all AI integration in the LOS can really be effective, given that fair lending involves many state and federal regulations. We are testing and monitoring our AI integration, but many companies lack the resources we have and will rely on their LOS provider's results. 

Do you think a generic AI system can reduce fair lending violations? 

Signed, 

Risk Averse 

OUR COMPLIANCE SOLUTION 

AI POLICY PROGRAM FOR MORTGAGE BANKING™ 

Our AI Policy Program aligns with Freddie Mac's AI governance requirements for Freddie Mac Sellers/Servicers. Responsible AI practices can help align AI system design, development, and use with applicable legal and regulatory guidelines. 

Our AI Policy Program consists of the following policies: 

1.      Artificial Intelligence Governance Policy

2.      Artificial Intelligence Use Policy

3.      Artificial Intelligence Workplace Policy

4.      Artificial Intelligence Credit Underwriting Policy

5.      Artificial Intelligence Do & Do Not Policy

6.      Artificial Intelligence Ethics Policy

7.      Artificial Intelligence Vendor Management Policy 

Contact us for the presentation and pricing. 

RESPONSE TO YOUR QUESTION 

Let me begin with my conclusion: there is currently no one-size-fits-all, generic AI system that can be thoroughly relied on to reduce fair lending violations. 

Most companies will rely on origination and servicing platforms that integrate AI into fair lending analytics. Unfortunately, companies are generally liable for AI errors, particularly when AI causes financial losses, safety issues, or provides consumers with false information. Legal responsibility typically falls on the business deploying the technology, even when it has monitored and tested the AI and sought to ensure that it is fit for fair lending detection. 

Legal and Regulatory Risk 

Put another way, your business is responsible for any misinformation provided by your AI chatbots. As you likely know, tort law imposes a duty of care, which requires individuals and entities to act with reasonable care to avoid causing foreseeable harm to others. That duty forms the basis of negligence claims; if it is breached and causes injury, the responsible party may be held liable. 

I have repeatedly said that companies must ensure AI systems are properly trained and monitored to avoid liability for errors caused by biased AI. Although developers may be liable for inherent defects, the business deploying the AI is often responsible for how the system is used. 

If you are going to use AI to detect fair lending violations, you must be able to identify disparate impact patterns across demographic groups, monitor for "redlining" analogs in digital lending, flag outlier decisions that deviate from modeled norms, and generate audit trails for regulatory review. 
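To make the first of those capabilities concrete, here is a minimal sketch of a disparate impact screen using the familiar "four-fifths" adverse impact ratio. The column names, the 0.80 threshold, and the sample data are illustrative assumptions for demonstration, not a regulatory safe harbor.

```python
# Hypothetical sketch: flag disparate impact in approval decisions using the
# "four-fifths" (80%) adverse impact ratio, a common first-pass screen.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "group",
                          outcome_col: str = "approved") -> pd.DataFrame:
    """Compare each group's approval rate to the most-favored group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    benchmark = rates.max()  # most-favored group's approval rate
    report = pd.DataFrame({
        "approval_rate": rates,
        "impact_ratio": rates / benchmark,
    })
    report["flag"] = report["impact_ratio"] < 0.80  # four-fifths screen
    return report

# Illustrative data only -- real monitoring would use demographic fields or
# proxies from HMDA-style records and far larger samples.
loans = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(adverse_impact_ratios(loans))
```

A real monitoring pipeline would add timestamps and model-version identifiers to each run, which is what produces the audit trail regulators expect.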

AI is rapidly transforming the mortgage industry, promising increased efficiency, faster decision-making, and improved risk assessment. Still, its integration poses significant challenges related to fair lending compliance, data bias, and transparency. While AI can expand credit access by utilizing alternative data, it risks perpetuating historical biases if models are trained on biased data or utilize "black box" algorithms that make decisions hard to explain. 

The Regulatory Environment 

A host of regulations is implicated in fair lending compliance. These include not only the Dodd-Frank Act mandates, enforced through the CFPB to police fair lending and prevent predatory practices, but also Acts that predate Dodd-Frank, such as the Equal Credit Opportunity Act (ECOA and Regulation B), the Fair Housing Act (FHA), the Home Mortgage Disclosure Act (Regulation C), and the Community Reinvestment Act (CRA). Together, these regulations cover all aspects of lending, including marketing, applications, underwriting, and servicing. 

Add to the foregoing the Fair Credit Reporting Act, which addresses the accuracy of consumer credit reports; the SAFE Act, which governs mortgage loan originator licensing to reduce fraud; and the Truth in Lending Act (Regulation Z), which requires clear disclosure of loan terms, including the "3/7/3" timing rule, to prevent deceptive lending. 

I have only scratched the surface of the ways in which these requirements bear on AI and fair lending. 

Pluses and Minuses 

There are pluses and minuses. One big plus is that AI can streamline underwriting, reduce operational costs, and identify creditworthy applicants that traditional credit scoring methods might overlook. But one big minus is that AI models can create "digital redlining" or proxy discrimination, where algorithms use seemingly neutral variables (like zip codes) that correlate strongly with protected characteristics (for instance, race or gender) to perpetuate discrimination. 

Regulations like ECOA require creditors to provide reasons for adverse actions. "Black-box" AI systems that cannot explain their decisions create high compliance risks. There is already a discipline devoted to addressing this risk: Explainable AI (XAI). 

Research has shown that even sophisticated Large Language Models can exhibit bias, with studies suggesting that minority applicants might need significantly higher credit scores (perhaps 120 points higher) to achieve the same approval rates as white applicants, although specifically prompting the model to "use no bias" can mitigate this effect. That said, any tendency toward persistent racial bias would need to be continually corrected. 

The CFPB has clarified that there are no exceptions to federal consumer financial protection laws for new technologies and that the use of algorithms that produce discriminatory outcomes violates fair lending laws. In our practice, we have observed that regulators expect lenders to conduct continuous, rather than periodic, testing of AI models for disparities and to seek less discriminatory alternatives (known colloquially as LDAs). Furthermore, fair lending examiners now routinely request model documentation, training data descriptions, and bias testing results. 

It is entirely appropriate for you to be concerned about federal and state fair lending requirements affecting your AI system. While some mortgage industry groups are calling for uniform federal AI legislation to preempt a patchwork of state laws, a 2026 White House framework suggests that, at least for now, lenders should expect to maintain compliance with existing, strict standards. 

Policies and Implementation 

Thus, if a company is going to rely on an AI system, whether embedded or developed, its policies should outline, at a minimum, the following: 

·       Rigorous Data Auditing: Ensure training data is representative and free from historical bias. 

·       Human-in-the-Loop: Use AI to complement rather than replace human underwriting, maintaining accountability, especially for complex cases. 

·       Model Validation: Implement frameworks like the NIST AI Risk Management Framework to evaluate and manage risks associated with third-party and in-house AI tools. 

·       Regular Testing: Actively audit for disparate impact across all stages of the loan funnel, including AI-driven marketing and property valuations (a simple sketch follows this list). 
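As a simple illustration of the Regular Testing bullet, the sketch below runs the same disparity screen at each funnel stage. The stage names and event data are illustrative assumptions; a production version would pull from your pipeline's event log.

```python
# Hypothetical sketch: run a disparity screen at each stage of the loan
# funnel (marketing, underwriting, pricing) rather than at a single point.
import pandas as pd

def stage_disparity(df, stage_col="stage", group_col="group",
                    passed_col="passed"):
    """For each funnel stage, report each group's impact ratio relative to
    the best-performing group at that stage."""
    out = {}
    for stage, sub in df.groupby(stage_col):
        rates = sub.groupby(group_col)[passed_col].mean()
        out[stage] = (rates / rates.max()).round(2).to_dict()
    return out

# Illustrative event data: did each prospect "pass" the stage?
events = pd.DataFrame({
    "stage":  ["marketing"] * 4 + ["underwriting"] * 4 + ["pricing"] * 4,
    "group":  ["A", "A", "B", "B"] * 3,
    "passed": [1, 1, 1, 0,   1, 1, 0, 0,   1, 1, 1, 1],
})
for stage, ratios in stage_disparity(events).items():
    print(stage, ratios)  # impact ratio per group at this stage
```

Testing stage by stage matters because a disparity introduced in marketing can look invisible if you only measure final approval rates.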

Predictable Errors 

The errors you are identifying with your proprietary AI system may not necessarily be picked up by a generic AI program. Still, the areas of error, and therefore of violations and potential liability, that are likely to occur in all AI systems are known – at least to the extent that a known-known is predictable. 

First, proxy discrimination is the central problem. AI models trained on historical data can learn to replicate past discriminatory patterns without ever "seeing" race. For instance, zip code, credit mix, employment type, and device type can all serve as proxies for protected characteristics. Hence, the model is technically race-neutral but functionally discriminatory. 
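One way to surface proxy discrimination is to measure how well a facially neutral feature predicts the protected characteristic itself. The sketch below is a minimal illustration on simulated data; the zip codes, class distribution, and AUC interpretation are assumptions for demonstration only.

```python
# Hypothetical sketch: quantify "proxy power" -- how well a facially neutral
# feature (here, zip code) predicts a protected characteristic. AUC near 0.5
# means little proxy effect; AUC near 1.0 means the model can effectively
# "see" the protected class through the feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, n)  # simulated 0/1 protected-class flag
# Simulated residential segregation: class membership shifts zip likelihood.
zip_code = np.where(protected == 1,
                    rng.choice(["10101", "10102"], n, p=[0.8, 0.2]),
                    rng.choice(["10101", "10102"], n, p=[0.3, 0.7]))

X = OneHotEncoder().fit_transform(zip_code.reshape(-1, 1))
clf = LogisticRegression().fit(X, protected)
auc = roc_auc_score(protected, clf.predict_proba(X)[:, 1])
print(f"proxy AUC for zip code: {auc:.2f}")
```

Features that score high on such a screen deserve either removal or a documented business justification, since "the model never saw race" is not a defense if the proxy did the work.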

Second, under ECOA and the Fair Housing Act, lenders must provide specific, actionable reasons for adverse actions. Any opacity or lack of explainability creates legal and regulatory friction. Many high-performance machine learning models resist the kind of explanation regulators and applicants require. 
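For adverse action explainability, interpretable scorecard approaches remain a common answer. The following sketch shows one familiar pattern – ranking features by "points lost" against a reference profile – with hypothetical feature names, coefficients, and reason language that are not ECOA-approved text.

```python
# Hypothetical sketch: derive adverse action reason codes from an
# interpretable scorecard by comparing each applicant's feature
# contributions against a "well-qualified" reference profile.
import numpy as np

FEATURES = ["credit_utilization", "payment_history", "debt_to_income"]
COEFS = np.array([-2.1, 3.4, -1.8])       # from a fitted logistic scorecard
REFERENCE = np.array([0.30, 0.95, 0.25])  # benchmark applicant profile

REASONS = {  # illustrative wording only, not regulator-approved language
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "payment_history":    "Insufficient history of timely payments",
    "debt_to_income":     "Debt obligations too high relative to income",
}

def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank features by how far they pull this applicant's score below the
    reference profile; the biggest drags become the stated reasons."""
    contribution_gap = COEFS * (applicant - REFERENCE)
    worst = np.argsort(contribution_gap)[:top_n]  # most negative first
    return [REASONS[FEATURES[i]] for i in worst]

print(reason_codes(np.array([0.85, 0.70, 0.55])))
```

The design choice here is deliberate: an inherently interpretable model sacrifices some predictive lift in exchange for reason codes that can survive an examiner's scrutiny.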

Third, there is the ornery issue of feedback loops, which are acutely dangerous in lending. If a model trained on historical approvals learns that certain neighborhoods were rarely approved, it may perpetuate that pattern – creating a self-reinforcing cycle. 
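A toy simulation makes the danger vivid. In the sketch below, the model retrains each cycle only on loans it previously approved, and an initial disparity compounds; all numbers are illustrative assumptions.

```python
# Hypothetical sketch: a feedback loop in which each retraining cycle uses
# only previously approved loans, so neighborhood B's shrinking share of
# approvals makes the next model even less likely to approve there.
share_of_training_data = {"A": 0.6, "B": 0.4}

for cycle in range(1, 6):
    # Approval propensity mirrors (biased) representation in training data.
    approve = dict(share_of_training_data)
    # Next cycle's training set consists of this cycle's approvals only.
    total = sum(approve[k] * share_of_training_data[k] for k in approve)
    share_of_training_data = {
        k: approve[k] * share_of_training_data[k] / total for k in approve
    }
    print(f"cycle {cycle}: B's share of approvals = "
          f"{share_of_training_data['B']:.2%}")
```

Run it and neighborhood B's share collapses within a few cycles, which is why monitoring must include denied and withdrawn applications, not just funded loans.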

Fourth, AI removes intent from the equation, but regulators care about outcomes. Remember, disparate impact without intent can still be illegal. A model that produces statistically worse results for a protected class faces legal exposure regardless of how "objective" the algorithm is. 

Human or AI? 

I want to suggest a conceptual framework for using AI to "manage" fair lending. You must have a human-governed lending program. That's the bottom line! Check out the chart I have created to ensure that AI systems – whether generic or proprietary – comply with foundational fair lending regulations:

 

Role                                          | Human or AI?
----------------------------------------------|------------------------
Policy design & compliance strategy           | Human
Pattern detection & disparate impact testing  | AI-assisted
Individual credit decisions                   | AI with human oversight
Adverse action explanation                    | Human-reviewable AI
Regulatory response & accountability          | Human

Be sure to test for consistency! A well-designed model applies the same criteria to every applicant, eliminating the human variability that has historically produced discriminatory outcomes, such as a loan officer's implicit bias, a branch manager's discretion, and uneven documentation requirements.
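One practical consistency check is matched-pair (counterfactual) testing: submit two applicants identical on every underwriting criterion, differing only in a proxy attribute, and confirm the decision does not change. The sketch below uses a hypothetical model stub; in practice, the call would go to your production scoring service.

```python
# Hypothetical sketch: matched-pair consistency testing. Flip only a proxy
# attribute (zip code) on an otherwise identical applicant and assert the
# decision is unchanged. The model stub and features are illustrative.
def model_decision(applicant: dict) -> str:
    """Stand-in for the deployed model's scoring call."""
    score = 650 + 100 * applicant["payment_history"] - 80 * applicant["dti"]
    return "approve" if score >= 660 else "deny"

base = {"payment_history": 0.9, "dti": 0.35, "zip_code": "10101"}
twin = {**base, "zip_code": "10102"}  # identical applicant, different zip

assert model_decision(base) == model_decision(twin), \
    "Inconsistent decisions for matched pair -- possible proxy effect"
print("matched pair consistent:", model_decision(base))
```

Run at scale across many synthetic pairs, this kind of test documents the very consistency that distinguishes a well-designed model from discretionary human judgment.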

 

This article, Will AI Reduce Fair Lending Violations?, published on March 25, 2026, is authored by Jonathan Foxx, PhD, MBA, the Chairman & Managing Director of Lenders Compliance Group, founded in 2006, the first and only full-service mortgage risk management firm in the United States, specializing exclusively in residential mortgage compliance.