YOUR COMPLIANCE QUESTION
Our company is building an AI engine to monitor for fair lending violations. The AI system is extensive and includes chatbots, and it will be integrated into our loan origination system (LOS) and several other systems. We are a large mortgage originator and servicer, and we use one of the best-known platforms for loan origination and servicing. That platform offers several new AI features, but when we ran our own test against the LOS, we found that our AI engine identifies more fair lending issues than the one embedded in the LOS.
As the company's General Counsel and Chief Risk Officer, I was shocked that building our own AI system could produce better results than a highly rated, well-established LOS. Granted, our AI system is proprietary and reflects our unique compliance needs. Full disclosure: We have been a client of yours for over 15 years, and we have discussed these and other AI findings with your team in order to mitigate compliance risk.
I wonder if a one-size-fits-all AI integration in the LOS can really be effective, given that fair lending involves many state and federal regulations. We are testing and monitoring our AI integration, but many companies lack the resources we have and will rely on their LOS provider's results.
Do you think a generic AI system can reduce fair lending violations?
Signed,
Risk Averse
OUR COMPLIANCE SOLUTION
AI POLICY PROGRAM FOR MORTGAGE BANKING™
Our AI Policy Program aligns with Freddie Mac's AI governance requirements for its Sellers/Servicers. Responsible AI practices can help bring AI system design, development, and use into line with applicable legal and regulatory guidelines.
Our AI Policy Program consists of the following policies:
1. Artificial Intelligence Governance Policy
2. Artificial Intelligence Use Policy
3. Artificial Intelligence Workplace Policy
4. Artificial Intelligence Credit Underwriting Policy
5. Artificial Intelligence Do & Do Not Policy
6. Artificial Intelligence Ethics Policy
7. Artificial Intelligence Vendor Management Policy
Contact us for the presentation and pricing.
RESPONSE TO YOUR QUESTION
Let me begin with my conclusion: there is currently no one-size-fits-all, generic AI system that can be relied on to reduce fair lending violations.
Most companies will rely on origination and servicing platforms that integrate AI into fair lending analytics. Unfortunately, companies are generally liable for AI errors, particularly when the AI causes financial losses or safety issues or provides consumers with false information. Legal responsibility typically falls on the business deploying the technology, even if it properly monitors and tests the AI and ensures that it is fit for fair lending detection.
Legal and Regulatory Risk
Put another way, your business is responsible for any misinformation provided by your AI chatbots. As you likely know, tort law imposes a duty of care that requires individuals and entities to act with reasonable care to avoid causing foreseeable harm to others. That duty forms the basis of negligence claims: if it is breached and the breach causes injury, the responsible party may be held liable.
I have repeatedly said that companies must ensure AI systems are properly trained and monitored to avoid liability for errors caused by biased AI. Although developers may be liable for inherent defects, the business deploying the AI is often responsible for how the system is used.
If you are going to use AI to detect fair lending violations, you must be able to identify disparate impact patterns across demographic groups, monitor for "redlining" analogs in digital lending, flag outlier decisions that deviate from modeled norms, and generate audit trails for regulatory review.
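To make the first of these capabilities concrete, here is a minimal sketch of a disparate impact screen built on the familiar "four-fifths" adverse impact ratio. It assumes, hypothetically, that loan decisions sit in a pandas DataFrame with a binary "approved" column and a "group" column drawn from HMDA-style demographic data; the column names, the reference group, and the 80% threshold are illustrative assumptions, not a regulator-prescribed methodology.

```python
# Minimal sketch: four-fifths adverse impact ratio screen.
# Assumes a hypothetical DataFrame with one row per application,
# a binary "approved" outcome, and a demographic "group" label.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "group",
                          outcome_col: str = "approved",
                          reference_group: str = "reference",
                          threshold: float = 0.8) -> pd.DataFrame:
    """Flag groups whose approval rate falls below `threshold`
    times the reference group's approval rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ref_rate = rates[reference_group]
    result = rates.to_frame("approval_rate")
    result["impact_ratio"] = result["approval_rate"] / ref_rate
    result["flagged"] = result["impact_ratio"] < threshold
    return result

# Example usage with toy data: group_b's ratio is 0.55 / 0.80 = 0.6875,
# which falls below 0.8 and is therefore flagged for review.
df = pd.DataFrame({
    "group":    ["reference"] * 100 + ["group_b"] * 100,
    "approved": [1] * 80 + [0] * 20 + [1] * 55 + [0] * 45,
})
print(adverse_impact_ratios(df))
```

A ratio below the threshold is only a first-pass flag that should trigger deeper statistical and file-level review; it is not, by itself, evidence of a violation.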
AI is rapidly transforming the mortgage industry, promising increased efficiency, faster decision-making, and improved risk assessment. Still, its integration poses significant challenges related to fair lending compliance, data bias, and transparency. While AI can expand credit access by drawing on alternative data, it risks perpetuating historical biases if models are trained on biased data or rely on "black box" algorithms whose decisions are hard to explain.