LENDERS COMPLIANCE GROUP®

AARMR | ABA | ACAMS | ALTA | ARMCP | IAPP | IIA | MBA | MERSCORP | MISMO | NAMB

Artificial Intelligence Disclosure

Thursday, September 4, 2025

QUESTION 

I am the General Counsel and Compliance Officer of a mortgage lender. Our footprint is currently in 35 states. Recently, we have begun to use Artificial Intelligence in our loan origination process. However, I have some concerns about proper consumer disclosure. 

In my view, we should be disclosing our specific use of AI to borrowers. We should disclose the role AI plays in our loan applications from the point of sale to close, and, if applicable, beyond. But I do not find much regulatory guidance to lean on. I would appreciate your views on AI disclosure and, if possible, which areas would be subject to such disclosure. 

Is there a requirement for a mortgage lender to issue an AI consumer disclosure? 

What regulatory areas are potentially impacted by AI, thereby triggering an AI disclosure? 

COMPLIANCE SOLUTIONS 

AI Tune-up® 

Artificial Intelligence Statement  

RESPONSE 

There is currently no broad legal requirement for lenders to disclose the general use of AI in loan applications. However, under existing consumer protection and fair lending laws, lenders are legally required to disclose specific, accurate reasons for adverse actions, such as a loan denial, even if a complex AI or algorithmic system made the decision. 

This transparency is mandated by the Equal Credit Opportunity Act (ECOA), and regulatory bodies like the Consumer Financial Protection Bureau (CFPB) have issued guidance emphasizing that the complexity of AI is not an excuse for failing to provide a clear explanation. 

Regulatory Mandates 

Take, for instance, the regulatory mandates involving adverse action disclosure. The CFPB has directly addressed the issue of "black-box" models, which are AI systems whose logic is not clear even to their developers. The CFPB emphasizes that lenders cannot point to a broad category from a checklist, such as "purchasing history," if a consumer is denied credit based on AI analysis. Instead, the lender must provide specific details, such as the types of goods or places that influenced the decision. 
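
To make the adverse action point concrete, here is a minimal sketch, in Python, of how a lender might translate a model's per-feature contributions into the specific, consumer-readable reasons that ECOA requires, rather than a broad checklist category. The feature names, reason language, and scoring convention are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: turning a model's per-feature contributions into specific,
# consumer-readable adverse action reasons instead of a broad checklist
# category. All names, thresholds, and reason text here are illustrative.

from dataclasses import dataclass

@dataclass
class Contribution:
    feature: str    # internal model feature name
    impact: float   # negative impact lowered the applicant's score

# Hypothetical mapping from internal features to specific reason language.
# A real notice must state the actual principal reasons for the action.
REASON_TEXT = {
    "revolving_utilization": "Proportion of balances to credit limits on revolving accounts is too high",
    "recent_delinquency": "Delinquency on accounts within the past 12 months",
    "short_credit_history": "Length of established credit history is insufficient",
}

def adverse_action_reasons(contributions: list[Contribution], max_reasons: int = 4) -> list[str]:
    """Return the top specific reasons that drove the adverse decision."""
    negatives = sorted(
        (c for c in contributions if c.impact < 0),
        key=lambda c: c.impact,  # most negative (most influential) first
    )
    return [REASON_TEXT.get(c.feature, c.feature) for c in negatives[:max_reasons]]

# Example: two negative drivers yield two specific reasons for the notice.
print(adverse_action_reasons([
    Contribution("revolving_utilization", -0.31),
    Contribution("recent_delinquency", -0.22),
    Contribution("loan_amount", 0.05),
]))
```

The design point is that the notice language traces back to the principal factors that actually drove the model's decision, which is the specificity regulators are asking for.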

Also, there is no "AI exemption." A lender's use of AI or machine learning does not create a special exemption from fair lending laws. The CFPB has made it a priority to ensure that the use of technology does not allow lenders to circumvent established consumer protection regulations, and other regulators, including the Federal Trade Commission, have issued similar warnings. Therefore, undisclosed AI could be found to violate these laws, leading to enforcement actions. 

The Colorado Artificial Intelligence Act 

Some state laws specifically address AI disclosure. For example, the Colorado Artificial Intelligence Act (CAIA) requires developers to test for algorithmic discrimination in consequential decisions, and some state consumer protection statutes allow for prosecution if an AI's biased outcomes cause consumer harm. This is a landmark act in many ways. If you are originating loans in Colorado, you should review the relevant regulations. However, you would do well to conduct a state-by-state review of AI legislation in every state where you are licensed to originate mortgage loans. 

CAIA may be a model for the direction states are going with respect to AI disclosure. The Act defines algorithmic discrimination as unlawful differential treatment that disfavors an individual or group on the basis of protected characteristics. Such discrimination would be caused by high-risk artificial intelligence systems, defined as any system that, when deployed, makes, or is a substantial factor in making, a "consequential decision," generally one involving education, employment, financial services, housing, health care, or legal services. 

Under CAIA, developers must clearly display on their websites an up-to-date disclosure of any high-risk AI systems they have developed, along with a statement of how they manage the known or reasonably foreseeable risks of algorithmic discrimination. Any determination that an AI system has caused, or is reasonably likely to cause, algorithmic discrimination must be brought to the attention of the Colorado Attorney General, among others.

If you are not a developer but are using a high-risk artificial intelligence system, you are not exempt from liability. CAIA imposes various obligations relating to documentation, disclosures, risk analysis and mitigation, governance, and impact assessments for developers and deployers of high-risk AI systems. Importantly, with respect to all AI systems that interact with consumers, deployers must ensure that consumers are aware they are interacting with an AI system. 
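
To illustrate that deployer obligation, the sketch below shows one way a consumer-facing AI touchpoint could guarantee the awareness notice is delivered before any AI-generated response. The class, notice wording, and gating logic are my assumptions for demonstration, not statutory text.

```python
# Minimal sketch of the CAIA deployer duty to make consumers aware they are
# interacting with an AI system. The wording and gating logic are assumptions
# for demonstration, not statutory language.

AI_INTERACTION_NOTICE = (
    "You are interacting with an artificial intelligence system. "
    "You may request that a human representative review any decision."
)

class ConsumerAIChannel:
    """Wraps a consumer-facing AI touchpoint so the notice always runs first."""

    def __init__(self, respond_fn):
        self._respond_fn = respond_fn  # the underlying AI response function
        self._disclosed = False

    def interact(self, consumer_message: str) -> str:
        # The first exchange always delivers the disclosure, never an AI answer.
        if not self._disclosed:
            self._disclosed = True
            return AI_INTERACTION_NOTICE
        return self._respond_fn(consumer_message)

# Usage: the first message a consumer receives is the AI disclosure.
channel = ConsumerAIChannel(lambda msg: f"[AI response to: {msg}]")
print(channel.interact("What rate do I qualify for?"))  # the notice
print(channel.interact("What rate do I qualify for?"))  # the AI response
```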

Voluntary Disclosure 

In the absence of regulatory requirements, I advocate for voluntary AI disclosure. My reasons for taking this position are grounded in ethics and consumer advocacy. 

I believe disclosing the use of AI strengthens consumer trust, because transparency matters most in sensitive areas like finance. Preventing bias and discrimination is essential on legal, regulatory, and ethical grounds, and proactive disclosure can help uncover and mitigate the bias that AI systems trained on flawed historical data can magnify. Transparency also makes it easier to audit models and ensure fair lending practices across all demographic groups. 

And, from the point of view of compliance risk, voluntary disclosure reduces legal and reputational exposure. Lenders who maintain detailed records and build explainable AI systems are less exposed to costly regulatory fines and reputational damage. An AI governance framework can help ensure that a lender's practices align with evolving standards. 

Guidance to Lenders 

For residential mortgage loan originators implementing AI in their loan applications, I believe an AI disclosure should be issued to the consumer. It is a matter of explainability and fairness! 

Here are a few good reasons to issue the AI disclosure: 

Differentiate Internal vs. External Use: Disclosure is more critical when AI is directly involved in making significant decisions that affect customers, such as loan approval, than when it simply serves as an internal tool to improve workflow. 

Provide Meaningful Human Review: Until AI models mature, I believe human reviewers should oversee AI evaluations. A consumer should also have the right to challenge an AI-driven decision and receive a human re-evaluation. 

Develop Robust Internal Governance: Lenders should establish a strong AI governance framework to oversee the entire lifecycle of their AI models. This framework should include tracking and documenting AI decisions and ensuring continuous compliance with regulations; a record-keeping sketch follows this list. 

Risk Mitigation: Proactive disclosure and ethical frameworks may reduce the risk of lawsuits, regulatory fines, and reputational damage that can result from biased or opaque AI systems. 
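
Here is the record-keeping sketch promised above: each AI-assisted decision is captured as an auditable record tying the outcome to the exact model version, the inputs the model saw, the stated reasons, and any human reviewer. The field names and the tamper-evident hash are illustrative assumptions, not a mandated format.

```python
# Minimal sketch of decision-level record keeping inside an AI governance
# framework: each automated evaluation becomes a record an auditor or
# examiner could later review. Field names and format are assumptions.

import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(model_id: str, model_version: str, inputs: dict,
                    outcome: str, reasons: list, human_reviewer=None) -> dict:
    """Build an audit record for a single AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,   # ties the decision to the exact model
        "inputs": inputs,                 # the data the model actually saw
        "outcome": outcome,
        "reasons": reasons,               # specific reasons, per ECOA guidance
        "human_reviewer": human_reviewer, # None signals no human in the loop
    }
    # A content hash helps show the record was not altered after the fact.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

entry = log_ai_decision(
    model_id="underwriting-assist", model_version="2.3.1",
    inputs={"dti": 0.44, "fico": 688}, outcome="refer_to_human",
    reasons=["Debt-to-income ratio exceeds program threshold"],
    human_reviewer="j.smith",
)
print(json.dumps(entry, indent=2))
```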

Although there is no specific regulation in the U.S. that mandates all lenders disclose the use of AI in every loan application, both legal precedent and ethical best practices require transparency, especially regarding automated adverse decisions. 

The level of disclosure depends on how the AI is used in the lending process, such as in credit denials and adverse actions, AI-influenced decisions, direct consumer interaction, and use of non-traditional data. 
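
One way to operationalize that dependence on use case is to codify the lender's own disclosure policy as a simple lookup with a conservative default for anything unclassified. The use-case names and disclosure tiers below are illustrative assumptions, not regulatory categories.

```python
# Minimal sketch: a lender's own disclosure policy codified as a lookup by
# AI use case, with a conservative default. Names and tiers are illustrative.

DISCLOSURE_POLICY = {
    "credit_denial_or_adverse_action": "specific adverse action reasons (ECOA / Regulation B)",
    "ai_influenced_underwriting": "disclose the AI's role and offer human re-evaluation",
    "direct_consumer_interaction": "notify the consumer they are interacting with AI",
    "non_traditional_data": "describe the data sources and how they are used",
    "internal_workflow_tool": "no consumer disclosure; maintain human oversight",
}

def required_disclosure(use_case: str) -> str:
    # Unknown or novel uses default to case-by-case compliance review.
    return DISCLOSURE_POLICY.get(
        use_case, "escalate to compliance for case-by-case determination"
    )

print(required_disclosure("direct_consumer_interaction"))
```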

I think that there could be situations where AI disclosure is not always necessary. For instance, if AI is used purely as an internal tool to assist human professionals, such as to streamline research or analyze data, disclosure is generally not required. But, even then, I would say that human oversight must remain a central part of the process. My distinction here is based on AI functioning as a professional assistant rather than an autonomous decision-maker.

Jonathan Foxx, PhD, MBA, the Chairman & Managing Director of Lenders Compliance Group, authored this article.