LENDERS COMPLIANCE GROUP®


Thursday, December 11, 2025

Shadow AI in Mortgage Banking


QUESTION 

Everyone in our company received a message from management warning us about the use of Shadow AI. Most of us have never heard of Shadow AI. Next week, a company-wide video session is taking place to learn about it. Attendance is mandatory. 

So, I started reading about it. I found that it involves going to websites like ChatGPT. The management notice says that some of us are going online to AI websites and using them to replace our own knowledge and experience. Until further notice, we have been told not to use ChatGPT and other AI websites. 

A few of us got together to find out how this could affect us. We are underwriters, processors, loan officers, and quality control people. It's just a small group. You have written articles on AI and have AI policies. Please tell us how Shadow AI affects mortgage banking.

How does Shadow AI affect mortgage banking? 

Our Compliance Solution 

We recommend our AI Policy Program for Mortgage Banking. 

A well-constructed AI Policy Program is a proactive means designed to avoid and mitigate risks associated with Artificial Intelligence. Responsible AI practices can help align AI system design, development, and use with applicable legal and regulatory guidelines. 

RESPONSE TO YOUR QUESTION 

Shadow AI is not as spooky as it sounds, but it can adversely impact mortgage banking entities. Our AI Policy Program for Mortgage Banking addresses Shadow AI and many other features of artificial intelligence. Keep in mind that the pace of AI development is brisk and somewhat unstable, so updates to policies and procedures will be necessary for the foreseeable future. You should expect to see more alerts, notices, updates, and training. 

If you want to learn more about AI and mortgage banking, consider our recent articles on artificial intelligence. 

Shadow AI 

What is Shadow AI? Essentially, it is the unauthorized use of artificial intelligence tools, apps, or features by employees within an organization, bypassing official IT and security oversight, often for productivity gains. Unfortunately, it introduces significant risks, including data leaks, bias, regulatory non-compliance, and intellectual property loss. 

Shadow AI is distinct from "Shadow IT," though the two concepts are adjacent. Shadow IT arises whenever employees use any technology – software, hardware, cloud services, or apps – without the approval of the company's IT department; Shadow AI is the AI-specific version of that problem. 

COMPONENTS OF SHADOW AI 

Shadow AI refers to the unauthorized use of AI tools such as ChatGPT, Midjourney, Claude, Bard, Microsoft 365 Copilot, Salesforce Einstein, or AI plugins. These tools create vulnerabilities because they have not been vetted against corporate security and data policies. In other words, employees may be using them for various purposes – summarizing documents, drafting emails, generating content, researching questions, and so forth – even though IT has not formally approved their use. 

Sometimes, employees use workarounds by accessing AI tools from their personal accounts. Employees may even use their personal logins for AI services that process company data. 

UNAUTHORIZED USE OF AI 

The potential adverse consequences of unauthorized use of AI tools include data leakage, compliance issues, regulatory and legal problems, improper lending practices, security vulnerabilities, a lack of control and oversight, and inaccurate or biased output. Shadow AI, therefore, can seriously weaken a company's risk profile. 

Assuming the best of intentions in using Shadow AI tools, I can understand an employee wanting to be more efficient, but such use bypasses crucial safeguards, turning productivity tools into major security and governance risks for the business. 

Rather than outright banning Shadow AI tools, most organizations address Shadow AI by establishing clear governance, monitoring usage, and providing secure, approved AI alternatives. The focus is usually on striking a balance between innovation and essential security and compliance standards. 
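The "monitoring usage" piece of that balance can begin very simply. Below is a minimal sketch, in Python, of flagging outbound requests to known generative-AI domains that are not on a company's approved list. The log format, domain list, and approved tool are illustrative assumptions, not a real configuration; a production control would rely on the firm's actual proxy or DLP tooling.

```python
# Hypothetical approved, IT-sanctioned AI endpoint (assumption for illustration).
APPROVED_AI_DOMAINS = {"copilot.company-tenant.example"}

# A few well-known public generative-AI domains (illustrative, not exhaustive).
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "midjourney.com",
}

def flag_shadow_ai(proxy_log_lines):
    """Return (user, domain) pairs for AI traffic that is not approved.

    Each log line is assumed to be a simple 'user,domain' CSV record.
    """
    flagged = []
    for line in proxy_log_lines:
        user, domain = line.strip().split(",", 1)
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            flagged.append((user, domain))
    return flagged

sample = ["jdoe,chat.openai.com", "asmith,copilot.company-tenant.example"]
print(flag_shadow_ai(sample))  # only the unapproved ChatGPT traffic is flagged
```

The point of the sketch is the policy shape, not the code: the firm defines an allowlist, watches for known AI destinations outside it, and routes findings to compliance rather than silently blocking them.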

ADVERSE RISKS OF SHADOW AI 

Depending on the tool used, Shadow AI poses numerous risks to mortgage banking, among which are surely the following, as I have discussed in the articles mentioned above.

Regulatory Non-Compliance 

Examples of regulatory risk include violations of fair lending regulations, ECOA requirements, CFPB rules, mandates of the Fair Housing Act, and other regulatory frameworks. Shadow AI systems often lack the necessary audit trails and transparency to demonstrate compliance. The "black-box" nature of some AI models makes it nearly impossible to explain how a specific lending decision was reached, conflicting with regulatory requirements for explainable decisions, such as adverse action notification requirements – a problem that is compounded when the decision was made outside any AI system approved by IT. 

Data Security and Privacy Risks 

The unauthorized use of AI tools may result in the handling of sensitive customer financial information (such as loan applications and personal data) without proper security controls. This exposes the lender to data breaches, which can result in significant monetary penalties and reputational damage, among other consequences. Furthermore, some generative AI tools may retain conversational data for model training, meaning confidential information shared with a chatbot could potentially be exposed in future interactions with other users. 
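One common technical mitigation for this exposure is redacting obvious identifiers before any text leaves the company. The sketch below uses two illustrative regex patterns (SSNs and email addresses); the patterns, labels, and sample text are my own assumptions, and a production control would use a vetted DLP solution rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only: real DLP coverage (loan numbers, account
# numbers, names, addresses) is far broader than these two examples.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    """Replace matched identifiers with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Borrower SSN 123-45-6789, contact jane@example.com"))
# Borrower SSN [SSN REDACTED], contact [EMAIL REDACTED]
```

Even a rough filter like this illustrates the governance principle: confidential borrower data should be scrubbed, or blocked outright, before it can reach a third-party model that may retain it for training.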

Algorithmic Bias and Discrimination 

AI models are trained on historical data, which may contain patterns of past discrimination. If Shadow AI tools use this data unsupervised, they can perpetuate or even amplify existing biases, leading to discriminatory outcomes against protected groups. Violations can result in legal action and significant fines from regulatory bodies. 
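A first-pass screen for such outcomes is the "four-fifths rule" comparison of approval rates between groups, long used as an initial disparate-impact indicator. The sketch below computes that ratio; the numbers are made up for illustration, and a real fair lending review would involve far more rigorous statistical analysis.

```python
def adverse_impact_ratio(approved_protected, total_protected,
                         approved_control, total_control):
    """Ratio of the protected group's approval rate to the control group's.

    A ratio below 0.8 (the four-fifths threshold) is a common flag for
    further fair lending analysis; it is a screen, not a legal conclusion.
    """
    protected_rate = approved_protected / total_protected
    control_rate = approved_control / total_control
    return protected_rate / control_rate

# Hypothetical figures: 30/100 protected-group approvals vs. 50/100 control.
ratio = adverse_impact_ratio(30, 100, 50, 100)
print(f"ratio = {ratio:.2f}, review flag = {ratio < 0.8}")
# ratio = 0.60, review flag = True
```

The danger with Shadow AI is precisely that no one runs even this elementary check, because compliance does not know the tool is making or influencing decisions.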

Operational Instability and Integration Issues 

Learn from another company's mistakes! We have a client that integrated unapproved AI solutions with its mainframe, creating an unstable backend and introducing security weaknesses that hackers could exploit as a pivot point into the network. The company shut down all Shadow AI activity until it eliminated the exposure. Its IT team determined that poorly documented AI components can behave unpredictably over time, especially when the code is automatically generated and not fully understood by the team. 

CONCLUSION 

In my view, employees should avoid using Shadow AI because it poses significant security, data privacy, intellectual property, and compliance risks to the company, potentially exposing sensitive data and proprietary information and violating regulations. Even though employees often use Shadow AI tools to improve efficiency and productivity, these tools bypass corporate oversight, posing a significant threat to organizational security and data governance. 


This article, Shadow AI in Mortgage Banking, published on December 11, 2025, is authored by Jonathan Foxx, PhD, MBA, the Chairman & Managing Director of Lenders Compliance Group, the first and only full-service, mortgage risk management firm in the United States, specializing exclusively in residential mortgage compliance.