LENDERS COMPLIANCE GROUP®

AARMR | ABA | ACAMS | ALTA | ARMCP | IAPP | IIA | MBA | MERSCORP | MISMO | NAMB


Wednesday, April 1, 2026

AI Replaced Me

YOUR COMPLIANCE QUESTION

Two weeks ago, you wrote an article titled Will AI Replace Me? When I read it, I was still employed. Well, it's two weeks later, and I have been fired and replaced by an AI bot. I am still in shock. I really did not think my job was in jeopardy. Other people in my company were also fired and replaced by AI bots.

 

Yours is the only compliance firm I have come across that explains the positives and negatives of artificial intelligence. I guess, for me, it is a big negative. I have been in the mortgage world for over twenty years. My main positions were in underwriting, processing, and closing. I have looked around for work, and nobody's hiring. I'll bet those positions are now using AI bots.

 

I don't know what to do next. I'm only forty-five. I have limited savings and a small family. I feel like I'm getting squeezed out of the mortgage industry. A group of us met with our company's COO, and she said the company is moving rapidly toward AI across its origination process. So, it looks like I'm heading for a dead end. It feels like I'm being thrown on a trash heap.

 

What is happening with these AI bots? 


Is it Us (the humans) against Them (the AI bots)?

 

Signed,

Jobless

 

OUR COMPLIANCE SOLUTION

AI POLICY PROGRAM FOR MORTGAGE BANKING™  

Our AI Policy Program aligns with Freddie Mac's AI governance requirements for Freddie Mac Sellers/Servicers. Responsible AI practices can help align AI system design, development, and use with applicable legal and regulatory guidelines. 

Our AI Policy Program consists of the following policies:  

1.      Artificial Intelligence Governance Policy

2.      Artificial Intelligence Use Policy

3.      Artificial Intelligence Workplace Policy

4.      Artificial Intelligence Credit Underwriting Policy

5.      Artificial Intelligence Do & Do Not Policy

6.      Artificial Intelligence Ethics Policy

7.      Artificial Intelligence Vendor Management Policy  

Contact us for the presentation and pricing! 

 

RESPONSE TO YOUR QUESTION

 

This is a scary time as the world embarks on this new era of AI technology. Unfortunately, unemployment will increase as AI replaces human workers. The change will not be one-for-one. In some cases, it will be far worse, as one AI bot can replace hundreds of humans on a task, especially in loan processing, underwriting, and other operational roles. I'm going to be brutally honest with you: underwriters are among the more commonly cited "at risk" roles in mortgage banking.

 

WILL AI REPLACE YOU?

 

In the March 19th article you cited, Will AI Replace Me?, the concern expressed came from a loan officer. However, I identified the following AI automations that, as implemented, would adversely affect the need for humans:

·       AI underwriting engines can now complete the entire initial underwriting process autonomously, approving loans days faster than traditional methods. This process is probably the clearest current example of loan origination being removed entirely from human hands. 

·       Unfortunately, loan processors, underwriting assistants, compliance analysts, escrow coordinators, closing personnel, and data entry clerks sit at the intersection I described above, where humans and the machines that mimic them reside. 

In the March 25th article, Will AI Reduce Fair Lending Violations?, I noted, in pertinent part, that "AI can streamline underwriting, reduce operational costs, and identify creditworthy applicants that traditional credit scoring methods might overlook." 

SYSTEMIC CHANGE 

The transition is systemic, not particular to your company, loan products and services, region, or institutional type. From point of sale to securitization, AI is quickly becoming embedded. AI is already doing much of what junior underwriters used to do. And, as you know, Fannie Mae's Desktop Underwriter and similar automated systems have been handling straightforward loan approvals for years. That trend is accelerating due to artificial intelligence.

Wednesday, March 25, 2026

Will AI Reduce Fair Lending Violations?

YOUR COMPLIANCE QUESTION 

Our company is building an AI engine to monitor for fair lending violations. The AI system is extensive and includes chatbots. It will be integrated into our LOS and several other systems. We are a large mortgage originator and servicer. We use one of the most well-known platforms for loan origination and servicing. The system offers several new AI features. But we ran our own test against the LOS and found that our AI engine is identifying more fair lending issues than the one embedded in the LOS. 

As the company's General Counsel and Chief Risk Officer, I was shocked that building our own AI system could produce better results than a highly rated, well-established LOS. Granted, our AI system is proprietary and reflects our unique compliance needs. Full disclosure: We have been a client of yours for over 15 years, and we have discussed these and other AI findings with your team in order to mitigate compliance risk. 

I wonder if a one-size-fits-all AI integration in the LOS can really be effective, given that fair lending involves many state and federal regulations. We are testing and monitoring our AI integration, but many companies lack the resources we have and will rely on their LOS provider's results. 

Do you think a generic AI system can reduce fair lending violations? 

Signed, 

Risk Averse 

OUR COMPLIANCE SOLUTION 

AI POLICY PROGRAM FOR MORTGAGE BANKING™ 

Our AI Policy Program aligns with Freddie Mac's AI governance requirements for Freddie Mac Sellers/Servicers. Responsible AI practices can help align AI system design, development, and use with applicable legal and regulatory guidelines. 

Our AI Policy Program consists of the following policies: 

1.      Artificial Intelligence Governance Policy

2.      Artificial Intelligence Use Policy

3.      Artificial Intelligence Workplace Policy

4.      Artificial Intelligence Credit Underwriting Policy

5.      Artificial Intelligence Do & Do Not Policy

6.      Artificial Intelligence Ethics Policy

7.      Artificial Intelligence Vendor Management Policy 

Contact us for the presentation and pricing! 

RESPONSE TO YOUR QUESTION 

Let me begin with my conclusion: there is currently no one-size-fits-all, generic AI system that can be thoroughly relied on to reduce fair lending violations. 

Most companies will rely on origination and servicing platforms that integrate AI into fair lending analytics. Unfortunately, companies are generally liable for AI errors, particularly when AI causes financial losses or safety issues, or provides consumers with false information. Legal responsibility typically falls on the business deploying the technology, even if it properly monitors, tests, and ensures that the AI is fit for fair lending detection. 

Legal and Regulatory Risk 

Put another way, your business is responsible for any misinformation provided by your AI chatbots. As you likely know, there are certain aspects of tort law, like duty of care, that require individuals and entities to act with reasonable care to avoid causing foreseeable harm to others. It forms the basis of negligence claims; if this duty is breached and causes injury, the responsible party may be held liable. 

I have repeatedly said that companies must ensure AI systems are properly trained and monitored to avoid liability for errors caused by biased AI. Although developers may be liable for inherent defects, the business deploying the AI is often responsible for how the system is used. 

If you are going to use AI to detect fair lending violations, you must be able to identify disparate impact patterns across demographic groups, monitor for "redlining" analogs in digital lending, flag outlier decisions that deviate from modeled norms, and generate audit trails for regulatory review. 
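
To make the first of those capabilities concrete, below is a minimal sketch of a disparate impact screen built on the familiar "four-fifths rule" heuristic. The group labels, data layout, and 0.80 threshold are illustrative assumptions, not a regulator-prescribed methodology, and any production screen would need statistical controls well beyond this.

```python
# Illustrative only: a minimal disparate impact screen using the
# "four-fifths rule" heuristic. Group labels, data layout, and the
# 0.80 threshold are assumptions for demonstration, not a regulatory
# prescription.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def four_fifths_flags(decisions, threshold=0.80):
    """Flag groups whose approval rate falls below the threshold
    (80% by default) of the most-approved group's rate."""
    rates = approval_rates(decisions)
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()
            if rate / benchmark < threshold}

if __name__ == "__main__":
    sample = ([("A", True)] * 80 + [("A", False)] * 20
              + [("B", True)] * 55 + [("B", False)] * 45)
    print(four_fifths_flags(sample))  # {'B': 0.6875} -- below the 0.80 line
```

A real monitoring engine would add statistical significance testing, controls for legitimate credit factors, and the audit trail noted above, but a ratio screen of this kind is the usual starting point.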

AI is rapidly transforming the mortgage industry, promising increased efficiency, faster decision-making, and improved risk assessment. Still, its integration poses significant challenges related to fair lending compliance, data bias, and transparency. While AI can expand credit access by utilizing alternative data, it risks perpetuating historical biases if models are trained on biased data or utilize "black box" algorithms that make decisions hard to explain.

Thursday, March 19, 2026

Will AI Replace Me?

YOUR COMPLIANCE QUESTION 

I have been a loan officer for fifteen years. I am a single mother of two wonderful teenagers. I have also been the breadwinner for 15 years since my husband passed away. I keep reading how AI is going to replace me. 

In the last few weeks, I've read a few articles about how AI is transforming the mortgage world. Part of that transformation looks like I am going to lose my job and be replaced by a computer program. This is so unfair. I have spent all these years building my professional life, and now I feel it is all going to be trashed. 

I heard you speak at a conference recently. You spoke about your new AI Policy Program and answered many audience questions. One of them was about how loan officers, processors, and underwriters are worried about being replaced by AI. I would like you to share your remarks in your newsletter. 

Will AI replace loan officers like me? 

Signed, 

A Human Being 

OUR COMPLIANCE SOLUTION 

AI POLICY PROGRAM FOR MORTGAGE BANKING™ 

Our AI Policy Program aligns with Freddie Mac's AI governance requirements for Freddie Mac Sellers/Servicers. Responsible AI practices can help align AI system design, development, and use with applicable legal and regulatory guidelines.

Our AI Policy Program consists of the following policies: 

1.      Artificial Intelligence Governance Policy

2.      Artificial Intelligence Use Policy

3.      Artificial Intelligence Workplace Policy

4.      Artificial Intelligence Credit Underwriting Policy

5.      Artificial Intelligence Do & Do Not Policy

6.      Artificial Intelligence Ethics Policy

7.      Artificial Intelligence Vendor Management Policy 

Contact us for the presentation and pricing! 

RESPONSE TO YOUR QUESTION 

REVOLUTION AND EVOLUTION 

There have been many technological revolutions in human history. We are now at the advent of another: the onset of artificial intelligence (AI) technology, a massive, incremental, worldwide expansion of knowledge in computer science dedicated to creating systems capable of performing complex tasks that typically require human intelligence. Each revolution has brought profound changes to civilizations, and each has been characterized by surges in technological development. 

From stone-age tools to learning to control fire, from foraging for food to the first agricultural revolution, from replacing bronze with iron, each stage of technical knowledge enabled widespread human development at the cost of some trade-off in the human social experience that had evolved heretofore. The printing press brought about mass production of books, democratizing knowledge and literacy; the scientific revolution shifted knowledge from philosophy to evidence-based insights; new farming techniques led to increased food output, population growth, and urbanization. 

And, of course, we all know of the industrial revolution, where machine-based manufacturing shifted society away from manual labor; it was followed by the technological revolution, in which mass production became deeply entrenched in lived experience through assembly lines, steel-making methodologies, and the application of electricity, internal combustion, and telecommunications. Over the last 100 years, the green revolution has introduced high-yielding crops, industrial fertilizers, and new agricultural technologies, thereby increasing global food production. 

Which Revolution Are We In Now? 

So, where are we now in the scheme of things? 

In my view, we are currently living in the information and digital revolution but rapidly transitioning to the artificial intelligence revolution. The era defined by computers and transistors, the Internet, personal computing, and smartphones is quickly giving way to one characterized by advances such as gene editing, advanced robotics, and nanotechnology.

Wednesday, February 4, 2026

Freddie Mac Deadline: March 3, 2026 – AI Governance Framework

YOUR COMPLIANCE QUESTION 

We are using your AI Policy Program. Upon receipt, we had it reviewed by our AI committee to determine whether it complies with Freddie's requirements for establishing a comprehensive governance framework for AI and Machine Learning. 

I am pleased to report that your AI Policy Program received the committee's approval. It met our checklist based on Freddie's requirements. 

As a Freddie Mac Seller/Servicer, we want to know what the effect would be on us if we had relationship partners that are not in compliance with the AI governance framework.   

What restrictions will Freddie Mac impose on us if our relationship partners do not comply with their AI requirements as of March 3, 2026? 

Signed,

An Anxious Compliance Manager 

OUR COMPLIANCE SOLUTION 

AI POLICY PROGRAM FOR MORTGAGE BANKING 

Our AI Policy Program aligns with Freddie Mac's AI governance requirements for the Freddie Mac Seller/Servicer (or "Lender"). Our well-constructed AI Policy Program is a proactive means designed to avoid and mitigate risks associated with Artificial Intelligence and Machine Learning. Responsible AI practices can help align AI system design, development, and use with applicable legal and regulatory guidelines. 

Our AI Policy Program consists of the following policies: 

1.      Artificial Intelligence Governance Policy

2.      Artificial Intelligence Use Policy

3.      Artificial Intelligence Workplace Policy

4.      Artificial Intelligence Credit Underwriting Policy

5.      Artificial Intelligence Do & Do Not Policy

6.      Artificial Intelligence Ethics Policy

7.      Artificial Intelligence Vendor Management Policy 

Discount offer available until March 3, 2026! 

Contact us for the presentation and pricing. 

OUR RESPONSE TO YOUR QUESTION 

Thank you for using our AI Policy Program. Since its release on October 30, 2025, it has been in considerable demand. 

Our AI Policy Program for Mortgage Banking, which meets Freddie Mac's AI Governance Framework ("AI Framework"), is the first to provide a set of AI policies dedicated to mortgage banking. 

We had been tracking the GSE formulation of AI requirements for several months. 

On March 11, 2025, Freddie released a formal AI/ML governance framework in its Seller/Servicer Guide ("Guide"), introducing a comprehensive AI Framework for Sellers and Servicers that requires formal policies for the use of artificial intelligence ("AI") and machine learning ("ML"). This update mandated that any AI/ML used in the origination or servicing of Freddie Mac-eligible loans be governed by strict policies. 

On December 3, 2025, Freddie issued Bulletin 2025-16, clarifying timelines and expectations and stating, in effect, that AI governance is no longer optional: implementation is a mission-critical, governed enterprise function. 

The compliance effective date is March 3, 2026. 

After considerable review, research, and drafting, we issued our AI Policy Program on October 30, 2025, thirty-four days before Freddie issued Bulletin 2025-16 on December 3, 2025, followed by Bulletin 2025-17 on December 10, 2025. 

On December 10, 2025, Freddie issued Bulletin 2025-17, which introduced revisions to AI Tools relating to servicing, information security, and Seller/Servicer insurance, with most changes effective on March 3, 2026.

In the context of the AI Framework, "AI Tools" are any artificial intelligence or machine learning tools used in the loan lifecycle. 

BULLETINS 

Bulletin 2025-16 solidifies the compliance effective date of March 3, 2026, and requires Lenders to maintain a comprehensive governance framework for AI/ML Tools used in loan origination or servicing. Effective January 1, 2026, Lenders must ensure executive oversight, document AI use cases, ensure fairness, mitigate bias, and manage vendor risk.
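
As a hedged illustration of what "document AI use cases" might look like in practice, here is a minimal sketch of one inventory record. The field names and sample entries are assumptions drawn from the Bulletin's themes, not Freddie Mac-prescribed data elements.

```python
# Hypothetical sketch: one record in an AI/ML use-case inventory of the
# kind a governance framework might call for. Field names echo the
# Bulletin's themes (executive oversight, documented use cases, fairness,
# vendor risk) but are assumptions, not prescribed data elements.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIUseCase:
    name: str                 # e.g., "Income document extraction"
    lifecycle_stage: str      # "origination" or "servicing"
    executive_owner: str      # accountable senior officer
    vendor: Optional[str]     # None if built in-house
    fairness_tested: bool     # bias testing completed and documented
    last_review: str          # ISO date of last governance review

inventory = [
    AIUseCase("Income document extraction", "origination",
              "Chief Credit Officer", "DocVendorCo", True, "2026-01-15"),
    AIUseCase("Servicing chatbot", "servicing",
              "Chief Servicing Officer", None, False, "2025-11-02"),
]

# A simple control: surface entries that lack documented fairness testing.
gaps = [u.name for u in inventory if not u.fairness_tested]
print(gaps)  # ['Servicing chatbot']
```

However an institution structures the inventory, the point is the same: every AI/ML tool in the loan lifecycle should be identifiable, owned, tested, and reviewable on demand.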

Tuesday, November 18, 2025

AI Credit Score Underwriting

QUESTION 

Thank you for your recent columns on artificial intelligence in mortgage banking. I want to know how to handle credit scores using AI. I am the SVP Operations of a large wholesale lender. We want to include AI in our underwriting. In particular, we want to use it to evaluate a borrower's creditworthiness. However, our legal department has advised us that there are huge privacy issues. 

We do not want to be dependent on the credit reporting agencies for AI information. And we do not want to outsource AI in our credit score underwriting. The AI evaluation methods we discussed with legal have been shut down due to potential privacy violations. 

What are the privacy risks in using AI to determine a borrower's credit score? 

COMPLIANCE SOLUTION 

AI Policy Program for Mortgage Banking 

A well-constructed AI Policy Program is a proactive means designed to avoid and mitigate risks associated with Artificial Intelligence (AI). AI risk management is a key component of responsible development and use of AI systems. Responsible AI practices can help align the decisions about AI system design, development, and use with intended aims and values.

RESPONSE 

The privacy challenges associated with artificial intelligence are enormous, and the risks will only become more and more difficult to mitigate. In our recently issued AI Policy Program for Mortgage Banking, we sought to provide a comprehensive policy framework for using AI in mortgage banking. Indeed, one of the policies in the Policy Program is titled "Artificial Intelligence Credit Underwriting Policy." 

If you need a policy framework for AI, please request information about our Policy Program. 

AI credit score underwriting is an uncharted legal and regulatory territory! 

You will find that most of your legal department's concerns about AI in mortgage lending involve the collection and potential misuse of vast amounts of sensitive personal data, heightened cybersecurity vulnerabilities, and a lack of transparency that can lead to a loss of consumer trust and potential regulatory non-compliance. 

Broadening this out, the privacy risk in AI credit score underwriting stems from the extensive collection of sensitive, alternative data; the potential for unauthorized access and data breaches; and the difficulty of ensuring transparency and consumer control over how personal information is used. 

Whatever you do, you will need to be in lockstep with your legal advisors. This "territory" is dotted with legal minefields! Let's consider these risks. 

AI models require vast amounts of data, often going beyond traditional financial information to include "alternative data" such as geolocation, social media activity, online behavior, transaction histories, and even biometric data. The sheer volume and sensitive nature of this extensive data collection increase the overall risk to consumer privacy. 

Zero in on that data! It can be collected for one purpose but might be used for other, unforeseen purposes without the user's explicit consent. This lack of control over how personal data is processed raises significant privacy issues. From the legal perspective, this amounts to unauthorized use and repurposing. 

The large datasets used to train AI models are attractive targets for cyber attackers. Inadequate security measures or vulnerabilities in third-party vendor systems can lead to data breaches, exposing sensitive personal and financial information and increasing the risk of identity theft or fraud. Data security must be failsafe. 

AI algorithms can analyze seemingly innocuous data to infer highly personal attributes, such as health status, political views, or ethnic origin (a "predictive harm"). From a regulatory perspective, this risk arises from the inference of sensitive information. In other words, the capability to derive sensitive insights can lead to potential discrimination and privacy infringements. 

Complex AI algorithms can be difficult to explain, even for their developers, creating a Black Box where it is unclear exactly how a specific credit decision was reached. This opacity deprives consumers of an understanding of why they were denied credit and of the ability to exercise their right to an explanation or an appeal. I have written here about the Black Box "model" or "problem". 

Do not assume that so-called "anonymized" data effectively mitigates risk. Even when data is "anonymized," AI can sometimes de-anonymize individuals by cross-referencing various data points, compromising individual privacy.
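
To illustrate the de-anonymization point, here is a minimal sketch of a linkage attack: joining an "anonymized" file to a public dataset on shared quasi-identifiers. All records and field names below are fabricated for demonstration.

```python
# Illustrative only: how "anonymized" records can be re-identified by
# cross-referencing quasi-identifiers. All records are fabricated.

anonymized = [  # names removed, but quasi-identifiers retained
    {"zip": "11501", "birth_year": 1980, "credit_score": 612},
    {"zip": "11501", "birth_year": 1955, "credit_score": 748},
]

public_records = [  # e.g., a voter file or data-broker list
    {"name": "J. Doe", "zip": "11501", "birth_year": 1980},
    {"name": "R. Roe", "zip": "11501", "birth_year": 1955},
]

# A simple join on (zip, birth_year) re-attaches identities.
index = {(p["zip"], p["birth_year"]): p["name"] for p in public_records}
for row in anonymized:
    key = (row["zip"], row["birth_year"])
    if key in index:
        print(f'{index[key]} re-identified; credit score {row["credit_score"]}')
```

The defense is not merely stripping names but limiting or coarsening quasi-identifiers, which is why privacy review must examine what the combined data can reveal, not just which fields were removed.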

Monday, June 23, 2025

FICO to include BNPL Data

FICO (Fair Isaac Corp) is making updates to its credit score models to incorporate Buy Now, Pay Later (BNPL) data. It announced this update today: FICO Unveils Groundbreaking Credit Scores That Incorporate Buy Now, Pay Later Data. See the outline below for details.

FICO and BNPL provider Affirm released a study in February simulating the impact of BNPL loans on credit scores. It showed that the "majority" of consumers with five or more BNPL loans from Affirm would experience "higher scores or no score changes."*

Key Details of FICO's Updates

New Scores: FICO will be introducing two new credit score models, FICO Score 10 BNPL and FICO Score 10 T BNPL, which will factor in BNPL repayment information.

Launch: These new scores are set to launch in the fall of 2025.

Purpose: FICO's aim is to provide lenders with a more comprehensive understanding of consumers' repayment behavior, especially for those with limited traditional credit history who utilize BNPL services.

Data Aggregation: FICO has developed an approach that aggregates separate BNPL loans when calculating certain variables within its models (a simplified sketch of this idea follows this list).

Potential Impact: Early studies with Affirm, a major BNPL provider, suggest that for many consumers with five or more BNPL loans, the inclusion of this data could lead to higher scores or no change.

Availability: FICO Score 10 BNPL and FICO Score 10 T BNPL will be offered alongside the existing versions of the FICO Score, allowing lenders to evaluate the impact of BNPL data within their current processes. 
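
FICO has not published its exact method, so the following is only a hedged sketch of the general idea: rolling many small BNPL tradelines up into summary variables before scoring. The field names and variables are assumptions for illustration.

```python
# Illustrative only: aggregating separate BNPL loans into summary
# variables of the kind a score model might consume. FICO's actual
# aggregation is proprietary; fields and variables here are assumptions.
from statistics import mean

def aggregate_bnpl(tradelines):
    """tradelines: list of dicts, one per BNPL loan."""
    if not tradelines:
        return {"bnpl_count": 0}
    return {
        "bnpl_count": len(tradelines),
        "bnpl_total_balance": sum(t["balance"] for t in tradelines),
        "bnpl_on_time_rate": mean(t["payments_on_time"] / t["payments_due"]
                                  for t in tradelines),
    }

loans = [
    {"balance": 120.0, "payments_due": 4, "payments_on_time": 4},
    {"balance": 80.0,  "payments_due": 4, "payments_on_time": 3},
]
print(aggregate_bnpl(loans))
# {'bnpl_count': 2, 'bnpl_total_balance': 200.0, 'bnpl_on_time_rate': 0.875}
```

Treating several small loans as one aggregated exposure avoids penalizing a consumer simply for opening many tiny accounts, which appears to be the intuition behind FICO's announcement.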

Why the Update Matters

Financial Inclusion: Incorporating BNPL data could help individuals with limited or no traditional credit history to build a credit profile and potentially access more traditional forms of credit.

Lender Visibility: Lenders will gain better insight into how consumers manage BNPL loans, helping them make more informed lending decisions.

"Phantom Debt": BNPL has sometimes been referred to as "phantom debt" because it was often not reported to credit bureaus, making it difficult for lenders to assess a borrower's overall financial health. FICO's update addresses this blind spot. 

Important Considerations

Lender Adoption: It will take time for lenders to widely adopt these new scores.

Bureau Reporting: The credit bureaus (Experian, Equifax, TransUnion) have begun collecting BNPL data, but the consistency of reporting from BNPL providers is still evolving.

Responsible Use: While BNPL can be helpful for building credit, it's crucial for consumers to be mindful of their usage, avoid overspending, and make timely payments to maintain a healthy credit profile. 

FICO's integration of BNPL data into its credit scoring models is meant to provide a more complete picture of consumer creditworthiness.

For inquiries regarding this update, contact us for professional advice.

____________________________

* FICO and Affirm Unveil Industry-Leading Analysis of ‘Buy Now, Pay Later’ Loans, February 4, 2025, FICO Newsroom, Scoring Solutions

Thursday, January 18, 2024

Artificial Intelligence: Adverse Action Notice

QUESTION 

We have used the model adverse action form for years; it is built into our LOS. A question arose when our system entered a reason other than those on the model form because the checklist did not accurately reflect the basis for the adverse action. 

This happened because we are using artificial intelligence in our credit models. I head underwriting and credit operations and serve on the AI committee. Our decision to use AI did not contemplate that AI would produce an adverse action reason outside the model form’s checklist. 

Before making changes to our LOS or revising our policies, we want to find out if we must rely on the checklist of reasons for adverse action in Regulation B. 

Is it acceptable to use an adverse action reason that is not available in the model adverse action notice? 

How does artificial intelligence affect the accuracy required by Regulation B’s adverse action notice? 

ANSWER 

Creditors may not rely on the checklist of reasons provided in the sample forms (codified in Regulation B) to satisfy their obligations under the Equal Credit Opportunity Act (ECOA) if those reasons do not specifically and accurately indicate the principal reason(s) for the adverse action. Indeed, as a general matter, creditors should not rely on overly broad reasons to the extent that they obscure the specific and accurate reasons relied upon. 

The ECOA, implemented by Regulation B, makes it unlawful for any creditor to discriminate against any applicant with respect to any aspect of a credit transaction based on race, color, religion, national origin, sex (including sexual orientation and gender identity), marital status, age (provided the applicant has the capacity to contract) or because all or part of the applicant’s income derives from any public assistance program, or because the applicant has in good faith exercised any right under the Consumer Credit Protection Act.[i]  

When taking adverse action against an applicant, ECOA and Regulation B require that a creditor provide the applicant with a statement of reasons for the action.[ii] This statement of reasons must be “specific” and indicate the “principal reason(s) for the adverse action.”[iii] Furthermore, the specific reasons disclosed must “relate to and accurately describe the factors actually considered or scored by a creditor.”[iv]  

Adverse action notice requirements promote fairness and equal opportunity for consumers engaged in credit transactions: by requiring creditors to explain their decisions affirmatively, they serve as a tool to prevent and identify discrimination. 

Additionally, adverse action notices are supposed to provide consumers with an educational tool that allows them to understand the reasons for a creditor’s action and take steps to improve their credit status or rectify mistakes made by creditors. 

Indeed, the CFPB does provide sample forms that creditors may use to satisfy their adverse action notification requirements, if appropriate. And these forms include a “checklist” of sample reasons for adverse action, which “creditors most commonly consider.”[v] But, note, there are open-ended fields for creditors to provide other reasons not listed. 

Creditors use the sample forms to satisfy certain adverse action notice requirements under ECOA and the Fair Credit Reporting Act (FCRA),[vi] though the statutory obligations under each remain distinct.[vii] While the sample forms provide examples of commonly considered reasons for taking adverse action, “[t]he sample forms are illustrative and may not be appropriate for all creditors.”[viii]  

So, be aware, reliance on the checklist of reasons provided in the sample forms will satisfy a creditor’s adverse action notification requirements only if the reasons disclosed are specific and indicate the principal reason(s) for the adverse action taken. 

Now, concerning your question about artificial intelligence. 

Some creditors use complex algorithms involving “artificial intelligence” and other predictive decision-making technologies in their underwriting models. The CFPB has previously issued guidance affirming that creditors are not excused from their adverse action notice obligations under ECOA simply because they rely on complex algorithmic underwriting models in making credit decisions.[ix] 

These complex algorithms sometimes rely on data harvested from consumer surveillance or data not typically found in a consumer’s credit file or application. The CFPB has underscored the harm that can result from consumer surveillance and the risk these data may pose to consumers.[x] 

Some of these data may not intuitively relate to the likelihood that a consumer will repay a loan. Consequently, the Bureau and the prudential regulators have previously noted that these data may create additional consumer protection risks.[xi] For instance, adverse action notice requirements under ECOA and Regulation B ensure that financial institutions use the data and advanced technologies in a way that fully complies with other legal requirements, such as the prohibition against illegal discrimination.[xii] 

So, it is essential to understand that the CFPB, the Department of Justice, and other enforcement agencies have pledged to use their collective authorities to protect individual rights regardless of whether legal violations occur through traditional means or advanced technologies.[xiii] 

Under ECOA and Regulation B, a creditor must provide an applicant with a statement of specific reason(s) for an adverse action. These reasons must “relate to and accurately describe the factors actually considered or scored by a creditor.”[xiv] Thus, a creditor may not rely solely on the unmodified checklist of reasons in the sample forms provided by the CFPB if the reasons provided on the sample forms do not reflect the principal reason(s) for the adverse action. As explained in Regulation B,

 

“[i]f the reasons listed on the forms are not the factors actually used, a creditor will not satisfy the notice requirement by simply checking the closest identifiable factor listed.”[xv]  

Rather, the sample forms merely provide an illustrative and non-exclusive list.[xvi] If the principal reason(s) a creditor actually relies on is not accurately reflected in the checklist of reasons in the sample forms, it is the creditor’s responsibility – if it chooses to use the sample forms – either to modify the form or check “other” and include the appropriate explanation, thereby ensuring that the applicant against whom adverse action is taken receives a statement of reasons that is specific and indicates the principal reason(s) for the action taken. 

Let me be clear: creditors that simply select the closest, but nevertheless inaccurate, identifiable factors from the checklist of sample reasons are not complying with the law. Creditors may not evade this requirement, even if the factors considered or scored by the creditor may surprise consumers – as certainly can happen when a creditor relies on complex algorithms using data not typically found in a consumer’s credit file or credit application. 

Because it is unlawful for a creditor to fail to provide a statement of specific reasons for the action taken,[xvii] a creditor will not be complying with the law by disclosing reasons that are overly broad, vague, or otherwise fail to inform the applicant of the specific and principal reason(s) for an adverse action. Just as an accurate description of the factors actually considered or scored by a creditor is critical to ensuring compliant adverse action notifications, sufficient specificity is also required. Such specificity is necessary to ensure consumer understanding is not hindered by explanations that obfuscate the principal reason(s) for the adverse action taken. 

Specificity with respect to artificial intelligence is a critical regulatory concern. To be sure, specificity is particularly important when creditors utilize complex algorithms. Consumers may not anticipate that certain data gathered outside their application or credit file and fed into an algorithmic decision-making model may be a principal reason for reaching a credit decision, particularly if the data are not intuitively related to their finances or financial capacity. 

A creditor must “disclose the actual reasons for denial . . . even if the relationship of that factor to predicting creditworthiness may not be clear to the applicant.”[xviii] So, for instance, if a complex algorithm results in a denial of a credit application due to an applicant’s chosen profession, a statement that the applicant had “insufficient projected income” or “income insufficient for amount of credit requested” would likely fail to meet the creditor’s legal obligations. That would be the case even if the creditor believed that the reason for the adverse action was broadly related to future income or earning potential; providing such a reason likely would not satisfy its duty to provide the specific reason(s) for adverse action. 
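
As a purely hypothetical sketch (not a compliance-approved method), the fragment below ranks a model's per-applicant adverse feature contributions and maps the top contributors to specific reason statements, falling back to an explicit "Other" description when no checklist entry accurately fits. Feature names, contribution values, and reason text are all illustrative assumptions.

```python
# Hypothetical sketch: deriving principal adverse action reasons from a
# model's per-applicant feature contributions. The checklist mapping,
# feature names, and reason text are illustrative assumptions, not
# CFPB-approved language.

CHECKLIST = {  # checklist reasons that accurately map to model factors
    "dti_ratio": "Income insufficient for amount of credit requested",
    "delinquency_count": "Delinquent past or present credit obligations",
}

def principal_reasons(contributions, top_n=2):
    """contributions: {feature: adverse contribution to the decision}.
    Return specific reason statements for the top_n contributors."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    reasons = []
    for feature, _ in ranked[:top_n]:
        if feature in CHECKLIST:
            reasons.append(CHECKLIST[feature])
        else:
            # No accurate checklist entry: disclose the actual factor.
            reasons.append(f"Other: decision influenced by '{feature}'")
    return reasons

scores = {"occupation_category": 0.41, "dti_ratio": 0.17,
          "delinquency_count": 0.05}
print(principal_reasons(scores))
# ["Other: decision influenced by 'occupation_category'",
#  'Income insufficient for amount of credit requested']
```

The point, reflected in the else branch, is that the closest checklist entry may never substitute for the factor the model actually scored.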

I hope you are now getting a sense of how artificial intelligence impacts your credit decisioning and, by extension, the specificity required by the adverse action notice. Concerns regarding specificity may also arise when creditors take adverse action against consumers with existing credit lines. 

An example appears in an FTC complaint,[xix] where a creditor decides to lower the limit on, or close altogether, a consumer’s credit line based on behavioral data, such as the type of establishment at which a consumer shops or the type of goods purchased. In this instance, it would likely be insufficient for the creditor to simply state “purchasing history” or “disfavored business patronage” as the principal reason for the adverse action. Instead, the creditor would likely need to disclose more specific details about the consumer’s purchasing history or patronage that led to the reduction or closure, such as the type of establishment, the location of the business, the type of goods purchased, or other relevant considerations, as appropriate.[xx]

 The CFPB has determined[xxi] that the requirements under ECOA extend to adverse actions taken in connection with existing credit accounts (i.e., an account termination or an unfavorable change in the terms of an account that does not affect all or substantially all of a class of the creditor’s accounts), as well as new credit applications. However, such factors in a credit model may be improper for other reasons, including that using such factors may violate ECOA or other laws if they constitute unlawful discrimination on a prohibited basis. 

The Bureau has also clarified that adverse action notice requirements apply equally to all credit decisions, regardless of whether the technology used to make them involves complex or “black-box” algorithmic models or other technology that creditors may not understand sufficiently to meet their legal obligations.[xxii] As data use and credit models continue to evolve, creditors must ensure that these models comply with existing consumer protection laws. 

Jonathan Foxx, PhD., MBA

Chairman & Managing Director 
Lenders Compliance Group


[i] 15 USC 1691(a)

[ii] 15 USC 1691(d)(2); 12 CFR 1002.9(a)(2)(i); see also 12 CFR 1002.9(a)(2)(ii), which allows creditors the option of providing notice or, following certain requirements, to inform consumers of how to obtain such notice.

[iii] 15 USC 1691(d)(3); 12 CFR 1002.9(b)(2). See also Adverse action notification requirements and the proper use of the CFPB’s sample forms provided in Regulation B, Circular 2023-03, September 19, 2023, Consumer Financial Protection Bureau 

[iv] 12 CFR Part 1002 (Supp. I), § 1002.9, para. 9(b)(2)-2

[v] 12 CFR Part 1002, (App. C), Comment 3

[vi] Like ECOA, FCRA also includes adverse action notification requirements. See 15 USC 1681m(a)(2). 15 USC 1681g(f)(1)(C); see also 1681g(f)(2)(B). 

[vii] See 12 CFR Part 1002 (Supp. I), § 1002.9, para. 9(b)(2)-9

[viii] 12 CFR Part 1002 (App. C), Comment 3

[ix] Adverse action notification requirements in connection with credit decisions based on complex algorithms, Circular 2022-03, May 26, 2022, Consumer Financial Protection Bureau

[x] Idem

[xi] Interagency Statement on the Use of Alternative Data in Credit Underwriting, at 2, Board of Governors of the Federal Reserve System, Consumer Financial Protection Bureau, Federal Deposit Insurance Corp, National Credit Union Administration, and Office of the Comptroller of the Currency.

[xii] Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, at 3 (April 23, 2023), Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission, and the Federal Trade Commission.

[xiii] Ibid. at 3

[xiv] Op. cit. iv

[xv] 12 CFR Part 1002 (App. C), Comment 4

[xvi] Op. cit. viii

[xvii] Op. cit. ii

[xviii] 12 CFR Part 1002 (Supp. I), § 1002.9, para. 9(b)(2)-4

[xix] FTC v. CompuCredit, Complaint, No. 1:08-cv-1976-BBM-RGV, 34-35 (N.D. Ga. filed June 10, 2008)

[xx] 12 CFR 1002.2(c)

[xxi] Revocations or Unfavorable Changes to the Terms of Existing Credit Arrangements, 87 FR 30097 (May 18, 2022), Consumer Financial Protection Bureau. See also Credit Card Line Decreases, (June 29, 2022), Consumer Financial Protection Bureau.

[xxii] Op. cit. ix

Thursday, November 2, 2023

Reconsideration of Value and Appraisal Independence

QUESTION 

We are a large wholesale lender. I am a senior underwriter. Every week, we get requests from our broker partners to have properties reappraised. When the appraisal comes back below what they need, they complain to the Account Executives, who then request that we ask for an appraisal re-evaluation.   

Whether we use an AMC or a staff appraiser, we go through a set of procedures to request a second appraisal review to get a valuation closer to the broker’s expectations. It doesn’t always work out, but sometimes we find deficiencies in the original appraisal report, which, if adjusted for, can change the valuation. 

We have a Reconsideration of Value policy and procedure for this process. Our problem is that the new compliance officer is taking the position that this process interferes with appraisal independence. I would like to know if appraisal independence is compromised by requesting a re-evaluation. 

Does Reconsideration of Value compromise appraisal independence? 

Are there procedures we can implement to avoid compromising appraisal independence? 

ANSWER 

There are risks associated with deficient residential real estate valuations. However, financial institutions may incorporate Reconsideration of Value (“ROV”) processes and controls into established risk management functions.[i] The risk lies not only in collateral valuation models but also in discrimination impacting residential real estate valuations. 

One problem in providing guidance to you is that no existing requirements are specific to ROV processes. For purposes of this article, I will define an ROV as a request from the financial institution to the appraiser or other preparer of the valuation report to re-assess the report based upon potential deficiencies or other information that may affect the value conclusion. There is some uncertainty in the industry on how ROVs intersect with appraisal independence requirements and compliance with Federal consumer protection laws, including those related to nondiscrimination. 

Collateral valuations may be deficient due to prohibited discrimination; errors or omissions; or valuation methods, assumptions, data sources, or conclusions that are otherwise unreasonable, unsupported, unrealistic, or inappropriate. The concern is that deficient collateral valuations can keep individuals, families, and neighborhoods from building wealth through homeownership: they can prevent homeowners from accessing accumulated equity, prevent prospective buyers from purchasing homes, make it harder for homeowners to sell or refinance, and increase the risk of default. 

Up front, it should be understood that valuations that are not credible may pose risks to a financial institution's financial condition and operations. Such risks may include loan losses, violations of law, fines, civil monetary penalties, payment of damages, and civil litigation. 

Regulatory Framework

There are several regulatory frameworks that, taken together, form the basis for ROV activities. For instance, the Equal Credit Opportunity Act (ECOA), and its implementing regulation, Regulation B, prohibit discrimination in any aspect of a credit transaction. The Fair Housing Act (FH Act) and its implementing regulation prohibit discrimination in all aspects of residential real estate-related transactions. ECOA and the FH Act prohibit discrimination based on race and certain other characteristics in residential real estate-related transactions, including in real estate valuations. 

In addition, section 5 of the Federal Trade Commission Act prohibits unfair or deceptive acts or practices, and the Consumer Financial Protection Act prohibits any covered person or service provider of a covered person from engaging in any unfair, deceptive, or abusive act or practice. 

The Truth in Lending Act (TILA) and its implementing regulation, Regulation Z, establish certain federal appraisal independence requirements. Specifically, TILA and Regulation Z prohibit compensation, coercion, extortion, bribery, or other efforts that may impede the appraiser’s independent valuation in connection with any covered transaction. However, Regulation Z also explicitly clarifies that it is permissible for covered persons to, among other things, request the valuation preparer to consider additional, appropriate property information, including information about comparable properties, or to correct errors in the valuation. 

The appraisal regulations implementing Title XI of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 require all appraisals conducted in connection with federally related transactions to conform with the Uniform Standards of Professional Appraisal Practice (USPAP), which requires compliance with all applicable laws and regulations including nondiscrimination requirements. 

Applicable appraisal regulations also require appraisals to be subject to appropriate review for compliance with USPAP. Financial institutions generally conduct an independent review prior to providing the consumer a copy of the appraisal or evaluation; however, an additional review may be warranted if the consumer provides information that could affect the value conclusion or if deficiencies are identified in the original appraisal. 

An appraisal does not comply with USPAP if it relies on a prohibited basis set forth in either the ECOA or the FH Act or contains material errors, including errors of omission or commission. If a financial institution determines through the appraisal review process, or after consideration of information later provided by the consumer, that the appraisal does not meet the minimum standards outlined in the appraisal regulations and if the deficiencies remain uncorrected, the appraisal cannot be used as part of the credit decision. 

Interagency Guidance

The Federal Reserve Board, FDIC, NCUA, and OCC have issued interagency guidance describing actions that financial institutions may take to resolve valuation deficiencies. These actions include the following:

  • resolving the deficiencies with the appraiser or preparer of the valuation report; 
  • requesting a valuation review by an independent, qualified, and competent state-certified or licensed appraiser; or
  • obtaining a second appraisal or evaluation. 

Deficiencies may be identified through the financial institution’s valuation review or consumer-provided information. The regulatory framework does permit financial institutions to implement ROV policies, procedures, and control systems that allow consumers to provide, and the financial institution to review, relevant information that may not have been considered during the appraisal or evaluation process.

Appraisers and Third Parties 

You mentioned the use of AMCs. You must know that a financial institution’s use of third parties in the valuation review process does not diminish its responsibility to comply with applicable laws and regulations. Moreover, whether valuation review activities and resolving deficiencies are performed internally or via a third party, financial institutions supervised by the Board, FDIC, NCUA, and the OCC are required to operate safely and soundly and in compliance with applicable laws and regulations, including those designed to protect consumers. 

In addition, the CFPB expects financial institutions to oversee their business relationships with service providers in a manner that ensures compliance with Federal consumer protection laws, which are designed to protect the interests of consumers and avoid consumer harm. A financial institution’s risk management practices include managing the risks arising from its third-party valuations and valuation review functions. 

Now to turn to Reconsideration of Value itself in the loan flow process. 

Reconsideration of Value

An ROV request by the financial institution to the appraiser or other preparer of the valuation report encompasses a request to reassess the appraisal report based on deficiencies or information that may affect the value conclusion. A financial institution may initiate a request for an ROV because of the financial institution’s valuation review activities or after consideration of information received from a consumer through a complaint or appeal to the loan officer or other lender representative. 

A consumer inquiry or complaint regarding a valuation would generally occur after the financial institution has conducted its initial appraisal or evaluation review and resolved any issues identified. Given this timing, a consumer may provide specific and verifiable information that may not have been available or considered when the initial valuation and review were performed. Regardless of how the request for an ROV is initiated, a request could be resolved through a financial institution’s independent valuation review or other processes to ensure credible appraisals and evaluations. 

An ROV request may include consideration of comparable properties not previously identified, property characteristics, or other information about the property that may have been incorrectly reported or not previously considered, which may affect the value conclusion. To resolve deficiencies, including those related to potential discrimination, financial institutions can communicate relevant information to the original valuation preparer and, when appropriate, request an ROV. 

Complaint Resolution

At the core of the complaint that triggers the ROV request is the complaint resolution process. Financial institutions can capture consumer feedback regarding potential valuation deficiencies through existing complaint resolution processes. The complaint resolution process may capture complaints and inquiries about the financial institution’s products and services across all lines of business, including those provided by third parties, as well as complaints from various channels (such as letters, phone calls, in-person contacts, transmittals from regulators or third-party valuation service providers, emails, and social media). 

Depending on the nature and volume, appraisal and other valuation-based complaints and inquiries can be important indicators of potential risks and risk management weaknesses. Appropriate policies, procedures, and control systems can adequately address the monitoring, escalating, and resolving of complaints, including determining the merits of the complaint and whether a financial institution should initiate an ROV.
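
As a hedged sketch of how such a control might be systematized, the fragment below triages a valuation complaint and decides whether to initiate an ROV. The statuses, criteria, and field names are assumptions for illustration, not interagency requirements.

```python
# Hypothetical sketch: triaging a valuation complaint to decide whether
# to initiate an ROV. Statuses, criteria, and field names are
# assumptions for illustration, not interagency requirements.
from dataclasses import dataclass, field

@dataclass
class ValuationComplaint:
    loan_id: str
    channel: str                    # e.g., "phone", "email", "regulator"
    alleges_discrimination: bool
    new_comparables: list = field(default_factory=list)
    factual_errors: list = field(default_factory=list)

def triage(complaint):
    """Escalate discrimination claims; initiate an ROV when the consumer
    supplies specific, verifiable information; otherwise log and respond."""
    if complaint.alleges_discrimination:
        return "ESCALATE_TO_FAIR_LENDING_AND_ROV"
    if complaint.new_comparables or complaint.factual_errors:
        return "INITIATE_ROV"
    return "LOG_AND_RESPOND"

c = ValuationComplaint("L-1001", "email", False,
                       new_comparables=["123 Elm St (sold 60 days ago)"])
print(triage(c))  # INITIATE_ROV
```

In practice, the escalation rules, documentation, and timeframes would come from the institution's own ROV policy and complaint resolution procedures.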

Policies and Procedures

With respect to procedures you can implement to avoid compromising appraisal independence, there are several policies, procedures, and control systems that should be considered. I will offer a brief outline of such systemic activities that should be installed in the loan flow process.