TOPICS

Thursday, September 28, 2023

Artificial Intelligence: Benefits and Risks

QUESTION 

There has been a lot of news about artificial intelligence. I have to admit, I do not know anything about it. Yet my company has just announced that it is linking up with an artificial intelligence provider. 

Now, we are scrambling to understand how artificial intelligence will impact our jobs, loan process, and compliance requirements. Last year, nobody cared about AI. This year, it’s all they can talk about! 

I would like you to tell us some ways that AI is used by banks and nonbanks, since providing compliance to us is your specialty. We need some basic understanding of how AI will be a part of originating and servicing loans. 

What are some ways that financial institutions are using AI? 

ANSWER 

I sense your frustration, and you are not alone. Whenever a new technology or innovation enters the marketplace, there is a perfectly normal tendency to be suspicious, even worried, about its implications. In time, these concerns are often resolved, sometimes with less than optimal impact on society, sometimes with far-reaching positive impact. The challenge is anticipating change and preparing proactively to mitigate unwanted outcomes. 

I don’t think financial institutions should rush into Artificial Intelligence (“AI”) without first considering compliance. But there are good reasons to implement AI as a tool in the quest for a strong compliance program. When planning to partner with an AI vendor, it is important to bring in a firm such as ours to provide reliable due diligence and ensure that the compliance component is integral to the plan. This creates a “baseline” that serves to enhance policies and procedures, training, and ongoing improvements in the technological application. 

Many banking agencies have been vetting AI for a few years. They are still in the early stages of rulemaking, but there has been an increase in regulatory guidance issuances. As a provider of customized compliance libraries, we are updating our clients’ policies to reflect such guidance. And when rulemaking is finalized, we will provide an AI policy, tailored to each client's needs, in advance of the promulgated compliance effective date. 

Five banking agencies (OCC, FRB, FDIC, CFPB, and NCUA) have sought information and comments on the use of AI, including machine learning, by financial institutions. The caveat thus far is that they support responsible innovation as long as it includes identifying and managing associated risks. 

We can glean the areas of scrutiny being reviewed for supervision, examination, and enforcement by taking note of the following ways financial institutions use or may use AI. This outline is not meant to be comprehensive, but based on our interactions with regulators and published issuances, I am confident these areas are under review for AI compliance. 

ARTIFICIAL INTELLIGENCE: BENEFITS

Flagging Unusual Transactions 

Many institutions use AI to identify potentially suspicious, anomalous, or outlier transactions (for instance, fraud detection and financial crime monitoring). This involves using different forms of data (e.g., email, texts, and audio data, both structured and unstructured)[i] to identify fraud or anomalous transactions with greater accuracy and timeliness. It also includes identifying transactions for Bank Secrecy Act/Anti-Money Laundering purposes, monitoring employees for improper practices, and detecting data anomalies. 
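To give a flavor of what “flagging outlier transactions” means in practice, here is a deliberately simple sketch. It uses a robust modified z-score (median absolute deviation) to flag amounts far from an account's typical activity; the account history and the 3.5 cutoff are invented for illustration, and production fraud models are, of course, far more sophisticated.

```python
from statistics import median

def flag_outliers(amounts, threshold=3.5):
    """Flag amounts far from the account's typical activity using a
    robust modified z-score (median absolute deviation)."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no variation in activity; nothing stands out
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

# Hypothetical account history: routine activity plus one large transfer.
history = [120.0, 95.0, 110.0, 105.0, 98.0, 115.0, 9500.0]
flagged = flag_outliers(history)  # the $9,500 transfer is flagged
```

The point of the sketch is the workflow, not the statistics: an automated rule surfaces the anomaly, and a human analyst decides whether it is suspicious.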

Personalization of Customer Services 

Institutions use AI technologies, such as voice recognition and Natural Language Processing (NLP),[ii] to improve the customer experience and increase efficiency in allocating financial institution resources. 

One example is using chatbots[iii] to automate routine customer interactions, including account opening activities and general customer inquiries. AI is leveraged at call centers to process and triage customer calls to provide customized service. Institutions also use these technologies to better target marketing and customize trade recommendations. 
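The triage idea can be illustrated with a toy intent router. Real chatbots use trained NLP models; this sketch only matches keywords, and the intents and keyword lists are invented for the example. The key design point is the fallback: when no intent matches, the message is handed to a live agent.

```python
# Hypothetical routing table; real deployments use trained NLP models.
INTENTS = {
    "balance": {"balance", "funds", "available"},
    "open_account": {"open", "new", "account"},
}

def route(message):
    """Route a customer message to the best-matching intent,
    falling back to a live agent when nothing matches."""
    words = {w.strip("?.,!") for w in message.lower().split()}
    best, overlap = "agent", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > overlap:
            best, overlap = intent, score
    return best
```

For example, "What is my available balance?" routes to the balance intent, while an unrecognized message like "hello there" goes to a human.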

Credit Decisions 

Some institutions use AI to inform credit decisions to enhance or supplement existing techniques. This application of AI may use traditional data or employ “alternative data”[iv] (such as cash flow transactional information from a bank account). 
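As a concrete (and entirely invented) example of the kind of feature alternative-data underwriting might derive from bank-account transactions, consider average monthly net cash flow. The function and data below are illustrative only, not a real credit model or policy.

```python
# Toy "cash-flow" feature of the kind alternative-data underwriting
# might derive from a bank account; data values are invented.
def monthly_net_flow(transactions):
    """transactions: list of (month, amount) with deposits positive and
    withdrawals negative. Returns the average net inflow per month."""
    months = {}
    for month, amount in transactions:
        months[month] = months.get(month, 0.0) + amount
    return sum(months.values()) / len(months)

txns = [("2023-01", 3000.0), ("2023-01", -2400.0),
        ("2023-02", 3000.0), ("2023-02", -2500.0)]
feature = monthly_net_flow(txns)  # average of $600 and $500 net inflow
```

A lender's model might use such a feature alongside, or in place of, traditional credit-file variables for applicants with thin credit histories.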

Risk Management 

Institutions may use AI to augment risk management and control practices. For example, an AI approach might be used to complement and provide a check on another, more traditional credit model. Financial institutions may also use AI to enhance credit monitoring (including through early warning alerts), payment collections, loan restructuring and recovery, and loss forecasting. 

AI can assist internal audit and independent risk management to increase sample size (such as for testing), evaluate risk, and refer higher-risk issues to human analysts. Indeed, AI may also be used in liquidity risk management, for example, to enhance monitoring of market conditions or collateral management. 

Textual Analysis 

Textual analysis refers to using NLP for handling unstructured data (generally text) and obtaining insights from that data or improving the efficiency of existing processes. Applications include analysis of regulations, news flow, earnings reports, consumer complaints, analyst ratings changes, and legal documents. 
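A minimal sketch of this idea: surface the most frequent substantive terms across a batch of complaint narratives. Real textual analysis uses trained NLP models; this example, with its invented stopword list and sample complaints, only illustrates the notion of mining unstructured text for insight.

```python
from collections import Counter
import re

# Illustrative stopword list; real NLP pipelines use much richer ones.
STOPWORDS = {"the", "a", "my", "i", "was", "and", "to", "of", "on"}

def top_terms(documents, n=3):
    """Return the n most frequent non-stopword terms across documents."""
    words = re.findall(r"[a-z']+", " ".join(documents).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n)]

# Hypothetical consumer complaint narratives.
docs = ["My loan payment was posted late",
        "Late fee charged on my loan",
        "Loan servicing posted the payment late"]
```

Here the recurring terms ("loan," "late," "payment") point an analyst toward a possible payment-posting problem, which is the kind of signal textual analysis is meant to surface.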

Cybersecurity 

Institutions may use AI to detect threats and malicious activity, reveal attackers, identify compromised systems, and support threat mitigation. Examples abound, including real-time investigation of potential attacks, the use of behavior-based detection to collect network metadata, flagging and blocking of new ransomware and other malicious attacks, identifying compromised accounts and files involved in exfiltration, and deep forensic analysis of malicious files. 

There are risks, too, which I’ll explain shortly. But it should be obvious that the agencies recognize that AI has the potential to offer improved efficiency, enhanced performance, and cost reduction for financial institutions, as well as benefits to consumers and businesses. AI can identify relationships among variables that are not intuitive or not revealed by more traditional techniques. And it can better process certain forms of information, such as text, that may be impractical or difficult to process using traditional methods. 

AI also facilitates processing significantly large and detailed datasets, both structured and unstructured, by identifying patterns or correlations that would be impracticable to ascertain otherwise.

In general, other potential AI benefits include more accurate, lower-cost, and faster underwriting and expanded credit access for consumers and small businesses that may not have obtained credit under traditional credit underwriting approaches. AI applications may also enhance an institution’s ability to provide products and services with greater customization. 

ARTIFICIAL INTELLIGENCE: RISKS

But there are risks. The agencies have emphasized that financial institutions should have processes to identify and manage the potential risks associated with AI. Many of the risks associated with using AI are not unique to AI. For example, using AI could result in operational vulnerabilities, such as internal process or control breakdowns, cyber threats, information technology lapses, risk associated with using third parties, and model risks, all of which could affect an institution’s safety and soundness. 

Furthermore, the use of AI could also create or increase consumer protection risks, such as risks of unlawful discrimination, unfair, deceptive, or abusive acts or practices (UDAAP) under the Dodd-Frank Act, unfair or deceptive acts or practices regulation (UDAP) under the FTC Act, or privacy concerns.

The agencies have identified three risks particular to AI: 

  • Explainability, 
  • Data Usage, and 
  • Dynamic Updating. 

Here’s a brief explanation of each risk. 

Explainability 

“Explainability” refers to how an AI approach uses inputs to produce outputs. In other words, some AI approaches can exhibit a “lack of explainability” for their overall functioning (sometimes known as global explainability) or how they arrive at an individual outcome in a given situation (sometimes referred to as local explainability). 

Lack of explainability can pose different challenges in different contexts. It can inhibit management’s understanding of the conceptual soundness of an AI approach (that is, the quality of the theory, design, methodology, data, developmental testing, and confirmation that an approach is appropriate for the intended use), which in turn can increase uncertainty about the AI approach’s reliability and increase risk when the approach is used in new contexts. 

Lack of explainability can also inhibit independent review and audit and make compliance with laws and regulations, including consumer protection requirements, more challenging. 
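To see what explainability looks like when it *is* available, consider the simplest possible case: a linear score, where each input's signed contribution to an individual decision can be read off directly (local explainability). The weights and applicant features below are invented for illustration. The risk the agencies describe is that complex AI approaches offer no such direct decomposition.

```python
# Invented weights for an illustrative linear scoring model.
WEIGHTS = {"payment_history": 0.5, "utilization": -0.3, "account_age": 0.2}

def explain(applicant):
    """Return each feature's signed contribution to this applicant's score,
    i.e., a local explanation of one individual outcome."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"payment_history": 0.9, "utilization": 0.8, "account_age": 0.4}
contributions = explain(applicant)
score = sum(contributions.values())
```

With a model like this, an examiner or auditor can verify exactly why an applicant scored as they did; an opaque AI approach cannot be decomposed this way, which is precisely the compliance challenge.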

Data Usage 

Broader or more intensive data usage plays a particularly important role in AI. In many cases, AI algorithms identify patterns and correlations in training data without human context or intervention and then use that information to generate predictions or categorizations. 

Because the AI algorithm depends on the training data, an AI system generally reflects any dataset limitations. As a result, as with other systems, AI may perpetuate or even amplify bias or inaccuracies inherent in the training data or make incorrect predictions if that data set is incomplete or non-representative. 
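A toy example makes the dataset-limitation point concrete. The "model" below simply learns an approval cutoff from the average income in its training data; the incomes and the rule are invented. Train it on a non-representative (high-income) sample and it rejects an applicant that a representative sample would approve.

```python
from statistics import mean

def learn_cutoff(training_incomes):
    """A naive 'model': approve anyone at or above the training average."""
    return mean(training_incomes)

skewed = [90_000, 110_000, 120_000, 100_000]          # high-income sample only
representative = [40_000, 55_000, 90_000, 70_000, 45_000]

applicant_income = 60_000
skewed_decision = applicant_income >= learn_cutoff(skewed)        # rejected
fair_decision = applicant_income >= learn_cutoff(representative)  # approved
```

Nothing in the algorithm changed between the two runs; only the training data did. That is the sense in which an AI system "reflects any dataset limitations."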

Dynamic Updating 

Some AI approaches have the capacity to update on their own, sometimes without human interaction, often known as dynamic updating. Monitoring and tracking an AI approach that evolves on its own can present challenges in review and validation, particularly when a change in external circumstances (e.g., an economic downturn or financial crisis) causes inputs to vary materially from the original training data. 

Dynamic updating techniques can produce changes that range from minor adjustments to existing elements of a model to the introduction of entirely new elements. 
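A minimal sketch of dynamic updating: a running estimate that adjusts itself with every new observation, no human in the loop. The numbers are invented, but they show how a shift in external conditions quietly moves the model away from what was originally validated.

```python
class RunningMean:
    """A self-updating estimate: each new observation shifts the model."""
    def __init__(self):
        self.n, self.value = 0, 0.0

    def update(self, x):
        self.n += 1
        self.value += (x - self.value) / self.n  # incremental mean update
        return self.value

model = RunningMean()
for x in [100, 102, 98, 100]:   # conditions seen during validation
    model.update(x)
baseline = model.value           # the estimate as originally validated

for x in [60, 55, 58]:           # downturn: inputs shift materially
    model.update(x)              # the model drifts with no human review
```

After the shift, the model no longer behaves as it did when it was validated, which is why dynamic updating complicates review and validation.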

Jonathan Foxx, Ph.D., MBA
Chairman & Managing Director 
Lenders Compliance Group


[i] The term “structured data” generally refers to a set of data that has been systematically organized or arranged.

[ii] “Natural Language Processing” or “NLP” generally refers to the use of computers to understand or analyze natural language text or speech.

[iii] The term “chatbot” generally refers to a software application used to conduct an on-line chat conversation via text or text-to-speech, in lieu of providing direct contact with a live human agent.

[iv] “Alternative data” means information not typically found in the consumer’s credit files of the nationwide consumer reporting agencies or customarily provided by consumers as part of applications for credit.