TOPICS

Thursday, February 2, 2023

Artificial Intelligence Benefits and Risks

QUESTION

There has been a lot of news these days about AI. Recently, a Fintech firm pitched us on using their AI platform to enhance our customer services. They also offer many other AI services. Frankly, I am “old school.” I am suspicious of all this technology coming in and taking the place of humans. 

Our president, however, is very into tech and gadgets. He wants to use AI for credit decisions, risk management, and even cybersecurity. Our AML officer has climbed aboard the AI bandwagon and wants to use AI to flag potential SAR filings. 

I head up customer services. The human factor is what I know, and it works well. I need somebody to ease my aching mind about all these AI services. And that “somebody” is going to be you. A lot of people in the company read your newsletter. We like your straightforward approach. 

Here are our questions: 

How is AI used in mortgage banking operations? 

What are the risks of using AI in banks and nonbanks? 

ANSWER

Thank you for your kind words. I will always have a soft spot for the people in customer service, the unit you head. That’s because customer service is often the first contact a consumer has with a company. How that communication is handled can make the difference between a friendly business relationship and a consumer complaint. Consumer complaints can damage a company’s reputation, cost it business, and cause a great deal of legal and regulatory havoc.

Artificial Intelligence, known by its acronym “AI,” is here to stay. I am going to outline a few of the ways that financial institutions are using AI. The five federal financial regulatory agencies (viz., OCC, FRB, FDIC, CFPB, and NCUA) have been reviewing the use of AI, including machine learning, for several years. My sense is that the agencies are receptive to AI innovation, with the caveat that companies identify and manage the risks associated with AI use.

I count at least six areas where banks and nonbanks are using AI. 

Using Artificial Intelligence

1. Flagging Unusual Transactions 

Many institutions use AI to identify potentially suspicious, anomalous, or outlier transactions (i.e., fraud detection and financial crime monitoring). This involves using different forms of data (e.g., email text, audio data), both structured (systematically organized or arranged) and unstructured. The aim is to identify fraud or anomalous transactions with greater accuracy and timeliness. It also includes identifying transactions for Bank Secrecy Act/Anti-Money Laundering investigations, monitoring employees for improper practices, and detecting data anomalies.
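To make the flagging idea concrete, here is a minimal sketch of transaction anomaly detection using scikit-learn’s IsolationForest. The features, amounts, and threshold are invented for illustration; they do not represent any particular vendor’s platform.

```python
# Minimal anomaly-detection sketch (hypothetical features; illustrative data only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "normal" transactions: amount (USD) and hour of day.
normal = np.column_stack([
    rng.normal(120, 40, 1000),   # typical amounts cluster around $120
    rng.normal(14, 3, 1000),     # most activity in mid-afternoon
])
suspicious = np.array([[9500.0, 3.0]])  # a large transfer at 3 a.m.

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for outliers and 1 for inliers.
print(model.predict(suspicious))  # [-1] -> route to an analyst for review
```

In practice, a flagged transaction would be queued for human review rather than acted on automatically; the model only prioritizes the analysts’ attention.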

2. Personalization of Customer Services 

Institutions are using AI technologies to improve the customer experience and to gain efficiencies, thereby better managing the allocation of financial resources. Examples include voice recognition and Natural Language Processing (NLP); NLP generally refers to the use of computers to understand or analyze natural language text or speech.

Another example is the use of chatbots. The term “chatbot” may be new to you. Generally, it refers to a software application used to conduct an online chat conversation via text or text-to-speech instead of providing direct contact with a live human agent. The chatbot automates routine customer interactions, including account opening activities and general customer inquiries. Chatbots are used in call centers to process and triage customer calls to provide customized service, as sketched below. In fact, some institutions are using chatbot technology to better target marketing and to customize certain responses and recommendations.
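As a toy illustration of the triage idea, the sketch below routes a customer message by keyword. Production chatbots rely on trained NLP models rather than keyword lists; the intents and keywords here are hypothetical.

```python
# Toy chatbot triage sketch (hypothetical intents; real systems use NLP models).
INTENT_KEYWORDS = {
    "balance_inquiry": ["balance", "how much"],
    "new_account":     ["open an account", "new account"],
    "payoff_quote":    ["payoff", "pay off"],
}

def route(message: str) -> str:
    """Match a customer message to an intent; otherwise escalate to a human."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "human_agent"  # fall back to live support when nothing matches

print(route("What is my escrow balance?"))  # balance_inquiry
print(route("I want to dispute a charge"))  # human_agent
```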

3. Credit Decisions 

Some institutions use AI to inform credit decisions, enhancing or supplementing existing underwriting techniques. This implementation of AI uses traditional data or “alternative data,” which is information not typically found in consumers’ credit files at the nationwide consumer reporting agencies or customarily provided by consumers as part of credit applications. An example of alternative data is cash-flow transactional information drawn from a consumer’s bank account.
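Here is a minimal sketch of how cash-flow data might supplement traditional inputs, assuming a simple logistic regression. The features, applicants, and labels are fabricated for illustration and are far simpler than any production credit model.

```python
# Sketch of a credit model blending traditional and alternative (cash-flow) data.
# All features, applicants, and labels are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: credit score, debt-to-income ratio, avg. monthly account inflow ($k).
X = np.array([
    [720, 0.25, 5.2],
    [640, 0.45, 2.1],
    [580, 0.50, 1.0],
    [700, 0.30, 4.0],
    [610, 0.40, 3.5],
    [680, 0.35, 2.8],
])
y = np.array([1, 0, 0, 1, 1, 0])  # 1 = repaid, 0 = defaulted (toy labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[655, 0.38, 3.9]])  # thin credit file, healthy cash flow
print(model.predict_proba(applicant)[0, 1])  # estimated repayment probability
```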

4. Risk Management 

We have clients that use AI to augment risk management and control practices. For example, it can be used in credit analytics, where an AI program can complement and provide a check on another, more traditional credit model. Financial institutions may also use AI to enhance credit monitoring (including through early warning alerts), payment collections, loan restructuring and recovery, and loss forecasting. 

Sometimes, AI assists internal audit and independent risk management by increasing sample sizes (such as for testing), evaluating risk, and referring higher-risk issues to human analysts. AI may also be used in liquidity risk management. We have a client that uses AI for just that purpose, seeking to enhance its monitoring of market conditions and collateral management.

5. Textual Analysis 

Textual analysis refers to using NLP (viz., natural language processing, supra) for handling unstructured data (generally text) and obtaining insights from that data or improving the efficiency of existing processes. Various applications include analysis of regulations, news flow, earnings reports, consumer complaints, analyst rating changes, and legal documents. 
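Here is a small sketch of one common textual-analysis pattern: classifying consumer complaints by topic using TF-IDF features. The complaint snippets and labels are made up for illustration.

```python
# Sketch of textual analysis: routing consumer complaints by topic.
# The complaint snippets and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

complaints = [
    "My escrow payment was misapplied and my statement is wrong",
    "The servicer charged a late fee even though I paid on time",
    "I was denied a loan modification without any explanation",
    "My interest rate changed without notice on my statement",
]
labels = ["servicing", "fees", "loss_mitigation", "servicing"]

# TF-IDF turns unstructured text into numeric features a classifier can use.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(complaints, labels)

print(model.predict(["They added a fee I never agreed to"]))  # likely ['fees']
```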

6. Cybersecurity 

AI is being implemented in cybersecurity: financial institutions use it to detect threats and malicious activity, reveal attackers, identify compromised systems, and support threat mitigation. Many examples are worth noting, such as real-time investigation of potential attacks, the use of behavior-based detection to collect network metadata, flagging and blocking of new ransomware and other malicious attacks, identification of compromised accounts and files involved in exfiltration, and deep forensic analysis of malicious files.

In sum, AI has the potential to provide more accurate, lower-cost, and faster underwriting, as well as expanded credit access for consumers who might not have obtained credit under traditional underwriting approaches.

Risks of Artificial Intelligence

Notwithstanding the foregoing benefits, there are risks associated with AI, although it bears stating that many of the risks associated with using AI are not unique to AI. For instance, using AI could result in operational vulnerabilities, such as internal process or control breakdowns, cyber threats, information technology lapses, risks associated with the use of third parties, and various modeling risks, all of which could affect an institution’s safety and soundness.

AI could also create or heighten consumer protection risks, such as the risk of unlawful discrimination, violations of the prohibition on Unfair, Deceptive, or Abusive Acts or Practices (UDAAP) under the Dodd-Frank Act or on unfair or deceptive acts or practices (UDAP) under the FTC Act, and privacy concerns.

Essentially, there are three risk areas associated with AI: explainability, data usage, and dynamic updating.

1. Explainability 

“Explainability” refers to how an AI application uses inputs to produce outputs. Some AI applications exhibit a “lack of explainability” in their overall functioning (sometimes known as global explainability) or in how they arrive at an individual outcome in a given situation (sometimes referred to as local explainability). Lack of explainability can pose different challenges in different contexts. It may inhibit management’s understanding of the conceptual soundness of an AI program, such as the quality of the theory, design, methodology, data, and developmental testing, and confirmation that the program is appropriate for the intended use. Without that confirmation, uncertainty around the AI application’s reliability increases, as does the risk of using it in new contexts. Furthermore, a lack of explainability may inhibit independent review and audit and make compliance with laws and regulations, including consumer protection requirements, more challenging.
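For a linear model, local explainability can be as simple as reading the per-feature contributions to a single decision, as in this sketch; the feature names and data are hypothetical. More complex models (deep networks, boosted ensembles) generally require dedicated tools such as SHAP or LIME.

```python
# Sketch of local explainability for a linear model: per-feature contributions
# to one decision (coefficient x input value = contribution to the log-odds).
# Feature names are hypothetical; complex models need tools like SHAP or LIME.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.2, 0.9], [0.8, 0.1], [0.3, 0.8], [0.9, 0.2]])
y = np.array([1, 0, 1, 0])  # toy approval labels
feature_names = ["debt_to_income", "cash_reserves"]

model = LogisticRegression().fit(X, y)

applicant = np.array([0.7, 0.3])
contributions = model.coef_[0] * applicant
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")  # sign shows push toward/away from approval
```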

2. Broader or More Intensive Data Usage 

Data plays a particularly important role in AI. For instance, AI typically depends on large volumes of training data. In many cases, AI algorithms identify patterns and correlations in the training data without human context or intervention and then use that information to generate predictions or categorizations. Because the algorithm depends on its training data, an AI system generally reflects any limitations of that dataset. As a result, as with other systems, AI may perpetuate or even amplify bias or inaccuracies inherent in the training data, or make incorrect predictions if the dataset is incomplete or unrepresentative.
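The following sketch shows the point in miniature: a model trained only on small loans is asked about a jumbo loan it has never seen, and its answer has no support in the training data. All numbers are synthetic.

```python
# Sketch of training-data limitations: a model fit only on small loans is asked
# about a jumbo loan it has never seen. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=1)

# Training data covers loans of $50k-$150k only (amounts in $ thousands).
loan_size = rng.uniform(50, 150, 200).reshape(-1, 1)
loss_rate = 0.02 + 0.0001 * loan_size.ravel() + rng.normal(0, 0.002, 200)

model = LinearRegression().fit(loan_size, loss_rate)

# Extrapolating to a $900k jumbo loan: the prediction runs, but nothing in the
# training data supports it, so the number may be badly wrong.
print(model.predict([[900.0]]))
```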

3. Dynamic Updating 

Some AI approaches can update themselves, sometimes without human interaction, a capability often known as dynamic updating. Monitoring and tracking an AI program that evolves on its own presents challenges for review and validation, particularly when a change in external circumstances (e.g., an economic downturn or financial crisis) causes inputs to vary materially from the original training data. Dynamic updating techniques can produce changes ranging from minor adjustments to existing model elements to the introduction of entirely new elements.
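A sketch of the dynamic-updating pattern appears below, using incremental fitting on monthly batches whose input distribution drifts over time; the data are synthetic. That drift is precisely what review and validation teams must monitor.

```python
# Sketch of dynamic updating: incremental refits on monthly batches of data.
# SGDClassifier.partial_fit updates the weights batch by batch; data synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=2)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])

for month in range(12):
    # The input distribution drifts a little each month, mimicking changing
    # economic conditions that move inputs away from the original training data.
    drift = 0.1 * month
    X = rng.normal(loc=drift, scale=1.0, size=(100, 3))
    y = (X.sum(axis=1) > 3 * drift).astype(int)
    model.partial_fit(X, y, classes=classes)  # weights evolve with each batch

print(model.coef_)  # the "current" model after a year of self-updating
```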

Jonathan Foxx, Ph.D., MBA
Chairman & Managing Director
Lenders Compliance Group