
Thursday, April 11, 2024

Policy Statement for Artificial Intelligence

QUESTION 

You have been writing about Artificial Intelligence since it became popular. Most of us in my company only have a superficial understanding of AI. As the Compliance Manager, I surveyed those who were using it. It turns out that it’s only used in chats and searches. Meanwhile, our Board wants to introduce it into our loan origination procedures. 

Several companies are now pitching Senior Management and the Board on their AI capabilities. Frankly, I foresee a massive amount of training, monitoring, and auditing ahead, and they have tasked me with writing a risk/benefit outline for using AI. They plan to use my outline as a scorecard to vet potential AI partners. 

To complicate matters, they want me to present the outline as a policy statement they can adopt, with full policies and procedures then to be based on that policy statement. Overnight, I am supposed to become an expert in Artificial Intelligence as it relates to mortgage banking! 

I need help drafting a risk/benefit outline and a policy statement. 

What AI benefits and risks can be listed in the outline? 

What elements constitute a policy statement about AI? 

COMPLIANCE SOLUTION 

Policies and Procedures 

ANSWER 

Virtually since the inception of the Artificial Intelligence (AI) craze, I have been writing and speaking about its pros and cons. Although I see numerous benefits, I also see numerous risks. Do the benefits outweigh the risks? 

A new technology's consequences are often unpredictable. Currently, self-driving trucks are promoted as the future of freight delivery. As I write, about two dozen states specifically allow driverless operation of vehicles, another 16 states have no regulations at all specific to “autonomous vehicles,” and only ten states place limits on them.[i] 

Now, let's hold self-driving trucks up to the light of risk/benefit thinking. I'm sure there are plenty of benefits, as is the case with new technologies, but the risks can be catastrophic because the livelihoods of human truckers are at stake. Long-haul trucking is estimated to lose at least 500,000 jobs, which amounts to a financial catastrophe for those families. Add in the maintenance and support staff, truck stop employees, and all their families, and the overall consequence is devastating on nearly every level. The exception, of course, is the benefit to the self-driving companies, since their robotic trucks do not need to feed families, can be readily replaced, and can drive 24 hours a day, 7 days a week. Millions of lives are adversely impacted. So, you tell me, what are the foreseeable consequences of a new technology? Some consequences are “known-knowns.” The consequences of the self-driving truck are a known-known. 

Artificial Intelligence has a few known-known consequences, and I will mention some of them. However, the vast area of unknown consequences is not yet apprehended at this early stage of implementation. When drafting the risk outline and policy statement, I suggest you insert a proviso that the known-known is incomplete and that the unknown vastly overwhelms the known. 

Regulators have expressed concern about how we use AI, so you need to know what measures to take to ensure you understand its risks. I will provide a brief risk outline in the context of a policy statement because the two cannot and should not be separate aspects of AI governance. Each policy statement must reflect a company's size, products, services, complexity, risk profile, and business strategy. My generic synopsis is not, and is not meant to be, comprehensive. 

RISK OUTLINE AND POLICY STATEMENT 

Flagging Unusual Transactions 

AI may identify potentially suspicious, anomalous, or outlier transactions (e.g., fraud detection and financial crime monitoring). This involves using different forms of data (e.g., emails and audio data, both structured[ii] and unstructured) to identify fraud or anomalous transactions with greater accuracy and timeliness. It also includes identifying transactions for Bank Secrecy Act/Anti-Money Laundering investigations, monitoring employees for improper practices, and detecting data anomalies. 
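
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of statistical outlier test that transaction monitoring builds on. The amounts, field names, and the three-sigma cutoff are hypothetical assumptions for illustration only; a production fraud model is far more sophisticated.

# Minimal outlier-flagging sketch: z-score of a new transaction amount
# against a customer's history. Amounts and the 3-sigma cutoff are
# illustrative assumptions, not a production fraud model.
from statistics import mean, stdev

history = [120.00, 85.50, 210.00, 95.25, 150.75, 110.00]  # prior amounts
new_amount = 4750.00

mu, sigma = mean(history), stdev(history)
z = (new_amount - mu) / sigma

if abs(z) > 3:  # rule-of-thumb cutoff; tune to the institution's risk appetite
    print(f"FLAG for review: {new_amount:.2f} is {z:.1f} sigmas from the mean")

In practice, a rule like this feeds an investigator's queue; it should never make a final determination on its own.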

Personalization of Customer Services 

AI technologies, such as voice recognition and Natural Language Processing (NLP),[iii] may improve the customer experience and increase efficiency in allocating financial institution resources. One example is using chatbots[iv] to automate routine customer interactions, such as account opening activities and general customer inquiries. AI is also leveraged at call centers to process and triage customer calls and provide customized service. These technologies may likewise be implemented to better target marketing efforts. 
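
As a simple illustration of triage, here is a toy Python router that assigns a customer message to a queue by keyword. The intents, keywords, and queue names are hypothetical; real systems use trained NLP classifiers rather than keyword lists, but the routing decision they automate looks much like this.

# Toy chat/call triage: route a customer message to a queue by keyword.
# Intents, keywords, and queue names are illustrative assumptions.
ROUTES = {
    "payoff": "loan_servicing",
    "rate": "loan_officers",
    "statement": "customer_service",
    "fraud": "security_team",
}

def route(message: str) -> str:
    text = message.lower()
    for keyword, queue in ROUTES.items():
        if keyword in text:
            return queue
    return "general_inquiries"  # fall back to a human agent

print(route("I think there is fraud on my account"))  # -> security_team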

Credit Decisions 

AI may inform credit decisions to enhance or supplement existing techniques. This application of AI may use traditional data or employ alternative data[v] (such as cash flow transactional information from a bank account). 
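
Here is a minimal sketch of what alternative data might look like in code: cash-flow features computed from hypothetical monthly bank-account figures, fed into a toy score. The feature names and weights are assumptions for exposition only; any actual underwriting model must be validated and tested for fair lending compliance.

# Illustrative cash-flow features from hypothetical bank-account data.
# Weights and the toy score are assumptions, not an underwriting model.
deposits    = [3200, 3200, 3150, 3300]  # monthly inflows
withdrawals = [2900, 3100, 2800, 3000]  # monthly outflows

avg_net_flow = sum(d - w for d, w in zip(deposits, withdrawals)) / len(deposits)
overdraft_months = sum(1 for d, w in zip(deposits, withdrawals) if w > d)

score = 600 + 0.5 * avg_net_flow - 40 * overdraft_months
print(f"net flow {avg_net_flow:.2f}, overdraft months {overdraft_months}, score {score:.0f}")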

Risk Management 

AI may be used to augment risk management and control practices. For instance, AI may complement, and provide a check on, another, more traditional credit model. It may also enhance credit monitoring (including through early warning alerts), payment collections, loan restructuring and recovery, and loss forecasting. AI can assist internal audit and independent risk management functions by increasing sample sizes (such as for testing), evaluating risk, and referring higher-risk issues to human analysts. AI can also be used in liquidity risk management, for example, to enhance monitoring of market conditions or real estate collateral management. 
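
One of these uses, increasing audit sample sizes, can be pictured with a short risk-weighted sampling sketch. The loan identifiers, risk scores, and sample size below are hypothetical; a real audit program documents its sampling methodology in full.

# Risk-weighted audit sampling sketch: higher-risk loans are more likely
# to be pulled for testing. Scores and sample size are hypothetical.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

loan_risk = {"L-001": 0.10, "L-002": 0.80, "L-003": 0.35, "L-004": 0.95, "L-005": 0.20}

ids, weights = zip(*loan_risk.items())
sample = random.choices(ids, weights=weights, k=3)  # may repeat; dedupe in practice
print("Selected for testing:", sorted(set(sample)))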

Textual Analysis 

Textual analysis refers to using NLP to handle unstructured data (generally text), obtain insights from that data, or improve the efficiency of existing processes. Applications could include analysis of regulations, news flow, earnings reports, consumer complaints, analyst rating changes, and legal documents. 
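
A toy example shows the flavor of such analysis: counting recurring terms in hypothetical complaint narratives so that themes, such as escrow problems, surface on their own. Real NLP pipelines use tokenizers, lemmatization, and topic models; the complaints and stopword list here are assumptions for illustration.

# Toy textual analysis: surface recurring terms in complaint narratives.
from collections import Counter

complaints = [
    "The escrow analysis was wrong and my payment went up",
    "Payment posted late and I was charged a late fee",
    "Nobody explained the escrow shortage on my statement",
]

STOPWORDS = {"the", "and", "was", "my", "on", "i", "a", "went", "up"}
terms = Counter(
    word for c in complaints for word in c.lower().split() if word not in STOPWORDS
)
print(terms.most_common(5))  # escrow, payment, and late rise to the top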

Cybersecurity 

AI may be able to detect threats and malicious activity, reveal attackers, identify compromised systems, and support threat mitigation. For instance, it could support real-time investigation of potential attacks, use behavior-based detection to analyze network metadata, flag and block new ransomware and other malicious attacks, identify compromised accounts and files involved in exfiltration, and conduct deep forensic analysis of malicious files. 
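
Behavior-based detection can be pictured with a small sketch that flags logins outside a user's usual pattern. The baseline, users, and events are hypothetical; real tools build per-user baselines from network metadata at far finer granularity.

# Behavior-based detection sketch: flag logins outside a user's baseline.
# The baseline and events are illustrative assumptions.
usual = {"alice": {"hours": range(8, 19), "countries": {"US"}}}

events = [
    {"user": "alice", "hour": 10, "country": "US"},
    {"user": "alice", "hour": 3,  "country": "RO"},  # off-hours, new geography
]

for e in events:
    base = usual[e["user"]]
    if e["hour"] not in base["hours"] or e["country"] not in base["countries"]:
        print(f"ALERT: anomalous login for {e['user']}: {e}")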

KNOWN RISKS 

Several risks are particular to AI. The top three known risks involve explainability, data usage, and dynamic updating. 

Explainability 

“Explainability” refers to how well an AI approach's use of inputs to produce outputs can be understood. 

Some AI approaches can exhibit a “lack of explainability” for their overall functioning (sometimes referred to as “global explainability”) or how they arrive at an individual outcome in a given situation (sometimes referred to as “local explainability”). Lack of explainability can pose different challenges in different contexts. 

Lack of explainability can also inhibit management's understanding of the conceptual soundness of an AI approach (that is, the quality of the theory, design, methodology, data, developmental testing, and confirmation that the approach is appropriate for the intended use), which can increase uncertainty about the AI approach's reliability and increase risk when it is used in new contexts. 

And, importantly, lack of explainability can also inhibit independent review and audit and make compliance with laws and regulations, including consumer protection requirements, more challenging. 
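
For a simple model, local explainability is easy to demonstrate: each feature's contribution to one decision can be listed for a reviewer. The weights and applicant below are hypothetical, and the point is the contrast; for complex AI approaches, no such direct readout exists, which is precisely the risk.

# Local explainability sketch for a simple linear score: each feature's
# contribution (weight x value) explains one decision. Weights and the
# applicant's data are hypothetical assumptions.
weights   = {"debt_to_income": -2.0, "years_employed": 1.5, "late_payments": -3.0}
applicant = {"debt_to_income": 0.42, "years_employed": 6, "late_payments": 2}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{feature:>15}: {c:+.2f}")
print(f"{'score':>15}: {score:+.2f}")  # a reviewer can see what drove the outcome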

Broader or More Intensive Data Usage 

Data plays a particularly important role in AI. 

AI algorithms identify patterns and correlations in training data without human context or intervention and then use that information to generate predictions or categorizations. Because the AI algorithm depends on the training data, an AI system generally reflects dataset limitations. As a result, AI may perpetuate or even amplify bias or inaccuracies inherent in the training data or make incorrect predictions if that data set is incomplete or non-representative. 
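
A crude dataset check illustrates the point. The records and group labels below are hypothetical, and actual fair-lending analysis is far more involved, but even this simple tally shows how a model trained on skewed history can reproduce the skew.

# Dataset-limitation sketch: compare historical outcome rates by group
# before training. Records and group labels are hypothetical.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
]

for g in ("A", "B"):
    rows = [r for r in records if r["group"] == g]
    rate = sum(r["approved"] for r in rows) / len(rows)
    print(f"group {g}: historical approval rate {rate:.0%}")
# A model trained on this history can perpetuate the disparity it contains.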

Dynamic Updating 

Some AI approaches can update on their own, sometimes without human intervention; this is often known as “dynamic updating.” 

Monitoring and tracking an AI approach that evolves independently can present challenges in review and validation, particularly when changes in external circumstances may cause inputs to vary materially from the original training data. An example would be changes relating to economic downturns and financial crises. 

Dynamic updating techniques can produce changes ranging from minor adjustments to existing model elements to the introduction of entirely new elements. 
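
A small sketch captures both the self-updating behavior and the monitoring challenge: a statistic that retrains itself on each new observation, paired with a crude drift check against the original training baseline. The inputs and the 20 percent tolerance are hypothetical assumptions.

# Dynamic-updating sketch: an incrementally updated statistic plus a
# drift check against the training baseline. Data and the 20% drift
# tolerance are illustrative assumptions.
train_mean = 100.0        # learned from the original training data
running_mean, n = 0.0, 0

for x in [102, 98, 150, 175, 190]:           # live inputs shifting upward
    n += 1
    running_mean += (x - running_mean) / n   # incremental (online) update
    drift = abs(running_mean - train_mean) / train_mean
    if drift > 0.20:
        print(f"after {n} inputs, mean drifted {drift:.0%}: revalidate the model")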

POTENTIALS FOR AI 

AI has the potential to offer improved efficiency, enhanced performance, and cost reduction, as well as benefits to customers. It can identify relationships among variables that are not intuitive or are not revealed by more traditional techniques. It may also help to process certain forms of information, such as text, that may be impractical or difficult to process using conventional methods. 

AI also facilitates processing significantly large and detailed datasets, both structured and unstructured, by identifying patterns or correlations that would be impracticable to ascertain otherwise. 

Other potential AI benefits include more accurate, lower-cost, and faster underwriting and expanded credit access for customers who may not have obtained credit under traditional credit underwriting approaches. AI applications may also enhance the ability to provide products and services with greater customization. 

READY FOR ARTIFICIAL INTELLIGENCE? 

Does a financial institution have adequate processes in place to identify and manage the potential risks associated with AI? I don’t think so, certainly not at this early stage of AI development. 

Many of the risks associated with using AI are not unique to AI. For instance, using AI could result in operational vulnerabilities, such as internal process or control breakdowns, cyber threats, information technology lapses, risks associated with using third parties, and model risks, all of which could affect safety and soundness protocols. 

The use of AI could also create or increase consumer protection risks, such as risks of unlawful discrimination, unfair, deceptive, or abusive acts or practices, or privacy concerns. 

We may know some known-known risks and benefits. 

But we simply do not know the unknown consequences. 

And therein lies the test of time! 

As Banquo said to the witches in Macbeth:[vi] 

If you can look into the seeds of time,
And say which grain will grow and which will not,
Speak then to me.

Jonathan Foxx, Ph.D., MBA
Chairman & Managing Director 
Lenders Compliance Group


[i] Thadani, Trisha, “Ready or Not, Self-driving Semi-trucks are Coming to America’s Highways,” The Washington Post, March 31, 2024.

[ii] “Structured data” generally refers to a set of data that has been systematically organized or arranged.

[iii] “Natural Language Processing” (“NLP”) generally refers to the use of computers to understand or analyze natural language text or speech.

[iv] The term “chatbot” generally refers to a software application used to conduct an online chat conversation via text or text-to-speech, in lieu of providing direct contact with a live human agent.

[v] “Alternative data” means information not typically found in the consumer's credit files of the nationwide consumer reporting agencies or customarily provided by consumers as part of applications for credit.

[vi] Macbeth, Act 1, Scene 3