LENDERS COMPLIANCE GROUP®

AARMR | ABA | ACAMS | ALTA | ARMCP | IAPP | IIA | MBA | MERSCORP | MISMO | NAMB

Wednesday, May 6, 2026

AI Versus Humans: A Dialogue

Substack

YOUR QUESTION 

I've read your posts on AI with considerable interest. I am the owner of a Fintech organization that provides AI to mortgage companies. My partners and I get your posts all the time. Few people in the mortgage world seem to be as honest and forthright as you. We have suggested to our clients that they sign up for your AI Policy Program. We want our customers to be fully engaged in working with AI. I am writing you about a disagreement that I have with your portrayal of AI as eventually replacing humans. 

Please engage with me in a discussion of my view. It's OK with me if you want to publish our dialogue. The more discussion, the better for everyone. But I think the gloom-and-doom perspective overlooks the nuances and is not historically valid. 

There is a fundamental error people have about AI. They look at the economy and see a fixed amount of work to be done, like a pie that can only be sliced smaller and smaller as machines take bigger bites. Critics of AI say that AI users see humans as a competitive resource to be eliminated for a finite amount of work and a finite number of problems. This is fundamentally, totally, and completely wrong. 

Does AI adversely affect jobs in the mortgage world? 

OUR COMPLIANCE SOLUTION 

We suggest:

AI POLICY PROGRAM FOR MORTGAGE BANKING™   

Our AI Policy Program aligns with Freddie Mac's AI governance requirements for its Sellers/Servicers. Responsible AI practices help align AI system design, development, and use with applicable legal and regulatory guidelines.  

Our AI Policy Program consists of the following policies:  

1.    Artificial Intelligence Governance Policy 

2.    Artificial Intelligence Use Policy 

3.    Artificial Intelligence Workplace Policy 

4.    Artificial Intelligence Credit Underwriting Policy 

5.    Artificial Intelligence Do & Do Not Policy 

6.    Artificial Intelligence Ethics Policy 

7.    Artificial Intelligence Vendor Management Policy   

8.    Artificial Intelligence Mortgage Fraud Policy

Contact us for the presentation and pricing! 

RESPONSE 

I do not see AI as gloom-and-doom; however, I do recognize that it poses significant risks of many kinds. Being aware of those risks may enable preparation for remedies and mitigation of certain adverse, consequential outcomes. 

I have stated my point of view in several speaking engagements and numerous articles, some of which are: 

AI Replaced Me 

Will AI Reduce Fair Lending Violations? 

Will AI Replace Me? 

Freddie Mac Deadline: March 3, 2026 – AI Governance Framework 

Shadow AI in Mortgage Banking 

AI Credit Score Underwriting 

Visit our Compliance Topics to find more articles relating to AI. 

I will not spend time here outlining my perspective fully. For those interested, please read my articles. I always encourage questions and comments. You can contact me here. 

I will present your views along with my responses. For editorial reasons, I will set the commenter's statements in bold and follow them with my responses. Also for editorial reasons, I will publish the two main theses of the commenter's opinion, thereby presenting both views side by side. When a technical term requires a brief definition, I will provide one in italics. Let's begin! 

Commenter's View 

This is the fundamental error of AI and job doomers. They look at the economy and see a fixed amount of work to be done, a pie that can only be sliced thinner as machines take bigger bites. They see humans as a competitive resource to be eliminated, competing for a finite amount of work and a finite number of problems to solve. This is fundamentally, totally, and completely wrong.

The pie isn't fixed. It never was. And the reason it isn't fixed is baked into the very nature of technology itself. Technology is nothing but abstraction stacking. And abstraction stacking is infinite. Therefore, the work is infinite. The hammer didn't reduce the amount of work. It moved the work up the stack. And the new work was more complex, more varied, and more interesting than the old work. Complexity breeds more complexity and more variety.

Definition: Abstraction Stacking

Before proceeding, let me provide a cursory definition of abstraction stacking. Abstraction is the process of simplifying complex systems by focusing on essential, high-level features while hiding irrelevant details. It allows you to use or understand something (like a car or a computer program) without needing to know its complex underlying mechanics. You push the gas pedal on your car to go faster without needing to understand fuel injection; or, you send a message by tapping an app icon, oblivious to systemic protocols and server interactions; or, in banking, you see a balance but not the complex network of databases and security systems. 
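For readers with a software background, the idea can be illustrated with a minimal sketch. This is purely illustrative; the class and method names are hypothetical, not drawn from any actual banking system. Each layer exposes a simple interface and hides the mechanics beneath it, just as the customer sees only a balance, not the ledger behind it.

```python
class Ledger:
    """Bottom layer: raw debit/credit entries."""
    def __init__(self):
        self.entries = []  # list of (amount, description) pairs

    def post(self, amount, description):
        self.entries.append((amount, description))


class Account:
    """Middle layer: hides ledger mechanics behind deposit/withdraw."""
    def __init__(self):
        self._ledger = Ledger()

    def deposit(self, amount):
        self._ledger.post(amount, "deposit")

    def withdraw(self, amount):
        self._ledger.post(-amount, "withdrawal")

    def balance(self):
        # Sum every posted entry; the caller never sees the entries.
        return sum(amount for amount, _ in self._ledger.entries)


class BankingApp:
    """Top layer: what the customer sees -- a single number."""
    def __init__(self, account):
        self._account = account

    def get_balance(self):
        return f"Balance: ${self._account.balance():,.2f}"


account = Account()
account.deposit(500)
account.withdraw(125)
app = BankingApp(account)
print(app.get_balance())  # the customer sees only the result
```

Each layer in the sketch is an abstraction stacked on the one below it: the app user never touches the account logic, and the account never touches the raw entries directly.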

My View 

You are offering a thoughtful and somewhat compelling argument, with genuine strengths and some real blind spots worth examining. On balance, I find your view incomplete and a bit overconfident. 

First, I want to agree with your core insight: technology shifts work rather than simply eliminating it. Indeed, this view has strong historical backing. The agricultural revolution did not create permanent mass unemployment; it freed labor for industry. Automation in manufacturing did not end work; it created software engineers, logistics managers, and UX designers. You are right that the doomsayers have fallen into the "lump of labor fallacy," a well-documented error in economic thinking.

Definition: Lump of Labor Fallacy             

The lump of labor fallacy holds that there is a fixed number of jobs in the economy. Economists have debunked it: a larger workforce can increase economic growth. The fallacy is often used to argue against immigration and in favor of policies like early retirement. It assumes there is a fixed amount of work to be done; if that were true, new jobs could never be created, only redistributed. Those who believe the fallacy have often felt threatened by new technology or by new entrants into the labor force. Their fears are rooted in a mistaken zero-sum view of the economy, which holds that when someone gains in a transaction, someone else loses. In reality, labor demand is not fixed. Changes in one industry can be offset, or overshadowed, by growth in another. And as the labor force grows, total employment increases too. 

The abstraction stacking framing is genuinely elegant. Each technological layer tends to create new categories of problems, needs, and, therefore, work. That said, your view is incomplete. Even if the proverbial "pie" ultimately expands, the transition period can cause genuine devastation for real people, and your argument dismisses this much too casually. Handloom weavers displaced by industrialization suffered for decades; the economy's long-run vindication of the optimists did nothing for them. 

I am uncomfortable with the "pie" analogy. A bigger pie doesn't automatically mean broadly shared prosperity. The new work created by AI may concentrate among a smaller, more technically skilled group, leaving many others behind, perhaps not unemployed forever, but underemployed or economically marginalized. 

You seem to be saying that infinite work (to use your term) somehow equals infinite paid work. That view is without merit; the two are plainly not equal. In any event, such a scenario reaches a point of diminishing returns: markets only compensate work when someone can afford to pay for it. 

Finally, invoking abstraction stacking, you claim that the new work was more complex, more varied, and more interesting than the old work, and that complexity breeds more complexity and more variety. That conclusion is unfounded. New complexity creates potential work, not necessarily economic demand for that work. And this transition may be qualitatively different: previous technologies automated physical or narrow cognitive tasks, whereas AI targets general reasoning, the very faculty we humans used to climb the abstraction stack in the first place. That is a legitimate reason for heightened concern, even if outright doom is unwarranted. 

Parts of your argument are a useful corrective to the appeal of naive zero-sum thinking. But you are swapping one error (the fixed pie) for another (frictionless, equitable, automatic expansion). Your position does not withstand scrutiny: yes, the pie will grow, but the transition will be uneven and painful for many. That transition is precisely the "intersection" I have been discussing in my articles, and it will require deliberate policy choices to manage well. Dismissing the concern entirely is as intellectually unavailing as the doom it critiques. 

Commenter's View 

Maybe so, but there is a counterargument. Electricity solved "how do we deliver energy anywhere?" – and created problems of grid design, power generation, appliance manufacturing, electrical safety codes, utility regulation, and an entire consumer electronics industry. 

The Internet solved "how do we connect all human knowledge?" – and created problems of cybersecurity, digital privacy, online commerce, content moderation, network infrastructure, cloud computing, social media dynamics, and an entire digital economy that employs tens of millions. 

Notice the pattern? Each solution didn't just solve a problem. It created an entirely new problem space that was larger, more complex, and more varied than the one it replaced. The stack grows. It never shrinks. 

My View 

I'm glad you brought this up, because I hear this argument all the time. And, by the way, it is a stronger version of your initial argument. But I am going to push back on it. 

Your earlier argument was somewhat idealized; summed up, it says that "abstraction stacking is infinite, therefore work is infinite." I can accept the concept, and this version is even empirically grounded: it points to concrete historical instances in which the new "problem space" was demonstrably larger than the old one. 

However, the argument loses force when you cherry-pick the examples. You selected technologies that massively expanded the problem space. But what about technologies that, shall we say, "compressed" work rather than expanding it? Spreadsheets genuinely eliminated large numbers of bookkeeping jobs without creating an equivalent new layer of complexity that absorbed those same workers. Word processors eliminated typing pools. GPS eliminated many dispatch and navigation jobs. The pattern is not universal, but it is common, and there are plenty of other examples. 

As to your specific selections, electricity and the Internet were platforms and infrastructures that millions of humans then built on top of. The expansion happened because humans could grab the new layer and run with it creatively. 

The unsettling question about AI is whether it's a platform that humans build on or an AI agent that climbs the stack itself. If AI can autonomously identify new problems, design solutions, and implement them, the new problem space still exists, but humans may not be the ones hired to work in it. 

You ended our dialogue by invoking abstraction stacking once more, stating: "The stack grows. It never shrinks." 

I think your conclusion is too complacent. The concern is not that the stack stops growing; nobody seriously believes that. The issue is that the stack keeps growing while humans are no longer the ones doing the climbing. 

That's a different and harder problem than the one your argument supposedly defeats.

_______ 

This article, AI Versus Humans: A Dialogue, published on May 6, 2026, is authored by Jonathan Foxx, PhD, MBA, Chairman & Managing Director of Lenders Compliance Group. Founded in 2006, Lenders Compliance Group is the first and only full-service mortgage risk management firm in the United States specializing exclusively in residential mortgage compliance.