The Lending Brief - November 18

Welcome to The Lending Brief.

Happy International Fraud Awareness Week!

This year’s International Fraud Awareness Week (Nov 16–22) arrives with both alarm bells and a blueprint for action. The numbers paint a clear picture:

  • 77% of anti-fraud professionals report sharp increases in deepfake-driven fraud.

  • 83% expect AI-enabled fraud to accelerate - but fewer than 1 in 10 feel prepared.

  • And Deloitte now estimates AI-fueled fraud losses could reach $40B by 2027.

Across the industry, one theme keeps surfacing: The real readiness gap isn’t AI tools. It’s the quality of data entering the system at intake.

The Fraud You Can’t See Is the Fraud You Can’t Stop

🔍 What’s going on:
Wells Fargo’s head of cyber human defense summarized the shift with one line:
“95% of successful breaches involve a human element - not a system flaw.”

Fraudsters have adopted the same tactics growth marketers use: segmenting, personalizing, and contacting borrowers in the exact channels they trust. Add agentic AI, and the playbook evolves:

  • Deepfake audio that sounds like an executive

  • Fabricated business histories that look legitimate

  • Perfectly formatted tax returns and bank statements

  • Synthetic businesses with websites, EINs, and social profiles

None of these attack the technology. They attack the first moment of trust: intake.

💡 Why it Matters

JPMorgan’s “Scam Interruption” team works because signals are validated before funds move. AI amplifies whatever foundation exists. With validated data, it boosts capacity. With unvalidated data, it accelerates fraud at machine speed.

⚡ Action steps

Pull one recent loan file and trace the verification timeline:

  • Identify any check that happened after the file entered your LOS

  • Ask: "If we processed 3x this volume, would we catch the same red flags?"

  • Move one high-impact verification upstream this quarter (voice verification protocols, serial number cross-checks, document authenticity, behavioral anomaly detection)

➡️ Want to go deeper?

Banking Transformed features TransUnion’s fraud strategy leads breaking down 37% growth in synthetic IDs, 30% bot-driven applications, and why fraud is now “an intake problem, not a detection problem.”

You Don't Need 270 AI Models.
You Need One Clean Intake Layer.

🔍 What’s going on:
At Bank of America's Investor Day, executives described what "AI maturity" actually looks like: 270+ AI/ML models across operations, fraud loss rates cut in half, and 20% productivity gains that freed engineers for innovation.

Meanwhile, Citi's CTO delivered a simple rule: "Everything we do with AI needs to lead to more revenue or less expense."

But here's what matters more than the scale: Both emphasized that governance and data quality infrastructure came before AI deployment, not after.

The sequence they followed: governance → data quality → AI → scale. Not: AI → scale → "we'll fix data later."

Citi's CTO captured the urgency: "The worst thing to do is inaction. There is a huge opportunity cost if you are not part of this journey."

💡 Why it Matters
Most community institutions don’t have an AI problem. They have an intake problem that looks like an AI problem.

If documentation enters the LOS unvalidated, AI agents will simply:

  • Route bad data faster

  • Approve risky applications more confidently

  • Create downstream rework

  • Increase compliance exposure

  • Amplify human workload

You don’t need 270 models. You need one intake layer that’s clean, consistent, and trustworthy.

⚡ Action Step:
Choose one loan product and:

  1. Trace the first step where documentation enters your system

  2. Identify verification steps that occur downstream

  3. Select one verification step to move upstream to intake

  4. Automate validation at that step using available tools (document verification APIs, KYC/KYB services, consortium data)

This single adjustment can eliminate 30–50% of downstream rework while strengthening your fraud posture.
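The four steps above amount to running a gate of verification checks before a file ever reaches the LOS. A minimal sketch of that gate is below; the check names and predicates are hypothetical placeholders where a real deployment would call external document-verification or KYC/KYB services.

```python
from dataclasses import dataclass, field

@dataclass
class IntakeResult:
    passed: bool
    flags: list = field(default_factory=list)

def validate_at_intake(application: dict, checks: list) -> IntakeResult:
    """Run every verification check at intake, before the file enters the LOS.
    Each check is a (name, predicate) pair; predicates here are hypothetical
    stand-ins for calls to document-verification or KYC/KYB APIs."""
    flags = [name for name, predicate in checks if not predicate(application)]
    return IntakeResult(passed=not flags, flags=flags)

# Hypothetical checks -- real deployments would call external services here.
checks = [
    ("identity_verified", lambda app: app.get("identity_verified", False)),
    ("docs_authenticated", lambda app: app.get("docs_authenticated", False)),
    ("ein_on_record", lambda app: bool(app.get("ein"))),
]

result = validate_at_intake(
    {"identity_verified": True, "docs_authenticated": False, "ein": "12-3456789"},
    checks,
)
print(result.passed, result.flags)  # the unauthenticated documents get flagged
```

The point of the structure is the sequence: a file that fails any check is flagged before downstream underwriting ever sees it, which is exactly the "move verification upstream" adjustment described above.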

Cyber Is Now an "Everybody Problem" - And That's Your Competitive Advantage

🔍 What’s going on:
Wells Fargo's head of cyber human defense described the shift in one line: "In the '90s, cyber was a government problem. In the 2000s, it was an IT problem. In the 2010s, it was a business problem. Now, it's an everybody problem."

That's not hyperbole. It's operational reality.

At Wells, they now run cyber war games - immersive simulations where executives make real-time decisions under pressure during fraud scenarios.

Why? Because fraud defense now belongs to:

  • The SBA specialist who reviews a borrower's website

  • The branch employee who accepts a wire transfer request

  • The underwriter who validates employment history

Every person who touches an application is now part of your fraud posture.

💡 Why it Matters

Wells Fargo runs cyber war games because fraud now moves across every channel and every role. That reality applies to community institutions too.

But smaller financial institutions face an added challenge: small teams carry outsized responsibility - without the infrastructure large banks rely on.

They know their communities, but they don’t have the time or tools to manually validate every document, identity, or data point with the level of rigor modern fraud requires. This creates a new truth: Relationship intelligence is powerful - but only when it rests on validated data.

⚡ Action Step:
Give your team this scenario:

"A long-time borrower submits an equipment financing request. Documents look good. But one detail - a manufacturer's model number, an invoice date, a bank account routing number - doesn't match public records."

Ask four questions:

  1. At what point in our workflow would we catch this inconsistency?

  2. Do we have tools to verify this at intake, or only during underwriting?

  3. If volume increased 3x tomorrow, would we still catch this?

  4. What's ONE change we could make in Q1 to validate this earlier?

This exercise creates clarity without blame - and often reveals that your biggest fraud prevention gap isn't technology or training. It's workflow sequence.
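One detail in the scenario above, the bank routing number, can in fact be validated mechanically at intake. ABA routing numbers carry a built-in checksum: the nine digits, weighted 3-7-1 repeating, must sum to a multiple of 10. A short sketch:

```python
def valid_aba_routing(number: str) -> bool:
    """ABA routing-number checksum: the digit sum, weighted (3, 7, 1)
    repeating, must be divisible by 10. Catches most single-digit typos."""
    if len(number) != 9 or not number.isdigit():
        return False
    weights = (3, 7, 1, 3, 7, 1, 3, 7, 1)
    return sum(int(d) * w for d, w in zip(number, weights)) % 10 == 0

print(valid_aba_routing("021000021"))  # True: a published routing number
print(valid_aba_routing("021000022"))  # False: one digit off fails the checksum
```

A checksum alone doesn't prove the account belongs to the borrower, but it is exactly the kind of cheap, automatic intake check that turns "would we catch this?" from a workflow question into a non-event.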

🦊 ZorroFi insight

Most institutions don’t have an AI problem. They have an intake validation problem.

When identity verification, document authenticity, and eligibility checks happen at intake, AI becomes a capacity multiplier that strengthens what community institutions do best: relationships, judgment, and local trust.

When they happen later, AI accelerates risk.

The institutions that will win the next five years are those who modernize intake first - then deploy AI to orchestrate efficiency without losing the human element.

That’s the future of trusted, relationship-centered lending.

💡 Want to Dive Deeper?

Demo: I work with CDFIs, community banks, and credit unions on modernizing lending. Want to learn more?

Subscribe: Get the full breakdown every Thursday in my LinkedIn newsletter - Lending Insights.

🙌 Help Us Grow

Know someone in lending who’d benefit from these insights? Forward this email - and hit reply if there’s a topic you’d like us to explore next.

📅 Next newsletter drops Tuesday, 11/25.

Warmly,

Sandra Wasicek
Founder & CEO ZorroFi