Finance Leaders Trust AI—But Fear It Too, Billtrust Study Finds

Artificial intelligence may be rushing into the finance back office at full speed, but trust isn’t exactly riding shotgun. A new Billtrust study surveying 500 finance leaders and C-suite executives paints a picture of a sector eager to automate—but increasingly nervous about what AI might enable in the wrong hands.

If fintech has taught us anything, it’s that innovation rarely waits for comfort. And according to Billtrust, the discomfort is escalating.

AI Moves In—Fraud Follows Closely Behind

Billtrust’s report, “Trust in AI: What Finance Leaders Need to Embrace Artificial Intelligence,” highlights the duality of 2025’s finance landscape: companies want AI, but they’re bracing for its darker side.

A resounding 82% of finance leaders worry about AI misuse, especially in fraud and financial crime. That fear isn’t theoretical. Fraudsters have upgraded from clumsy scams to AI-generated phishing, deepfake video calls, voice-cloned CFOs, and fake invoices so polished they could pass a brand audit.

Nearly half (45%) of surveyed leaders have encountered AI-generated phishing emails. Another 29% say they’ve witnessed AI-powered voice cloning used to impersonate someone they know. In other words: deepfakes aren’t just a viral curiosity—they’re a line item in enterprise risk management.

Ahsan Shah, SVP of AI & Analytics at Billtrust, puts it bluntly: “Trust in AI hinges on transparency, human oversight, and ethical constraints. These aren’t optional features; they are foundational requirements.”

It’s a sentiment that increasingly mirrors the broader enterprise tech ecosystem. As companies shift mission-critical workflows into AI-supported systems, the conversation has moved past “Will AI help us?” to “How do we keep AI from becoming a liability?”

Confidence vs. Reality: A Concerning Gap

Finance teams like to think they’re fraud-proof—or at least fraud-resistant. Seventy-six percent of respondents believe they’d spot a fraudulent invoice before paying it.

Yet many also admit they flag six or more suspicious invoices every month. Optimism is admirable, but the numbers hint at a cognitive dissonance that should give controllers and CFOs pause.

More troubling: 27% of organizations either don't track suspicious activity at all or have no idea how much of it they encounter. In an era of AI-accelerated threats, flying blind is no longer a quirk—it's a liability.

Broader industry research shows a similar pattern. As AI speeds up financial operations, the same tools accelerate criminal workflows. Fraudsters now generate convincing documentation in seconds, run automated social engineering campaigns, and simulate trusted identities with uncanny realism. The arms race is officially underway.

AI Adoption Is Not Slowing Down

Despite the risks, companies aren’t abandoning AI; they’re doubling down—with conditions. A full 83% plan to implement AI-enabled solutions in the next two years, aligning with a global trend: automation is moving deeper into AR, AP, reconciliation, risk analytics, and compliance workflows.

This blend of enthusiasm and caution reflects the current market mood. Fintech vendors are under pressure to provide “explainable AI,” guardrails, and auditability—not just automation speed. The winners in the next wave of B2B finance software likely won’t be those with the flashiest models, but those with the safest ones.

The New Standard: Responsible AI

Billtrust’s report maps out a framework for what responsible AI should look like in finance. It’s a blend of policy, architecture, and culture—more governance than gimmick:

  • Human-in-the-loop oversight: AI can flag anomalies and predict risk, but humans still make the final call.
  • Transparency and explainability: Black-box AI is becoming a deal-breaker for compliance-heavy industries.
  • Continuous governance: AI performance must be reviewed as regularly as financial controls.
  • Secure, ethical deployment: Don’t just run AI—run it according to your values (and regulatory obligations).
  • Future-ready digital infrastructure: No amount of oversight helps if your tech stack is stuck in 2012.
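To make the human-in-the-loop principle concrete, the routing logic might look something like the sketch below. This is an illustrative assumption, not Billtrust's implementation: the `Invoice` fields, the `risk_score` (assumed to come from an upstream anomaly-detection model), and the review threshold are all hypothetical.

```python
from dataclasses import dataclass

# Illustrative cutoff; a real deployment would tune this against audit data.
REVIEW_THRESHOLD = 0.7

@dataclass
class Invoice:
    invoice_id: str
    amount: float
    risk_score: float  # assumed output of an upstream anomaly-detection model

def route_invoice(invoice: Invoice, auto_approved: list, review_queue: list) -> None:
    """AI flags the risk, but a human reviewer makes the final payment call."""
    if invoice.risk_score >= REVIEW_THRESHOLD:
        # High-risk invoices are never auto-paid; they wait for human review.
        review_queue.append(invoice)
    else:
        auto_approved.append(invoice)

auto_approved, review_queue = [], []
for inv in [Invoice("INV-1001", 1200.0, 0.12),
            Invoice("INV-1002", 98000.0, 0.91)]:
    route_invoice(inv, auto_approved, review_queue)
```

The point of the pattern is that the model only sorts invoices into queues; authority to pay stays with a person, which is what distinguishes oversight from rubber-stamping.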

Billtrust says its roadmap mirrors this philosophy: AI should augment—not replace—human insight. Shah notes, “Finance teams need systems that scale without sacrificing visibility or control.”

It’s a pointed message at a time when some vendors still pitch “full automation” as the dream scenario. In practice, finance professionals want automation that speeds up the work without erasing accountability.

Context: A Market in Flux

The tension around AI trust isn’t unique to Billtrust’s customers. Across fintech and enterprise SaaS, CFOs are demanding explainable algorithms, vendors are scrambling to re-architect legacy systems, and regulators are inches away from rolling out stricter AI guidelines for audit-sensitive industries.

Competitors in the AR/AP space—including HighRadius, BlackLine, and Quadient—are all betting big on AI-augmented finance. But an increasing share of their R&D is going toward risk mitigation, not just efficiency gains.

If the 2010s were about digitization and the early 2020s about automation, 2025 is shaping up to be the year enterprises demand trustworthy automation—a distinction that may define the next decade of B2B fintech innovation.

The Bottom Line

Billtrust’s research confirms what many in finance have been whispering: AI is no longer just a productivity tool—it’s a risk surface. The future of AR workflows will hinge not on AI adoption alone, but on guardrails that let innovation scale without leaving the back door open.

As finance leaders brace for increasingly sophisticated AI-driven threats, vendors that can deliver clarity, control, and explainability may become the new power players in enterprise fintech.
