From theory to reality: what AI in FCC should be
AI is already being weaponised by criminals at scale. Yet many financial crime compliance (FCC) functions remain stuck in pilots, governance debates, and policy frameworks. In AI We Trust sets out our point of view on how AI in FCC should be designed, governed, and assured, so that it works in the real world, not just on slides.
Why now?
Banks have strategies that look modern, but day-to-day workflows stay manual. Oversight is often symbolic (“human in the loop” without genuine understanding). Meanwhile, adversaries are already scaling AI. The question isn’t whether we could use AI; it’s how we should: credibly, safely, and with accountability.
“Trust isn’t a phase gate. It’s a design choice.”
The uncomfortable truths
- Criminals are ahead. Adversaries adopt AI faster than compliance functions.
- Trust is the barrier. Without explainability and meaningful oversight, systems stay in pilots.
- Human effort is being wasted on tasks AI does better. “Human in the loop” often means validation without understanding.
- Transformation is stuck in theory. Strategies sound progressive, operations remain manual.
- There will always be a gap between policy intent and practical application; the work is in bridging it.
The questions we’re putting on the table
- What should AI look like when embedded responsibly in financial crime controls?
- How do we bridge the gap between policy and practice without stalling adoption?
- Where should human judgment sit in the loop—and how do we make oversight strategic, not symbolic?
- How do we make AI credible to boards, auditors, and regulators—not just innovative?
What AI in FCC should be
Our point of view, distilled into principles for real-world deployment:
- Human-first, machine-augmented. Elevate judgment; don’t automate it away.
- Traceable and explainable by design. Decisions, overrides, and escalations must be auditable (see the sketch after this list).
- Fair and accountable. Continuously assess bias and unintended harms.
- Continuously governed. Assurance and controls that evolve with threats and regulation.
- Whole-system oriented. Validate data, models, and human workflows—not just the algorithm.
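To make “traceable and explainable by design” concrete, here is a minimal sketch of the kind of decision record the principle implies: an append-only trail capturing the model version, its score and drivers, and any human override. Everything in it (field names, the JSON-lines store) is illustrative and assumed, not a description of any specific firm’s tooling.

```python
# Illustrative sketch only: a minimal, append-only record of an
# AI-assisted FCC decision. All names and fields are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class DecisionRecord:
    alert_id: str            # the alert or case being decided
    model_id: str            # which model produced the score
    model_version: str       # exact version, so the decision is reproducible
    score: float             # the model's output
    top_factors: list[str]   # human-readable drivers behind the score
    recommendation: str      # e.g. "escalate" or "close"
    reviewer: str            # the human accountable for the outcome
    override: bool           # did the reviewer depart from the model?
    override_reason: str     # free text, mandatory when override is True
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one decision as a JSON line; in production this would be an immutable store."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")
```

The point is less the schema than the discipline: every score, recommendation, and human override lands in an auditable trail by default, which is what makes oversight reviewable rather than symbolic.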
“If it can’t be explained or overseen, it shouldn’t be in production.”
Our call to action
The gap between policy and practice will not close by itself. FCC leaders must:
- Stop designing for control. Start designing for credibility.
- Elevate people, not just automate process.
- Get AI out of the lab and into production.
- Lead with purpose. Govern with intent.
What this means in practice
In AI We Trust sets out our point of view on what AI in FCC should be: human-first, explainable, resilient, and credibly overseen. But we also know that firms need more than ideas; they need assurance that stands up in practice.
That’s why Plenitude has developed AI Services for financial crime and fraud prevention, supported by world-leading expertise in data science and AI assurance.
A key component of our approach is the assessment and development of governance, oversight, and control frameworks that enable firms to identify, assess, monitor, and manage risks effectively.
Our services can be delivered pre- or post-deployment and provide assurance that AI outputs and performance:
- Operate as intended,
- Uphold values and ethical principles, and
- Meet regulatory requirements for safety, transparency, fairness, accountability, and contestability.