
The Tortoise Strategy: Avoiding the AI Shortcut in Financial Crime Compliance

  • Writer: Elizabeth Travis
  • Dec 5, 2025
  • 6 min read
AI-generated fable remix: the tortoise plays it safe, the robot hare bolts ahead, but who is the ultimate winner?

Generative AI, once a speculative tool, has now embedded itself in the daily fabric of financial services. Banks across Europe and the UK are quietly facing a growing reality: their employees are already using tools like ChatGPT, Bard, Claude, and others to expedite routine tasks, draft correspondence, summarise documents, and even script replies to client queries. Often this use is unauthorised and informal, taking place on personal devices or through browser-based access to open models. A McKinsey survey, for example, found that 13% of employees reported using generative AI for 30% or more of their daily tasks, while C-suite leaders estimated that only 4% did so.


Senior management, though publicly cautious, are privately acknowledging the futility of complete prohibition. Employees, especially younger, tech-native staff, are harnessing LLMs for efficiency, speed, and convenience. In many cases, they are not seeking to breach policies but rather to reduce friction in administrative workflows. However, the implications of this ungoverned usage are profound.


When staff input confidential internal documentation, sensitive case notes, or customer data into public AI tools, the result is not just a technical breach but a regulatory crisis. A GDPR breach is no longer a question of if, but when, and the consequences will be painful, both financially and reputationally.


Most commercial LLMs operate across jurisdictions, and few offer clear, enforceable guarantees on data deletion, storage, or secondary use. Every prompt submitted becomes a risk vector, and few banks have visibility into what their employees are inputting. This creates a 'shadow AI' problem. Like shadow IT before it, this ecosystem of unsanctioned, unmonitored tools poses serious threats to data governance, operational integrity, and legal compliance. Regulators will not accept ignorance as a defence. When the inevitable breach occurs, financial penalties will be matched by reputational damage, especially if customers discover their information was handed to a generative model without consent.


Formal Adoption: From Risk Aversion to Risk Management


To address this looming threat, banks must stop relying on bans and begin designing responsible adoption frameworks. This starts with the formal integration of AI tools into secure internal environments, using enterprise-grade offerings such as Azure OpenAI, Amazon Bedrock, or fine-tuned models deployed on private infrastructure. These platforms can offer role-based access control, encryption, audit logs, and security policies aligned with industry standards.
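

To make this concrete, the sketch below shows one way such an integration might look in practice: every prompt is routed through an enterprise endpoint (an Azure OpenAI deployment is assumed here purely for illustration) and an audit record is written for each call. The deployment name, endpoint variables, and logging destination are hypothetical placeholders, not a prescribed configuration.

# Minimal sketch: calling an enterprise-hosted LLM behind the bank's own controls,
# with a basic audit log entry per request. Endpoint, deployment name, and key
# handling are illustrative assumptions, not a recommended production setup.
import logging
import os

from openai import AzureOpenAI  # official OpenAI SDK with Azure support

audit_log = logging.getLogger("llm_audit")
logging.basicConfig(filename="llm_audit.log", level=logging.INFO)

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # bank-controlled endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # never hard-coded
    api_version="2024-02-01",
)

def draft_summary(user_id: str, prompt: str) -> str:
    """Send a prompt to the internal deployment and record who asked what, and with which model."""
    response = client.chat.completions.create(
        model="gpt-4o-internal",  # hypothetical deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    # Audit trail: store the requester, the model used, and the prompt itself.
    audit_log.info("user=%s model=%s prompt=%r", user_id, "gpt-4o-internal", prompt)
    return text

Because the endpoint and the log sit inside the bank's own estate, access can be role-based, prompts can be reviewed, and usage can be evidenced to a regulator, none of which is possible when staff paste material into a public browser tool.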


But technology alone is not the solution. Success hinges on training, governance, and cultural alignment. Staff must be trained not only in how to use LLMs, but in when and why to use them. The goal is not automation for its own sake but augmentation: empowering employees to accelerate low-risk tasks while maintaining human oversight over decisions involving judgement, ethics, and customer impact.


This requires shifting the perception of AI from a silver bullet to a tool in the broader compliance arsenal. For example, AI can be used to distil transaction monitoring alerts into succinct summaries, saving analysts time and effort. However, it should not be used to dismiss alerts or make decisions on SAR submissions without human validation. Generative models are not yet advanced enough to grasp context, interpret intent, or handle nuance in financial crime detection. Only experienced professionals can do that.


The concept of 'human-in-the-loop' must become standard practice, with LLMs producing draft outputs, and humans evaluating them against regulatory expectations and ethical standards. This ensures accountability, avoids automation bias, and preserves institutional knowledge.
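

As a hedged illustration of that pattern, the sketch below wires the alert-summary example from the earlier paragraph into a simple review gate, building on the draft_summary function from the previous sketch: the model only ever produces a draft, and nothing proceeds towards a SAR decision until a named analyst has reviewed and approved it. The data structures and function names are assumptions for illustration, not a reference implementation.

# Minimal human-in-the-loop sketch: the LLM drafts, a human decides.
# All names (AlertSummary, draft_alert_summary, etc.) are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlertSummary:
    alert_id: str
    draft_text: str                    # produced by the model
    reviewed_by: Optional[str] = None  # analyst who signed off
    approved: bool = False

def draft_alert_summary(alert_id: str, alert_details: str) -> AlertSummary:
    """Ask the model for a succinct draft summary of a transaction monitoring alert."""
    prompt = f"Summarise the following transaction monitoring alert for an analyst:\n{alert_details}"
    draft = draft_summary(user_id="tm-summariser", prompt=prompt)  # reuses the earlier sketch
    return AlertSummary(alert_id=alert_id, draft_text=draft)

def record_analyst_review(summary: AlertSummary, analyst: str, approved: bool) -> AlertSummary:
    """Record the human decision; nothing downstream runs without it."""
    summary.reviewed_by = analyst
    summary.approved = approved
    return summary

def submit_sar_recommendation(summary: AlertSummary) -> None:
    # Hard gate: the system refuses to act on an unreviewed, machine-only output.
    if not summary.approved or summary.reviewed_by is None:
        raise PermissionError("SAR recommendations require explicit analyst approval.")
    print(f"Alert {summary.alert_id}: forwarding analyst-approved recommendation.")

The design choice matters more than the code: the approval step is enforced in the workflow itself, so accountability sits with a person, automation bias is interrupted, and every output carries a named reviewer for audit purposes.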


The Governance Gap & Ethical Drift


There is a growing credibility problem within financial institutions. On one side, banks issue strict directives banning ChatGPT and similar tools for fear of compliance risks. On the other, they trumpet their embrace of artificial intelligence, digital transformation, and operational innovation in investor briefings. This mixed messaging breeds confusion and undermines internal compliance efforts.


Worse still, many banks are rushing headlong into AI adoption without first addressing foundational questions of governance and ethics. How will decisions made by AI be explained to customers and regulators? Who owns the outcome when a model makes an error? What audit trails exist to support retrospective reviews? These questions are rarely answered with rigour in the current gold rush towards AI implementation.


The Financial Conduct Authority (FCA) and the Bank of England have already raised red flags on AI explainability and accountability. In their 2022 joint Discussion Paper on Artificial Intelligence and Machine Learning (DP5/22), both regulators emphasised that firms must understand model behaviour, maintain oversight, and ensure AI outputs are subject to appropriate scrutiny. Ethical lapses, particularly those affecting vulnerable customers, are likely to attract enhanced regulatory attention in the near future.


Banks must therefore move beyond pilot enthusiasm and adopt a slower, more deliberate pace. Internal ethics committees, AI risk assessments, fairness evaluations, and scenario-based stress testing should be integrated into deployment plans. The race to adopt AI must be tempered by the need to adopt it well.


Expertise Erosion & the AI Shortcut


One of the most underappreciated consequences of AI adoption is the potential erosion of domain expertise. In traditional compliance and financial crime functions, staff develop their skills through direct engagement with messy, complex, and often repetitive tasks. Drafting Suspicious Activity Reports (SARs), analysing sanctions hits, and reviewing transaction monitoring alerts are not glamorous jobs, but they are foundational.


These activities teach professionals how to recognise patterns, build intuition, and exercise critical judgement. They are where future subject matter experts (SMEs) are forged. If these tasks are increasingly offloaded to machines, where will the next generation of compliance leaders come from?


There is a real danger that future analysts will become passive consumers of AI outputs rather than active interpreters of financial behaviour. If staff no longer need to write SAR narratives, how will they develop the linguistic precision and legal reasoning that these reports demand? Or will we simply rely on AI to write these reports on our behalf? If models dismiss transaction alerts automatically, how will new analysts learn the subtleties of transaction structuring or obfuscation techniques?


The answer lies in preserving human exposure to core analytical tasks, even as AI is introduced. AI should assist, not replace. It can offer draft content, suggest common typologies, or identify anomalies, but final decision-making must remain human-led, especially in regulated areas.


Critical Thinking & Curriculum Reform


This discussion inevitably leads to a broader question: are we educating future professionals in the right way? For decades, universities have claimed to teach critical thinking, yet graduates are increasingly entering the workforce with limited ability to challenge assumptions, evaluate arguments, or interpret ambiguity. In the age of AI, these skills are more important than ever.


Rather than starting with higher education, perhaps reform should begin earlier. Secondary school curricula must prioritise logic, reasoning, and debate. Students must learn how to question what they read, identify bias, and think independently. These skills will not only prepare them for AI collaboration but will help them guard against manipulation, disinformation, and overreliance on automated outputs.


This is not merely a pedagogical issue; it is a strategic imperative. As generative AI becomes ubiquitous, the most valuable employees will not be those who can write faster but those who can think deeper. The future of compliance, risk, and oversight will belong to those who can challenge AI conclusions, contextualise machine-generated insights, and bring human judgement to machine efficiency.


Strategic Patience in the AI Era


The financial services sector is entering a period of profound transformation. AI will reshape workflows, redefine roles, and reconfigure risk. But transformation without deliberation is dangerous. The sector must resist the urge to move fast and break things. Instead, it should move carefully and build things: systems of oversight, cultures of responsibility, and pipelines of expertise.

Banks that win in the AI era will not be those that adopt first, but those that adopt best. The future demands strategic patience, ethical resolve, and a renewed commitment to critical thinking from the boardroom to the back office, and from the classroom to the compliance desk.


Conclusion: Guardianship Over Speed


As banks navigate the evolving landscape of artificial intelligence, they stand at a crossroads between passive adoption and principled integration. The unauthorised use of generative AI by staff is not simply a policy failure; it is a signal that employees are ready to embrace new tools, even if their institutions are not yet prepared. Rather than reactively banning use or blindly accelerating adoption, banks must take a more mature stance: one of guardianship.


This means managing AI with the same rigour applied to other critical systems: with clear boundaries, robust controls, and a culture of accountability. It also means reinvesting in people, not just platforms. AI can enhance decision-making, but it cannot replace the reasoning, judgement, and ethical discernment that lie at the heart of financial crime compliance. The human mind must remain central to interpreting risk, contextualising anomalies, and safeguarding customers.


Ultimately, the responsible future of AI in financial services lies not in speed, but in stewardship. Those who take the time to train their people, harden their policies, and strengthen their ethical compass will not only mitigate risk, they will build resilient, adaptive institutions fit for the complex challenges of tomorrow. The fable of the Tortoise and the Hare reminds us that rushing ahead with overconfidence often leads to costly missteps. In the age of AI, it is the careful, methodical, and thoughtful approach that will win the race.

 
