By Martina Rejsjo
AI is reshaping the financial industry, and compliance and trade surveillance are no exception.
As markets grow more complex and regulatory expectations become more stringent, participants need sophisticated technology to detect, prevent and respond to financial misconduct. Traditional rule-based systems can be powerful on their own, yet firms must embrace new capabilities as they work to keep pace with the scale and complexity of modern markets. AI, with its ability to analyze vast datasets, rapidly surface insights and continuously improve through machine learning, is one of the most promising examples.
But for all its potential, AI comes with its own set of challenges. How can market participants ensure that AI-driven decisions are transparent, repeatable and defensible to regulators? Conversely, how can regulators ensure firms are not blindly accepting AI outputs, but actively engaging with and understanding them? If surveillance practitioners cannot fully grasp how AI arrives at its conclusions, they may struggle to trust it or use it effectively.
To keep their compliance procedures out of a black box, firms need explainable AI for trade surveillance.
The Benefits of Explainability
Financial regulators worldwide tend to avoid strong positions on specific technologies like AI. In most cases, it’s not a question of what technology is used, but how it is applied.
In 2021, then-SEC Chair Gary Gensler stated: “To be clear, I think that the SEC should be technology neutral. But one thing we’re not neutral on is investor protection.” The remarks focused on crypto, but the principle is universal.
Similarly, in a 2024 discussion paper from the UK Financial Conduct Authority (FCA), the regulator stated: “We are technology-agnostic, principles-based and outcomes-focused. Our regulation needs to adapt to the speed, scale and complexity of technological development.”
The implications for trade surveillance are clear. Regulators will not balk at the mere use of AI, but if they detect systematic issues that could undermine accuracy, consistency or fairness, they will act accordingly. As with many past technological innovations, eliminating the black box is likely to be a major focus of future discussions around AI regulation. For this reason, compliance teams must be able to explain how AI-driven alerts are generated and why certain trades are flagged as suspicious. Every outcome must rest on clear, auditable evidence, not on assumptions or opaque AI-generated assessments.
Without AI explainability, firms face significant risks:
- Regulatory Scrutiny: If an AI system flags a transaction but cannot justify why, regulators may question the integrity of a firm’s surveillance program, leading to inquiries or penalties.
- Operational Inefficiencies: If compliance analysts don’t trust AI outputs, they may feel compelled to manually revalidate every alert, negating the efficiency benefits AI is supposed to provide.
- Legal and Reputational Risk: Misclassifications — false positives or overlooked misconduct — can have serious legal consequences, especially if firms cannot demonstrate due diligence in their surveillance practices.
- Client and Market Trust Issues: Firms that rely on AI-driven compliance solutions must also be able to explain decisions to clients, internal stakeholders and external auditors to maintain credibility.
In short, AI must do more than generate insights—it must generate trust and defensibility.
Generic AI vs. Deterministic AI
Many AI-driven compliance tools today rely on probabilistic models, such as deep learning and large language models (LLMs). While these models are powerful for recognizing patterns and making predictions, they often struggle with explainability. Their results are based on statistical probabilities rather than explicit, repeatable logic—a fundamental issue in regulatory compliance. A model may highlight a transaction as suspicious, but if the decision is based on an opaque mathematical correlation rather than a clear rule set, it raises concerns about fairness and reliability.
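To see why that is hard to defend, consider a minimal, purely hypothetical sketch of what a probabilistic alert often looks like from the analyst’s side. The class, trade ID and score below are illustrative assumptions, not any vendor’s actual output:

```python
# Hypothetical probabilistic alert, as an analyst receives it. The model,
# trade ID and score are illustrative assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class ProbabilisticAlert:
    trade_id: str
    suspicion_score: float  # e.g. the sigmoid output of a deep network

# The analyst gets a number, not a reason. There is no rule to cite and
# no evidence trail to show a regulator, and retraining the model can
# silently change this score for the identical trade.
alert = ProbabilisticAlert(trade_id="T-10042", suspicion_score=0.87)
print(f"Trade {alert.trade_id}: suspicion score {alert.suspicion_score:.2f}")
```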
By contrast, deterministic AI models are designed to provide clear, structured and repeatable outputs. Instead of generating predictions based on past trends, deterministic AI retrieves specific, verifiable data points and presents results in a format that compliance teams can audit. This approach ensures compliance professionals can trace every decision back to its underlying data and reproduce outputs to demonstrate their reliability.
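By way of contrast, here is a minimal sketch of a deterministic check. The rule name, fields and threshold are hypothetical assumptions for illustration, but the shape is the point: explicit logic, and an alert that carries the exact data points behind the decision.

```python
# Hypothetical deterministic wash-trade check: the rule name, fields and
# threshold are illustrative assumptions, not an actual rule library.
from dataclasses import dataclass

@dataclass
class Trade:
    trade_id: str
    account: str
    counterparty_account: str
    symbol: str
    buy_price: float
    sell_price: float

@dataclass
class DeterministicAlert:
    trade_id: str
    rule: str
    evidence: dict  # the exact data points behind the decision

def check_wash_trade(trade: Trade) -> DeterministicAlert | None:
    """Flag trades where an account effectively trades with itself at
    (near-)identical prices. Explicit logic: auditable and repeatable."""
    same_owner = trade.account == trade.counterparty_account
    price_gap = abs(trade.buy_price - trade.sell_price)
    if same_owner and price_gap < 0.01:
        return DeterministicAlert(
            trade_id=trade.trade_id,
            rule="WASH_TRADE_SAME_ACCOUNT_v1",
            evidence={
                "account": trade.account,
                "counterparty_account": trade.counterparty_account,
                "buy_price": trade.buy_price,
                "sell_price": trade.sell_price,
                "price_gap": price_gap,
            },
        )
    return None  # no alert is also a traceable, reproducible outcome
```

Because every branch is explicit, an analyst can cite the rule version, show the evidence and replay the same trade to reproduce the identical alert for an auditor.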
Ultimately, AI should elevate trade surveillance without compromising regulatory defensibility. Combining a deterministic approach with natural-language interfaces can help compliance teams interact with data intuitively while maintaining structured, auditable outputs. Additional success factors include robust security and data governance controls and seamless integration with existing compliance workflows.
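One way that combination might work, sketched under assumptions (the intent table, query object and log format below are hypothetical, not a description of any specific product), is to let natural language select among predefined deterministic query templates, with both the question and the resolved parameters written to an audit log:

```python
# Hypothetical natural-language layer over deterministic queries. The
# intent table, query object and audit log are illustrative assumptions.
import datetime
import json

QUERY_TEMPLATES = {
    # Each intent maps to a fixed, reviewable query definition, so the
    # free-text question never changes what the system actually runs.
    "wash trades": {"rule": "WASH_TRADE_SAME_ACCOUNT_v1"},
    "spoofing": {"rule": "SPOOFING_LAYERED_ORDERS_v2"},
}

def answer(question: str, symbol: str) -> dict:
    """Resolve a free-text question to a structured, auditable query."""
    intent = next((k for k in QUERY_TEMPLATES if k in question.lower()), None)
    if intent is None:
        raise ValueError("No deterministic template matches this question")
    query = {**QUERY_TEMPLATES[intent], "symbol": symbol}
    audit_entry = {
        "asked": question,
        "resolved_query": query,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    print(json.dumps(audit_entry))  # stand-in for a real audit log sink
    return query

# Example: the analyst's phrasing varies; the executed query does not.
answer("Show me wash trades in this name", symbol="XYZ")
```

The design choice is that the free-text layer only ever chooses and parameterizes queries that were defined and reviewed in advance, so the interface can be conversational while the surveillance logic itself stays deterministic.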
The Future of AI: Efficient, Accurate, Transparent
At Eventus, the need for explainability and regulatory readiness has defined our AI roadmap. Our forthcoming Frank AI solution will enable natural language queries into Validus data, arming compliance teams with intuitive insights that never compromise on transparency, repeatability or security.
At this point, AI’s potential to drive efficiency gains is well understood. Future conversations will focus not just on achieving efficiency, but on ensuring and demonstrating accuracy and integrity to key stakeholders. We firmly believe the key to explainable, AI-powered trade surveillance lies in deterministic AI, and we look forward to diving deeper into its benefits in the months to come.
AI is poised to transform trade surveillance. Are you ready to explain what that means for your firm?