
Robin Newnham, Head of Policy Analysis & Guidance, Alliance for Financial Inclusion
The idea of our reflections beginning to act of their own accord is an old anxiety in folklore and literature. In Jorge Luis Borges’ writing, mirrors are not passive surfaces but rather habitats for a separate species, mimicking humans so precisely that we mistake imitation for reflection. Generative AI has given this story a new form. We have grown used to large language models (LLMs) reflecting human words, imagination, and aspiration. But with the turn to agentic AI, the reflection is becoming operational: no longer merely showing our image, but carrying our desires, instructions, and patterns into the world, where they can produce consequences.
AI is already extending financial inclusion, from alternative credit scoring for microentrepreneurs to bespoke advice for smallholder farmers. At the same time, AI adoption in the financial sector amplifies several risk channels: faster decision-making compresses the time institutions and authorities have to respond to market activity; reliance on a small number of third-party model providers creates concentration risk; and outcomes such as loan rejections become harder to explain and may embed gender or geographic bias.
For many financial regulators, the response has been not to draft new laws and regulations, but instead to clarify how existing powers, governance expectations and conduct obligations apply to AI. Bank Negara Malaysia’s recent discussion paper proposes responsible AI principles in the form of complementary guidance, rather than a wholesale rewrite of financial laws.
The advent of agentic AI does not alter this logic but does dial up the urgency. In the first phase of generative AI, risks were predominantly related to the quality of LLMs’ outputs: hallucinations, bias, and misrepresentation. Traditional model risk management (MRM) frameworks go some way to addressing these risks. Agentic AI goes beyond the straightforward input/output relationship by introducing autonomous multi-step reasoning, tool use, and recursive decision loops. In other words, the reflection is beginning to act. Agents will use payment rails and APIs under delegated authorities to execute transactions, a shift that a recent IMF paper characterizes as moving from “click to pay” to “decide to pay”.
The agentic turn is unlocking new financial inclusion use cases. In Latin America, Nubank is using AI agents to move beyond simple customer-service chatbots toward more complex workflows, including debt renegotiation, card logistics and fraud prevention. In Africa and South Asia, Pula’s data-driven agricultural insurance model uses satellite and index-based methods to support parametric coverage for smallholder farmers, illustrating how automation can reduce the cost of serving markets that traditional insurance often struggles to reach. But without effective governance, agentic systems can heighten both systemic and consumer risks: consider, for example, an AI agent that proactively offers predatory short-term loans to customers who have overspent.
The supervisory toolkit combines ex-ante measures (controls imposed to reduce the likelihood of negative outcomes) with ex-post remedies (to deal with harms after they have occurred). The field of prudential regulation, particularly since the global financial crisis, has strengthened ex-ante controls such as capital and liquidity requirements. In contrast, market conduct regimes have historically relied more heavily on ex-post remedies such as mechanisms for addressing consumer complaints and avenues for recourse. The integration of AI, and particularly agentic AI, into finance is likely to blunt the effectiveness of ex-post remedies: harms can occur faster, scale more widely, and leave accountability harder to trace. As set out in a recent Application Paper of the International Association of Insurance Supervisors, and work on AI explainability by the Bank for International Settlements Financial Stability Institute, the direction of policy is toward lifecycle-oriented AI supervision, where regulators oversee the design and training stage of AI systems before they are deployed, rather than waiting for evidence of consumer harms to materialize after the fact.
What could a minimum viable ex-ante approach to supervising AI’s use in finance look like in practice? Jurisdictional peer learning will be key to capturing and iterating effective approaches, but important elements already in evidence include:
- Adapted MRM frameworks, including independent validation, explainability testing, drift monitoring and post-deployment review, something 11 of 14 surveyed Malaysian banks already practice (BNM, 2025).
- Third-party risk controls, especially for dependency on a small number of model, cloud and infrastructure providers, as highlighted by the Financial Stability Board’s Toolkit on Third Party Risk Management.
- Pre-deployment evaluation and continuous monitoring of agentic AI systems to mitigate risks that can lead to harmful consumer outcomes, included in Singapore’s Model AI Governance Framework for Agentic AI.
- Supervised testing environments for high-risk or novel use cases, such as the United Kingdom Financial Conduct Authority’s “Supercharged Sandbox” for agentic AI.
- Mandatory escalation and human intervention procedures, particularly for high-impact decisions such as creditworthiness assessments, as provided for in the EU’s AI Act.
Each of these measures must be calibrated proportionately and sequenced with supervisory capacity in mind. Proportionality is key to safeguarding consumer interests without restricting innovations that could genuinely shift the inclusion frontier.
Borges’ tale ends with an unsettling prophecy: that the creatures behind the mirror will one day break through the glass and bring humanity under their dominion. Global AI governance debates are already grappling with risks of an existential nature. For financial supervisors, the more immediate challenge lies closer to the surface: how to protect consumers when AI systems no longer merely reflect human intent, but act on it. This calls not for a revolution in regulatory approaches, but an evolution of the toolkit, so that supervisors can act at the tempo of the systems they oversee.

