Agentic AI and Financial Markets: From Automation to Market Infrastructure

Agentic AI is often described as the next stage of artificial intelligence: systems that do not merely generate content or assist human users, but can plan, execute, monitor, and adapt multi-step tasks with a degree of operational autonomy. In financial markets, this transition matters because the relevant question is no longer whether AI can support analysis, but whether it can begin to act within workflows that affect trading, distribution, payments, compliance, supervision, and ultimately market structure itself. The issue is not simply technological. It is institutional, prudential, and legal.

The first point to clarify is that “agentic” does not mean fully autonomous in the science-fiction sense. In practice, the emerging model is one in which AI systems are given bounded objectives, access to selected tools and data, and the ability to sequence actions across a workflow: retrieving information, comparing alternatives, generating outputs, escalating exceptions, and in some cases initiating execution subject to rules or human approval. This is why agentic AI is likely to matter more in finance than many earlier AI waves. Financial markets are not only information-rich; they are process-rich. Much of their value chain consists of repetitive but high-stakes sequences of analysis, decision support, verification, and execution.
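The bounded-autonomy pattern described above can be sketched in a few lines: the agent proposes actions, but execution is gated by explicit rules and human approval. All function names, thresholds, and limits here are hypothetical illustrations, not a real framework.

```python
# Minimal sketch of a bounded agentic workflow: proposals are routed to
# auto-execution only inside pre-set bounds; everything else escalates
# to a human. Thresholds and fields are invented for illustration.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    confidence: float   # model's self-reported confidence, 0..1
    notional: float     # size of the proposed action, in EUR

APPROVAL_THRESHOLD = 0.90      # below this, escalate to a human
NOTIONAL_LIMIT = 1_000_000     # above this, always escalate

def decide(proposal: Proposal) -> str:
    """Route a proposal: auto-execute only inside pre-authorised bounds."""
    if proposal.notional > NOTIONAL_LIMIT:
        return "escalate: exceeds pre-authorised notional limit"
    if proposal.confidence < APPROVAL_THRESHOLD:
        return "escalate: confidence below approval threshold"
    return "execute: within pre-authorised bounds"

print(decide(Proposal("rebalance", 0.95, 250_000)))    # executes
print(decide(Proposal("rebalance", 0.80, 250_000)))    # escalates
```

The design choice worth noting is that autonomy is defined negatively, by the boundaries of what may be executed without a human, rather than positively by what the model can do.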

That said, the current state of adoption suggests that the market is still in a transitional phase. ESMA’s recent evidence on AI in EU securities markets shows that in 2024 adoption remained uneven: 28% of respondents reported using AI in production or development, another 22% were experimenting or planning use within 12 months, 92% of reported use cases were internal rather than client-facing, and 90% involved a human in the loop. The main perceived benefits were improved internal processes, cost reduction, and the ability to analyse large volumes of data, while key challenges concerned data quality, data protection, and data governance. This is highly significant. It suggests that, at least in Europe, finance is not yet moving toward fully autonomous market actors; it is moving toward increasingly autonomous financial workflows.

This distinction matters because the real impact of agentic AI on financial markets will likely be structural before it becomes spectacular. In the short term, the most visible gains will arise in research, compliance, surveillance, onboarding, documentation, internal controls, portfolio support, treasury operations, and customer interaction. The IMF has already observed that recent AI breakthroughs may dramatically increase the efficiency of capital markets through process automation and the analysis of complex unstructured data, with effects already beginning to appear in trading, investment, and asset allocation. BIS research, moreover, indicates that AI agents in payment-system cash management could reduce operational costs, improve efficiency, and enhance resilience, albeit only with adequate safeguards and human oversight.

The next phase, however, is more consequential. Once AI systems can coordinate multiple tasks rather than merely complete one task at a time, they become relevant not only to individual firms but also to market functioning. In asset management, agentic systems may continuously gather market intelligence, reconcile internal and external data, test scenarios, produce risk memos, and propose rebalancing actions. In brokerage and wealth contexts, they may evolve into digital financial copilots that monitor portfolios, identify deviations from investment mandates, prepare client communications, and eventually trigger pre-authorised actions. In market infrastructure, they may support collateral optimisation, liquidity forecasting, payments orchestration, and exception handling. In each of these domains, the economic attraction is the same: lower latency between information and action.
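The mandate-monitoring pattern mentioned above can be illustrated with a toy check: compare current portfolio weights against mandate bands and flag only the breaches, which the agent would then turn into a rebalancing proposal rather than a trade. The asset classes, bands, and weights below are hypothetical.

```python
# Illustrative mandate-deviation check: report (asset, deviation) pairs
# for any weight outside its permitted band. All numbers are invented.

mandate = {                      # asset class -> (min weight, max weight)
    "equities": (0.40, 0.60),
    "bonds":    (0.30, 0.50),
    "cash":     (0.00, 0.10),
}

portfolio = {"equities": 0.65, "bonds": 0.30, "cash": 0.05}

def check_mandate(weights, bands):
    """Return a list of (asset, deviation) pairs for band breaches."""
    breaches = []
    for asset, (lo, hi) in bands.items():
        w = weights.get(asset, 0.0)
        if w < lo:
            breaches.append((asset, w - lo))   # negative: underweight
        elif w > hi:
            breaches.append((asset, w - hi))   # positive: overweight
    return breaches

# Equities (0.65) breach the 0.60 cap; the agent drafts a memo, it does
# not trade directly.
print(check_mandate(portfolio, mandate))
```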

Yet this same compression of time between signal and execution is also where systemic concern begins. Financial authorities have been increasingly explicit that AI does not only create firm-level benefits; it can also amplify market-wide vulnerabilities. The FSB has warned that wider AI uptake may increase third-party dependencies and service provider concentration, particularly because effective AI deployment often relies on specialised hardware, cloud infrastructure, pre-trained models, and concentrated data services. The ECB has similarly stressed that if technological penetration and supplier concentration become simultaneously high, micro-level AI risks may become macro-relevant. In parallel, the Bank of England has emphasised the need for a flexible and forward-looking monitoring framework, precisely because rapid shifts in AI capabilities can translate into new financial stability risks.

This is the core paradox of agentic AI in finance. The more capable the technology becomes, the more institutions may rely on the same foundational stack: the same cloud providers, the same model vendors, the same data sources, the same optimisation logics, and perhaps, over time, the same execution pathways. This creates at least four market-level risks.

First, there is correlation risk. If many firms rely on similar models trained on similar data and optimised around similar objectives, their reactions to market signals may converge. The FSB explicitly notes that broader AI usage may lead to common modelling approaches and common training data sources, thereby increasing market correlations. In normal times this may simply look like efficiency; under stress it may resemble synchronised behaviour.

Second, there is operational concentration risk. The Bank of England and FCA found that one third of AI use cases in UK financial services were already third-party implementations in 2024, and that the top three providers accounted for 73% of reported cloud provision, 44% of model provision, and 33% of data provision in the survey. That finding is striking because it shows how quickly AI can deepen already familiar outsourcing and cloud-dependency issues. Agentic systems may intensify this trend, as firms may prefer turnkey orchestration layers over costly internal development.

Third, there is opacity risk. Agentic systems can make finance faster while making responsibility harder to localise. If an AI-driven workflow proposes, ranks, filters, escalates, and partially executes actions, the traditional chain of accountability becomes harder to reconstruct. This is one reason explainability is becoming a supervisory priority. BIS Innovation Hub’s Project Noor is expressly designed to help supervisors evaluate the inner workings of AI models used by banks and other financial institutions, including their transparency, fairness, and robustness. The direction of travel is clear: markets may tolerate complexity, but supervisors will increasingly insist on auditability.
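The auditability concern raised above maps onto a simple engineering pattern: every step an agentic workflow takes (propose, filter, escalate, execute) appends a structured, timestamped record, so the chain of accountability can be reconstructed ex post. The step names and fields below are illustrative assumptions, not any supervisor's required schema.

```python
# Sketch of an audit-trail pattern for agentic workflows: each action,
# by agent or human, becomes one timestamped entry in an append-only log.

import json
from datetime import datetime, timezone

audit_log = []

def record(step, actor, detail):
    """Append one timestamped entry to the audit trail."""
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "step": step,       # e.g. "propose", "escalate", "execute"
        "actor": actor,     # "agent" or a human identifier
        "detail": detail,
    })

record("propose", "agent", "rebalance equities -5pp")
record("escalate", "agent", "notional above pre-authorised limit")
record("approve", "human:risk_officer", "approved within revised limit")
record("execute", "agent", "orders routed")

# The full chain of who did what, in what order, serialises for review.
print(json.dumps([e["step"] for e in audit_log]))
```

In production such a log would need to be tamper-evident and linked to model versions and prompts, but even this minimal shape shows what "reconstructing the accountability chain" means operationally.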

Fourth, there is conduct and consumer risk. As AI moves closer to financial recommendations and action initiation, the distance between “assistance” and “advice” narrows. ESMA has already warned retail investors about the use of public AI tools for investing, noting that such tools may produce inaccurate or misleading outputs and can lead to poor investment decisions and significant losses. In an agentic setting, that concern becomes even more acute, because the issue is no longer only whether a recommendation is persuasive, but whether the surrounding system architecture nudges or automates action.

For Europe, the regulatory context is now impossible to ignore. The EU AI Act entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026, with phased obligations already in effect for prohibited practices, AI literacy, governance, and GPAI-related duties. For financial institutions, the AI Act will not replace sectoral rules; it will interact with them. In other words, firms deploying agentic AI in finance will need to manage a layered framework composed of financial regulation, outsourcing and operational-resilience requirements, data protection law, consumer-protection rules, model governance, and—where relevant—the AI Act itself. Recent ECB supervisory messaging captures the position well: the objective is not to slow AI adoption, but to ensure that banks integrate it prudently, coherently, and under effective control before usage reaches systemic scale.

The most plausible conclusion, then, is not that agentic AI will “replace” financial markets or human judgment. Rather, it will reconfigure the allocation of judgment inside market processes. Humans will move upward, toward governance, exception management, strategic oversight, and ex post accountability, while machines move downward and sideways, into monitoring, triage, orchestration, and conditional execution. The firms that benefit most will probably not be those that pursue maximal autonomy, but those that design credible boundaries around autonomy.

For Prometeus Fintech Journal, the key takeaway is therefore a sober one. Agentic AI is not simply another productivity tool for front-office experimentation. It is becoming part of the institutional fabric of finance. Its effect on markets will depend less on how impressive the models appear in demo environments and more on how their deployment reshapes incentives, dependencies, accountability chains, and systemic interconnections. The decisive legal and policy question is no longer whether AI can think about markets. It is whether financial markets can remain governable once AI systems begin to act within them.
