Bryan Zhang is co-founder and executive director of the Cambridge Centre for Alternative Finance at Cambridge Judge Business School, and Kieran Garvey is AI research lead at the CCAF
For decades, machine learning has increasingly powered financial services, refining risk models, fraud detection and credit scoring. More recently, generative AI has enabled prompt-driven creation of coherent text, images, video and audio, while synthesising and summarising large volumes of information.
Yet as impressive as these technologies are, they have a fundamental limitation: they rely on explicit human prompts. In other words, they don’t autonomously plan or take action — they react.
Agentic AI changes that equation. Unlike GenAI, which passively responds to requests, agentic AI not only generates content but also perceives, learns and takes action through integration with external tools, pursuing set goals with minimal human involvement and adapting continuously as it makes decisions. It can orchestrate multiple agents, using large language models as a collective “brain”, to solve complex, multi-step problems autonomously.
While still in its infancy, with limited performance and reliability to date, the technology is fast evolving with the introduction of early agentic tools such as OpenAI’s Operator, DeepMind’s Project Mariner and Anthropic’s Computer Use.
Enabled by ever-increasing computational power and advances in model efficiency (see DeepSeek), agentic AI’s capabilities are likely to reach a tipping point soon, one that will affect banking and financial services in profound ways.
White-collar disruption
The first sectors in financial services to feel a substantial impact from agentic AI will likely be consulting, accounting and auditing — professional services that have historically depended on armies of analysts and associates. Consulting firms, built on a labour-intensive, research-heavy business model, are particularly vulnerable.
Agentic tools, such as OpenAI’s recently launched Deep Research, can autonomously gather and interpret massive datasets, reason independently, highlight key trends and generate draft reports with empirical data and insights.
Auditing is another area ripe for transformation. Instead of manual transaction reviews, agentic workflows can autonomously scan financial statements, cross-check them with compliance regulations and flag anomalies instantly. This won’t eliminate auditors, but it will likely redefine their roles. Instead of focusing on routine checks, professionals will need to provide robust oversight and higher-value strategic insights — augmented by AI rather than bogged down by manual labour.
Agentic AI is set to disrupt banking too. AI chatbots and robo-advisers are already commonplace, and agentic AI promises to take them from scripted LLM-enabled Q&A bots to intelligent assistants that can execute workflows in response to customer needs.
Imagine a virtual banking agent that doesn’t just answer customers’ queries but anticipates and acts upon their needs. If a customer has an outstanding credit card balance, for example, it could detect surplus funds in their savings account and suggest an optimal payment strategy, executing the transfer automatically subject to the customer’s approval and their pre-set consent thresholds.
AI-aided credit and investment decisions
This technology will likely impact credit decisions, too. Traditional credit scoring models rely on static data, providing a snapshot of risk at a single moment. The adoption of agentic AI can empower banks as well as fintechs to undertake continuous credit assessment by incorporating real-time transaction data, behavioural trends and economic indicators. The result? Faster approvals, more precise risk assessment and dynamic lending models that could adjust in real time.
But with innovation comes responsibility. Agentic AI-enabled credit decisioning raises well-documented, critical questions about bias, fairness and accountability. If historical data reflects past discrimination, those biases could be perpetuated at scale, and exacerbated, by AI agents that operate autonomously and exercise agency of their own.
Financial institutions and regulators must carefully balance AI’s transformative power with transparency and ethical oversight to ensure fair outcomes for all borrowers. There is an important trade-off here, as higher levels of explainability generally limit AI performance.
Agentic AI is poised to reshape trading and investment by making sophisticated strategies and autonomous trading capabilities, underpinned by real-time data, readily accessible to institutional and retail investors alike. However, this level of autonomy introduces risks. As the barrier to entry falls, AI-driven investment agents may react to the same market signals simultaneously, leading to herding behaviour at scale, heightened volatility, flash crashes or market distortions.
Financial institutions and regulators will need to ensure that safeguards — such as algorithmic stress tests and additional circuit breakers — are in place to mitigate these risks.
Walking the innovation vs governance tightrope
The promise of agentic AI in finance is enormous, but so are the challenges. Used wisely, agentic AI could widen access to financial services, reduce inefficiencies and create hyper-personalised customer experiences.
However, overreliance on AI-driven decision-making without robust oversight could undermine trust, introduce new risks, amplify biases, exacerbate discrimination, and create volatility and systemic instability in financial markets.
It is also imperative to consider, and indeed debate, the wider socio-economic and public policy implications of mass adoption of AI agents across our financial services and economies, from job displacement to taxation and social welfare.
One thing is certain: the agentic AI era of financial services is here, and the time to act is now.