The High-Stakes Reality of AI in Banking

2026-03-12

Bias, black boxes, and the decisions that financial institutions must be able to defend.

By Sheth Sanket, Chief Customer Officer, Axonis

Artificial intelligence is arriving in banking with enormous promise. It can approve loans in seconds, detect fraud in real time, personalize financial products, and reduce operational costs across underwriting, compliance, and risk. But behind the excitement, a quieter reality is emerging.

AI in financial services is bringing high-stakes drama. The kind that shows up in regulatory hearings, compliance investigations, and headlines about bias, security breaches, and decisions no one can fully explain.

To better understand what’s really happening inside financial institutions today, I recently sat down with Ben Engber, CEO of Lineate, a data engineering firm that has spent more than two decades helping organizations build systems for complex data environments, high-value transactions, and real-time decision making. Our conversation focused on a question many banks are quietly asking right now: How do you deploy AI at speed without breaking the trust, governance, and accountability the financial system depends on?

The explainability problem banks can’t ignore

The first issue Engber raised wasn’t performance. It was explainability.

Financial institutions are under intense pressure to make decisions faster. Customers expect near-instant approvals for loans, mortgages, and credit products. AI promises to deliver exactly that. But speed alone isn’t enough in regulated industries.

“You could ask the same question five times and get one answer four times and a completely different answer the fifth time,” Engber explained. “If you’re deciding whether to approve a loan, that can be a serious regulatory problem.” Unlike traditional software systems, many AI models are not fully deterministic. The same query can sometimes produce different outcomes depending on how the model interprets the request.
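To see why the same query can yield different answers, consider this toy sketch in Python (not any vendor’s actual API, and the answer distribution is hypothetical). Sampling-based generation draws from a probability distribution on each run, while a greedy, temperature-zero analogue always returns the most likely answer:

```python
import random

# Toy next-token choices for the same "query" (hypothetical values:
# four runs say "approve", one says "deny", echoing Engber's example).
answers = ["approve", "approve", "approve", "approve", "deny"]

def sampled_answer():
    # Sampling-based decoding: each run draws independently, so the
    # same query can yield different answers on different runs.
    return random.choice(answers)

def greedy_answer():
    # Temperature-zero analogue: always return the most likely answer,
    # making the output deterministic for a given query.
    return max(set(answers), key=answers.count)

print([sampled_answer() for _ in range(5)])  # e.g. ['approve', 'deny', 'approve', ...]
print([greedy_answer() for _ in range(5)])   # always ['approve'] * 5
```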

In consumer applications, that unpredictability may be acceptable to a degree. In financial decision-making, it isn’t. “If you're making a lending decision, you need to know exactly why you're making that decision when you make it,” Engber said. “You can't go back later and try to explain it to an auditor.” Regulators expect financial institutions to be able to defend decisions in real time. If a loan is denied, the reasoning must be traceable and auditable at the moment that decision occurs. That requirement fundamentally changes how AI is deployed in banking.
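What does defending a decision at the moment it occurs look like in practice? Here is a minimal sketch, with hypothetical field names and thresholds, of one way to capture an append-only audit record alongside each outcome: the exact model version, the inputs the model saw, and human-readable reason codes, hashed so tampering is detectable later.

```python
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LendingDecision:
    """Audit record captured at the moment a decision is made."""
    applicant_id: str
    decision: str       # "approved" or "denied"
    model_version: str  # exact model that produced the outcome
    inputs: dict        # features the model actually saw
    reasons: list       # human-readable reason codes
    timestamp: str

def record_decision(applicant_id, decision, model_version, inputs, reasons):
    """Write the decision and its reasoning to an append-only log
    so an auditor can reconstruct it later without re-running the model."""
    record = LendingDecision(
        applicant_id=applicant_id,
        decision=decision,
        model_version=model_version,
        inputs=inputs,
        reasons=reasons,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    payload = json.dumps(asdict(record), sort_keys=True)
    # Hash each record so tampering is detectable during an audit.
    digest = hashlib.sha256(payload.encode()).hexdigest()
    with open("decision_audit.log", "a") as log:
        log.write(f"{digest}\t{payload}\n")
    return digest

record_decision(
    applicant_id="A-1042",
    decision="denied",
    model_version="credit-risk-2.3.1",
    inputs={"dti_ratio": 0.52, "credit_history_months": 14},
    reasons=["debt-to-income ratio above policy threshold"],
)
```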

When AI learns from an imperfect world

Another issue we discussed was bias.

There’s a persistent belief that AI systems are somehow more neutral than human decision-makers. But as Engber pointed out, AI models are trained on historical data, and history itself is far from neutral. “AI isn’t an impartial referee,” he said. “It’s more like a magnification of humanity.” In lending markets, that matters.

Regulations such as the US Fair Housing Act and the EU AI Act prohibit discrimination based on race and other protected characteristics. Those rules exist because discriminatory lending practices were once widespread. Modern institutions are careful to avoid those practices. But AI systems can still reproduce bias in subtle ways.

Machine learning models can pick up correlations between protected attributes and signals such as geography, income patterns, or employment history, effectively turning those signals into proxy indicators. The model may appear highly accurate during testing. But once deployed, those proxy correlations could produce outcomes regulators view as discriminatory, even if the institution never intended that result. This is why explainability and governance are becoming central to responsible AI adoption.
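One widely used screen for exactly this failure mode, shown here as a simplified sketch rather than any institution’s actual compliance tooling, is the “four-fifths rule” from US fair-lending analysis: compare approval rates across groups and flag any group whose rate falls below 80 percent of the highest group’s.

```python
def disparate_impact_check(outcomes, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold`
    times the highest group's rate (the 'four-fifths rule' heuristic).

    outcomes: dict mapping group label -> list of 0/1 approval results
    """
    rates = {group: sum(v) / len(v) for group, v in outcomes.items() if v}
    best = max(rates.values())
    return {
        group: {
            "approval_rate": round(rate, 3),
            "impact_ratio": round(rate / best, 3),
            "flagged": rate / best < threshold,
        }
        for group, rate in rates.items()
    }

# Hypothetical monitoring data, grouped by a protected attribute that
# is collected for compliance testing only, never used by the model.
print(disparate_impact_check({
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],  # 87.5% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% approved
}))
# group_b impact_ratio ≈ 0.429 < 0.8, so it is flagged for review.
```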

The architecture problem few people are discussing

Our discussion then turned to a less visible challenge: data architecture and internal security boundaries.

For decades, financial institutions have invested heavily in governance frameworks designed to protect sensitive information. Data is segmented across departments, access is tightly controlled, and systems are monitored through compliance policies and audit trails. But those controls are not just about protecting data from external threats. They also exist inside the organization.

Different teams, applications, and users are intentionally limited in what they can see and access. In many cases, sensitive information is protected through multiple layers of controls that ensure only the right people and the right systems can interact with specific data. As Engber explained, that layered security model is easy to overlook when organizations begin introducing AI tools. “Security in these environments isn’t just about putting a big wall around everything,” Engber said. “There are a lot of smaller walls inside the organization as well…controls that limit who and what can access certain pieces of information.”

Many AI initiatives, however, introduce a different architectural pattern. To simplify model development, organizations often centralize large volumes of enterprise data into a single data lake or warehouse so AI systems can analyze it more easily. Technically, that approach works. Operationally, it can expand the scope of what a single system is able to access. “Once a tool touches that data, it now has access to it,” Engber noted. “Your security footprint is now exposed to that entire tool.”

Even when organizations protect those systems behind firewalls and perimeter defenses, the introduction of AI changes how data can be queried and accessed internally. And when that data includes sensitive financial information or personally identifiable information, the implications become significant. “Once you put everything into one central system, it’s all on you to protect how that data is used,” Engber said. “When you're dealing with unpredictable systems, that becomes very difficult.”

Centralized environments enable powerful analytics, but they also enlarge the attack surface and concentrate access in a single system. In highly regulated industries like financial services, that introduces new governance and security challenges institutions must address before AI can safely scale.

A different model is emerging

As we continued the conversation, Engber described why many organizations are now exploring federated approaches to AI. Rather than moving all enterprise data into a centralized environment, federated architectures allow AI systems to operate across multiple systems while leaving the underlying data where it already resides. That approach offers several advantages.

First, it preserves the governance structures institutions have spent years building. Second, it reduces the need to duplicate or move sensitive information across environments. And third, it creates better visibility into how AI systems interact with data. “Highly regulated organizations have invested enormous effort into making their systems secure and traceable,” Engber said. “You can leverage that infrastructure instead of rebuilding governance from scratch.”

More importantly, federated architectures allow organizations to track a critical part of the decision process: what data an AI system accessed, when it accessed it, and how it used that information to produce an outcome. That level of traceability is essential for deploying AI safely in regulated industries, and Axonis is leading the way with Axonis Decision Intelligence.
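As a rough illustration of the pattern (a sketch, not how Axonis Decision Intelligence is actually implemented), a federated gateway might check each AI query against the caller’s entitlements, answer it from the source system without copying the data, and log what was accessed, when, and for which decision:

```python
from datetime import datetime, timezone

class FederatedGateway:
    """Sketch of a federated access layer: the AI never receives bulk
    data copies; each query is checked against the caller's entitlements
    and logged with what was accessed, when, and for which decision."""

    def __init__(self, sources, entitlements):
        self.sources = sources            # source name -> query handler
        self.entitlements = entitlements  # caller -> set of allowed sources
        self.access_log = []

    def query(self, caller, source, request, decision_id):
        # Enforce the organization's existing "smaller walls" per query.
        if source not in self.entitlements.get(caller, set()):
            raise PermissionError(f"{caller} may not access {source}")
        self.access_log.append({
            "caller": caller,
            "source": source,
            "request": request,
            "decision_id": decision_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        # Data stays in the source system; only the answer crosses over.
        return self.sources[source](request)

gateway = FederatedGateway(
    sources={"core_banking": lambda req: {"balance": 1200.50}},
    entitlements={"loan_model": {"core_banking"}},
)
gateway.query("loan_model", "core_banking",
              {"field": "balance", "account": "XX-1042"},
              decision_id="D-2048")
print(gateway.access_log)  # full trail of what the model touched, and why
```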

From risk management to competitive advantage

Much of the industry conversation around AI governance focuses on risk mitigation. But Engber believes there is another way to view it. Customers are becoming more aware of how their data is used and how automated decisions affect them. Regulators are increasing scrutiny of algorithmic systems. And financial institutions are recognizing that trust remains one of their most valuable assets.

“Up until now we've talked about AI as a risk story,” Engber said. “But governance could actually become a competitive advantage.” Institutions that build AI systems with transparency, traceability, and strong governance may find themselves able to deploy AI more confidently and more broadly than competitors struggling with compliance concerns. In that sense, governance is not friction. It’s an enabler.

The next phase of AI in banking

Over the past two years, financial institutions have experimented aggressively with AI. Many have launched pilot programs, automated operational workflows, and explored new customer experiences. But the next phase of AI adoption will be different. It will involve deploying AI in environments where decisions affect lending, compliance, security, and public trust. And in those environments, the defining question won’t simply be whether AI can make decisions faster.

It will be whether those decisions can be explained, secured, and defended. Because in financial services, innovation has always required more than speed. It requires accountability. And as AI becomes embedded in the financial system, the institutions that succeed will be the ones that can deliver both.

Whether you're just beginning your AI journey or already deploying models and reconsidering how to scale them responsibly, Axonis and Lineate can help. Together, we work with financial institutions to review architecture, governance, and data access models to ensure AI can operate safely and effectively. Set up a 20-minute meeting to discuss how we can help you deploy AI with the governance, transparency, and security that financial institutions require.
