FSB 2025 Report: Is AI Homogenization the Next Systemic Risk in Banking?
- Connie Tong
- Oct 16
- 5 min read

The Financial Stability Board (FSB) issued a critical warning on October 10, 2025: AI model homogenization in financial services is creating systemic risk through a dangerous "herding effect." When banks rely on similar AI models and data from a handful of third-party vendors, they risk recreating the dynamics of the 2008 financial crisis, this time driven by artificial intelligence rather than Value at Risk (VaR) models.
The FSB report explicitly warns that similar training data and models can lead different institutions' AI systems to make identical decisions under market pressure, amplifying volatility. This echoes the role played by the highly homogenized VaR models used by nearly all major banks in the run-up to 2008. History is repeating itself, this time in the form of AI.
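To make the herding mechanism concrete, here is a minimal, purely illustrative simulation (our construction, not from the FSB report): ten hypothetical banks score the same asset pool, either through one shared vendor model or through independently trained models, and we measure how often their sell decisions coincide.

```python
import numpy as np

# Illustrative toy model of the FSB's "herding effect" (our construction,
# not from the report): banks sharing one model act in lockstep, while
# banks with independently trained models disagree far more often.
rng = np.random.default_rng(0)

n_banks, n_assets = 10, 1_000
signal = rng.normal(size=n_assets)                    # true asset quality
noise = rng.normal(scale=0.5, size=(n_banks, n_assets))

# Scenario A: every bank licenses the same vendor model (same noisy view).
shared_view = signal + noise[0]
sells_shared = np.tile(shared_view < -0.5, (n_banks, 1))

# Scenario B: each bank trains its own model (independent noisy views).
sells_diverse = (signal + noise) < -0.5

for name, sells in [("shared model", sells_shared),
                    ("diverse models", sells_diverse)]:
    # Fraction of assets dumped by *every* bank simultaneously: a crude
    # proxy for one-sided, volatility-amplifying order flow.
    unanimous = sells.all(axis=0).mean()
    print(f"{name}: {unanimous:.1%} of assets sold by all {n_banks} banks at once")
```

With the shared model, every bank sees the identical noisy signal, so roughly a third of the assets are dumped by all ten banks simultaneously; with independent models, unanimous selling is rare. Homogeneous models turn idiosyncratic errors into one-sided market pressure.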
For global financial institutions, this warning deepens an existing strategic dilemma. On one hand, regional leaders like Singapore are championing AI as a core engine of competitive advantage and encouraging bold innovation. On the other, the FSB's report reveals the flip side of that coin: in the pursuit of innovation, banks may be unwittingly tying their fate to a handful of tech giants through AI vendor lock-in. When credit risk models and anti-money laundering systems all originate from similar vendors, how much decision-making independence truly remains?
The Hidden Costs of Third-Party AI: Governance and Financial Risks
The greatest predicament today stems from an emerging "black box of accountability." As banks entrust core decision-making to third-party AI, they simultaneously lose governance control and hemorrhage financial resources, an unbearable combination.
The root of this dilemma lies in the extreme concentration of AI model sources. Stanford's latest AI Index Report reveals a startling fact: in 2024, U.S. institutions produced 40 notable AI models, compared with 15 from China and only 3 from all of Europe. This is not a healthy, diverse market but a de facto tech oligopoly.
When banks adopt these external "black boxes," a fatal paradox emerges: banks remain 100% accountable for AI model outcomes, yet AI vendor lock-in limits their control over how those models work and what they cost. The problem plays out on two levels:
AI Model Governance Crisis: The Explainability Gap
When regulators ask why a suspicious transaction was approved or a credit application was denied, "it was the recommendation of a third-party AI model" is not an acceptable answer under any strict regulatory framework. This lack of model explainability creates fundamental financial AI compliance challenges that no institution can afford to ignore.
More dangerously, vendors are increasingly writing liability waivers into their contracts, leaving banks to bear all legal and reputational risks alone. For the heavily regulated financial industry, this "outsource risk, retain liability" model—a core challenge in third-party AI risk management—is akin to building the institution's fate on quicksand.
AI Cost Overruns: Why 50% of Projects Fail
This "black box of accountability" is not just an AI model governance crisis but also a bottomless financial pit. A recent industry report lays out the harsh numbers: on top of millions in platform procurement costs, banks must also absorb hundreds of thousands in employee training, failed experiments, and ongoing annual model updates and compliance costs.
The report further cites Gartner's warning that the total cost of AI projects can "skyrocket uncontrollably by 500% to 1000%" beyond initial estimates. This leads to a grim reality: more than half of organizations ultimately abandon their AI projects due to cost miscalculations. This financial drain stems directly from reliance on opaque third-party systems, which strips banks of the ability to independently optimize processes and control costs.
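As a back-of-envelope illustration of how quickly these costs compound (every figure below is a hypothetical assumption, not data from Gartner or the cited report), consider a quoted platform license against the full four-year cost of ownership:

```python
# Hypothetical TCO sketch (every figure below is an illustrative assumption,
# not data from Gartner or CostPerform). Amounts in USD.
platform_license = 2_000_000          # quoted vendor platform cost
staff_training = 400_000              # onboarding and upskilling
failed_experiments = 600_000          # pilots that never reach production
annual_updates_compliance = 500_000   # model updates, audits, validation
years = 4

total = (platform_license + staff_training + failed_experiments
         + annual_updates_compliance * years)
overrun = total / platform_license - 1

print(f"Total {years}-year cost: ${total:,} ({overrun:.0%} above the quoted license)")
# -> Total 4-year cost: $5,000,000 (150% above the quoted license)
```

Even this conservative sketch puts the true cost at 150% above the quoted license, and it still excludes items like integration rework and vendor price increases that push real-world overruns toward Gartner's range.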
Breaking AI Vendor Lock-In: The Model Customizability Solution
To break free from the "black box of accountability," banks don't need another, more powerful AI tool; they need a strategy for rebuilding their own core capabilities through model customizability. This is the core value of COMPASS, which turns customizability from a product feature into a solution that helps financial institutions confront their governance and financial crises head-on while reducing third-party AI risk.
Building Model Explainability and Regulatory Compliance
The essence of COMPASS is to empower banks to build, train, and fine-tune AI models around their own business operations, risk appetite, and customer data. The result is a fully transparent and auditable model environment in which every decision can be clearly explained to internal and external stakeholders, including regulators. Enhanced model explainability transforms financial AI compliance from a passive cost center into a competitive advantage built on trust.
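The kind of per-decision transparency described above can be sketched with an interpretable scorecard-style model. The example below is a generic illustration, not COMPASS's actual implementation: a logistic regression whose approval decisions decompose into named, auditable feature contributions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic explainability sketch (not COMPASS's actual API): a linear credit
# model whose every decision decomposes into named feature contributions
# that can be shown to auditors and regulators.
rng = np.random.default_rng(1)
features = ["debt_to_income", "missed_payments", "account_age_years"]
X = rng.normal(size=(500, 3))
y = ((-1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
      + rng.normal(scale=0.5, size=500)) > 0).astype(int)  # 1 = approve

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant  # additive log-odds contributions
decision = "approve" if model.predict([applicant])[0] else "decline"

print(f"Decision: {decision}")
for name, c in zip(features, contributions):
    print(f"  {name:>18}: {c:+.2f} log-odds")
```

Because the model is linear, each decision is a sum of per-feature log-odds contributions, exactly the kind of decomposition a regulator can inspect line by line.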
On-Premise AI Deployment: Securing Data Sovereignty
By supporting on-premise deployment and private training, COMPASS ensures that a bank's most critical data assets remain within the institution. Unique models trained on proprietary data naturally produce distinct insights and decisions. This on-premise AI deployment approach not only fundamentally solves the problem of AI model homogenization but is also the foundation for building a core competitive "moat" in the market while mitigating third-party AI risk.
Ensuring Technological Flexibility and Strategic Control
The open architecture of COMPASS allows banks to flexibly select and combine underlying technology components according to their own needs, avoiding lock-in to a single vendor's technology roadmap. This technological autonomy means the bank's strategy can be implemented rapidly, seizing opportunities in a fast-changing market rather than waiting passively for the vendor's next update, effectively breaking the cycle of AI vendor lock-in.
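A common pattern for achieving this kind of component flexibility (shown here as a generic sketch with hypothetical names, not COMPASS's actual architecture) is to code business logic against a thin internal interface, so the underlying model provider can be swapped without touching downstream systems:

```python
from typing import Protocol

class RiskModel(Protocol):
    """Internal contract the bank's systems depend on (hypothetical names)."""
    def score(self, features: dict[str, float]) -> float: ...

class VendorAModel:
    # Adapter around one vendor's SDK (details elided).
    def score(self, features: dict[str, float]) -> float:
        return 0.5  # placeholder for a call into the vendor SDK

class InHouseModel:
    # Locally trained replacement; same contract, no external dependency.
    def score(self, features: dict[str, float]) -> float:
        return sum(features.values()) / max(len(features), 1)

def approve_loan(model: RiskModel, features: dict[str, float]) -> bool:
    # Business logic only sees the interface, never the vendor.
    return model.score(features) > 0.6

# Swapping providers is a one-line change, not a replatforming project.
print(approve_loan(InHouseModel(), {"debt_to_income": 0.4, "collateral": 0.9}))
```

With this pattern, replacing the vendor adapter with an in-house model becomes a configuration change rather than a replatforming project.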
Conclusion: A Strategic Watershed in Financial AI Adoption
The FSB's warning is a watershed moment, signaling that the "Wild West" era of financial AI adoption has ended. Continuing to rely on closed, single-source "black boxes" amounts to staking an institution's future on a high-stakes governance and financial gamble, with third-party AI risk and AI cost overruns threatening institutional stability.
A strategic divergence has emerged: it is time for institutions to reshape their core capabilities in models, data, and technology through model customizability, and to transform AI from a potential systemic risk into a strategic weapon, firmly in their own hands, that creates unique value. By addressing AI model governance challenges and achieving model explainability, financial institutions can break free from AI vendor lock-in and build sustainable competitive advantages in an AI-driven future.
Follow us on LinkedIn or subscribe to “FinTech Insights” for more information about FinTech.
References
FSB (2025). Monitoring Adoption of Artificial Intelligence and Related Vulnerabilities in the Financial Sector.
Asian News Network (2025). AI key to boosting Singapore's financial hub edge: Minister.
Stanford HAI (2025). The 2025 AI Index Report.
Ethos AI (2024). The Growing Risk of Third-Party AI.
CostPerform (2024). Uncover the Hidden Costs of AI: A Bank's Journey.
Disclaimer: This article is for informational purposes only and is not investment or professional advice. Information and views are from public sources we believe to be reliable, but we do not guarantee their accuracy or completeness. Content is subject to change. Readers should exercise their own judgment and consult a professional advisor. Any action taken is at your own risk.
Copyright © 2025 Axisoft. All Rights Reserved


