How Explainable AI (XAI) Breaks the 95% Banking AI Failure Rate in APAC
- Connie Tong
- Sep 18
- 5 min read

A recent study from MIT has unveiled a sobering reality: a staggering 95% of generative AI pilot projects ultimately fail. The root cause is counterintuitive: companies mistakenly try to eliminate "friction," when it is precisely this friction that creates value and drives success.
For the Asia-Pacific banking industry, where AI adoption is accelerating with unprecedented determination, this insight is a thunderous wake-up call. It exposes a fundamental flaw in most AI strategies and illuminates a new path to success. The successful 5% have not only achieved tangible ROI; they have shown that the wiser course is to embrace and design "beneficial friction" by integrating AI deeply into high-value workflows and building memory and learning loops.
The core of the issue, as another Forbes article points out, is that the banking industry must undergo a paradigm shift from the opaque "black box" to the interpretable "glass box."
So, how can one distinguish between "beneficial friction" and "detrimental obstacles"? And how do "black box" AI systems lead banks into the abyss of failure?
The "Black Box" Paradox: Why Most Banking AI Initiatives Fail
Many financial institutions are drawn to the sleek, seamless experience promised by "black box" AI solutions, expecting a plug-and-play fix with immediate results. However, this pursuit of a "frictionless" state is the very source of the 95% failure rate. It creates three insurmountable challenges:
1. Detrimental Compliance Hurdles and a Trust Deficit
When a regulator asks about the decision-making logic of an AI credit model and the bank's answer is, "We don't know, it's a black box," that opacity creates a massive compliance obstacle: purely valueless "bad friction." It fails regulatory scrutiny, undermines AI model risk management, and erodes internal trust in AI, keeping it out of core business operations.
2. Rigid Integration and Uncontrolled Innovation
"Black box" solutions attempt to offer a universal, smooth solution, but this is precisely what stifles the "beneficial friction" mentioned in the MIT study—the process of system adaptation, workflow redesign, and AI governance maturation. When banks try to integrate these solutions with their complex IT architectures, numerous technical conflicts ("bad friction") arise. More fatally, it excludes the bank's own quantitative and business experts, who cannot integrate their valuable knowledge and models for iteration and optimization. This model prevents the AI system from learning and adapting through feedback, ultimately turning it into a rigid "cage" rather than an evolving partner.
3. Data Sovereignty "Walls" and the Rise of "Shadow AI"
For the APAC banking sector, with its strict data sovereignty requirements, "black box" solutions often carry another fatal dose of "bad friction": data security risk. Many require data to be transferred to overseas public clouds, directly crossing compliance red lines. Ironically, even as enterprise-level pilots fail on these obstacles, the MIT study found a "shadow AI" economy emerging: employees turning to personal AI tools to get their work done, often with significant ROI. This is powerful evidence that what front-line teams truly need are flexible tools that let them participate, iterate, and apply "beneficial friction" themselves, not a closed, rigid system.
The "Glass Box" Paradigm: A Framework for Banking AI Transparency, Governance, and ROI
To break this curse of failure, a shift in mindset is required: from "avoiding friction" to "designing and embracing beneficial friction." This is the core idea of the "glass box" paradigm, built on three principles that transform AI into a controllable, evolvable, and immensely valuable internal capability.
Principle One: Fostering a "Learning Loop" through Radical Transparency
The cornerstone of the "glass box" paradigm is radical transparency, commonly known as Explainable AI (XAI). It demystifies the AI's decision-making process, allowing a bank's risk, compliance, and audit teams, and even regulators, to clearly review its internal logic. An ideal financial analytics engine should let users inspect its source code and trace its decision paths. This transparency eliminates the "bad friction" of compliance and, more importantly, creates the prerequisite for the "memory and learning loops" the MIT study emphasizes. Only what is visible can be iterated on; only what is understood can be optimized.
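To make the idea of an auditable "decision path" concrete, here is a minimal sketch in pure Python of a glass-box credit score whose every decision decomposes into per-feature contributions that a risk or compliance team can review line by line. The feature names, weights, and applicant data are hypothetical illustrations, not any real scorecard.

```python
import math

# Hypothetical scorecard weights: fixed, documented, fully inspectable.
WEIGHTS = {
    "debt_to_income": -2.5,   # higher debt-to-income lowers the score
    "years_of_history": 0.8,  # longer credit history raises it
    "recent_defaults": -3.0,  # recent defaults weigh heavily against approval
}
INTERCEPT = 1.0

def explain_decision(applicant: dict) -> dict:
    """Score an applicant and return a per-feature breakdown,
    i.e. the auditable "decision path" behind the number."""
    contributions = {
        name: WEIGHTS[name] * applicant[name] for name in WEIGHTS
    }
    logit = INTERCEPT + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))  # logistic link
    return {
        "probability_of_approval": probability,
        "contributions": contributions,
    }

applicant = {"debt_to_income": 0.4, "years_of_history": 6, "recent_defaults": 0}
report = explain_decision(applicant)
```

When a regulator asks why this applicant was approved, the answer is no longer "it's a black box": the `contributions` dictionary states exactly how much each feature moved the decision.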
Principle Two: Transforming "Friction" into an Engine for Innovation with Deep Customizability
This is the fundamental difference between a "glass box" and a "black box." An open analytics engine should be viewed as a "workbench," not a closed "toolbox." It not only provides a rich library of standard models; it also lets banks seamlessly integrate their own hard-won intellectual property and models. This empowers the bank's internal experts to redesign workflows, continuously fine-tune models, and adapt business logic deeply around the AI. This is the value-creation process that turns "friction" into an asset: a continuous innovation cycle that directly boosts AI ROI.
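The "workbench" idea can be sketched as a simple registry that treats vendor-supplied and in-house models uniformly, so a bank's own quants can plug in proprietary logic without vendor involvement. The class, model names, and formulas below are hypothetical illustrations of the pattern, not any product's API.

```python
from typing import Callable, Dict, List

class AnalyticsWorkbench:
    """A registry in which standard and in-house models sit side by side."""

    def __init__(self) -> None:
        self._models: Dict[str, Callable[[dict], float]] = {}

    def register(self, name: str, model: Callable[[dict], float]) -> None:
        # Internal experts register their own IP; no black box mediates it.
        self._models[name] = model

    def score(self, name: str, features: dict) -> float:
        return self._models[name](features)

    def available(self) -> List[str]:
        return sorted(self._models)

bench = AnalyticsWorkbench()

# A "standard library" model shipped with the platform (illustrative).
bench.register("standard_liquidity",
               lambda f: f["liquid_assets"] / f["liabilities"])

# A proprietary model added by the bank's own team (illustrative).
bench.register("inhouse_stress",
               lambda f: f["liquid_assets"] * 0.6 / f["liabilities"])

ratio = bench.score("standard_liquidity",
                    {"liquid_assets": 120.0, "liabilities": 100.0})
```

Because both kinds of model pass through the same interface, fine-tuning or replacing a model is an ordinary code change the bank controls, which is the "beneficial friction" of iteration rather than a vendor dependency.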
Principle Three: Building a Secure "Innovation Sandbox" with Absolute Sovereignty
"Beneficial friction" requires a secure environment to thrive. A true "glass box" solution must be a true on-premise AI platform, deployable in the bank's own data center or its designated private cloud. This means all core data and model iterations are completed within the enterprise's own security perimeter. This not only completely resolves the "bad friction" of data sovereignty but also provides the enterprise with a secure "innovation sandbox," allowing teams to confidently conduct the boldest experiments and derive the deepest insights using their most sensitive data.
Conclusion: From "Choosing AI" to "Choosing the Right AI Philosophy"
The AI journey for the APAC banking industry has reached a watershed moment. The real challenge has shifted from "whether to use AI" to "which AI philosophy to choose." The fascination with "sleek but shallow" black-box solutions is the path to the 95% failure rate.
The "glass box" paradigm represents a more mature and profound strategy. It acknowledges and embraces "beneficial friction," viewing it as the core engine for corporate learning, adaptation, and the creation of a unique competitive advantage. It is about trust, control, and endogenous innovation. When an AI platform can empower internal teams to actively create, iterate, and optimize, the long-term return on investment will far exceed that of any plug-and-play "black box."
In the future, when evaluating any major AI investment, three strategic questions are worth deep consideration:
Does the solution aim to eliminate all friction, or does it aim to harness "beneficial friction"?
Does the enterprise have absolute control over its own data, models, and AI governance processes?
Is it empowering internal teams to become the drivers of innovation, or is it reducing them to passive recipients of technology?
The answers to these three questions will clearly mark the only path to joining the successful 5%.
And this is the core philosophy behind our creation of COMPASS. Through an Open Model Financial Data Analytics (FDA) platform, COMPASS turns the "glass box" concept into reality, returning control, insight, and creativity to the enterprise itself.
Follow us on LinkedIn or subscribe to "FinTech Insights" for more on financial technology.
References
Forbes. (2025). MIT Finds 95% Of GenAI Pilots Fail Because Companies Avoid Friction.
Forbes. (2025). From Black Box To Glass Box: Navigating Compliance Transparency In Banking AI.
Disclaimer: This article is for informational purposes only and is not investment or professional advice. Information and views are from public sources we believe to be reliable, but we do not guarantee their accuracy or completeness. Content is subject to change. Readers should exercise their own judgment and consult a professional advisor. Any action taken is at your own risk.
Copyright © 2025 Axisoft. All Rights Reserved