
Explainable AI (XAI): Challenges & How to Overcome Them

Artificial intelligence and machine learning technologies are making a big impact in the banking industry, as their ability to process millions and even billions of transactions in mere seconds is unparalleled. However, the astounding volume of data being processed brings pitfalls of its own, particularly when these systems make decisions on complex tasks.

An EnterpriseTalk.com post looks into the intricacies of Explainable AI (XAI) -- a key technology for the banking industry:


Explainability is a logical, significant, and fascinating aspect of AI. Explainable AI (XAI) is a robust descriptive tool, offering far deeper insight than traditional linear models can provide. But for all its benefits, XAI comes with its own set of challenges.

As noted in a blog post on Modernizing Omnichannel Check Fraud Detection, XAI enables transparency for check fraud detection, but there are still challenges to overcome.

Challenges of XAI

The post goes on to identify the challenges of XAI and how to overcome them.

Overcoming the Challenges of XAI

While the obstacles are daunting, the EnterpriseTalk.com post provides two possible ways to overcome the challenges of XAI:

Model-agnostic technique

This strategy can be applied to any algorithm or learning method. The model-agnostic approach lets enterprises treat the internal workings of the model as an unknown black box, explaining its behavior purely from its inputs and outputs.
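As an illustration, here is a minimal sketch of one common model-agnostic method, permutation importance, applied to a black-box classifier via scikit-learn. The transaction-style feature names are hypothetical and chosen only for flavor.

```python
# A minimal sketch of a model-agnostic explanation: permutation importance
# treats the trained model as a black box, so the same code works for any
# classifier. The feature names below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["amount", "payee_history", "account_age",
                 "image_quality", "velocity", "channel"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy; a large
# drop means the black-box model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```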

Model-specific technique

This strategy can be applied only to a particular algorithm or family of algorithms. The model-specific approach treats the internal workings of the model as a white box, drawing on knowledge of its structure to explain its decisions.
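For contrast, the sketch below shows a model-specific explanation: reading the coefficients of a linear model directly, which works only because we know the model's internal structure. Again, the feature names are illustrative assumptions.

```python
# A minimal sketch of a model-specific ("white box") explanation: a linear
# model's learned coefficients can be read directly, something a
# model-agnostic method never assumes. Feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
feature_names = ["amount", "account_age", "image_quality", "velocity"]

pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
coefs = pipe.named_steps["logisticregression"].coef_[0]

# The sign and magnitude of each coefficient are the explanation:
# positive pushes toward the "flag" class, negative toward "clear".
for name, w in zip(feature_names, coefs):
    print(f"{name}: {w:+.3f}")
```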

Whether the approach is model-agnostic or model-specific, global interpretation concentrates on common patterns across all data points, while local interpretation concentrates on explaining specific individual data points.
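To make that distinction concrete, the sketch below (reusing the model, test data, and feature names from the model-agnostic sketch above) produces a local explanation by nudging each feature of a single data point and watching how the score for that one item moves.

```python
# Reuses `model`, `X_test`, and `feature_names` from the model-agnostic
# sketch above; everything here is illustrative, not production code.

def local_sensitivity(model, x, feature_names, delta=0.5):
    """Change in the positive-class score when each feature of a single
    data point is nudged by `delta` -- a simple local explanation."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    effects = {}
    for i, name in enumerate(feature_names):
        x_pert = x.copy()
        x_pert[i] += delta
        effects[name] = model.predict_proba(x_pert.reshape(1, -1))[0, 1] - base
    return base, effects

base, effects = local_sensitivity(model, X_test[0], feature_names)
print(f"score for this item: {base:.3f}")
for name, change in effects.items():
    print(f"{name}: {change:+.3f}")
```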

Image Source: Brighterion

The "Black Box" vs. "White Box"

In a previous blog post, we looked at the "black box" issue -- the problematic alternative to Explainable AI.

...A black box is defined as a solution where users know the inputs and the final output, but have no idea what process was used to reach the decision. It's only when a mistake occurs that the complexity is noticed.


A "black box," of course, is not a viable solution for FI's.

From an FI's perspective, not only is the institution responsible for how the AI performs for its customers, but it must also address a litany of compliance and regulatory requirements, such as the Equal Credit Opportunity Act (ECOA) in the U.S. and the Coordinated Plan on Artificial Intelligence in the EU. And as AI adoption grows, model explainability will become increasingly important, driving new laws and regulations.

To overcome the challenges of the "black box," financial institutions must utilize a "white box" with good model governance.

This overarching umbrella defines how an organization regulates access, puts policies in place, and tracks the activities and outputs of its AI models. Good model governance reduces risk in the event of a compliance audit and creates the framework for ethical, transparent AI in banking that guards against bias.

As one expert quoted in the post cautions: “It’s important that you don’t cause or make decisions based on discriminatory factors. Do significant reviews and follow guidelines to ensure no sensitive data is used in the model, such as zip code, gender, or age.”

As more and more financial institutions deploy artificial intelligence and machine learning to perform increasingly complex tasks and decision-making, it will be critical that they can understand the results and ensure the information remains private.

For check processing, these technologies assign "confidence scores" to each field extracted from the check image -- giving financial institutions insight into how the technology is interpreting the information. The technology also provides the reasoning for why an item was rejected, such as poor image quality, helping banks understand the system's decision-making.
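To make this concrete, here is a minimal sketch of how a downstream system might act on per-field confidence scores. The field names, threshold, and reject reasons are illustrative assumptions, not the actual output format of any recognition engine.

```python
# A hedged sketch of consuming per-field confidence scores from a
# check-image recognition engine. All field names, values, thresholds,
# and reasons below are hypothetical.
extraction = {
    "amount":    {"value": "1,250.00",         "confidence": 0.99},
    "date":      {"value": "03/14/2023",       "confidence": 0.97},
    "payee":     {"value": "ACME SUPPLY CO",   "confidence": 0.62},
    "micr_line": {"value": "021000021 001234", "confidence": 0.95},
}
CONFIDENCE_FLOOR = 0.90  # hypothetical review threshold

def route_item(extraction, image_quality_ok=True):
    """Accept, reject, or queue a check for manual review, with a reason."""
    if not image_quality_ok:
        return "reject", "image quality below usability threshold"
    low = [f for f, r in extraction.items()
           if r["confidence"] < CONFIDENCE_FLOOR]
    if low:
        return "review", f"low-confidence fields: {', '.join(low)}"
    return "accept", "all fields above confidence floor"

print(route_item(extraction))  # -> ('review', 'low-confidence fields: payee')
```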


And let's not forget check fraud, where image-based check fraud detection already utilizes Explainable AI. As the technology interrogates and analyzes check images, it compares each item to previously cleared and flagged checks, while also extracting data from the check and analyzing that information for signs of fraud. This applies to check stock validation as well as signature verification.

Just as important, the technology is able to provide transparency to fraud analysts, including the reasons why an item was rejected or flagged for review. This transparency is reflected in the scoring on fraud review platforms.
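As a closing illustration, the sketch below shows one way a review platform could pair a fraud score with analyst-readable reason codes. The signals, weights, and reason codes are hypothetical, not any vendor's actual scoring model.

```python
# A hedged sketch of surfacing explainable reasons alongside a fraud
# score, as a fraud review platform might. Signals, weights, and reason
# codes are illustrative assumptions.
FRAUD_SIGNALS = {  # signal -> (weight, reason code shown to the analyst)
    "stock_mismatch":     (0.45, "check stock differs from prior cleared items"),
    "signature_mismatch": (0.35, "signature deviates from reference signatures"),
    "amount_anomaly":     (0.20, "amount far outside account's history"),
}

def score_item(signals):
    """Combine detector outputs (0-1) into a score plus readable reasons."""
    score = sum(FRAUD_SIGNALS[name][0] * value
                for name, value in signals.items())
    reasons = [FRAUD_SIGNALS[name][1]
               for name, value in signals.items() if value > 0.5]
    return round(score, 2), reasons

# Prints the combined score and the reason codes for signals that fired.
score, reasons = score_item(
    {"stock_mismatch": 0.9, "signature_mismatch": 0.2, "amount_anomaly": 0.7}
)
print(score, reasons)
```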
