“Accountable” AI is boosting trust in the financial sector


liu, tempo | 2021-09-15 10:32:43 | ozmca.com

AI is proving its value in the financial services industry, with applications that have expanded from identifying fraud and fighting financial crime to delivering innovative digital experiences for customers. However, this evolution from traditional rules-based models to decision-making with machine learning models is also creating new challenges for financial institutions.

Without proper steps to ensure the credibility of decisions made by machine learning models, many organizations could be unknowingly exposed to reputational and financial risk. “Black box” AI technologies that lack interpretability and transparency can make it impossible for an organization to know why and how decisions are made, let alone to detect when those decisions go wrong.


AI applications are built to produce decisions, and as people rely on AI more and more in their daily lives, it is becoming increasingly important to understand how those decisions are reached. This is where the concept of “explainable” AI comes in. Explainable AI means that humans can readily understand the path by which an AI system arrives at a decision, for example through dynamically generated diagrams or text descriptions. The more explainable an AI system is, the easier it is for people to understand why it makes certain decisions or judgments.
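
To make that concrete, the sketch below uses the open-source shap library to break one decision from a toy credit model into per-feature contributions, which is one common way such text or diagram explanations are generated. The model, the synthetic data, and the feature names are illustrative assumptions, not details from this article.

```python
# A minimal sketch of decision explanation, assuming a tree-based
# credit model and the open-source `shap` library. The model, data,
# and feature names are hypothetical illustrations.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic applicant data standing in for a real decision model's inputs.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 500),
    "debt_ratio": rng.uniform(0.0, 1.0, 500),
    "age": rng.integers(21, 70, 500),
})
y = (X["debt_ratio"] > 0.6).astype(int)  # synthetic "default" label

model = GradientBoostingClassifier().fit(X, y)

# Explain a single decision: which features pushed it toward approval
# or rejection, and by how much.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: contribution {contribution:+.3f}")
```

An analyst or customer can then be shown exactly which inputs drove the outcome, rather than a bare approve/reject result.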


Financial institutions are now at a crossroads. A new study by IBM and Morning Consult found that 44% of business organizations in the financial sector say limited expertise and skills are their biggest challenge to successfully deploying AI technology. Throughout the pandemic, pressure has increased to adopt new technologies that improve operational efficiency and differentiate financial institutions from their competitors. As more organizations deploy AI, it is important to ensure that its output is fair and equitable, to increase trust in AI decisions, and to scale up AI deployments to optimize business operations.


How can the financial industry increase its trust in AI?

First and foremost, before any financial institution begins integrating AI into its business operations, it must understand ethical and trustworthy AI from the ground up, starting with how it defines its policies and regulations. Financial services companies have recognized this: 85% of respondents in IBM’s Global AI Adoption Index 2021 report said it was important to their business to be able to explain how AI makes decisions.


Financial organizations should be able to clearly define what “fairness” really means in their industry and how to monitor it. Likewise, organizations should be clear about where they stand today as a corporate entity and which policies reflect that stance. With this first step completed, a financial institution can begin to explore specific use cases for AI models. For example, consider how an AI model behaves in various credit risk scenarios: which parameters influence its decision? Does it unfairly correlate risk with demographics?
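
As one illustration of what monitoring that last question can look like, the sketch below applies the widely cited “four-fifths” (80%) rule to approval rates across demographic groups. The column names, data, and threshold are hypothetical; a real institution’s policy would define its own fairness metric and limits.

```python
# A minimal sketch of a fairness probe for a credit model: compare
# approval rates across demographic groups using the "four-fifths"
# (80%) rule. Column names, data, and the 0.8 threshold are illustrative.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, approved_col: str) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates.min() / rates.max()

# Hypothetical decision log: one row per applicant.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact(decisions, "group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Warning: approval rates differ materially across groups.")
```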


All of these elements need careful consideration and must be kept in mind throughout the AI life cycle, from building and validating models to deploying and using them. Today, organizations can also use platforms to help guide the process, ensuring that models are fair and unbiased (within the limits of fairness prescribed by policy) and giving regulators the ability to visualize and interpret their decisions. However, even with these tools on the market, 63% of financial services organizations surveyed said that AI governance and management tools that do not work across all of their data environments are a barrier to deploying trusted AI models.


Financial institutions with greater confidence in their AI models can spend less energy on onerous verification tasks and focus on higher-value work. Fraud detection, for example, is a common use case for AI in financial services today, but the false positive rate remains high. If an AI system can explain why it believes a case is fraudulent and, more importantly, demonstrate that it does not systematically favor one group over another, human employees can spend less time verifying results and more time delivering higher-value work.
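
One simple way to probe that second property is to compare the model’s false positive rates across customer groups using the analysts’ final verdicts, as in the sketch below; the field names and toy data are assumptions for illustration.

```python
# A minimal sketch of checking that fraud false positives are not
# concentrated in one customer group. Field names and data are
# hypothetical.
import pandas as pd

# Hypothetical review log: the model's flag vs. the analyst's verdict.
cases = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B"],
    "flagged": [1,   1,   0,   1,   0,   0],
    "fraud":   [1,   0,   0,   0,   0,   0],
})

# False positive rate per group: flagged-but-legitimate cases
# divided by all legitimate cases in that group.
legit = cases[cases["fraud"] == 0]
fpr = legit.groupby("group")["flagged"].mean()
print(fpr)  # a large gap between groups suggests systematic bias
```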


Do start-ups need to take a different approach than traditional financial institutions?

Ultimately, whether you are a traditional financial institution or a fledgling start-up, you need to pay equal attention to ensuring that your AI is fair, ethical, and transparent. The most striking difference is that traditional financial institutions already have established model risk management practices, which typically apply to traditional rules-based models. They also already have technology and processes in place, so changing approach is often more challenging. However, regardless of which development and deployment tools you use, you must consider how to extend existing model risk management practices to support AI/ML models.


Many fintech start-ups, by contrast, are not tied to existing technology investments, giving them more freedom to choose best-in-class development, deployment, and monitoring platforms with these capabilities built in.


The future of AI in the financial industry

The pandemic has served as a catalyst for organizations that still considered investing in AI a “risky move” to recognize the technology’s benefits for increasing efficiency and reducing the strain on remote workers. Currently, 28% of companies in the financial industry say they have actively deployed AI as part of their business operations. Despite this rapid, large-scale adoption, 44% of enterprises say they are still in the early stages of exploring an AI solution, and 22% are not currently using or exploring one at all. This means that most financial firms are still developing or analyzing proof-of-concept (PoC) projects for future growth and use.


As we move into the post-pandemic era, organizations need to be more vigilant than ever to ensure that their AI technologies operate in a “responsible” manner and do not contribute to systemic injustice. Forthcoming laws and regulations from governments around the world will also continue to focus on how organizations, particularly in the financial sector, can use this technology responsibly.


In short, there are no shortcuts to gaining widespread trust in AI decisions, but organizations can start by taking sustained, deliberate steps to address bias and unfairness and to improve interpretability.
