Artificial intelligence is proving its value in the financial services industry. Its applications now range from identifying fraud and combating financial crime to delivering innovative digital experiences to customers. However, the evolution from traditional rule-based models to machine learning models for decision-making is also creating new problems for financial institutions.
Without appropriate steps to ensure that the decisions made by machine learning models are trustworthy, many organizations may be unknowingly exposed to reputational and financial risks. "Black box" AI that lacks interpretability and transparency leaves an organization unable to understand how or why a decision was reached, let alone when it went wrong.
The goal of applying AI is to produce decisions and judgments. As people rely on AI more and more in daily life, understanding how those decisions are made becomes increasingly important. This is where the concept of "interpretable" AI comes in: AI whose reasoning humans can readily understand, for example through dynamically generated charts or textual descriptions. The more interpretable an AI system is, the easier it is for people to understand why it reached a particular decision or judgment.
Today, financial institutions are at a crossroads. A recent study by IBM and Morning Consult found that 44% of organizations in the financial sector cited limited expertise and skills as the biggest challenge in successfully deploying AI. Throughout the pandemic, pressure has been mounting to adopt new technologies that improve operational efficiency and help financial institutions stand out from competitors. As more organizations deploy AI, it is important to ensure that outputs are fair and impartial, to build trust in AI decision-making, and to scale AI deployments that optimize business operations.
How can the financial industry enhance its trust in artificial intelligence?
First and foremost, before any financial institution begins integrating AI into its business operations, it must understand ethical and trustworthy AI from the ground up: basic definitions, policies, and norms. Financial services companies have recognized this: in IBM's 2021 Global AI Adoption Index, 85% of respondents said it is important to their business to be able to explain how AI reaches its decisions.
Financial organizations should be able to clearly define what "fairness" truly means in their industry and how to monitor it. Likewise, organizations should be clear about where they stand as corporate entities and which policies reflect that position.
With that first step complete, financial institutions can begin examining specific use cases for AI models. For example, consider an AI model's behavior across various credit-risk scenarios. Which parameters drive its decisions? Does it unfairly link risk to demographic attributes?
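One simple way to probe the demographic question above is a demographic-parity check: compare a model's approval rates across groups and flag it for review if the gap exceeds a policy-defined threshold. The sketch below is a minimal illustration in Python; the data, group labels, and 0.2 threshold are all hypothetical, and real audits would use far larger samples and the fairness definitions an institution has formally adopted.

```python
# Hypothetical credit decisions, each tagged with a demographic group.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(rows, group):
    """Fraction of applicants in `group` whose application was approved."""
    subset = [r for r in rows if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(decisions, "A")
rate_b = approval_rate(decisions, "B")
parity_gap = abs(rate_a - rate_b)

# Flag the model for review if the gap exceeds a policy-defined
# threshold (0.2 here is purely an example value).
status = "REVIEW" if parity_gap > 0.2 else "OK"
print(f"gap={parity_gap:.2f} -> {status}")
```

In practice this kind of check would be one of several fairness metrics monitored continuously, not a one-off calculation.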
All of these elements need to be carefully considered and kept in mind throughout the AI lifecycle, from building and validating models to deploying and using them. Today, organizations can also use dedicated platforms to help guide this process, ensure that models remain fair and unbiased (within the bounds of fairness defined in policy), and give regulators the ability to visualize and explain decisions. Yet despite the availability of such tools, 63% of surveyed financial services organizations said that the lack of AI governance and management tools that work across all data environments is an obstacle to deploying trustworthy AI models.
If financial institutions have more confidence in their AI models, they can spend less energy on rote tasks and focus on higher-value work. For example, fraud detection is a common AI use case in financial services, but false-positive rates remain very high. If an AI system can explain why it considers a case fraudulent, and, more importantly, demonstrate that it does not systematically disfavor any group, human employees can spend less time verifying results and more time on higher-value work.
Do startups need to take a different approach from traditional financial institutions?
Ultimately, whether you are a traditional financial institution or a fledgling startup, you need to give AI the same careful attention to ensure fairness, ethics, and transparency.
The most prominent difference is that traditional financial institutions already have model risk management practices in place, which are usually designed for traditional rule-based models. And because their technologies and processes are already established, changing approach is often more challenging. Whatever development and deployment tools you use, however, you must consider how to extend existing model risk management practices to support AI/ML models.
Many fintech startups, by contrast, have no such existing technology investments to weigh, which gives them more freedom to choose best-in-class development, deployment, and monitoring platforms with these capabilities built in.
The future of AI in the financial industry
For those enterprises that still regarded AI investment as a "risky move," the pandemic acted as a catalyst, making them realize the benefits of AI for improving efficiency and easing the burden on remote workers. Today, 28% of companies in the financial industry say they have actively deployed AI as part of their business operations. Yet even as adoption accelerates, 44% of companies say they are still in the early stages of exploring AI solutions, and 22% are neither using nor exploring them. This means that most financial companies today are building proofs of concept (POCs) or analyzing their data for future growth and use.
As we enter the "post-pandemic" era, organizations need to be more vigilant than ever to ensure that their AI operates responsibly rather than contributing to systemic injustice. Laws and regulations being drafted by governments around the world will also continue to scrutinize how organizations, especially in the financial industry, use this technology responsibly.
In short, there is no shortcut to earning broad trust in AI decision-making, but organizations can start by taking sustained, thoughtful steps to address bias and unfairness and to improve explainability.