Nearly two years after a global pandemic sent most banking customers online, the majority of financial institutions appear to be embracing digital transformation. But many still have a long way to go. For example, a recent survey of mid-sized U.S. financial institutions by Cornerstone Advisors found that 90% of respondents have launched or are developing a digital transformation strategy, but only 36% said they were at least halfway there. I believe one reason for the adoption lag is the reluctance of many banks to use artificial intelligence (AI) and machine learning technologies.
Organizations of all sizes can adopt ethical AI
The responsible application of explainable and ethical AI and machine learning is key to analyzing and ultimately monetizing the vast customer data that is a byproduct of any institution’s effective digital transformation. Yet, according to the Cornerstone study cited above, only 14% of institutions that are halfway or more into their digital transformation journey (5% of total respondents) have deployed machine learning.
Low adoption rates may reflect C-suite reluctance to use AI, which is not entirely unfounded: AI is viewed with deep suspicion even by many of the workers who deploy it, with research finding that 61% of knowledge workers believe the data that powers AI is biased.
Yet ignoring AI is not a feasible avoidance strategy either, as the technology is already widely adopted across the business world. A recent PwC survey of U.S. business and technology executives found that 86% of respondents see AI as a “mainstream technology” in their business. More importantly, AI and machine learning present the best available solution to a problem faced by many financial institutions: having established anytime, anywhere digital access – and collected the high volume of customer data it produces – they often find they are not using that data to serve customers better than before.
The impact of this mismatch between increased digital access and unmet customer needs can be seen in FICO research, which revealed that while 86% of consumers are satisfied with their bank’s services, 34% have at least one financial account with, or engage in “shadow” financial activity through, a non-bank financial service provider. At the same time, 70% say they are “likely” or “very likely” to open an account with a competing provider that addresses unmet needs such as expert advice, automated budgeting, personalized savings plans, online investments and electronic money transfers.
The solution, which has been gaining momentum throughout 2021, is for financial institutions of all sizes to implement AI that is explainable, ethical and accountable, and incorporates interpretable, auditable and humble techniques.
Why Ethics by Design is the solution
September 15, 2021 marked a major step towards a global standard for responsible AI with the publication of the IEEE 7000-2021 Standard. It provides businesses (including financial service providers) with an ethical framework for implementing artificial intelligence and machine learning by setting standards for:
- The quality of the data used in the AI system;
- The selection processes feeding the AI;
- Design of algorithms;
- The evolution of AI logic;
- The transparency of AI.
As Chief Analytics Officer at one of the world’s leading developers of AI decision-making systems, I have advocated Ethics by Design as the standard in AI modeling for years. The framework established by IEEE 7000 is long overdue. As it gains broad adoption, I see three complementary branches of AI becoming mainstream in 2022:
- Interpretable AI focuses on machine learning algorithms that specify which models are interpretable versus merely explainable. Explainable AI applies post-hoc algorithms to a machine learning model to infer the behaviors that led to an outcome (usually a score), while interpretable AI specifies models that provide direct insight into the latent features that actually produced the score. This is an important distinction: interpretable machine learning allows for exact explanations (versus inferences) and, more importantly, this in-depth knowledge of specific latent features makes it possible to test the AI model for ethical treatment.
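The distinction above can be sketched in a few lines of code. This is a hypothetical illustration, not FICO’s implementation: the additive scorecard stands in for an interpretable model whose feature contributions are read directly from its structure, while the perturbation loop stands in for a post-hoc explainer that can only infer influence from outside a black box. All names (`interpretable_score`, `post_hoc_explanation`, the toy features) are invented for the example.

```python
def interpretable_score(features, weights):
    """Additive scorecard: each feature's contribution is exact, by construction."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

def post_hoc_explanation(black_box, features, delta=1.0):
    """Infer each feature's per-unit influence by perturbation (an approximation)."""
    base = black_box(features)
    influence = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        influence[name] = black_box(perturbed) - base
    return influence

# Toy stand-in for an opaque ML model.
def black_box(f):
    return 3.0 * f["utilization"] + 0.5 * f["tenure_years"]

features = {"utilization": 0.8, "tenure_years": 4.0}
weights = {"utilization": 3.0, "tenure_years": 0.5}

score, exact = interpretable_score(features, weights)   # exact contributions
inferred = post_hoc_explanation(black_box, features)    # inferred influences
```

Note that even on this trivially linear black box, the two views answer subtly different questions: the scorecard reports each feature’s total contribution to the score, while the perturbation method reports sensitivity to a unit change. On a nonlinear model the post-hoc inference can diverge further from the truth, which is the core of the interpretable-versus-explainable argument.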
- Auditable AI produces a trail of details about itself – variables, data, transformations and model processes, including algorithm design, machine learning and model logic – making the model easy to audit (hence the name). Meeting the transparency requirement of the IEEE 7000 standard, Auditable AI relies on well-established model development governance frameworks and supporting technologies such as blockchain.
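One way such a tamper-evident development trail can work is with a hash chain, the core idea behind blockchain-backed model governance. The sketch below is a minimal, hypothetical illustration using only the standard library; the record fields and step names are invented for the example.

```python
import hashlib
import json

def append_record(trail, record):
    """Append a model-development record, chaining it to the previous entry's hash."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    payload = json.dumps(record, sort_keys=True)  # canonical serialization
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    trail.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})
    return trail

def verify_trail(trail):
    """Recompute every hash; any tampered or reordered record breaks the chain."""
    prev_hash = "0" * 64
    for entry in trail:
        payload = json.dumps(entry["record"], sort_keys=True)
        if entry["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

trail = []
append_record(trail, {"step": "data_selection", "dataset": "txns_v3"})
append_record(trail, {"step": "algorithm_design", "model": "scorecard"})
print(verify_trail(trail))                    # True

trail[0]["record"]["dataset"] = "txns_v4"     # tamper with an early record
print(verify_trail(trail))                    # False
```

Because each entry’s hash covers the previous entry’s hash, altering any record invalidates every subsequent link, which is what makes the audit trail trustworthy for regulators and model-risk reviewers.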
- Humble AI is AI that knows when it is not sure of the correct answer. Humble AI uses uncertainty measures, such as a numeric uncertainty score, to gauge a model’s confidence in its own decision, ultimately giving practitioners greater confidence in the decisions produced.
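A minimal sketch of this idea: compute a numeric uncertainty score from a model’s output probability and route low-confidence cases to a safer fallback, such as human review. The threshold, score definition and action names here are illustrative assumptions, not a published Humble AI specification.

```python
def uncertainty_score(probability):
    """0.0 = fully certain (probability near 0 or 1); 1.0 = maximally unsure (0.5)."""
    return 1.0 - 2.0 * abs(probability - 0.5)

def humble_decision(fraud_probability, threshold=0.4):
    """Act on the model only when its uncertainty is below the threshold."""
    if uncertainty_score(fraud_probability) >= threshold:
        return "refer_to_human"  # fallback strategy for unsure cases
    return "decline" if fraud_probability >= 0.5 else "approve"

print(humble_decision(0.95))  # decline: confident fraud signal
print(humble_decision(0.05))  # approve: confidently legitimate
print(humble_decision(0.55))  # refer_to_human: the model is unsure
```

The design choice is that a wrong confident answer is costlier than a deferral, so the system trades a little automation for the ability to say “I don’t know.”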
When properly implemented, Interpretable AI, Auditable AI and Humble AI are symbiotic: Interpretable AI takes the guesswork out of what drives a machine learning model, enabling explainability and ethical review; Auditable AI records a model’s strengths, weaknesses and development decisions transparently, and establishes the criteria and uncertainty measures that Humble AI evaluates in production. Together, they provide financial services institutions and their customers with not only greater confidence in digital transformation tools, but also the benefits those tools can bring.
About the Author: Scott Zoldi is Chief Analytics Officer at FICO, responsible for the analytical development of FICO’s products and technology solutions, including the FICO Falcon Fraud Manager product, which protects approximately two-thirds of payment card transactions worldwide against fraud. At FICO, Scott has authored more than 120 patent applications, with 71 patents granted and 49 pending. Scott is actively involved in developing new analytic products using artificial intelligence and machine learning, many of which leverage new AI innovations such as adaptive analytics, collaborative profiling, deep learning and self-learning models. Scott has recently focused on streaming self-learning analytics for real-time detection of cybersecurity attacks and money laundering. Scott serves on two boards, Tech San Diego and the Cyber Center of Excellence. Scott earned his Ph.D. in theoretical physics from Duke University. Follow Scott’s latest thoughts on Twitter @ScottZoldi and on LinkedIn.