Managing Bias in AI: A Strategic Risk-Management Approach for Banks
AI is set to transform the banking industry, using vast amounts of data to build models that sharpen decision making, tailor services, and strengthen risk management. According to the EIU, this could generate more than $250 billion in value for the banking industry. But there is a downside: machine-learning (ML) models amplify some elements of model risk. And although many banks, particularly those operating in jurisdictions with stringent regulatory requirements, have validation frameworks and practices in place to assess and mitigate the risks associated with traditional models, these are often insufficient for the risks associated with machine-learning models. The added risk brought on by the complexity of algorithmic models can be mitigated by making well-targeted modifications to existing validation frameworks.
Conscious of the problem, many banks are proceeding cautiously, restricting the use of ML models to low-risk applications, such as digital marketing. Their caution is understandable given the potential financial, reputational, and regulatory risks. Banks could, for example, find themselves in violation of anti-discrimination laws and incur significant fines, a concern that pushed one bank to ban its HR department from using a machine-learning resume screener. A better approach, however, and ultimately the only sustainable one if banks are to reap the full benefits of machine-learning models, is to enhance model-risk management.
Regulators have not issued specific instructions on how to do this. In the United States, they have stipulated that banks are responsible for ensuring that risks associated with machine-learning models are appropriately managed, while stating that existing regulatory guidelines, such as the Federal Reserve’s “Guidance on Model Risk Management” (SR11-7), are broad enough to serve as a guide. Enhancing model-risk management to address the risks of machine-learning models will require policy decisions on what to include in a model inventory, as well as determining risk appetite, risk tiering, roles and responsibilities, and model life-cycle controls, not to mention the associated model-validation practices. The good news is that many banks will not need entirely new model-validation frameworks. Existing ones can be fitted for purpose with some well-targeted enhancements.
New Risk-Mitigation Practices for ML Models
There is no shortage of news headlines revealing the unintended consequences of new machine-learning models. Algorithms that created a negative feedback loop were blamed for the 2016 "flash crash" that sent the British pound down 6 percent, for example, and it was reported that a self-driving car tragically failed to properly identify a pedestrian walking her bicycle across the street. The cause of the risks that materialized in these machine-learning models is the same as the cause of the amplified risks that exist in all machine-learning models, whatever the industry and application: increased model complexity. Machine-learning models typically act on vastly larger data sets, including unstructured data such as natural language, images, and speech. The algorithms are typically far more complex than their statistical counterparts and often require design decisions to be made before the training process begins. And machine-learning models are built using new software packages and computing infrastructure that require more specialized skills. The response to such complexity does not have to be overly complex, however. If properly understood, the risks associated with machine-learning models can be managed within banks' existing model-validation frameworks.
Here are strategic approaches for enterprises to ensure that the specific risks associated with machine learning are addressed:
Demystification of "Black Boxes": Machine-learning models have a reputation for being "black boxes." Depending on the model's architecture, the results it generates can be hard to understand or explain. One bank worked for months on a machine-learning product-recommendation engine designed to help relationship managers cross-sell. But because the managers could not explain the rationale behind the model's recommendations, they disregarded them. They did not trust the model, which in this situation meant wasted effort and perhaps wasted opportunity. In other situations, acting upon (rather than ignoring) a model's less-than-transparent recommendations could have serious adverse consequences.
The degree of demystification required is a policy decision for banks to make based on their risk appetite. They may choose to hold all machine-learning models to the same high standard of interpretability or to differentiate according to the model's risk. In the United States, models that determine whether to grant credit to applicants are covered by fair-lending laws. The models therefore must be able to produce clear reason codes for a refusal. On the other hand, banks might well decide that a machine-learning model's recommendation to place a product advertisement on the mobile app of a given customer poses so little risk to the bank that understanding the model's reasons for doing so is not important. Validators also need to ensure that models comply with the chosen policy. Fortunately, despite the black-box reputation of machine-learning models, significant progress has been made in recent years to help ensure their results are interpretable. A range of approaches can be used, based on the model class:
- Linear and monotonic models (for example, linear-regression models): linear coefficients reveal the dependence of the result on the inputs.
- Nonlinear and monotonic models (for example, gradient-boosting models with a monotonic constraint): restricting inputs so they have either a rising or falling relationship globally with the dependent variable simplifies the attribution of inputs to a prediction.
- Nonlinear and nonmonotonic models (for example, unconstrained deep-learning models): methodologies such as local interpretable model-agnostic explanations (LIME) or Shapley values help ensure local interpretability.
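As a concrete illustration of the Shapley-value approach for the nonmonotonic case, the Python sketch below computes exact Shapley attributions for a toy scoring model by enumerating feature coalitions. The function and the toy model are illustrative, not any bank's actual method; exact enumeration is exponential in the number of features, so practical work relies on approximating libraries.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for one prediction.

    predict  -- model function taking a feature vector (list)
    x        -- instance to explain
    baseline -- reference values standing in for "absent" features
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                chosen = set(subset)
                # Weight of this coalition in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if j in chosen or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in chosen else baseline[j] for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy linear scorer (illustrative): attributions must sum to f(x) - f(baseline).
model = lambda v: 3.0 * v[0] + 2.0 * v[1] - 1.0 * v[2]
x, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
print([round(v, 6) for v in shapley_values(model, x, base)])  # [3.0, 4.0, -3.0]
```

By construction, the attributions sum to the difference between the model's output at the instance and at the baseline, the additivity property that makes Shapley values attractive for producing reason codes.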
Bias: A model can be influenced by four main types of bias: sample, measurement, and algorithmic bias, and bias against groups or classes of people. The latter two types, algorithmic bias and bias against people, can be amplified in machine-learning models. For example, the random-forest algorithm tends to favor inputs with more distinct values, a bias that elevates the risk of poor decisions. One bank developed a random-forest model to assess potential money-laundering activity and found that the model favored fields with a large number of categorical values, such as occupation, when fields with fewer categories, such as country, were better able to predict the risk of money laundering.
To address algorithmic bias, model-validation processes should be updated to ensure appropriate algorithms are selected in any given context. In some cases, such as random-forest feature selection, there are technical solutions. Another approach is to develop "challenger" models, using alternative algorithms to benchmark performance. To address bias against groups or classes of people, banks must first decide what constitutes fairness. Four definitions are commonly used, though which to choose may depend on the model's use:
- Demographic blindness: decisions are made using a limited set of features that are highly uncorrelated with protected classes, that is, groups of people protected by laws or policies.
- Demographic parity: outcomes are proportionally equal for all protected classes.
- Equal opportunity: true-positive rates are equal for each protected class.
- Equal odds: true-positive and false-positive rates are equal for each protected class.
Validators then need to ascertain whether developers have taken the necessary steps to ensure fairness. Models can be tested for fairness and, if necessary, corrected at each stage of the model-development process, from the design phase through to performance monitoring.
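As a minimal sketch of how a validator might test the last three definitions, the hypothetical helper below computes the per-group rates each one compares. Names and data are illustrative; production validation would use a dedicated fairness toolkit.

```python
def fairness_report(y_true, y_pred, group):
    """Per-group selection rate, TPR, and FPR.

    Demographic parity compares selection rates across groups;
    equal opportunity compares TPRs; equal odds compares TPRs and FPRs.
    """
    report = {}
    for g in sorted(set(group)):
        idx = [i for i, gi in enumerate(group) if gi == g]
        pos = [i for i in idx if y_true[i] == 1]   # actual positives in group g
        neg = [i for i in idx if y_true[i] == 0]   # actual negatives in group g
        report[g] = {
            "selection_rate": sum(y_pred[i] for i in idx) / len(idx),
            "tpr": sum(y_pred[i] for i in pos) / len(pos) if pos else None,
            "fpr": sum(y_pred[i] for i in neg) / len(neg) if neg else None,
        }
    return report

# Toy labels and predictions for two groups, A and B (illustrative).
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(fairness_report(y_true, y_pred, group))
```

In this toy example the two groups have equal selection rates, so demographic parity holds, yet their true-positive and false-positive rates differ, so equal opportunity and equal odds are violated, which illustrates why the choice of definition matters.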
Feature engineering: This is often much more complex in the development of machine-learning models than in traditional models. There are three reasons why. First, machine-learning models can incorporate a significantly larger number of inputs. Second, unstructured data sources such as natural language require feature engineering as a preprocessing step before the training process can begin. Third, increasing numbers of commercial machine-learning packages now offer so-called AutoML, which generates large numbers of complex features to test many transformations of the data. Models produced using these features run the risk of being unnecessarily complex, contributing to overfitting. For example, one institution built a model using an AutoML platform and found that specific sequences of letters in a product application were predictive of fraud. This was a completely spurious result caused by the algorithm's maximizing the model's out-of-sample performance.
In feature engineering, banks have to make a policy decision to mitigate risk. They have to determine the level of support required to establish the conceptual soundness of each feature. The policy may vary according to the model's application. For example, a highly regulated credit-decision model might require that every individual feature in the model be assessed. For lower-risk models, banks might choose to review only the feature-engineering process: for example, the processes for data transformation and feature exclusion. Validators should then ensure that features and/or the feature-engineering process are consistent with the chosen policy. If each feature is to be tested, three aspects generally need review: the mathematical transformation of model inputs, the decision criteria for feature selection, and the business rationale. For instance, a bank might decide that there is a good business case for using debt-to-income ratios as a feature in a credit model but not frequency of ATM usage, as this might penalize customers for using an advertised service.
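One way a validator might screen an AutoML-generated feature for spuriousness, sketched below with illustrative names and data, is a permutation test: if the feature's association with the target is no stronger than it is after the labels are shuffled, the feature is a poor candidate.

```python
import random

def permutation_pvalue(feature, target, n_perm=1000, seed=0):
    """Permutation test for an engineered feature against a binary target.

    A small p-value means the observed association is unlikely to be
    chance; a large one flags a potentially spurious feature.
    """
    def assoc(f, t):
        # Absolute difference in mean feature value between the two classes.
        m1 = [x for x, y in zip(f, t) if y == 1]
        m0 = [x for x, y in zip(f, t) if y == 0]
        return abs(sum(m1) / len(m1) - sum(m0) / len(m0))

    rng = random.Random(seed)
    observed = assoc(feature, target)
    shuffled = list(target)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if assoc(feature, shuffled) >= observed:
            exceed += 1
    return exceed / n_perm

# Illustrative data: one feature tracks the target, one carries no signal.
target = [1] * 8 + [0] * 8
informative = [1.0] * 8 + [0.0] * 8
uninformative = [0.5] * 16
print(permutation_pvalue(informative, target), permutation_pvalue(uninformative, target))
```

The informative feature survives the test with a near-zero p-value, while the uninformative one scores p = 1.0; a validator could apply the same screen to each engineered feature before accepting it.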
Hyperparameters: Many of the parameters of machine-learning models, such as the depth of trees in a random-forest model or the number of layers in a deep neural network, must be defined before the training process can begin. In other words, their values are not derived from the available data. Rules of thumb, parameters used to solve other problems, or even trial and error are common substitutes. Decisions regarding these kinds of parameters, known as hyperparameters, are often more complex than analogous decisions in statistical modeling. Not surprisingly, a model's performance and its stability can be sensitive to the hyperparameters selected. For example, banks are increasingly using binary classifiers such as support-vector machines in combination with natural-language processing to help identify potential conduct issues in complaints. The performance of these models and their ability to generalize can be very sensitive to the selected kernel function. Validators should ensure that hyperparameters are chosen as soundly as possible. For some quantitative inputs, as opposed to qualitative inputs, a search algorithm can be used to map the parameter space and identify optimal ranges. In other cases, the best approach to selecting hyperparameters is to combine expert judgment and, where possible, the latest industry practices.
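A grid search is the simplest such search algorithm. The sketch below is a generic, illustrative version: the scoring function stands in for an out-of-sample validation metric and is not a real model, and the parameter names are hypothetical.

```python
from itertools import product

def grid_search(evaluate, grid):
    """Score every hyperparameter combination exhaustively.

    evaluate -- function mapping a dict of hyperparameters to a
                validation score (higher is better)
    grid     -- dict of parameter name -> list of candidate values
    """
    best_params, best_score = None, float("-inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective standing in for out-of-sample model performance.
score_fn = lambda p: -(p["depth"] - 4) ** 2 - abs(p["lr"] - 0.1)
grid = {"depth": [2, 4, 8], "lr": [0.01, 0.1, 0.5]}
print(grid_search(score_fn, grid))  # ({'depth': 4, 'lr': 0.1}, 0.0)
```

For larger spaces, random or Bayesian search scales better than exhaustive enumeration, but the validation principle is the same: the chosen hyperparameters should be defensible against a mapped parameter space rather than picked ad hoc.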
Production readiness: Traditional models are often coded as rules in production systems. Machine-learning models, however, are algorithmic and therefore require more computation. This requirement is commonly overlooked in the model-development process. Developers build complex predictive models only to discover that the bank's production systems cannot support them. One US bank spent considerable resources building a deep learning–based model to predict transaction fraud, only to discover it did not meet required latency standards. Validators already assess a range of model risks associated with implementation. For machine learning, however, they will need to expand the scope of this assessment: estimating the volume of data that will flow through the model, assessing the production-system architecture (for example, graphics-processing units for deep learning), and verifying the runtime required.
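A validator's latency check can be sketched as below. The stand-in model and percentile choices are illustrative; a real assessment would replay production-scale traffic against the deployed scoring service.

```python
import statistics
import time

def latency_profile(predict, requests, runs=100):
    """Measure per-request scoring latency before promoting a model."""
    timings_ms = []
    for _ in range(runs):
        for req in requests:
            start = time.perf_counter()
            predict(req)
            timings_ms.append((time.perf_counter() - start) * 1000.0)
    timings_ms.sort()
    return {
        "p50_ms": statistics.median(timings_ms),
        "p99_ms": timings_ms[int(len(timings_ms) * 0.99) - 1],
        "max_ms": timings_ms[-1],
    }

# Stand-in scorer; in practice this would call the deployed model.
model = lambda features: sum(features) > 1.0
report = latency_profile(model, [[0.2, 0.9], [1.1, 0.4]])
```

Comparing the p99 and maximum figures against the bank's service-level requirement (for example, a fraud check that must return before a payment is authorized) turns "did not meet required latency standards" into a pass/fail test that can run before deployment.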
Dynamic model calibration: Some classes of machine-learning models modify their parameters dynamically to reflect emerging patterns in the data. This replaces the traditional approach of periodic manual review and model refresh. Examples include reinforcement-learning algorithms or Bayesian methods. The risk is that without sufficient controls, an overemphasis on short-term patterns in the data could harm the model's performance over time. Banks therefore need to decide when to allow dynamic recalibration. They might conclude that with the right controls in place, it is suitable for some applications, such as algorithmic trading. For others, such as credit decisions, they might require clear proof that dynamic recalibration outperforms static models. With the policy set, validators can evaluate whether dynamic recalibration is appropriate given the intended use of the model, develop a monitoring plan, and ensure that appropriate controls are in place to identify and mitigate risks that might emerge. These might include thresholds that catch material shifts in a model's health, such as out-of-sample performance measures, and guardrails such as exposure limits or other, predefined values that trigger a manual review.
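A minimal sketch of such a guardrail, with illustrative names and thresholds, might track a rolling out-of-sample metric and trigger a manual review when it breaches a floor:

```python
def recalibration_guardrail(metric_history, floor, window=5):
    """Guardrail for a dynamically recalibrated model.

    metric_history -- periodic out-of-sample performance measurements
    floor          -- predefined value below which a review is triggered
    window         -- number of recent periods in the rolling average
    """
    recent = metric_history[-window:]
    rolling = sum(recent) / len(recent)
    return {"rolling_metric": rolling, "manual_review": rolling < floor}

# Illustrative monthly out-of-sample AUC for a self-recalibrating model.
healthy = recalibration_guardrail([0.82, 0.81, 0.83, 0.80, 0.82], floor=0.75)
degraded = recalibration_guardrail([0.82, 0.74, 0.70, 0.68, 0.66], floor=0.75)
print(healthy["manual_review"], degraded["manual_review"])  # False True
```

In practice the same pattern extends to exposure limits and other predefined triggers; the essential design choice is that the model can adapt freely only inside boundaries that a human has set in advance.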
Banks will need to proceed gradually. The first step is to make sure model inventories include all machine learning–based models in use. One bank's model-risk-management function was certain the organization was not yet using machine-learning models, until it discovered that its recently established innovation function had been busy developing machine-learning models for fraud and cybersecurity.
From here, validation policies and practices can be modified to address machine-learning-model risks, though initially for a restricted number of model classes. This helps build experience while testing and refining the new policies and practices. Considerable time will be needed to monitor a model's performance and finely tune the new practices. But over time banks will be able to apply them to the full range of approved machine-learning models, helping them mitigate risk and gain the confidence to start harnessing the full power of machine learning.
AIQRATE is a bespoke global AI advisory and consulting firm. A first in its genre, AIQRATE provides strategic AI advisory services and consulting offerings across multiple business segments to enable clients on their AI-powered transformation & innovation journey and accentuate their decision making and business performance.
AIQRATE works closely with Boards, CXOs and senior leaders, advising them on navigating their Analytics-to-AI journey with the art of the possible, or jump-starting their AI progression with an AI@scale approach, followed by consulting on embedding AI as core to business strategy within business functions and augmenting the decision-making process with AI. We have proven bespoke AI advisory services to enable CXOs and senior leaders to curate & design the building blocks of AI strategy, embed AI@scale interventions and create AI-powered organizations. AIQRATE's path-breaking 50+ AI consulting frameworks, assessments, primers, toolkits and playbooks enable Indian & global enterprises, GCCs, startups, VC/PE firms, and academic institutions to enhance business performance and accelerate decision making.
Visit www.aiqrate.ai to experience our AI advisory services & consulting offerings
How artificial intelligence is changing the face of banking in India
Artificial intelligence (AI) will empower banking organisations to completely redefine how they operate, establish innovative products and services, and, most importantly, transform the customer experience. In this second machine age, banks will find themselves competing with upstart fintech firms leveraging advanced technologies that augment or even replace human workers with sophisticated algorithms. To maintain a sharp competitive edge, banking corporations will need to embrace AI and weave it into their business strategy.
In this post, I will examine the dynamics of AI ecosystems in the banking industry and how AI is fast becoming a major disrupter, by looking at some of the critical unsolved problems in this area of business. AI's potential can be looked at through multiple lenses in this sector, particularly its implications and applications across the operating landscape of banking. Let us focus on some of the key artificial intelligence technology systems: robotics, computer vision, language, virtual agents, and machine learning (including deep learning), which underlie many recent advances made in this sector.
Banks entering the intelligence age are under intense pressure on multiple fronts. Rapid advances in AI are coming at a time of widespread technological and digital disruption. In response, many changes are being triggered:
- Leading banks are aggressively hiring Chief AI Officers while investing in AI labs and incubators
- AI-powered banking bots are being used on the customer experience front.
- Intelligent personal investment products are available at scale
- Multiple banks are moving towards custom in-house solutions that leverage sophisticated ontologies, natural language processing, machine learning, pattern recognition, and probabilistic reasoning algorithms to aid skilled employees and robots with complex decisions
Some of the key characteristics shaping this industry include:
- Decision support and advanced algorithms allow the automation of processes that are more cognitive in nature
- Solutions incorporate advanced self-learning capabilities
- Sophisticated cognitive hypothesis generation/advanced predictive analytics
Surge of AI in Banking
Banks today are struggling to reduce costs, meet margins, and exceed customer expectations through personalised experiences. Implementing AI is particularly important to enable this, and banks worldwide have started embracing AI and related technologies. According to a survey by the National Business Research Institute, over 32 percent of financial institutions use AI through voice recognition and predictive analysis. The dawn of mobile technology, the availability of data, and the explosion of open-source software give artificial intelligence a huge playing field in the banking sector. The changing dynamics of an app-driven world are enabling the banking sector to leverage AI and integrate it tightly with its business imperatives.
AI in Banking Customer Services
Automated, AI-powered customer service is gaining strong traction. Using data gathered from users' devices, AI-based assistants apply machine learning to relay information and redirect users to the right resource. AI-driven features also surface services, offers, and insights in line with the user's behaviour and requirements. The cognitive machine is trained to advise and communicate by analysing users' data. Online wealth-management services and other offerings are powered by integrating AI advancements into the app and capturing relevant data.
The tried-and-tested pattern of answering users' simple questions and redirecting them to the relevant resource has proven successful. Routine, basic operations, such as opening or closing an account or transferring funds, can be handled by chatbots.
Fraud and risk management
Online fraud is an area of massive concern for businesses as they digitise at scale. Risk at internet scale cannot be managed manually or with legacy information systems. Most banks are looking to deploy machine learning or deep learning and predictive analytics to examine all transactions in real time. Machine learning can play an extremely critical role in the bank's middle office.
The primary uses include mitigating fraud by scanning transactions for suspicious patterns in real time, measuring clients' creditworthiness, and equipping risk analysts with the right recommendations for curbing risk.
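A highly simplified sketch of real-time pattern scanning, assuming a single amount feature and an illustrative z-score rule rather than any bank's actual model, might maintain a streaming profile of transactions and flag sharp deviations:

```python
def flag_suspicious(amounts, z_cutoff=3.0):
    """Flag transactions whose amount deviates sharply from the running profile.

    Maintains a streaming mean/variance (Welford's algorithm) so each
    transaction is scored in real time against the history seen so far.
    """
    flags = []
    n, mean, m2 = 0, 0.0, 0.0
    for amt in amounts:
        if n >= 2:
            std = (m2 / (n - 1)) ** 0.5
            flags.append(std > 0 and abs(amt - mean) / std > z_cutoff)
        else:
            flags.append(False)  # not enough history to judge yet
        # Welford update with the new observation.
        n += 1
        delta = amt - mean
        mean += delta / n
        m2 += delta * (amt - mean)
    return flags

print(flag_suspicious([100, 102, 98, 101, 99, 10000]))
# [False, False, False, False, False, True]
```

Production fraud models replace this single-feature rule with learned patterns across many features, but the streaming structure, scoring each transaction as it arrives against an incrementally updated profile, is the same.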
Trading and Securities
Robotic Process Automation (RPA) plays a key role in security settlement through reconciliation and validation of information in the back office with trades enabled in the front office. Artificial intelligence facilitates the overall process of trade enrichment, confirmation and settlement.
Lending is a critical business for banks, one that directly and indirectly touches almost all parts of the economy. At its core, lending can be seen as a big-data problem, which makes it an effective use case for machine learning. One of the critical aspects is validating the creditworthiness of the individuals or businesses seeking loans. The more data available about a borrower, the better their creditworthiness can be assessed.
Usually, the amount of a loan is tied to assessments based on the value of the collateral and taking future inflation into consideration. The potential of AI is that it can analyse all of these data sources together to generate a coherent decision. In fact, banks today look at creditworthiness as one of their everyday applications of AI.
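At its simplest, such a credit decision can be sketched as a logistic score over borrower features. The weights below are purely illustrative and uncalibrated, not a real scorecard; in practice they would be fitted to historical repayment data.

```python
import math

def default_probability(features, weights, intercept):
    """Logistic credit score: map borrower features to a probability of default."""
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Purely illustrative weights for three hypothetical features:
# debt-to-income ratio, recent delinquencies, years of credit history.
weights, intercept = [3.0, 0.8, -0.15], -2.0
low_risk = default_probability([0.10, 0, 10], weights, intercept)
high_risk = default_probability([0.60, 3, 1], weights, intercept)
print(round(low_risk, 3), round(high_risk, 3))
```

Machine learning extends this idea by learning the combination from far more data sources than a hand-built scorecard could, while regulated uses still require each feature and its weight to be conceptually sound and explainable.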
Banks are increasingly relying on machine learning to make smarter, real-time investment decisions on behalf of their investors and clients.
These algorithms can evolve in distinct ways. As data becomes an integral part of their decision making, they can experiment with different strategies on the fly and broaden their focus to consider a more diverse range of assets.
Banks are focused on leveraging AI and machine learning–based technology platforms that build customised portfolio profiles of customers based on their investment limits, patterns, and preferences.
Banking and artificial intelligence are at a vantage position to unleash the next wave of digital disruption. A user-friendly AI ecosystem has the potential to create value for the banking industry, but several factors can become roadblocks to adopting such solutions across the board: long implementation timelines, limitations in the budgeting process, reliance on legacy platforms, and the overall complexity of a bank's technology environment.
To overcome these challenges and build an AI-enabled environment, banks need to adopt incremental methods and technologies. The critical part is ensuring that the transition allows them to overcome change-management and behavioural issues. The secret sauce of successful deployment is a seamless fit into the existing technology-architecture landscape, making for an effective AI enterprise environment.
AI & Fintech: Two Game-Changing Revolutions in the Digital Era
More investors are setting their sights on the financial technology (Fintech) arena. According to consulting firm Accenture, investment in Fintech firms rose by 10 percent worldwide to the tune of $23.2 billion in 2016.
China is leading the charge after securing $10 billion in investments across 55 deals, which account for 90 percent of investments in Asia-Pacific. The US came second, taking in $6.2 billion in funding. Europe also saw an 11 percent increase in deals, despite Britain seeing a decrease in funding due to uncertainty from the Brexit vote.
The excitement stems from the disruption of traditional financial institutions (FIs) such as banks, insurance, and credit companies by technology. The next unicorn might be among the hundreds of tech startups that are giving Fintech a go.
What exactly is going to be the next big thing has yet to be determined, but artificial intelligence (AI) will play a huge part.
The growing reality is that, while opportunities abound, competition is also heating up.
Take, for example, the number of Fintech startups that aim to digitize routine financial tasks like payments. In the US, the digital wallet and payments segment is fiercely competitive. Pioneers like PayPal see themselves being taken on by other tech giants like Google and Apple, by niche-oriented ventures like Venmo, and even by traditional FIs.
Most recently, the California-based robo-advisor Wealthfront has added artificial-intelligence capabilities to track account activity on its own product and other integrated services such as Venmo, to analyze and understand how account holders are spending, investing, and making their financial decisions, in an effort to provide more customized advice to its customers. Sentient Technologies, which has offices in both California and Hong Kong, is using artificial intelligence to continually analyze data and improve investment strategies. The company has several other AI initiatives in addition to its own equity fund. AI is even being used for banking customer service. RBS has developed Luvo, a technology which assists its service agents in finding answers to customer queries. The AI technology can search through a database, but it also has a human personality and is built to learn continually and improve over time.
Some ventures are seeing bluer oceans by focusing on local and regional markets where conditions are somewhat favorable.
The growth of China's Fintech was largely made possible by the relative immaturity of its incumbent banking system. It was easier for people to adopt mobile and web-based financial services such as Alibaba's Ant Financial and Tencent, since phones were more pervasive and more convenient to access than traditional financial instruments.
In Europe, the new Payment Services Directive (PSD2), set to take effect in 2018, has busted the game wide open. Banks are obligated to open up their application programming interfaces (APIs), enabling Fintech apps and services to tap into users' bank accounts. The line between banks and fintech companies is set to blur, so just about everyone in finance is set to compete with old and new players alike.
Convenience has become such a fundamental selling point that a number of Fintech ventures have zeroed in on delivering better user experiences for an assortment of financial tasks such as payments, budgeting, banking, and even loan applications.
There is a mad scramble among companies to leverage cutting-edge technologies for competitive advantage. Even established tech companies like e-commerce giant Amazon had to give due attention to mobile as users shift their computing habits towards phones and tablets. Enterprises are also working on transitioning to cloud computing for infrastructure.
But where do more advanced technologies such as AI come in?
The drive to eliminate human fallibility has pushed artificial intelligence (AI) to the forefront of research and development. Its applications range from sorting what gets shown on your social-media newsfeed to self-driving cars. It is also expected to have a major impact on Fintech because of the game-changing insights that can be derived from the sheer volume of data humanity is generating. Enterprising ventures are banking on it to expose gaps in a market that competition has made increasingly narrow.
All about algorithms
AI and finance are no strangers to each other. Traditional banking and finance have relied heavily on algorithms for automation and analysis. However, these were exclusive only to large and established institutions. Fintech is being aimed at empowering smaller organizations and consumers, and AI is expected to make its benefits accessible to a wider audience.
AI has a wide variety of consumer-level applications for smarter and less error-prone user experiences. Personal-finance applications now use AI to balance people's budgets based on each user's specific behavior. AI also powers robo-advisors that guide casual traders in managing their stock portfolios.
For enterprises, AI is expected to continue serving functions such as business intelligence and predictive analytics. Merchant services such as payments and fraud detection are also relying on AI to seek out patterns in customer behavior in order to weed out bad transactions.
Thanks to these services, people may soon have very little excuse for not having a handle on their money.
Concerns Going Forward
While artificial intelligence holds the promise of efficiency, better decision making, stronger compliance, and potentially even more profits for investors, the technology is young. Banks need to find ways to lower costs, and technology is the most obvious answer. A logical response by banks is to automate as much decision making as possible, hence the number of banks enthusiastically embracing AI and automation. But the unknown risks inherent in aspects of AI have not been eliminated. According to a Euromoney survey and report commissioned by Baker & McKenzie, of 424 financial professionals, 76% believe that financial regulators are not up to speed on AI, and 47% are not confident that their own organizations understand the risks of using AI. Additionally, an increasing reliance on artificial-intelligence technologies comes with a reduction in jobs. Many argue that human intuition plays a valuable role in risk assessment and that the black-box nature of AI makes it difficult to understand certain unexpected outcomes or decisions produced by the technology.
Towards the future
With the stiff competition in Fintech, ventures have to deliver truly valuable products and services in order to stand out. The venture that provides the best user experience often wins, but finding this X factor has become increasingly challenging.
The developments in AI may provide that something extra, especially if they can take the guesswork and human error out of finance. It is for these reasons that AI might just hold the key to further Fintech innovations.