Data-Driven Enterprise – Part I: Building an Effective Data Strategy for Competitive Edge
Few enterprises take full advantage of data generated outside their walls. A well-structured data strategy for using external data can provide a competitive edge. Many enterprises have made great strides in collecting and utilizing data from their own activities. So far, though, comparatively few have realized the full potential of linking internal data with data provided by third parties, vendors, or public data sources. Overlooking such external data is a missed opportunity. Organizations that stay abreast of the expanding external-data ecosystem and successfully integrate a broad spectrum of external data into their operations can outperform other companies by unlocking improvements in growth, productivity, and risk management.
The COVID-19 crisis provides an example of just how relevant external data can be. In a few short months, consumer purchasing habits, activities, and digital behavior changed dramatically, making preexisting consumer research, forecasts, and predictive models obsolete. Moreover, as organizations scrambled to understand these changing patterns, they discovered little of use in their internal data. Meanwhile, a wealth of external data could—and still can—help organizations plan and respond at a granular level. Although external-data sources offer immense potential, they also present several practical challenges. To start, simply gaining a basic understanding of what’s available requires considerable effort, given that the external-data environment is fragmented and expanding quickly. Thousands of data products can be obtained through a multitude of channels—including data brokers, data aggregators, and analytics platforms—and the number grows every day. Analyzing the quality and economic value of data products also can be difficult. Moreover, efficient usage and operationalization of external data may require updates to the organization’s existing data environment, including changes to systems and infrastructure. Companies also need to remain cognizant of privacy concerns and consumer scrutiny when they use some types of external data.
These challenges are considerable but surmountable. This blog series discusses the benefits of tapping external-data sources, illustrated through a variety of examples, and lays out best practices for getting started. These include establishing an external-data strategy team and developing relationships with data brokers and marketplace partners. Company leaders, such as the executive sponsor of a data effort and a chief data and analytics officer, and their data-focused teams should also learn how to rigorously evaluate and test external data before using and operationalizing the data at scale.
External-data success stories: Companies across industries have begun successfully using external data from a variety of sources. The investment community is a pioneer in this space. To predict outcomes and generate investment returns, analysts and data scientists in investment firms have gathered “alternative data” from a variety of licensed and public data sources, many of which draw from the “digital exhaust” of a growing number of technology companies and the public web. Investment firms have established teams that assess hundreds of these data sources and providers and then test their effectiveness in investment decisions.
A broad range of data sources are used, and these inform investment decisions in a variety of ways:
- Investors actively gather job postings, company reviews posted by employees, employee-turnover data from professional networking and career websites, and patent filings to understand company strategy and predict financial performance and organizational growth.
- Analysts use aggregated transaction data from card processors and digital-receipt data to understand the volume of purchases by consumers, both online and offline, and to identify which products are increasing in share. This gives them a better understanding of whether traffic is declining or growing, as well as insights into cross-shopping behaviors.
- Investors study app downloads and digital activity to understand how consumer preferences are changing and how effective an organization’s digital strategy is relative to that of its peers. For instance, app downloads, activity, and rating data can provide a window into the success rates of the myriad of live-streaming exercise offerings that have become available over the last year.
Corporations have also started to explore how they can derive more value from external data. For example, a large insurer transformed its core processes, including underwriting, by expanding its use of external-data sources from a handful to more than 40 in the span of two years. The effort involved was considerable; it required prioritization from senior leadership, dedicated resources, and a systematic approach to testing and applying new data sources. The hard work paid off, increasing the predictive power of core models by more than 20 percent and dramatically reducing application complexity by allowing the insurer to eliminate many of the questions it typically included on customer applications.
Three steps to creating value with external data:
Use of external data has the potential to be game changing across a variety of business functions and sectors. The journey toward successfully using external data has three key steps.
1. Establish a dedicated team for external-data sourcing
To get started, organizations should establish a dedicated data-sourcing team. In our experience at AIQRATE, a key role on this team is a dedicated data scout or strategist who partners with the data-analytics team and business functions to identify operational, cost, and growth improvements that could be powered by external data. This person would also be responsible for building excitement around what can be made possible through the use of external data, planning the use cases to focus on, identifying and prioritizing data sources for investigation, and measuring the value generated through use of external data. Ideal candidates for this role are individuals who have served as analytics translators and who have experience in deploying analytics use cases and in working with technology, business, and analytics profiles.
The other team members, who should be drawn from across functions, would include purchasing experts, data engineers, data scientists and analysts, technology experts, and data-review-board members. These team members typically spend only part of their time supporting the data-sourcing effort. For example, the data analysts and data scientists may already be supporting data cleaning and modeling for a specific use case and help the sourcing work stream by applying the external data to assess its value. The purchasing expert, already well versed in managing contracts, will build specialized knowledge of data-specific licensing approaches to support those efforts.
Throughout the process of finding and using external data, companies must keep in mind privacy concerns and consumer scrutiny, making data reviewers essential, if peripheral, team members. Data reviewers, who typically include legal, risk, and business leaders, should thoroughly vet new consumer data sets—for example, financial transactions, employment data, and cell-phone data indicating when and where people have entered retail locations. The vetting process should ensure that all data were collected with appropriate permissions and will be used in a way that abides by relevant data-privacy laws and passes muster with consumers. This team will need a budget to procure small exploratory data sets, establish relationships with data marketplaces (such as by purchasing trial licenses), and pay for technology requirements (such as expanded data storage).
2. Develop relationships with data marketplaces and aggregators
While online searches may appear to be an easy way for data-sourcing teams to find individual data sets, that approach is not necessarily the most effective. It generally leads to a series of time-consuming vendor-by-vendor discussions and negotiations. The process of developing relationships with a vendor, procuring sample data, and negotiating trial agreements often takes months. A more effective strategy involves using data-marketplace and -aggregation platforms that specialize in building relationships with hundreds of data sources, often in specific data domains—for example, consumer, real-estate, government, or company data. These relationships can give organizations ready access to the broader data ecosystem through an intuitive search-oriented platform, allowing organizations to rapidly test dozens or even hundreds of data sets under the auspices of a single contract and negotiation. Since these external-data distributors have already profiled many data sources, they can be valuable thought partners and can often save an external-data team significant time. When needed, these data distributors can also help identify valuable data products and act as the broker to procure the data.
Once the team has identified a potential data set, the team’s data engineers should work directly with business stakeholders and data scientists to evaluate the data and determine the degree to which the data will improve business outcomes. To do so, data teams establish evaluation criteria, assessing data across a variety of factors to determine whether the data set has the necessary characteristics for delivering valuable insights. Data assessments should include an examination of quality indicators, such as fill rates, coverage, bias, and profiling metrics, within the context of the use case. For example, a transaction data provider may claim to have hundreds of millions of transactions that help illuminate consumer trends. However, if the data include only transactions made by millennial consumers, the data set will not be useful to a company seeking to understand broader, generation-agnostic consumer trends.
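To make these evaluation criteria concrete, here is a minimal sketch in pandas of two of the quality indicators mentioned, fill rates and segment coverage. The column names, the tiny sample feed, and the cohort labels are hypothetical illustrations, not taken from any actual vendor product:

```python
import pandas as pd

def assess_fill_rates(df: pd.DataFrame) -> pd.Series:
    """Share of non-null values per column (1.0 = fully populated)."""
    return df.notna().mean()

def assess_coverage(df: pd.DataFrame, segment_col: str, expected: set) -> float:
    """Fraction of expected segments (e.g., generational cohorts) present."""
    observed = set(df[segment_col].dropna().unique())
    return len(observed & expected) / len(expected)

# Hypothetical sample from a transaction-data vendor
sample = pd.DataFrame({
    "txn_amount": [12.50, 40.00, None, 8.25],
    "merchant_id": ["m1", "m2", "m3", None],
    "age_cohort": ["millennial"] * 4,
})

fill = assess_fill_rates(sample)
coverage = assess_coverage(sample, "age_cohort",
                           {"gen_z", "millennial", "gen_x", "boomer"})
print(fill["txn_amount"])  # 0.75 – a quarter of transaction amounts are missing
print(coverage)            # 0.25 – only one of four cohorts present: a bias red flag
```

A low coverage score like the one above is exactly the millennial-only scenario described: the feed may be large yet still unusable for generation-agnostic questions.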
3. Prepare the data architecture for new external-data streams
Generating a positive return on investment from external data calls for up-front planning, a flexible data architecture, and ongoing quality-assurance testing. Up-front planning starts with an assessment of the existing data environment to determine how it can support ingestion, storage, integration, governance, and use of the data. The assessment covers issues such as how frequently the data come in, the amount of data, how data must be secured, and how external data will be integrated with internal data. This will provide insights about any necessary modifications to the data architecture.
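Integrating external data with internal data usually hinges on entity resolution: deciding which external record refers to which internal one. As a rough standard-library illustration only (production pipelines typically use dedicated matching tools; the company names and the 0.7 similarity threshold are assumptions made up for this sketch):

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase and strip punctuation/extra whitespace before comparing."""
    return " ".join(name.lower().replace(",", " ").replace(".", " ").split())

def match_entities(internal: list, external: list, threshold: float = 0.7) -> dict:
    """Link each external name to its most similar internal record, if any."""
    links = {}
    for ext in external:
        best, score = None, 0.0
        for ikey in internal:
            s = SequenceMatcher(None, normalize(ext), normalize(ikey)).ratio()
            if s > score:
                best, score = ikey, s
        if score >= threshold:
            links[ext] = best
    return links

internal = ["Acme Corporation", "Globex Inc"]
external = ["ACME Corp.", "Initech LLC"]
links = match_entities(internal, external)
print(links)  # {'ACME Corp.': 'Acme Corporation'} – Initech has no internal match
```

Even this naive pass shows why the capability matters: without it, “ACME Corp.” from a vendor feed and “Acme Corporation” in the CRM remain two unrelated rows.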
Modifications should be designed to ensure that the data architecture is flexible enough to support the integration of a continuous “conveyor belt” of incoming data from a variety of data sources—for example, by enabling application-programming-interface (API) calls from external sources along with entity-resolution capabilities to intelligently link the external data to internal data. In other cases, it may require tooling to support large-scale data ingestion, querying, and analysis. Data architecture and underlying systems can be updated over time as needs mature and evolve. The final process in this step is ensuring an appropriate and consistent level of quality by constantly monitoring the data used. This involves examining data regularly against the established quality framework to identify whether the source data have changed and to understand the drivers of any changes (for example, schema updates, expansion of data products, change in underlying data sources). If the changes are significant, algorithmic models leveraging the data may need to be retrained or even rebuilt.
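The monitoring loop described here can be sketched as a comparison of each new delivery of a feed against a baseline quality profile, flagging schema changes and fill-rate drift. The feed contents, column names, and 10 percent drift tolerance below are illustrative assumptions, not a prescription:

```python
import pandas as pd

def profile(df: pd.DataFrame) -> dict:
    """Capture a simple quality profile: schema plus per-column fill rates."""
    return {"columns": set(df.columns), "fill": df.notna().mean().to_dict()}

def detect_drift(baseline: dict, current: dict, tol: float = 0.10) -> list:
    """Return human-readable alerts when schema or fill rates change."""
    alerts = []
    added = current["columns"] - baseline["columns"]
    removed = baseline["columns"] - current["columns"]
    if added:
        alerts.append(f"schema change: new columns {sorted(added)}")
    if removed:
        alerts.append(f"schema change: dropped columns {sorted(removed)}")
    for col, rate in baseline["fill"].items():
        cur = current["fill"].get(col)
        if cur is not None and abs(cur - rate) > tol:
            alerts.append(f"fill-rate drift in '{col}': {rate:.0%} -> {cur:.0%}")
    return alerts

# Hypothetical January baseline vs. February delivery of the same feed
jan = pd.DataFrame({"price": [10, 12, 11], "region": ["N", "S", "E"]})
feb = pd.DataFrame({"price": [10, None, None], "zone": ["N", "S", "E"]})

alerts = detect_drift(profile(jan), profile(feb))
for alert in alerts:
    print(alert)
```

An alert such as a renamed column or a collapsing fill rate is the trigger, per the text above, to investigate the provider’s changes and decide whether downstream models need retraining.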
Minimizing risk and creating value with external data will require a unique mix of creative problem solving, organizational capability building, and laser-focused execution. That said, business leaders who demonstrate the achievements possible with external data can capture the imagination of the broader leadership team and build excitement for scaling beyond early pilots and tests. An effective route is to begin with a small team that is focused on using external data to solve a well-defined problem and then use that success to generate momentum for expanding external-data efforts across the organization.
Redefine the new code for GCCs: Winning with AI – strategic perspectives
Global Capability Centers (GCCs) are strategic extensions of their parent organizations’ business imperatives. GCCs are at an inflection point: the pace at which AI is changing every aspect of their work is exponential and high velocity. The rapid transformation and innovation of GCCs today is driven largely by their ability to position AI as a strategic imperative for their parent organizations. AI is seen as the Trojan horse that can catapult GCCs to the next level of innovation and transformation. The GCC story is entering a new era of value and transformative arbitrage.
Most GCCs are aiming to deploy a suite of AI-led strategies to position themselves as the model template of an AI Center of Excellence. It is widely predicted that AI will disrupt and transform capability centers in the coming decades. How are Global Capability Centers in India looking to position themselves as the model template for developing an AI center of competence? How have GCC strategies evolved relative to their parent organizations while delivering tangible business outcomes, innovation, and transformation?
Strategic imperatives for GCCs to consider as they move up the value chain, develop an edge, and start winning with AI:
AI transformation:
Artificial intelligence has become a main focus area for GCCs in India. Increasing digital penetration across business verticals has made it imperative for GCCs to focus on AI. Hence, GCCs are upping their innovation agenda by building bespoke AI capabilities, solutions, and offerings. Accelerated AI adoption has transcended industry verticals, with organizations exploring different use cases and application areas. GCCs in India are strategically leveraging one of the following approaches to drive AI penetration:
- Federated Approach: Different teams within the GCC drive AI initiatives
- Centralized Approach: A central team with top talent and niche skills caters to the parent organization’s requirements
- Partner Ecosystem: Partnering with research institutes, start-ups, and accelerators paves a new channel for GCCs
- Hybrid Approach: A mix of two or more of the above approaches, leveraged according to the GCC’s needs and constraints
Ecosystem creation: start-ups, research institutes, accelerators
One of the crucial ways that GCCs can boost their innovation agenda is by collaborating with start-ups, research institutes, and accelerators. Hence, GCCs are employing a variety of strategies to build this ecosystem. These collaborations are a combination of build, buy, and partner models:
- Platform Evangelization: GCCs offer access to their AI platforms to start-ups
- License or Vendor Agreement: GCCs and start-ups enter into a license agreement to create solutions
- Co-innovate: Start-ups and GCCs collaborate to co-create new solutions & capabilities
- Acqui-hire: GCCs acquire start-ups for the talent & capability
- Research centers: GCCs collaborate with academic institutes on joint IP creation, open research, and customized programs
- Joint accelerator programs: GCCs and accelerators build joint programs for customized start-up cohorts
GCCs can leverage different approaches to drive these ecosystem-creation models. Successful collaboration programs have a high degree of customization, with clearly defined objectives and talent allocation to drive tangible, impact-driven business outcomes.
Differentiated AI Center of Capability:
GCCs are increasingly shifting to competency- and capability-creation models to reduce time-to-market. In this model, AI Center of Competence teams are aligned to capability lines of business and are responsible for creating AI capabilities, roadmaps, and new value offerings in collaboration with the parent organization’s business teams. This alignment gives the teams clear visibility into business-user requirements, and capability creation combined with parent-organization alignment helps deliver tangible value outcomes. In several cases, AI teams are building a new range of AI-based capabilities and solutions that showcase the GCC as a model template for innovation and transformation. GCCs need to conceptualize a bespoke strategy for building and sustaining an AI Center of Competence and for keeping it high on the value chain, with mature and measured transformation- and innovation-led metrics.
AI Talent Mapping Strategy:
With the evolution from analytics and data science to AI, the lines between different skills are blurring, and GCCs are witnessing a convergence of the skills required across verticals. The strategic shift of GCCs toward an AI center-of-capability model has led to the creation of AI, data-engineering, and design roles. To build skills in AI and data engineering, GCCs are adopting a hybrid approach: the skill-development roadmap is a combination of build and buy strategies. The decision to acquire talent from the ecosystem or build capabilities internally is a function of three parameters: the maturity of the GCC’s existing AI capabilities in the desired or adjacent areas, the tactical nature of the skill requirement, and the availability and accessibility of talent in the ecosystem. There is a heavy inclination toward building skills in-house, and a majority of GCCs have stressed that the bulk of future AI deployment will come through in-house skill-building and reskilling initiatives. However, the talent-mapping strategy for building AI capability must be a measured approach; otherwise it can become an Achilles’ heel for GCC and HR leaders.
GCCs in India are uniquely positioned to drive the next wave of growth by building high-impact AI centers of competence. There is a slew of innovative and transformative models they are working on to up the ante, trigger new customer experiences, products, and services, and unleash business transformation for their parent organizations. This will not only set existing GCCs on the path to cutting-edge innovation but also pave the way for other global organizations contemplating a global center setup in India. AI is becoming the front-runner in driving innovation and transformation for GCCs.
Cloud Platforms: Strategic Enabler for AI led Transformation
CIOs and CTOs have been weighing cloud adoption at scale for more than a decade, since the first corporate experiments with external cloud platforms, and the verdict is long in on its business value. Companies that adopt the cloud well bring new capabilities to market more quickly, innovate more easily, and scale more efficiently—while also reducing technology risk.
Unfortunately, the verdict is still out on what constitutes a successful cloud implementation to actually capture that value. Most CIOs and CTOs default to traditional implementation models that may have been successful in the past but that make it almost impossible to capture the real value from the cloud. Defining the cloud opportunity too narrowly with siloed business initiatives, such as next-generation application hosting or data platforms, almost guarantees failure. That’s because no design consideration is given to how the organization will need to operate holistically in cloud, increasing the risk of disruption from nimbler attackers with modern technology platforms that enable business agility and innovation.
Companies that reap value from cloud platforms treat their adoption as an AI-led business transformation by doing three things:
1. Focusing investments on business domains where cloud can enable increased revenues and improved margins
2. Selecting a technology and sourcing model that aligns with business strategy and risk constraints
3. Developing and implementing an operating model that is oriented around the cloud
CIOs and CTOs need to drive cloud adoption, but, given the scale and scope of change required to exploit this opportunity fully, they also need support and air cover from the rest of the management team.
Using cloud to enable AI-led transformation: Only 14 percent of companies launching AI transformations have seen sustained and material performance improvements. Why? Technology execution capabilities are often not up to the task. Outdated AI technology environments make change expensive. Quarterly release cycles make it hard to tune AI capabilities to changing market demands. Rigid and brittle infrastructures choke on the data required for sophisticated analytics.
Operating in the cloud can reduce or eliminate many of these issues. Exploiting cloud services and tooling, however, requires change across all of IT and many business functions as well—in effect, a different business-technology model.
AI-led transformation success requires CIOs and tech leaders to do three things:
1. Focus cloud investments in business domains where cloud platforms can enable increased revenues and improved margins:
The vast majority of the value the cloud generates comes from the increased agility, innovation, and resilience it provides to the business with sustained velocity. In most cases, this requires focusing cloud adoption on embedding reusability and composability so that investment in modernizing can be rapidly scaled across the rest of the organization. This approach can also help focus programs on where the benefits matter most, rather than scrutinizing individual applications for potential cost savings.
Faster time to market: Cloud-native companies can release code into production hundreds or thousands of times per day using end-to-end automation. Even traditional enterprises have found that automated cloud platforms allow them to release new capabilities daily, enabling them to respond to market demands and quickly test what does and doesn’t work. As a result, companies that have adopted cloud platforms report that they can bring new capabilities to market about 20 to 40 percent faster.
Ability to create innovative business offerings: Each of the major cloud service providers offers hundreds of native services and marketplaces that provide access to third-party ecosystems with thousands more. These services rapidly evolve and grow and provide not only basic infrastructure capabilities but also advanced functionality such as facial recognition, natural-language processing, quantum computing, and data aggregation.
Reduced risk: Cloud clearly disrupts existing security practices and architectures, but it also provides a rare opportunity to eliminate vast operational overhead for those that can design their platforms to consume cloud securely. Taking advantage of the multibillion-dollar investments CSPs have made in security operations requires a cyber-first design that automatically embeds robust standardized authentication, hardened infrastructure, and resilient, interconnected data-center availability zones.
Efficient scalability: Cloud enables companies to automatically add capacity to meet surge demand (in response to increasing customer usage, for example) and to scale out new services in seconds rather than the weeks it can take to procure additional on-premises servers. This capability has been particularly crucial during the COVID-19 pandemic, when the massive shift to digital channels created sudden and unprecedented demand peaks.
2. Select a technology, sourcing, and migration model that aligns with business and risk constraints
Decisions about cloud architecture and sourcing carry significant risk and cost implications—to the tune of hundreds of millions of dollars for large companies. The wrong technology and sourcing decisions will raise concerns about compliance, execution success, cyber security, and vendor risk—more than one large company has stopped its cloud program cold because of multiple types of risk. The right technology and source decisions not only mesh with the company’s risk appetite but can also “bend the curve” on cloud-adoption costs, generating support and excitement for the program across the management team.
If CIOs or CTOs make those decisions based on the narrow criteria of IT alone, they can create significant issues for the business. Instead, they must develop a clear picture of the business strategy as it relates to technology cost, investment, and risk.
3. Change operating models to capture cloud value
Capturing the value of migrating to the cloud requires changing both how IT works and how IT works with the business. The best CIOs and CTOs follow a number of principles in building a cloud-ready operating model:
Make everything a product: To optimize application functionality and mitigate technical debt, CIOs need to shift from “IT projects” to “products”—the technology-enabled offerings used by customers and employees. Most products will provide business capabilities such as order capture or billing. Automated as-a-service platforms will provide underlying technology services such as data management or web hosting. This approach focuses teams on delivering a finished working product rather than isolated elements of the product. This more integrated approach requires stable funding and a “product owner” to manage it.
Integrate with business. Achieving the speed and agility that cloud promises requires frequent interaction with business leaders to make a series of quick decisions. Practically, business leaders need to appoint knowledgeable decision makers as product owners for business-oriented products. These are people who have the knowledge and authority to make decisions about how to sequence business functionality as well as the understanding of the journeys of their “customers.”
Drive cloud skill sets across development teams. Traditional centers of excellence charged with defining configurations for cloud across the entire enterprise quickly get overwhelmed. Instead, top CIOs invest in delivery designs that embed mandatory self-service and co-creation approaches using abstracted, unified ways of working that are socialized using advanced training programs (such as “train the trainer”) to embed cloud knowledge in each agile tribe and even squad.
How Technology Leaders can join forces with leadership to drive AI led transformation
Given the economic and organizational complexity required to get the greatest benefits from the cloud, heads of infrastructure, CIOs, and CTOs need to engage with the rest of the leadership team. That engagement is especially important in the following areas:
Technology funding. Technology funding mechanisms frustrate cloud adoption—they prioritize features that the business wants now rather than critical infrastructure investments that will allow companies to add functionality more quickly and easily in the future. Each new bit of tactical business functionality built without best-practice cloud architectures adds to your technical debt—and thus to the complexity of building and implementing anything in the future. CIOs and CTOs need support from the rest of the management team to put in place stable funding models that will provide resources required to build underlying capabilities and remediate applications to run efficiently, effectively, and safely in the cloud.
Business-technology collaboration. Getting value from cloud platforms requires knowledgeable product owners with the power to make decisions about functionality and sequencing. That won’t happen unless the CEO and relevant business-unit heads mandate people in their organizations to be product owners and provide them with decision-making authority.
Engineering talent. Adopting the cloud requires specialized and sometimes hard-to-find technical talent—full-stack developers, data engineers, cloud-security engineers, identity and access-management specialists, cloud engineers, and site-reliability engineers. Unfortunately, some policies put in place a decade ago to contain IT costs can get in the way of onboarding cloud talent. Companies have adopted policies that limit costs per head and the number of senior hires, for example, which require the use of outsourced resources in low-cost locations. Collectively, these policies produce the reverse of what the cloud requires, which is a relatively small number of highly talented and expensive people who may not want to live in traditionally low-cost IT locations. CIOs and CTOs need changes in hiring and location policies to recruit and retain the talent needed for success in the cloud.
The recent COVID-19 pandemic has only heightened the need for companies to adopt AI led business models. Only cloud platforms can provide the required agility, scalability, and innovative capabilities required for this transition. While there have been frustrations and false starts in the enterprise cloud journey, companies can dramatically accelerate their progress by focusing cloud investments where they will provide the most business value and building cloud-ready operating models.
AIQRATE in 2020 … A Walk to Remember
“Enabling clients reimagine their decision making & accentuate the business performance with AI strategy in a transformation, innovation and disruption driven world”
In today’s fast-paced VUCA world, leaders face unprecedented challenges. They need to navigate through volatility while staying focused on strategy, business performance, and culture. Artificial intelligence is fast becoming a game-changing catalyst, a strategic differentiator, and almost a panacea for solving large, complex, and unresolved problems. To build an AI-powered organization, leaders not only need a broad understanding of AI strategy, they need to know how and where to use it. AIQRATE advisory services and consulting offerings are designed to enable leaders and decision makers from Enterprises, GCCs, Cloud Providers, Technology players, Startups, SMBs, VC/PE firms, Public Institutions, and Academic Institutions to become AI ready, reduce the risk associated with curating and deploying AI strategy and the ensuing interventions, and increase the predictability of durable success.
In the age of the bionic enterprise, AI continues to dominate the technology and business landscape. Under the aegis of transformation, disruption, and innovation, AI has several applications and impact areas that usher in a change in how we make decisions in the enterprise and personal spheres. Traditionally, human decisions are to a large extent based on intuition, gut, and historical data. In the age of AI, several of our decisions will be taken by algorithms. Leveraging AI, the ability to mimic the human brain and the ensuing ability to sense, comprehend, and act will increase significantly, resulting in the emergence of augmented intelligence in decision making. Enterprises, GCCs, SMBs, Startups, and Government Institutions are attempting to harness the power of AI to change the way they do business. All these industry segments are looking at AI as the secret sauce behind gaining a competitive advantage. If you have not started yet, you are already behind the competition, however large or pedigreed you might be.
So, where are you placed on your AI journey? At AIQRATE, we can guide you on your journey of understanding what AI can do for you, embedding it within your business strategy, functional areas and augmenting the decision-making process.
At AIQRATE, we are here to help you with the art of the possible with AI. Through our bespoke AI strategy frameworks, methodologies, toolkits, playbooks and assessments, we will bring seamless Transformation, Innovation and Disruption to your businesses. Leveraging our proven repository of consulting templates and artifacts, we will curate your AI strategic approach roadmap. Our advisory offerings and consulting engagements are designed in alignment with your strategic growth, vision and competitive scenarios.
We are at an inflection point where AI will revolutionize the way we do business. The paradigms of customer, products, offerings, services and competition will change dramatically; and being AI-ready will become a true differentiator. AIQRATE will be your strategic partner to help you to prepare for what’s next in order to stay relevant.
Wish you a great 2021!
Chief Executive Officer
Bangalore, India
Best Practices to Accelerate & Transform Analytics Adoption in the Cloud
Reimagining analytics in the cloud enables enterprises to achieve greater agility, increase scalability and optimize costs. But organizations take different paths to achieving their goals; the best way to proceed depends on the data environment and business objectives. There are two best practices to maximize analytics adoption in the cloud:
• Cloud Data Warehouse, Data Lake, and Lakehouse Transformation: Strategically moving the data warehouse and data lake to the cloud over time and adopting a modern, end-to-end data infrastructure for AI and machine learning projects.
• New Cloud Data Warehouse and Data Lake: Starting small and fast and growing as needed by spinning up a new cloud data warehouse or cloud data lake. The same guidance applies whether implementing data warehouses and data lakes in the cloud for the first time, or doing so for an individual department or line of business.
As cloud adoption grows, most organizations will eventually want to modernize their enterprise analytics infrastructure entirely in the cloud. With the transformation pathway, rebuild everything to take advantage of the most modern cloud-based enterprise data warehouse, data lake, and lakehouse technology and end up in the strongest long-term position. But migrate data and workloads from the existing on-premises enterprise data warehouse and data lake to the cloud incrementally, over time. This approach allows enterprises to be strategic while minimizing disruption. They can take the time to carefully evaluate data and bring over only what is needed, which makes this a less risky approach. It also enables more complex analysis of data using artificial intelligence and machine learning. The combination of a cloud data warehouse and data lake allows enterprises to manage the data necessary for analytics by providing economical scalability across compute and storage that is not possible with an on-premises infrastructure. And it enables them to incorporate new types of data, from IoT sensors, social media, text, and more, into the analysis to gain new insights.
For this pathway, enterprises need an intelligent, automated data platform that delivers a number of critical capabilities. It should handle new data sources, accommodate AI and machine learning projects, support new processing engines, deliver performance at massive scale, and offer serverless scale-up/scale-down capabilities. As with a brand-new cloud data warehouse or data lake, enterprises need cloud-native, best-of-breed data integration, data quality, and metadata management to maximize the value of cloud analytics. Once the data is in the cloud, organizations can provide users with self-service access so they can more easily create reports and make swift decisions. Ultimately, this transformation pathway gives organizations an end-to-end modern infrastructure for next-generation cloud analytics.
Lines of business increasingly rely on analytics to improve processes and business impact. For example, sales and marketing no longer ask, “How many leads did we generate?” They want to know how many sales-ready leads were gathered from Global 500 accounts, as evidenced by user time spent consuming content on the web. But individual lines of business may not have the time or resources to create and maintain an on-premises data warehouse to answer these questions. With a new cloud data warehouse and data lake, departments can get analytics projects off the ground quickly and cost effectively. Departments simply spin up their own cloud data warehouses, populate them with data, and connect them to analytics and BI tools. For data science projects, a team may want to quickly add a cloud data lake. In some cases, this approach enables the team to respond to requests for sophisticated analysis faster than centralized teams normally can. Whatever the purpose of the new cloud data warehouse and data lake, enterprises need intelligent, automated cloud data management with best-of-breed, cloud-native data integration, data quality, and metadata management, all built on a cloud-native platform, in order to deliver value and drive ROI. And note that while this approach allows enterprises to start small and scale as needed, the downside is that the data warehouse and data lake may only benefit a particular department inside the enterprise.
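The "spin up, populate, connect" pattern for a departmental warehouse can be sketched in miniature. The example below uses an in-memory SQLite database purely as a stand-in for a hypothetical cloud data warehouse; the table, columns, and figures are invented for illustration:

```python
import sqlite3

# An in-memory database stands in for a newly provisioned departmental warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE leads (
        account     TEXT,
        segment     TEXT,     -- e.g. 'Global 500'
        sales_ready INTEGER   -- 1 if qualified by web-engagement scoring
    )
""")
conn.executemany(
    "INSERT INTO leads VALUES (?, ?, ?)",
    [("Acme", "Global 500", 1),
     ("Initech", "Global 500", 0),
     ("Globex", "Global 500", 1),
     ("Hooli", "Mid-market", 1)],
)

# The kind of question a connected BI tool would issue:
# sales-ready leads from Global 500 accounts.
row = conn.execute(
    "SELECT COUNT(*) FROM leads "
    "WHERE segment = 'Global 500' AND sales_ready = 1"
).fetchone()
print(row[0])  # 2
```

In a real deployment the connection string would point at the cloud warehouse and the load step would be a managed ingestion pipeline, but the shape of the workflow is the same.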
Some organizations with significant investments in on-premises enterprise data warehouses and data lakes are looking to simply replicate their existing systems to the cloud. By lifting and shifting their data warehouse or data lake “as is” to the cloud, they seek to improve flexibility, increase scalability, and lower data center costs while migrating quickly to minimize disruption. Lifting and shifting an on-premises system to the cloud may seem fast and safe. But in reality, it’s an inefficient approach, one that’s like throwing everything you own into a moving van instead of packing strategically for a plane trip. In the long run, reducing baggage and traveling by air delivers greater agility and faster results because you are not weighed down by unnecessary clutter. Some organizations may need to do a lift and shift, but most will find it’s not the best course of action because it simply persists outdated or inefficient legacy systems and offers little in the way of innovation.
AI led Algorithms can decide on how we need to emote, behave, react, transact or interact with an individual – Sameer with SCIKEY
In an exclusive interaction with SCIKEY, Sameer Dhanrajani, CEO at AIQRATE Advisory & Consulting, speaks about what the future of work will look like enabled by AI, its contribution to building productive teams, and the emerging AI trends to watch out for in a post-COVID scenario.
“AI led algorithms can decide on how we need to emote, behave, react, transact or interact with an individual,” Sameer Dhanrajani
Sameer is a globally recognized AI advisor, business builder, evangelist and thought leader known for his deep knowledge and strategic consulting approaches in the AI space. Sameer has consulted with several Fortune 500 global enterprises, Indian corporations, GCCs, startups, SMBs, VC/PE firms and academic institutions in driving AI-led strategic transformation and innovation strategies. Sameer is a renowned author, columnist, blogger and four-time TEDx speaker. He is the author of the bestselling book AI and Analytics: Accelerating Business Decisions.
Mr Dhanrajani, you have consulted with several Fortune 500 enterprises, GCCs and start-ups in driving AI-led strategic transformation strategies. What, according to you, are the topmost strategic considerations for a start-up managing and accelerating business in a post-COVID world?
The unprecedented times of COVID-19 have brought decision making sharply into focus. This includes the tactical, strategic, and operational decision making that is crucial to making a venture more sustainable. Today the use of artificial intelligence is quite high among organizations. It can be used by start-up ventures and other outfits to make decisions in virtually any area that requires them.
Most decisions that need to be made strategically are being passed on to AI-enabled interventions. The algorithm makes similar decisions based on the previous decisions taken. Algorithms can decide how we need to emote, behave, react, transact or interact with another individual. This advancement in AI brings the challenge for organizations to create products and services specific to each customer through hyper-personalization and micro-segmenting. However, it can also be considered an opportunity for organizations to emerge from the pandemic with newer business models and experiences for customers. Start-ups, especially, can make use of such advancements to reinvent and rejuvenate the organizational ecosystem.
You are known for your passion for Artificial Intelligence and are the author of the bestselling book AI and Analytics: Accelerating Business Decisions. Tell us how AI can be strategically significant while building productive teams.
My experience has led me to deal with engagements across the entire HR value chain, ranging from hiring to engagement to incentivization, all leveraging AI. It is phenomenal to see how AI can help build, engage, and sustain productive teams. AI can help in hiring through the detection of an interviewee’s emotions, facial expressions and tone modulations, using computer vision and image classification techniques.
In the creation of productive teams, AI can gauge the engagement levels of an employee. It looks at the various signals an employee gives off regarding their attendance, participation in virtual meetings, and propensity to ask questions and engage in conversations. It also tracks the number of pauses, intervals, and breaks taken by an employee. Every aspect of the employee is measured to see how productive and inclusive they are, both as individuals and in teams.
What are the top five AI trends to watch out for in the post-COVID scenario over the next year?
When it comes to AI, the first emerging trend is that AI is no longer just a tool or a technology; it is now being touted as a strategic imperative for any organization. This means that AI strategies will become an intrinsic part of every organisation.
The second trend is the democratization of AI. There is a possibility of the emergence of an AI marketplace where virtual exchanges related to business problems, demo runs etc. can be conducted. One would actually be able to figure out which algorithm is best for them in customer experience, supply chain etc.
The third trend is that the cloud will act as a catalyst for AI proliferation. Cloud providers will enable AI companies with microservice APIs and productized solutions created on the go. This means that organisations on these clouds will be able to explore possibilities specific to them when it comes to AI-specific use cases.
The fourth trend is linked to skilling. AI today is part of many course curricula. But what is missing is the whole aspect of how it gets applied. The new courseware will focus on how AI is implemented and adopted in the organization.
The fifth and final trend is decision making enabled by AI, which means humans will have no option but to upskill and reskill themselves to take a more rational, pragmatic and sanguine approach. New models and new realities of decision making will emerge.
How is AI powering the Future of Work, and what are the critical considerations for business and tech leaders given the rapidly changing business dynamics due to COVID?
The future of work will be about AI and what we call AI plus a set of exponential technologies. This means that every aspect of our performance, interactions and responses will be gauged minutely through these technologies. This indicates that our level of performance, in terms of how we stay up to date, needs to be worked upon. The future of work is an ecosystem where one particular employer cannot do it all.
This means that if learning must occur through an external player, it must come through the ecosystem of co-employees and the employer. In the future, we will not be caged as mere professionals doing our jobs but will be encouraged to push our boundaries and explore more at work. At the same time, transformation, innovation, and disruption will be part of the future’s performance metrics. They will become a major parameter for the organization to distinguish a mediocre employee from a proficient one. This is where the onus will fall on employees to ensure that they are not just doing what is asked, but going beyond it to create value for the organisation.
SCIKEY Market Network is a digital marketplace for jobs, work and business solutions, supported by a professional network and an integrated services ecosystem. It enables enterprises, businesses, job seekers, freelancers, and gig workers around the world. With its online events, learning certifications, assessments, ranking awards, content promotion tools, SaaS solutions for business, a global consulting ecosystem, and more, companies can get the best deals in one place.
‘SCIKEY Assured,’ a premium managed-services offering by SCIKEY, delivers the best outcomes to enterprise customers globally for talent and technology solutions delivered offshore, remotely, or on-premise. We are super-proud to be working with some of the world’s most iconic Fortune 1000 brands.
Better Work. Better Business. Better Life. Better World.
CXO Insights: Establishing AI fluency with Boards – The new strategic imperative
Though the theme is rhetorical, we can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI — a discussion that’s relevant whether organizations are developing AI systems or buying AI-powered software. With AI adoption increasingly widespread, it’s time for every board to develop a proactive approach for overseeing how AI operates within the context of the organization’s overall mission and risk management.
According to a recent global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations “comprehensively identify and prioritize” the risks associated with AI deployment. Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, “Fit for the Future: An Urgent Imperative for Board Leadership,” 86% of board members “fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years.”
Why is this an imperative? Because AI’s potential to deliver significant benefits comes with new and complex risks. For example, the frequency with which AI-driven facial recognition technologies misidentify nonwhite or female faces is among the issues that have driven a pullback by major vendors — which are also concerned about the use of the technology for mass surveillance and consequent civil rights violations. Recently, IBM stopped selling facial recognition technology altogether. Further, Microsoft said it would not sell its facial recognition technology to police departments until Congress passes a federal law regulating its use by law enforcement. Similarly, Amazon said it would not allow police use of its technology for a year, to allow time for legislators to act.
The use of AI-driven facial recognition technology in policing is just one notorious example, however. Virtually all AI systems and platforms in use today may be vulnerable to problems that result from the nature of the data used to train and operate them, the assumptions made in the algorithms themselves, the lack of system controls, and the lack of diversity in the human teams that build, instruct, and deploy them. Many of the decisions that will determine how these technologies work, and what their impact will be, take place largely outside of the board’s view — despite the strategic, operational, and legal risks they present. Nonetheless, boards are charged with overseeing and supporting management in better managing AI risks.
Increasing the board’s fluency with and visibility into these issues is just good governance. A board, its committees, and individual directors can approach this as a matter of strict compliance, strategic planning, or traditional legal and business risk oversight. They might also approach AI governance through the lens of environment, social, and governance (ESG) considerations: As the board considers enterprise activity that will affect society, AI looms large. The ESG community is increasingly making the case that AI needs to be added to the board’s portfolio.
How Boards can assess the quality & impact of AI
Directors’ duties of care and loyalty are familiar and well established. They include the obligations to act in good faith, be sufficiently informed, and exercise due care in oversight over strategy, risk, and compliance.
Boards assessing the quality and impact of AI, and the oversight it requires, should understand the following:
- AI is more than an issue for the technology team. Its impact resonates across the organization and implicates those managing legal, marketing, and human resources functions, among others.
- AI is not a siloed technology. It is a system comprising the technology itself, the human teams who manage and interact with it, and the data upon which it runs.
- AI systems need the accountability of C-level strategy and oversight. They are highly complex and contextual and cannot be trustworthy without integrated, strategic guidance and management.
- AI is not static. It is designed to adapt quickly and thus requires continuous oversight.
- The AI systems most in use by business today are efficient and powerful prediction engines. They generate these predictions based on data sets that are selected by engineers, who use them to train and feed algorithms that are, in turn, optimized on goals articulated — most often — by those developers. Those individuals succeed when they build technology that works, on time and within budget. Today, the definition of effective design for AI may not necessarily include guardrails for its responsible use, and engineering groups typically aren’t resourced to take on those questions or to determine whether AI systems operate consistently with the law or corporate strategies and objectives.
The choices made by AI developers — or by an HR manager considering a third-party resume-screening algorithm, or by a marketing manager looking at an AI-driven dynamic pricing system — are significant. Some of these choices may be innocuous, but some are not, such as those that result in hard-to-detect errors or bias that can suppress diversity or that charge customers different prices based on gender. Board oversight must include requirements for policies at both the corporate level and the use-case level that delineate what AI systems will and will not be used for. It must also set standards by which their operation, safety, and robustness can be assessed. Those policies need to be backed up by practical processes, strong culture, and compliance structures.
Enterprises may be held accountable for whether their uses of algorithm-driven systems comply with well-established anti-discrimination rules. The U.S. Department of Housing and Urban Development recently charged Facebook with violations of the federal Fair Housing Act for its use of algorithms to determine housing-related ad-targeting strategies based on protected characteristics such as race, national origin, religion, familial status, sex, and disabilities. California courts have held that the Unruh Civil Rights Act of 1959 applies to online businesses’ discriminatory practices. The legal landscape also is adapting to the increasing sophistication of AI and its applications in a wide array of industries beyond the financial sector. For instance, the FTC is calling for the “transparent, explainable, fair, and empirically sound” use of AI tools and demanding accountability and standards. The Department of Justice’s Criminal Division’s updated guidance underscores that an adequate corporate compliance program is a factor in sentencing guidelines.
From the board’s perspective, compliance with existing rules is an obvious point, but it is also important to keep up with evolving community standards regarding the appropriate duty of care as these technologies become more prevalent and better understood. Further, even after rules are in force, applying them in particular business settings to solve specific business problems can be difficult and intricate. Boards need to confirm that management is sufficiently focused and resourced to manage compliance well, along with AI’s broader strategic trade-offs and risks.
Risks to brand and reputation. The issue of brand integrity — clearly a current board concern — may be what drives AI accountability in the short term. Individuals charged with advancing responsible AI within companies have reported that the “most prevalent incentives for action were catastrophic media attention and decreasing media tolerance for the status quo.” Well before new laws and regulations are in effect, company stakeholders such as customers, employees, and the public are forming opinions about how an organization uses AI. As these technologies penetrate further into business and the home, their impact will increasingly define a brand’s reputation for trust, quality, and mission.
The role of AI in exacerbating racial, gender, and cultural inequities is inescapable. Addressing these issues within the technology is necessary, but it is not sufficient. Without question, we can move forward only with genuine commitments to diversity and inclusion at all levels of technology development and technology consumption.
Business continuity concerns. Boards and executives are already keenly aware that technology-dependent enterprises are vulnerable to disruption when systems fail or go wrong, and AI raises new board-worthy considerations on this score. First, many AI systems rely on numerous and unknown third-party technologies, which might threaten reliability if any element is faulty, orphaned, or inadequately supported. Second, AI carries the potential of new kinds of cyber threats, requiring new levels of coordination within any enterprise. And bear in mind that many AI developers will tell you that they don’t really know what an AI system will do until it does it — and that AI that “goes bad,” or cannot be trusted, will need remediation and may have to be pulled out of production or off the market.
The “New” strategic imperative for Boards
Regardless of how a board decides to approach AI fluency, it will play a critical role in considering the impact of the AI technologies that a business chooses to use. Before specific laws are in effect, and even well after they are written, businesses will be making important decisions about how to use these tools, how they will impact their workforces, and when to rely upon them in lieu of human judgment. The hardest questions a board will face about proposed AI applications are likely to be “Should we adopt AI in this way?” and “What is our duty to understand how that function is consistent with all of our other beliefs, missions, and strategic objectives?” Boards must decide where they want management to draw the line: for example, to identify and reject an AI-generated recommendation that is illegal or at odds with organizational values.
Boards should do the following in order to establish adequate AI fluency mechanics:
- Learn where in the organization AI and other exponential technologies are being used or are planned to be used, and why.
- Set a regular cadence for management to report on policies and processes for governing these technologies specifically, and for setting standards for AI procurement and deployment, training, compliance, and oversight.
- Encourage the appointment of a C-level executive to be responsible for this work, across company functions.
- Encourage adequate resourcing and training of the oversight function.
It’s not too soon for boards to begin this work; even for enterprises with little investment in AI development, it will find its way into the organization through AI-infused tools and services. The legal, strategic, and brand risks of AI are sufficiently grave that boards need facility with them and a process by which they can work with management to contain the risks while reaping the rewards. AI fluency is the new strategic agenda.
Managing Bias in AI: A Strategic Risk Management Approach for Banks
AI is set to transform the banking industry, using vast amounts of data to build models that improve decision making, tailor services, and improve risk management. According to the EIU, this could generate value of more than $250 billion in the banking industry. But there is a downside, since ML models amplify some elements of model risk. And although many banks, particularly those operating in jurisdictions with stringent regulatory requirements, have validation frameworks and practices in place to assess and mitigate the risks associated with traditional models, these are often insufficient to deal with the risks associated with machine-learning models. The added risk brought on by the complexity of algorithmic models can be mitigated by making well-targeted modifications to existing validation frameworks.
Conscious of the problem, many banks are proceeding cautiously, restricting the use of ML models to low-risk applications, such as digital marketing. Their caution is understandable given the potential financial, reputational, and regulatory risks. Banks could, for example, find themselves in violation of anti-discrimination laws and incur significant fines — a concern that pushed one bank to ban its HR department from using a machine-learning resume screener. A better approach, however, and ultimately the only sustainable one if banks are to reap the full benefits of machine-learning models, is to enhance model-risk management.
Regulators have not issued specific instructions on how to do this. In the United States, they have stipulated that banks are responsible for ensuring that risks associated with machine-learning models are appropriately managed, while stating that existing regulatory guidelines, such as the Federal Reserve’s “Guidance on Model Risk Management” (SR11-7), are broad enough to serve as a guide. Enhancing model-risk management to address the risks of machine-learning models will require policy decisions on what to include in a model inventory, as well as determining risk appetite, risk tiering, roles and responsibilities, and model life-cycle controls, not to mention the associated model-validation practices. The good news is that many banks will not need entirely new model-validation frameworks. Existing ones can be fitted for purpose with some well-targeted enhancements.
New Risk mitigation exercises for ML models
There is no shortage of news headlines revealing the unintended consequences of new machine-learning models. Algorithms that created a negative feedback loop were blamed for the “flash crash” of the British pound by 6 percent in 2016, for example, and it was reported that a self-driving car tragically failed to properly identify a pedestrian walking her bicycle across the street. The cause of the risks that materialized in these machine-learning models is the same as the cause of the amplified risks that exist in all machine-learning models, whatever the industry and application: increased model complexity. Machine-learning models typically act on vastly larger data sets, including unstructured data such as natural language, images, and speech. The algorithms are typically far more complex than their statistical counterparts and often require design decisions to be made before the training process begins. And machine-learning models are built using new software packages and computing infrastructure that require more specialized skills. The response to such complexity does not have to be overly complex, however. If properly understood, the risks associated with machine-learning models can be managed within banks’ existing model-validation frameworks.
Here are the strategic approaches for enterprises to ensure that the specific risks associated with machine learning are addressed:
Demystification of “black boxes”: Machine-learning models have a reputation for being “black boxes.” Depending on the model’s architecture, the results it generates can be hard to understand or explain. One bank worked for months on a machine-learning product-recommendation engine designed to help relationship managers cross-sell. But because the managers could not explain the rationale behind the model’s recommendations, they disregarded them. They did not trust the model, which in this situation meant wasted effort and perhaps wasted opportunity. In other situations, acting upon (rather than ignoring) a model’s less-than-transparent recommendations could have serious adverse consequences.
The degree of demystification required is a policy decision for banks to make based on their risk appetite. They may choose to hold all machine-learning models to the same high standard of interpretability or to differentiate according to the model’s risk. In the US, models that determine whether to grant credit to applicants are covered by fair-lending laws. The models must therefore be able to produce clear reason codes for a refusal. On the other hand, banks might well decide that a machine-learning model’s recommendation to place a product advertisement on the mobile app of a given customer poses so little risk to the bank that understanding the model’s reasons for doing so is not important. Validators also need to ensure that models comply with the chosen policy. Fortunately, despite the black-box reputation of machine-learning models, significant progress has been made in recent years to help ensure their results are interpretable. A range of approaches can be used, based on the model class:
- Linear and monotonic models (for example, linear-regression models): linear coefficients help reveal the dependence of a result on the inputs.
- Nonlinear and monotonic models (for example, gradient-boosting models with monotonic constraints): restricting inputs so they have either a rising or falling relationship globally with the dependent variable simplifies the attribution of inputs to a prediction.
- Nonlinear and nonmonotonic models (for example, unconstrained deep-learning models): methodologies such as local interpretable model-agnostic explanations (LIME) or Shapley values help ensure local interpretability.
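For the linear, monotonic case, the interpretability property is easy to demonstrate: a fitted linear model’s coefficients directly expose how each input drives the result. A minimal scikit-learn sketch on synthetic data (the features and weights below are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))       # two hypothetical inputs, e.g. income and debt
y = 2.0 * X[:, 0] - 3.0 * X[:, 1]   # known dependence: +2 on input 0, -3 on input 1

model = LinearRegression().fit(X, y)

# The fitted coefficients recover the dependence of the result on each input,
# which is exactly the interpretability property described above.
print(model.coef_.round(2))  # approximately [ 2. -3.]
```

For the nonlinear classes, the same question requires model-agnostic tools such as LIME or Shapley values rather than a direct read of the parameters.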
Bias: A model can be influenced by four main types of bias: sample bias, measurement bias, algorithmic bias, and bias against groups or classes of people. The latter two, algorithmic bias and bias against people, can be amplified in machine-learning models. For example, the random-forest algorithm tends to favor inputs with more distinct values, a bias that elevates the risk of poor decisions. One bank developed a random-forest model to assess potential money-laundering activity and found that the model favored fields with a large number of categorical values, such as occupation, when fields with fewer categories, such as country, were better able to predict the risk of money laundering.
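The random-forest preference for inputs with more distinct values can be reproduced in a few lines. In this sketch (entirely synthetic data), both features are pure noise with respect to the target, yet impurity-based importance favors the one offering more distinct split points:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Two features, both pure noise with respect to the target:
high_card = rng.normal(size=n)          # continuous: ~n distinct values
low_card = rng.integers(0, 2, size=n)   # binary: only 2 distinct values
X = np.column_stack([high_card, low_card]).astype(float)
y = rng.integers(0, 2, size=n)          # random labels: no real signal anywhere

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
imp = model.feature_importances_

# Impurity-based importance favors the feature with more candidate split
# points, even though neither feature predicts the target.
print(imp[0] > imp[1])  # True
```

This is why the text recommends technical fixes for feature selection or "challenger" models built with alternative algorithms to benchmark such behavior.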
To address algorithmic bias, model-validation processes should be updated to ensure appropriate algorithms are selected in any given context. In some cases, such as random-forest feature selection, there are technical solutions. Another approach is to develop “challenger” models, using alternative algorithms to benchmark performance. To address bias against groups or classes of people, banks must first decide what constitutes fairness. Four definitions are commonly used, though which to choose may depend on the model’s use:
- Demographic blindness: decisions are made using a limited set of features that are highly uncorrelated with protected classes, that is, groups of people protected by laws or policies.
- Demographic parity: outcomes are proportionally equal for all protected classes.
- Equal opportunity: true-positive rates are equal for each protected class.
- Equal odds: true-positive and false-positive rates are equal for each protected class.
Validators then need to ascertain whether developers have taken the necessary steps to ensure fairness. Models can be tested for fairness and, if necessary, corrected at each stage of the model-development process, from the design phase through to performance monitoring.
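Two of these definitions, demographic parity and equal opportunity, reduce to simple arithmetic on a model’s predictions. A numpy sketch on invented toy data (group labels and predictions are illustrative only):

```python
import numpy as np

# Toy predictions for two protected groups (0 and 1); all values are invented.
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])

# Demographic parity: positive-outcome rates should match across groups.
rate_0 = y_pred[group == 0].mean()   # 0.75
rate_1 = y_pred[group == 1].mean()   # 0.25
dp_gap = abs(rate_0 - rate_1)        # 0.5 -> parity is violated

# Equal opportunity: true-positive rates should match across groups.
tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()   # 1.0
tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()   # 0.5
eo_gap = abs(tpr_0 - tpr_1)                           # 0.5 -> also violated

print(dp_gap, eo_gap)
```

A validator would compute such gaps on a holdout set at each stage of development and flag the model when a gap exceeds the bank’s chosen threshold; equal odds adds the same check on false-positive rates.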
Feature engineering: Feature engineering is often much more complex in the development of machine-learning models than in traditional models, for three reasons. First, machine-learning models can incorporate a significantly larger number of inputs. Second, unstructured data sources such as natural language require feature engineering as a preprocessing step before training can begin. Third, a growing number of commercial machine-learning packages now offer so-called AutoML, which generates large numbers of complex features to test many transformations of the data. Models produced using these features run the risk of being unnecessarily complex, contributing to overfitting. For example, one institution built a model using an AutoML platform and found that specific sequences of letters in a product application were predictive of fraud. This was a completely spurious result caused by the algorithm’s maximizing the model’s out-of-sample performance.
In feature engineering, banks have to make a policy decision to mitigate risk. They have to determine the level of support required to establish the conceptual soundness of each feature. The policy may vary according to the model’s application. For example, a highly regulated credit-decision model might require that every individual feature in the model be assessed. For lower-risk models, banks might choose to review the feature-engineering process only: for example, the processes for data transformation and feature exclusion. Validators should then ensure that features and/or the feature-engineering process are consistent with the chosen policy. If each feature is to be tested, three considerations are generally needed: the mathematical transformation of model inputs, the decision criteria for feature selection, and the business rationale. For instance, a bank might decide that there is a good business case for using debt-to-income ratios as a feature in a credit model but not frequency of ATM usage, as this might penalize customers for using an advertised service.
Hyperparameters: Many of the parameters of machine-learning models, such as the depth of trees in a random-forest model or the number of layers in a deep neural network, must be defined before the training process can begin. In other words, their values are not derived from the available data. Rules of thumb, parameters used to solve other problems, or even trial and error are common substitutes. Decisions regarding these kinds of parameters, known as hyperparameters, are often more complex than analogous decisions in statistical modeling. Not surprisingly, a model’s performance and its stability can be sensitive to the hyperparameters selected. For example, banks are increasingly using binary classifiers such as support-vector machines in combination with natural-language processing to help identify potential conduct issues in complaints. The performance of these models and their ability to generalize can be very sensitive to the selected kernel function. Validators should ensure that hyperparameters are chosen as soundly as possible. For some quantitative inputs, as opposed to qualitative inputs, a search algorithm can be used to map the parameter space and identify optimal ranges. In other cases, the best approach to selecting hyperparameters is to combine expert judgment and, where possible, the latest industry practices.
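A minimal sketch of mapping a hyperparameter space: here the number of neighbors k for a toy one-dimensional nearest-neighbor classifier is selected by leave-one-out accuracy over a small grid. The data and grid are made up for illustration:

```python
def knn_predict(train, x, k):
    """Majority vote among the k nearest 1-D training points."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

def loo_accuracy(data, k):
    """Leave-one-out accuracy for a given hyperparameter k."""
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]   # hold out the i-th point
        hits += knn_predict(train, x, k) == y
    return hits / len(data)

# Made-up 1-D data: class 0 clusters low, class 1 clusters high,
# with two borderline points (0.9 and 1.1).
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.9, 0),
        (2.0, 1), (2.2, 1), (2.4, 1), (1.1, 1)]
grid = [1, 3, 5, 7]
scores = {k: loo_accuracy(data, k) for k in grid}
best_k = max(grid, key=lambda k: scores[k])
```

On this toy data the largest k scores worst, because the global majority vote drowns out local structure entirely, illustrating how sensitive both performance and generalization can be to the chosen hyperparameter.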
Production readiness: Traditional models are often coded as rules in production systems. Machine-learning models, however, are algorithmic and therefore require more computation. This requirement is commonly overlooked in the model-development process: developers build complex predictive models only to discover that the bank’s production systems cannot support them. One US bank spent considerable resources building a deep learning–based model to predict transaction fraud, only to discover it did not meet required latency standards. Validators already assess a range of model risks associated with implementation, but for machine learning they will need to expand the scope of this assessment, estimating the volume of data that will flow through the model, assessing the production-system architecture (for example, graphics-processing units for deep learning), and verifying the runtime required.
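As a simple sketch of the kind of runtime check validators can add, per-call latency can be sampled and compared against a latency budget. The scoring stub and the 50 ms budget here are placeholder assumptions, not a real system's figures:

```python
import time

def score_transaction(txn):
    # Placeholder for a deployed model's scoring routine.
    return sum(v * 0.01 for v in txn)

def latency_percentile(fn, payload, n=1000, pct=0.99):
    """Sample per-call wall-clock latency and return the given percentile."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn(payload)
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return samples[min(int(n * pct), n - 1)]

budget_seconds = 0.050  # hypothetical 50 ms latency SLA
p99 = latency_percentile(score_transaction, [1.0] * 200)
assert p99 < budget_seconds, f"p99 latency {p99:.4f}s exceeds budget"
```

Tail percentiles (p95/p99) matter more than averages here, since a fraud check that is usually fast but occasionally slow can still breach the production SLA.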
Dynamic model calibration: Some classes of machine-learning models modify their parameters dynamically to reflect emerging patterns in the data. This replaces the traditional approach of periodic manual review and model refresh. Examples include reinforcement-learning algorithms or Bayesian methods. The risk is that without sufficient controls, an overemphasis on short-term patterns in the data could harm the model’s performance over time. Banks therefore need to decide when to allow dynamic recalibration. They might conclude that with the right controls in place, it is suitable for some applications, such as algorithmic trading. For others, such as credit decisions, they might require clear proof that dynamic recalibration outperforms static models. With the policy set, validators can evaluate whether dynamic recalibration is appropriate given the intended use of the model, develop a monitoring plan, and ensure that appropriate controls are in place to identify and mitigate risks that might emerge. These might include thresholds that catch material shifts in a model’s health, such as out-of-sample performance measures, and guardrails such as exposure limits or other, predefined values that trigger a manual review.
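One common guardrail of this kind is a drift threshold on the model's input or score distribution, for example the population stability index (PSI) with the widely used 0.25 review cutoff. The histograms below are made up for illustration:

```python
from math import log

def psi(expected, actual, eps=1e-6):
    """Population stability index between two binned distributions
    (lists of counts over the same bins); higher means more drift."""
    e_total, a_total = sum(expected), sum(actual)
    value = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)   # clamp to avoid log(0)
        a_pct = max(a / a_total, eps)
        value += (a_pct - e_pct) * log(a_pct / e_pct)
    return value

# Made-up score histograms: training reference vs. recent production data.
reference = [100, 300, 400, 150, 50]
stable    = [ 95, 310, 390, 155, 50]
shifted   = [ 20, 100, 300, 380, 200]

REVIEW_THRESHOLD = 0.25  # common rule-of-thumb cutoff for manual review
flags = {name: psi(reference, current) > REVIEW_THRESHOLD
         for name, current in [("stable", stable), ("shifted", shifted)]}
```

A stable distribution stays well under the threshold, while the shifted one trips the manual-review trigger, which is exactly the kind of predefined value that should pause dynamic recalibration pending human review.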
Banks will need to proceed gradually. The first step is to make sure model inventories include all machine learning–based models in use. One bank’s model-risk-management function was certain the organization was not yet using machine-learning models, until it discovered that its recently established innovation function had been busy developing machine-learning models for fraud and cybersecurity.
From here, validation policies and practices can be modified to address machine-learning-model risks, though initially for a restricted number of model classes. This helps build experience while testing and refining the new policies and practices. Considerable time will be needed to monitor a model’s performance and fine-tune the new practices. But over time banks will be able to apply them to the full range of approved machine-learning models, helping companies mitigate risk and gain the confidence to start harnessing the full power of machine learning.
(AIQRATE, A bespoke global AI advisory and consulting firm. A first in its genre, AIQRATE provides strategic AI advisory services and consulting offerings across multiple business segments to enable clients on their AI powered transformation & innovation journey and accentuate their decision making and business performance.
AIQRATE works closely with Boards, CXOs and Senior leaders advising them on navigating their Analytics to AI journey with the art of possible or making them jump start to AI progression with AI@scale approach followed by consulting them on embedding AI as core to business strategy within business functions and augmenting the decision-making process with AI. We have proven bespoke AI advisory services to enable CXO’s and Senior Leaders to curate & design building blocks of AI strategy, embed AI@scale interventions and create AI powered organizations. AIQRATE’s path breaking 50+ AI consulting frameworks, assessments, primers, toolkits and playbooks enable Indian & global enterprises, GCCs, Startups, VC/PE firms, and Academic Institutions enhance business performance and accelerate decision making.
Visit www.aiqrate.ai to experience our AI advisory services & consulting offerings)
Emergence of AI Powered Enterprise: Strategic considerations for Leaders
The excitement around artificial intelligence is palpable. It seems that not a day goes by without one of the giants in the industry coming out with a breakthrough application of this technology, or a new nuance being added to the overall body of knowledge. Horizontal and industry-specific use cases of AI abound, and there is always something exciting around the corner.
However, with the keen interest from global leaders of multinational corporations, the conversation is shifting towards having a strategic agenda for AI in the enterprise. Business heads are less interested in topical experiments and minuscule productivity gains made in the short term. They are keener to understand the impact of AI on their areas of work from a long-term standpoint. Perhaps the most important question that they want to see answered is – what will my new AI-enabled enterprise look like? The question is as strategic as it is pertinent. For business leaders, the most important issues are – improving shareholder returns and ensuring a productive workforce – as part of running a sustainable, future-ready business. Artificial intelligence may be the breakout technology of our time, but business leaders are preoccupied with understanding just how this technology can usher in a new era for their business: how it is expected to upend existing business value chains, unlock new revenue streams, and deliver improved efficiencies in cost outlays. In this article, let us try to answer these questions.
AI is Disrupting Existing Value Chains
Ever since Michael Porter first expounded on it in his best-selling book, Competitive Advantage: Creating and Sustaining Superior Performance, the concept of the value chain has gained great currency in the minds of business leaders globally. The idea behind the value chain was to map out the interlinkages between the primary activities that work together to conceptualize and bring a product or service to market (R&D, manufacturing, supply chain, marketing, etc.), as well as the role played by support activities performed by other internal functions (finance, HR, IT, etc.). Strategy leaders globally leverage the concept of value chains to improve business planning, identify new possibilities for improving business efficiency, and exploit potential areas for new growth.
Now with AI entering the fray, we might see new vistas in the existing value chains of multinational corporations. For instance:
- Manufacturing is becoming heavily augmented by artificial intelligence and robotics. We are seeing these technologies gain a stronger foothold across processes requiring increasing sophistication. Business leaders now need to seriously consider workforce planning for a labor force that consists of both human and artificial workers at their manufacturing units. Due attention should also be paid to ensuring that both coexist in a symbiotic and complementary manner.
- Logistics and Delivery are two other areas where we are seeing steady growth in the use of artificial intelligence. Demand planning and fulfilment through AI has already reached a high level of sophistication at most retailers. Now Amazon – which handles some of the largest and most complex logistics networks in the world – is in advanced stages of bringing in unmanned aerial vehicles (drones) for deliveries through its Amazon Prime Air program. Business leaders expect outcomes ranging from increased customer satisfaction (through faster deliveries) to reduced delivery costs.
- Marketing and Sales are constantly at the forefront of some of the most exciting inventions in AI. One of the most recent and evolved applications of AI is Reactful. A tool developed for eCommerce properties, Reactful helps drive better customer conversions by analyzing the clickstream and digital footprints of people on web properties and persuading them to make a purchase. Business leaders need to explore new ideas such as this that can help drive meaningful engagement and top-line growth through these new AI-powered tools.
AI is Enabling New Revenue Streams
The second way business leaders are thinking strategically about AI is for its potential to unlock new sources of revenue. Earlier, functions such as internal IT were seen as cost centers. In today’s world, due to cost and competitive pressures, areas of the business that were traditionally considered cost centers are required to reinvent themselves into revenue and profit centers. The expectation from AI is no different. There is a need to justify the investments made in this technology – and find a way for it to unlock new streams of revenue in traditional organizations. Here are two key ways in which business leaders can monetize AI:
- Indirect Monetization is one of the forms of leveraging AI to unlock new revenue streams. It involves embedding AI into traditional business processes with a focus on driving increased revenue. We hear of multiple companies from Amazon to Google that use AI-powered recommendation engines to drive incremental revenue through intelligent recommendations and smarter bundling. The action item for business leaders is to engage stakeholders across the enterprise to identify areas where AI can be deeply ingrained within tech properties to drive incremental revenue.
- Direct Monetization involves directly adding AI as a feature to existing offerings. Examples abound in this area – from Salesforce bringing in Einstein into their platform as an AI-centric service to cloud infrastructure providers such as Amazon and Microsoft adding AI capabilities into their cloud offerings. Business leaders should brainstorm about how AI augments their core value proposition and how it can be added into their existing product stack.
AI is Bringing Improved Efficiencies
The third critical intervention for the new AI-enabled enterprise is bringing to the fore a more cost-effective business. Numerous topical and early-stage experiments with AI have brought interesting successes in reducing the total cost of doing business. Now is the time to create a strategic roadmap for these efficiency-led interventions and quantitatively measure their impact on the business. Some food for thought for business leaders:
- Supply Chain Optimization is an area that is ripe for AI-led disruption. With increasing varieties of products and categories and new virtual retailers arriving on the scene, there is a need for companies to reduce their outlay on the network that procures and delivers goods to consumers. One example of AI augmenting the supply chain function comes from Evertracker – a Hamburg-based startup. By leveraging IoT sensors and AI, they help their customers identify weaknesses such as delays and possible shortages early, basing their analysis on internal and external data. Business leaders should scout for solutions such as these that rely on data to identify possible tweaks in the supply chain network that can unlock savings for their enterprises.
- Human Resources is another area where AI-centric solutions can be extremely valuable to drive down the turnaround time for talent acquisition. One such solution is developed by Recualizer – which reduces the need for HR staff to scan through each job application individually. With this tool, talent acquisition teams need to first determine the framework conditions for a job on offer, while leaving the creation of assessment tasks to the artificial intelligence system. The system then communicates the evaluation results and recommends the most suitable candidates for further interview rounds. Business leaders should identify such game-changing solutions that can make their recruitment much more streamlined – especially if they receive a high number of applications.
- The Customer Experience arena also throws up very exciting AI use cases. We have now gone well beyond just bots answering frequently asked questions. Today, AI-enabled systems can also provide personalized guidance to customers that can help organizations level-up on their customer experience, while maintaining a lower cost of delivering that experience. Booking.com is a case in point. Their chatbot helps customers identify interesting activities and events that they can avail of at their travel destinations. Business leaders should explore such applications that provide the double advantage of improving customer experience, while maintaining strong bottom-line performance.
The possibilities for the new AI-enabled enterprise are as exciting as they are varied. The ideas shared here are by no means exhaustive, but they will hopefully seed interesting ideas for powering improved business performance. Strategy leaders and business heads need to consider how their AI-led businesses can help disrupt their existing value chains for the better and unlock new ideas for improving bottom-line and top-line performance. This will usher in a new era of the enterprise, enabled by AI.
Personal Data Sharing & Protection: Strategic relevance from India’s context
India’s investments in digital financial infrastructure—known as “India Stack”—have sped up the large-scale digitization of people’s financial lives. As more and more people begin to conduct transactions online, questions have emerged about how to provide millions of customers adequate data protection and privacy while allowing their data to flow throughout the financial system. Data-sharing among financial services providers (FSPs) can enable providers to more efficiently offer a wider range of financial products better tailored to the needs of customers, including low-income customers.
There are several operational and coordination challenges across the three types of entities involved: financial information providers (FIPs), financial information users (FIUs), and account aggregators (AAs). There are also questions around the data-sharing business model of AAs. Since AAs are additional players, they generate costs that must be offset by efficiency gains in the system to mitigate overall cost increases to customers. It remains an open question whether AAs will advance financial inclusion, how they will navigate issues around digital literacy and smartphone access, how the limits of a consent-based model of data protection and privacy play out, what capacity issues will be encountered among regulators and providers, and whether a competitive market of AAs will emerge given that regulations and interoperability arrangements largely define the business.
Account Aggregators (AAs):
Account aggregators (AAs) are one of the new categories of non-banking financial companies (NBFCs) to figure in India Stack—India’s interconnected set of public and nonprofit infrastructure that supports financial services. India Stack has scaled considerably since its creation in 2009, marked by rapid digitization and parallel growth in mobile networks, reliable data connectivity, falling data costs, and continuously increasing smartphone use. Consequently, the creation, storage, use, and analysis of personal data have become increasingly relevant. Following an “open banking” approach, the Reserve Bank of India (RBI) licensed seven AAs in 2018 to address emerging questions around how data can be most effectively leveraged to benefit individuals while ensuring appropriate data protection and privacy, with consent as a key element. RBI created AAs to address the challenges posed by the proliferation of data by enabling data-sharing among financial institutions with customer consent. The intent is to provide a method through which customers can consent (or not) to a financial services provider accessing their personal data held by other entities. Providers are interested in these data, in part, because information shared by customers, such as bank statements, allows providers to better understand customer risk profiles. The hypothesis is that consent-based data-sharing will help poorer customers qualify for a wider range of financial products—and receive financial products better tailored to their needs.
Data Sharing Model : The new perspective:
Paper-based data collection is inconvenient, time-consuming, and costly for customers and providers. Where models for digital sharing exist, they typically involve transferring data through intermediaries that are not always secure or through specialized agencies that offer little protection for customers. India’s consent-based data-sharing model provides a digital framework that enables individuals to give and withdraw consent over how, and how much of, their personal data are shared via secure and standardized channels. India’s guiding principles for sharing data with user consent—not only in the financial sector—are outlined in the National Data Sharing and Accessibility Policy (2012) and the Policy for Open Application Programming Interfaces for the Government of India. The Information Technology Act (2000) requires any entity that shares sensitive personal data to obtain consent from the user before the information is shared. The forthcoming Personal Data Protection Bill makes it illegal for institutions to share personal data without consent. Consent in this model is captured in a standardized electronic consent artifact with five components:
- Identifier : Specifies entities involved in the transaction: who is requesting the data, who is granting permission, who is providing the data, and who is recording consent.
- Data : Describes the type of data being accessed and the permissions for use of the data. Three types of permissions are available: view (read only), store, and query (request for specific data). The artifact structure also specifies the data that are being shared, date range for which they are being requested, duration of storage by the consumer, and frequency of access.
- Purpose : Describes end use, for example, to compute a loan offer.
- Log : Contains logs of who asked for consent, whether it was granted or not, and data flows.
- Digital signature : Identifies the digital signature and the digital ID user certificate used by the provider to verify the digital signature. This allows providers to share information in encrypted form.
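Putting the five components together, a consent artifact might be serialized along the following lines; the field names and values here are purely illustrative and are not the official schema:

```python
import json

# Illustrative consent artifact covering the five components described above.
# All field names, identifiers, and values are hypothetical.
consent_artifact = {
    "identifier": {
        "data_consumer": "fiu:example-lender",       # who requests the data
        "data_provider": "fip:example-bank",         # who provides the data
        "customer": "user@example-aa-handle",        # who grants permission
        "consent_recorder": "aa:example-aggregator", # who records consent
    },
    "data": {
        "type": "bank-statement",
        "permissions": ["view"],                     # view | store | query
        "date_range": {"from": "2024-01-01", "to": "2024-06-30"},
        "storage_duration_days": 30,
        "access_frequency": "monthly",
    },
    "purpose": "compute a loan offer",
    "log": [{"event": "consent-granted", "timestamp": "2024-07-01T10:00:00Z"}],
    "digital_signature": "base64-signature-placeholder",
}

serialized = json.dumps(consent_artifact, indent=2)
```

Structuring consent this way makes each grant machine-readable and auditable: the log records who asked and what was granted, while the signature lets the data provider verify that the artifact has not been tampered with.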
The Approach:
The AA consent-based data-sharing model mediates the flow of data between producers and users of data, ensuring that sharing data is subject to granular customer consent. AAs manage only the consent and data flows for the benefit of the consumer, mitigating the risk of an FIU pressuring consumers to consent to access to their data in exchange for a product or service. However, AAs, as entities that sit in the middle of this ecosystem, come with additional costs that will affect the viability of the business model and the cost of servicing consumers. FIUs will most likely urge consumers to go directly to an AA to receive fast, efficient, and low-cost services, but AAs ultimately must market their services directly to the consumer. While AA services are not an easy sell, rising awareness among Indian consumers that their data are being sold without their consent or knowledge may give rise to an initial wave of adopters. While the AA model is promising, it remains to be seen how and when it will have a direct impact on the financial lives of consumers.
Differences between the Personal Data Protection Bill and the GDPR
There are some major differences between the two.
First, the bill gives India’s central government the power to exempt any government agency from the bill’s requirements. This exemption can be given on grounds related to national security, national sovereignty, and public order.
While the GDPR offers EU member states similar escape clauses, they are tightly regulated by other EU directives. Without these safeguards, India’s bill potentially gives India’s central government the power to access individual data over and above existing Indian laws such as the Information Technology Act of 2000, which deals with cybercrime and e-commerce.
Second, unlike the GDPR, India’s bill allows the government to order firms to share any of the non-personal data they collect with the government. The bill says this is to improve the delivery of government services. But it does not explain how this data will be used, whether it will be shared with other private businesses, or whether any compensation will be paid for its use.
Third, the GDPR does not require businesses to keep EU data within the EU. They can transfer it overseas, so long as they meet conditions such as standard contractual clauses on data protection, codes of conduct, or certification systems that are approved before the transfer.
The Indian bill allows the transfer of some personal data, but sensitive personal data can only be transferred outside India if it meets requirements that are similar to those of the GDPR. What’s more, this data can only be sent outside India to be processed; it cannot be stored outside India. This will create technical issues in delineating between categories of data that have to meet this requirement, and add to businesses’ compliance costs.