AI for Businesses – Adoption Challenges, Ethics and Data

Paul Sweeney | Chief Strategy Officer
Table of Contents
Overcoming Barriers to AI Implementation
The AI Adoption Journey
The Role of Data and Privacy Techniques
How we Interact with Data in the Age of AI is Changing
AI, Ethics, and Governance

 

It’s always a treat when an expert generously shares their expertise. Such was the case when Javier Campos, CIO at Finastra and an authority on AI, automation and digital transformation, shared his knowledge, lessons learnt and stories with Paul Sweeney (Webio) and Dan Blagojevic (Optima) in the podcast Pathways to AI Excellence: Ethical, Strategic and Operational Insights.

This blog draws out some of the nuggets Javier spoke about, such as AI's power for business growth and overcoming barriers to its adoption. 

Overcoming Barriers to AI Implementation


Javier Campos was involved in a survey conducted by a working group organised by the FCA and the Bank of England. The survey was sent to financial institutions across the UK to explore how to accelerate AI adoption in the financial services industry. Unsurprisingly, even the biggest institutions with the largest budgets were struggling.
 

The survey report aimed to understand the situation and engage with different stakeholders. It highlighted the potential of AI but also emphasised the growing gap between what is possible and what is actually being implemented in practice.  

The natural conclusion was that there is a need to bridge the divide between AI's potential and its current operational use.  

Furthermore, many non-technical aspects are often neglected, leading to project failures. The report stressed the importance of addressing these non-technical barriers to AI adoption: even an organisation with the best model and high-quality data is likely to see its project fail if these internal issues go unresolved.

The AI Adoption Journey


As you read all the AI stories in the press and witness the amazing things AI can do, it's easy to get in a panic and feel like you're being left behind. 
 

So, the first challenge for any company is to perform a maturity assessment, where you identify all the things you need to implement AI effectively. Companies aiming to implement AI need a framework that encompasses infrastructure, data readiness, governance, and a clear understanding of AI's potential impact on business processes.  

Before you do anything, analyse where you are, determine what you want to achieve, and establish a timeframe. Then calculate the funding and resources required to make it happen.

Since resources are finite, you may need to consider stopping or diverting other projects. You might also need to outsource or bring in a third party or consultancy. Moreover, the importance of gaining access to the right data and having the right IT staff cannot be overemphasised.

The Role of Data and Privacy Techniques


Look for innovative approaches like synthetic data to address the challenges of data availability and privacy. Privacy-enhancing methods can help fill data gaps while respecting individuals' privacy and upholding regulatory requirements.
 

Javier shared an example of how he and his team used data during the Covid pandemic to build a model for the NHS in the UK. 

The project took place during the initial outbreak of Covid. At that time, the situation was urgent, and many experts predicted a disastrous outcome. However, it became apparent quite early on that the models used for these predictions were largely inaccurate. Despite this, the team's own model proved to be highly accurate due to a deliberate design decision.

The initial hurdle was to create mathematical models that accurately depicted the spread of diseases such as Covid. The underlying mathematics is straightforward, even though machine learning can be layered on to sharpen the results. The models considered factors such as the contagiousness of the disease and the rate at which people come into contact with one another, with infected individuals eventually either recovering or dying.
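
The class of model described here is the classic SIR (susceptible-infected-recovered) compartmental model. The sketch below is purely illustrative and is not the model Javier's team built; the parameters beta (infectious contacts per person per day) and gamma (daily recovery rate) are invented for the example.

```python
# Minimal SIR sketch of the kind of compartmental model described above.
# Illustrative only: the parameters are made up, not those from the NHS project.
import numpy as np

def simulate_sir(population, initial_infected, beta, gamma, days):
    """Integrate the SIR equations with a simple daily time step.

    beta  - average infectious contacts per person per day (contagiousness)
    gamma - daily recovery/removal rate (1 / average infectious period)
    """
    s = population - initial_infected
    i, r = float(initial_infected), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_removals = gamma * i
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
        history.append((s, i, r))
    return np.array(history)

# Example run with made-up parameters (R0 = beta / gamma = 2.5).
curve = simulate_sir(population=1_000_000, initial_infected=10,
                     beta=0.5, gamma=0.2, days=120)
print("Peak number infected on any single day:", int(curve[:, 1].max()))
```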

These models were generally sound. However, the key issue lay in determining the number of infected individuals. The accuracy of this metric depended on testing. Unfortunately, many governments, driven by panic and politics, frequently changed their testing criteria. This constant alteration led to fluctuations in the infection rate, rendering the metric unreliable. It was a rather embarrassing oversight by those responsible for these models.  

Realising the problem, Javier's team built their model to disregard the official NHS infection figures and instead rely on two more dependable indicators: 111 helpline calls and mobility data provided by Google. Feeding this data into their formulas, they predicted the course of the disease accurately, and their approach proved spot-on in every instance.

So we can see that the key to successful data analysis lies in focusing on the desired outcome and value, rather than fixating solely on the data at hand.

It is of utmost importance to understand the purpose of using AI and determine the necessary data accordingly. Not all data is valuable, and investing resources in unnecessary data collection can be wasteful. Instead, businesses should identify their goals and then assess the available data, exploring additional sources if needed. The focus should always be on adding value to the business and understanding the world it operates in to better serve customers.
 

Data Privacy and Fairness to Customers 

It is worth considering in more detail the discussion around vulnerable customers and its connection to the UK's Consumer Duty regulations. When it comes to identifying vulnerable customers, certain questions need to be asked and specific types of data must be accessed. However, this can potentially conflict with customer privacy and the legitimate use of data.

How do we reconcile this conflict? There is a fear that certain data collected on individuals may be exploited for malicious purposes, even though this is usually not the case. How can we find a solution to this dilemma?
 

This is where AI and technology can provide an answer. Privacy-enhancing techniques (PETs) are employed to address the issue of privacy. The challenge is to gather sufficient information about an individual without inadvertently discriminating against them or violating their privacy.

Where Does Synthetic Data Fit In?

AI models are now capable of generating data, which is certainly helpful. By generating this data in an anonymised manner, we can create a virtual version of an individual that is not tied to their actual identity, thus protecting them.

This data can then be used to deliver appropriate interactions with individuals when they, for example, contact a bank, guaranteeing that they are not subjected to discrimination and ensuring access to suitable credit options.
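
As a rough illustration, the toy sketch below fits a simple statistical model (a multivariate Gaussian) to a handful of invented customer attributes and then samples brand-new, artificial records. Real privacy-enhancing pipelines typically add formal guarantees such as differential privacy and handle constraints this toy ignores (non-negative ages, categorical fields and so on); it only shows the basic principle of working with records that belong to no real individual.

```python
# Toy synthetic-data sketch: fit the joint distribution of a few numeric
# customer attributes and sample artificial "virtual" customers that
# preserve the overall statistics but correspond to no actual person.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Stand-in for real customer data (all numbers invented).
real = pd.DataFrame({
    "age": rng.integers(18, 80, size=500),
    "income": rng.normal(32_000, 9_000, size=500),
    "missed_payments": rng.poisson(0.6, size=500),
}).astype(float)

# Fit the means and covariance of the real data ...
mean = real.mean().to_numpy()
cov = np.cov(real.to_numpy(), rowvar=False)

# ... and sample brand-new records from the fitted distribution.
synthetic = pd.DataFrame(rng.multivariate_normal(mean, cov, size=500),
                         columns=real.columns)

# Compare summary statistics of the real and synthetic datasets.
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```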

How we Interact with Data in the Age of AI is Changing


Digital behaviour generates data. People alter their behaviour based on what they think a system is capable of, and then adapt it further as they learn more. This produces both behaviour drift and model drift. It's not just about optimising; it's about exploring and adapting to the way the world is starting to work.

For example, we notice that there is a difference in how you use Google to search for something compared to using ChatGPT. In Google, you typically use three words, such as 'Nike', 'Trainers', 'Size 10'. But with ChatGPT, you need to give the system a prompt, talk to it, and ask it to do something. So what works today may not work tomorrow as people start to uncover the value of the data in a system like ChatGPT.  

AI, Ethics, and Governance


Ethical considerations in AI bring to light just how subjective ethics and fairness really are. Consequently, it is imperative for companies to establish well-defined ethical guidelines and governance structures to navigate these complex issues responsibly.
 

For instance, let's consider the classical example of credit risk prediction. When seeking to determine whether a person is likely to default or not, it is possible to quantitatively assess the probability using mathematical models. However, when it comes to ethics and fairness, the situation is different. These concepts are subjective and dependent on individual perspectives and worldviews.  

A similar scenario arises when individuals are assessed by a bank's system. Many of these systems operate on implicit or explicit ethical views, shaped by historical data and the bank's own values and worldview. Moreover, compliance with regulations varies across regions and must also be taken into account.

A case in point is the issue of self-driving cars. Imagine a situation where a self-driving car faces an unavoidable accident, and it must make a decision that will result in the loss of the life of an elderly person or a toddler.


If we examine this problem in the context of Western countries, a majority of people would probably choose to spare the toddler rather than the elderly person, reasoning that the latter has already lived a full life. In Asia, however, where there is a profound respect for the elderly, the perspective is different. Programming self-driving cars to make such decisions therefore becomes exceedingly complex.


An intriguing development in the UK is the Consumer Duty, which encompasses notions of fairness and value delivered to customers. However, determining what constitutes value, and whether customers truly comprehend it, sparks a profound conversation.
 

Regulatory bodies like the Financial Conduct Authority (FCA) have actively engaged in discussions surrounding these issues. The FCA, operating on a principle-based regulatory approach, outlines the desired outcomes without specifying the exact methods. This approach offers flexibility but also presents difficulties in ensuring compliance.

And so, navigating the ethical dimensions of AI deployment necessitates acknowledging the subjectivity of ethics and fairness. Companies must establish robust ethical guidelines and governance structures while considering regional compliance variations.

The Impact of AI on Financial Inclusion 

AI has the potential to enhance financial inclusion by addressing data gaps and offering services to traditionally underserved communities. Innovative solutions leverage AI and data analytics to extend credit and banking services to a broader audience.

Conclusion


There is a call for breaking down communication barriers between technical and business teams to unlock AI's transformative potential. In the end, what matters above all in any AI implementation is strategic planning, ethical considerations and inclusive practices, so that AI's full benefits can be realised for businesses and society.
