A year ago, the realm of AI was significantly different. Today, we are deeply engaged in discussions around large language models (LLMs), and these discussions are not just about how we use AI, but also about staying ahead, understanding potential benefits, and leveraging new tools appropriately.
Other important considerations are fairness and transparency, which should be at the core of modelling and analytics functions. More and more, organisations are seeking support in these areas, highlighting the growing importance of these fundamentals.
The Consumer Duty regulations in the UK, although not directly an AI issue, have significant AI implications. Every time an AI or machine learning model is involved, Consumer Duty becomes a primary concern.
Companies are creating AI task forces comprising data scientists, legal, commercial, and strategy professionals to ensure diverse input. These task forces monitor the delicate balance between exploring the potential of new tools and data and the responsibility of safeguarding privacy and security.
Large organisations with extensive in-house data face different challenges from startups relying on consumer-consented data. The focus also varies across industries: credit risk, for example, is more heavily regulated, leading to cautious adoption of new technologies, while sectors such as marketing and customer communication are more flexible in experimenting with AI applications.
When it comes to using conversational AI for customer engagement and service, there are multiple security considerations. For example, the system may interact with credit rating agencies, which raises concerns about using external data and benchmarking methods.
What does a large language model (LLM) enable you to do? Can it perform tasks beyond basic functions such as credit checks? The potential is almost blue-sky, but much work remains before LLMs reach the level of maturity needed to be deemed fit for purpose. Existing models, on the other hand, have already undergone extensive testing to ensure security and accuracy.
For consumers, having access to their data enables them to make more informed, educated decisions about various providers and products, setting the stage for a more secure future.
Observing data trends today is akin to watching a tsunami approach, with rising expenses yet stagnant salaries. One helpful use case in this environment is that data can reveal when consumers are struggling with bill payments, highlighting the impact of the cost-of-living crisis.
What are the uses of data? From a company's perspective, it can enhance interactions with customers, for example by integrating credit scoring data with customer profiles and transaction records, and it can make credit-related discussions more effective.
For example, traditionally a financial institution might put a customer through an ordeal, such as filling in a long form, before answering their credit request. But if the institution has a lifetime record of the client, it can tell the person's situation and financial history without a form. By digitising the process and using AI, you make it easier, more effective, and less painful for the customer.
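As a purely illustrative sketch of that idea, and not Webio's product or any institution's actual decision logic, the snippet below shows how data already on file (a credit score, payment history, transaction-derived balances) could give a provisional answer to a credit request without a long form. The record fields, thresholds, and rule are hypothetical assumptions.

```python
# Hypothetical customer record an institution might already hold, combining
# a credit score with profile and transaction history (all values illustrative).
customer = {
    "years_as_customer": 7,
    "credit_score": 720,
    "missed_payments_last_12m": 0,
    "avg_monthly_balance": 1850.0,
}

def pre_assess(record, requested_amount):
    """Toy pre-assessment using data already on file, so no long form is needed.

    A real process would add affordability checks, regulatory requirements,
    and human review before any decision is communicated.
    """
    if record["missed_payments_last_12m"] > 0:
        return "refer for manual review"
    if record["credit_score"] >= 700 and requested_amount <= record["avg_monthly_balance"] * 3:
        return "provisionally approved"
    return "more information needed"

print(pre_assess(customer, requested_amount=5000))  # -> provisionally approved
```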
It is important to understand and control AI tools to ensure they align with internal and regulatory frameworks. To this end, paying close attention to potential biases helps in developing fairer AI models and tools that are free from prejudice.
Embedded bias is always a concern. Model designers must scrutinise the standards applied to their data sources and AI, comparing them with traditional data gathering methods. As new techniques emerge, actively questioning bias during the governance and build process has made model designers more focused on rooting out unintentional bias.
Identifying bias can be as straightforward as examining how questions are framed or their sequence. Are the questions open or closed? What is the emotional response at the end of the interaction? Paying attention to these aspects can significantly reduce biases. In the end, effective conversational AI hinges on proper training and managing the conversation to mitigate bias.
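To make "actively questioning bias" a little more concrete, here is a minimal, hypothetical sketch of one simple check a governance review might run: comparing a model's approval rates across groups and computing a demographic-parity gap. The data, group labels, and threshold are illustrative assumptions, not a description of any specific organisation's tooling.

```python
from collections import defaultdict

# Hypothetical model decisions: each record holds a group label and an outcome.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(records):
    """Return the approval rate for each group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approvals[r["group"]] += int(r["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Demographic-parity gap: difference between the highest and lowest approval rates.
gap = max(rates.values()) - min(rates.values())

print({g: round(rate, 2) for g, rate in rates.items()})  # {'A': 0.67, 'B': 0.33}
print(f"parity gap: {gap:.2f}")  # a review might flag gaps above an agreed threshold
```

A check like this is only a starting point; the same questioning applies to how conversational prompts are framed and sequenced, as noted above.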
Is more data always better? In reality, it is better to focus on the diversity and representation of data, not just sheer volume. Enriching your dataset with varied inputs provides a deeper understanding and helps in forecasting future trends.
Moreover, data storage is costly, so it is wise to be judicious about the data you collect and retain. Even with consumer consent, data can be stored only temporarily and must be used for specific purposes, not merely to accumulate volume. Data collection must serve a legitimate purpose and comply with legislation, considerations that are fundamental to the development of data models.
Government interventions, like energy payment support schemes, influence consumer behaviour. These exogenous factors must be considered in financial modelling and predictions. Additionally, evolving regulations and Consumer Duty requirements are prompting organisations to offer products and services in a more transparent and responsible manner.
It is difficult to predict external influences. Who could have predicted the run on toilet paper as COVID hit?
Consumers are taking more of a central role as their financial literacy increases. A shift is underway towards empowering consumers in the financial ecosystem. With greater control over their data, consumers can now make more informed decisions, leading to a more balanced and equitable financial space. For example, a customer will switch providers to get a better deal.
This empowerment is also reflected in how financial products are marketed and sold. Providers need to make sure that customers understand the products they are being offered, which is part of the Consumer Duty framework.
There are myriad marketing techniques that favour the company when selling products. So, what is marketing's role within these new regulations? How does it help customers make decisions? Marketing may see a fundamental change when viewed through the Consumer Duty lens.
The financial industry is changing, with tech companies entering traditional financial spaces. Customers have become used to a certain level of sophistication and ease of use in apps and digital interfaces, so the challenge for traditional institutions (the big old giants) is to improve user experiences to meet consumer expectations.
And so, the future might see significant changes in how financial products are accessed and used.
Generative AI could shake up customer interactions, offering personalised financial advice and plans. However, accountability and responsible use of AI recommendations must underpin all AI activity.
“Think of what happened to the Blackberry when the iPhone came along. Nobody thought the iPhone would destroy the Blackberry, but it did. Consumers were driving the market. iPhone had new uniqueness, like a touch screen and infinite scrolling. It turned out to be an interesting user experience that changed everything.” Paul Sweeney, Webio CSO
The industry is now at a similarly transformative stage to the introduction of the iPhone, but the full impact of this change is still unfolding. We are in an 'iPhone moment', yet it is too early to know what the equivalent of the App Store will be.
As the financial sector navigates its digital transformation and the effects of AI, the key to success lies in balancing technological advancements with ethical considerations and consumer education. By doing so, the industry can harness AI's power to offer more personalised, efficient, and equitable services.
If you need to improve your customer engagement, talk to us and we'll show you how AI and automation via digital messaging channels work.
You will love the Webio experience.
We promise.