Paul Sweeney, Webio CSO, and Dan Blakojevic of Optima Partners spoke with Terry Franklin, EVP at QUALCO, in the Credit Shift podcast: Navigating the Intersection of AI, Ethics and Finance.
Below is an overview of what they discussed.
Read Part 2: AI in Finance and Collections: Ethics and Trends
Innovating and integrating AI across the finance supply chain is a complex task. The fintech space is constantly evolving, and success requires a holistic approach that addresses perceptions, model congruency, emotional intelligence, and organisational attitudes.
The use of analytics and deep learning neural networks isn't new, but the recent shift lies in the ability to develop and deploy machine learning and AI in real or near-real time and apply it in a meaningful way.
Now, the focus is on using AI analytics as enablers to enhance business efficiency and understand customers better. This application of AI aims to predict customer behaviour, tailor interactions, and address specific needs, which ultimately drives greater efficiency in business processes.
Furthermore, the rapid development of AI over the last two years has significantly impacted the credit risk space and collections industries. As a technology company, the emphasis is on delivering better benefits and outcomes to clients through data analytics, machine learning, and AI while aligning AI with technology platforms. The end goal is to enable SaaS clients to help their customers help themselves in the most effective way.
Perceptions and Concerns about AI
Public perceptions, influenced by social channels and the media, often raise questions about AI's impact on jobs and whether it works in a person’s best interests. Certainly, there are gaps in understanding, and there may be inflated expectations of what AI can do, which get in the way of technological progress. To make AI implementation work, businesses need to address misunderstandings, manage expectations, and help their customers understand how AI can genuinely benefit them.
At the moment, everything seems to have AI attached to it. So, we need to ask: is AI integration causing issues, with different models talking to each other and driving decisions in different applications? The challenge lies in maintaining congruency between models, ensuring they align with overall goals. If this is not considered, unintended consequences may arise, where different models essentially drive slightly different decisions. Achieving model congruency, where data is consistent and models align, is proving to be challenging amid widespread AI adoption.
For example, you might have a strategy-level segmentation based on predictive health that dictates contacting a customer at a certain time, while another model predicts the best time to call that same customer. You end up with conflicting interactions with the customer, and the outcome you set in your original strategic model is never delivered.
Is the adoption of AI concentrated in specific stages of the company's processes, or is it a widespread trend throughout the entire business?
For a business to say yes to AI, it needs to understand the value AI brings. It is about helping companies get real benefits from AI and deep learning models, and about picking real business problems and finding ways to apply AI analysis and insight to make a difference. So, to overcome hesitation, business leaders need to see how to turn AI adoption into practical value that they can put an ROI around.
For example, AI models can identify fraud risk in the supply chain finance journey, rapidly detecting evolving patterns of fraudulent behaviour so that the risk can be flagged and mitigated.
On another front, predictive models for customer contact times, or innovative approaches to analysing big data from customer interactions such as speech and text patterns, enable quick interventions. The goal is to recognise potential financial or health risks, allowing for prompt and improved communication with customers.
The crux of the matter is not just exploring various AI applications but translating them into tangible ROI-driven solutions that bring about meaningful changes and enhance both the business and customer experience.
The notion of picking up on emotional cues is bordering on artificial sentience alongside artificial intelligence. Those in customer communications are asking if there is a growing need for AI solutions to not just predict customer behaviours but also anticipate their emotions.
It's a burning question those in the industry face: how do you use AI in a way that isn't spooky for the end customer? For instance, predicting a growing family with a new baby on the way and then tailoring conversations around it can seem strange to the customer on the other end. In cases of a personal nature like this, sensitivity is important to avoid unsettling or frightening customers.
Being able to empathise with a customer’s feelings and communicate that empathy through AI chat creates a perception of a sentient being. Customers want organisations to know enough to help but not so much that it spooks them; it’s a fine line. So, the hard part is identifying vulnerable customers without violating privacy regulations.
While “sentient” systems theoretically can determine positivity or negativity, the complexity and layers of human emotions and nuanced language pose a challenge for hyper-intelligent technology to truly "know" us, or rather, make us feel that we are “known”. We are not seeing this working yet and it will be a huge step for AI to reach this level.
In credit risk, understanding customer attitudes and behaviours, not just cold hard facts, adds great value to customer interactions. However, the risk of profiling and pigeonholing certain population groups to cut corners is a valid concern and creating subgroups should be approached carefully to avoid biases. As any AI conversation designer will tell you, predicting outcomes and shortening the journey to resolution without creating biases is a fine line to walk.
An interesting observation is the diverse range of attitudes towards machine learning and AI even within individual organisations. Certain roles are open to its adoption and actively champion it, seeing productive outcomes. On the other hand, some areas within the same business are cautious and risk-averse, creating roadblocks to progress. Therefore, taking everyone in an enterprise on the machine learning and AI journey can be a tough task, as attitudes can hinder AI adoption and compromise the forward movement of tech progress.
In-depth discussions aim to convince clients that using AI technology will lead to better outcomes. In data science, for instance, where you need to look at reams and reams of data, AI can truly make a material difference. In the more traditional credit risk space, the focus shifts to convincing clients to apply AI technology and emphasising the value it brings. It is important that the message reaching wider society conveys that AI is not something to be feared; rather, it is meant to create benefits and improve experiences.
For it to work well, strategic AI adoption in credit and collections should focus on solving real business problems for tangible ROI. Integrating analytics-driven fintech and AI in finance offers opportunities but also brings challenges: real-time AI applications boost efficiency, yet public perceptions and model congruency issues create hurdles.
Read Part 2: AI in Finance and Collections: Ethics and Trends
If you need to improve your customer engagement, talk to us and we'll show you how AI automation via digital messaging apps works.
You will love the Webio experience.