Abhinav Tushar, Skit.ai’s Head of Machine Learning, discusses how LLMs are reshaping automated consumer conversations in the collections space and their direct influence on enhancing the overall consumer experience.
What advancements in Conversational AI are you most excited about currently?
At Skit.ai, we focus on helping businesses derive value from Conversational AI. We’re particularly interested in enhancing LLM capabilities to achieve difficult conversational goals: while LLMs are capable of high-quality conversations, they still struggle with reliability across multiple turns. Our focus is on aligning these systems with the goals of both the users and the businesses we serve.
How have LLMs changed the way we think about Conversational AI?
The current generation of LLMs has largely solved the problem of holding believable, natural conversations. Despite some factual issues and minor glitches, LLM bots can maintain the flow of a conversation. Beyond this, there are exciting upgrades for spoken conversations, such as the improved ability to model any behavior that can be meaningfully translated into text. This progress aligns with the promises of Artificial General Intelligence (AGI), and it’s exciting to see us move in that direction.
These advancements are prompting a reevaluation of the potential of automation. For a product like ours—goal-oriented bots—we expect a reduction in modeling complexity to increase the extent of automation, even for dialogs that used to be considered the forte of live agents.
How do you envision the future of Conversational AI over the next few years?
Over the next few years, we’ll see a focus on extracting value from this technology. While chat and voice bots have been around for quite some time, the emergence of LLMs has marked the beginning of a brand-new chapter, in which we will see more experimentation with conversational modality added to many interfaces. At a lower level, we expect multimodal models to dominate, along with significant effort going into integrating these virtual assistants with diverse data sources, enabling us to further personalize interactions with users.
What are some best practices that you follow while working with AI systems?
The most important thing to do is establish clear, measurable, goal-oriented metrics. Safety metrics, like the conversational compliance rate, are crucial in our domain. Without an effective system to monitor business value, we risk deploying products that are either harmful or not useful. This requires a thorough understanding of the Machine Learning (ML) model lifecycle, which remains unchanged despite advancements in LLMs, even though the intermediate tools have evolved.
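As an illustration of the kind of metric mentioned above, a conversational compliance rate could be computed over logged calls. This is a minimal sketch; the record fields and threshold are invented for the example, not Skit.ai’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    """Hypothetical log record for one automated collections call."""
    turns: int
    compliance_violations: int  # count of flagged disclosure/tone issues

def compliance_rate(conversations: list[Conversation]) -> float:
    """Share of conversations completed with zero flagged violations."""
    if not conversations:
        return 1.0
    clean = sum(1 for c in conversations if c.compliance_violations == 0)
    return clean / len(conversations)

calls = [Conversation(5, 0), Conversation(8, 1), Conversation(3, 0)]
print(f"{compliance_rate(calls):.2%}")  # 2 of 3 calls had no violations
```

Tracking a metric like this per release makes regressions visible before they reach consumers.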
What sets Skit.ai’s approach to Conversational AI apart from others in the industry?
Our Conversational AI stack is powered by LLMs connected to speech systems in a full-duplex manner to achieve naturalness, which is now an industry standard. A few elements, however, set us apart from other providers.
Firstly, at Skit.ai, we prioritize compliance and data security. We use guardrails, flow guarantees, red teaming, reinforcement learning, and other techniques to ensure compliance checks are considered at every stage of model development, deployment, and runtime.
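A runtime guardrail of the kind mentioned above can be as simple as screening each candidate bot response against prohibited patterns before it is spoken. The patterns and fallback below are purely illustrative, not Skit.ai’s actual implementation:

```python
import re

# Illustrative phrases a collections bot must never emit (e.g., threats or
# misrepresentation); a production system would use far richer checks.
PROHIBITED_PATTERNS = [
    re.compile(r"\bwill be arrested\b", re.IGNORECASE),
    re.compile(r"\bgarnish your wages\b", re.IGNORECASE),
]

FALLBACK = "Let me connect you with a live agent who can help."

def guard_response(candidate: str) -> str:
    """Return the candidate reply if it passes all checks, else a safe fallback."""
    for pattern in PROHIBITED_PATTERNS:
        if pattern.search(candidate):
            return FALLBACK
    return candidate

print(guard_response("Your current balance is $120."))
```

In practice, such checks sit alongside model-level alignment and flow guarantees, so no single layer is the only line of defense.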
Secondly, for us, goal completion often involves multiple conversations with a user, possibly across multiple channels.
Lastly, we support multimodality and believe in speech-first Conversational AI. Our approach acknowledges and leverages non-verbal cues in conversations, which have historically been overlooked in real-time Conversational AI.
What are some challenges that companies face in extracting value from AI?
The two most common challenges in my experience have been (a) not thoroughly understanding how business metrics connect to low-level models and (b) not respecting the model lifecycle once in production. With the rise of LLMs, executives in every company feel pressure to incorporate them in some way, often leading to ineffective efforts. What’s needed is a two-way exchange between understanding your product’s value chain and understanding an LLM’s capabilities. Additionally, as this technology evolves rapidly, it’s crucial to have a clear vision of the future to avoid working on problems that may soon become irrelevant.
How do you handle AI’s shortcomings in terms of fairness and bias?
Most of the shortcomings can be handled with a little extra effort. The type of models used and the nature of the product being developed carry more significant bias implications than the algorithm itself. We ensure that our product’s usage complies with AI ethics regulations and regional deployment guidelines. At a lower level, we monitor fairness metrics, prevent the misuse of protected attributes by any model, and select fair algorithms wherever we need them. This is a challenging objective, and new learnings often emerge as we go. We strive to lead the way as we address bias.
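One commonly monitored fairness metric is the demographic parity difference: the gap in positive-outcome rates between groups. A minimal sketch, with toy data invented for illustration:

```python
def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rate between any two groups.

    outcomes: list of 0/1 decisions; groups: parallel list of group labels.
    """
    rates = {}
    for outcome, group in zip(outcomes, groups):
        total, positive = rates.get(group, (0, 0))
        rates[group] = (total + 1, positive + outcome)
    by_group = [pos / tot for tot, pos in rates.values()]
    return max(by_group) - min(by_group)

# Toy example: group "a" gets positive outcomes 2/3 of the time, group "b" 1/3
outcomes = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # 2/3 - 1/3 = 1/3
```

A value near zero suggests parity; persistent large gaps are a signal to audit the model and its inputs.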
Can AI in collections enhance compliance with regulations? If so, how?
Yes, absolutely. While ongoing effort is essential to ensure compliance with existing and upcoming AI regulations in the collections space, we’re confident that overall compliance rates are rising and will continue to improve with AI adoption. This is not surprising; there are solutions built specifically to handle and monitor human compliance. Humans make mistakes naturally, and that’s one of the reasons automation scales. At Skit.ai, we enhance collections with AI not only by adding and improving communication channels but also by interconnecting them and learning from data to create a superior, far less error-prone engine.
Curious to learn more about how LLMs can enhance your collections strategy? Book a free demo with one of our experts.