
Agentic AI, Next-Generation Neural Networks, and the Future of Customer Support

With an evolution from rules-based scripts to nested neural networks, technology is transforming customer voice support from a back-office function to a strategic business enabler.

There is perhaps no more personal an interaction that many brands will have with their customers (and would-be customers) than via voice support. While some consumers may rail against the use of technology in this area, the fact is that computers have long been the foundation of effective customer service. From the inception of computer scripting to the recent advent of agentic AI, technology has profoundly impacted the way organizations serve their customers.

Past is prologue. Rules-based scripts gave way to the earliest AI models, which in turn have given rise to more advanced machine learning and generative AI. Today, next-generation neural networks are beginning to influence the voice interactions individuals have with companies.

Today, when you deal with any major brand, you’re interacting with multiple technologies that are part of an omni-channel experience. When you’re on a website, there is technology capturing your login, browsing, and search activity. That data can be used when you interact with a chatbot to enhance a response with your individual preferences. As that model ingests more insights from your interactions, its ability to predict what you want or need grows. A system of intelligent agents can even process tasks for you. Your phone is a great example: the ability to tell Google or Siri to order your Amazon cart, update the shipping information, and use your Apple Pay to check out in one instruction set is a very real example of agentic AI in everyday use.
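To make that flow concrete, here is a minimal sketch of how an agentic system might decompose a single spoken instruction into a sequence of tool calls. The tool functions and the hard-coded plan are hypothetical stand-ins for illustration, not any vendor's actual API:

```python
# Illustrative sketch only: a voice assistant decomposing one spoken instruction
# into a sequence of tool calls. All tool functions here are hypothetical stand-ins.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: Callable[..., str]
    kwargs: dict

def place_order(cart_id: str) -> str:
    return f"order created from cart {cart_id}"

def update_shipping(address: str) -> str:
    return f"shipping set to {address}"

def charge_wallet(method: str) -> str:
    return f"payment captured via {method}"

def plan(instruction: str) -> list[Step]:
    # In a real agent, an LLM would produce this plan from the transcribed speech.
    # Here the plan is hard-coded to illustrate the structure.
    return [
        Step(place_order, {"cart_id": "amazon-cart-001"}),
        Step(update_shipping, {"address": "123 Main St, Boston, MA"}),
        Step(charge_wallet, {"method": "Apple Pay"}),
    ]

def run_agent(instruction: str) -> None:
    for step in plan(instruction):
        print(step.tool(**step.kwargs))

run_agent("Order my cart, update the shipping address, and check out with Apple Pay.")
```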

The introduction of AI will allow companies to reinvent the customer experience. In this field, we used to define a process, map the customer journey, process-enable tasks, and then build a system to support them. Now we can put the customer at the core of the journey map and, rather than building a process, we might create a star diagram based on the last 100,000 transactions and predict what the client wants before they start the interaction, or at the first moment of connection.
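As a simplified illustration of that predictive posture, the sketch below guesses a caller's likely intent from their recent activity before the conversation begins. The event names and intent mapping are invented for the example; a production system would learn these patterns from real transaction data:

```python
# Illustrative sketch: predicting a caller's likely intent from recent activity
# before the conversation starts. Event names and the mapping are invented.

from collections import Counter

def predict_intent(recent_events: list[str]) -> str:
    """Naive baseline: the most frequent recent event type hints at the next need."""
    if not recent_events:
        return "general_inquiry"
    most_common, _ = Counter(recent_events).most_common(1)[0]
    # Map observed behavior to a likely support intent.
    mapping = {
        "failed_payment": "billing_help",
        "delivery_delay": "order_status",
        "password_reset": "account_access",
    }
    return mapping.get(most_common, "general_inquiry")

print(predict_intent(["delivery_delay", "delivery_delay", "password_reset"]))
# -> "order_status": the agent can open the call already primed for this topic.
```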

We are living through what will be a truly transformative era for customer voice support. But technology and business leaders must be prepared to harness these new capabilities while managing the inherent risks associated with them. 

A Brief History of Voice Support Automation

The latest agentic AI systems build upon a history of technical advances in customer service systems.

First came computer scripting to route calls and manage basic voice interactions. These rule-based scripts provided agents with predefined responses, enabling businesses to handle routine inquiries more efficiently. However, these rule-based systems lacked flexibility and true intelligence, leading to frustration for users with more complex queries or issues (and the agents struggling to address them).

Then came natural language processing (NLP), which enabled automated systems to interpret and respond to customer queries with greater accuracy, paving the way for chatbots and voice assistants. I’ve personally seen an AI-driven real-time voice language translation solution transform customer interactions, enhancing localization and response efficiency for companies while also cutting costs by up to 50 percent.

Then came machine learning algorithms, which further enhanced AI-assisted customer service by empowering systems to learn from interactions and improve their responses over time. Businesses began integrating AI-powered Interactive Voice Response (IVR) systems capable of recognizing intent, leading to increased efficiency and reduced reliance on human agents. These conversational AI solutions became capable of more predictive and personalized interactions.
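For a rough sense of how such intent recognition works, here is a small sketch using a generic text classifier (scikit-learn's TF-IDF features plus logistic regression). The training utterances and intent labels are invented; real IVR systems train on far larger, domain-specific datasets:

```python
# Illustrative sketch of intent recognition for an AI-powered IVR: a small
# text classifier trained on example utterances. Training data is invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I want to check my order status",
    "where is my package",
    "I need to reset my password",
    "I can't log into my account",
    "there is a wrong charge on my bill",
    "why was I billed twice",
]
intents = ["order_status", "order_status",
           "account_access", "account_access",
           "billing", "billing"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

print(model.predict(["I was charged twice this month"])[0])  # likely "billing"
```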

Taken together, these advances have enabled companies to offer more intuitive interactions, cut costs, and raise customer satisfaction scores. 

The Emergence of Agentic AI and Next-Gen Neural Networks

Today, agentic AI is at the forefront. Unlike traditional AI, which follows pre-programmed responses, agentic AI can operate with greater degrees of autonomy, problem-solving capability, and adaptability. These AI models use deep reinforcement learning to handle complex customer service interactions, predict user needs, and even personalize responses based on historical data. I’ve seen such systems, working side-by-side with their human counterparts, process over 45 million interactions, contributing to up to 20% productivity gains for the companies that have implemented them.

Modern neural networks (such as GPT-4), when integrated with speech technologies, improve speech processing and the customer support experience. These networks can facilitate:

  • Real-time sentiment analysis for better customer interactions.
  • Advanced speech synthesis for natural and human-like responses.
  • Multilingual interactions to serve global audiences efficiently.
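As a toy illustration of the first item, the sketch below scores each caller turn for sentiment so an agent (human or AI) can adjust tone or escalate. The word lists are a crude stand-in for the neural sentiment models these networks actually use:

```python
# Illustrative sketch: flagging sentiment on each caller turn so a supervisor or
# the AI agent itself can adjust tone or escalate. The lexicon approach here is
# a simple stand-in for a neural sentiment model.

import re

NEGATIVE = {"angry", "frustrated", "unacceptable", "cancel", "terrible"}
POSITIVE = {"thanks", "great", "perfect", "appreciate", "helpful"}

def turn_sentiment(text: str) -> str:
    words = set(re.findall(r"[a-z]+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

transcript = [
    "I have been waiting for an hour and this is unacceptable",
    "Okay, thanks, that actually fixes it",
]
for turn in transcript:
    print(turn_sentiment(turn), "-", turn)
```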

Such models, applied in a voice-assist capacity, can capture and learn the terminology of a specific industry such as hospitality. They can ingest natural language as input, but they may not recognize every word, such as the name of a hotel in France. Augmenting the models with smaller, applied language models enables would-be guests to tell a voice system that they want to stay at the Mandarin Hotel Lutetia in Paris and get the appropriate response, in French if needed.
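One simple way to picture that augmentation: the general model's transcript is resolved against a small, domain-specific lexicon of property names. The hotel list and fuzzy-matching threshold below are illustrative only:

```python
# Illustrative sketch: correcting a general speech model's transcript against a
# small, domain-specific lexicon of property names, so a garbled hotel name still
# resolves to the intended property. Lexicon and cutoff are invented.

import difflib

HOTEL_LEXICON = [
    "Mandarin Oriental Paris",
    "Hotel Lutetia Paris",
    "Le Bristol Paris",
]

def resolve_property(heard: str) -> str | None:
    lowered = {name.lower(): name for name in HOTEL_LEXICON}
    match = difflib.get_close_matches(heard.lower(), list(lowered), n=1, cutoff=0.4)
    return lowered[match[0]] if match else None

print(resolve_property("hotel lutecia paris"))  # -> "Hotel Lutetia Paris"
```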

Risk Management and Governance

Agentic AI, especially in voice support, introduces a range of risks due to its autonomy, decision-making power, and ability to mimic human interaction at scale. Comprehensive risk management is essential to safely leverage these powerful AI tools in voice support environments.

Some primary concerns include:

Risk: Security and data threats. Agentic AI systems may require access to sensitive customer data and backend systems, making them prime targets for cyber threats.

Mitigation strategy: Dedicated AI breach protocols, data loss protection controls, and rapid remediation plans.

Risk: Hallucinations and misinformation. AI agents can generate incorrect or misleading outputs, which can propagate through interconnected systems or over time, compounding the impact.

Mitigation strategy: Data extent guardrails (to control the scope and reach of the data used to train and operate AI models), output monitoring, and thorough testing.
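A minimal sketch of what one such output check might look like, assuming a hypothetical policy that caps the refund an AI agent may promise without review (the threshold and pattern check are invented for illustration):

```python
# Illustrative sketch of an output guardrail: before a drafted answer reaches the
# customer, check it for unsupported commitments (here, refund amounts above an
# auto-approval limit) and route failures to a human queue. Policy values invented.

import re

MAX_AUTO_REFUND = 50  # dollars an agent may promise without human approval

def guardrail(draft: str) -> tuple[bool, str]:
    amounts = [int(a) for a in re.findall(r"\$(\d+)", draft)]
    if any(a > MAX_AUTO_REFUND for a in amounts):
        return False, "refund above auto-approval limit; escalate to human review"
    return True, "ok"

ok, reason = guardrail("I have issued you a $500 refund for the inconvenience.")
print(ok, "-", reason)  # False - refund above auto-approval limit; ...
```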

Risk: Loss of control. If AI is given too much autonomy without the proper safeguards in place, it can lead to unpredictable or irreversible actions. For example, prompt injection or memory poisoning attacks can manipulate an AI agent or corrupt its decision-making processes.

Mitigation strategy: Penetration testing, human-in-the-loop approaches, continuous security assessment.
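A human-in-the-loop gate can be as simple as the sketch below: low-risk actions run autonomously, while irreversible ones wait for explicit sign-off. The action catalogue and risk labels are invented for illustration:

```python
# Illustrative sketch of a human-in-the-loop control: the agent executes low-risk
# actions autonomously, but irreversible ones wait for explicit approval.
# The action names and risk labels are invented for illustration.

HIGH_RISK = {"close_account", "issue_refund", "change_payment_method"}

def execute(action: str, approved_by_human: bool = False) -> str:
    if action in HIGH_RISK and not approved_by_human:
        return f"'{action}' queued for human approval"
    return f"'{action}' executed"

print(execute("send_order_status"))                      # runs autonomously
print(execute("close_account"))                          # held for review
print(execute("close_account", approved_by_human=True))  # runs after sign-off
```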

Risk: Algorithmic bias. Issues in training data can result in unfair, discriminatory, or inappropriate responses, damaging brand reputation, eroding customer trust — and potentially violating regulations.

Mitigation strategy: Well-defined data, continuous ingestion and monitoring of new attributes, moderation analysts to “basket test” model output, and an AI governance council to oversee data definition and testing (we’ve seen fewer than 20% of our clients do this).
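A basket test can be sketched as follows: matched prompts that differ only in one attribute are run through the model, the outcomes are scored, and the spread is compared against a tolerance. The scores and threshold below are invented; in practice they would come from the moderation analysts' rubric applied to real model output:

```python
# Illustrative sketch of a "basket test": matched prompts that differ only in one
# attribute (here, customer language) are scored and compared for drift.
# All scores and the tolerance are invented for illustration.

basket_scores = {
    "en": 0.92,   # resolution-quality score for the English-language variant
    "fr": 0.88,
    "es": 0.71,   # noticeably worse outcome for the Spanish-language variant
}
spread = max(basket_scores.values()) - min(basket_scores.values())
threshold = 0.10  # tolerance chosen for illustration only

if spread > threshold:
    print(f"spread {spread:.2f} exceeds tolerance; route to governance council review")
else:
    print(f"spread {spread:.2f} within tolerance")
```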

Risk: Non-compliance. Rapidly evolving regulations around the use of AI can increase the risk of non-compliance, leading to legal, financial, and reputational impacts.

Mitigation strategy: Policy enforcement, audit trails, regulatory checks.

Newer LLMs, like DeepSeek-R1, which incorporate a combination of reinforcement learning and neural networks, have the potential to further optimize AI testing and deployment. Their use of distillation techniques to transfer knowledge from larger models to smaller, more efficient ones may improve the performance of the smaller models and potentially reduce the need for extensive testing of larger, more resource-intensive models. The Mixture-of-Experts (MoE) architecture can also facilitate more targeted testing of components of the model and more efficient debugging.
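For readers less familiar with distillation, the sketch below shows the core idea: a small "student" model is trained to match a larger "teacher" model's output distribution via a temperature-scaled KL-divergence loss. The tensors here are toy stand-ins, not DeepSeek's actual training pipeline:

```python
# Illustrative sketch of knowledge distillation: a small "student" is trained to
# match a larger "teacher" model's soft output distribution. Toy data only.

import torch
import torch.nn.functional as F

temperature = 2.0
teacher_logits = torch.randn(8, 100)                     # stand-in teacher outputs
student_logits = torch.randn(8, 100, requires_grad=True) # stand-in student outputs

teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

# Standard distillation loss: KL(teacher || student), scaled by T^2
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2
loss.backward()
print(float(loss))
```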

Some of these models can be trained through trial and error so that, instead of relying solely on comparing model outputs against pre-labeled data, testing could involve evaluating AI performance in real-world scenarios or in simulated environments. In addition, DeepSeek’s data curation strategies (including iterative refinement, bias mitigation, and synthetic data generation) are designed to produce higher-quality training data and, ultimately, more robust models from the start.

The Future is Now

The evolution from rigid computer scripts to more responsive, adaptable, and proactive agentic AI and advanced neural networks marks a transformative era for customer service voice support. Businesses leveraging these technologies have the opportunity to not only improve efficiency but also enhance customer satisfaction through more seamless, intelligent interactions.

As AI continues to advance, its potential to revolutionize voice support services appears boundless. The integration of nested neural networks into AI governance frameworks may also help corporations manage attendant risks more effectively, helping to ensure ethical and compliant AI adoption.

 

Michael Clifton is co-CEO of Alorica, a provider of customer experience solutions, where he previously served as Global Growth and Transformation Officer (responsible for the global sales organization and the company’s transformation strategy) and the chief information and digital officer. Prior to Alorica, Clifton held executive technology and operational roles at Cognizant, Federal Home Loan Bank of Boston, Hanover Insurance Group and Nobilis Software.

Max Schwendner, co-CEO of Alorica, spent most of his career on Wall Street including at J.P. Morgan in leadership roles within its investment bank and private equity divisions. As CFO, he was responsible for finance, planning, and oversight of all corporate support teams. Schwendner has served on several corporate boards and was chairman of several limited partnership committees.