The collaboration between OpenAI and LG Uplus marks a significant shift from traditional rule-based Artificial Intelligence Contact Centers (AICC) to agentic, speech-to-speech systems. By leveraging the Realtime API, this partnership has moved a proof-of-concept into production, replacing rigid decision trees with a system capable of reasoning, understanding natural intent, and maintaining context across conversational turns. The native speech-to-speech architecture eliminates the need for intermediate transcription, allowing the model to capture emotional cues like tone and inflection for more human-like interactions. This event-driven framework enables the assistant to trigger backend actions mid-conversation, such as retrieving policies or checking account states, without forcing users into step-by-step flows. As a global blueprint for next-generation contact centers, the focus now shifts toward scaling these modular frameworks and deepening operational monitoring to ensure high performance across diverse enterprise use cases.
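The mid-conversation tool use described above maps onto the Realtime API's event-driven protocol: the session is configured with tool definitions, and the model emits a tool-call event when it decides a backend action is needed. Below is a minimal sketch of those payloads, assuming a hypothetical `get_account_status` backend function and a stubbed lookup; the transport layer (WebSocket send/receive) is omitted, so treat this as an illustration of the event shapes rather than a production client.

```python
import json

# Sketch: a session.update event that registers a backend tool.
# `get_account_status` is a hypothetical function for illustration.
session_update = {
    "type": "session.update",
    "session": {
        "modalities": ["audio", "text"],
        "instructions": "You are a contact-center assistant.",
        "tools": [
            {
                "type": "function",
                "name": "get_account_status",
                "description": "Look up a customer's account state.",
                "parameters": {
                    "type": "object",
                    "properties": {"customer_id": {"type": "string"}},
                    "required": ["customer_id"],
                },
            }
        ],
    },
}

def handle_event(event: dict):
    """When the model requests a tool call mid-conversation, run the
    backend lookup and build the result event to send back over the socket."""
    if event.get("type") == "response.function_call_arguments.done":
        args = json.loads(event["arguments"])
        # Stubbed backend lookup; a real system would query account services here.
        result = {"customer_id": args["customer_id"], "status": "active"}
        return {
            "type": "conversation.item.create",
            "item": {
                "type": "function_call_output",
                "call_id": event["call_id"],
                "output": json.dumps(result),
            },
        }
    return None  # other events (audio deltas, transcripts) handled elsewhere

# Example: a tool-call event shaped like the server would emit it.
reply = handle_event({
    "type": "response.function_call_arguments.done",
    "call_id": "call_123",
    "arguments": json.dumps({"customer_id": "C-42"}),
})
```

After sending the `function_call_output` item, the client would request a new model response so the assistant can speak the result back to the caller, all without breaking the conversational turn.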