In last week’s webinar "Hey Car, What’s Next? In-Vehicle Automotive Voice AI Trends for 2025 and Beyond," I had the pleasure of opening the session by exploring how automotive voice technology has evolved and where it's heading next.
[Chart omitted. Source: SAR Insight & Consulting]
Voice control in vehicles has been around for more than two decades, beginning with limited, phrase-dependent systems that often felt more frustrating than helpful. But the landscape has shifted dramatically. The rise of Alexa, Siri, and Google Assistant in consumer devices has set a new standard. People now expect fluid, intuitive, and hands-free interactions wherever they are, including behind the wheel.
However, the current crop of in-vehicle “voice platforms” still falls short. Many users abandon them early, citing rigid interfaces and unmet expectations. The emergence of Large Language Models (LLMs) is beginning to change that. These new AI systems offer contextual understanding, multimodal interaction, and greater versatility. Yet implementation isn't straightforward. A hybrid approach that leverages both edge computing and cloud-based AI is likely the most viable route forward. Automakers now face strategic decisions about data ownership, privacy, and monetization as voice becomes a core pillar of the software-defined vehicle.
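To make the hybrid idea concrete, here is a minimal, illustrative sketch of how a voice request router might split work between an on-device model and a cloud LLM. Everything in it (the VoiceRequest structure, the intent list, the route_request function) is a simplified assumption for illustration, not any specific vendor's implementation.

```python
# Illustrative sketch of a hybrid edge/cloud routing policy for an in-car voice request.
# All names here (VoiceRequest, EDGE_INTENTS, route_request) are hypothetical.

from dataclasses import dataclass


@dataclass
class VoiceRequest:
    transcript: str          # text from the on-device speech recognizer
    has_connectivity: bool   # whether a cloud round trip is currently possible


# Commands a small on-device model can resolve without leaving the vehicle.
EDGE_INTENTS = {"climate", "navigation", "media", "windows", "calls"}


def classify_intent_on_device(transcript: str) -> str:
    """Stand-in for a lightweight edge intent classifier (keyword match here)."""
    for intent in EDGE_INTENTS:
        if intent in transcript.lower():
            return intent
    return "open_ended"


def route_request(request: VoiceRequest) -> str:
    """Handle known commands on-device; send open-ended queries to a cloud LLM
    when connectivity allows, otherwise fall back to a local response."""
    intent = classify_intent_on_device(request.transcript)
    if intent in EDGE_INTENTS:
        return f"edge: execute '{intent}' command locally"
    if request.has_connectivity:
        return "cloud: forward to LLM for a contextual answer"
    return "edge: give a brief on-device fallback response"


if __name__ == "__main__":
    print(route_request(VoiceRequest("set the climate to 21 degrees", True)))
    print(route_request(VoiceRequest("find a charging stop with good coffee", True)))
    print(route_request(VoiceRequest("find a charging stop with good coffee", False)))
```

The design choice this sketch highlights is the one discussed in the webinar: routine cabin commands stay fast and private on the edge, while the cloud LLM is reserved for the open-ended questions that genuinely need it.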
If you are interested in reading the full article, published in EE Times, you can find it here.
Next in the webinar, Jeff Rogers, Vice President of Sales at Sensory, demonstrated a full-stack voice assistant running entirely on-device.
In the panel discussion, Todd Mozer, CEO of Sensory, expanded on this. With privacy and speed as top priorities, Sensory’s technology avoids many of the common pitfalls of cloud-dependent platforms. Mozer emphasized the need for flexible architectures that can evolve with AI trends, supporting biometric personalization, optional wake words, and hybrid agents that operate on-device but can also access cloud capabilities when needed. His key message was that intelligent voice interaction does not need to compromise privacy or performance.
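As a rough illustration of that hybrid-agent pattern, the sketch below keeps wake-word detection and speaker identification on-device and only escalates open-ended requests to the cloud. The function names and logic are placeholders assumed for the example, not Sensory's actual SDK.

```python
# Sketch of a wake-word-gated, on-device pipeline with optional cloud escalation.
# Every function here is a hypothetical placeholder for illustration only.

LOCAL_COMMANDS = ("navigate", "call", "play", "set temperature")


def wake_word_detected(audio: bytes) -> bool:
    """Placeholder for an on-device acoustic wake-word model."""
    return len(audio) > 0  # toy condition; a real detector scores audio frames


def identify_speaker(audio: bytes) -> str:
    """Placeholder for on-device voice biometrics used for personalization."""
    return "driver_profile_1"


def handle_request(audio: bytes, transcript: str, cloud_available: bool) -> str:
    """Keep simple commands and speaker identity local; escalate open-ended asks."""
    if not wake_word_detected(audio):
        return "idle: nothing leaves the vehicle"
    profile = identify_speaker(audio)  # personalization stays on-device
    if transcript.lower().startswith(LOCAL_COMMANDS):
        return f"on-device: execute '{transcript}' for {profile}"
    if cloud_available:
        return "cloud: escalate the query while keeping biometric data local"
    return "on-device: give a brief fallback response"


if __name__ == "__main__":
    print(handle_request(b"\x01", "navigate to the office", cloud_available=True))
    print(handle_request(b"\x01", "what's a good podcast for a road trip?", cloud_available=True))
```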
Dr. Ephrem Anthony Chemaly of MediaTek followed by sharing insights into the foundational hardware that enables all of this. As voice AI becomes more sophisticated, scalable and power-efficient chipsets are essential. Chemaly detailed how MediaTek’s automotive solutions are optimized for in-vehicle demands, supporting both traditional voice recognition and new LLM-powered use cases. He noted that cars are becoming software-defined environments, and voice will play a central role in enhancing both utility and user engagement.
Roman Wild of BMW provided an OEM perspective on one of the most critical topics in automotive voice AI—privacy. Wild explained that the success of intelligent assistants depends on trust. Users need to understand what data is being collected, why it's needed, and how it's handled. Without transparency, even the most advanced voice assistant will face adoption hurdles. BMW’s approach is designed to protect user data by maximizing on-device processing and minimizing external data exposure.
Throughout the panel discussion, themes of privacy, performance, and the role of LLMs in shaping the next generation of in-car experiences consistently emerged. Audience questions reflected strong interest in real-world deployment, multi-user support, evolving wake word strategies, and the specifics behind Sensory’s on-device demo. While the road ahead includes many challenges, it’s clear that those who treat voice as an essential layer of the digital cockpit, not a bolt-on feature, will lead the next wave of innovation in mobility.
Please let me know what you think!
The full webinar is available here.