From fiction to freeway: how talking cars are steering the future

In-car speech technology is here, revolutionising the way we experience travel and fostering enhanced user experiences.
  • Revolutionising driving experiences through voice tech
  • Continental and Google Cloud team up to fit cars with gen-AI
  • ChatGPT enhances voice control capabilities in Mercedes-Benz vehicles
  • Autonomous cars can now share their thought processes with passengers

The automotive industry has long been contemplating futuristic transportation, making AI an integral part of its innovative concepts. The advent of generative AI has initiated a new era, changing the way drivers and passengers interact with their vehicles. Conversational digital assistants are harnessing this cutting-edge technology to refine in-vehicle experiences and establish more tailored relationships between automakers and consumers. Imagine a scenario where your car isn’t just a machine, but a conversational partner, ready to assist, advise, and accompany you on your journeys. By integrating cars with generative artificial intelligence, this is slowly but surely becoming a reality. This technology allows drivers to engage in a seamless dialogue with their vehicles, be it seeking specific guidance such as optimal tyre pressure or getting advice about places of interest along a specific route. With generative AI at the helm, cars can swiftly gather and present the required information, transforming every ride into an informative, interactive, and entertaining experience.

Revolutionising driving experiences through voice tech

Today, the majority of new cars are equipped with integrated voice technology as a core component. Given that speaking is more efficient than typing, voice emerges as a crucial interface in the evolving landscape of generative AI. It not only fosters safer driving conditions but also minimises distractions. Simple commands like ‘navigate to work’, ‘call John’, or ‘turn the volume down’ are evolving into more intricate instructions like ‘schedule a grocery delivery to arrive just as I return from my yoga class’ or ‘show me the most interesting route to drive to mom and tell me where I can buy flowers on the way’. Refining AI models in the cloud to suit automotive needs can enhance the depth of user engagement and ensure sustained interactions between carmakers, service providers, and end-users throughout the vehicle’s lifecycle.

It’s clear that, for AI developers and providers, the ability to immediately respond to a host of typical inquiries directly and securely on the device (the vehicle) has its benefits, such as improved overall performance, more personalised experiences, user privacy, and cost efficiency. It also enables digital assistants to interact with a wider range of user-specific data and multiple vehicle sensor inputs, such as cameras and lidar. But in order to access up-to-the-minute traffic and weather information, synchronise shared calendars, locate nearby points of interest and so on, having a continuous connection to the cloud is a crucial requirement. Cloud connectivity is also becoming increasingly important to support generative AI features, align with current trends, and meet increasing consumer demands. Principal analyst at TIRIAS Research, Jim McGregor, explains: “Generative AI in auto will also be the proving grounds on how to best create a hybrid AI solution that takes advantage of in-vehicle execution for basic functions that require real-time interaction with the complex models residing in the cloud; that combine information from the vehicle and potentially other vehicles, infrastructure systems and other data sets. In many cases, the solution will incorporate both in-vehicle and cloud AI processing, turning the vehicle into the ultimate intelligent autonomous platform”.
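The hybrid split described above can be illustrated with a minimal routing sketch. This is purely hypothetical, not any vendor's actual architecture: all intent names and return values are invented for the example, which simply shows the trade-off between on-device handling (low latency, privacy, offline capability) and cloud handling (fresh data, complex models).

```python
# Hypothetical sketch of hybrid in-vehicle AI routing: basic commands stay
# on the device, while open-ended queries go to the cloud when available.
ON_DEVICE_INTENTS = {"volume", "call", "climate", "navigate_home"}

def route_query(intent: str, cloud_available: bool) -> str:
    """Decide where a recognised voice intent should be handled."""
    if intent in ON_DEVICE_INTENTS:
        return "on_device"          # real-time, private, works offline
    if cloud_available:
        return "cloud"              # traffic, weather, open-ended questions
    return "on_device_fallback"     # degrade gracefully without connectivity
```

In a real system the routing decision would also weigh latency budgets, data sensitivity, and connection quality, but the basic shape is the same.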

“Together with Google, we are bringing artificial intelligence to the vehicle cockpit, creating an intuitive experience for drivers”.

Philipp von Hirschheydt, member of Continental’s executive board and head of the automotive group sector

Continental and Google Cloud team up to fit cars with gen-AI

German automotive supplier Continental recently entered into a partnership with Google Cloud to equip cars with generative AI. Philipp von Hirschheydt, member of Continental’s executive board and head of the automotive group sector, says: “Together with Google, we are bringing artificial intelligence to the vehicle cockpit and are creating an intuitive experience for drivers. Based on our Smart Cockpit High-Performance Computer, we expect our solution to be ready for production within just 18 months of development time. This is how our vision of software-defined vehicles starts to become a reality.” The AI-enhanced platform provided by Google Cloud enables drivers to have actual conversations with their vehicles. For instance, they can ask their car about anything worthwhile visiting — such as a nature reserve, a fancy restaurant, or 4-star accommodation — along their route. 

The AI platform will then gather comprehensive, real-time information and update the driver — much like a personal tour guide. If drivers need clarification, they can also ask follow-up questions or request additional information without having to repeat the initial query and context, as this is already correctly interpreted by Google Cloud’s conversational generative AI. The system can also provide important information about the vehicle; it learns continuously and gradually builds a picture of the user’s preferences. It can access the operating manual and provide the driver with helpful tips, such as the required tyre pressure under certain conditions, or where the USB port is. The two companies each bring their respective expertise to the partnership — artificial intelligence, cloud computing, software, and automotive — and plan to broaden their collaboration to continue improving in-car connectivity and customer experiences in upcoming projects.

ChatGPT enhances voice control capabilities in Mercedes-Benz vehicles

Mercedes-Benz is expanding its use of AI in its Hey Mercedes voice assistant by adding ChatGPT, making the assistant even more intuitive for more than 900,000 Mercedes-Benz drivers in the US who have the MBUX infotainment system integrated in their vehicles. The car giant is incorporating ChatGPT through Azure OpenAI Service, leveraging the enterprise-grade capabilities of Microsoft’s cloud and AI platform. The carmaker’s voice assistant is already known for its large command portfolio and intuitive operational capabilities. Hey Mercedes already assists drivers and passengers with answering questions about their surroundings, providing traffic, weather, and sports updates, and even controlling their smart homes. The intuitive voice control will now be enhanced by ChatGPT, as this technology — by leveraging a large language model (LLM) — will significantly improve the voice assistant’s understanding of natural language and expand the range of topics it can comprehend and respond to.
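To make the LLM integration concrete, the sketch below shows one common way a transcribed voice command might be framed as a chat-completion payload, using the widely used system/user message convention. This is an illustration only, not Mercedes-Benz's actual integration; the system prompt, function name, and context fields are all invented for the example.

```python
# Hypothetical sketch: turning a driver's transcribed request into a
# chat-message list of the kind LLM chat APIs typically accept.
def build_chat_payload(transcript: str, vehicle_context: dict) -> list[dict]:
    """Frame a voice command as system + user chat messages."""
    system_prompt = (
        "You are an in-car voice assistant. Keep answers short enough to be "
        f"read aloud while driving. Vehicle context: {vehicle_context}."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": transcript},
    ]

payload = build_chat_payload(
    "Suggest a dinner recipe and where to buy the groceries nearby",
    {"location": "Stuttgart", "fuel_range_km": 120},
)
```

Keeping vehicle state in the system prompt is one simple way to give the model the context it needs for follow-up questions without the driver repeating themselves.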

Not only will Hey Mercedes accept and process natural voice commands, but it will also be able to engage in conversations. Imagine Hey Mercedes providing details about a certain destination, answering a complex question with a much more detailed answer, or suggesting a new recipe for dinner — and where to get the groceries for it. In order to safeguard data privacy and protect information from misuse and manipulation, the voice command data that is collected during interactions with Hey Mercedes is stored in the Mercedes-Benz Intelligent Cloud, where it is analysed and anonymised. Furthermore, the vehicle’s entertainment system offers the possibility for drivers to access YouTube when the car is parked or in Level 3 autonomous driving mode. In this mode, drivers can take their attention off the driving task on approved roads, provided they can regain control when required. Each vehicle on the new modular architecture platform will also have hyperscreens incorporated that extend throughout the car’s cockpit. Mercedes-Benz also announced that it would offer customers access to new features like Place Details, helping customers to find comprehensive information about over 200 million places and businesses across the globe.

Autonomous cars can now share their thought processes with passengers

Generative AI is now also being integrated into autonomous vehicles created by AV startup Wayve, which is backed by Microsoft. The technology enables the vehicles to elaborate on their decisions in conversational language, a step that Wayve hopes will fast-track its vehicles’ development and enhance the public’s trust in autonomous cars. A lack of trust in artificial intelligence is a significant hurdle when it comes to the adoption of self-driving cars. In fact, only 9 per cent of respondents to a survey by the American Automobile Association (AAA) indicated that they trust autonomous vehicles. Some 68 per cent of respondents even said they fear these cars. AI is often unable to explain how it does the things it does or makes the decisions it makes — this is called the ‘black box problem’. Even its developers are often unable to figure out how AI arrives at its conclusions or decisions, making it almost impossible to explain its workings or correct mistakes. This also explains why many people are hesitant to trust an AI. If the self-driving car industry is unable to increase this trust, it will be virtually impossible to make our roads safer.

In an attempt to improve this trust, Wayve recently launched its self-driving AI LINGO-1, which can elaborate on its ‘thought process’ in easily understandable natural language. Wayve CEO Alex Kendall explains: “LINGO-1 opens up many possibilities for self-driving, improving the intelligence of our end-to-end AI Driver as well as bridging the gap of public trust — and this is just the beginning of maximising its potential. LINGO-1 can generate a continuous commentary that explains the reasoning behind driving actions. This can help us understand in natural language what the model is paying attention to and what it is doing”. To achieve this, the company added verbal commentary data to its training sets, provided by expert drivers. This commentary data consists of explanations like ‘I was slowing down because a car was merging into my lane’. These developments are meant to help Wayve improve its system and also help passengers feel more comfortable. Instead of feeling worried, they can ask for an explanation of why the car does what it does.
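The commentary-labelled training data described above can be pictured as pairs of driving actions and expert explanations. The sketch below is illustrative only, in the spirit of what Wayve describes rather than its actual data format; the class and field names are invented, and the first commentary string is the example quoted in the article.

```python
# Illustrative sketch of commentary-labelled training samples: each driving
# action is paired with an expert driver's natural-language explanation.
from dataclasses import dataclass

@dataclass
class CommentarySample:
    action: str        # the driving action the model took
    commentary: str    # expert's explanation of why it was taken

samples = [
    CommentarySample(
        action="decelerate",
        commentary="I was slowing down because a car was merging into my lane",
    ),
    CommentarySample(
        action="hold_position",
        commentary="I stayed stopped because a pedestrian was crossing ahead",
    ),
]
```

Training on such pairs is what lets the model emit a running explanation alongside its driving decisions, which is the mechanism behind the passenger-facing commentary.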

“This unique dialogue between passengers and autonomous vehicles could increase transparency, making it easier for people to understand and trust these systems”, writes Wayve. 

In closing

In-car speech technology is here, providing automakers with a golden opportunity to revolutionise the way we experience travel. The introduction of this advanced technology implies not just a transformation in human-car interaction but also a reshaping of societal norms around transportation and mobility. This goes beyond mere convenience, reflecting a change in our relationship with vehicles, from static machines to dynamic, interactive platforms that could eventually form an integral part of daily routines. Being uniquely positioned to promote the adoption and showcase the reliability of these systems, car manufacturers can cultivate deeper relationships with customers, understand their preferences more profoundly, and deliver tailor-made experiences, solidifying their role in shaping future mobility solutions. This, in turn, could have far-reaching implications for urban development, commuting patterns, and even environmental sustainability, emphasising the holistic impact of in-car speech technology.
