We are now used to cars that understand what we say. Experts predict that in the future they may also know how we feel – sometimes without us having to say a word.
Nearly 90 per cent* of all new cars are expected to offer voice recognition capability by 2022. The next step for the cars of tomorrow could be to pick up on tiny changes in our facial expression as well as modulations and inflections in our speaking voice, easing the driving experience for consumers.
Advanced systems – equipped with sophisticated microphones and in-car cameras – could learn which songs we like to hear when we are stressed and those occasions we prefer to simply enjoy silence. Interior lighting could also complement our mood.
“We’re well on the road to developing the empathetic car which might tell you a joke to cheer you up, offer advice when you need it, remind you of birthdays and keep you alert on a long drive,” said Fatima Vital, senior director, Marketing Automotive, Nuance Communications, which helped Ford develop voice recognition for the SYNC in-car connectivity system.
Cloud-based voice control is anticipated to be available on 75 per cent* of new cars by 2022, and future systems are predicted to evolve into personal assistants that shuffle appointments and order takeaways when drivers are held up in traffic jams.
Movie fans will recall that in the film Her, Scarlett Johansson’s character Samantha – a voice recognition system – catered to Theodore Twombly’s every command, learning his mood, needs and wants with uncanny accuracy, just from the sound of his voice. Someday soon, your car could do something similar.
A research project Ford is currently running with RWTH Aachen University includes using multiple microphones to improve speech processing and reduce the effect of external noise and potential disruptions.
Within the next two years, voice control systems could prompt us with: “Would you like to order flowers for your mum for Mother’s Day?” “Shall I choose a less congested but slower route home?” and “You’re running low on your favourite chocolate and your favourite store has some in stock. Want to stop by and pick some up?”
Future gesture and eye control could enable drivers to answer calls by nodding their head, adjust the volume with short twisting motions, and set the navigation with a quick glance at their destination on a map.
So your car may come to know and read you better than your spouse does, but is there a danger that, as in the movie Her, you might fall for your advanced voice recognition system?
“Lots of people already love their cars, but with new in-car systems that learn and adapt, we can expect some seriously strong relationships to form,” said Dominic Watt, senior lecturer, Department of Language and Linguistic Science, University of York. “The car will soon be our assistant, travel companion and sympathetic ear, and you’ll be able to discuss everything and ask anything, to the point many of us might forget we’re even talking to a machine.”
SYNC 3 already offers unique features for the Middle East and North Africa, such as the ability to control media and climate via a Bluetooth-connected phone, just by speaking to SYNC. Of considerable importance to the region’s customers are the language choices** for SYNC 3 navigation, which include Arabic for the very first time; customers will also have the opportunity to update their maps – one free update per year for five years – from the comfort of their own homes.
Navigation on the latest generation of Ford's innovative communications and entertainment system also includes more than 3.5 million “points of interest”, and over 3.5 million kilometres of road, throughout the MENA region.
SYNC 3 with Apple CarPlay also now fully supports Anghami streaming, meaning you can listen to your favourite artists, from Nancy Ajram to Ed Sheeran, through Ford’s in-car connectivity system. Anghami is the first legal music streaming platform and digital distribution company in the Arab region, providing unlimited Arabic and international music to stream and download for offline listening.