The 2019 versions of at least two Mercedes-Benz cars – the GLE, the company’s bestselling SUV, and the CLA Coupe – will come with gesture controls. If you extend your hand towards the touchscreen on the dashboard or centre console, the media display changes and individual elements are highlighted so that you can choose the one you want.
In the dark, if the driver reaches over towards the unoccupied front passenger seat to search for something, the area will be illuminated automatically to make it easier to find the item. As soon as the hand leaves the area, the light switches off.
In the dark, the reading lamp can be switched on and off by extending a hand towards the rear-view mirror. A personal favourite function – such as ‘navigate me home’ – can be programmed and then activated by holding a hand over the centre console with the index and middle finger spread in a V-shape.
The system is even able to distinguish the driver’s hand from the front passenger’s, so the driver and front passenger can each programme a personal favourite function. The seat massage function can be activated with a gesture, and the system knows whose seat to activate it for.
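At the application level, each recognised gesture ultimately triggers one cabin action. A minimal sketch of such a mapping is below; the gesture and action names are invented for illustration and are not Mercedes-Benz identifiers.

```python
# Hypothetical gesture-to-action table, illustrating how the gestures
# described above might be dispatched once the vision system has
# recognised them. All names here are invented for illustration.
ACTIONS = {
    "reach_toward_screen": "highlight_media_elements",
    "reach_passenger_seat_dark": "light_passenger_area",
    "hand_at_mirror_dark": "toggle_reading_lamp",
    "v_pose_driver": "run_driver_favourite",
    "v_pose_passenger": "run_passenger_favourite",
}

def dispatch(gesture: str) -> str:
    """Return the cabin action for a recognised gesture, or a no-op."""
    return ACTIONS.get(gesture, "no_action")

print(dispatch("hand_at_mirror_dark"))  # toggle_reading_lamp
```

Keeping `v_pose_driver` and `v_pose_passenger` as separate gestures mirrors the system’s ability to tell whose hand made the gesture.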
All of this work – from conceptualisation to productionisation – was done by Mercedes-Benz Research & Development India (MBRDI) at its Bengaluru facility. “Mercedes-Benz is known for its intelligent exterior, the cameras and radars we have for safety, for cruise control. We are now trying to make the interior as intelligent, because with autonomous driving, it’s becoming the third living space, after your home and workplace,” says Partha Bhattacharya, team leader in UI & UX for infotainment systems at MBRDI.
The gesture feature is part of the larger Mercedes-Benz user experience (MBUX) system which some have called the best infotainment system in the automotive world today. A camera in the overhead console captures the body movements and signals the system.
The team’s focus was to make it highly intuitive, to obviate the need for users to look at manuals. Praveen Bhatt, VP of infotainment & connected car, says the team had a very restricted timeline. “And we were working with a new type of camera in an unreleased interior,” he says.
The work required researchers in deep learning/AI with excellent computer vision skills to create the necessary algorithms and train them.
It needed many people to annotate vast numbers of images for use in training the algorithms to recognise hand and body movements and poses (such as the V-pose and pointed finger), as well as the car interiors. And it needed engineers who could port the algorithms to the small hardware device in the car.
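What such an annotated pose encodes can be illustrated with a simple geometric check. The sketch below is not the team’s algorithm – in production a trained deep-learning model classifies poses from camera images – but it shows the kind of structure (here, the V-pose) that annotators label and a model learns to recognise. The keypoints and angle thresholds are assumptions for illustration.

```python
import math

# Hypothetical illustration: decide whether 2D hand keypoints form a
# "V-pose" (index and middle finger spread from the wrist). In the real
# system a trained deep-learning model does this from camera images.

def angle_deg(v1, v2):
    """Angle between two 2D vectors, in degrees."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    return math.degrees(math.acos(dot / (n1 * n2)))

def is_v_pose(wrist, index_tip, middle_tip, min_deg=15, max_deg=60):
    """True if the two fingertips are spread in a V from the wrist.
    The 15-60 degree window is an assumed, illustrative threshold."""
    v_index = (index_tip[0] - wrist[0], index_tip[1] - wrist[1])
    v_middle = (middle_tip[0] - wrist[0], middle_tip[1] - wrist[1])
    return min_deg <= angle_deg(v_index, v_middle) <= max_deg

# Fingertips spread roughly 33 degrees apart: a valid V-pose.
print(is_v_pose((0, 0), (-0.3, 1.0), (0.3, 1.0)))  # True
```

Annotators labelling thousands of such poses across varied hands, lighting and car interiors is what gives the trained model the robustness a geometric rule like this lacks.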
Bhattacharya says the biggest challenge was fitting the whole system into the tiny hardware device. Deep learning takes a lot of memory and compute, which is usually provided by cloud resources. But relying on the cloud could mean response delays, so all the computation had to be done inside the car.
“We had to optimise the algorithm a lot to make it really small. Our biggest accomplishment was to bring deep learning to a very small edge device,” Bhattacharya says.
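The article does not say which optimisation techniques the team used, but post-training quantisation is one standard way to shrink a deep-learning model for a small edge device: weights are stored as 8-bit integers plus a scale factor instead of 32-bit floats, roughly a 4x memory saving for a small accuracy cost. A minimal sketch, with invented weight values:

```python
# Hypothetical sketch of post-training quantisation, one common technique
# for shrinking a deep-learning model to fit an edge device. Not the
# team's actual method, which the article does not describe.

def quantize(weights):
    """Map float weights to int8 values in [-127, 127] plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [v * scale for v in q]

weights = [0.52, -1.3, 0.004, 0.9]          # illustrative float32 weights
q, scale = quantize(weights)                 # stored as int8 + one float
approx = dequantize(q, scale)
# Each recovered weight differs from the original by at most one
# quantisation step (the scale), which is the accuracy cost paid
# for storing each weight in a quarter of the memory.
```

Real deployments combine this with pruning, operator fusion and hardware-specific compilation, but the memory arithmetic above is the core of why quantisation helps on a tiny device.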
Future car lines too are expected to have the gesture feature. It can even go into lower-end models because of the low cost of development. “Bengaluru has a big cost advantage over others, be it in image annotation costs or human pose estimation costs,” says Bhattacharya.