Cambridge University and Jaguar Land Rover (JLR) have developed a contactless touchscreen technology for car infotainment systems.
The new contactless technology, called predictive touch, uses artificial intelligence and sensors to control touchscreen systems in a car “without needing to touch the screen,” Jaguar Land Rover said in a statement.
A gesture tracker, using vision-based or radio frequency-based sensors, combines contextual information – such as the user profile, interface design and environmental conditions – with data from other sensors, such as an eye-gaze tracker, to infer the user’s intent in real time, it added.
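The idea of fusing a pointing trajectory with an eye-gaze reading to guess the intended target can be caricatured in a few lines of code. The scorer below is purely an illustrative assumption – a toy nearest-target blend, not JLR’s algorithm – with made-up button positions and a hypothetical `gaze_weight` parameter:

```python
import math

def predict_target(targets, finger_xy, gaze_xy, gaze_weight=0.5):
    """Toy intent predictor: rank candidate on-screen targets by a
    weighted blend of distance-to-finger and distance-to-gaze-point,
    and return the best-scoring target's name."""
    def score(centre):
        finger_dist = math.dist(centre, finger_xy)  # how far the finger is
        gaze_dist = math.dist(centre, gaze_xy)      # how far the gaze point is
        return (1 - gaze_weight) * finger_dist + gaze_weight * gaze_dist
    return min(targets, key=lambda name: score(targets[name]))

# Hypothetical infotainment buttons (screen coordinates of their centres).
buttons = {"radio": (100, 50), "nav": (300, 50), "climate": (500, 50)}

# The finger is still mid-reach, but the driver's gaze already rests
# near the "nav" button, so the blended score picks "nav" early.
print(predict_target(buttons, finger_xy=(180, 200), gaze_xy=(310, 60)))  # → nav
```

Selecting the predicted target before the finger arrives is what shortens the pointing task; the real system would infer intent probabilistically from many more signals than this two-distance blend.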
JLR says the technology predicts a user’s intended target on the screen early in the pointing task, speeding up the interaction and reducing the time and effort needed to use a touchscreen by up to 50%, based on lab tests and on-road trials.
Touchscreen systems are becoming an integral part of a car’s infotainment system.
An individual’s driving style and factors such as road and weather conditions may lead to missed attempts to control the screen.
“The technology also offers us the chance to make vehicles safer by reducing the cognitive load on drivers and increasing the amount of time they can spend focused on the road ahead,” Lee Skrypchuk, Human Machine Interface Technical Specialist at JLR, said.
Since it is a software-based solution, it can also be used on existing touchscreens and interactive displays, provided the machine learning algorithm receives accurate sensor data. Owing to its design flexibility, it can also be personalised for a user or a particular display size.
“Our technology has numerous advantages over more basic mid-air interaction techniques or conventional gesture recognition, because it supports intuitive interactions with legacy interface designs and doesn’t require any learning on the part of the user,” said Dr. Bashar Ahmad of Cambridge University.