The aim of the project AIMMIT (Automotive Integration of Multi Modal Interaction Technologies) is to explore opportunities and propose solutions for multimodal interaction in vehicles that reduce the time required for secondary tasks and enable safer driving. The knowledge gained in this project will be applied in the upcoming development of new generations of cockpit HMI that: 1. Enable the driver to perform complex secondary tasks with minimal visual distraction and a high level of user acceptance.
2. Provide an intuitive and user-friendly interface for emerging functions, such as autonomous driving and advanced active safety functions.
Integrating multiple modalities to achieve seamless interaction is a challenge, and a systematic way of combining different cues and feedback has to be developed. In an automotive setting, the user operates the various functions of the infotainment system through combinations of such interactions. The purpose of the study was to identify patterns of natural interaction that people use when performing basic everyday tasks, and to examine whether these interactions (gestures, speech, haptics) could be transferred to a vehicle environment. Twenty-four participants were interviewed using interaction scenarios, questionnaires, and rating scales.