The use of a multi-modal interface to integrate in-vehicle information presentation

Abstract

The car of the future will have many new information sources—including telematics systems, navigation systems, and Advanced Driver Assistance Systems (ADAS)—that will compete for a driver's limited cognitive attention. If they are implemented as completely separate systems, then cognitive overload and driver distraction are inevitable outcomes. However, if they are implemented as an integrated intelligent system with a multi-modal interface, then the benefits of such functionality will be achieved with much less impact on driving safety. Such a system will support the task of safe driving by filtering and mediating information in response to real-world driving demands. This paper outlines the Human Factors research program being undertaken by Motorola Labs to evaluate key elements of such a multi-modal interface, as well as the key human factors issues involved in a multi-modal interface.

This paper was published in CiteSeerX.
