    Fully exploiting the potential of speech dialog in automotive applications

    Today, users are faced with infotainment devices and applications of increasing complexity, and designing easy-to-use, intuitive interfaces is becoming an increasingly challenging task. Users are usually not aware of the underlying applications and their restrictions when they want to use certain functionality. Hierarchical menu structures are therefore difficult to handle, especially in situations where eyes and hands are occupied with other tasks, such as driving. Speech-enabled interfaces have long been used to address this problem, since they allow users to control various applications without occupying hands and eyes. However, state-of-the-art multimodal applications often do not exploit the full potential that speech dialog offers, simply because this modality is not well integrated with "traditional" modalities such as graphics and haptics. The resulting speech interfaces do not run smoothly, exhibit many inconsistencies with the GUI, and are thus more or less tedious to use. Such interfaces meet with low acceptance because users do not see an immediate benefit. In this paper we present an approach that develops multimodal interfaces in an integrated way, ensuring highly consistent interfaces that closely couple the involved modalities and are therefore easier to use.