
    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
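The idea of a driver acting as a human sensor can be illustrated with a few semantic triples. The sketch below uses the W3C SOSA/SSN vocabulary as a stand-in for the paper's semantically extended SWE encoding, and keeps the triples as plain tuples so no RDF library is required; all URIs except the SOSA namespace are hypothetical placeholders.

```python
# Sketch: a driver's human-generated observation encoded as SOSA/SSN-style
# RDF triples, kept as plain tuples so the example needs no RDF library.
# Every URI below except the W3C SOSA namespace is a hypothetical placeholder.
SOSA = "http://www.w3.org/ns/sosa/"

def observation_triples(obs_id, observer_uri, property_uri, value, time_iso):
    s = f"urn:obs:{obs_id}"
    return [
        (s, "rdf:type", SOSA + "Observation"),
        (s, SOSA + "madeBySensor", observer_uri),   # the human acting as a sensor
        (s, SOSA + "observedProperty", property_uri),
        (s, SOSA + "hasSimpleResult", value),
        (s, SOSA + "resultTime", time_iso),
    ]

triples = observation_triples(
    "42", "urn:driver:alice", "urn:prop:traffic-density", "heavy",
    "2024-05-01T08:30:00Z",
)
```

A real deployment would serialize such triples through an RDF store and publish them via the SWE services; this sketch only shows the shape of one shared observation.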

    Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces

    With the recently increasing capabilities of modern vehicles, novel approaches for interaction have emerged that go beyond traditional touch-based and voice command approaches. To this end, hand gestures, head pose, eye gaze, and speech have been extensively investigated in automotive applications for object selection and referencing. Despite these significant advances, existing approaches mostly employ a one-model-fits-all approach unsuitable for varying user behavior and individual differences. Moreover, current referencing approaches either consider these modalities separately or focus on a stationary situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints. In this paper, I propose a research plan for a user-centered adaptive multimodal fusion approach for referencing external objects from a moving vehicle. The proposed plan aims to provide an open-source framework for user-centered adaptation and personalization using user observations and heuristics, multimodal fusion, clustering, transfer-of-learning for model adaptation, and continuous learning, moving towards trusted human-centered artificial intelligence.
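Multimodal object referencing of the kind described above is often realized as late fusion of per-modality confidence scores. The following is a minimal sketch of that idea; the modality weights are illustrative and, in the spirit of the proposed plan, would be adapted per user over time rather than fixed.

```python
# Sketch: late fusion of per-modality confidence scores for referencing an
# external object from the vehicle. Weights are illustrative assumptions;
# a user-centered system would learn and adapt them per driver.
def fuse_scores(candidates, modality_scores, weights):
    """candidates: list of object ids; modality_scores: {modality: {obj: score}};
    weights: {modality: weight}. Returns the highest-scoring candidate."""
    fused = {}
    for obj in candidates:
        fused[obj] = sum(
            weights[m] * modality_scores[m].get(obj, 0.0) for m in weights
        )
    return max(fused, key=fused.get)

scores = {
    "gaze":    {"building": 0.7, "sign": 0.2},
    "gesture": {"building": 0.4, "sign": 0.5},
    "speech":  {"building": 0.6, "sign": 0.1},
}
weights = {"gaze": 0.5, "gesture": 0.2, "speech": 0.3}
best = fuse_scores(["building", "sign"], scores, weights)  # -> "building"
```

Per-user adaptation would then amount to updating the weight vector from observed referencing successes and failures, which is where the clustering and transfer-learning components of the plan come in.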

    Multimodal information presentation for high-load human computer interaction

    This dissertation addresses the question: given an application and an interaction context, how can interfaces present information to users in a way that improves the quality of interaction (e.g. a better user performance, a lower cognitive demand and a greater user satisfaction)? Information presentation is critical to the quality of interaction because it guides, constrains and even determines cognitive behavior. A good presentation is particularly desired in high-load human computer interactions, such as when users are under time pressure or stress, or are multi-tasking. Under a high mental workload, users may not have the spare cognitive capacity to cope with the unnecessary workload induced by a bad presentation. In this dissertation work, the major presentation factor of interest is modality. We have conducted theoretical studies in the cognitive psychology domain, in order to understand the role of presentation modality in different stages of human information processing. Based on the theoretical guidance, we have conducted a series of user studies investigating the effect of information presentation (modality and other factors) in several high-load task settings. The two task domains are crisis management and driving. Using crisis scenarios, we investigated how to present information to facilitate time-limited visual search and time-limited decision making. In the driving domain, we investigated how to present highly-urgent danger warnings and how to present informative cues that help drivers manage their attention between multiple tasks. The outcomes of this dissertation work have useful implications for the design of cognitively-compatible user interfaces, and are not limited to high-load applications.

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely, manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies on user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.

    Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects

    These are the Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday life are expanding beyond their traditionally restricted interaction capabilities and provide functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture information, process and store it, and interact with their environments, turning them into smart objects.

    PRESTK : situation-aware presentation of messages and infotainment content for drivers

    The amount of in-car information systems has dramatically increased over the last few years. These potentially mutually independent information systems presenting information to the driver increase the risk of driver distraction. In a first step, orchestrating these information systems using techniques from scheduling and presentation planning avoids conflicts when they compete for scarce resources such as screen space. In a second step, the cognitive capacity of the driver, as another scarce resource, has to be considered. For the first step, an algorithm fulfilling the requirements of this situation is presented and evaluated. For the second step, I define the concept of System Situation Awareness (SSA) as an extension of Endsley’s Situation Awareness (SA) model. I claim that not only the driver needs to know what is happening in the environment, but also the system, e.g., the car. In order to achieve SSA, two paths of research have to be followed: (1) assessing the cognitive load of the driver in an unobtrusive way, for which I propose to estimate this value using a model based on environmental data; and (2) developing a model of the cognitive complexity induced by messages presented by the system. Three experiments support the claims I make in my conceptual contribution to this field. A prototypical implementation of the situation-aware presentation management toolkit PRESTK is presented and shown in two demonstrators.
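The two-step idea above (schedule competing presentations, then gate them on the driver's cognitive load) can be sketched as a small priority scheduler. This is not the PRESTK algorithm itself; the priority scale, load threshold, and single-screen-slot model are illustrative assumptions.

```python
import heapq

# Sketch: orchestrating competing presentation requests for one scarce
# resource (a single screen slot), gated by the driver's estimated load.
# Priority scale (0 = safety-critical) and threshold are illustrative.
def schedule(requests, driver_load, load_threshold=0.7):
    """requests: list of (priority, message); lower value = more urgent.
    Returns (shown, postponed)."""
    heap = list(requests)
    heapq.heapify(heap)                      # pop most urgent first
    shown, postponed = [], []
    screen_free = True
    while heap:
        priority, msg = heapq.heappop(heap)
        # Show at most one message; while the driver is overloaded, only
        # safety-critical messages (priority 0) may claim the slot.
        if screen_free and (driver_load < load_threshold or priority == 0):
            shown.append(msg)
            screen_free = False
        else:
            postponed.append(msg)
    return shown, postponed

shown, postponed = schedule(
    [(2, "new email"), (0, "collision warning"), (1, "navigation turn")],
    driver_load=0.9,
)
# -> shown == ["collision warning"]; the rest is postponed
```

In the dissertation's terms, the load estimate fed into such a scheduler would come from the environmental model (step 1), and the per-message cost from the cognitive-complexity model (step 2).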

    A model for mobile, context-aware in-car communication systems to reduce driver distractions

    Driver distraction remains a matter of concern throughout the world as the number of car accidents caused by distracted driving is still unacceptably high. Industry and academia are working intensively to design new techniques that will address all types of driver distraction, including visual, manual, auditory and cognitive distraction. This research focuses on an existing technology, namely in-car communication systems (ICCS). ICCS allow drivers to interact with their mobile phones without touching or looking at them. Previous research suggests that ICCS have reduced visual and manual distraction. Two problems were identified in this research: existing ICCS are still expensive and only available in a limited range of car models. As a result, only a small number of drivers can obtain a car equipped with an ICCS, especially in developing countries. The second problem is that existing ICCS are not aware of the driving context, which plays a role in distracting drivers. This research project was based on the following thesis statement: A mobile, context-aware model can be designed to reduce driver distraction caused by the use of ICCS. A mobile ICCS is portable and can be used in any car, addressing the first problem. Context-awareness will be used to detect situations that contribute to distracting drivers, and the interaction with the mobile ICCS will be adapted so as to defer calls and text messages. This will address the second problem. As the driving context is dynamic, drivers may have to deal with critical safety-related tasks while they are using an existing ICCS. The following steps were taken in order to validate the thesis statement. An investigation was conducted into the causes and consequences of driver distraction. A review of the literature was conducted on context-aware techniques that could potentially be used.
    The design of a model was proposed, called the Multimodal Interface for Mobile Info-communication with Context (MIMIC), and a preliminary usability evaluation was conducted in order to assess the feasibility of a speech-based, mobile ICCS. Despite some problems with the speech recognition, the results were satisfying and showed that the proposed model for mobile ICCS was feasible. Experiments were conducted to collect data for supervised learning of the driving context, with the aim of selecting the most effective machine learning techniques for this task. Decision tree and instance-based algorithms were found to be the best-performing algorithms. Variables such as speed, acceleration and linear acceleration were found to be the most important, according to an analysis of the decision tree. The initial MIMIC model was updated to include several adaptation effects, and the resulting model was implemented as a prototype mobile application, called MIMIC-Prototype.
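A learned decision tree over speed and acceleration, as described above, ultimately compiles down to nested threshold tests. The sketch below shows such a hand-written rule for intuition only; the thresholds and context labels are invented for illustration, not taken from the research.

```python
# Sketch: a hand-written decision rule in the spirit of the learned decision
# tree, using the variables the analysis found most important (speed,
# acceleration). All thresholds and labels are illustrative assumptions.
def driving_context(speed_kmh, acceleration_ms2):
    if speed_kmh < 5 and abs(acceleration_ms2) < 0.5:
        return "stationary"      # e.g. waiting at a traffic light
    if abs(acceleration_ms2) > 2.5:
        return "demanding"       # hard braking/accelerating: defer messages
    if speed_kmh > 90:
        return "highway"
    return "normal"

print(driving_context(0, 0.0))   # stationary
print(driving_context(60, 3.0))  # demanding
```

In the actual system, such a classifier's output would drive the adaptation effects of MIMIC, e.g. postponing incoming messages whenever the context is classified as demanding.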

    Automotive user interfaces for the support of non-driving-related activities

    Driving a car has changed a lot since the first car was invented. Today, drivers do not only maneuver the car to their destination but also perform a multitude of additional activities in the car. This includes, for instance, activities related to assistive functions that are meant to increase driving safety and reduce the driver’s workload. However, since drivers spend a considerable amount of time in the car, they often want to perform non-driving-related activities as well. In particular, these activities are related to entertainment, communication, and productivity. The driver’s need for such activities has vastly increased, particularly due to the success of smartphones and other mobile devices. As long as the driver is in charge of performing the actual driving task, such activities can distract the driver and may result in severe accidents. Due to these special requirements of the driving environment, the driver ideally performs such activities by using appropriately designed in-vehicle systems. The challenge for such systems is to enable flexible and easily usable non-driving-related activities while maintaining and increasing driving safety at the same time. The main contribution of this thesis is a set of guidelines and exemplary concepts for automotive user interfaces that offer safe, diverse, and easy-to-use means to perform non-driving-related activities besides the regular driving tasks. Using empirical methods that are commonly used in human-computer interaction, we investigate various aspects of automotive user interfaces with the goal of supporting the design and development of future interfaces that facilitate non-driving-related activities. The first aspect is related to using physiological data in order to infer information about the driver’s workload. As a second aspect, we propose a multimodal interaction style to facilitate the interaction with multiple activities in the car.
    In addition, we introduce two concepts for the support of commonly used and demanded non-driving-related activities: For communication with the outside world, we investigate the driver’s needs with regard to sharing ride details with remote persons in order to increase driving safety. Finally, we present a concept of time-adjusted activities (e.g., entertainment and productivity) which enable the driver to make use of times where only little attention is required. Starting with manual, non-automated driving, we also consider the rise of automated driving modes.

    When cars were invented, they allowed the driver and potential passengers to get to a distant location. The only activities the driver was able and supposed to perform were related to maneuvering the vehicle, i.e., accelerate, decelerate, and steer the car. Today drivers perform many activities that go beyond these driving tasks. This includes for example activities related to driving assistance, location-based information and navigation, entertainment, communication, and productivity. To perform these activities, drivers use functions that are provided by in-vehicle information systems in the car. Many of these functions are meant to increase driving safety or to make the ride more enjoyable. The latter is important since people spend a considerable amount of time in their cars and want to perform activities similar to those they are accustomed to from using mobile devices. However, as long as the driver is responsible for driving, these activities can be distracting and put the driver, passengers, and the environment at risk. One goal for the development of automotive user interfaces is therefore to enable an easy and appropriate operation of in-vehicle systems such that driving tasks and non-driving-related activities can be performed easily and safely.
    The main contribution of this thesis is a set of guidelines and exemplary concepts for automotive user interfaces that offer safe, diverse, and easy-to-use means to perform also non-driving-related activities while driving. Using empirical methods that are commonly used in human-computer interaction, we approach various aspects of automotive user interfaces in order to support the design and development of future interfaces that also enable non-driving-related activities. Starting with manual, non-automated driving, we also consider the transition towards automated driving modes. As a first part, we look at the prerequisites that enable non-driving-related activities in the car. We propose guidelines for the design and development of automotive user interfaces that also support non-driving-related activities. This includes, for instance, rules on how to adapt or interrupt activities when the level of automation changes. To enable activities in the car, we propose a novel interaction concept that facilitates multimodal interaction in the car by combining speech interaction and touch gestures. Moreover, we show how information about the driver's state (especially mental workload) can be inferred from physiological data. We conducted a real-world driving study to extract a data set with physiological and context data. This can help to better understand the driver state, to adapt interfaces to the driver and driving situations, and to adapt the route selection process. Second, we propose two concepts for supporting non-driving-related activities that are frequently used and demanded in the car. For telecommunication, we propose a concept to increase driving safety when communicating with the outside world. This concept enables the driver to share different types of information with remote parties.
    Thereby, the driver can choose between different levels of detail, ranging from abstract information such as "Alice is driving right now" up to sharing a video of the driving scene. We investigated the drivers' needs on the go and derived guidelines for the design of communication-related functions in the car through an online survey and in-depth interviews. As a second aspect, we present an approach to offer time-adjusted entertainment and productivity tasks to the driver. The idea is to allow time-adjusted tasks during periods where the demand for the driver's attention is low, for instance at traffic lights or during a highly automated ride. Findings from a web survey and a case study demonstrate the feasibility of this approach. With the findings of this thesis we aim to provide a basis for future research and development in the domain of automotive user interfaces and non-driving-related activities in the transition from manual driving to highly and fully automated driving.
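The time-adjusted-activities idea sketched above reduces to matching task durations against an estimated low-attention window. The following minimal sketch shows that matching step; the durations, the window estimate, and the safety margin are illustrative assumptions, since the thesis establishes feasibility through surveys and a case study rather than an algorithm.

```python
# Sketch: offering time-adjusted tasks that fit an estimated low-attention
# window (e.g. a red-light phase or a stretch of automated driving).
# Durations, window estimate, and safety margin are illustrative.
def tasks_fitting_window(tasks, window_s, margin_s=5):
    """tasks: list of (name, duration_s). Returns tasks short enough to be
    finished before attention is needed again, longest-fitting first."""
    usable = window_s - margin_s            # keep a safety margin
    fitting = [t for t in tasks if t[1] <= usable]
    return sorted(fitting, key=lambda t: -t[1])

tasks = [("read SMS", 10), ("dictate reply", 25), ("watch trailer", 90)]
result = tasks_fitting_window(tasks, window_s=40)
# -> [("dictate reply", 25), ("read SMS", 10)]
```

A production system would additionally need a reliable window estimator (e.g. traffic-light phase data or the automation schedule) and a way to interrupt tasks gracefully if the window closes early.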

    SiAM-dp : an open development platform for massively multimodal dialogue systems in cyber-physical environments

    Cyber-physical environments enhance natural environments of daily life such as homes, factories, offices, and cars by connecting the cybernetic world of computers and communication with the real physical world. Under the keyword Industrie 4.0, cyber-physical environments will play a relevant role in the next industrial revolution, and they will also appear in homes, offices, workshops, and numerous other areas. In this new world, classical interaction concepts, where users exclusively interact with a single stationary device, PC or smartphone, become less dominant and make room for new forms of interaction between humans and the environment itself. Furthermore, new technologies and a rising spectrum of applicable modalities broaden the possibilities for interaction designers to include more natural and intuitive non-verbal and verbal communication. The dynamic character of a cyber-physical environment and the mobility of users confront developers with the challenge of developing systems that are flexible concerning the connected and used devices and modalities. This implies new opportunities for cross-modal interaction that go beyond the dual-modality interaction common today. This thesis addresses the support of application developers with a platform for the declarative and model-based development of multimodal dialogue applications, with a focus on distributed input and output devices in cyber-physical environments. The main contributions can be divided into three parts: - Design of models and strategies for the specification of dialogue applications in a declarative development approach. This includes models for the definition of project resources, dialogue behaviour, speech recognition grammars, and graphical user interfaces, as well as mapping rules that convert device-specific representations of input and output to a common representation language.
    - The implementation of a runtime platform that provides a flexible and extendable architecture for the easy integration of new devices and components. The platform realises concepts and strategies of multimodal human-computer interaction and is the basis for full-fledged multimodal dialogue applications for arbitrary device setups, domains, and scenarios. - A software development toolkit that is integrated into the Eclipse rich client platform and provides wizards and editors for creating and editing new multimodal dialogue applications.
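The mapping rules mentioned in the first contribution (converting device-specific input into a common representation language) can be pictured as a small lookup-and-normalize step. The event shapes, rule format, and device names below are invented for illustration; SiAM-dp's actual models are declarative EMF-style artifacts, not Python.

```python
from dataclasses import dataclass

# Sketch: a mapping rule converting device-specific input events into a
# common representation, in the spirit of the platform's declarative
# mapping rules. Event shapes and device names are hypothetical.
@dataclass
class CommonInput:
    modality: str
    intent: str
    confidence: float

def map_event(device_event, rules):
    """rules: {(device, raw_event_type): (modality, intent)}"""
    key = (device_event["device"], device_event["type"])
    modality, intent = rules[key]
    return CommonInput(modality, intent, device_event.get("confidence", 1.0))

rules = {
    ("touchscreen", "tap_confirm"): ("touch", "confirm"),
    ("asr-engine", "utterance_yes"): ("speech", "confirm"),
}
a = map_event(
    {"device": "asr-engine", "type": "utterance_yes", "confidence": 0.82}, rules
)
# Both a spoken "yes" and a confirm tap map to the same common intent,
# which is what enables device-independent dialogue logic downstream.
```

The payoff of such a normalization layer is that the dialogue model only ever sees `CommonInput`, so new devices can be plugged in by adding rules rather than changing the application.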

    An intelligent multimodal interface for in-car communication systems

    In-car communication systems (ICCS) are becoming more frequently used by drivers. ICCS are used in order to minimise the driving distraction caused by using a mobile phone while driving. Several usability studies of ICCS utilising speech user interfaces (SUIs) have identified usability issues that can affect the workload, performance, satisfaction and user experience of the driver. This is due to current speech technologies, which can be a source of errors that may frustrate the driver and negatively affect the user experience. The aim of this research was to design a new multimodal interface that will manage the interaction between an ICCS and the driver. Unlike current ICCS, it should make more voice input available, so as to support tasks (e.g., sending text messages or browsing the phone book) which still require a cognitive workload from the driver. An adaptive multimodal interface was proposed in order to address current ICCS issues. The multimodal interface used both speech and manual input; however, only the speech channel is used as output. This was done in order to minimise the visual distraction that graphical user interfaces or haptic devices can cause with current ICCS. The adaptive interface was designed to minimise the cognitive distraction of the driver. The adaptive interface ensures that whenever the distraction level of the driver is high, any information communication is postponed. After the design and the implementation of the first version of the prototype interface, called MIMI, a usability evaluation was conducted in order to identify any possible usability issues. Although voice dialling was found to be problematic, the results were encouraging in terms of performance, workload and user satisfaction. The suggestions received from the participants to improve the system usability were incorporated in the next implementation of MIMI.
    The adaptive module was then implemented to reduce driver distraction based on the driver’s current context. The proposed architecture showed encouraging results in terms of usability and safety. The adaptive behaviour of MIMI contributed significantly to the reduction of cognitive distraction, because drivers received less information during difficult driving situations.
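The adaptive behaviour described above (postponing information while the distraction level is high, delivering it once the situation eases) can be sketched as a small queue-based notifier. The threshold and the 0–1 distraction scale are illustrative assumptions, not values from the MIMI research.

```python
import collections

# Sketch of MIMI-style adaptive behaviour: incoming calls and texts are
# queued while the estimated distraction level is high and delivered once
# it drops. The threshold and the 0-1 level scale are illustrative.
class AdaptiveNotifier:
    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.queue = collections.deque()

    def notify(self, message, distraction_level):
        if distraction_level >= self.threshold:
            self.queue.append(message)   # postpone during difficult driving
            return None
        return message                   # deliver, e.g. via speech output

    def on_context_change(self, distraction_level):
        """Flush postponed messages once the driving situation eases."""
        delivered = []
        while self.queue and distraction_level < self.threshold:
            delivered.append(self.queue.popleft())
        return delivered

n = AdaptiveNotifier()
n.notify("SMS from Bob", 0.9)            # postponed (returns None)
later = n.on_context_change(0.3)         # -> ["SMS from Bob"]
```

In the actual system, the distraction level fed into such a component would come from the context-classification module trained in the earlier experiments.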