677 research outputs found

    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction with in-car infotainment systems. Driver distraction is the most common cause of car crashes and near-crash events. Mid-air interaction can reduce driver distraction by lowering the visual demand of infotainment systems. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures, considering their effects on eye-gaze behaviour and on the driving and gesturing tasks. We found that feedback modality influenced gesturing behaviour. However, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can significantly reduce visual distraction.

    VANET Applications: Hot Use Cases

    Current challenges for car manufacturers are to make roads safe, to achieve free-flowing traffic with little congestion, and to reduce pollution through efficient fuel use. To reach these goals, many improvements are made in-car, but more and more approaches rely on connected cars with communication capabilities between cars, with infrastructure, or with IoT devices. Monitoring and coordinating vehicles then makes it possible to compute intelligent ways of transportation. Connected cars have introduced a new way of thinking about cars: not only as a means for a driver to go from A to B, but as smart cars, an extension of the user much like the smartphone today. In this report, we introduce concepts and specific vocabulary in order to classify current innovations and ideas on the emerging topic of the smart car. We present a graphical categorization showing this evolution as a function of societal change. Different perspectives are adopted: a vehicle-centric view, a vehicle-network view, and a user-centric view, each described by simple and complex use cases and illustrated by a list of emerging and current projects from the academic and industrial worlds. We identified an empty space in innovation between users and their cars: paradoxically, even though the two interact, they are separated by different application uses. A future challenge is to interlace the social concerns of the user with intelligent and efficient driving.

    A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems

    This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. Taking advantage of the innate human abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly interesting for interaction in smart environments, bringing interaction with computers beyond the screen, back to the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims to support future work in the field. The Tangible Gesture Interaction Framework provides support on three levels. First, it helps reflect, from a theoretical point of view, on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold and touch) and additional attributes, and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps conceive new tangible gesture interactive systems and design new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps build new tangible gesture interactive systems, supporting the choice between four different technological approaches (embedded and embodied, wearable, environmental, or hybrid) and providing general guidance for each approach. As an application of this framework, the thesis also presents seven tangible gesture interactive systems for three application domains: interacting with the In-Vehicle Infotainment System (IVIS) of the car, emotional and interpersonal communication, and interaction in a smart home. For the first application domain, four systems that use gestures on the steering wheel as a means of interacting with the IVIS were designed, developed, and evaluated. For the second application domain, an anthropomorphic lamp able to recognize gestures that humans typically perform for interpersonal communication was conceived and developed; a second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smart watch for recognizing gestures performed with objects held in the hand, in the context of the smart home, was investigated. The analysis of existing systems found in the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power. The applications developed during this thesis show that the proposed framework also has good generative power.
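
    To make the physical taxonomy concrete, the following is a minimal Python sketch of how a tangible gesture could be encoded as a combination of the three components (move, hold, touch) plus attributes and an associated meaning. The class and field names are illustrative assumptions, not an API from the thesis.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the framework's physical taxonomy: a tangible
# gesture is described by which of the three components it uses (move,
# hold, touch) plus free-form attributes and an associated meaning.
# All names here are illustrative, not the thesis's actual API.
@dataclass
class TangibleGesture:
    move: bool = False   # the object is displaced or reoriented
    hold: bool = False   # the object is grasped during the gesture
    touch: bool = False  # the object's surface is touched
    attributes: dict = field(default_factory=dict)  # e.g. speed, contact point
    meaning: str = ""    # semantic construct associated with the gesture

# Example: a swipe performed on the steering wheel rim while holding it.
swipe_on_wheel = TangibleGesture(
    hold=True, touch=True,
    attributes={"surface": "steering wheel rim", "direction": "clockwise"},
    meaning="next track",
)
```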

    Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces

    With the recently increasing capabilities of modern vehicles, novel approaches to interaction have emerged that go beyond traditional touch-based and voice-command approaches. Hand gestures, head pose, eye gaze, and speech have accordingly been extensively investigated in automotive applications for object selection and referencing. Despite these significant advances, existing approaches mostly employ a one-model-fits-all design that is unsuitable for varying user behavior and individual differences. Moreover, current referencing approaches either consider these modalities separately or focus on stationary situations, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints. In this paper, I propose a research plan for a user-centered adaptive multimodal fusion approach for referencing external objects from a moving vehicle. The plan aims to provide an open-source framework for user-centered adaptation and personalization using user observations and heuristics, multimodal fusion, clustering, transfer learning for model adaptation, and continuous learning, moving towards trusted human-centered artificial intelligence.
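
    As a rough illustration of the kind of multimodal fusion the plan describes, the following Python sketch performs a weighted late fusion of per-modality candidate scores, with per-user weights standing in for personalization. All names and the weighting scheme are assumptions; the paper proposes a research plan, not this implementation.

```python
import numpy as np

# Illustrative late-fusion sketch: combine per-modality scores over the
# same set of external object candidates, weighted per user. The weights
# stand in for the proposed adaptation; none of this is the paper's code.
def fuse_object_scores(modality_scores: dict[str, np.ndarray],
                       user_weights: dict[str, float]) -> int:
    """Return the index of the most likely referenced object candidate."""
    total = None
    for modality, scores in modality_scores.items():
        contribution = user_weights.get(modality, 1.0) * scores
        total = contribution if total is None else total + contribution
    return int(np.argmax(total))

# Three candidate objects outside the vehicle; scores normalized per modality.
scores = {
    "gaze":    np.array([0.7, 0.2, 0.1]),
    "gesture": np.array([0.3, 0.5, 0.2]),
    "speech":  np.array([0.4, 0.4, 0.2]),
}
weights = {"gaze": 1.2, "gesture": 0.8, "speech": 1.0}  # adapted per user over time
print(fuse_object_scores(scores, weights))  # -> 0
```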

    Multimodal feedback for mid-air gestures when driving

    Mid-air gestures in cars are being used by an increasing number of drivers on the road. Usability concerns mean good feedback is important, but a balance needs to be found between supporting interaction and reducing distraction in an already demanding environment. Visual feedback is most commonly used, but takes visual attention away from driving. This thesis investigates novel non-visual alternatives to support the driver during mid-air gesture interaction: Cutaneous Push, Peripheral Lights, and Ultrasound feedback. These modalities lack the expressive capabilities of high-resolution screens, but are intended to allow drivers to focus on the driving task. A new form of haptic feedback, Cutaneous Push, was defined: six solenoids were embedded along the rim of the steering wheel, creating three bumps under each palm. Studies 1, 2, and 3 investigated the efficacy of novel static and dynamic Cutaneous Push patterns, and their impact on driving performance, by testing the cutaneous patterns in simulated driving studies. The results showed pattern identification rates of up to 81.3% for static patterns and 73.5% for dynamic patterns, and 100% recognition of directional cues. Cutaneous Push notifications did not affect driving behaviour or workload and showed very high user acceptance. Cutaneous Push patterns have the potential to make driving safer by providing non-visual and instantaneous messages, for example to indicate an approaching cyclist or obstacle. Studies 4 and 5 looked at novel uni- and bimodal feedback combinations of Visual, Auditory, Cutaneous Push, and Peripheral Lights for mid-air gestures, and found that non-visual feedback modalities, especially when combined bimodally, offered just as much support for interaction without negatively affecting driving performance, visual attention, or cognitive demand. These results provide compelling support for using non-visual feedback from in-car systems, supporting input whilst letting drivers focus on driving. Studies 6 and 7 investigated the above bimodal combinations, as well as uni- and bimodal Ultrasound feedback, during the Lane Change Task, to assess the impact of gesturing and feedback modality on car control during more challenging driving. The results of Study 7 suggest that Visual and Ultrasound feedback are not appropriate for in-car use unless combined multimodally; if Ultrasound is used unimodally, it is more useful in a binary scenario. Findings from Studies 5, 6, and 7 suggest that multimodal feedback significantly reduces eyes-off-the-road time compared to Visual feedback, without compromising driving performance or perceived workload, and thus can potentially reduce crash risk. Novel design recommendations for providing feedback during mid-air gesture interaction in cars, informed by the experimental findings, are provided.
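
    The following hypothetical Python sketch illustrates the Cutaneous Push concept described above: six solenoids along the steering wheel rim, driven either as a single static frame or as a timed sequence of frames for dynamic (e.g., directional) patterns. The actuator interface is a placeholder; none of this code is from the thesis.

```python
import time

# Hypothetical sketch of Cutaneous Push: six solenoids along the steering
# wheel rim (three bumps under each palm), driven as a static pattern
# (one simultaneous frame) or a dynamic pattern (a timed frame sequence).
NUM_SOLENOIDS = 6

def set_solenoids(frame: list[int]) -> None:
    # Placeholder for real actuator I/O; prints which solenoids are raised.
    active = [i for i, on in enumerate(frame) if on]
    print(f"solenoids up: {active}")

def play_pattern(frames: list[list[int]], frame_duration: float = 0.2) -> None:
    """A static pattern is a single frame; a dynamic one is several."""
    for frame in frames:
        set_solenoids(frame)
        time.sleep(frame_duration)
    set_solenoids([0] * NUM_SOLENOIDS)  # release all actuators

# Static pattern: raise the two outer bumps under each palm at once.
play_pattern([[1, 0, 1, 1, 0, 1]])
# Dynamic directional cue: a bump sweeping left to right across the rim.
play_pattern([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0]])
```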

    The cockpit for the 21st century

    Interactive surfaces are a growing trend in many domains. As one possible manifestation of Mark Weiser's vision of ubiquitous and disappearing computers in everyday objects, we see touch-sensitive screens in many kinds of devices, such as smartphones, tablet computers, and interactive tabletops. More advanced concepts of these have been an active research topic for many years. This has also influenced automotive cockpit development: concept cars and recent market releases show integrated touchscreens, growing in size. To meet the increasing information and interaction needs, interactive surfaces offer context-dependent functionality in combination with a direct input paradigm. However, interfaces in the car need to be operable while driving. Distraction, especially visual distraction from the driving task, can lead to critical situations if the total attentional demand of primary and secondary tasks exceeds the available resources. So far, a touchscreen requires a lot of visual attention, since its flat surface does not provide any haptic feedback. There have been approaches to make direct touch interaction accessible while driving for simple tasks. Outside the automotive domain, for example in office environments, concepts for sophisticated handling of large displays have already been introduced. Moreover, technological advances are giving interactive surfaces new characteristics by enabling arbitrary surface shapes. In cars, the two main characteristics of upcoming interactive surfaces are largeness and shape. On the one hand, spatial extension is increasing not only through larger displays, but also by taking objects in the surroundings into account for interaction. On the other hand, the flatness inherent in current screens can be overcome by upcoming technologies, so interactive surfaces can provide haptically distinguishable surfaces. This thesis describes the systematic exploration of large and shaped interactive surfaces and analyzes their potential for interaction while driving. To this end, different prototypes for each characteristic were developed and evaluated in test settings suitable for their maturity level. These prototypes were used to obtain subjective user feedback and objective data, and to investigate effects on driving and glance behavior as well as on usability and user experience. As a contribution, this thesis provides an analysis of the development of interactive surfaces in the car. Two characteristics, largeness and shape, are identified that can improve interaction compared to conventional touchscreens. The presented studies show that large interactive surfaces can provide new and improved ways of interacting in both driver-only and driver-passenger situations. Furthermore, the studies indicate a positive effect on visual distraction when additional static haptic feedback is provided by shaped interactive surfaces. Overall, various, non-exclusively applicable interaction concepts demonstrate the potential of interactive surfaces for use in automotive cockpits, which is expected to be beneficial also in other environments where visual attention must be focused on additional tasks.

    Development of Human-Computer Interactive Interface for Intelligent Automotive

    The wide application of information technology and network technology in automobiles has brought great changes to human-computer interaction. This paper studies the influence of human-computer interaction modes on driving safety, comfort, and efficiency, covering physical interaction, touch-screen interaction, augmented reality, speech interaction, and somatosensory interaction. Future human-computer interaction modes, such as multi-channel interaction and interaction based on biometrics and perception technology, are also discussed. Finally, a method for automotive human-computer interaction design based on existing technology is proposed, which offers guidance for the design of current automotive human-computer interaction interfaces.

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies of user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.
    https://deepblue.lib.umich.edu/bitstream/2027.42/153959/1/From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI.pdf