61 research outputs found

    Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems

    When users want to interact with an in-air gesture system, they must first address it. This involves finding where to gesture so that their actions can be sensed, and how to direct their input towards that system so that they do not also affect others or cause unwanted effects. This is an important problem [6] that lacks a practical solution. We present an interaction technique which uses multimodal feedback to help users address in-air gesture systems. The feedback tells them how (“do that”) and where (“there”) to gesture, using light, audio and tactile displays. By doing that there, users can direct their input to the system they wish to interact with, in a place where their gestures can be sensed. We discuss the design of our technique and three experiments investigating its use, finding that users can “do that” well (93.2%–99.9%) while finding “there” accurately (51 mm–80 mm) and quickly (3.7 s).
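
    As a rough illustration of the “there” half of such a technique (a sketch only, not the authors' implementation), the snippet below maps a tracked hand's distance from the centre of an assumed spherical sensing zone onto coarse feedback levels; the zone geometry, thresholds, and feedback labels are invented for illustration.

```python
# Hypothetical sketch: graded feedback for guiding a hand into a sensing zone.
# Zone geometry, thresholds, and feedback channels are illustrative assumptions,
# not the system described in the abstract.
from dataclasses import dataclass
import math

@dataclass
class SensingZone:
    cx: float      # centre of the sensed volume (metres)
    cy: float
    cz: float
    radius: float  # radius within which gestures can be sensed

def distance_to_centre(hand_xyz, zone):
    dx = hand_xyz[0] - zone.cx
    dy = hand_xyz[1] - zone.cy
    dz = hand_xyz[2] - zone.cz
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def feedback_level(hand_xyz, zone):
    """Return a coarse feedback level: stronger cues the further the hand is from 'there'."""
    d = distance_to_centre(hand_xyz, zone)
    if d <= zone.radius:
        return "in_zone"   # e.g. steady light plus a confirmation tone
    if d <= 2 * zone.radius:
        return "near"      # e.g. pulsing light guiding the hand inwards
    return "far"           # e.g. no feedback: the system is not being addressed

zone = SensingZone(cx=0.0, cy=1.2, cz=0.5, radius=0.15)
print(feedback_level((0.05, 1.25, 0.55), zone))   # -> "in_zone"
```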

    Mid-Air Gestural Interaction with a Large Fogscreen

    Projected walk-through fogscreens have been built, but there is little research evaluating interaction performance with them. The present study investigated mid-air hand gestures for interaction with a large fogscreen. Participants (N = 20) selected objects from a fogscreen using tapping and dwell-based gestural techniques, with and without vibrotactile/haptic feedback. In terms of Fitts’ law, the throughput was about 1.4 bps to 2.6 bps, suggesting that gestural interaction with a large fogscreen is a suitable and effective input method. Our results also suggest that tapping without haptic feedback has good performance and potential for interaction with a fogscreen, and that tactile feedback is not necessary for effective mid-air interaction. These findings have implications for the design of gestural interfaces suitable for interaction with fogscreens.
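
    For readers unfamiliar with the throughput figure quoted above, the sketch below shows the common ISO 9241-9 style calculation (effective index of difficulty divided by movement time); the trial values are invented for illustration and are not the study's data.

```python
# Minimal sketch of Fitts' law throughput (ISO 9241-9 style).
# The sample trial data below are invented for illustration, not the study's measurements.
import math
import statistics

def throughput_bps(distances_mm, endpoints_mm, movement_times_s):
    """Effective index of difficulty (bits) divided by mean movement time (s)."""
    # Effective width from the spread of selection endpoints along the task axis.
    w_e = 4.133 * statistics.stdev(endpoints_mm)
    d_mean = statistics.mean(distances_mm)
    id_e = math.log2(d_mean / w_e + 1)            # effective index of difficulty (bits)
    return id_e / statistics.mean(movement_times_s)

# Invented example trials: target distances, endpoint offsets from target centre, movement times.
distances = [300, 300, 300, 300, 300]            # mm
endpoints = [-12.0, 5.0, 8.0, -3.0, 10.0]        # mm, deviation from target centre
times = [0.95, 1.10, 1.02, 0.98, 1.05]           # seconds
print(f"throughput = {throughput_bps(distances, endpoints, times):.2f} bps")
```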

    Dynamic motion coupling of body movement for input control

    Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; however, in the context of touchless gestures these interaction principles suffer from several disadvantages, including poor memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as the third interaction principle for touchless gestures, which maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. First, using TraceMatch, a novel computer-vision-based system, we show how users can provide effective input, investigating input performance with different parts of the body and how users can switch modes of input spontaneously in realistic application scenarios. Second, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint and demonstrate its unique capabilities through an exploration of the design space with application examples. Third, we explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we examine the robustness of algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures. Finally, we look across our work to distil guidelines for interface design and further considerations of how motion correlation can be used, both in general and for touchless gestures.
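
    The core idea of motion correlation can be illustrated with a small sketch: correlate a tracked point's recent trajectory with each displayed target's trajectory over a sliding window and select the best match. This shows only the general principle under assumed parameters (window length, threshold, min-of-axes combination); it is not TraceMatch's actual algorithm.

```python
# Illustrative sketch of motion-correlation selection: correlate a tracked point's
# recent trajectory with each on-screen target's trajectory and select the best match.
# Window length and threshold are illustrative assumptions, not TraceMatch's parameters.
import numpy as np

def pearson(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def match_target(user_xy, targets_xy, threshold=0.9):
    """user_xy: (N, 2) recent samples of the tracked point.
    targets_xy: dict name -> (N, 2) target trajectory over the same window."""
    best_name, best_score = None, threshold
    for name, traj in targets_xy.items():
        # Correlate x and y components separately and combine conservatively.
        score = min(pearson(user_xy[:, 0], traj[:, 0]),
                    pearson(user_xy[:, 1], traj[:, 1]))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Invented example: the user follows a circular target over one revolution.
t = np.linspace(0, 2 * np.pi, 60)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
user = circle + np.random.normal(scale=0.05, size=circle.shape)   # noisy pursuit
targets = {"circle": circle, "line": np.stack([t, np.zeros_like(t)], axis=1)}
print(match_target(user, targets))   # expected to select "circle"
```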

    Effectiveness of Lateral Auditory Collision Warnings: Should Warnings Be Toward Danger or Toward Safety?

    Objective. The present study investigated the design of spatially oriented auditory collision warning signals to facilitate drivers’ responses to potential collisions. Background. Prior studies on collision warnings have mostly focused on manual driving. It is necessary to examine the design of collision warnings for safe take-over actions in semi-autonomous driving. Method. In a video-based semi-autonomous driving scenario, participants responded to pedestrians walking across the road, with a warning tone presented in either the avoidance direction or the collision direction. The time interval between the warning tone and the potential collision was also manipulated. In Experiment 1, pedestrians always started walking from one side of the road to the other side. In Experiment 2, pedestrians appeared in the middle of the road and walked toward either side of the road. Results. In Experiment 1, drivers reacted to the pedestrian faster with collision-direction warnings than with avoidance-direction warnings. In Experiment 2, the difference between the two warning directions became non-significant. In both experiments, shorter time intervals to potential collisions resulted in faster reactions but did not influence the effect of warning direction. Conclusion. The collision-direction warnings were advantageous over the avoidance-direction warnings only when they occurred at the same lateral location as the pedestrian, indicating that this advantage was due to the capture of attention by the auditory warning signals. Application. The present results indicate that, during semi-autonomous driving, drivers would benefit most when warnings come from the side of the potential collision object rather than from the direction of a desirable avoidance action.
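
    As a rough illustration of what a “collision-direction” warning could look like in software (not the study's apparatus), the sketch below routes a warning tone to the stereo channel on the hazard's side of the lane centre; the tone parameters and the lateral-position convention are assumptions.

```python
# Illustrative sketch (not the study's apparatus): route a warning tone to the
# stereo channel on the side of the detected hazard ("collision-direction" warning).
import numpy as np

def collision_direction_warning(hazard_x, lane_centre_x=0.0,
                                freq_hz=1000, duration_s=0.5, sample_rate=44100):
    """Return a stereo buffer with the tone on the hazard's side of the lane centre."""
    t = np.linspace(0, duration_s, int(sample_rate * duration_s), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * freq_hz * t)
    silent = np.zeros_like(tone)
    if hazard_x < lane_centre_x:                 # hazard on the left -> left channel
        return np.stack([tone, silent], axis=1)
    return np.stack([silent, tone], axis=1)      # hazard on the right -> right channel

stereo = collision_direction_warning(hazard_x=-1.5)   # pedestrian 1.5 m to the left
print(stereo.shape)   # (22050, 2): warning tone in the left channel
```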

    Self-Image Multimedia Technologies for Feedforward Observational Learning

    This dissertation investigates the development and use of self-images in augmented reality systems for learning and learning-based activities. This work focuses on self-modeling, a particular form of learning, actively employed in various settings for therapy or teaching. In particular, this work aims to develop novel multimedia systems to support the display and rendering of augmented self-images. It aims to use interactivity (via games) as a means of obtaining imagery for use in creating augmented self-images. Two multimedia systems are developed, discussed and analyzed. The proposed systems are validated in terms of their technical innovation and their clinical efficacy in delivering behavioral interventions for young children on the autism spectrum.

    E-Learning

    Technology development, mainly in telecommunications and computer systems, was a key factor in enabling interactivity and, thus, in the expansion of e-learning. This book is divided into two parts, presenting some proposals to deal with e-learning challenges, opening up a way of learning about and discussing new methodologies to increase the interaction level of classes, and implementing technical tools for helping students to make better use of e-learning resources. In the first part, the reader may find chapters mentioning the required infrastructure for e-learning models and processes, organizational practices, suggestions, implementation of methods for assessing results, and case studies focused on pedagogical aspects that can be applied generically in different environments. The second part is related to tools that can be adopted by users, such as graphical tools for engineering, mobile phone networks, and techniques to build robots, among others. Moreover, part two includes some chapters dedicated specifically to e-learning areas like engineering and architecture.

    Interactive advertising displays

    Interactive public displays are the latest development in the field of out-of-home advertising. Throughout history, characteristic billboard shapes have evolved, such as flat rectangular displays, long displays, and cylindrical advertising columns. This work presents novel interactive display designs that are based on these historical role models and allow passers-by to interact with them in a natural, touchless manner. It further pursues a vision where interactive public displays become more active themselves and actively influence passer-by behavior in order to increase their effectiveness, better attract attention, and improve public interaction in front of them. First, to overcome the challenge that passers-by often do not expect public displays to be interactive and thus pay no attention to them, this work presents a solution called unaware initial interaction, which surprises passers-by and communicates interactivity by giving visual feedback to their initial movements. To be effective, the visual feedback has to be designed with the specific display shapes, their content requirements, and the typical approach trajectories in mind. Second, to overcome the challenge that larger groups of passers-by often crowd together in front of wide public displays or do not take optimal positions for interaction, this work presents a solution that subtly and actively guides users by dynamic and interactive visual cues on the screen in order to better distribute them. To explore these concepts, and following an initial analysis of the out-of-home domain and of typical display qualities, interactive counterparts to the classical display shapes are designed, such as interactive advertising columns, long banner displays, and life-size screens. Then interactive content and visual feedback are designed which implement the presented interactivity concepts, and audience behavior around them is analyzed in several long-term field studies in public space. Finally, the observed passer-by and user behavior and the effectiveness of the display and content designs are discussed, and takeaways are given that are useful for practitioners and researchers in the field of public interaction with out-of-home displays.
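
    One way the guidance idea could be realised in code (a sketch under assumed tracking input, not the dissertation's system) is to assign tracked users to evenly spaced slots along the display, which the screen could then highlight with visual cues to spread people out; the display width and position format are assumptions.

```python
# Illustrative sketch (not the dissertation's system): assign tracked users in front of a
# wide display to evenly spaced interaction slots, which the screen could then highlight
# with visual cues to distribute people better. Display width and tracking format are assumptions.
def guidance_slots(user_positions_m, display_width_m=6.0):
    """Map each tracked user (x position in metres) to an evenly spaced target slot."""
    n = len(user_positions_m)
    spacing = display_width_m / (n + 1)
    slots = [spacing * (i + 1) for i in range(n)]      # evenly spaced slot centres
    # Pair users with slots left-to-right so nobody has to cross another user.
    return dict(zip(sorted(user_positions_m), slots))

# Three passers-by crowding near the middle of a 6 m display.
print(guidance_slots([2.8, 3.0, 3.2]))
# -> {2.8: 1.5, 3.0: 3.0, 3.2: 4.5}: cues guide the outer two users apart
```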