
    Influence of real world and virtual reality on human mid-air pointing accuracy

    Mid-air pointing is a major gesture for humans to express a direction non-verbally. This work focuses on absolute pointing to reference an object or person that is in sight of the person performing the pointing gesture. In the future, we see mid-air pointing as one way to interact with objects and smart home environments; mid-air pointing could also replace the controller for interacting with a virtual environment. Recent work has shown that humans are imprecise when pointing in mid-air, and previous work has revealed a systematic offset in mid-air pointing. In this work, we reproduce these results and further reveal that the same effect is present in virtual environments. We also show that people point significantly differently in real and virtual environments. Therefore, to correct the systematic offset, we develop different models to determine the actual pointing direction. These models are based on a ground-truth study in which we recorded participants' body posture while they pointed in mid-air. Finally, we validate the models by conducting a second study with 16 new participants. Our results show that we can significantly reduce the offset, and that displaying a cursor indicating the pointing direction reduces the offset further. However, displaying a cursor increases the pointing time compared to no cursor.
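    To make the correction idea concrete, here is a minimal Python sketch of the kind of offset-correction model such a ground-truth study could yield: a linear mapping, fit by least squares, from raw pointing angles to the recorded target angles. The linear form, function names, and sample values are our illustrative assumptions, not the thesis's actual models.

```python
# Hypothetical sketch: fit a linear correction from raw pointing angles
# (yaw, pitch of the index-finger ray, in degrees) to ground-truth target
# angles recorded in a study. Names and data are illustrative only.
import numpy as np

def fit_offset_model(raw_angles: np.ndarray, target_angles: np.ndarray) -> np.ndarray:
    """Least-squares fit of corrected = [yaw, pitch, 1] @ W."""
    # Augment with a bias column so the model can learn a constant offset.
    X = np.hstack([raw_angles, np.ones((raw_angles.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, target_angles, rcond=None)
    return W  # shape (3, 2)

def correct(raw: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Apply the fitted correction to one raw (yaw, pitch) observation."""
    return np.append(raw, 1.0) @ W

# Example: recorded (yaw, pitch) pairs and where participants actually aimed.
raw = np.array([[10.0, 5.0], [20.0, 8.0], [35.0, 12.0], [50.0, 15.0]])
truth = np.array([[12.5, 3.0], [23.0, 5.5], [38.5, 9.0], [54.0, 11.5]])
W = fit_offset_model(raw, truth)
print(correct(np.array([30.0, 10.0]), W))  # corrected pointing direction
```

    A bias column suffices to absorb a constant systematic offset; the two linear weights additionally allow the correction to scale with pointing angle.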

    Finger orientation as an additional input dimension for touchscreens

    Since the first digital computer in 1941 and the first personal computer in 1975, the way we interact with computers has radically changed. The keyboard is still one of the two main input devices for desktop computers, accompanied most of the time by a mouse or trackpad. However, interaction with desktop and laptop computers today makes up only a small percentage of our interaction with computing devices. Today, we mostly interact with ubiquitous computing devices, and while the first ubiquitous devices were controlled via buttons, this changed with the invention of touchscreens. The phone, as the most prominent ubiquitous computing device, relies heavily on touch as its dominant input mode. Through direct touch, users can interact with graphical user interfaces (GUIs): GUI controls can be manipulated by simply touching them. However, current touch devices reduce the richness of touch input to two-dimensional positions on the screen. In this thesis, we investigate the potential of enriching a simple touch with additional information about the finger touching the screen. We propose using the user's finger orientation as two additional input dimensions. We investigate four key areas which form the foundation for fully understanding finger orientation as an additional input technique. With these insights, we provide designers with the foundation to design new gesture sets and use cases that take finger orientation into account. First, we investigate approaches to recognize finger orientation input and provide ready-to-deploy models to recognize the orientation. Second, we present design guidelines for comfortable use of finger orientation. Third, we present a method to analyze applications in social settings in order to design use cases with possible conversation disruption in mind. Lastly, we present three ways in which new interaction techniques such as finger orientation input can be communicated to the user. This thesis contributes these four key insights for fully understanding finger orientation as an additional input technique. Moreover, we combine the key insights to lay the foundation for evaluating every new interaction technique with the same in-depth evaluation.
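    As an illustration of the two proposed input dimensions, the sketch below extends a touch event with finger pitch and yaw and dispatches on them. The event structure, field names, and thresholds are hypothetical examples, not taken from the thesis.

```python
# Illustrative sketch (not the thesis's actual API): a touch event enriched
# with the two orientation dimensions the thesis proposes, finger pitch
# (elevation above the screen plane) and yaw (rotation around the normal).
from dataclasses import dataclass

@dataclass
class OrientedTouch:
    x: float          # screen position in px
    y: float
    pitch_deg: float  # finger elevation: 90 = perpendicular to the screen
    yaw_deg: float    # finger rotation in the screen plane, 0 = pointing "up"

def handle_touch(t: OrientedTouch) -> str:
    """Toy dispatcher: map orientation ranges to distinct actions."""
    if t.pitch_deg < 30:      # flat finger, e.g. a secondary action
        return "context-menu"
    if abs(t.yaw_deg) > 60:   # strongly rotated finger
        return "rotate-handle"
    return "select"           # steep, roughly upright touch

print(handle_touch(OrientedTouch(x=120, y=340, pitch_deg=20, yaw_deg=5)))
```

    The point of the sketch is that orientation turns a single touch point into a small gesture vocabulary without any extra screen real estate.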

    Enriching mobile interaction with garment-based wearable computing devices

    Wearable computing is on the brink of moving from research to the mainstream. The first simple products, such as fitness wristbands and smartwatches, have hit the mass market and achieved considerable market penetration. However, the number and versatility of research prototypes in the field of wearable computing are far beyond the devices available on the market. In particular, smart garments, as a specific type of wearable computer, have high potential to change the way we interact with computing systems. Due to their proximity to the user's body, smart garments allow implicit and explicit user input to be sensed unobtrusively: they are capable of sensing physiological information, detecting touch input, and recognizing the movement of the user. In this thesis, we explore how smart garments can enrich mobile interaction. Employing a user-centered design process, we demonstrate how different input and output modalities can enrich the interaction capabilities of mobile devices such as mobile phones or smartwatches. To understand the context of use, we chart the design space for mobile interaction through wearable devices, focusing on device placement on the body as well as interaction modality. We use a probe-based research approach to systematically investigate the possible inputs and outputs for garment-based wearable computing devices. We develop six different research probes showing how mobile interaction benefits from wearable computing devices and what requirements these devices pose for mobile operating systems. On the input side, we look at explicit input using touch and mid-air gestures as well as implicit input using physiological signals. Although touch input is well known from mobile devices, the limited screen real estate and the occlusion of the display by the input finger are challenges that can be overcome with touch-enabled garments. Additionally, mid-air gestures provide a more sophisticated and abstract form of input. We present a gesture elicitation study addressing the special requirements of mobile interaction and present the resulting gesture set. As garments are worn, they allow different physiological signals to be sensed, and we explore how these signals can be leveraged for implicit input. We conduct a study assessing physiological information, focusing on the workload of drivers in an automotive setting, and show that we can infer the driver's workload from these signals. Besides the input capabilities of garments, we explore how garments can be used for output. We present research probes covering the most important output modalities, namely visual, auditory, and haptic. We explore how low-resolution displays can serve as context displays and how and where content should be placed on such a display. For auditory output, we investigate a novel authentication mechanism utilizing the closeness of wearable devices to the body: by probing audio cues through the head of the user and re-recording them, user authentication is feasible. Last, we investigate electrical muscle stimulation (EMS) as a haptic feedback method and show that by actuating the user's body, an embodied form of haptic feedback can be achieved. From these research probes, we distilled a set of design recommendations, grouped into interaction-based and technology-based recommendations, which serve as a basis for designing novel ways of mobile interaction. We implement a system based on these recommendations. The system supports developers in integrating wearable sensors and actuators by providing an easy-to-use API for accessing these devices. In conclusion, this thesis broadens the understanding of how garment-based wearable computing devices can enrich mobile interaction. It outlines challenges and opportunities on both an interaction and a technological level. The unique characteristics of smart garments make them a promising technology for taking the next step in mobile interaction.
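    As a rough idea of what such an easy-to-use API could look like, the following is a hedged Python sketch; the class, event names, and methods are invented here for illustration and are not the thesis's actual implementation.

```python
# Hypothetical sketch of a garment device API: sensors raise events that
# application code subscribes to, and actuators are driven through a single
# output call. All names are our illustrative assumptions.
from typing import Callable

class GarmentDevice:
    def __init__(self, name: str):
        self.name = name
        self._handlers: dict[str, list[Callable]] = {}

    def on(self, event: str, handler: Callable) -> None:
        """Register a callback for a sensor event, e.g. 'touch' or 'heart_rate'."""
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, payload) -> None:
        """Deliver a sensor reading to all registered callbacks."""
        for handler in self._handlers.get(event, []):
            handler(payload)

    def actuate(self, channel: str, intensity: float) -> None:
        """Drive an output channel such as an EMS electrode or a small display."""
        print(f"{self.name}: {channel} -> {intensity:.2f}")

sleeve = GarmentDevice("smart-sleeve")
sleeve.on("touch", lambda pos: sleeve.actuate("haptic", 0.8))
sleeve.emit("touch", (12, 34))  # a simulated sensor reading triggers feedback
```

    An event-subscription design like this keeps application code decoupled from the specific garment hardware, which is the kind of integration burden such an API is meant to remove.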

    Implications of the uncanny valley of avatars and virtual characters for human-computer interaction

    Technological innovations have made it possible to create ever more realistic figures. Such figures are often modeled on human appearance and behavior, allowing interaction with artificial systems in a natural and familiar way. In 1970, however, the Japanese roboticist Masahiro Mori observed that robots and prostheses with a very, but not perfectly, human-like appearance can elicit eerie, uncomfortable, and even repulsive feelings. While real people or stylized figures do not seem to evoke such negative feelings, human depictions with only minor imperfections fall into the "uncanny valley," as Mori put it. Today, further innovations in computer graphics have led virtual characters into the uncanny valley, and they have thus become the subject of a number of disciplines. For research, virtual characters created by computer graphics are particularly interesting because they are easy to manipulate and can therefore contribute significantly to a better understanding of the uncanny valley and human perception. For designers and developers of virtual characters, such as in animated movies or games, it is important to understand how appearance, human-likeness, and virtual realism influence the experience and interaction of the user, and how believable and acceptable avatars and virtual characters can be created despite the uncanny valley. This work investigates these aspects and is the next step in the exploration of the uncanny valley. This dissertation presents the results of nine studies examining the effects of the uncanny valley on human perception, how it affects interaction with computing systems, which cognitive processes are involved, and which causes may be responsible for the phenomenon. Furthermore, we examine not only methods for avoiding uncanny or unpleasant effects but also the preferred characteristics of virtual faces, and we bring the uncanny valley into context with related phenomena causing similar effects. By exploring the eeriness of virtual animals, we found evidence that the uncanny valley is not only related to the dimension of human-likeness, which significantly changes our view of the phenomenon. Furthermore, using advanced hand tracking and virtual reality technologies, we discovered that the uncanny valley interacts with other factors that depend on avatar realism: affinity with the virtual ego and the feeling of presence in the virtual world were also affected by gender and by deviating body structures such as a reduced number of fingers. Considering typing performance on keyboards in virtual reality, we also found that the perception of one's own avatar depends on the user's individual task proficiency. This thesis concludes with implications that not only extend existing knowledge about virtual characters, avatars, and the uncanny valley but also provide new design guidelines for human-computer interaction and virtual reality.

    Study of the interaction of older adults with touchscreens

    Tablets and smartphones have become mainstream technologies. However, the effects of aging on the motor skills involved in touch interaction have not been sufficiently considered in the design and evaluation of interactive systems, which has hindered the digital inclusion of older adults. This thesis studies the interaction of older adults with touchscreens in order to identify usability issues across different devices and input modalities (smartphone and tablet, finger and stylus). For this study, we designed an interactive system consisting of tactile puzzle games in which the drag-and-drop gesture is used to position the puzzle pieces onto their corresponding targets. In this framework, special attention was given to the analysis of the users' movements. The analysis of wrist postures during interaction allowed us to elucidate the relationship between the characteristics of older adults' movements and their performance, namely the longer times needed to execute the interaction gestures and the increased error rates of this group compared to younger adults. Taking the variability of users' motor skills into account during the design and evaluation of interactive systems is necessary to understand their difficulties and to improve the ergonomics and usability of touch interaction.
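    To illustrate the two performance measures compared across age groups, here is a small Python sketch that derives mean completion time and error rate from logged drag-and-drop trials; the data structure and field names are our assumptions, not the study's actual logging format.

```python
# Hypothetical trial log for a drag-and-drop puzzle task and the two
# summary measures the study compares: completion time and error rate.
from dataclasses import dataclass

@dataclass
class DragTrial:
    start_ms: int            # timestamp at pick-up of the puzzle piece
    end_ms: int              # timestamp at release
    dropped_on_target: bool  # False = the drop missed the target region

def summarize(trials: list[DragTrial]) -> tuple[float, float]:
    """Return (mean completion time in ms, error rate in [0, 1])."""
    mean_time = sum(t.end_ms - t.start_ms for t in trials) / len(trials)
    error_rate = sum(not t.dropped_on_target for t in trials) / len(trials)
    return mean_time, error_rate

trials = [DragTrial(0, 1800, True), DragTrial(0, 2400, False), DragTrial(0, 2100, True)]
mean_ms, errors = summarize(trials)
print(f"mean completion time: {mean_ms:.0f} ms, error rate: {errors:.0%}")
```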