
    Direct interaction with large displays through monocular computer vision

    Large displays are everywhere, and have been shown to provide higher productivity gains and user satisfaction than traditional desktop monitors. The computer mouse remains the most common input tool for interacting with these larger displays, and much effort has gone into making this interaction more natural and intuitive. The use of computer vision for this purpose has been well researched, as it gives users freedom and mobility and allows them to interact at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly when it is used to recover depth information. This thesis investigates the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the interaction area available, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with a computer display as easy as pointing at real-world objects are explored. Studies were conducted to investigate how humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems. Models that underpin the pointing strategies used in many previous interactive systems were formalized. A proof-of-concept prototype was built and evaluated in several user studies. The results suggest that it is possible to support natural user interaction with large displays using low-cost monocular computer vision. Furthermore, the models developed and lessons learnt in this research can help designers build more accurate and natural interactive systems that exploit humans' natural pointing behaviours.
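    A common way to formalize distal pointing of the kind this abstract describes (a sketch of the general idea, not the thesis's actual model) is to cast a ray from the user's eye through the fingertip and intersect it with the display plane; the hit point becomes the cursor position on the virtual touchscreen:

```python
# Illustrative sketch of an eye-fingertip pointing ray (assumed model, not
# taken from the thesis). Coordinates are in a display-aligned frame where
# the display lies in the plane z = 0 and the user stands at positive z.

def ray_plane_intersection(eye, finger, plane_z=0.0):
    """Intersect the eye->finger ray with the display plane z = plane_z.

    eye, finger: (x, y, z) tuples.
    Returns the (x, y) hit point on the display, or None if the ray is
    parallel to the display or points away from it.
    """
    ex, ey, ez = eye
    fx, fy, fz = finger
    dz = fz - ez
    if dz == 0:            # ray parallel to the display plane
        return None
    t = (plane_z - ez) / dz
    if t <= 0:             # ray points away from the display
        return None
    return (ex + t * (fx - ex), ey + t * (fy - ey))

# A user whose eye is 2 m from the display, pointing straight ahead:
print(ray_plane_intersection((0.0, 1.6, 2.0), (0.0, 1.6, 1.5)))  # (0.0, 1.6)
```

    With monocular vision the depth terms in this model are the hard part to measure, which is the gap the thesis investigates.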

    Peripheral Notifications: Effects of Feature Combination and Task Interference

    Visual notifications are integral to interactive computing systems. The design of visual notifications entails two main considerations: first, visual notifications should be noticeable, as they usually aim to attract a user's attention to a location away from their main task; second, their noticeability has to be moderated to prevent user distraction and annoyance. Although notifications have been around for a long time in standard desktop environments, new computing environments such as large screens add new factors that have to be taken into account when designing notifications. With large displays, much of the content is in the user's visual periphery, where the human capacity to notice visual effects is diminished. One design strategy for enhancing noticeability is to combine visual features, such as motion and colour. Yet little is known about how feature combinations affect noticeability across the visual field, or about how peripheral noticeability changes when a user is working on an attention-demanding task. We addressed these questions in two studies. First, a laboratory study tested people's ability to detect popout targets that used combinations of three visual variables. After determining that the noticeability of a feature combination was approximately equal to that of the better of its individual features, we designed an experiment to investigate peripheral noticeability and distraction while a user focuses on a primary task. Our results suggest that there can be interference between the demands of primary tasks and the visual features of notifications. Furthermore, primary task performance is adversely affected by motion effects in peripheral notifications. Our studies contribute to a better understanding of how visual features operate when used as peripheral notifications, and provide new insights both on combining features and on interactions with primary tasks.

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for both input and output devices is market-ready, only a few solutions for everyday VR - online shopping, games, or movies - exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model, based on VR-specific external factors and evaluation metrics such as task performance and user preference. Building on this UX evaluation approach, we explore the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. We summarize our findings in design spaces and guidelines for choosing optimal interfaces and controls in VR.

    Handheld Augmented Reality in education

    In this thesis we conduct research in Augmented Reality (AR) aimed at learning environments, where interaction with students is carried out using handheld devices. Through three studies, we explore the learning outcomes that can be obtained using handheld AR in a game that we developed for children. We explore the influence of AR on Virtual Reality Learning Environments (VRLE) and the advantages it can bring, as well as its limits. We also test the game on two different handheld devices (a smartphone and a Tablet PC) and present conclusions comparing them in terms of satisfaction and interaction. Finally, we compare touch and tangible user interfaces in AR applications for children from a Human-Computer Interaction perspective. González Gancedo, S. (2012). Handheld Augmented Reality in education. http://hdl.handle.net/10251/17973

    Order Picking Supported by Mobile Computing

    In this dissertation I present the results of a newly developed mobile computing solution, with reasonable investment costs, that supports the picking process in a high-density picking environment with multiple orders. The developed solution is presented on a head-mounted display (HMD), with a graphical user interface that shows graphical representations of the shelves to pick from. Results show that in a high-density picking environment this solution is faster than paper pick lists and pick-by-voice, and virtually eliminates errors. Using color helps to identify the correct row, and some evidence suggests that symbols, partial images, and context feedback can further improve the error rate. Testing on an assembly line of an automobile manufacturer, where pick-by-light was normally used, revealed some difficulty in user acceptance of HMDs. A tablet PC mounted on the pick cart was well accepted in this study and may provide similar benefits and performance.

    Using Auto-Ordering to Improve Object Transfer between Mobile Devices

    People frequently form small groups in many social and professional situations: from conference attendees meeting at a coffee break, to siblings gathering at a family barbecue. These ad-hoc gatherings typically form into predictable geometries based on circles or circular arcs (called F-Formations). Because our lives are increasingly stored and represented by data on handheld devices, the desire to share digital objects while in these groupings has increased. Using the relative positions of devices within these groups could enable intuitive file-sharing interfaces such as passing or flicking. However, there is no reliable, lightweight, ad-hoc technology for detecting and representing relative locations around a circle. In this thesis, we present three systems that can auto-order locations around a circle based on sensors standard on commodity smartphones. We tested two of these systems in an object-passing task in a laboratory environment against unordered and proximity-based systems, and show that our techniques are faster, more accurate, and preferred by users.
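    One way such auto-ordering could work (a minimal sketch under assumed conditions, not the thesis's actual systems): if every phone in the circle is held facing the group's centre, each device's compass heading encodes its position on the circle, and sorting the headings yields a consistent circular ordering from which left/right neighbours follow:

```python
# Illustrative sketch (assumed approach, not taken from the thesis): order
# devices around a circle by the compass heading each phone reports while
# held facing the group's centre.

def circular_order(headings):
    """Sort device ids into circular order by compass heading in degrees.

    headings: dict mapping device id -> heading in [0, 360).
    """
    return sorted(headings, key=lambda dev: headings[dev] % 360.0)

def neighbours(headings, dev):
    """Return the (left, right) neighbours of `dev` in the circular order."""
    order = circular_order(headings)
    i = order.index(dev)
    return order[i - 1], order[(i + 1) % len(order)]

group = {"alice": 350.0, "bob": 80.0, "carol": 170.0, "dave": 260.0}
print(circular_order(group))     # ['bob', 'carol', 'dave', 'alice']
print(neighbours(group, "bob"))  # ('alice', 'carol')
```

    A passing or flicking gesture can then be routed to the left or right neighbour without any absolute positioning infrastructure.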

    Training high performance skills using above real-time training

    The Above Real-Time Training (ARTT) concept is a unique approach to training high performance skills. ARTT refers to a training paradigm that places the operator in a simulated environment that runs faster than normal time. Such a paradigm departs from the intuitive, but not often supported, belief that the best practice comes from the training environment with the highest fidelity. This approach is hypothesized to provide greater 'transfer value' per simulation trial by incorporating training techniques and instructional features into the simulator. These techniques allow individuals to acquire critical skills faster and retain them longer. ARTT also allows an individual trained in 'fast time' to operate in what appears to be a more confident state when the same task is performed in real time. Two related experiments are discussed. The findings appear consistent with previous results showing positive effects of task variation during training. Moreover, ARTT has merit in improving or maintaining transfer while sharply reducing training time. There are indications that the effectiveness of ARTT varies as a function of task content and possibly task difficulty. Other implications for ARTT are discussed, along with future research directions.
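    The core mechanism of above real-time training can be sketched as a fixed time-compression factor applied to the simulation clock (an illustrative sketch with an assumed 1.5x factor, not parameters from the paper):

```python
# Illustrative sketch (assumed, not from the paper): in ARTT the simulated
# world advances faster than wall-clock time by a fixed compression factor,
# so each second of trainee time covers more than a second of task time.

def simulate(duration_s, speed_factor=1.5, dt=0.1):
    """Run a fixed-step simulation for `duration_s` wall-clock seconds.

    Each wall-clock step of `dt` advances simulated time by dt * speed_factor.
    Returns the total simulated task time the trainee experienced.
    """
    steps = int(round(duration_s / dt))
    sim_time = 0.0
    for _ in range(steps):
        sim_time += dt * speed_factor  # the world runs faster than real time
    return sim_time

# Ten wall-clock seconds at 1.5x compression cover fifteen seconds of task:
print(round(simulate(10.0, speed_factor=1.5), 6))  # 15.0
```

    The hypothesis in the abstract is that practice under such compression transfers to real-time performance, so a trial at 1.5x delivers more task exposure per unit of training time.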