
    Improving command selection in smart environments by exploiting spatial constancy

    With a steadily increasing number of digital devices, our environments are becoming increasingly smart: we can now use our tablets to control our TV, access our recipe database while cooking, and remotely turn lights on and off. Currently, this Human-Environment Interaction (HEI) is limited to in-place interfaces, where people have to walk up to a mounted set of switches and buttons, and navigation-based interaction, where people have to navigate on-screen menus, for example on a smartphone, tablet, or TV screen. Unfortunately, there are numerous scenarios in which neither of these two interaction paradigms provides fast and convenient access to digital artifacts and system commands. People, for example, might not want to touch an interaction device because their hands are dirty from cooking: they want device-free interaction. Or people might not want to look at a screen because it would interrupt their current task: they want system-feedback-free interaction. Currently, there is no interaction paradigm for smart environments that supports these kinds of interactions. In my dissertation, I introduce Room-based Interaction to solve this problem of HEI. With room-based interaction, people associate digital artifacts and system commands with real-world objects in the environment and point at these real-world proxy objects to select the associated digital artifact. The design of room-based interaction is informed by a theoretical analysis of navigation- and pointing-based selection techniques, in which I investigated the cognitive systems involved in executing a selection. An evaluation of room-based interaction in three user studies and a comparison with existing HEI techniques revealed that room-based interaction solves many shortcomings of existing HEI techniques: the use of real-world proxy objects makes it easy for people to learn the interaction technique and to perform accurate pointing gestures, and it allows for system-feedback-free interaction; the use of the environment as a flat input space makes selections fast; and the use of mid-air full-arm pointing gestures allows for device-free interaction and increases awareness of others' interactions with the environment. Overall, I present an alternative selection paradigm for smart environments that is superior to existing techniques in many common HEI scenarios. This new paradigm can make HEI more user-friendly, broaden the use cases of smart environments, and increase their acceptance for the average user.
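
    The core selection step described above, mapping a full-arm pointing gesture to one of the registered proxy objects, can be sketched in a few lines. The following Python snippet is a minimal illustration, not the dissertation's implementation; the names (ProxyObject, select_proxy) and the 15° angular tolerance are assumptions. It picks the proxy whose direction from the pointing origin is angularly closest to the pointing ray.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ProxyObject:
    """A real-world object registered as a proxy for a digital artifact or command."""
    name: str
    position: np.ndarray  # 3D position in room coordinates (meters)
    command: str          # associated system command

def select_proxy(origin, direction, proxies, max_angle_deg=15.0):
    """Return the proxy closest in angle to the pointing ray, if within tolerance."""
    direction = direction / np.linalg.norm(direction)
    best, best_angle = None, np.radians(max_angle_deg)
    for proxy in proxies:
        to_proxy = proxy.position - origin
        to_proxy = to_proxy / np.linalg.norm(to_proxy)
        angle = np.arccos(np.clip(np.dot(direction, to_proxy), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = proxy, angle
    return best

# Example: pointing from roughly the room's center toward the window sill.
proxies = [
    ProxyObject("window sill", np.array([2.0, 1.0, 1.5]), "lights/toggle"),
    ProxyObject("bookshelf",   np.array([-1.5, 0.5, 2.0]), "music/play"),
]
hit = select_proxy(np.array([0.0, 1.4, 0.0]), np.array([0.8, -0.1, 0.6]), proxies)
print(hit.command if hit else "no proxy selected")
```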

    TRAINING AND ASSESSMENT OF HAND-EYE COORDINATION WITH ELECTROENCEPHALOGRAPHY

    Ph.D. (Doctor of Philosophy)

    Augmented reality device for first response scenarios

    A prototype of a wearable computer system is proposed and implemented using commercial off-the-shelf components. The system is designed to allow the user to access location-specific information about an environment and to provide capability for user tracking. Areas of applicability primarily include first response scenarios, with possible applications in maintenance or construction of buildings and other structures. Necessary preparation of the target environment prior to the system's deployment is limited to noninvasive labeling using optical fiducial markers. The system relies on computational vision methods for registration of labels and user position. With the system, the user has access to on-demand information relevant to a particular real-world location. Team collaboration is assisted by user tracking and real-time visualizations of team member positions within the environment. The user interface and display methods are inspired by Augmented Reality (AR) techniques, incorporating a video-see-through Head Mounted Display (HMD) and a finger-bending sensor glove.
    Note: Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. At present, most AR research is concerned with the use of live video imagery which is digitally processed and augmented by the addition of computer-generated graphics. Advanced research includes the use of motion-tracking data, fiducial marker recognition using machine vision, and the construction of controlled environments containing any number of sensors and actuators. (Source: Wikipedia)
    Note: This dissertation is a compound document (it contains both a paper copy and a CD as part of the dissertation). The CD requires the following software: Adobe Acrobat; Microsoft Office; Windows Media Player or RealPlayer.
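
    The label registration mentioned above rests on standard fiducial-marker pose estimation. The sketch below is a hedged illustration using OpenCV's ArUco module and solvePnP rather than the system's actual pipeline; the marker size, camera intrinsics, and room_map lookup are assumptions, and detectMarkers follows the pre-4.7 opencv-contrib API (newer releases expose an ArucoDetector class instead).

```python
import cv2
import numpy as np

MARKER_SIZE = 0.15  # marker side length in meters (assumed)
# Camera intrinsics from a prior calibration (placeholder values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# 3D corners of a marker in its own coordinate frame (z = 0 plane).
half = MARKER_SIZE / 2.0
marker_corners_3d = np.array([[-half,  half, 0.0],
                              [ half,  half, 0.0],
                              [ half, -half, 0.0],
                              [-half, -half, 0.0]])

def locate_user(frame, room_map):
    """Estimate the camera (user) pose relative to any known fiducial marker
    visible in the frame; room_map maps marker ids to room locations."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        if int(marker_id) not in room_map:
            continue
        ok, rvec, tvec = cv2.solvePnP(marker_corners_3d,
                                      marker_corners.reshape(4, 2).astype(np.float64),
                                      K, dist)
        if ok:
            # rvec/tvec give the marker pose in camera coordinates; combined
            # with the marker's known room location, this yields the user's
            # position within the labeled environment.
            return room_map[int(marker_id)], rvec, tvec
    return None
```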

    Augmented reality and scene examination

    The research presented in this thesis explores the impact of Augmented Reality on human performance and compares this technology with Virtual Reality and with a head-mounted video feed for a variety of tasks that relate to scene examination. The motivation for the work was the question of whether Augmented Reality could provide a vehicle for training in crime scene investigation. The Augmented Reality application was developed using fiducial markers in the Windows Presentation Foundation, running on a wearable computer platform; Virtual Reality was developed using the Crytek game engine to present a photo-realistic 3D environment; and a video feed was provided through a head-mounted webcam. All media were presented through head-mounted displays of similar resolution to provide the sole source of visual information to participants in the experiments. The experiments were designed to increase the amount of mobility required to conduct the search task, i.e., from rotation in the horizontal or vertical plane through to movement around a room. In each experiment, participants were required to find objects and subsequently recall their locations. It is concluded that human performance is affected not merely by the medium through which the world is perceived but also by the constraints governing how movement in the world is controlled.

    Investigating Precise Control in Spatial Interactions: Proxemics, Kinesthetics, and Analytics

    Augmented and Virtual Reality (AR/VR) technologies have reshaped the way in which we perceive the virtual world. In fact, recent technological advancements provide experiences that make the physical and virtual worlds almost indistinguishable. However, the physical world affords subtle sensorimotor cues which we subconsciously utilize to perform simple and complex tasks in our daily lives. The lack of this affordance in existing AR/VR systems makes it difficult for them to achieve mainstream adoption over conventional 2D user interfaces. As a case in point, existing spatial user interfaces (SUIs) lack the intuition to perform tasks in a manner that is perceptually familiar to the physical world. The broader goal of this dissertation lies in facilitating an intuitive spatial manipulation experience, specifically for motor control. We begin by investigating the role of proximity to an action on precise motor control in spatial tasks. We do so by introducing a new SUI called the Clock-Maker's Work-Space (CMWS), with the goal of enabling precise actions close to the body, akin to the physical world. On evaluating our setup in comparison to conventional mixed-reality interfaces, we find that the CMWS affords precise actions for bi-manual spatial tasks. We further compare our SUI with a physical manipulation task and observe similarities in user behavior across both tasks. We subsequently narrow our focus to studying precise spatial rotation. We utilize haptics, specifically force feedback (kinesthetics), to augment fine motor control in spatial rotation tasks. By designing three kinesthetic rotation metaphors, we evaluate precise rotational control with and without haptic feedback for 3D shape manipulation. Our results show that haptics-based rotation algorithms allow for precise motor control in 3D space and also help reduce hand fatigue. In order to understand precise control in its truest form, we investigate orthopedic surgery training by analyzing bone-drilling tasks. We designed a hybrid physical-virtual simulator for bone-drilling training and collected physical data for analyzing precise drilling action. We also developed a Laplacian-based performance metric to help expert surgeons evaluate residents' training progress across successive years of orthopedic residency.
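
    As one illustration of kinesthetic feedback for precise rotation, a force-feedback device can render a detent-like restoring torque that pulls the current rotation angle toward the nearest snap increment. The sketch below is a hypothetical metaphor given for exposition only; it is not necessarily one of the dissertation's three rotation metaphors, and the step size, stiffness, and deadband are assumed values.

```python
import numpy as np

def snap_torque(angle_deg, snap_step_deg=15.0, stiffness=0.02, deadband_deg=0.5):
    """Spring-like restoring torque (N*m) pulling the current rotation angle
    toward the nearest multiple of snap_step_deg; zero inside a small deadband
    so fine adjustments near a detent are not fought by the device."""
    nearest = round(angle_deg / snap_step_deg) * snap_step_deg
    error = nearest - angle_deg
    if abs(error) < deadband_deg:
        return 0.0
    return stiffness * error  # proportional pull toward the detent

# Example: at 47.2 degrees the device gently pulls back toward 45 degrees.
print(snap_torque(47.2))  # small negative torque
```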

    An investigation of performer embodiment and performative interaction on an augmented stage

    This thesis concerns itself with an investigation of live performance on an augmented stage in front of an audience, where performers witness themselves as projection-mapped virtual characters able to interact with projected virtual scenography. An interactive virtual character is projected onto the body of a performer, its movements congruent with the performer's. Through visual feedback via a Head Mounted Display (HMD), the performer is virtually embodied in that they witness their virtualised body interacting with the virtual scenery and props of the augmented stage. The research is informed by a theoretical framework derived from theory on intermediality and performance, virtual embodiment, and performative interaction. A literature review of theatrical productions and performances utilising projection identifies a research gap: providing the performer with a visual perspective of themselves in relationship to the projected scenography. The visual perspective delivered via the HMD enables the performer to perform towards the audience and away from the interactive projected backdrop. The resultant ‘turn away’ from facing an interactive screen and instead performing towards an audience is encapsulated in the concept of the ‘Embodied Performative Turn’. The practice-based research found that changing the visual perspective presented to the performer impacted differently on performative interaction and virtual embodiment. A second-person or audience perspective, ‘performer-as-observed’, prioritises the perception of the virtual body and enhances performative behaviour but challenges effective performative interaction with the virtual scenography. Conversely, a first-person perspective, ‘performer-as-observer’, prioritises a worldview and enhances performative interaction, but negatively impacts performative behaviour with the loss of performer-as-observed. The research findings suggest that the presentation of differing perspectives to the performer can be used to selectively enhance performative interaction and performative behaviour on an augmented stage.

    Design of virtual reality systems for animal behavior research

    Virtual reality (VR) experimental behavior setups enable cognitive neuroscientists to study the integration of visual depth cues and self-motion cues into a single percept of three-dimensional space. Rodents can navigate a virtual environment by running on a spherical treadmill, but simulating locomotion in this way can both bias and suppress the frequency of their behaviors as well as introduce vestibulomotor and vestibulovisual sensory conflict during locomotion. Updating the virtual environment via the subject's own freely-moving head movements solves both the naturalistic-behavior bias and the vestibular conflict issues. In this thesis, I review elements of self-motion and 3D scene perception that contribute to a sense of immersion in virtual environments and suggest a freely-moving CAVE system as a VR solution for low-artifact neuroscience experiments. The manuscripts describing the 3D graphics Python package and the virtual reality setup are included. In this freely-moving CAVE VR setup, rats demonstrate immersion in virtual environments by displaying height aversion to virtual cliffs, a preference for exploring virtual objects, and spontaneous modification of their locomotion trajectories near virtual walls. These experiments help bridge the classic behavior and virtual reality literature by showing that rats display similar behaviors toward virtual environment features without training.
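
    Updating a CAVE from the subject's freely moving head amounts to recomputing an off-axis (asymmetric) viewing frustum for each projection wall from the tracked head position every frame. The sketch below shows this for a single wall lying in the z = 0 plane of a wall-aligned frame; the wall dimensions, clip planes, and function name are illustrative assumptions, and the thesis's Python package may implement the projection differently.

```python
import numpy as np

def off_axis_projection(head, wall_x=(-0.5, 0.5), wall_y=(0.0, 0.5),
                        near=0.05, far=10.0):
    """OpenGL-style projection matrix for one CAVE wall in the z = 0 plane of a
    wall-aligned frame, viewed from the tracked head position (hx, hy, hz),
    with hz > 0 being the head's distance in front of the wall (meters)."""
    hx, hy, hz = head
    scale = near / hz
    l = (wall_x[0] - hx) * scale
    r = (wall_x[1] - hx) * scale
    b = (wall_y[0] - hy) * scale
    t = (wall_y[1] - hy) * scale
    return np.array([
        [2 * near / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * near / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Recomputed every frame from the tracked head pose, so the projected scene
# stays geometrically correct from the animal's viewpoint; the model-view
# matrix is additionally translated by the negated head position.
P = off_axis_projection(head=(0.1, 0.25, 0.4))
print(P)
```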