
    Investigating Embodied Interaction in Near-Field Perception-Action Re-Calibration on Performance in Immersive Virtual Environments

    Immersive Virtual Environments (IVEs) are becoming more accessible and more widely used for training. Previous research has shown that matching visual and proprioceptive information is important for calibration. Many state-of-the-art Virtual Reality (VR) systems, commonly known as Immersive Virtual Environments (IVEs), are created to train users in tasks that require accurate manual dexterity. Unfortunately, these systems can suffer from technical limitations that may force a de-coupling of visual and proprioceptive information due to interference, latency, and tracking error. It has also been suggested that closed-loop feedback during travel and locomotion in an IVE can overcome the compression of visually perceived depth at medium-field distances in the virtual world [33, 47]. Very few experiments have examined the carryover effects of multi-sensory feedback in IVEs during manual dexterous 3D interaction in overcoming distortions of near-field or interaction-space depth perception, or the relative importance of visual and proprioceptive information in calibrating users' distance judgments. In the first part of this work, we examined the recalibration of movements when the visually reached distance is scaled differently from the physically reached distance. We present an empirical evaluation of how visually distorted movements affect users' reaches to near-field targets in an IVE. In a between-subjects design, participants provided manual reaching distance estimates during three sessions: a baseline measure without feedback (open-loop distance estimation), a calibration session with visual and proprioceptive feedback (closed-loop distance estimation), and a post-interaction session without feedback (open-loop distance estimation). Participants were randomly assigned to one of three visual feedback conditions in the closed-loop session, during which they reached to the target while holding a tracked stylus: i) a Minus condition (-20% gain), in which the visual stylus appeared at 80% of the distance of the physical stylus; ii) a Neutral condition (0% or no gain), in which the visual stylus was co-located with the physical stylus; and iii) a Plus condition (+20% gain), in which the visual stylus appeared at 120% of the distance of the physical stylus. In all conditions there was evidence of visuo-motor calibration, in that users' accuracy in physically reaching to the target locations improved over trials. Scaled visual feedback was shown to calibrate distance judgments within an IVE, with estimates being farthest in the post-interaction session after calibrating to visual information appearing nearer (Minus condition), and nearest after calibrating to visual information appearing farther (Plus condition). The same pattern was observed in the closed-loop physical reach responses: participants generally reached farther in the Minus condition and closer in the Plus condition relative to the perceived target locations, whereas in the Neutral condition their physical reaches to the perceived target location were more accurate. We then characterized the properties of human reach motion, in the presence or absence of visuo-haptic feedback, in real and virtual environments within a participant's maximum arm reach. Our goal was to understand how physical reaching actions to the perceived location of targets differ between real and virtual viewing conditions in the presence or absence of visuo-haptic feedback.
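    As an illustration of the gain manipulation described above, the following minimal Python sketch shows how a visual stylus could be rendered at a scaled distance from the viewpoint. The function and variable names are ours, not the study's software.

```python
import numpy as np

def visual_stylus_position(physical_stylus, viewpoint, gain):
    """Render the visual stylus at a scaled distance from the viewpoint.

    gain = -0.20 -> Minus condition (visual stylus at 80% of physical distance)
    gain =  0.00 -> Neutral condition (visual stylus co-located with physical)
    gain = +0.20 -> Plus condition (visual stylus at 120% of physical distance)
    """
    offset = np.asarray(physical_stylus, dtype=float) - np.asarray(viewpoint, dtype=float)
    return np.asarray(viewpoint, dtype=float) + (1.0 + gain) * offset

# Example: a stylus held 0.5 m in front of the eyes under the Minus condition
print(visual_stylus_position([0.0, 0.0, 0.5], [0.0, 0.0, 0.0], -0.20))
# [0.  0.  0.4] -- the visual stylus appears at 80% of the physical reach
```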
Typically, participants reach to the perceived location of objects in the 3D environment to perform selection and manipulation actions during 3D interaction in applications such as virtual assembly or rehabilitation. In these tasks, participants typically have distorted perceptual information in the IVE as compared to the real world, in part due to technological limitations such as a limited visual field of view, low resolution, latency, and jitter. In an empirical evaluation, we asked the following questions: i) how do the perceptual differences between the virtual and real world affect our ability to accurately reach to the locations of 3D objects, and ii) how do participants' motor responses differ between the presence and absence of visual and haptic feedback? We examined factors such as the velocity and distance of physical reaching behavior in the real world and the IVE, both in the presence and absence of visuo-haptic information. The results suggest that physical reach responses vary systematically between real and virtual environments, especially in situations involving the presence or absence of visuo-haptic feedback. Our study provides a methodological framework for analyzing reaching motions for selection and manipulation with novel 3D interaction metaphors, and for characterizing visuo-haptic versus non-visuo-haptic physical reaches in virtual and real-world situations. While research has demonstrated that self-avatars can enhance one's sense of presence and improve distance perception, the effects of self-avatar fidelity on near-field distance estimation have yet to be investigated. We therefore investigated the effect of the visual fidelity of the self-avatar on users' depth judgments, reach boundary perception, and the properties of physical reach motion. Previous research has demonstrated that a self-avatar representation of the user enhances the sense of presence [37], and that even a static avatar can improve distance estimation at far distances [59, 48]. In this study, performance with a virtual avatar was also compared to real-world performance. Three levels of fidelity were tested: 1) an immersive self-avatar with realistic limbs, 2) a low-fidelity self-avatar showing only joint locations, and 3) an end-effector only. There were four primary hypotheses. First, we hypothesized that the mere presence of a self-avatar or end-effector would calibrate users' interaction-space depth perception in an IVE, so that participants' distance judgments would improve after the calibration phase regardless of the self-avatar's visual fidelity. Second, the magnitude of the change from pre-test to post-test would differ significantly depending on the visual detail of the self-avatar presented to participants (self-avatar vs. low-fidelity self-avatar and end-effector). Third, we predicted that distance estimation accuracy would be highest in the immersive self-avatar condition and lowest in the end-effector condition. Fourth, we predicted that the properties of physical reach responses would vary systematically between the visual fidelity conditions. The results suggest that reach estimates become more accurate as the visual fidelity of the avatar increases, with accuracy for high-fidelity avatars approaching real-world performance, as compared to the low-fidelity and end-effector conditions. There was also an effect of experimental phase: reach estimates became more accurate after receiving feedback in the calibration phase.
Lastly, we examined factors such as path length, time to complete the task, and the average velocity and acceleration of the physical reach motion, and compared all the IVE conditions with the real world. The results suggest that physical reach responses vary systematically between the VR viewing conditions and the real world.
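    A minimal sketch of how such kinematic summaries (path length, completion time, mean velocity, and mean acceleration) can be computed from a sampled reach trajectory; this is illustrative code of our own, not the thesis software:

```python
import numpy as np

def reach_kinematics(times, positions):
    """Summarize a reach: path length, duration, mean speed, mean |acceleration|.

    times:     (n,) timestamps in seconds
    positions: (n, 3) tracked hand/stylus samples in meters
    """
    t = np.asarray(times, dtype=float)
    p = np.asarray(positions, dtype=float)
    steps = np.diff(p, axis=0)                           # per-sample displacement
    dt = np.diff(t)
    path_length = np.linalg.norm(steps, axis=1).sum()
    speed = np.linalg.norm(steps / dt[:, None], axis=1)  # finite-difference speed
    accel = np.diff(speed) / dt[:-1]                     # rate of change of speed
    return {
        "path_length_m": path_length,
        "duration_s": t[-1] - t[0],
        "mean_speed_m_s": speed.mean(),
        "mean_abs_accel_m_s2": np.abs(accel).mean(),
    }
```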

    How to Build an Embodiment Lab: Achieving Body Representation Illusions in Virtual Reality

    Advances in computer graphics algorithms and virtual reality (VR) systems, together with the reduction in cost of the associated equipment, have led scientists to consider VR a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular, virtual body ownership, in which a feeling of ownership over a virtual body is elicited in the participant, has become a useful tool in the study of body representation in cognitive neuroscience and psychology, fields concerned with how the brain represents the body. Although VR has been shown to be a useful tool for exploring body ownership illusions, integrating the various technologies necessary for such a system can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend this system with the introduction of real-time motion capture, a simple haptics system, and the integration of physiological and brain electrical activity recordings.
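    To make the integration concrete, here is a toy per-frame update loop of the kind such a lab needs, in which motion capture drives the avatar while physiological recordings are timestamped against the same clock. All names are illustrative; the paper describes the architecture, not this code.

```python
import time

def run_embodiment_frame(mocap, avatar, haptics, physio, clock=time.monotonic):
    """One frame of a minimal embodiment-lab loop (illustrative sketch).

    mocap.read()   -> {joint_name: pose} for the current frame
    avatar.apply   -> maps tracked poses onto the virtual body
    haptics.update -> drives vibrotactile/haptic devices from avatar events
    physio.mark    -> timestamps physiological/EEG samples on a shared clock
    """
    t = clock()
    joint_poses = mocap.read()          # real-time motion capture input
    events = avatar.apply(joint_poses)  # virtual body follows the real body
    haptics.update(events)              # e.g., vibration on virtual touch
    physio.mark(t)                      # align recordings with the VR timeline
```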

    Examining the Effects of Altered Avatars on Perception-Action in Virtual Reality

    In virtual reality, avatars are animated graphical representations of a person embedded in a virtual environment. Previous research has illustrated the benefits of having an avatar when perceiving aspects of virtual reality. We studied the effect that a non-faithful, or altered, avatar had on the perception of one's action capabilities in VR. In Experiment 1, one group of participants acted with a normal, or faithful, avatar while the other group used an avatar with an extended arm, all in virtual reality. Experiment 2 used the same methodology and procedure as Experiment 1, except that only the calibration phase occurred in VR, while the remaining reaches were completed in the real world. All participants performed reaches to various distances. The results of these studies show that calibration to the altered dimensions of an avatar is possible after receiving feedback while acting with the altered avatar. Further, calibration occurred more quickly when feedback was initially used to transition from a normal avatar to an altered avatar than when later transitioning from the altered avatar arm back to the normal avatar arm without feedback. The implications of these findings for training in virtual reality simulations and transfer back to the real world are also discussed.
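    A sketch of how the extended-arm manipulation might be implemented: the virtual hand is placed along the real shoulder-to-hand vector, scaled by a gain. The gain value and names are hypothetical; the paper does not give an implementation here.

```python
import numpy as np

def altered_arm_hand_position(shoulder, hand, arm_gain=1.5):
    """Place the avatar's hand along the real shoulder-to-hand direction,
    scaled by arm_gain. arm_gain = 1.0 reproduces the faithful avatar;
    arm_gain > 1.0 yields the extended-arm avatar (value illustrative)."""
    shoulder = np.asarray(shoulder, dtype=float)
    hand = np.asarray(hand, dtype=float)
    return shoulder + arm_gain * (hand - shoulder)
```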

    Embodiment Sensitivity to Movement Distortion and Perspective Taking in Virtual Reality

    Despite recent technological improvements in immersive technologies, Virtual Reality suffers from severe intrinsic limitations, in particular the immateriality of the visible 3D environment. Typically, simulation and manipulation in a cluttered environment would ideally require providing collision feedback to every body part (arms, legs, trunk, etc.), not only to the hands as originally explored with haptic feedback. This thesis addresses these limitations by relying on a cross-modal perception and cognition approach instead of haptic or force feedback. We base our design on scientific knowledge of bodily self-consciousness and embodiment. It is known that the instantaneous experience of embodiment emerges from the coherent multisensory integration of bodily signals taking place in the brain, and that altering this mechanism can temporarily change how one perceives properties of one's own body. This mechanism is at stake during a VR simulation, and this thesis explores new avenues of interaction design based on these fundamental scientific findings about the embodied self. In particular, we explore the use of a third-person perspective (3PP) instead of permanently offering the traditional first-person perspective (1PP), and we manipulate the user-avatar motor mapping to achieve a broader range of interactions while maintaining embodiment. We are guided by two principles: to explore the extent to which we can enhance VR interaction through the manipulation of bodily aspects, and to identify the extent to which a given manipulation affects the embodiment of a virtual body. Our results provide new evidence supporting strong embodiment of a virtual body even when viewed from 3PP, and in particular that voluntarily alternating the point of view between 1PP and 3PP is not detrimental to the experience of ownership over the virtual body. Moreover, a detailed analysis of movement quality shows highly similar reaching behavior in both perspective conditions, with each perspective showing only the expected advantages or disadvantages in particular situations (e.g., occlusion of the target by the body in 3PP, limited field of view in 1PP). We also show that subjects are insensitive to visuo-proprioceptive movement distortions when the nature of the distortion is not made explicit, and that subjects are biased toward self-attributing distorted movements that make the task easier.
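    As a sketch of the perspective manipulation, the camera can simply be re-anchored relative to the tracked head; the 3PP offsets below are illustrative values, not the thesis's parameters.

```python
import numpy as np

def camera_position(head_position, head_forward, perspective="1PP"):
    """Return the camera position for first- or third-person viewing.

    1PP: camera at the avatar's eyes.
    3PP: camera behind and above the avatar, looking at the virtual body
         (offsets here are illustrative values).
    """
    head_position = np.asarray(head_position, dtype=float)
    head_forward = np.asarray(head_forward, dtype=float)
    if perspective == "1PP":
        return head_position
    return head_position - 1.5 * head_forward + np.array([0.0, 0.5, 0.0])
```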

    Can the Left Hand Benefit from being Right? The Influence of Body Side on Action Estimates

    Right-handed individuals (RHIs) possess various perceptual, visual, and somatosensory biases that facilitate more precise and efficient control of the right hand. For example, RHIs show asymmetries in the cortical representation of the two hands, with the right-hand representation in the left motor cortex being significantly larger than the left-hand representation in the right motor cortex. These biases have consequences for RHIs' perceptions of their bodies: they estimate their right hand and arm to be visually larger than their left, and subsequently estimate that they can reach farther distances or grasp larger objects with their right hand than with their left, despite no significant differences in the real morphology of the hands. One key question is whether the visual biases RHIs experience are sufficient to explain these asymmetries in action perception between the left and right hands. This thesis explores both the visual and non-visual (somatosensory) factors that underlie the strong right-hand preference and the impression of greater capability with the right hand in RHIs. To do this, virtual reality (VR) and motion capture technology were used in a series of nine experiments to isolate the visual feedback associated with moving the right hand from the somatosensory feedback that would ordinarily accompany it. Three of these nine experiments explored the impact of visual feedback specifying handedness on perceptions of reaching and grasping abilities, four explored people's ability to embody virtual limbs presented visually incongruently with somatosensory feedback, and two explored whether the left-hemisphere processing advantage for visually guided actions could be exploited in the context of visual illusions. Overall, the key findings of this thesis were that: 1) visual feedback specifying hand use did not have a significant impact on action estimates; 2) action estimates were based on the physical hand being used, and the right hand was estimated as more capable even when viewed as the left; and 3) differences in action estimates between the left and right hands are contingent on the complexity of the action being performed. Thus, the findings suggest that RHIs' perceptions of greater action capabilities with their right hand are primarily rooted in cortical asymmetries that lead to an enlarged sensorimotor representation of the right hand. Moreover, more efficient non-visual sensory feedback better accounts for differences in action perception than visual feedback specifying the hand being moved during visually guided actions. The findings of this thesis have broader implications for understanding the factors that underlie biases in action perception in RHIs.

    Novel Bidirectional Body-Machine Interface to Control Upper Limb Prosthesis

    Objective. The journey of a bionic prosthetic user is characterized by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb is the premise for mitigating the risk of abandonment through continuous use of the device. To achieve such a result, different aspects must be considered in making the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving the quality of life of amputees using upper limb prostheses. Still, as artificial proprioception is essential to perceive prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has recently been introduced and is a requirement of utmost importance in developing prosthetic hands. Indeed, closing the control loop between the user and the prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness. Approach. To achieve the objectives of this thesis, I developed a modular and scalable software and firmware architecture to control the Hannes prosthetic multi-Degree-of-Freedom (DoF) system and to fit all users' needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I began by investigating different strategies to produce more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed on the skin. Secondly, I developed a vibrotactile system implementing haptic feedback to restore proprioception and create a bidirectional connection between the user and the prosthesis. Similarly, I implemented object stiffness detection to restore the tactile sensation that connects the user with the external world. This closed loop between EMG control and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface that can strongly impact amputees' daily lives. For each of these three activities, (i) implementation of robust pattern recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study in which data from healthy subjects and amputees were collected in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects' ability to use the prosthesis by means of the F1Score metric (offline) and the Target Achievement Control (TAC) test (online). With this test, I analyzed the error rate, path efficiency, and time efficiency in completing different tasks. Main results.
Among the several methods tested for Pattern Recognition, Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1Score (99%, robustness), while offline analyses determined that a minimum of four electrodes was needed for it to function. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency). Finally, the online implementation allowed the subject to simultaneously control the DoFs of the Hannes prosthesis in a bioinspired and human-like way. In addition, I performed further tests with the same NLR-based control after endowing it with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual and no audio feedback). These results demonstrated an improvement in the controllability of the system, with an impact on user experience. Significance. The obtained results confirmed the hypothesis that the robustness and efficiency of prosthetic control improve thanks to the implemented closed-loop approach. Bidirectional communication between the user and the prosthesis can restore lost sensory functionality, with promising implications for direct translation into clinical practice.
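    One common reading of non-linear logistic regression for EMG control is a logistic output layer on top of a non-linear hidden layer, fed with simple amplitude features; the sketch below follows that reading, with the architecture and names assumed rather than taken from the thesis.

```python
import numpy as np

def rms_features(emg_window):
    """Root-mean-square amplitude per EMG channel for one time window.
    emg_window: (samples, channels); with up to six sensors, channels <= 6."""
    return np.sqrt(np.mean(np.square(emg_window), axis=0))

def nlr_predict(features, W1, b1, W2, b2):
    """Logistic outputs on top of a tanh hidden layer (our assumed form of
    'Non-Linear Logistic Regression'). Returns one activation in [0, 1]
    per controlled movement/DoF."""
    hidden = np.tanh(features @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))
```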

    Toward New Ecologies of Cyberphysical Representational Forms, Scales, and Modalities

    Research on tangible user interfaces commonly focuses on tangible interfaces acting alone or in comparison with screen-based multi-touch or graphical interfaces. In contrast, hybrid approaches can be seen as the norm for established mainstream interaction paradigms. This dissertation describes interfaces that support complementary information mediations, representational forms, and scales toward an ecology of systems embodying hybrid interaction modalities. I investigate systems combining tangible and multi-touch interaction, as well as systems combining tangible and virtual reality interaction. For each, I describe work focusing on design and fabrication aspects, as well as work focusing on reproducibility, engagement, legibility, and perception.

    Thought-controlled games with brain-computer interfaces

    Nowadays, EEG-based BCI systems are starting to gain ground in games-for-health research. Their reduced cost and the promise of an innovative and exciting new interaction paradigm have attracted developers and researchers to use them in video games for serious applications. However, with researchers focusing mostly on the signal processing part, the interaction aspect of BCIs has been neglected. This research disparity has created a gap between classification performance and online control quality in BCI-based systems, resulting in suboptimal interactions that lead to user fatigue and loss of motivation over time. Motor Imagery (MI) based BCI interaction paradigms can provide an alternative way to overcome motor-related disabilities and are being deployed in health settings to promote the functional and structural plasticity of the brain. For a BCI system in a neurorehabilitation environment to be advantageous, it should not only achieve high classification performance but also provoke a high level of engagement and sense of control in the user. It should also maximize the user's level of control over their actions without requiring long training periods on each specific BCI system. This thesis has two main contributions: the Adaptive Performance Engine, a system we developed that can provide up to a 20% improvement in user-specific performance, and NeuRow, an immersive Virtual Reality environment for motor neurorehabilitation consisting of a closed neurofeedback interaction loop based on MI and multimodal feedback, delivered through a state-of-the-art Head-Mounted Display.
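    For illustration, a minimal MI classification step might reduce the EEG to mu-band (8-12 Hz) power per channel and apply a trained linear rule; this sketch is our own and is not the thesis's pipeline or the Adaptive Performance Engine.

```python
import numpy as np

def bandpower(eeg, fs, lo=8.0, hi=12.0):
    """Mean spectral power per channel in [lo, hi] Hz, a standard
    motor-imagery feature. eeg: (samples, channels), fs in Hz."""
    spectrum = np.abs(np.fft.rfft(eeg, axis=0)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[0], d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].mean(axis=0)

def classify_imagery(eeg, fs, weights, bias):
    """Toy linear decision over band-power features: a positive score
    maps to one imagined movement, a negative score to the other.
    weights/bias would come from per-user training data."""
    return float(bandpower(eeg, fs) @ weights + bias)
```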

    Advancing proxy-based haptic feedback in virtual reality

    This thesis advances haptic feedback for Virtual Reality (VR). Our work is guided by Sutherland's 1965 vision of the ultimate display, which calls for VR systems to control the existence of matter. To push towards this vision, we build upon proxy-based haptic feedback, a technique characterized by the use of passive tangible props. The goal of this thesis is to tackle the central drawback of this approach, namely its inflexibility, which still hinders it from fulfilling the vision of the ultimate display. Guided by four research questions, we first showcase the applicability of proxy-based VR haptics by employing the technique for data exploration. We then extend the VR system's control over users' haptic impressions in three steps. First, we contribute the class of Dynamic Passive Haptic Feedback (DPHF), alongside two novel concepts for conveying kinesthetic properties, such as virtual weight and shape, through weight-shifting and drag-changing proxies. Conceptually orthogonal to this, we study how visual-haptic illusions can be leveraged to unnoticeably redirect the user's hand when reaching towards props. Here, we contribute a novel perception-inspired algorithm for Body Warping-based Hand Redirection (HR), an open-source framework for HR, and psychophysical insights. The thesis concludes by proving that the combination of DPHF and HR can outperform the individual techniques in terms of the achievable flexibility of proxy-based haptic feedback.
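    For context, a simplified, textbook form of body-warping hand redirection shifts the rendered hand toward the virtual target by an offset that grows with reach progress; the thesis contributes a perception-inspired variant, which this sketch does not reproduce.

```python
import numpy as np

def redirect_hand(hand, start, physical_target, virtual_target):
    """Body-warping hand redirection (simplified): as the real hand travels
    from start toward the physical prop, the rendered hand is blended toward
    the virtual target, so both 'arrive' at the same moment."""
    hand = np.asarray(hand, dtype=float)
    start = np.asarray(start, dtype=float)
    physical_target = np.asarray(physical_target, dtype=float)
    virtual_target = np.asarray(virtual_target, dtype=float)
    total = np.linalg.norm(physical_target - start)
    progress = np.clip(np.linalg.norm(hand - start) / total, 0.0, 1.0)
    return hand + progress * (virtual_target - physical_target)
```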

    Using visual feedback to guide movement: Properties of adaptation in changing environments and Parkinson's disease

    On a day-to-day basis we use visual information to guide the execution of our movements with great ease. Vision allows us to guide and modify our movements by transforming external sensory information into appropriate motor commands. The current literature characterizes the process of visuomotor adaptation, but fails to consider the incremental response to sensed errors that comprises a fully adaptive process. We aimed to understand the properties of the trial-by-trial transformation of sensed visual error into subsequent motor adaptation. In this thesis we further aimed to understand how visuomotor learning changes as a function of the experienced environment and how it is impacted by Parkinson's disease. Recent experiments in force learning have shown that adaptive strategies can be flexibly and readily modified according to the demands of the environment a person experiences. In Chapter 2, we investigated the properties of visual feedback strategies in response to environments that changed daily. We introduced visual environments that could change as a function of the likelihood of experiencing a visual perturbation, or of the direction of the visual perturbation bias across the workspace. By testing subjects in environments with changing statistics across several days, we were able to observe changes in visuomotor sensitivity across environments. We found that subjects experiencing changes in visual likelihood adopted strategies very similar to those seen in force field learning. However, unlike in haptic learning, we discovered that when subjects experienced different environmental biases, adaptive sensitivity could be affected both within a single training day and across training days. In Chapter 3, we investigated the properties of visuomotor adaptation in patients with Parkinson's disease. Previous experiments have suggested that patients with Parkinson's disease show impoverished visuomotor learning compared to healthy age-matched controls. We tested two aspects of visuomotor adaptation to determine the contribution of visual feedback in Parkinson's disease: visual extent, thought to be mediated by the basal ganglia, and visual direction, thought to be cortically mediated. We found that patients with Parkinson's disease fully adapted to changes in visual direction and showed more complete adaptation than control subjects, but their adaptation was impaired during changes of visual extent. Our results confirm the idea that basal ganglia deficits can alter aspects of visuomotor adaptation. However, we have shown that part of this adaptive process remains intact, in accordance with hypotheses stating that visuomotor control of direction and extent are separable processes.
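    The trial-by-trial transformation of sensed error into adaptation is commonly modeled with a linear state-space update, in which a retention factor and an error sensitivity govern learning; the sketch below uses that standard model with illustrative parameter values, not the thesis's fitted ones.

```python
import numpy as np

def simulate_adaptation(perturbations, retention=0.95, sensitivity=0.2):
    """Linear state-space model of trial-by-trial visuomotor adaptation:
    x[n+1] = retention * x[n] + sensitivity * (p[n] - x[n]),
    where x is the internal estimate of the perturbation p."""
    x, states = 0.0, []
    for p in np.asarray(perturbations, dtype=float):
        error = p - x                 # visual error sensed on this trial
        x = retention * x + sensitivity * error
        states.append(x)
    return np.array(states)

# Example: 50 trials of a constant 30-degree cursor rotation; adaptation
# plateaus near sensitivity / (1 - retention + sensitivity) of the rotation.
print(simulate_adaptation(np.full(50, 30.0))[-1])   # ~24 degrees
```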