
    How Tilting the Head Interferes With Eye-Hand Coordination: The Role of Gravity in Visuo-Proprioceptive, Cross-Modal Sensory Transformations

    To correctly position the hand with respect to the spatial location and orientation of an object to be reached or grasped, visual information about the target and proprioceptive information from the hand must be compared. Since the visual and proprioceptive sensory modalities are inherently encoded in a retinal and a musculo-skeletal reference frame, respectively, this comparison requires cross-modal sensory transformations. Previous studies have shown that lateral tilts of the head interfere with these visuo-proprioceptive transformations. It is unclear, however, whether this phenomenon is related to neck flexion or to head-gravity misalignment. To answer this question, we performed three virtual reality experiments in which we compared a grasping-like movement with lateral neck flexions executed in an upright seated position and while lying supine. In the main experiment, the task requires cross-modal transformations, because the target information is acquired visually while the hand is sensed through proprioception only. In the other two control experiments, the task is unimodal, because both target and hand are sensed through one and the same sensory channel (vision and proprioception, respectively), and cross-modal processing is therefore unnecessary. The results show that lateral neck flexions have considerably different effects in the seated and supine postures, but only for the cross-modal task. More precisely, the subjects' response variability and the weight given to the visual encoding of the information increased significantly when supine. We show that these findings are consistent with the idea that head-gravity misalignment interferes with visuo-proprioceptive cross-modal processing. Indeed, the principle of statistical optimality in multisensory integration predicts the observed results if the noise associated with the visuo-proprioceptive transformations is assumed to be affected by gravitational signals, and not by neck proprioceptive signals per se. This finding is also consistent with the observation of otolithic projections to the posterior parietal cortex, which is involved in visuo-proprioceptive processing. Altogether, these findings represent clear evidence of the theorized central role of gravity in spatial perception. More precisely, otolithic signals would contribute to reciprocally aligning the reference frames in which the available sensory information can be encoded.

    This work was supported by the Centre National d'Etudes Spatiales (DAR 2017/4800000906, DAR 2018/4800000948, 2019/4800001041). JB-E was supported by a Ph.D. fellowship of the École Doctorale Cerveau-Cognition-Comportement (ED3C, n°158, Sorbonne Université and Université de Paris). The research team is supported by the Centre National de la Recherche Scientifique and the Université de Paris. This study contributes to the IdEx Université de Paris ANR-18-IDEX-0001.
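    The statistical-optimality principle invoked above can be made concrete with the standard maximum-likelihood cue-combination rule, in which each cue is weighted by the inverse of its variance. The sketch below uses invented numbers; following the hypothesis described in the abstract, the cross-modal transformation is modeled as extra noise added to the proprioceptive cue once it is re-expressed in the visual frame, with "supine" simply meaning a larger transformation noise.

```python
# Minimal sketch of maximum-likelihood (variance-weighted) cue fusion.
# All numbers are invented; var_transform models the extra noise the
# cross-modal transformation injects into the proprioceptive cue once
# it is re-expressed in the visual frame.

def integrate(s_vis, var_vis, s_prop, var_prop):
    """ML-optimal fusion of two independent Gaussian cues."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    estimate = w_vis * s_vis + (1 - w_vis) * s_prop
    variance = 1 / (1 / var_vis + 1 / var_prop)
    return estimate, variance, w_vis

s_vis, s_prop = 10.0, 12.0    # cue means (arbitrary units)
var_vis, var_prop = 1.0, 1.0  # baseline single-cue noise levels

for posture, var_transform in (("upright", 0.5), ("supine", 2.0)):
    _, var, w_vis = integrate(s_vis, var_vis, s_prop, var_prop + var_transform)
    print(f"{posture}: visual weight {w_vis:.2f}, response variance {var:.2f}")
```

    Both the visual weight and the combined response variance grow with the transformation noise, reproducing the two effects reported for the supine posture.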

    Establishing a Framework for the development of Multimodal Virtual Reality Interfaces with Applicability in Education and Clinical Practice

    The development of Virtual Reality (VR) and Augmented Reality (AR) content with multiple sources of both input and output has led to countless contributions in a great many fields, among them medicine and education. Nevertheless, the actual process of integrating the existing VR/AR media and setting it to purpose remains a highly scattered and esoteric undertaking. Moreover, seldom do the architectures that derive from such ventures include haptic feedback in their implementation, which in turn deprives users of one of the paramount aspects of human interaction, the sense of touch. Determined to circumvent these issues, the present dissertation proposes a centralized albeit modularized framework that enables the conception of multimodal VR/AR applications in a novel and straightforward manner. To accomplish this, the framework makes use of a stereoscopic VR Head-Mounted Display (HMD) from Oculus Rift©, a hand-tracking controller from Leap Motion©, a custom-made VR mount that allows the assemblage of the two preceding peripherals, and a wearable device of our own design. The latter is a glove that encompasses two core modules: one that conveys haptic feedback to its wearer, and another that handles the non-intrusive acquisition, processing and registering of the wearer's Electrocardiogram (ECG), Electromyogram (EMG) and Electrodermal Activity (EDA). The software elements of the aforementioned features were all interfaced through Unity3D©, a powerful game engine whose popularity in academic and scientific endeavors is ever increasing. Upon completion of our system, we set out to substantiate our initial claim with thoroughly developed experiences that would attest to its worth. With this premise in mind, we devised a comprehensive repository of interfaces, among which three merit special consideration: Brain Connectivity Leap (BCL), Ode to Passive Haptic Learning (PHL) and a Surgical Simulator.
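    As an illustration of the acquisition side of such a glove, the sketch below reads ECG/EMG/EDA samples over a serial link. The port name, baud rate, and the comma-separated line protocol are assumptions made for this example; they are not taken from the dissertation, which interfaces its hardware through Unity3D.

```python
# Hypothetical acquisition loop for a biosignal glove of the kind
# described above. Port name, baud rate, and the "ecg,emg,eda" line
# protocol are assumptions for illustration only.
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200  # assumed values

def read_samples(n=100):
    """Read n comma-separated ECG/EMG/EDA samples from the glove."""
    samples = []
    with serial.Serial(PORT, BAUD, timeout=1.0) as link:
        while len(samples) < n:
            line = link.readline().decode("ascii", errors="ignore").strip()
            try:
                ecg, emg, eda = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed or partial lines
            samples.append((ecg, emg, eda))
    return samples
```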

    Step into the Void: A Study of Spatial Perception in Virtual Reality

    The introduction of virtual reality (VR) into the architectural profession offers an unprecedented opportunity to experience unbuilt designs at full scale. The premise of the technology is that it gives users the illusion of being in another place by replacing their field of vision with a digital image. While VR technology, for the most part, can only simulate visual sensations at this point in its development, it has demonstrated in various applications that the immersiveness of the medium can elicit visceral reactions. This potential could be leveraged to expand the capacity of architects to convey the complexities of architectural space in an easily comprehensible form. Because VR is relatively unfamiliar, especially in architecture, there is a need to identify the technology's strengths and weaknesses so that it can be appropriately utilized in practice. The goal of this thesis is to further the understanding of interior spatial perception in VR. Perception of interior space is affected by many visual factors, such as the shape of the space, its level of detail, and how crowded it is. To test the impact of these aspects of spatial perception in VR, a set of experiments was conducted at the School of Architecture. Participants engaged in a series of exercises in which they attempted to position the walls and ceilings of a series of rooms to match a given set of dimensions. Each room was designed slightly differently to test the aforementioned aspects of spatial perception. The exercises were completed once with orthogonal architectural drawings and once with VR. Some results indicate that atmospheric design elements may be more impactful when represented in VR, but further research is required. In most cases, participants were more accurate when using orthogonal drawings to complete the exercises. However, participants created rooms that were more similar to each other when completing the exercises in VR, which suggests that VR may be more effective than orthogonal drawings at imparting a common understanding of space to different people, an encouraging sign that VR is an effective medium for communication.
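    The two findings above rest on two different summary statistics: accuracy, the error of each participant's room against the target dimensions, and similarity, the spread of the produced dimensions across participants. A minimal sketch with invented numbers shows how the two can dissociate:

```python
# Sketch of the two measures contrasted above: accuracy (error against
# the target dimension) and between-participant similarity (spread of
# the produced dimensions). The data below are invented.
import statistics

target = 3.0  # target wall height in metres (assumed)
drawings = [2.9, 3.2, 2.7, 3.4, 3.0]   # per-participant results
vr       = [3.3, 3.4, 3.2, 3.35, 3.3]

for label, xs in (("drawings", drawings), ("VR", vr)):
    error = statistics.mean(abs(x - target) for x in xs)  # accuracy
    spread = statistics.stdev(xs)                         # similarity
    print(f"{label:8s} mean abs error {error:.2f} m, spread {spread:.2f} m")
```

    Here the drawings condition has the lower error but the VR condition the lower spread, which is the pattern the thesis reports.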

    Augmented interaction for custom-fit products by means of interaction devices at low costs

    This Ph.D. thesis describes a research project that aims at developing an innovative platform for designing lower-limb prostheses (both for below- and above-knee amputations), centered on a virtual model of the amputee and based on a computer-aided, knowledge-guided approach. Attention has been focused on the modeling tool for the socket, which is the most critical component of the whole prosthesis. The main aim has been to redesign and develop a new prosthetic CAD tool, named SMA2 (Socket Modelling Assistant2), exploiting low-cost IT technologies (e.g., hand/finger tracking devices) and making the user's interaction as natural as possible and similar to hand-made manipulation. The research activities have been carried out in six phases, as described in the following.

    First, the limits and criticalities of the previously available modeling tool (SMA) were identified. To this end, the first version of SMA was tested with Ortopedia Panini and the orthopedic research group of Salford University in Manchester on real case studies. The main criticalities were related to: (i) automatic reconstruction of the residuum's geometric model starting from medical images, (ii) performance of the virtual modeling tools used to generate the socket shape, and (iii) interaction based mainly on traditional devices (e.g., mouse and keyboard).

    The second phase led to the software reengineering of SMA according to the limits identified in the first phase. The software architecture was redesigned following an object-oriented paradigm, and its modularity makes it possible to remove or add features in a very simple way. The new modeling system, SMA2, has been implemented entirely with open-source Software Development Kits (SDKs) (e.g., the Visualization ToolKit (VTK), OpenCASCADE and the Qt SDK) and is based on low-cost technology. It includes:
    • A new module to automatically reconstruct the 3D model of the residual limb from MRI images. In addition, a new procedure based on low-cost technology, such as the Microsoft Kinect V2 sensor, has been identified to acquire the 3D external shape of the residuum.
    • An open-source software library, named SimplyNURBS, for NURBS modeling, used here for the automatic reconstruction of the residuum's 3D model from medical images. Although SimplyNURBS was conceived for the prosthetic domain, it can be used to develop NURBS-based modeling tools for a range of application domains, from health care to clothing design.
    • A module for mesh editing to emulate the hand-made operations carried out by orthopedic technicians during the traditional socket manufacturing process. In addition, several virtual widgets have been implemented to provide virtual counterparts of the prosthetist's real tools, such as the tape measure and pencil.
    • A Natural User Interface (NUI) to allow interaction with the residuum and socket models using hand-tracking and haptic devices.
    • A module to generate the geometric models for additive manufacturing of the socket.

    The third phase concerned the study and design of augmented interaction, with particular attention to the Natural User Interface (NUI) for using hand-tracking and haptic devices with SMA2. The NUI is based on the Leap Motion device. A set of gestures, mainly iconic and suitable for the considered domain, has been identified, taking into account ergonomic issues (e.g., arm posture) and ease of use. The modularity of SMA2 makes it easy to generate the software interface for each augmented-interaction device.
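    The abstract does not list the individual gestures, so as an assumed example of the iconic, ergonomics-aware gestures such a NUI relies on, the sketch below detects a pinch ("grab") from the 3-D fingertip positions a Leap Motion-style tracker reports:

```python
# Assumed example of an iconic gesture for a hand-tracking NUI:
# a pinch detector over 3-D fingertip positions. The threshold and
# landmark layout are illustrative, not SMA2's actual definitions.
import math

PINCH_THRESHOLD_MM = 25.0  # assumed activation distance

def is_pinching(thumb_tip, index_tip):
    """True when thumb and index fingertips are close enough to 'grab'."""
    return math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

# Example frames (millimetres, tracker coordinates):
print(is_pinching((0.0, 0.0, 0.0), (12.0, 8.0, 5.0)))    # True
print(is_pinching((0.0, 0.0, 0.0), (40.0, 30.0, 10.0)))  # False
```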
    To generate these device interfaces automatically, a software module named Tracking plug-in has been developed: it produces the source code of the software interfaces that manage interaction with low-cost hand-tracking devices (e.g., the Leap Motion and the Intel Gesture Camera) and replicate/emulate the manual operations usually performed to design custom-fit products, such as medical devices and garments. Regarding haptic rendering, two different devices have been considered: the Novint Falcon and a haptic mouse developed in-house.

    In the fourth phase, additive manufacturing technologies were investigated, in particular FDM. 3D printing was exploited to permit the creation of trial sockets in the laboratory and thus evaluate the potential of SMA2. Furthermore, research activities studied new ways to design the socket. An innovative way to build the socket was developed based on multi-material 3D printing: taking advantage of flexible materials and multi-material printing, new 3D printers make it possible to create objects with both soft and hard parts. In this phase, issues concerning infill, materials and comfort were addressed by considering different compositions of materials to redesign the socket shape.

    In the fifth phase, the implemented solution, integrated within the whole prosthesis design platform, was tested with a transfemoral amputee. The following activities were performed:
    • 3D acquisition of the residuum using MRI and commercial 3D scanning systems (low-cost and professional).
    • Creation of the residual-limb and socket geometry.
    • Multi-material 3D printing of the socket using FDM technology.
    • Gait analysis of the amputee wearing the socket, using a markerless motion capture system.
    • Acquisition of the contact pressure between the residual limb and a trial socket by means of Tekscan's F-Socket system.

    The acquired data have been combined inside an ad hoc application that simultaneously visualizes the pressure data on the 3D model of the residual lower limb and the animation of the gait analysis, making it possible to correlate the phases of the gait cycle with the pressure data. The results have been considered very interesting, and several tests have been planned to try the system in orthopedic laboratories on real cases. The results reached so far have been very useful for evaluating the quality of SMA2 as a future instrument that orthopedic technicians can exploit to create real sockets for patients. The solution has the potential to become a commercial product able to replace the classic socket design procedure.

    The sixth phase concerned the evolution of SMA2 into a Mixed Reality environment, named Virtual Orthopedic LABoratory (VOLAB). The proposed solution is based on low-cost devices and open-source libraries (e.g., OpenCL and VTK). In particular, the hardware architecture consists of three Microsoft Kinect v2 sensors for human body tracking, the Oculus Rift DK2 head-mounted display for 3D environment rendering, and the Leap Motion device for hand/finger tracking. The software development has been based on the modular structure of SMA2, and dedicated modules have been developed to guarantee communication among the devices.
    At present, two preliminary tests have been carried out: the first to verify the real-time performance of the virtual environment, and the second to verify augmented interaction with the hands using the SMA2 modeling tools. The results achieved are very promising but highlighted some limitations of this first version of VOLAB, and improvements are necessary. For example, the quality of the 3D reconstruction of the real world, especially of the residual limb, could be improved by using two HD RGB cameras together with the Oculus Rift. To conclude, the results obtained have been judged very interesting and encouraging by the technical staff of the orthopedic laboratory. SMA2 will make possible an important change in the process of designing the socket of a lower-limb prosthesis, from a traditional hand-made manufacturing process to a totally virtual, knowledge-guided one. The proposed solutions and the results reached so far can be exploited in other industrial sectors where the final product heavily depends on the morphology of the human body. In fact, preliminary software development has been carried out to create a virtual environment for clothing design, starting from the basic modules of SMA2.
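    Returning to the fifth-phase application that overlays pressure data on the gait animation: its core requirement is aligning two streams sampled at different rates. A minimal sketch, with invented timestamps and pressures, pairs each motion-capture frame with the nearest pressure sample:

```python
# Minimal sketch of the time alignment such an application needs:
# for each gait-capture frame, look up the pressure sample closest
# in time. Timestamps and sampling rates are illustrative.
import bisect

def align(gait_times, pressure_times, pressure_values):
    """Nearest-timestamp pressure sample for each gait frame."""
    aligned = []
    for t in gait_times:
        i = bisect.bisect_left(pressure_times, t)
        if i == 0:
            j = 0
        elif i == len(pressure_times):
            j = i - 1
        else:  # pick whichever neighbour is closer to t
            j = i if pressure_times[i] - t < t - pressure_times[i - 1] else i - 1
        aligned.append(pressure_values[j])
    return aligned

gait_times = [0.00, 0.04, 0.08]  # 25 fps motion capture (assumed)
pressure_times = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08]
pressure_values = [10, 12, 15, 18, 22, 25, 23, 20, 17]  # kPa, invented
print(align(gait_times, pressure_times, pressure_values))  # [10, 22, 17]
```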

    The Perception/Action loop: A Study on the Bandwidth of Human Perception and on Natural Human Computer Interaction for Immersive Virtual Reality Applications

    Virtual Reality (VR) is an innovative technology which, in the last decade, has had widespread success, mainly thanks to the release of low-cost devices that have contributed to the diversification of its domains of application. The current work focuses on the general mechanisms underlying the perception/action loop in VR, in order to improve the design and implementation of applications for training and simulation in immersive VR, especially in the context of Industry 4.0 and the medical field. On the one hand, we want to understand how humans gather and process all the information presented in a virtual environment, through an evaluation of the visual system's bandwidth. On the other hand, since the interface has to be a sort of transparent layer allowing trainees to accomplish a task without directing any cognitive effort at the interaction itself, we compare two state-of-the-art solutions for selection and manipulation tasks: a touch-based one, the HTC Vive controllers, and a touchless, vision-based one, the Leap Motion. To this aim, we have developed ad hoc frameworks and methodologies. The software frameworks consist of VR scenarios in which the experimenter can choose the modality of interaction and the headset to be used, and set the experimental parameters, guaranteeing repeatable experiments under controlled conditions. The methodology includes the evaluation of performance, user experience and preferences, considering both quantitative and qualitative metrics derived from the collection and analysis of heterogeneous data, such as physiological and inertial sensor measurements, timings and self-assessment questionnaires. In general, VR has proven to be a powerful tool able to simulate specific situations in a realistic and involving way, eliciting the user's sense of presence without causing severe cybersickness, at least when interaction is limited to the peripersonal and near-action space. Moreover, when designing a VR application, it is possible to manipulate its features in order to trigger, or avoid triggering, specific emotions, and to voluntarily create potentially stressful or relaxing situations. Considering the ability of trainees to perceive and process information presented in an immersive virtual environment, results show that, when people are given enough time to build a gist of the scene, they are able to recognize a change with 0.75 accuracy when up to 8 elements are in the scene. For interaction, when selection and manipulation tasks do not require fine movements, the controllers and the Leap Motion ensure comparable performance; whereas, when tasks are complex, the former turn out to be more stable and efficient, also because the visual and audio feedback provided as a substitute for haptic feedback does not substantially improve performance in the touchless case.
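    The thesis reports raw change-detection accuracy (0.75 with up to 8 elements). A common companion summary in the change-detection literature, shown here as an assumed illustration rather than the thesis's own analysis, is Cowan's K, which converts hit and false-alarm rates into an estimate of how many elements are effectively monitored:

```python
# Cowan's K = N * (hit_rate - false_alarm_rate): an estimate of how
# many of the N scene elements are effectively monitored. The trial
# counts below are invented for illustration.

def cowan_k(n_items, hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return n_items * (hit_rate - fa_rate)

# e.g. 8-element scenes with 0.75 overall accuracy, split as below:
print(cowan_k(8, hits=30, misses=10, false_alarms=10, correct_rejections=30))
# -> 8 * (0.75 - 0.25) = 4.0 elements effectively tracked
```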

    Virtual and Mixed Reality Support for Activities of Daily Living

    Rehabilitation and training are extremely important processes that help people who have suffered some form of trauma to regain the ability to live independently and successfully complete activities of daily living. Virtual Reality (VR) and Mixed Reality (MR) have been used in rehabilitation and training, with examples in a range of areas such as physical and cognitive rehabilitation and medical training. However, previous research has mainly used non-immersive VR, such as video games on a computer monitor or television. Immersive VR head-mounted displays (HMDs) were first developed in 1965, but the devices were usually large, bulky and expensive. In 2016, the release of low-cost VR HMDs allowed for wider adoption of VR technology. This thesis investigates the impact of these devices in supporting activities of daily living through three novel applications: training powered-wheelchair driving skills in VR, training them in MR, and using VR to help with the cognitive rehabilitation of stroke patients. Results from the acceptability study for VR in cognitive rehabilitation showed that patients would be likely to accept VR as a method of rehabilitation, although factors such as visual issues need to be taken into consideration. The validation study for the Wheelchair-VR project showed promising results in terms of user improvement after the VR training session, but the majority of the users experienced symptoms of cybersickness. Wheelchair-MR did not show statistically significant improvements but did show a mean improvement compared to the control group, and the effects of cybersickness were greatly reduced compared to VR. We conclude that VR and MR can be used in conjunction with modern game engines to develop virtual environments that can be adapted to accelerate the rehabilitation and training of patients coping with different aspects of daily life.

    VR Storytelling

    The question of cinematic VR production has been on the table for several years. This is due to the peculiarity of the VR language: even though it is defined by an image that surrounds and immerses viewers rather than placing them, as in the classic cinematic situation, in front of a screen, it relies decisively on an audiovisual basis that cannot help but refer to cinematic practices of constructing visual and auditory experience. Despite this, it would be extremely reductive to consider VR as a mere transposition of elements of cinematic language. The VR medium is endowed with its own specificity, which inevitably shapes its forms of narration. We thus need to investigate the narrative forms it uses, which are probably related to cinematic language and draw their strength from the same basis, drink from the same well, but develop along different trajectories, thus displaying different links and affinities.

    Assessment of a hand exoskeleton on proximal and distal training in virtual environments for robot mediated upper extremity rehabilitation

    Stroke is the leading cause of disability in the United States, with approximately 800,000 cases per year. This cerebrovascular accident results in neurological impairments that reduce limb function and limit the daily independence of the individual. Evidence suggests that therapeutic interventions with repetitive motor training can aid functional recovery of the paretic limb. Robotic rehabilitation may offer an exercise intervention that can improve training and induce motor plasticity in individuals with stroke. An active (motorized) hand exoskeleton that provides support for wrist flexion/extension, abduction/adduction, pronation/supination, and finger pinch is integrated with a pre-existing 3-degree-of-freedom (DOF) haptic robot (Haptic Master, FCS Moog) to determine the efficacy of increased DOF during proximal and distal training in Upper Extremity (UE) rehabilitation. Subjects are randomly assigned to four groups to evaluate the significance of increased DOF during virtual training: Haptic Master control group (HM), Haptic Master with Gripper (HMG), Haptic Master with Wrist (HMW), and Haptic Master with Gripper and Wrist (HMWG). Each subject group performs a Pick and Place Task in a virtual environment in which the distal hand exoskeleton is mapped to the virtual representation of the hand. Subjects are instructed to transport as many virtual cubes as possible to a specified target in the allotted time period of 120 s. Three cube sizes are assessed to determine the efficacy of the assistive end-effector. An additional virtual task, the Mailbox Task, is performed to determine the effect of training and the ability to transfer skills between virtual settings in an unfamiliar environment. The effects of viewing mediums are also investigated, comparing an Oculus Rift HMD to conventional projection displays to determine the effect of immersion on performance. It is hypothesized that individuals with both proximal and complete distal hand control (HMWG) will see greater benefit during the Pick and Place Task than individuals without the complete distal attachment, as assisted daily-living tasks are often accomplished with coordinated arm and hand movement. The purpose of this study is to investigate the additive effect of increased degrees of freedom at the hand during task-specific training of the upper arm in a virtual environment, to validate the ability to transfer skills obtained in a virtual environment to an untrained task, and to determine the effects of viewing mediums on performance. A feasibility study is conducted in individuals with stroke to determine whether the modular gripper can assist pinch movements. Together, these studies comprehensively assess the potential benefits of assistive devices in a virtual reality setting to retrain lost function and increase motor-control efficacy in populations with motor impairments.
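    The abstract does not specify how the exoskeleton is mapped to the virtual hand; as a purely illustrative assumption, a common minimal approach is to calibrate each joint per subject and linearly rescale its reading into the virtual finger's flexion range:

```python
# Assumed illustration of mapping an exoskeleton joint reading to a
# virtual hand joint: clamp the sensed angle to a per-subject
# calibrated range and rescale it. Calibration limits are invented.

def map_joint(angle_deg, sensor_min, sensor_max,
              virtual_min=0.0, virtual_max=90.0):
    """Clamp and rescale a sensed joint angle to the virtual joint range."""
    t = (angle_deg - sensor_min) / (sensor_max - sensor_min)
    t = max(0.0, min(1.0, t))  # clamp to the calibrated range
    return virtual_min + t * (virtual_max - virtual_min)

# Per-subject calibration: sensor reads 12 deg at full extension,
# 78 deg at full pinch closure (assumed values).
print(map_joint(45.0, sensor_min=12.0, sensor_max=78.0))  # ~45.0 deg
```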