
    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment are increasingly used in five key areas: medical training, industrial and commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR: haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to the automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves procedural accuracy. Automated haptic interaction can enable telepresence and tactile interaction with virtual artefacts in remote or simulated environments. Automation, machine learning, and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can serve as input to autonomous systems that customise training and automate difficulty levels to match individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this research is that no enhanced portable framework yet exists, that such a framework is needed, and that it would be beneficial to combine automation of the core technologies into a reusable automation framework for VR training.
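    As a toy illustration of the assessment-driven adaptation described above, difficulty selection could be keyed to recent trainee scores. The sketch below uses hypothetical names and thresholds; the overview does not prescribe a concrete algorithm.

```python
# Minimal sketch of assessment-driven difficulty adaptation (hypothetical
# thresholds and names; the overview does not specify an algorithm).

def next_difficulty(current_level: int, recent_scores: list[float],
                    target: float = 0.75, band: float = 0.10) -> int:
    """Raise difficulty when the trainee consistently scores above the
    target band, lower it when consistently below, otherwise hold."""
    if not recent_scores:
        return current_level
    mean_score = sum(recent_scores) / len(recent_scores)
    if mean_score > target + band:
        return current_level + 1          # trainee is under-challenged
    if mean_score < target - band:
        return max(1, current_level - 1)  # trainee is over-challenged
    return current_level                  # performance is in the target band

# Example: scores from the last three trials of a training task
print(next_difficulty(3, [0.9, 0.88, 0.92]))  # -> 4
```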

    Towards Skill Transfer via Learning-Based Guidance in Human-Robot Interaction

    This thesis presents learning-based guidance (LbG) approaches that aim to transfer skills from humans to robots. The approaches capture the temporal and spatial information of human motions and teach the robot to assist humans in collaborative tasks. In such physical human-robot interaction (pHRI) settings, learning from demonstrations (LfD) enables this skill transfer. Demonstrations can be provided through kinesthetic teaching and/or teleoperation: in kinesthetic teaching, humans directly guide the robot's body to perform a task, while in teleoperation, demonstrations are given through motion/vision-based systems or haptic devices. In this work, the LbG approaches are developed through kinesthetic teaching and teleoperation in both virtual and physical environments. First, the thesis compares and analyzes the capability of two types of statistical models, generative and discriminative, to generate haptic guidance (HG) forces and to segment and recognize gestures for pHRI, with application to virtual minimally invasive surgery (MIS) training. In this learning-based approach, the knowledge and experience of experts are modeled to improve the unpredictable motions of novice trainees. Two statistical models, the hidden Markov model (HMM) and hidden conditional random fields (HCRF), are used to learn gestures from demonstrations in a virtual MIS-related task. The models are developed to automatically recognize and segment gestures as well as generate guidance forces. In the practice phase, the guidance forces are calculated adaptively in real time according to the similarity between the user's motion and the gesture models. Both statistical models can successfully capture the user's gestures and provide adaptive HG; however, the results show the superiority of the HCRF, a discriminative method, over the HMM, a generative method, in terms of user performance. In addition, LbG approaches are developed for kinesthetic HRI simulations that aim to transfer the skills of expert surgeons to resident trainees. The discriminative nature of the HCRF is incorporated into the approach to produce LbG forces and to discriminate the skill levels of users. To evaluate this kinesthetic approach experimentally, a femur bone drilling simulation is developed in which residents receive haptic feedback based on real computed tomography (CT) data, enabling them to feel the variable stiffness of bone layers. Orthopaedic surgeons must adjust the drilling force because bone layers differ in stiffness. In the learning phase, an expert HCRF model is trained on expert surgeons' demonstrations in the simulation to learn the stiffness variations of the bone layers. A novice HCRF model is also developed from the demonstrations of novice residents to discriminate the skill level of a new trainee. During the practice phase, the learning-based approach, which encodes the stiffness variations, guides trainees to perform the training tasks with motions similar to the experts'. Finally, in contrast to the other parts of the thesis, an LbG approach is developed through teleoperation in a physical environment. The approach assists operators in navigating a teleoperated robot through a haptic steering wheel and a haptic gas pedal. A set of expert operator demonstrations is used to develop a maneuvering skill model; the temporal and spatial variations of the demonstrations are learned using an HMM as the skill model. A modified Gaussian mixture regression (GMR), in combination with the HMM, is also developed to robustly reproduce the motion. The GMR computes the output motions from a joint probability density function of the data rather than modeling the regression function directly. In addition, the distance between the robot and obstacles is incorporated into the impedance control to generate guidance forces that also help operators avoid collisions. Using different forms of variable impedance control, guidance forces are computed in real time with respect to the similarity between the user's maneuvers and the skill model, encouraging users to navigate the robot as the expert operators did. The results show that user performance improves in terms of the number of collisions, task completion time, and average closeness to obstacles.
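    As a minimal sketch of two ingredients named in this abstract, Gaussian mixture regression over a joint density and a variable-impedance guidance force, the toy example below conditions a 1-D GMM on an input to obtain a reference motion and scales the guidance stiffness by a similarity term. All parameters are illustrative assumptions, not the thesis's models.

```python
import numpy as np

# Gaussian mixture regression: condition a GMM over joint (input, output)
# data on the input to get an output estimate, rather than fitting a
# regression function directly. Toy 1-D parameters, purely illustrative.
priors = np.array([0.5, 0.5])
means = np.array([[0.2, 1.0], [0.8, 2.0]])        # [input, output] per component
covs  = np.array([[[0.05, 0.01], [0.01, 0.10]],
                  [[0.05, 0.02], [0.02, 0.08]]])

def gmr(t):
    """E[output | input = t] under the joint GMM."""
    # Responsibility of each component for the input t (the 1/sqrt(2*pi)
    # constant cancels in the normalization below)
    w = np.array([p * np.exp(-0.5 * (t - m[0])**2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    w /= w.sum()
    # Per-component conditional mean of the output given the input t
    cond = [m[1] + c[1, 0] / c[0, 0] * (t - m[0]) for m, c in zip(means, covs)]
    return float(np.dot(w, cond))

def guidance_force(x, t, k_max=50.0, similarity=0.8):
    """Variable-impedance guidance: stiffness scales with how closely the
    user's motion matches the skill model (similarity in [0, 1])."""
    return similarity * k_max * (gmr(t) - x)

print(guidance_force(x=1.2, t=0.5))
```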

    Development and evaluation of a haptic framework supporting telerehabilitation robotics and group interaction

    Telerehabilitation robotics has grown remarkably in the past few years. It can deliver intensive training to people with special needs remotely while allowing therapists to observe the whole process. Telerehabilitation robotics is a promising way to support routine care: it can transform face-to-face, one-on-one treatment sessions, which demand intensive human resources and are restricted to specialised care centres, into technology-based treatments that involve less human effort and are easy to access remotely from anywhere. However, limitations such as network latency, jitter, and internet delay can negatively affect user experience and the quality of a treatment session. Moreover, the lack of social interaction, since all treatments are performed over the internet, can reduce patients' motivation. Together, these limitations make it very difficult to deliver an efficient recovery plan. This thesis developed and evaluated a new framework designed to facilitate telerehabilitation robotics. The framework integrates multiple cutting-edge technologies to generate playful activities that involve group interaction with binaural audio, visual, and haptic feedback, together with robot interaction, in a variety of environments. The research questions asked were: 1) Can activity mediated by technology motivate and influence the behaviour of users, so that they engage in the activity and sustain a good level of motivation? 2) Will working as a group enhance users' motivation and interaction? 3) Can we transfer a real-life activity involving group interaction to the virtual domain and deliver it reliably via the internet? The work had three goals: first, to compare people's behaviours and motivations while doing a task in a group and on their own; second, to determine whether group interaction in virtual and real environments differs in terms of performance, engagement, and strategy for completing the task; and finally, to test the effectiveness of the framework against benchmarks drawn from the socially assistive robotics literature. Three studies were conducted to achieve the first goal: two with healthy participants and one with seven autistic children. The first study observed how people react in a challenging group task, while the other two compared group and individual interactions. The results showed that group interactions were more enjoyable than individual interactions and most likely had more positive effects on user behaviour. This suggests that the group interaction approach has the potential to motivate individuals to move more and be more active, and that it could be applied in the future to more serious therapy. A further study measured the performance of group interaction in virtual and real environments and identified which aspects influence users' strategies for dealing with the task. The results of this study helped form a better understanding of how to predict a user's behaviour in a collaborative task. A simulation was run to compare the results generated by the predictor with the real data; it showed that, with an appropriate training method, the predictor can perform very well. This thesis has demonstrated the feasibility of group interaction via the internet using robotic technology, which could benefit people who need social interaction in their treatments (e.g. stroke patients and autistic children) without regular visits to clinical centres.

    A Framework for Prediction in a Fog-Based Tactile Internet Architecture for Remote Phobia Treatment

    The Tactile Internet, as the next generation of the Internet, aims to transmit the modality of touch in addition to conventional audiovisual signals, transforming today's content-delivery networks into skill-set delivery networks that promise ultra-low latency and ultra-high reliability. Whereas voice and data communication drove the design of the current Internet, the Tactile Internet enables haptic communications by incorporating 5G networks and edge computing. A novel use case for immersive, low-latency Tactile Internet applications is haptic-enabled virtual reality (VR), which requires an extremely low latency of less than 50 ms and opens the way to so-called remote phobia treatment via VR. This is a greenfield in the telehealth domain, with the goal of replicating normal therapy sessions between distant therapists and patients, making it a cost-efficient and time-saving solution. In this thesis, we consider a recently proposed fog-based, haptic-enabled VR system for the remote treatment of animal phobia, consisting of three main components: (1) the therapist-side fog domain, (2) the core network, and (3) the patient-side fog domain. The patient and therapist are located in different fog domains that communicate through the core network. The therapist tries to treat the phobic patient remotely via a shared haptic virtual reality environment. However, certain haptic sensation messages associated with hand movements might not arrive in time, even in the most reliable networks. In this thesis, a prediction model is proposed to address excessive packet latency as well as packet loss, both of which may degrade quality of experience (QoE). We aim to use machine learning to decouple the impact of excessive latency and extreme packet loss from the user experience. To this end, we propose a predictive framework called the Edge Tactile Learner (ETL). This fog-based framework is responsible for predicting the zones touched by the therapist's hand and delivering them immediately to the patient-side fog domain when needed. The proposed ETL builds a model based on weighted k-nearest neighbors (WKNN) to predict the zones touched by the therapist in a VR phobia treatment system. The simulation results indicate that the proposed predictive framework is instrumental in providing accurate, real-time haptic predictions to the patient-side fog domain. This increases the patient's immersion and the synchronization between multiple senses (audio, visual, and haptic), leading to higher user QoE.
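    The core of the ETL, as described, is weighted k-nearest neighbors. Below is a minimal sketch under assumed features (3-D therapist hand positions labeled with touched zones; the thesis defines its own feature set and weighting scheme).

```python
import numpy as np

# Weighted k-NN zone prediction, sketched with hypothetical training data:
# 3-D therapist hand positions, each labeled with the zone it touched.
train_X = np.array([[0.1, 0.2, 0.3], [0.4, 0.1, 0.2],
                    [0.8, 0.7, 0.1], [0.9, 0.6, 0.2]])
train_y = np.array([0, 0, 1, 1])   # zone ids

def predict_zone(x, k=3, eps=1e-9):
    """Predict the touched zone for hand position x: find the k nearest
    training samples and let each vote with weight 1/distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + eps)        # closer neighbors vote with more weight
    votes = {}
    for i, wi in zip(nn, w):
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + wi
    return max(votes, key=votes.get)

print(predict_zone(np.array([0.85, 0.65, 0.15])))  # -> 1
```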

    Context-aware learning for robot-assisted endovascular catheterization

    Endovascular intervention has become a mainstream treatment for cardiovascular diseases. However, multiple challenges remain, such as unwanted radiation exposure, limited two-dimensional image guidance, and insufficient force perception and haptic cues. Fast-evolving robot-assisted platforms improve the stability and accuracy of instrument manipulation, and the master-slave arrangement removes radiation exposure for the operator. However, the integration of robotic systems into the current surgical workflow is still debatable, since there is little value in executing repetitive, easy tasks by robotic teleoperation. Current systems offer very low autonomy; autonomous features could bring further benefits, such as reduced cognitive workload and human error, safer and more consistent instrument manipulation, and the ability to incorporate various medical imaging and sensing modalities. This research proposes frameworks for automated catheterisation built on different machine learning algorithms, including learning from demonstration, reinforcement learning, and imitation learning. These frameworks focus on integrating task context into the process of skill learning, thereby achieving better adaptation to different situations and safer tool-tissue interactions. Furthermore, the autonomous features were applied to a next-generation, MR-safe robotic catheterisation platform. The results provide important insights into improving catheter navigation through autonomous task planning and self-optimization with clinically relevant factors, and they motivate the design of intelligent, intuitive, and collaborative robots under non-ionizing imaging modalities.
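    One plausible reading of integrating context into skill learning is to condition a learned policy on context features alongside the instrument state. The behavioral-cloning sketch below illustrates that idea under assumed inputs and dimensions; it is not the thesis's actual model.

```python
import torch
import torch.nn as nn

# Behavioral cloning with context: the policy sees the catheter state
# concatenated with context features (e.g., vessel geometry, contact force).
# All dimensions and data here are hypothetical placeholders.
state_dim, context_dim, action_dim = 6, 4, 3

policy = nn.Sequential(
    nn.Linear(state_dim + context_dim, 64), nn.ReLU(),
    nn.Linear(64, action_dim),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One gradient step on a (state, context, expert action) demonstration batch
states = torch.randn(32, state_dim)
contexts = torch.randn(32, context_dim)
expert_actions = torch.randn(32, action_dim)

pred = policy(torch.cat([states, contexts], dim=1))
loss = nn.functional.mse_loss(pred, expert_actions)
opt.zero_grad(); loss.backward(); opt.step()
```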

    Gesture Recognition and Control for Semi-Autonomous Robotic Assistant Surgeons

    The next stage of robotics development is to introduce autonomy and cooperation with human agents in tasks that require high levels of precision and/or exert considerable physical strain. To guarantee the highest possible safety standards, the best approach is to devise a deterministic automaton that performs identically in every operation. Clearly, such an approach inevitably fails to adapt to changing environments or different human companions. In a surgical scenario, the highest variability lies in the timing of the different actions performed within the same phase. This thesis explores the solutions adopted in pursuing automation in robotic minimally invasive surgery (R-MIS) and presents a novel cognitive control architecture that uses a multi-modal neural network, trained on a cooperative task performed by human surgeons, to produce an action segmentation that provides the required timing for actions, while maintaining full control over phase execution via a deterministic Supervisory Controller and full execution safety via a velocity-constrained Model-Predictive Controller.
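    As a minimal sketch of a velocity-constrained model-predictive step in the spirit of this abstract (a toy single-integrator tool model with assumed bounds, not the thesis's controller):

```python
import cvxpy as cp

# Velocity-constrained MPC over a horizon H for a toy single-integrator
# tool model: track a reference position while keeping |velocity| bounded.
# Horizon, time step, and bounds are illustrative assumptions.
H, dt, v_max = 10, 0.05, 0.1       # horizon, step (s), velocity bound (m/s)
x_ref = 0.3                        # desired tool position (m)

x = cp.Variable(H + 1)             # predicted positions
v = cp.Variable(H)                 # velocity commands
constraints = [x[0] == 0.0, cp.abs(v) <= v_max]
constraints += [x[t + 1] == x[t] + dt * v[t] for t in range(H)]

# Penalize tracking error plus control effort
cost = cp.sum_squares(x[1:] - x_ref) + 0.1 * cp.sum_squares(v)
cp.Problem(cp.Minimize(cost), constraints).solve()

# Receding horizon: apply only the first command, then re-plan
print(v.value[0])
```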

    Sensorimotor experience in virtual environments

    The goal of rehabilitation is to reduce impairment and provide functional improvements that result in quality participation in the activities of life. Principles of plasticity and motor learning provide inspiration for therapeutic interventions, including movement repetition in a virtual reality environment. The objective of this research was to investigate function-specific measurements (kinematic, behavioral) and neural correlates of the motor experience of hand gesture activities in virtual environments (VE) stimulating sensory experience, using a hand agent model. The fMRI-compatible Virtual Environment Sign Language Instruction (VESLI) system was designed and developed to provide a number of rehabilitation and measurement features, to identify optimal learning conditions for individuals, and to track changes in performance over time. The therapies and measurements incorporated into VESLI target and track specific impairments underlying dysfunction. The goal of improved measurement is to develop targeted interventions embedded in higher-level tasks and to accurately track specific gains, in order to understand responses to treatment and the impact a response may have on higher-level function, such as participation in life. To further clarify the biological model of motor experience, and to understand the added value and role of virtual sensory stimulation and feedback, which includes seeing one's own hand movement, functional brain mapping was conducted with simultaneous kinematic analysis in healthy controls and in stroke subjects. It is believed that understanding these neural activations will make possible rehabilitation strategies that exploit the principles of plasticity and motor learning. The present research assessed the practice conditions that successfully promote gesture learning in the individual. For the first time, functional imaging experiments mapped the neural correlates of human interaction with complex virtual reality hand avatars moving synchronously with the subject's own hands. The findings indicate that healthy control subjects learned intransitive gestures in virtual environments using first- and third-person avatars, picture and text definitions, and while viewing visual feedback of their own hands, virtual hand avatars, or, in the control condition, hidden hands. Moreover, exercise in a virtual environment with a first-person hand avatar recruited insular cortex activation over time, which might indicate that this activation is associated with a sense of agency. Sensory augmentation in virtual environments modulated activations of important brain regions associated with action observation and action execution. The quality of the visual feedback was modulated, and brain areas were identified in which the amount of activation correlated positively or negatively with the visual feedback. When subjects moved the right hand and saw an unexpected response, the left virtual avatar hand moving, neural activation increased in the motor cortex ipsilateral to the moving hand. This visual modulation might provide a helpful rehabilitation therapy for people with paralysis of a limb through visual augmentation of skills. A model was developed to study the effects of sensorimotor experience in virtual environments, yielding findings on the effect of such experience upon brain activity and related behavioral measures. The research model represents a significant contribution to neuroscience research and translational engineering practice. A model of neural activations correlated with kinematics and behavior can profoundly influence the delivery of rehabilitative services in the coming years by giving clinicians a framework for engaging patients in a sensorimotor environment that can optimally facilitate neural reorganization.

    Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems

    The European research project DESERVE (DEvelopment platform for Safe and Efficient dRiVE, 2012-2015) aimed to design and develop a platform tool to cope with the continuously increasing complexity, and the simultaneous need to reduce the cost, of future embedded Advanced Driver Assistance Systems (ADAS). For this purpose, the DESERVE platform profits from cross-domain software reuse, standardization of automotive software component interfaces, and easy but safety-compliant integration of heterogeneous modules. This enables the development of a new generation of ADAS applications that combine different functions, sensors, actuators, hardware platforms, and Human Machine Interfaces (HMI) in challenging ways. This book presents the results of the DESERVE project concerning the ADAS development platform, test-case functions, and the validation and evaluation of different approaches. The reader is invited to substantiate the content of this book with the deliverables published during the DESERVE project. Technical topics discussed in this book include: modern ADAS development platforms; design space exploration; driving modelling; video-based and radar-based ADAS functions; HMI for ADAS; and vehicle-hardware-in-the-loop validation systems.