54 research outputs found

    Multi-contact Planning on Humans for Physical Assistance by Humanoid

    For robots to interact with humans safely and efficiently in close proximity, a specialized method to compute whole-body robot posture and plan contact locations is required. In our work, a humanoid robot acts as a caregiver performing a physical assistance task. We propose a method for formulating and initializing a non-linear optimization posture-generation problem from an intuitive description of the assistance task and the result of processing a human point cloud. The proposed method plans whole-body posture and contact locations on a task-specific surface of the human body, subject to robot equilibrium, friction cone, torque/joint limit, collision avoidance, and assistance-task-inherent constraints. The framework uniformly handles any arbitrary surface generated from point clouds, autonomously planning the contact locations and interaction forces on potentially moving, movable, and deformable surfaces, which occur in direct physical human-robot interaction. We conclude the paper with examples of posture generation for physical human-robot interaction scenarios.
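The abstract above frames posture generation as a constrained non-linear program. As a minimal illustrative sketch (not the paper's actual formulation), a toy 2-link planar arm can be posed the same way: minimize deviation from a nominal posture subject to an equality contact constraint and joint-limit bounds. Link lengths, the nominal posture, and the contact target below are all hypothetical.

```python
# Toy posture generation as a nonlinear program: a 2-link planar arm must
# touch a contact point while staying near a nominal posture and within
# joint limits. (Illustrative stand-in for the whole-body problem.)
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 1.0                 # hypothetical link lengths
q_nominal = np.array([0.3, 0.3])  # hypothetical nominal joint angles
target = np.array([1.2, 0.8])     # hypothetical contact location

def end_effector(q):
    """Forward kinematics of the planar 2-link arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

res = minimize(
    lambda q: np.sum((q - q_nominal) ** 2),        # stay close to nominal posture
    x0=q_nominal,
    constraints=[{"type": "eq",                     # contact: reach the target point
                  "fun": lambda q: end_effector(q) - target}],
    bounds=[(-np.pi, np.pi)] * 2,                   # joint limits
)
print(res.success, np.round(end_effector(res.x), 3))
```

The full problem in the paper adds equilibrium, friction cone, torque, and collision constraints over many more decision variables, but the structure (objective plus hard constraints handed to an NLP solver) is the same.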

    Prospective Quantitative Neuroimaging Analysis of Putative Temporal Lobe Epilepsy

    Purpose: A prospective study of individual and combined quantitative imaging applications for lateralizing epileptogenicity was performed in a cohort of consecutive patients with a putative diagnosis of mesial temporal lobe epilepsy (mTLE). Methods: Quantitative metrics were applied to MRI and nuclear medicine imaging studies as part of a comprehensive presurgical investigation. The neuroimaging analytics were conducted remotely to remove bias. All quantitative lateralizing tools were trained using a separate dataset. Outcomes were determined after 2 years. Of those treated, some underwent resection, and others were implanted with a responsive neurostimulation (RNS) device. Results: Forty-eight consecutive cases underwent evaluation using nine attributes of individual or combinations of neuroimaging modalities: 1) hippocampal volume, 2) FLAIR signal, 3) PET profile, 4) multistructural analysis (MSA), 5) multimodal model analysis (MMM), 6) DTI uncertainty analysis, 7) DTI connectivity, and 9) fMRI connectivity. Of the 24 patients undergoing resection, MSA, MMM, and PET proved most effective in predicting an Engel class 1 outcome (>80% accuracy). Hippocampal volume and FLAIR signal analysis showed 76% and 69% concordance with an Engel class 1 outcome, respectively. Conclusion: Quantitative multimodal neuroimaging in the context of putative mTLE aids in declaring laterality. The degree of disagreement among the various quantitative neuroimaging metrics determines whether epileptogenicity can be confined sufficiently to a particular temporal lobe to warrant further study and choice of therapy. Prediction models will improve with continued exploration of combined optimal neuroimaging metrics.

    A perspective review on integrating VR/AR with haptics into STEM education for multi-sensory learning

    As a result of several governments closing educational facilities in reaction to the COVID-19 pandemic in 2020, almost 80% of the world’s students were out of school for several weeks. Schools and universities are thus increasing their efforts to leverage educational resources and provide possibilities for remote learning. A variety of educational programs, platforms, and technologies are now accessible to support student learning; while these tools are important for society, they are primarily concerned with the dissemination of theoretical material, and there is a lack of support for hands-on laboratory work and practical experience. This is particularly important for all disciplines related to science, technology, engineering, and mathematics (STEM), where labs and pedagogical assets must be continuously enhanced in order to provide effective study programs. In this study, we describe a unique perspective on achieving multi-sensory learning through the integration of virtual and augmented reality (VR/AR) with haptic wearables in STEM education. We address the implications of this novel viewpoint on established pedagogical notions, and we want to encourage worldwide efforts to make fully immersive, open, and remote laboratory learning a reality. Funded by the European Union through the Erasmus+ Program under Grant 2020-1-NO01-KA203-076540, project title Integrating virtual and AUGMENTED reality with WEARable technology into engineering EDUcation (AugmentedWearEdu), https://augmentedwearedu.uia.no/ [34] (accessed on 27 March 2022). This work was also supported by the Top Research Centre Mechatronics (TRCM), University of Agder (UiA), Norway.

    Bridging Vision and Dynamic Legged Locomotion

    Legged robots have demonstrated remarkable advances in robustness and versatility over the past decades. The questions that need to be addressed in this field increasingly concern reasoning about the environment and autonomy rather than locomotion alone. To answer some of these questions, visual information is essential. If a robot has information about the terrain, it can plan and take preventive actions against potential risks. However, building a model of the terrain is often computationally costly, mainly because of the dense nature of visual data. On top of the mapping problem, robots need feasible body trajectories and contact sequences to traverse the terrain safely, which may also require heavy computation. This computational cost has limited the use of visual feedback to contexts that guarantee (quasi-)static stability, or to planning schemes where contact sequences and body trajectories are computed before motion execution begins. In this thesis we propose a set of algorithms that reduce the gap between visual processing and dynamic locomotion. We use machine learning to speed up visual data processing and model predictive control to achieve locomotion robustness. In particular, we devise a novel foothold adaptation strategy that uses a map of the terrain built from on-board vision sensors. This map is sent to a foothold classifier based on a convolutional neural network that allows the robot to adjust the landing position of the feet in a fast and continuous fashion. We then use the convolutional neural network-based classifier to provide safe future contact sequences to a model predictive controller that optimizes target ground reaction forces in order to track a desired center of mass trajectory. We perform simulations and experiments on the hydraulic quadruped robots HyQ and HyQReal. For all experiments, the contact sequences, foothold adaptations, control inputs, and map are computed and processed entirely on-board. The various tests show that the robot is able to leverage the visual terrain information to handle complex scenarios in a safe, robust, and reliable manner.
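The foothold adaptation idea above can be sketched without the learned component: given a local heightmap, score candidate landing cells and shift the foot away from risky terrain. The thesis uses a trained CNN classifier; the hand-written roughness heuristic, map values, and weights below are purely illustrative stand-ins for that learned scoring.

```python
# Illustrative stand-in for the CNN foothold classifier: score candidate
# landing cells of a local heightmap by terrain roughness plus distance
# from the nominal foothold, then pick the safest cell.
import numpy as np

rng = np.random.default_rng(0)
heightmap = rng.normal(0.0, 0.02, (15, 15))  # hypothetical local terrain patch (m)
heightmap[7, 7] += 0.3                       # an obstacle at the nominal foothold

def roughness(hm, r, c):
    """Local height variation in the 3x3 neighborhood of cell (r, c)."""
    patch = hm[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return patch.max() - patch.min()

nominal = (7, 7)
candidates = [(r, c) for r in range(5, 10) for c in range(5, 10)]

# Trade off terrain safety against deviation from the nominal foothold.
best = min(candidates,
           key=lambda rc: roughness(heightmap, *rc)
                          + 0.01 * np.hypot(rc[0] - nominal[0], rc[1] - nominal[1]))
print(best)
```

In the thesis this evaluation runs continuously on-board, with the CNN providing the score and the selected footholds feeding the model predictive controller as future contact candidates.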

    Dynamic Simulation and Neuromechanical Coordination of Subject-Specific Balance Recovery to Prevent Falls

    Falls are the leading cause of fatal and nonfatal injuries in elderly people, resulting in approximately $31 billion in medical costs annually in the U.S. These injuries motivate balance control studies focused on improving stability by identifying prevention strategies that reduce the number of fall events. Experiments provide data about subjects’ kinematic responses to loss of balance. However, simulations offer additional insights and may be used to predict the functional outcomes of interventions. Several approaches already exist in biomechanics research to generate accurate models on a subject-by-subject basis. However, these representations typically lack models of the central nervous system, which provides essential feedback that humans use to make decisions and alter movements. Interdisciplinary methods that merge biomechanics with other fields of study may fill this gap by developing models that accurately reflect human neuromechanics. Roboticists have developed control systems approaches for humanoid robots that simultaneously accomplish complex goals by coordinating component tasks under priority constraints. Concepts such as the zero-moment point and the extrapolated center of mass have been thoroughly evaluated and are commonly used in the design and execution of dynamic robotic systems to maintain stability. These established techniques can benefit biomechanical simulations by replacing biological sensory feedback that is unavailable in the virtual environment. Subject-specific simulations can be generated by synthesizing techniques from both robotics and biomechanics and by creating comprehensive models of task-level coordination, including neurofeedback, of movement patterns from experimental data. In this work, we demonstrate how models built on robotic principles that emulate decision making in response to feedback can be trained on biomechanical motion capture data to produce a subject-specific fit. The resulting surrogate can predict a subject’s particular solution to accomplishing the movement goal of recovering balance by controlling component tasks. This research advances biomechanics simulation as we move closer towards a tool capable of anticipating the results of rehabilitation interventions aimed at correcting movement disorders. The novel platform presented here marks the first step towards that goal and may benefit engineers, researchers, and clinicians interested in balance control and falls in human subjects.
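One of the robotics concepts the abstract names, the extrapolated center of mass (XCoM), has a compact standard form: the CoM position is extrapolated by its velocity scaled with the inverted-pendulum eigenfrequency, and balance requires the XCoM to lie inside the base of support. The sketch below uses that standard formula with illustrative numbers, not the authors' subject data or model.

```python
# Extrapolated center of mass (XCoM) balance check, 1-D sagittal case:
#   XCoM = x_com + v_com / omega0,  omega0 = sqrt(g / l)
# Balance is maintained while the XCoM stays inside the base of support.
import math

g = 9.81            # gravity, m/s^2
leg_length = 0.9    # hypothetical pendulum length, m
omega0 = math.sqrt(g / leg_length)

def xcom(com_pos, com_vel):
    """Velocity-extrapolated CoM position (m)."""
    return com_pos + com_vel / omega0

base_of_support = (0.0, 0.25)          # heel-to-toe extent, m (illustrative)

x = xcom(com_pos=0.10, com_vel=0.20)   # forward CoM motion
stable = base_of_support[0] <= x <= base_of_support[1]
print(round(x, 3), stable)
```

In the simulation framework described above, checks of this kind stand in for the biological sensory feedback that is unavailable in the virtual environment.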

    Dynamic interactions between anterior insula and anterior cingulate cortex link perceptual features and heart rate variability during movie viewing

    The dynamic integration of sensory and bodily signals is central to adaptive behaviour. Although the anterior cingulate cortex (ACC) and the anterior insular cortex (AIC) play key roles in this process, their context-dependent dynamic interactions remain unclear. Here, we studied the spectral features and interplay of these two brain regions using high-fidelity intracranial-EEG recordings from five patients (ACC: 13 contacts, AIC: 14 contacts) acquired during movie viewing, with validation analyses performed on an independent resting intracranial-EEG dataset. The ACC and AIC both showed a power peak and positive functional connectivity in the gamma (30–35 Hz) band, while this power peak was absent in the resting data. We then used a neurobiologically informed computational model of dynamic effective connectivity, asking how it was linked to the movie’s perceptual (visual, audio) features and the viewer’s heart rate variability (HRV). Exteroceptive features were related to the effective connectivity of the ACC, highlighting its crucial role in processing ongoing sensory information. AIC connectivity was related to HRV and audio, emphasising its core role in dynamically linking sensory and bodily signals. Our findings provide new evidence for complementary, yet dissociable, roles of neural dynamics between the ACC and the AIC in supporting brain-body interactions during an emotional experience.
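The gamma-band power peak described above is the kind of feature a standard spectral estimate exposes. As a hedged illustration, the sketch below detects a 30–35 Hz peak via Welch's method on a synthetic signal; the sampling rate, signal, and band edges are assumptions for the toy, not the study's intracranial-EEG pipeline.

```python
# Find a gamma-band (30-35 Hz) power peak with Welch's method on a
# synthetic signal: a 32 Hz sinusoid buried in white noise.
import numpy as np
from scipy.signal import welch

fs = 500                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 32 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(sig, fs=fs, nperseg=1024)   # power spectral density
band = (freqs >= 30) & (freqs <= 35)           # gamma band of interest
peak_freq = freqs[band][np.argmax(psd[band])]  # frequency of the band's power peak
print(round(peak_freq, 1))
```

A real analysis would compare such band power between movie-viewing and resting recordings per contact, which is what makes the presence or absence of the peak across conditions informative.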