
    Converting Biomechanical Models from OpenSim to MuJoCo

    OpenSim is a widely used biomechanics simulator with several anatomically accurate human musculo-skeletal models. While OpenSim provides useful tools for analysing human movement, it is not fast enough to be routinely used for emerging research directions, e.g., learning and simulating motor control through deep neural networks and Reinforcement Learning (RL). We propose a framework for converting OpenSim models to MuJoCo, the de facto simulator in machine learning research, which itself lacks accurate musculo-skeletal human models. We show that with a few simple approximations of anatomical details, an OpenSim model can be automatically converted to a MuJoCo version that runs up to 600 times faster. We also demonstrate an approach to computationally optimize MuJoCo model parameters so that forward simulations of both simulators produce similar results. Comment: Submitted to the 5th International Conference on NeuroRehabilitation (ICNR2020).
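    The parameter-optimization step described above can be illustrated with a toy sketch: treat one simulator's output as a reference trajectory and tune the other model's free parameters to minimize the discrepancy between the two forward simulations. The functions `opensim_forward` and `mujoco_forward` below are hypothetical stand-ins for the real simulators, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def opensim_forward(t):
    # Stand-in reference trajectory (e.g. a joint angle over time).
    return np.sin(2.0 * t) * np.exp(-0.3 * t)

def mujoco_forward(t, params):
    # Stand-in converted model with two tunable parameters.
    freq, damping = params
    return np.sin(freq * t) * np.exp(-damping * t)

t = np.linspace(0.0, 5.0, 200)
reference = opensim_forward(t)

def trajectory_error(params):
    # Mean squared error between the two forward simulations.
    return np.mean((mujoco_forward(t, params) - reference) ** 2)

# Gradient-free search from a nearby initial guess.
result = minimize(trajectory_error, x0=[1.8, 0.2], method="Nelder-Mead")
freq, damping = result.x
```

    In this toy setting the optimizer recovers the reference parameters (freq ≈ 2.0, damping ≈ 0.3); the actual framework would compare full multi-joint simulation outputs instead.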

    Learning Assembly Tasks from Human Demonstrations

    Get PDF
    This thesis presents a method for learning and reproducing assembly tasks using the Learning from Demonstration paradigm and a graph representation of assembly parts and their spatial relations. We show that this graph representation, combined with inexact graph matching techniques, provides a framework capable of learning assembly tasks even with uncertain information about assembly operations. In this thesis our method replicated observed assembly tasks in which Lego Quatro bricks were manipulated with pick-and-place operations. We tested the proposed method through a series of experiments. In the experiments, a robot first observed a human teacher demonstrating an assembly task in front of a Kinect sensor. Then, the robot generated a simulation that depicted the learned assembly product. We also introduced uncertainty into the experiments by changing some of the assembly parts or by not showing the intermediate assembly operations to the robot. In these cases, the robot generated a simulated structure that was similar to the observed one. We used inexact graph matching techniques to measure the similarity between assembly structures. In the experiments our method successfully replicated a learned task when the robot was provided with a complete set of assembly parts. The task was also reproduced relatively well when only one or two assembly parts were replaced with another type of Lego. We conclude that our method provides a convenient platform for a more general assembly method, and that it is capable of "improvising" in unanticipated situations where the robot is supplied with imperfect knowledge of the task.
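    The idea of measuring similarity between assembly graphs can be sketched with a much simpler stand-in measure than the thesis uses: a Jaccard-style overlap over part-type multisets and labelled relation triples. The graph encoding and the similarity function below are illustrative assumptions, not the thesis's actual representation:

```python
from collections import Counter

def graph_similarity(graph_a, graph_b):
    """Jaccard-style overlap over part types and spatial-relation triples.

    A graph is (nodes, edges): nodes maps part id -> part type,
    edges is a list of (id, relation, id) triples.
    """
    (nodes_a, edges_a), (nodes_b, edges_b) = graph_a, graph_b
    types_a, types_b = Counter(nodes_a.values()), Counter(nodes_b.values())
    node_inter = sum((types_a & types_b).values())
    node_union = sum((types_a | types_b).values())
    # Compare relations by the part types they connect, tolerating id changes.
    rels_a = Counter((nodes_a[i], r, nodes_a[j]) for i, r, j in edges_a)
    rels_b = Counter((nodes_b[i], r, nodes_b[j]) for i, r, j in edges_b)
    edge_inter = sum((rels_a & rels_b).values())
    edge_union = sum((rels_a | rels_b).values())
    return (node_inter + edge_inter) / (node_union + edge_union)

# A three-brick tower, and a rebuild with one brick type substituted.
observed = ({1: "2x2", 2: "2x2", 3: "2x4"},
            [(1, "on_top_of", 2), (2, "on_top_of", 3)])
rebuilt = ({1: "2x2", 2: "2x2", 3: "2x2"},
           [(1, "on_top_of", 2), (2, "on_top_of", 3)])
score = graph_similarity(observed, rebuilt)
```

    Substituting one brick lowers the score below 1.0 while keeping it well above zero, which mirrors the thesis's observation that structures remain recognisably similar after small part substitutions.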

    Spatial water table level modelling with multi-sensor unmanned aerial vehicle data in boreal aapa mires

    Peatlands have been degrading globally, which is increasing pressure on restoration measures and monitoring. New monitoring methods are needed because traditional methods are time-consuming, typically lack a spatial aspect, and are sometimes even impossible to execute in practice. Remote sensing has been implemented to monitor hydrological patterns and restoration impacts, but there is a lack of studies that combine multi-sensor ultra-high-resolution data to assess the spatial patterns of hydrology in peatlands. We combine optical, thermal, and topographic unmanned aerial vehicle data to spatially model the water table level (WTL) in unditched open peatlands in northern Finland suffering from adjacent drainage. We predict the WTL with a linear regression model with a moderate fit and accuracy (R2 = 0.69, RMSE = 3.85 cm) and construct maps to assess the spatial success of restoration. We demonstrate that thermal-optical trapezoid-based wetness models and optical bands are strongly correlated with the WTL, whereas topography-based wetness indices are not. We suggest that the developed method could be used for quantitative restoration assessment, but before-after restoration imagery is required to verify our findings.
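    The modelling approach, fitting a linear regression from multi-sensor predictors to field-measured WTL and reporting R2 and RMSE, can be sketched on synthetic data. The feature names and coefficients below are invented for illustration; only the general recipe (ordinary least squares with an intercept, then R2/RMSE) matches the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
# Synthetic stand-ins for UAV predictors: thermal wetness, optical band,
# topographic wetness index (the last deliberately uninformative).
features = rng.normal(size=(n, 3))
true_coef = np.array([4.0, -2.5, 0.0])
wtl = features @ true_coef + 10.0 + rng.normal(scale=2.0, size=n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), features])
coef, *_ = np.linalg.lstsq(X, wtl, rcond=None)
pred = X @ coef

rmse = np.sqrt(np.mean((wtl - pred) ** 2))
r2 = 1.0 - np.sum((wtl - pred) ** 2) / np.sum((wtl - wtl.mean()) ** 2)
```

    On real data the fit statistics would of course come from held-out field measurements rather than the training set, and the predictors would be co-registered raster values at the WTL measurement points.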

    Breathing Life Into Biomechanical User Models

    Forward biomechanical simulation in HCI holds great promise as a tool for evaluation, design, and engineering of user interfaces. Although reinforcement learning (RL) has been used to simulate biomechanics in interaction, prior work has relied on unrealistic assumptions about the control problem involved, which limits the plausibility of emerging policies. These assumptions include direct torque actuation as opposed to muscle-based control; direct, privileged access to the external environment, instead of imperfect sensory observations; and lack of interaction with physical input devices. In this paper, we present a new approach for learning muscle-actuated control policies based on perceptual feedback in interaction tasks with physical input devices. This allows modelling of more realistic interaction tasks with cognitively plausible visuomotor control. We show that our simulated user model successfully learns a variety of tasks representing different interaction methods, and that the model exhibits characteristic movement regularities observed in studies of pointing. We provide an open-source implementation which can be extended with further biomechanical models, perception models, and interactive environments.
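    The key contrast drawn above, control driven by imperfect sensory observations rather than privileged state access, can be illustrated with a deliberately minimal closed loop: a cursor corrects toward a target using only a noisy "perceptual" estimate of the remaining distance. This is a toy stand-in, not the paper's muscle-actuated RL policies:

```python
import random

random.seed(1)

def point_at(target, noise=0.05, gain=0.3, steps=200, tol=0.02):
    """Drive a 1-D cursor to `target` using only noisy distance observations."""
    pos = 0.0
    for step in range(steps):
        # The controller never sees the true error, only a noisy observation.
        observed_error = (target - pos) + random.gauss(0.0, noise)
        pos += gain * observed_error  # proportional corrective submovement
        if abs(target - pos) < tol:
            return step, pos
    return steps, pos

steps_taken, final_pos = point_at(target=1.0)
```

    Even this crude loop shows the qualitative signature of perception-limited pointing: an initial large correction followed by smaller noisy homing submovements, rather than a single perfect jump.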
