
    Sample Efficient Optimization for Learning Controllers for Bipedal Locomotion

    Learning policies for bipedal locomotion can be difficult, as experiments are expensive and simulation does not usually transfer well to hardware. To counter this, we need algorithms that are sample efficient and inherently safe. Bayesian Optimization is a powerful sample-efficient tool for optimizing non-convex black-box functions. However, its performance can degrade in higher dimensions. We develop a distance metric for bipedal locomotion that enhances the sample efficiency of Bayesian Optimization and use it to train a 16-dimensional neuromuscular model for planar walking. This distance metric reflects some basic gait features of healthy walking and helps us quickly eliminate a majority of unstable controllers. With our approach we can learn policies for walking in fewer than 100 trials across a range of challenging settings. In simulation, we show results on two different costs and on various terrains, including rough ground and ramps sloping upwards and downwards. We also perturb our models with unknown inertial disturbances analogous to differences between simulation and hardware. These results are promising, as they indicate that this method can potentially be used to learn control policies on hardware.
    Comment: To appear in International Conference on Humanoid Robots (Humanoids '2016), IEEE-RAS. (Rika Antonova and Akshara Rai contributed equally.)
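    The core loop the abstract describes — a Gaussian process surrogate whose kernel measures distance in a space of gait features rather than raw controller parameters — can be sketched as below. This is a minimal illustration under stated assumptions: the feature map `phi`, the random candidate pool, and the lower-confidence-bound acquisition are placeholders, not the paper's actual distance metric, neuromuscular model, or acquisition function.

    ```python
    import numpy as np

    def phi(x):
        # Hypothetical feature map standing in for the paper's gait features;
        # the real metric compares walking behaviors, not raw parameters.
        return np.array([np.sin(x).sum(), np.cos(x).sum()])

    def kernel(X1, X2, length=1.0):
        # Squared-exponential kernel over distances in feature space.
        F1 = np.array([phi(x) for x in X1])
        F2 = np.array([phi(x) for x in X2])
        d = np.linalg.norm(F1[:, None, :] - F2[None, :, :], axis=-1)
        return np.exp(-0.5 * (d / length) ** 2)

    def bayes_opt(cost, dim, n_init=5, n_iter=20, n_cand=200, seed=0):
        """Minimize a black-box cost with a GP surrogate and LCB acquisition."""
        rng = np.random.default_rng(seed)
        X = list(rng.uniform(-1.0, 1.0, size=(n_init, dim)))
        y = [cost(x) for x in X]
        for _ in range(n_iter):
            K = kernel(X, X) + 1e-6 * np.eye(len(X))   # jitter for stability
            Kinv = np.linalg.inv(K)
            cand = rng.uniform(-1.0, 1.0, size=(n_cand, dim))
            ks = kernel(cand, X)
            mu = ks @ Kinv @ np.array(y)                # posterior mean
            var = np.clip(1.0 - np.sum((ks @ Kinv) * ks, axis=1), 1e-9, None)
            lcb = mu - 2.0 * np.sqrt(var)               # optimistic estimate
            x_next = cand[np.argmin(lcb)]               # most promising trial
            X.append(x_next)
            y.append(cost(x_next))
        best = int(np.argmin(y))
        return X[best], y[best]
    ```

    Replacing the Euclidean distance between parameter vectors with a distance between behavior features is what lets such a loop discard most unstable controllers early: controllers that walk similarly get correlated cost predictions even when their parameters differ.
    
    
    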

    Discovering user mobility and activity in smart lighting environments

    "Smart lighting" environments seek to improve energy efficiency, human productivity and health by combining sensors, controls, and Internet-enabled lights with emerging “Internet-of-Things” technology. Interesting and potentially impactful applications involve adaptive lighting that responds to individual occupants' location, mobility and activity. In this dissertation, we focus on the recognition of user mobility and activity using sensing modalities and analytical techniques. This dissertation encompasses prior work using body-worn inertial sensors in one study, followed by smart-lighting inspired infrastructure sensors deployed with lights. The first approach employs wearable inertial sensors and body area networks that monitor human activities with a user's smart devices. Real-time algorithms are developed to (1) estimate angles of excess forward lean to prevent risk of falls, (2) identify functional activities, including postures, locomotion, and transitions, and (3) capture gait parameters. Two human activity datasets are collected from 10 healthy young adults and 297 elder subjects, respectively, for laboratory validation and real-world evaluation. Results show that these algorithms can identify all functional activities accurately with a sensitivity of 98.96% on the 10-subject dataset, and can detect walking activities and gait parameters consistently with high test-retest reliability (p-value < 0.001) on the 297-subject dataset. The second approach leverages pervasive "smart lighting" infrastructure to track human location and predict activities. A use case oriented design methodology is considered to guide the design of sensor operation parameters for localization performance metrics from a system perspective. Integrating a network of low-resolution time-of-flight sensors in ceiling fixtures, a recursive 3D location estimation formulation is established that links a physical indoor space to an analytical simulation framework. 
Based on indoor location information, a label-free, clustering-based method is developed to learn user behaviors and activity patterns. Location datasets are collected while users perform unconstrained and uninstructed activities in the smart lighting testbed under different layout configurations. Results show that activity recognition performance, measured in terms of correct classification rate (CCR), ranges from approximately 90% to 100% across a wide range of spatio-temporal resolutions on these location datasets, and is insensitive to reconfiguration of the environment layout and the presence of multiple users.
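The label-free idea — recurring occupant locations form spatial clusters that can be read as activity zones — can be sketched with a plain k-means pass over (x, y) position estimates. This is an illustrative sketch only: the synthetic blob data and the k-means choice are assumptions, not the dissertation's method, testbed geometry, or CCR evaluation.

```python
import numpy as np

def kmeans(points, k=3, n_iter=50, seed=0):
    """Cluster (x, y) location estimates without labels (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from observed locations.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        # Assign each location to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned locations.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```

Each resulting cluster center is a candidate "activity zone" (e.g. a desk or doorway), and dwell times per cluster give a simple, unsupervised activity pattern — the same shape of analysis the abstract describes, minus the ceiling-mounted time-of-flight sensing pipeline.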

    Mo2Cap2: Real-time Mobile 3D Motion Capture with a Cap-mounted Fisheye Camera

    We propose the first real-time approach for the egocentric estimation of 3D human body pose in a wide range of unconstrained everyday activities. This setting has a unique set of challenges, such as mobility of the hardware setup and robustness to long capture sessions with fast recovery from tracking failures. We tackle these challenges with a novel lightweight setup that converts a standard baseball cap into a device for high-quality pose estimation based on a single cap-mounted fisheye camera. From the captured egocentric live stream, our CNN-based 3D pose estimation approach runs at 60 Hz on a consumer-level GPU. In addition to the novel hardware setup, our other main contributions are: 1) a large ground truth training corpus of top-down fisheye images and 2) a novel disentangled 3D pose estimation approach that takes the unique properties of the egocentric viewpoint into account. As shown by our evaluation, we achieve lower 3D joint error as well as better 2D overlay than the existing baselines.

    Fusion of virtual reality and brain-machine interfaces for the assessment and rehabilitation of patients with spinal cord injury

    This thesis focuses on the use of new technologies (Brain-Machine Interfaces and Virtual Reality). The first part of the thesis describes the definition and application of a set of metrics to assess the functional state of patients with spinal cord injury in the context of a virtual reality system for upper-limb rehabilitation. The aim of this first study is to show that virtual reality, combined with inertial sensors, can be used to rehabilitate and assess simultaneously. 15 patients with spinal cord injury completed 3 sessions with the Toyra virtual reality system, and the defined set of metrics was applied to the recordings obtained from the inertial sensors. Correlations were found between some of the defined metrics and several clinical scales frequently used in the context of rehabilitation. In the second part of the thesis, virtual feedback has been combined with a Functional Electrical Stimulator (FES), both controlled by a Brain-Machine Interface (BMI), to develop a new kind of therapeutic approach for these patients. The system was used by 4 patients with spinal cord injury who attempted to move their hands. This intention simultaneously triggered the FES and the virtual feedback, closing the patients' hands and giving them an additional source of feedback to complement the therapy. According to the reviewed state of the art, this work is the first to integrate BMI, FES, and virtual reality as a therapy for patients with spinal cord injury. Promising clinical results were obtained by the 4 patients after completing 5 therapy sessions with the system, showing good accuracy levels across the sessions (79.13% on average).
    In the third part of the thesis, a new metric has been defined to study changes in brain connectivity in patients with spinal cord injury, incorporating information about neural interactions between different areas. The goal of this study has been to extract clinically relevant information from EEG activity during BMI-based therapies.