11 research outputs found

    Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain

    In this paper we tackle the problem of visually predicting surface friction for environments with diverse surfaces, and of integrating this knowledge into biped robot locomotion planning. The problem is essential for autonomous locomotion because surfaces with widely varying friction abound in the real world, from wood and ceramic tiles to grass or ice, and can cause slips or large energy costs if they are not accounted for. We propose to estimate friction and its uncertainty from visual material classification using convolutional neural networks, combined with a probability distribution of friction associated with each material class. We then robustly integrate the friction predictions into a hierarchical (footstep and full-body) planning method using chance constraints, and optimize the same trajectory costs at both levels of the planner for consistency. Our solution achieves fully autonomous perception and locomotion on slippery terrain, accounting not only for friction and its uncertainty but also for collision, stability, and trajectory cost. We show promising friction prediction results on real pictures of outdoor scenes, and planning experiments on a real robot walking over surfaces with different friction.
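The idea of combining per-material friction distributions with CNN class probabilities, then planning against a chance constraint, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the material list, friction statistics, and the Gaussian approximation with a fixed one-sided quantile are all assumptions.

```python
import numpy as np

# Hypothetical per-material friction models: (mean, std) of the friction
# coefficient mu. Values are illustrative, not taken from the paper.
MATERIALS = {
    "wood":    (0.9, 0.10),
    "ceramic": (0.6, 0.15),
    "grass":   (0.5, 0.20),
    "ice":     (0.1, 0.05),
}

def friction_distribution(class_probs):
    """Combine CNN material probabilities with per-material friction
    statistics into one mean/std by moment-matching the mixture."""
    means = np.array([m for m, _ in MATERIALS.values()])
    stds = np.array([s for _, s in MATERIALS.values()])
    p = np.asarray(class_probs, dtype=float)
    mean = np.sum(p * means)
    # Total variance = expected within-class variance + variance of the means.
    var = np.sum(p * (stds**2 + means**2)) - mean**2
    return mean, np.sqrt(var)

def chance_constrained_mu(class_probs, z=1.645):
    """Conservative friction bound mu_lo with P(mu < mu_lo) <= delta under a
    Gaussian approximation; z = 1.645 corresponds to delta = 0.05."""
    mean, std = friction_distribution(class_probs)
    return max(mean - z * std, 0.0)  # friction cannot be negative

# Probabilities are ordered as in MATERIALS: wood, ceramic, grass, ice.
# A confident "ice" prediction yields a much lower usable friction bound
# than a confident "wood" prediction, so the planner takes shorter steps.
ice_like = chance_constrained_mu([0.05, 0.05, 0.05, 0.85])
wood_like = chance_constrained_mu([0.85, 0.05, 0.05, 0.05])
```

A footstep planner would then only accept steps whose required friction stays below this conservative bound.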

    Using Machine Learning to Classify Materials from Thermal Inspection Results

    This work presents a thermal method for identifying the material an object is made of. It describes a system that classifies objects of different materials in real time using laser thermography and a machine-learning classifier. The results demonstrate that high accuracy in automated material identification can be achieved even when the dataset consists of temperature profiles acquired under different inspection conditions.
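Classifying materials from temperature profiles can be illustrated with a toy example: after a laser pulse, surface temperature decays at a material-dependent rate, and even a simple nearest-centroid classifier separates the resulting curves. The material names, decay constants, exponential model, and classifier choice here are all illustrative assumptions, not the system described in the work.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)  # seconds after the (hypothetical) laser pulse

# Illustrative cooling-rate constants per material: faster decay roughly
# corresponds to higher thermal diffusivity.
DECAY = {"steel": 1.8, "wood": 0.8, "plastic": 0.4}

def make_profile(material, noise=0.03):
    """Synthetic normalized temperature profile with measurement noise."""
    return np.exp(-DECAY[material] * t) + noise * rng.standard_normal(t.size)

# "Train" a nearest-centroid classifier on a handful of noisy profiles.
centroids = {
    m: np.mean([make_profile(m) for _ in range(20)], axis=0) for m in DECAY
}

def classify(profile):
    """Assign the profile to the material with the closest centroid curve."""
    return min(centroids, key=lambda m: np.linalg.norm(profile - centroids[m]))

pred = classify(make_profile("plastic"))
```

In the real system a stronger classifier would be trained on measured profiles, but the principle is the same: the temporal shape of the cooling curve carries the material signature.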

    A Holistic Approach to Human-Supervised Humanoid Robot Operations in Extreme Environments

    Nuclear energy will play a critical role in meeting clean energy targets worldwide. However, nuclear environments are dangerous for humans to operate in due to the presence of highly radioactive materials. Robots can help address this issue by allowing remote access to nuclear and other highly hazardous facilities under human supervision to perform inspection and maintenance tasks during normal operations, help with clean-up missions, and aid in decommissioning. This paper presents our research to help realize humanoid robots in supervisory roles in nuclear environments. Our research focuses on the National Aeronautics and Space Administration's (NASA's) humanoid robot, Valkyrie, in the areas of constrained manipulation and motion planning, increasing stability using support contact, dynamic non-prehensile manipulation, locomotion on deformable terrains, and human-in-the-loop control interfaces.

    Mathematical Modeling of a Biped Robot with Dynamic Balance

    This project presents the development of a configurable mathematical model of a biped robot, capable of emulating commercial robots as well as other anthropomorphic designs, for testing gait controllers in a virtual environment. The model includes sub-systems for configuring special viscoelastic contact conditions between the robot's feet and the ground; dynamic disturbance forces are counteracted by the robot's balance controller, which maintains an erect posture throughout the gait phases. The model is tested under double-support, single-support, and walking conditions to study its response to both external and internal disturbances.
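A viscoelastic foot-ground contact of the kind this model configures is commonly expressed as a Kelvin-Voigt spring-damper on the penetration depth. The following sketch shows the idea; the stiffness and damping values are illustrative placeholders, not parameters from the thesis.

```python
def contact_force(penetration, penetration_rate, k=5.0e4, c=2.0e2):
    """Viscoelastic (Kelvin-Voigt) normal contact force between foot and ground.

    penetration:      depth the foot has sunk into the ground (m, > 0 in contact)
    penetration_rate: time derivative of the penetration (m/s)
    k, c:             illustrative spring stiffness (N/m) and damping (N*s/m)
    """
    if penetration <= 0.0:
        return 0.0  # foot is off the ground: no contact force
    f = k * penetration + c * penetration_rate
    return max(f, 0.0)  # the ground can only push, never pull
```

For example, a 1 mm static penetration yields `k * 0.001 = 50 N`, while a foot rebounding quickly enough produces zero force rather than an unphysical pulling force.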

    Bio-Inspired Robotics

    Modern robotic technologies have enabled robots to operate not only in traditional structured environments but also in a variety of unstructured and dynamically changing ones. Robots have thus become an important element of our everyday lives. One key approach to developing such intelligent and autonomous robots is to draw inspiration from biological systems. Biological structures, mechanisms, and underlying principles have the potential to provide new ideas that improve conventional robot design and control. These principles usually originate from animal or even plant models and yield robots that can sense, think, walk, swim, crawl, jump, or even fly. Bio-inspired methods are therefore becoming increasingly important in the face of complex applications. Bio-inspired robotics encompasses the study of innovative structures and of computing with sensory-motor coordination and learning, aiming to achieve intelligence, flexibility, stability, and adaptation for emergent robotic applications such as manipulation, learning, and control. This Special Issue invited original papers presenting innovative ideas and concepts, new discoveries and improvements, and novel applications and business models relevant to the selected topics of "Bio-Inspired Robotics". Bio-inspired robotics is a broad and continually expanding field; this Special Issue collates 30 papers that address some of its important challenges and opportunities.

    Physics-Based Reconstruction and Analysis of Human Motion in Video

    Doctoral dissertation, Department of Computer Science and Engineering, College of Engineering, Seoul National University, February 2021, advised by Jehee Lee. In computer graphics, simulating and analyzing human movement have been interesting research topics since the 1960s. Still, simulating realistic human movement in a 3D virtual world remains a challenging task. Motion capture techniques are generally used: although motion capture guarantees realistic results and high-quality data, it requires a lot of equipment and a complicated process. Recently, 3D human pose estimation from 2D video has developed remarkably, and researchers in computer graphics and computer vision have attempted to reconstruct various human motions from video data. However, existing methods cannot robustly estimate dynamic actions and do not work on videos filmed with a moving camera. In this thesis, we propose methods to reconstruct dynamic human motions from in-the-wild videos and to control those motions. First, we developed a framework to reconstruct motion from videos using prior physics knowledge. For dynamic motions such as a backspin, the poses estimated by a state-of-the-art method are incomplete, with an unreliable root trajectory or missing intermediate poses. We designed a reward function using poses and hints extracted from the video in a deep reinforcement learning controller, and learned a policy that simultaneously reconstructs the motion and controls a virtual character. Second, we simulated figure skating movements from video. Skating sequences consist of fast, dynamic movements on ice, which hinders the acquisition of motion data. We therefore extracted 3D key poses from a video and successfully replicated several figure skating movements using trajectory optimization and a deep reinforcement learning controller. Third, we devised an algorithm for gait analysis from videos of patients with movement disorders.
    After acquiring the patients' joint positions from 2D video processed by a deep learning network, the 3D absolute coordinates were estimated, and gait parameters such as gait velocity, cadence, and step length were calculated. Additionally, we analyzed the optimization criteria of human walking using a 3D musculoskeletal humanoid model and physics-based simulation. For two criteria, namely the minimization of muscle activation and of joint torque, we compared simulation data with real human data. To demonstrate the effectiveness of the first two research topics, we verified the reconstruction of dynamic human motions from 2D videos using physics-based simulations. For the last two research topics, we evaluated our results against real human data.
    Contents: 1 Introduction; 2 Background (pose estimation from 2D video; motion reconstruction from monocular video; physics-based character simulation and control; motion reconstruction leveraging physics; human motion control and figure skating simulation; objective gait analysis; optimization for human movement simulation and stability criteria); 3 Human Dynamics from Monocular Video with Dynamic Camera Movements; 4 Figure Skating Simulation from Video; 5 Gait Analysis Using a Pose Estimation Algorithm with 2D Video of Patients; 6 Control Optimization of Human Walking; 7 Conclusion and Future Work.
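The gait parameters mentioned above (velocity, cadence, step length) follow from simple arithmetic once heel-strike events have been extracted from the estimated joint trajectories. The sketch below assumes a hypothetical event format of (time, forward position) pairs; it is an illustration of the definitions, not the thesis's pipeline.

```python
import numpy as np

def gait_parameters(heel_strikes):
    """Compute gait velocity, cadence, and mean step length from a sequence
    of heel-strike events, each given as (time_s, forward_position_m).
    The event format is an illustrative assumption.
    """
    hs = np.asarray(heel_strikes, dtype=float)
    times, xs = hs[:, 0], hs[:, 1]
    duration = times[-1] - times[0]       # total walking time (s)
    distance = xs[-1] - xs[0]             # total forward distance (m)
    n_steps = len(hs) - 1                 # intervals between successive strikes
    velocity = distance / duration        # m/s
    cadence = 60.0 * n_steps / duration   # steps per minute
    step_length = distance / n_steps      # m per step
    return velocity, cadence, step_length

# Alternating heel strikes roughly 0.6 m and 0.55 s apart.
events = [(0.0, 0.0), (0.55, 0.6), (1.10, 1.2), (1.65, 1.8), (2.20, 2.4)]
v, cad, sl = gait_parameters(events)
```

Here the four steps cover 2.4 m in 2.2 s, giving a velocity of about 1.09 m/s, a cadence of about 109 steps/min, and a step length of 0.6 m.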