An Architecture for Online Affordance-based Perception and Whole-body Planning
The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.
Natural user interfaces for interdisciplinary design review using the Microsoft Kinect
As markets demand engineered products faster, waiting on the cyclical design processes of the past is not an option. Instead, industry is turning to concurrent design and interdisciplinary teams. When these teams collaborate, engineering CAD tools play a vital role in conceptualizing and validating designs. These tools require significant user investment to master, due to challenging interfaces and an overabundance of features. These challenges often prohibit team members from using these tools for exploring designs. This work presents a method allowing users to interact with a design using intuitive gestures and head tracking, all while keeping the model in a CAD format. Specifically, Siemens' Teamcenter® Lifecycle Visualization Mockup (Mockup) was used to display design geometry while modifications were made through a set of gestures captured by a Microsoft Kinect™ in real time. This proof of concept program allowed a user to rotate the scene, activate Mockup's immersive menu, move the immersive wand, and manipulate the view based on head position.
This work also evaluates gesture usability and task completion time for this proof-of-concept system. A cognitive-model evaluation method was used to test the premise that gesture-based user interfaces are easier to use and learn, in terms of time, than a traditional mouse-and-keyboard interface. Using a cognitive-model analysis tool allowed rapid testing of interaction concepts without the significant overhead of user studies and full development cycles. The analysis demonstrated that the Kinect™ is a feasible interaction mode for CAD/CAE programs. It also pointed out limitations in the gesture interface's ability to compete, time-wise, with easily accessible customizable menu options.
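The abstract does not name the specific cognitive model used. The Keystroke-Level Model (KLM) is the classic tool for this kind of time-based comparison, and a minimal sketch of how such a model predicts expert task times looks like this. The operator durations are the standard published KLM values; the example operator sequence for a mouse interaction is an illustrative assumption, not taken from the paper.

```python
# Keystroke-Level Model (KLM) sketch: predict expert task-completion time by
# summing standard operator durations (seconds).
KLM = {
    "K": 0.20,  # keystroke (average typist)
    "P": 1.10,  # point with mouse
    "H": 0.40,  # home hands between devices
    "M": 1.35,  # mental preparation
    "B": 0.10,  # mouse button press or release
}

def klm_time(ops):
    """Predicted completion time (seconds) for a sequence of KLM operators."""
    return sum(KLM[op] for op in ops)

# Hypothetical rotate-view task with a mouse: home to mouse (H), mentally
# prepare (M), point at the widget (P), press and release the button (B, B).
mouse_task = klm_time(["H", "M", "P", "B", "B"])
print(round(mouse_task, 2))  # 3.05
```

Comparing such predicted times for a mouse sequence against a gesture sequence is how a cognitive-model analysis can stand in for a full user study.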
Development and Field Testing of the FootFall Planning System for the ATHLETE Robots
The FootFall Planning System is a ground-based planning and decision support system designed to facilitate the control of walking activities for the ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer) family of robots. ATHLETE, developed at NASA's Jet Propulsion Laboratory (JPL), is a large six-legged robot designed to serve multiple roles during manned and unmanned missions to the Moon, including transportation, construction and exploration. From 2006 through 2010 the FootFall Planning System was developed and adapted to two generations of the ATHLETE robots and tested at two analog field sites (the Human Robotic Systems Project's Integrated Field Test at Moses Lake, Washington, June 2008, and the Desert Research and Technology Studies (D-RATS), held at Black Point Lava Flow in Arizona, September 2010). With 42 degrees of kinematic freedom, a maximum standing height of just over 4 meters, and a payload capacity of 450 kg in Earth gravity, the current version of the ATHLETE robot is a uniquely complex system. A central challenge to this work was the compliance of the high-DOF (degree-of-freedom) robot, especially the compliance of the wheels, which affected many aspects of statically stable walking. This paper reviews the history of the development of the FootFall system, sharing design decisions, field-test experiences, and the lessons learned concerning compliance and self-awareness.
Momentum Control with Hierarchical Inverse Dynamics on a Torque-Controlled Humanoid
Hierarchical inverse dynamics based on cascades of quadratic programs have been proposed for the control of legged robots. They have important benefits, but to the best of our knowledge have never been implemented on a torque-controlled humanoid, where model inaccuracies, sensor noise and real-time computation requirements can be problematic. Using a reformulation of existing algorithms, we propose a simplification of the problem that allows us to achieve real-time control. Momentum-based control is integrated in the task hierarchy, and an LQR design approach is used to compute the desired associated closed-loop behavior and improve performance. Extensive experiments on various balancing and tracking tasks show very robust performance in the face of unknown disturbances, even when the humanoid is standing on one foot. Our results demonstrate that hierarchical inverse dynamics together with momentum control can be efficiently used for feedback control under real robot conditions. (Comment: 21 pages, 11 figures, 4 tables; in Autonomous Robots, 2015.)
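The prioritization idea behind cascaded QPs can be illustrated with a stripped-down prioritized least-squares solver: each lower-priority task is solved only within the null space of the tasks above it, so it cannot disturb them. This is a minimal sketch under strong simplifying assumptions (equality tasks only, no inequality constraints, no robot dynamics), not the authors' implementation; the task matrices below are illustrative.

```python
import numpy as np

def hierarchical_solve(tasks):
    """Solve a list of prioritized least-squares tasks (A_i x = b_i).

    Each task is satisfied as well as possible within the null space of all
    higher-priority tasks, via null-space projection. A simplified stand-in
    for the cascaded-QP formulation used in hierarchical inverse dynamics.
    """
    n = tasks[0][0].shape[1]
    x = np.zeros(n)
    N = np.eye(n)  # null-space projector of the tasks solved so far
    for A, b in tasks:
        AN = A @ N
        # Minimum-norm correction that reduces this task's residual
        # without leaving the null space of the higher-priority tasks.
        dz = np.linalg.pinv(AN) @ (b - A @ x)
        x = x + N @ dz
        N = N @ (np.eye(n) - np.linalg.pinv(AN) @ AN)
    return x

# Priority 1: pin the first coordinate to 2. Priority 2: pull all
# coordinates toward 1. The second task cannot override the first.
A1, b1 = np.array([[1.0, 0.0, 0.0]]), np.array([2.0])
A2, b2 = np.eye(3), np.ones(3)
x = hierarchical_solve([(A1, b1), (A2, b2)])
print(x)  # first coordinate stays at 2; the others reach 1
```

Real controllers replace the pseudoinverse solves with constrained QPs so that torque limits, friction cones and contact constraints can be enforced at each priority level.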
Modeling and Animating Human Figures in a CAD Environment
With the widespread acceptance of three-dimensional modeling techniques, high-speed hardware, and relatively low-cost computation, modeling and animating one or more human figures for the purposes of design assessment, human factors, task simulation, and human movement understanding has become feasible outside the animation production house environment. This tutorial will address the state of the art in human figure geometric modeling, figure positioning, figure animation, and task simulation.
Identifying Important Sensory Feedback for Learning Locomotion Skills
Robot motor skills can be learned through deep reinforcement learning (DRL) by neural networks as state-action mappings. While the selection of state observations is crucial, there has been a lack of quantitative analysis to date. Here, we present a systematic saliency analysis that quantitatively evaluates the relative importance of different feedback states for motor skills learned through DRL. Our approach can identify the most essential feedback states for locomotion skills, including balance recovery, trotting, bounding, pacing and galloping. Using only key states, including joint positions, the gravity vector, and base linear and angular velocities, we demonstrate that a simulated quadruped robot can achieve robust performance in various test scenarios across these distinct skills. Benchmarks using task-performance metrics show that locomotion skills learned with key states achieve performance comparable to those learned with all states, and that task performance or learning success rate drops significantly if key states are missing. This work provides quantitative insights into the relationship between state observations and specific types of motor skills, serving as a guideline for robot motor learning. The proposed method is applicable to any differentiable state-action mapping, such as neural-network-based control policies, enabling the learning of a wide range of motor skills with minimal sensing dependencies.
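The saliency idea can be sketched with finite differences: perturb each input state dimension, measure the mean change in the policy output over a batch of visited states, and rank dimensions by that sensitivity. This is a simplified stand-in for the paper's analysis, assuming a differentiable policy; the toy linear policy and its weights are illustrative.

```python
import numpy as np

def state_saliency(policy, states, eps=1e-4):
    """Rank input-state importance by mean absolute finite-difference gradient.

    For each input dimension, perturb it by +/-eps and average the magnitude
    of the resulting change in the policy output over a batch of states.
    """
    n = states.shape[1]
    scores = np.zeros(n)
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        scores[i] = np.mean(
            np.abs(policy(states + d) - policy(states - d)) / (2 * eps)
        )
    return scores

# Toy linear "policy": the action depends strongly on state 0, not at all
# on state 1, and weakly on state 2.
W = np.array([3.0, 0.0, 0.5])
policy = lambda s: s @ W
rng = np.random.default_rng(0)
scores = state_saliency(policy, rng.normal(size=(64, 3)))
print(scores.round(2))  # recovers |W| = [3.0, 0.0, 0.5]
```

For a trained neural-network policy the same ranking can be obtained with automatic differentiation instead of finite differences; dimensions with near-zero scores are candidates for removal from the observation vector.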