A framework for safe human-humanoid coexistence
This work focuses on the development of a safety framework for human-humanoid coexistence, with emphasis on humanoid locomotion. After a brief introduction to the fundamental concepts of humanoid locomotion, the two most common approaches for gait generation are presented and extended with a stability condition that guarantees the boundedness of the generated trajectories. The safety framework is then presented, with the introduction of different safety behaviors meant to enhance the overall level of safety during any robot operation. Proactive behaviors enhance or adapt the current robot operation to reduce the risk of danger, while override behaviors stop the current robot activity in order to react to a particularly dangerous situation. A state machine is defined to control the transitions between the behaviors. The behaviors strictly related to locomotion are then detailed, and an implementation is proposed and validated. A possible implementation of the remaining behaviors is proposed through a review of related works in the literature.
A behavior-based framework for safe deployment of humanoid robots
We present a complete framework for the safe deployment of humanoid robots in environments containing humans. Proceeding from some general guidelines, we propose several safety behaviors, classified into three categories: override, temporary override, and proactive. Activation and deactivation of these behaviors is triggered by information coming from the robot sensors and is handled by a state machine. The implementation of our safety framework is discussed with respect to a reference control architecture. In particular, it is shown that an MPC-based gait generator is well suited to realizing all behaviors related to locomotion. Simulation and experimental results on the HRP-4 and NAO humanoids, respectively, are presented to confirm the effectiveness of the proposed method.
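The abstract describes a state machine that activates and deactivates safety behaviors from sensor information. A minimal sketch of such a supervisor is given below; the mode names, sensor signals (`human_distance`, `collision_imminent`), and distance thresholds are all illustrative assumptions, not the paper's actual design.

```python
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    PROACTIVE = auto()           # e.g., adapt footsteps, slow down
    TEMPORARY_OVERRIDE = auto()  # e.g., pause walking until the area is clear
    OVERRIDE = auto()            # e.g., emergency stop

def next_mode(mode, human_distance, collision_imminent):
    """Pick the next safety mode from (assumed) sensor-derived signals.

    Thresholds are placeholders; a real system would derive them from
    robot speed, sensing range, and safety margins.
    """
    if collision_imminent:
        return Mode.OVERRIDE
    if human_distance < 0.5:
        return Mode.TEMPORARY_OVERRIDE
    if human_distance < 1.5:
        return Mode.PROACTIVE
    return Mode.NOMINAL
```

A real supervisor would also encode hysteresis and legal transition pairs (e.g., leaving OVERRIDE only after an explicit reset), which this sketch omits.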
Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase the independence of
elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and
elderly people and of the advances in technology that will make new uses possible, and provides suggestions for
some of these new applications. The paper also considers the design and other conditions to be met for user
acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the
types of applications and users for which service robots are and are not suitable.
Imprecise dynamic walking with time-projection control
We present a new walking foot-placement controller based on 3LP, a 3D model
of bipedal walking that is composed of three pendulums to simulate falling,
swing and torso dynamics. Taking advantage of linear equations and closed-form
solutions of the 3LP model, our proposed controller projects intermediate
states of the biped back to the beginning of the phase for which a discrete LQR
controller is designed. After the projection, a proper control policy is
generated by this LQR controller and used at the intermediate time. This
control paradigm reacts to disturbances immediately and includes rules to
account for swing dynamics and leg-retraction. We apply it to a simulated Atlas
robot in position-control, always commanded to perform in-place walking. The
stance hip joint in our robot keeps the torso upright to let the robot
naturally fall, and the swing hip joint tracks the desired footstep location.
Combined with simple Center of Pressure (CoP) damping rules in the low-level
controller, our foot-placement controller enables the robot to recover from
strong pushes and produce periodic walking gaits when subject to persistent
sources of disturbance, external or internal. These gaits are imprecise, i.e.,
they emerge from sources of asymmetry rather than from precisely imposing a
desired velocity on the robot. Even in extreme conditions, where the
restrictive linearity assumptions of the 3LP model are often violated, the
system remains robust in our simulations. An extensive analysis of closed-loop
eigenvalues, viable regions, and sensitivity to push timing further
demonstrates the strengths of our simple controller.
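The core idea above, projecting an intermediate state back to the start of the phase through the linear dynamics and then applying a discrete LQR policy designed for that instant, can be sketched on a generic linear system. The matrices below are arbitrary placeholders, not the 3LP pendulum dynamics, and the Riccati solver is a plain fixed-point iteration.

```python
import numpy as np

# Toy 2-state discrete system x[k+1] = A x[k] + B u[k] (placeholder dynamics).
A = np.array([[1.0, 0.1],
              [0.3, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # state cost
R = np.array([[1.0]])    # input cost

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = dlqr_gain(A, B, Q, R)  # gain designed for the start of the phase

def time_projection_control(x_mid, k):
    """Project a mid-phase state k steps back to the phase start through the
    (invertible) linear dynamics, then apply the phase-start LQR policy."""
    x0 = np.linalg.matrix_power(np.linalg.inv(A), k) @ x_mid
    return -K @ x0
```

This captures only the projection-plus-LQR structure; the paper's controller additionally encodes swing dynamics, leg retraction, and footstep semantics on top of the 3LP closed-form solutions.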
Reactive Stepping for Humanoid Robots using Reinforcement Learning: Application to Standing Push Recovery on the Exoskeleton Atalante
State-of-the-art reinforcement learning is now able to learn versatile
locomotion, balancing and push-recovery capabilities for bipedal robots in
simulation. Yet, the reality gap has mostly been overlooked, and the simulated
results rarely transfer to real hardware. Either the approach fails in practice
because the physics is over-simplified and hardware limitations are ignored, or
regularity is not guaranteed and unexpected hazardous motions can occur. This
paper presents a reinforcement learning framework capable of learning robust
standing push recovery for bipedal robots that transfers smoothly to reality,
using only instantaneous proprioceptive observations. By combining original
termination conditions and policy-smoothness conditioning, we achieve stable
learning, sim-to-real transfer, and safety with a policy that uses neither
memory nor an explicit history. Reward engineering is then used to give insight
into how to keep balance. We demonstrate the performance of the method in
reality on the lower-limb medical exoskeleton Atalante.
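The two ingredients the abstract highlights, early-termination conditions and policy-smoothness conditioning, can be illustrated with a generic reward sketch. This is not the paper's formulation: the thresholds, weights, and signal names (`torso_tilt`, `base_height`) are assumptions for illustration only.

```python
import numpy as np

MAX_TILT_RAD = 0.6       # placeholder fall threshold
SMOOTHNESS_WEIGHT = 0.1  # placeholder action-rate penalty weight

def terminated(torso_tilt, base_height, min_height=0.4):
    """End the episode early on a fall: excessive tilt or a collapsed base.

    Early termination prevents the policy from collecting reward in
    unrecoverable states and from learning hazardous recovery motions.
    """
    return abs(torso_tilt) > MAX_TILT_RAD or base_height < min_height

def reward(torso_tilt, action, prev_action):
    """Reward upright balance while penalizing abrupt action changes.

    The action-rate term is one common way to condition a policy toward
    smooth outputs, which matters for sim-to-real transfer on hardware.
    """
    upright = np.exp(-5.0 * torso_tilt**2)
    action_rate = np.sum((np.asarray(action) - np.asarray(prev_action))**2)
    return upright - SMOOTHNESS_WEIGHT * action_rate
```

In practice such terms are tuned jointly with domain randomization and actuator modeling; this sketch only shows the shape of the objective.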
Dialogue management using reinforcement learning
Dialogue has been widely used for verbal communication in human-robot interaction, for example with assistant robots in hospitals. However, such robots are usually limited to predetermined dialogue, making it difficult for them to understand new words for new desired goals. In this paper, we discuss conversation in Indonesian on entertainment, motivation, emergencies, and helping, using a knowledge-growing method. We provide MP3 audio for music, fairy tale, and comedy requests as well as for motivation; the execution time for these requests was 3.74 ms on average. In an emergency situation, the patient is able to ask the robot to call the nurse; the robot records the complaint of pain and informs the nurse. From 7 emergency reports, all complaints were successfully saved in the database. In a helping conversation, the robot walks to pick up the patient's belongings. When the robot does not understand the patient's utterance, it asks until it understands. Through such asking conversations, the knowledge base expanded from 2 to 10 entries, with learning execution times ranging from 1405 ms to 3490 ms. SARSA converged faster to a steady state because of its higher cumulative rewards, and both Q-learning and SARSA achieved the desired goal within 200 episodes. We conclude that the reinforcement learning methods overcome the robot's knowledge limitation in achieving new dialogue goals for patient assistance.
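The abstract compares SARSA and Q-learning; the difference is confined to the update rule, sketched below on a generic tabular problem. The states, actions, and learning parameters are placeholders, abstracting away the dialogue task.

```python
ALPHA, GAMMA = 0.5, 0.9  # placeholder learning rate and discount factor

def sarsa_update(Q, s, a, r, s2, a2):
    """On-policy SARSA: bootstrap from the action a2 actually taken next."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s2, a2)] - Q[(s, a)])

def q_learning_update(Q, s, a, r, s2, actions):
    """Off-policy Q-learning: bootstrap from the greedy next action."""
    best = max(Q[(s2, b)] for b in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])
```

Because SARSA backs up the value of the action the exploratory policy actually selects, its learning curve can differ from Q-learning's even though both converge on simple tasks, consistent with the comparison reported in the abstract.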