6,410 research outputs found

    Automatic Differentiation of Rigid Body Dynamics for Optimal Control and Estimation

    Many algorithms for control, optimization and estimation in robotics depend on derivatives of the underlying system dynamics, e.g. to compute linearizations, sensitivities or gradient directions. However, we show that when dealing with Rigid Body Dynamics, these derivatives are difficult to derive analytically and to implement efficiently. To overcome this issue, we extend the modelling tool `RobCoGen' to be compatible with Automatic Differentiation. Additionally, we propose how to automatically obtain the derivatives and generate highly efficient source code. We highlight the flexibility and performance of the approach in two application examples. First, we show a Trajectory Optimization example for the quadrupedal robot HyQ, which employs auto-differentiation on the dynamics including a contact model. Second, we present a hardware experiment in which a 6 DoF robotic arm avoids a randomly moving obstacle in a go-to task by fast, dynamic replanning.
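    A minimal sketch of the underlying idea, assuming JAX as the autodiff tool and a toy one-joint model (the paper itself extends RobCoGen and generates efficient source code): forward-mode automatic differentiation of the dynamics yields the linearization matrices A and B that optimal control and estimation algorithms need, without hand-derived formulas.

```python
# Minimal sketch: auto-differentiating toy rigid-body dynamics with JAX.
# This is not the paper's RobCoGen/code-generation pipeline; the pendulum
# model, its parameters, and the function names are assumptions.
import jax.numpy as jnp
from jax import jacfwd

m, l, b, g = 1.0, 0.5, 0.1, 9.81  # assumed mass, length, damping, gravity

def dynamics(x, u):
    """Continuous-time state derivative f(x, u) for state x = [q, qdot]."""
    q, qd = x
    qdd = (u[0] - b * qd - m * g * l * jnp.sin(q)) / (m * l ** 2)
    return jnp.array([qd, qdd])

x0 = jnp.array([0.3, 0.0])  # operating point: joint angle and velocity
u0 = jnp.array([0.0])       # applied torque

# Exact derivatives via forward-mode autodiff instead of analytic derivation.
A = jacfwd(dynamics, argnums=0)(x0, u0)  # df/dx, shape (2, 2)
B = jacfwd(dynamics, argnums=1)(x0, u0)  # df/du, shape (2, 1)
print(A, B)
```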

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizing axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Motion planning and control of redundant manipulators for dynamical obstacle avoidance

    This paper presents a framework for the motion planning and control of redundant manipulators with the added task of collision avoidance. The algorithms that were previously studied and tested by the authors for planar cases are here extended to full-mobility redundant manipulators operating in a three-dimensional workspace. The control strategy consists of a combination of off-line path planning algorithms with on-line motion control. The path planning algorithm is used to generate trajectories able to avoid fixed obstacles detected before the robot starts to move; it is based on the potential fields method combined with a smoothing interpolation that exploits Bézier curves. The on-line motion control is designed to compensate for the motion of the obstacles and to avoid collisions along the kinematic chain of the manipulator; it is realized using a velocity control law based on the null space method for redundancy control. Furthermore, an additional term of the control law is introduced which takes into account the speed of the obstacles as well as their position. In order to test the algorithms, a set of simulations is presented: the redundant collaborative robot KUKA LBR iiwa is controlled in different cases where fixed or dynamic obstacles interfere with its motion. The simulated data show that the proposed trajectory smoothing can reduce the angular accelerations of the motors by about 90%, with an increase of less than 15% in computation time. Furthermore, making the on-line control law depend on the speed of the obstacle can reduce the maximum speed and acceleration of the joints by approximately 50% and 80%, respectively, without a significant increase in computational effort, keeping the method compatible with transfer to a real system.
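    A minimal sketch of the null-space redundancy idea described above, assuming a planar 3-DoF arm with unit links and illustrative gains (this is not the authors' tuned control law, which additionally accounts for obstacle velocity): the end-effector task is tracked through the Jacobian pseudoinverse, while a repulsive velocity computed at a point along the kinematic chain is projected into the null space so it does not disturb the primary task.

```python
# Sketch of null-space redundancy resolution for obstacle avoidance.
# Assumed: planar 3R arm with unit links; gains and the repulsion term
# are illustrative only.
import numpy as np

def point_kinematics(q, n_links):
    """Position and 2x3 Jacobian of the point at the end of the first
    n_links unit links of a planar 3R arm."""
    c = np.cumsum(q)[:n_links]
    p = np.array([np.cos(c).sum(), np.sin(c).sum()])
    J = np.zeros((2, 3))
    for i in range(n_links):
        J[0, i] = -np.sin(c[i:]).sum()
        J[1, i] = np.cos(c[i:]).sum()
    return p, J

def control(q, x_des, obstacle, k_task=1.0, k_avoid=0.5):
    x, J = point_kinematics(q, 3)          # end effector
    p_mid, J_mid = point_kinematics(q, 2)  # control point along the chain
    J_pinv = np.linalg.pinv(J)
    # Primary task: drive the end effector toward the goal.
    qd_task = J_pinv @ (k_task * (x_des - x))
    # Secondary task: push the mid-chain point away from the obstacle;
    # the null-space projector keeps the end-effector motion unchanged.
    d = p_mid - obstacle
    qd_avoid = J_mid.T @ (k_avoid * d / (np.linalg.norm(d) ** 2 + 1e-6))
    N = np.eye(3) - J_pinv @ J
    return qd_task + N @ qd_avoid

qd = control(np.array([0.2, 0.4, -0.1]), np.array([1.5, 1.0]), np.array([1.0, 0.5]))
print(qd)
```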

    Incremental Learning of Humanoid Robot Behavior from Natural Interaction and Large Language Models

    Natural-language dialog is key for intuitive human-robot interaction. It can be used not only to express humans' intents, but also to communicate instructions for improvement if a robot does not understand a command correctly. It is therefore important to endow robots with the ability to learn from such interaction experience in an incremental way, allowing them to improve their behaviors or avoid mistakes in the future. In this paper, we propose a system to achieve incremental learning of complex behavior from natural interaction, and demonstrate its implementation on a humanoid robot. Building on recent advances, we present a system that deploys Large Language Models (LLMs) for high-level orchestration of the robot's behavior, based on the idea of enabling the LLM to generate Python statements in an interactive console to invoke both robot perception and action. The interaction loop is closed by feeding back human instructions, environment observations, and execution results to the LLM, thus informing the generation of the next statement. Specifically, we introduce incremental prompt learning, which enables the system to interactively learn from its mistakes. For that purpose, the LLM can call another LLM responsible for code-level improvements of the current interaction based on human feedback. The improved interaction is then saved in the robot's memory and retrieved on similar requests. We integrate the system in the cognitive architecture of the humanoid robot ARMAR-6 and evaluate our methods both quantitatively (in simulation) and qualitatively (in simulation and real-world) by demonstrating generalized incrementally-learned knowledge.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Submitted to the 2023 IEEE/RAS International Conference on Humanoid Robots (Humanoids). Supplementary video available at https://youtu.be/y5O2mRGtsL
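    To make the orchestration idea concrete, here is a rough sketch of an LLM-driven interactive console loop under stated assumptions: query_llm, the skill functions, and the prompt format are hypothetical placeholders, not the ARMAR-6 system or its API.

```python
# Hypothetical sketch of an LLM generating Python statements in an
# interactive console that invokes robot perception and action skills.
import io
import contextlib

def query_llm(history):
    """Placeholder for a chat-completion call; should return one Python
    statement given the interaction history."""
    raise NotImplementedError("plug in an actual LLM client here")

# Minimal stand-in 'robot API' the generated statements may call.
def detect_objects(): return ["cup", "table"]
def move_to(obj): return f"moved to {obj}"
def say(text): return f"said: {text}"
def done(): return "task finished"

NAMESPACE = {"detect_objects": detect_objects, "move_to": move_to,
             "say": say, "done": done}

def interaction_loop(user_request, max_steps=10):
    history = [f"User: {user_request}"]
    for _ in range(max_steps):
        statement = query_llm(history)       # LLM emits one Python statement
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(statement, NAMESPACE)   # run it against the robot API
            result = buf.getvalue() or "ok"
        except Exception as e:               # execution errors are fed back too
            result = f"error: {e}"
        history.append(f">>> {statement}\n{result}")  # close the loop
        if statement.strip().startswith("done"):
            break
    return history
```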

    CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks

    General-purpose robots coexisting with humans in their environment must learn to relate human language to their perceptions and actions to be useful in a range of daily tasks. Moreover, they need to acquire a diverse repertoire of general-purpose skills that allow composing long-horizon tasks by following unconstrained language instructions. In this paper, we present CALVIN (Composing Actions from Language and Vision), an open-source simulated benchmark for learning long-horizon language-conditioned tasks. Our aim is to make it possible to develop agents that can solve many robotic manipulation tasks over a long horizon, from onboard sensors, and specified only via human language. CALVIN tasks are more complex in terms of sequence length, action space, and language than those in existing vision-and-language task datasets, and the benchmark supports flexible specification of sensor suites. We evaluate the agents zero-shot on novel language instructions and on novel environments and objects. We show that a baseline model based on multi-context imitation learning performs poorly on CALVIN, suggesting that there is significant room for developing innovative agents that learn to relate human language to their world models with this benchmark.
    Comment: Accepted for publication at IEEE Robotics and Automation Letters (RAL). Code, models and dataset available at http://calvin.cs.uni-freiburg.d
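    As a rough illustration of what a language-conditioned policy for such a benchmark looks like, the sketch below is a generic behavior-cloning model that fuses observation and instruction embeddings to predict actions; it is not the multi-context imitation learning baseline from the paper, and all dimensions are assumed.

```python
# Generic language-conditioned behavior-cloning policy (illustrative only;
# not the CALVIN baseline). Feature and action dimensions are assumptions.
import torch
import torch.nn as nn

class LangConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=64, lang_dim=32, act_dim=7):
        super().__init__()
        self.obs_enc = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.lang_enc = nn.Sequential(nn.Linear(lang_dim, 128), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                  nn.Linear(128, act_dim))

    def forward(self, obs, lang):
        # Fuse observation and instruction embeddings, then predict an action.
        fused = torch.cat([self.obs_enc(obs), self.lang_enc(lang)], dim=-1)
        return self.head(fused)

# One behavior-cloning step on a stand-in batch of demonstrations.
policy = LangConditionedPolicy()
obs, lang, act = torch.randn(16, 64), torch.randn(16, 32), torch.randn(16, 7)
loss = nn.functional.mse_loss(policy(obs, lang), act)
loss.backward()
```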