34 research outputs found
Methodology for Task Development in Humanoid Robots Using ArmarX Framework
This bachelor thesis presents a guide to taking the first steps in the ArmarX
framework, a software platform that provides a complete robot development
environment. In particular, it shows how to use the humanoid robot TEO of the
RoboticsLab research group at University Carlos III within this software.
To achieve this objective, several stages were followed. First, initial
research on the ArmarX software was performed to learn its main features and
how to use the tool. Then, the 3D model of TEO was adapted for use in this
environment. Once this was complete, a task was created and adapted to the
robot. Finally, the task was exported to the real-world TEO and the results
were compared.
Throughout this process, extensive documentation was produced in pursuit of
the objective mentioned above: the formulation of a guide for future
researchers interested in ArmarX.
BlueSky: Combining Task Planning and Activity-Centric Access Control for Assistive Humanoid Robots
In the not too distant future, assistive humanoid robots will provide versatile assistance for coping with everyday life. In their interactions with humans, not only safety, but also security and privacy issues need to be considered. In this Blue Sky paper, we therefore argue that it is time to bring task planning and execution, a well-established field of robotics, closer together with access and usage control in the field of security and privacy. In particular, the recently proposed activity-based view on access and usage control provides a promising approach to bridge the gap between these two perspectives. We argue that humanoid robots pose specific challenges due to their task-universality and their use in both private and public spaces. Furthermore, they are socially connected to various parties and require policy creation at runtime due to learning. We contribute first attempts on the architecture and enforcement layer as well as on joint modeling, and discuss challenges and a research roadmap also for the policy and objectives layer. We conclude that the underlying combination of decentralized systems' and smart environments' research aspects provides a rich source of challenges that need to be addressed on the road to deployment.
Incremental Learning of Humanoid Robot Behavior from Natural Interaction and Large Language Models
Natural-language dialog is key for intuitive human-robot interaction. It can
be used not only to express humans' intents, but also to communicate
instructions for improvement if a robot does not understand a command
correctly. Of great importance is to endow robots with the ability to learn
from such interaction experience in an incremental way to allow them to improve
their behaviors or avoid mistakes in the future. In this paper, we propose a
system to achieve incremental learning of complex behavior from natural
interaction, and demonstrate its implementation on a humanoid robot. Building
on recent advances, we present a system that deploys Large Language Models
(LLMs) for high-level orchestration of the robot's behavior, based on the idea
of enabling the LLM to generate Python statements in an interactive console to
invoke both robot perception and action. The interaction loop is closed by
feeding back human instructions, environment observations, and execution
results to the LLM, thus informing the generation of the next statement.
Specifically, we introduce incremental prompt learning, which enables the
system to interactively learn from its mistakes. For that purpose, the LLM can
call another LLM responsible for code-level improvements of the current
interaction based on human feedback. The improved interaction is then saved in
the robot's memory, and thus retrieved on similar requests. We integrate the
system in the robot cognitive architecture of the humanoid robot ARMAR-6 and
evaluate our methods both quantitatively (in simulation) and qualitatively (in
simulation and real-world) by demonstrating generalized incrementally-learned
knowledge.
Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible. Submitted to the 2023 IEEE/RAS International Conference
on Humanoid Robots (Humanoids). Supplementary video available at
https://youtu.be/y5O2mRGtsL
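The closed interaction loop described in the abstract (LLM emits Python statements, execution results and observations are fed back as context) can be sketched as follows. This is a minimal illustration, not the ARMAR-6 implementation: `call_llm` is a placeholder stand-in for an actual LLM call, and the statement it returns is hypothetical.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call returning one Python statement.
    In the real system this would query a Large Language Model with
    the interaction history as context."""
    return "done = True"  # hypothetical model output for illustration

def interaction_loop(user_request: str, max_steps: int = 5) -> list:
    """Closed loop: prompt the LLM, execute its statement, and feed
    the result back into the history for the next generation."""
    history = [f"User: {user_request}"]
    executed = []
    env = {"done": False}  # would also expose robot perception/action APIs
    for _ in range(max_steps):
        statement = call_llm("\n".join(history))
        try:
            exec(statement, env)           # run the generated statement
            result = "ok"
        except Exception as exc:           # errors are fed back for repair
            result = f"error: {exc}"
        history.append(f">>> {statement}  # {result}")
        executed.append(statement)
        if env.get("done"):                # LLM signals task completion
            break
    return executed

stmts = interaction_loop("bring me the cup")
```

In the paper's system, the namespace passed to the console additionally exposes robot perception and action functions, so each generated statement can directly invoke robot skills.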
Learning and Execution of Object Manipulation Tasks on Humanoid Robots
Equipping robots with complex capabilities still requires a great amount of effort. In this work, a novel approach is proposed to understand, represent, and execute object manipulation tasks learned from observation by combining methods of data analysis, graphical modeling, and artificial intelligence. This approach enables robots to reason about how to solve tasks in dynamic environments and to adapt to unseen situations.
A Contribution to Resource-Aware Architectures for Humanoid Robots
The goal of this work is to provide building blocks for resource-aware robot architectures. These blocks cover the data-driven generation of context-sensitive resource models, the prediction of future resource utilization, and resource-aware computer vision and motion planning algorithms. The implementation of these algorithms is based on resource-aware concepts and methodologies originating from the Transregional Collaborative Research Center "Invasive Computing" (SFB/TR 89).
Human-Inspired Balancing and Recovery Stepping for Humanoid Robots
Robustly maintaining balance on two legs is an important challenge for humanoid robots, and the work presented in this book contributes to this area. It investigates efficient methods for deciding, from internal sensor data, whether and where to step; presents several improvements to efficient whole-body postural balancing methods; and proposes and evaluates a novel method for efficient recovery step generation that leverages human examples and simulation-based reinforcement learning.
XBot: A Cross-Robot Software Framework for Real-Time Control
The widespread use of robotics in new application domains outside industrial workplace settings requires robotic systems that demonstrate functionalities far beyond those of classical industrial robotic machines. The implementation of these capabilities inevitably increases the complexity of the robotic hardware, control, and software components. This chapter introduces the XBot software architecture for robotics, which achieves Real-Time (RT) performance with minimum jitter at relatively high control frequencies while offering enhanced flexibility and abstraction features, making it suitable for controlling robotic systems of diverse hardware embodiment and complexity. A key feature of XBot is its cross-robot compatibility, which makes it possible to use the framework on different robots, without code modifications, based only on a set of configuration files. The design of the framework ensures easy interoperability and built-in integration with other existing software tools for robotics, such as ROS, YARP, or OROCOS, thanks to a robot-agnostic API called XBotInterface. The framework has been successfully used and validated as a software infrastructure for collaborative robotic arms such as the KUKA LBR iiwa/LWR 4+ and the Franka Emika Panda, as well as humanoid robots such as WALK-MAN and COMAN+, and quadruped centaur-like robots such as CENTAURO.
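The cross-robot idea described above (one robot-agnostic API, with per-robot differences confined to configuration files) can be sketched as follows. This is a hedged illustration of the design pattern only: the class and field names are hypothetical and do not reflect XBot's actual C++ API.

```python
# Sketch of a robot-agnostic interface specialized only by configuration.
# `RobotConfig` and `RobotInterface` are illustrative names, not XBot's API.
from dataclasses import dataclass


@dataclass
class RobotConfig:
    name: str             # which robot this configuration describes
    joints: list          # in practice, loaded from a configuration file
    control_rate_hz: int  # real-time control loop frequency


class RobotInterface:
    """The same API is used for every robot; only the config differs."""

    def __init__(self, config: RobotConfig):
        self.config = config
        self.positions = {j: 0.0 for j in config.joints}

    def set_position_reference(self, joint: str, value: float) -> None:
        # Reject joints the configured robot does not have.
        if joint not in self.positions:
            raise KeyError(f"{joint!r} not a joint of {self.config.name}")
        self.positions[joint] = value


# Identical control code runs on different robots by swapping the config:
arm = RobotInterface(RobotConfig("panda", ["joint1", "joint2"], 1000))
arm.set_position_reference("joint1", 0.5)
```

The design choice this illustrates is that porting control code to a new robot requires writing a new configuration, not modifying the controller itself.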