From High-Level Task Descriptions to Executable Robot Code
For robots to be productive co-workers in the manufacturing industry, their human colleagues must be able to interact with them and instruct them in a simple manner. The goal of our research is to lower the threshold for humans to instruct manipulation tasks, especially sensor-controlled assembly. In our previous work we presented tools for high-level task instruction; in this paper we present how these symbolic descriptions of object manipulation are translated into executable code for our hybrid industrial robot controllers.
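As an illustration of the kind of translation this abstract describes, the sketch below compiles a symbolic task description into low-level controller primitives. This is a hypothetical toy, not the authors' system: the skill templates, primitive names, and parameters are all invented.

```python
# Hypothetical sketch: compiling symbolic assembly steps into
# low-level controller primitives. Skill names and primitives are
# invented for illustration, not taken from the authors' system.

SKILL_LIBRARY = {
    # symbolic action -> templates of low-level primitives
    "pick":   ["move_to({obj}_grasp_pose)", "close_gripper()"],
    "insert": ["move_to({target}_approach_pose)",
               "guarded_move(force_limit=5.0)",  # sensor-controlled insertion
               "open_gripper()"],
}

def compile_task(steps):
    """Translate (action, params) pairs into a flat list of primitives."""
    program = []
    for action, params in steps:
        if action not in SKILL_LIBRARY:
            raise ValueError(f"unknown symbolic action: {action}")
        for template in SKILL_LIBRARY[action]:
            program.append(template.format(**params))
    return program

task = [("pick", {"obj": "peg"}),
        ("insert", {"target": "hole"})]
program = compile_task(task)
```

The symbolic layer stays declarative (what to manipulate), while the compiled program carries the executable detail (poses, force limits) that a controller consumes.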
A Model-Driven Engineering Approach for ROS using Ontological Semantics
This paper presents a novel ontology-driven software engineering approach for
the development of industrial robotics control software. It introduces the
ReApp architecture that synthesizes model-driven engineering with semantic
technologies to facilitate the development and reuse of ROS-based components
and applications. In ReApp, we show how different ontological classification
systems for hardware, software, and capabilities help developers in discovering
suitable software components for their tasks and in applying them correctly.
The proposed model-driven tooling enables developers to work at higher
abstraction levels and fosters automatic code generation. It is underpinned by
ontologies to minimize discontinuities in the development workflow, with an
integrated development environment presenting a seamless interface to the user.
First results show the viability and synergy of the selected approach when
searching for or developing software with reuse in mind.
Comment: Presented at DSLRob 2015 (arXiv:1601.00877), Stefan Zander, Georg Heppner, Georg Neugschwandtner, Ramez Awad, Marc Essinger and Nadia Ahmed: A Model-Driven Engineering Approach for ROS using Ontological Semantics
MaestROB: A Robotics Framework for Integrated Orchestration of Low-Level Control and High-Level Reasoning
This paper describes a framework called MaestROB. It is designed to make robots perform complex tasks with high precision from simple high-level instructions given in natural language or by demonstration. To realize this, it handles a hierarchical structure, using knowledge stored in the form of ontologies and rules to bridge between different levels of instruction.
Accordingly, the framework has multiple layers of processing components: perception and actuation control at the low level; a symbolic planner and Watson APIs for cognitive capabilities and semantic understanding; and orchestration of these components by a new open-source robot middleware called Project Intu at its core.
at its core. We show how this framework can be used in a complex scenario where
multiple actors (human, a communication robot, and an industrial robot)
collaborate to perform a common industrial task. A human teaches an assembly task to Pepper (a humanoid robot from SoftBank Robotics) using natural-language conversation and demonstration. Our framework helps Pepper perceive the human demonstration and generate a sequence of actions for a UR5 (a collaborative robot arm from Universal Robots), which ultimately performs the assembly (e.g. insertion) task.
Comment: IEEE International Conference on Robotics and Automation (ICRA) 2018.
Video: https://www.youtube.com/watch?v=19JsdZi0TW
Bayesian Robot Programming
We propose a new method to program robots based on Bayesian inference and learning. The capabilities of this programming method are demonstrated through a succession of increasingly complex experiments. Starting from the learning of simple reactive behaviors, we present instances of behavior combination, sensor fusion, hierarchical behavior composition, situation recognition, and temporal sequencing. This series of experiments comprises the steps in the incremental development of a complex robot program. The advantages and drawbacks of this approach are discussed along with these different experiments and summed up as a conclusion. These different robotics programs may be seen as an illustration of probabilistic programming, applicable whenever one must deal with problems based on uncertain or incomplete knowledge. The scope of possible applications is obviously much broader than robotics.
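One of the building blocks the abstract mentions, sensor fusion, has a compact Bayesian form: two independent Gaussian estimates of the same quantity combine by precision weighting. The sketch below is a minimal illustration of that idea only; the numbers are made up and the paper's framework is far more general.

```python
# Minimal sketch of Bayesian sensor fusion (precision-weighted average
# of two Gaussian estimates). All numbers are invented for illustration.

def fuse(mu1, var1, mu2, var2):
    """Posterior mean and variance after fusing two Gaussian estimates."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)   # precisions add
    mu = var * (mu1 / var1 + mu2 / var2)    # precision-weighted mean
    return mu, var

# Two sensors measure the same obstacle distance (metres):
# a noisy one (variance 0.4) and a confident one (variance 0.1).
mu, var = fuse(2.0, 0.4, 2.4, 0.1)
```

The fused estimate lies between the two readings, pulled toward the more confident sensor, and its variance is smaller than either input's, which is the core reason fusion pays off under uncertainty.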
Robot Introspection with Bayesian Nonparametric Vector Autoregressive Hidden Markov Models
Robot introspection, as opposed to anomaly detection typical in process
monitoring, helps a robot understand what it is doing at all times. A robot
should be able to identify its actions not only when failure or novelty occurs,
but also as it executes any number of sub-tasks. As robots continue their quest of functioning in unstructured environments, it is imperative that they understand what it is that they are actually doing in order to become more robust. This work investigates the ability of Bayesian nonparametric techniques on Markov switching processes to model the complex dynamics typical of robot contact tasks. We study whether the Markov switching process, together with Bayesian priors, can outperform its counterparts: HMMs with and without Bayesian priors. The work was tested in a snap assembly task
characterized by high elastic forces. The task consists of an insertion subtask
with very complex dynamics. Our approach showed a stronger ability to
generalize and was able to better model the subtask with complex dynamics in a
computationally efficient way. The modeling technique is also used to learn a growing library of robot skills which, when integrated with low-level control, allows for online robot decision making.
Comment: final version submitted to Humanoids 201
Interactive Robot Learning of Gestures, Language and Affordances
A growing field in robotics and Artificial Intelligence (AI) research is
human-robot collaboration, whose target is to enable effective teamwork between
humans and robots. However, in many situations human teams are still superior
to human-robot teams, primarily because human teams can easily agree on a
common goal with language, and the individual members observe each other
effectively, leveraging their shared motor repertoire and sensorimotor
resources. This paper shows that for cognitive robots it is possible, and
indeed fruitful, to combine knowledge acquired from interacting with elements
of the environment (affordance exploration) with the probabilistic observation
of another agent's actions.
We propose a model that unites (i) learning robot affordances and word
descriptions with (ii) statistical recognition of human gestures with vision
sensors. We discuss theoretical motivations, possible implementations, and we
show initial results which highlight that, after having acquired knowledge of
its surrounding environment, a humanoid robot can generalize this knowledge to
the case when it observes another agent (human partner) performing the same
motor actions previously executed during training.
Comment: code available at https://github.com/gsaponaro/glu-gesture
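The fusion described above, combining affordance knowledge with the probabilistic observation of another agent's gestures, reduces in its simplest form to a Bayesian update: an affordance-based prior over actions multiplied by a gesture likelihood. The sketch below uses invented probabilities and action names purely to illustrate the combination rule; it is not the paper's model.

```python
# Toy Bayesian fusion of an affordance prior with a gesture observation.
# Action names and all probabilities are invented for illustration.

prior = {"grasp": 0.5, "tap": 0.3, "touch": 0.2}        # from affordances
gesture_lik = {"grasp": 0.2, "tap": 0.7, "touch": 0.1}  # p(gesture | action)

def posterior(prior, lik):
    """Normalized product of prior and likelihood over actions."""
    unnorm = {a: prior[a] * lik[a] for a in prior}
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

post = posterior(prior, gesture_lik)
best = max(post, key=post.get)
```

Even though the affordance prior favors "grasp", a sufficiently distinctive observed gesture shifts the posterior toward "tap", which is exactly the kind of correction that watching the partner provides.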
Control of intelligent robots in space
In view of space activities like the International Space Station, the Man-Tended Free-Flyer (MTFF), and free-flying platforms, the development of intelligent robotic systems is gaining increasing importance. The range of applications to be performed by robotic systems in space includes, e.g., the execution of experiments in space laboratories, the servicing and maintenance of satellites and flying platforms, the support of automatic production processes, and the assembly of large network structures. Some of these tasks will require the development of bi-armed or multiple robotic systems including functional redundancy. For the development of robotic systems able to perform this variety of tasks, a hierarchically structured, modular concept of automation is required. This concept is characterized by high flexibility as well as by automatic specialization to the particular sequence of tasks to be performed. On the other hand, it has to be designed such that the human operator can influence or guide the system on different levels of control, supervision, and decision. This leads to requirements for the hardware and software concept which permit a range of applications of the robotic systems from telemanipulation to autonomous operation. The realization of this goal requires strong efforts in the development of new methods, software and hardware concepts, and their integration into an automation concept.
Automated sequence and motion planning for robotic spatial extrusion of 3D trusses
While robotic spatial extrusion has demonstrated a new and efficient means to
fabricate 3D truss structures in architectural scale, a major challenge remains
in automatically planning extrusion sequence and robotic motion for trusses
with unconstrained topologies. This paper presents the first attempt in the
field to rigorously formulate the extrusion sequence and motion planning (SAMP)
problem, using a CSP encoding. Furthermore, this research proposes a new
hierarchical planning framework to solve the extrusion SAMP problems that
usually have a long planning horizon and 3D configuration complexity. By
decoupling sequence and motion planning, the planning framework is able to
efficiently solve the extrusion sequence, end-effector poses, joint
configurations, and transition trajectories for spatial trusses with
nonstandard topologies. This paper also presents the first detailed computation data revealing the runtime bottleneck in solving SAMP problems, which provides insight and a comparison baseline for future algorithmic development. Together with the algorithmic results, this paper also presents an open-source, modularized, machine-agnostic software implementation called Choreo. To demonstrate the power of this algorithmic framework, three case studies, including real fabrication and simulation results, are presented.
Comment: 24 pages, 16 figures