927 research outputs found
RUR53: an Unmanned Ground Vehicle for Navigation, Recognition and Manipulation
This paper proposes RUR53: an Unmanned Ground Vehicle able to autonomously
navigate through, identify, and reach areas of interest and, once there,
recognize, localize, and manipulate work tools to perform complex manipulation
tasks. The proposed contribution includes a modular software architecture in
which each module solves specific sub-tasks and which can be easily extended to
satisfy new requirements. Indoor and outdoor tests demonstrate the capability
of the proposed system to autonomously detect a target object (a panel) and
precisely dock in front of it while avoiding obstacles. They show it can
autonomously recognize and manipulate target work tools (i.e., wrenches and
valve stems) to accomplish complex tasks (e.g., using a wrench to rotate a
valve stem). A specific case study is described in which the proposed modular
architecture allows easy switching to a semi-teleoperated mode. The paper
exhaustively describes both the hardware and software setup of RUR53, its
performance when tested at the 2017 Mohamed Bin Zayed International Robotics
Challenge, and the lessons we learned when participating in this competition,
where we ranked third in the Grand Challenge in collaboration with the Czech
Technical University in Prague, the University of Pennsylvania, and the
University of Lincoln (UK).
Comment: This article has been accepted for publication in Advanced Robotics, published by Taylor & Francis
Fuzzy optimisation based symbolic grounding for service robots
A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Symbolic grounding is a bridge between task-level planning and actual robot sensing and actuation. Uncertainties raised by unstructured environments create a bottleneck for integrating traditional artificial intelligence with service robotics. In this research, a fuzzy optimisation based symbolic grounding approach is presented. This approach can handle uncertainties and helps service robots determine the most comfortable base region for grasping objects in a fetch-and-carry task. Novel techniques are applied to establish the fuzzy objective function, to model fuzzy constraints, and to perform fuzzy optimisation. The approach does not have the shortcomings of others' work, and its computation time is dramatically reduced compared with other methods. The advantages of the proposed fuzzy optimisation based approach are evidenced by experiments undertaken on the Care-O-bot 3 (COB 3) and Robot Operating System (ROS) platforms
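The core idea in the abstract above — scoring candidate base regions with fuzzy membership functions and picking the most "comfortable" one — can be sketched minimally as follows. The membership shapes, parameter values, and function names here are illustrative assumptions, not the thesis's actual formulation:

```python
# Hypothetical sketch of fuzzy base-region scoring. Membership functions,
# thresholds, and the min() aggregation are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular fuzzy membership: peaks at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def comfort(base, obj):
    """Fuzzy 'comfort' of a candidate base position for grasping obj."""
    dist = ((base[0] - obj[0]) ** 2 + (base[1] - obj[1]) ** 2) ** 0.5
    reach = triangular(dist, 0.3, 0.6, 0.9)       # preferred reach ~0.6 m
    clearance = triangular(dist, 0.25, 1.0, 2.0)  # not crowding the object
    return min(reach, clearance)                  # fuzzy AND (min t-norm)

def best_base(candidates, obj):
    """Fuzzy optimisation: maximise the aggregated membership degree."""
    return max(candidates, key=lambda b: comfort(b, obj))

candidates = [(0.0, 0.0), (0.5, 0.3), (1.5, 0.0)]
print(best_base(candidates, (1.0, 0.0)))  # → (0.5, 0.3)
```

Aggregating with `min` means a base region is only as comfortable as its worst criterion, which is one common way to combine fuzzy constraints.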
Bimanual robotic manipulation based on potential fields
Dual manipulation is a natural skill for humans but not so easy to achieve for a robot. The presence of two end effectors implies the need to consider the temporal and spatial constraints they generate while moving together. Consequently, synchronization between the arms is required to perform coordinated actions (e.g., lifting a box) and to avoid self-collision between the manipulators. Moreover, the challenges increase in dynamic environments, where the arms must be able to respond quickly to changes in the position of obstacles or target objects. To meet these demands, approaches like optimization-based motion planners and imitation learning can be employed, but they have limitations such as high computational costs or the need for a large dataset. Sampling-based motion planners can be a viable solution thanks to their speed and low computational costs but, in their basic implementation, the environment is assumed to be static. An alternative approach relies on improved Artificial Potential Fields (APF). They are intuitive, have low computational cost, and, most importantly, can be used in dynamic environments. However, they lack the precision needed to perform manipulation actions, and dynamic goals are not considered. This thesis proposes a system for bimanual robotic manipulation based on a combination of improved Artificial Potential Fields (APF) and the sampling-based motion planner RRTConnect. The basic idea is to use improved APF to bring the end effectors near their target goal while reacting to changes in the surrounding environment. Only then is RRTConnect triggered to perform the manipulation task. In this way, it is possible to take advantage of the strengths of both methods. To improve this system, APF have been extended to consider dynamic goals, and a self-collision avoidance system has been developed. The conducted experiments demonstrate that the proposed system adeptly responds to changes in the position of obstacles and target objects.
Moreover, the self-collision avoidance system enables faster dual-manipulation routines compared to sequential arm movements
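The coarse-to-fine handover described above — a potential field pulling the end effector toward a possibly moving goal while repelling obstacles, with a sampling-based planner taking over for the precise final motion — can be sketched in 2D. The gains, influence radius, and switch threshold are assumptions for illustration, not the thesis's implementation:

```python
import math

# Illustrative APF-then-planner sketch: follow the field until within a
# switch radius of the goal, then hand over to a fine planner (RRTConnect
# in the thesis; represented here only by a status string). The goal is
# re-read every step, so it may move (a dynamic goal).

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.05, rho0=0.5, dt=0.1):
    """One potential-field update in 2D: attractive pull + repulsive push."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < rho0:  # repulsion acts only inside influence radius
            mag = k_rep * (1.0 / d - 1.0 / rho0) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    return (pos[0] + dt * fx, pos[1] + dt * fy)

def approach(pos, goal_fn, obstacles, switch_radius=0.1, max_steps=500):
    """Descend the field; stop when close enough for the fine planner."""
    for _ in range(max_steps):
        goal = goal_fn()  # dynamic goal: queried anew each iteration
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < switch_radius:
            return pos, "handover_to_rrt_connect"
        pos = apf_step(pos, goal, obstacles)
    return pos, "not_converged"

final, status = approach((0.0, 0.0), lambda: (1.0, 0.0), [(0.5, 0.3)])
print(status)
```

Keeping the repulsive term bounded to a finite influence radius is what lets the field react only to nearby obstacles, which is the property that makes the approach cheap enough for dynamic scenes.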
Intuitive Instruction of Industrial Robots : A Knowledge-Based Approach
With more advanced manufacturing technologies, small and medium sized enterprises can compete with low-wage labor by providing customized and high quality products. For small production series, robotic systems can provide a cost-effective solution. However, for robots to be able to perform on par with human workers in manufacturing industries, they must become flexible and autonomous in their task execution and swift and easy to instruct. This will enable small businesses with short production series or highly customized products to use robot coworkers without consulting expert robot programmers. The objective of this thesis is to explore programming solutions that can reduce the programming effort of sensor-controlled robot tasks. The robot motions are expressed using constraints, and multiple simple constrained motions can be combined into a robot skill. The skill can be stored in a knowledge base together with a semantic description, which enables reuse and reasoning. The main contributions of the thesis are 1) development of ontologies for knowledge about robot devices and skills, 2) a user interface that provides simple programming of dual-arm skills for non-experts and experts, 3) a programming interface for task descriptions in unstructured natural language in a user-specified vocabulary and 4) an implementation where low-level code is generated from the high-level descriptions. The resulting system greatly reduces the number of parameters exposed to the user, is simple to use for non-experts and reduces the programming time for experts by 80%. The representation is described on a semantic level, which means that the same skill can be used on different robot platforms. The research is presented in seven papers, the first describing the knowledge representation and the second the knowledge-based architecture that enables skill sharing between robots.
The third paper presents the translation from high-level instructions to low-level code for force-controlled motions. The two following papers evaluate the simplified programming prototype for non-expert and expert users. The last two present how program statements are extracted from unstructured natural language descriptions
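The skill representation summarized above — a sequence of constrained motions stored with a semantic description in a knowledge base — can be sketched with plain data structures. The thesis uses ontologies rather than Python classes, and every field name below is an illustrative assumption:

```python
from dataclasses import dataclass, field

# Minimal sketch: a skill bundles constrained motions with a semantic
# description so it can be stored, shared, and reasoned about. Field
# names and constraint keys are hypothetical.

@dataclass
class ConstrainedMotion:
    name: str
    frame: str        # task frame the constraint is expressed in
    constraints: dict # e.g. {"force_z": -10.0} for a force-controlled press

@dataclass
class Skill:
    name: str
    description: str  # semantic annotation enabling reuse and reasoning
    motions: list = field(default_factory=list)

    def parameters(self):
        """Expose only the few parameters a non-expert needs to see."""
        return {m.name: m.constraints for m in self.motions}

kb = {}  # toy knowledge base: skills indexed by name for sharing
snap = Skill("snap_fit", "Join two parts with a force-controlled snap",
             [ConstrainedMotion("approach", "part_frame", {"vel_z": -0.01}),
              ConstrainedMotion("press", "part_frame", {"force_z": -10.0})])
kb[snap.name] = snap
print(sorted(kb["snap_fit"].parameters()))  # → ['approach', 'press']
```

Because the skill is described at this semantic level rather than as robot-specific code, the same entry can in principle be instantiated on different robot platforms, which is the portability claim the abstract makes.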
Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery
State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in control and collaboration of manipulation task behaviors. However, this remains a significant challenge given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. But research on robot-centric collaboration has garnered momentum in recent years; robots are now planning in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents exploiting that knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques allowing non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities with which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification with an unordered list of goal predicates, and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction.
Empirical results from four user studies show our interface was strongly preferred over the control condition, demonstrating high learnability and ease-of-use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control
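Feature (2) above — specifying a task as an unordered list of goal predicates rather than an ordered program — can be sketched in a few lines. The predicate names and the checker below are assumptions for illustration, not the paper's interface:

```python
# Illustrative sketch of goal-predicate task specification for a
# 'Blocks World' pick-and-place task. Predicate tuples and the
# satisfaction check are hypothetical.

def satisfied(goal_predicates, scene):
    """A task is complete when every goal predicate holds in the scene."""
    return all(pred in scene for pred in goal_predicates)

# Goal: red block on the tray, blue block at the shelf. The set is
# unordered -- the planner, not the user, chooses the execution sequence.
goal = {("on", "red_block", "tray"), ("at", "blue_block", "shelf")}

scene = {("on", "red_block", "tray")}     # mid-task: one goal already met
print(satisfied(goal, scene))             # → False

scene.add(("at", "blue_block", "shelf"))  # robot (or user) places the block
print(satisfied(goal, scene))             # → True
```

Leaving the ordering to the planner is what lets a non-expert state only *what* the finished scene should look like, which matches the abstract's emphasis on natural task abstractions.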
System Architectures for Cooperative Teams of Unmanned Aerial Vehicles Interacting Physically with the Environment
Unmanned Aerial Vehicles (UAVs) have become quite a useful tool for a wide range of
applications, from inspection & maintenance to search & rescue, among others. The
capabilities of a single UAV can be extended or complemented by the deployment
of more UAVs, so multi-UAV cooperative teams are becoming a trend. In that case,
as different autopilots, heterogeneous platforms, and application-dependent software
components have to be integrated, multi-UAV system architectures that are flexible
and can adapt to the team's needs are required.
In this thesis, we develop system architectures for cooperative teams of UAVs,
paying special attention to applications that require physical interaction with the
environment, which is typically unstructured. First, we implement some layers to
abstract the high-level components from the hardware specifics. Then we propose
increasingly advanced architectures, from a single-UAV hierarchical navigation architecture
to an architecture for a cooperative team of heterogeneous UAVs. All
this work has been thoroughly tested in both simulation and field experiments in
different challenging scenarios through research projects and robotics competitions.
Most of the applications required physical interaction with the environment, mainly
in unstructured outdoors scenarios. All the know-how and lessons learned throughout
the process are shared in this thesis, and all relevant code is publicly available.