
    Innovative Mobile Manipulator Solution for Modern Flexible Manufacturing Processes

    There is a paradigm shift in current manufacturing needs, away from the mass-production-based approach and towards mass customization, where production volumes are smaller and more variable. Current processes are highly adapted to the previous paradigm and lack the flexibility required by the new production needs. To address this problem, an innovative industrial mobile manipulator is presented. The robot is equipped with a variety of sensors that allow it to perceive its surroundings and perform complex tasks in dynamic environments. Following the current needs of industry, the robot is capable of autonomous navigation, safely avoiding obstacles. It is flexible enough to perform a wide variety of tasks, and switching between tasks is made easy by skills-based programming and the ability to change tools autonomously. In addition, its safety systems allow it to share the workspace with human operators. This prototype has been developed as part of the THOMAS European project, and it has been tested and demonstrated in real-world manufacturing use cases. This research was funded by the EC research project “THOMAS—Mobile dual arm robotic workers with embedded cognition for hybrid and dynamically reconfigurable manufacturing systems” (Grant Agreement: 723616) (www.thomas-project.eu/)
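    The core idea of skills-based programming, as described in the abstract, can be sketched as follows: a task is a named sequence of reusable, parameterized skills, so switching to a new product variant only means selecting a different skill sequence rather than reprogramming the robot. All class and skill names below are illustrative assumptions; the abstract does not specify an API.

```python
# Hypothetical sketch of skills-based programming: a task is an ordered
# list of (skill, parameters) pairs drawn from a registry of reusable
# skills. Names and interfaces are invented for illustration.
from typing import Callable, Dict, List, Tuple


class SkillRegistry:
    def __init__(self) -> None:
        self._skills: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._skills[name] = fn

    def run_task(self, steps: List[Tuple[str, dict]]) -> List[str]:
        """Execute a task by running each skill with its parameters."""
        return [self._skills[name](**params) for name, params in steps]


registry = SkillRegistry()
registry.register("move_to", lambda station: f"moved to {station}")
registry.register("change_tool", lambda tool: f"attached {tool}")
registry.register("pick", lambda part: f"picked {part}")

# A new product variant is just a new step list, not new robot code.
log = registry.run_task([
    ("move_to", {"station": "A"}),
    ("change_tool", {"tool": "gripper"}),
    ("pick", {"part": "housing"}),
])
```

    In a real system each skill would wrap motion planning, tool-changer control, or perception; here the skills only return strings so the composition pattern is visible.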

    Vision-based Safe Autonomous UAV Docking with Panoramic Sensors

    The remarkable growth of unmanned aerial vehicles (UAVs) has also sparked concerns about safety measures during their missions. To advance towards safer autonomous aerial robots, this work presents a vision-based solution for ensuring safe autonomous UAV landings with minimal infrastructure. During docking maneuvers, UAVs pose a hazard to people in the vicinity. In this paper, we propose the use of a single omnidirectional panoramic camera pointing upwards from a landing pad to detect and estimate the position of people around the landing area. The images are processed in real time on an embedded computer, which communicates with the onboard computer of approaching UAVs to transition between landing, hovering, or emergency landing states. While landing, the ground camera also aids in finding an optimal landing position, which can be required in case of low battery or when hovering is no longer possible. We use a YOLOv7-based object detection model and an XGBoost model for localizing nearby people, and the open-source ROS and PX4 frameworks for communication, interfacing, and control of the UAV. We present both simulation and real-world indoor experimental results to show the efficiency of our methods.
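    The ground-to-UAV decision layer described above can be illustrated with a minimal sketch (not the authors' code): detections of people near the pad move the UAV between landing, hovering, and emergency landing states. The state names, safety radius, and battery logic are assumptions for the example; the actual system uses YOLOv7 and XGBoost for detection and localization, and ROS/PX4 for communication and control.

```python
# Illustrative state-selection logic for the vision-based landing
# safety system. Thresholds and state names are assumed, not taken
# from the paper.
def next_state(people_distances_m, battery_low, safe_radius_m=5.0):
    """Pick the UAV docking state from ground-camera person detections."""
    intruders = [d for d in people_distances_m if d < safe_radius_m]
    if not intruders:
        return "LANDING"            # pad area is clear
    if battery_low:
        return "EMERGENCY_LANDING"  # hovering is no longer an option
    return "HOVERING"               # wait for the area to clear


state = next_state([3.2, 8.0], battery_low=False)  # person 3.2 m away
```

    In the full system this decision would be published to the approaching UAV's onboard computer over ROS rather than returned as a string.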

    Sobi: An Interactive Social Service Robot for Long-Term Autonomy in Open Environments

    Long-term autonomy in service robotics is a current research topic, especially for dynamic, large-scale environments that change over time. We present Sobi, a mobile service robot developed as an interactive guide for open environments, such as public places with indoor and outdoor areas. The robot will serve as a platform for environmental modeling and human-robot interaction. Its main hardware and software components, which we freely license as a documented open source project, are presented. Another key focus is Sobi’s monitoring system for long-term autonomy, which restores system components in a targeted manner in order to extend the total system lifetime without unplanned intervention. We demonstrate first results of the long-term autonomous capabilities in a 16-day indoor deployment, in which the robot patrolled a total of 66.6 km with an average of 5.5 hours of travel time per weekday, charging autonomously in between. In a user study with 12 participants, we evaluate the appearance and usability of the user interface, which allows users to interactively query information about the environment and directions. © 2021 IEEE.
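    A monitoring system that "restores system components in a targeted manner" can be sketched as a heartbeat watchdog: each component reports liveness, and only the components whose heartbeats go stale are restarted, leaving the rest of the system running. This is a hedged illustration of the concept; the interface and timeout are invented, not taken from Sobi's implementation.

```python
# Minimal heartbeat watchdog: components report beats, and check()
# restarts (here: records a restart of) exactly the stale ones.
# All names and the 5 s timeout are assumptions for illustration.
class Watchdog:
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_beat = {}   # component name -> time of last heartbeat
        self.restarts = {}    # component name -> restart count

    def heartbeat(self, component, now):
        self.last_beat[component] = now

    def check(self, now):
        """Return the stale components and count a targeted restart."""
        stale = [c for c, t in self.last_beat.items()
                 if now - t > self.timeout_s]
        for c in stale:
            self.restarts[c] = self.restarts.get(c, 0) + 1
            self.last_beat[c] = now  # treat the restart as a fresh beat
        return stale


wd = Watchdog(timeout_s=5.0)
wd.heartbeat("navigation", now=0.0)
wd.heartbeat("speech", now=0.0)
wd.heartbeat("navigation", now=4.0)  # navigation keeps beating
stale = wd.check(now=6.0)            # only speech missed its deadline
```

    Restarting only the failed component, instead of rebooting the whole stack, is what allows deployments like the 16-day run described above to continue without unplanned intervention.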

    Visual Perception System for Aerial Manipulation: Methods and Implementations

    Technology is advancing fast, and autonomous systems are becoming a reality. Companies are increasingly demanding robotized solutions to improve the efficiency of their operations. This is also the case for aerial robots. Their unique capability of moving freely through the air makes them suitable for many tasks that are tedious or even dangerous for human operators. Nowadays, the vast number of commercially available sensors and drones makes them highly appealing. However, considerable manual effort is still required to customize existing solutions to each particular task, owing to the number of possible environments, robot designs, and missions. Researchers usually design different vision algorithms, hardware devices, and sensor setups to tackle specific tasks. Currently, aerial manipulation is being intensively studied with the aim of extending the range of applications aerial robots can perform, such as inspection, maintenance, or even operating valves and other machines. This thesis presents an aerial manipulation system and a set of perception algorithms for the automation of aerial manipulation tasks. The complete design of the system is presented, and modular frameworks are introduced to facilitate the development of this kind of operation. First, research on object analysis for manipulation and grasp planning considering different object models is presented. Depending on the object model, different state-of-the-art grasp analysis methods are reviewed, together with planning algorithms for both single and dual manipulators. Second, the development of perception algorithms for object detection and pose estimation is presented. These allow the system to identify many kinds of objects in any scene and locate them in order to perform manipulation tasks, and they produce the information required by the manipulation analyses described above. Third, the thesis presents how vision is used to localize the robot in the environment while building local maps, which are beneficial for manipulation tasks. These maps are enriched with semantic information obtained from the detection algorithms. Finally, the thesis presents the development of the hardware of the aerial platform, which includes lightweight manipulators and the invention of a novel tool that allows the aerial robot to operate in contact with rigid surfaces while also serving as an estimator of the robot's position. All the techniques presented in this thesis have been validated through extensive experimentation with real aerial robotic platforms.

    A Self-Guided Docking Architecture for Autonomous Surface Vehicles

    Autonomous Surface Vehicles (ASVs) provide an ideal platform to further explore the many opportunities in the cargo shipping industry, making it more profitable and safer. Information retrieved from a 3D LIDAR, an IMU, a GPS receiver, and a camera is combined to extract the geometric features of the floating platform and to estimate the position and orientation of the mooring facility relative to the ASV. A trajectory is then planned to a specific target position, guaranteeing that the ASV will not collide with the mooring facility. To ensure that the sensors are within their range of operation, a module has been developed to generate a trajectory that delivers the ASV to a catch zone where it is able to function properly. A high-level controller is also implemented, resorting to a heuristic to evaluate whether the ASV is within this operating range, as well as its current orientation relative to the docking platform.
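    The high-level heuristic described above can be sketched as a simple gate: docking is permitted only when the vehicle is inside the sensors' catch zone and roughly oriented towards the mooring platform. The function name, the 30 m catch-zone radius, and the 20° alignment tolerance are illustrative assumptions, not values from the paper.

```python
# Illustrative docking gate: in range AND roughly facing the dock.
# Thresholds are assumed for the example.
import math


def docking_allowed(dist_m, heading_rad, bearing_to_dock_rad,
                    catch_zone_m=30.0,
                    max_misalign_rad=math.radians(20)):
    """Heuristic check: inside the catch zone and aligned with the dock."""
    in_range = dist_m <= catch_zone_m
    # Wrap the heading error into (-pi, pi] before comparing.
    err = bearing_to_dock_rad - heading_rad
    misalign = abs(math.atan2(math.sin(err), math.cos(err)))
    return in_range and misalign <= max_misalign_rad
```

    A controller built on this gate would fall back to the approach-trajectory module whenever the check fails, steering the ASV back into the catch zone before retrying.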

    A practical multirobot localization system

    We present fast and precise vision-based software intended for multiple-robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which makes it possible to estimate the expected localization precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make its source code public at http://purl.org/robotics/whycon, so it can be used as an enabling technology for various mobile robotic problems.
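    The kind of estimate such a model provides can be illustrated with a back-of-the-envelope pinhole-camera calculation: a sub-pixel detection error e at distance Z, seen by a camera with focal length f in pixels, maps to a metric localization error of roughly Z·e/f. This is a generic pinhole approximation for illustration, not the paper's actual model or reported figures.

```python
# Pinhole back-projection of a sub-pixel detection error into metric
# localization error. A generic approximation, not the paper's model.
def expected_precision_m(distance_m, focal_px, subpixel_error_px=0.1):
    """Approximate metric error of a pattern detected at distance_m."""
    return distance_m * subpixel_error_px / focal_px


# Example: 0.1 px detection error, 800 px focal length, pattern at 2 m.
err = expected_precision_m(distance_m=2.0, focal_px=800.0,
                           subpixel_error_px=0.1)
```

    With these illustrative numbers the expected error is 0.25 mm, consistent in spirit with the millimeter-level precision the abstract reports; the real model also accounts for coverage area and processing speed.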

    Guidance, Navigation and Control for UAV Close Formation Flight and Airborne Docking

    Unmanned aerial vehicle (UAV) capability is currently limited by the amount of energy that can be stored onboard or the small amount that can be gathered from the environment. This has historically led to large, expensive vehicles with considerable fuel capacity. Airborne docking, for aerial refueling, is a viable solution that has been proven through decades of implementation with manned aircraft, but it had not been successfully tested or demonstrated with UAVs. The prohibitive challenge is the highly accurate and reliable relative positioning performance that is required to dock with a small target, in the air, amidst external disturbances. GNSS-based navigation systems are well suited for reliable absolute positioning but fall short for accurate relative positioning. Direct, relative sensor measurements are precise but can be unreliable in dynamic environments. This work proposes an experimentally verified guidance, navigation, and control solution that enables a UAV to autonomously rendezvous and dock with a drogue that is being towed by another autonomous UAV. A nonlinear estimation framework uses precise air-to-air visual observations to correct onboard sensor measurements and produce an accurate relative state estimate. The state of the drogue is estimated using known geometric and inertial characteristics and air-to-air observations. Setpoint augmentation algorithms compensate for leader turn dynamics during formation flight and for drogue physical constraints during docking. Vision-aided close formation flight has been demonstrated over extended periods, as close as 4 m, in wind speeds in excess of 25 km/h, and at altitudes as low as 15 m. Docking flight tests achieved numerous airborne connections over multiple flights, including five successful docking manoeuvres in seven minutes of a single flight. To the best of our knowledge, these are the closest formation flights performed outdoors and the first UAV airborne docking.