131 research outputs found

    Detection and location of domestic waste for planning its collection using an autonomous robot

    Paper submitted to the 8th International Conference on Control, Automation and Robotics (ICCAR), Xiamen, China, April 8-10, 2022. This paper presents an approach to a detection and location system for waste recognition in outdoor environments that can be used on an autonomous robot for garbage collection. It is composed of a camera and a LiDAR. For the detection task, several YOLO models were trained and tested for the classification of waste using our own dataset acquired from the camera. The image coordinates predicted by the best detector are used to compute the location relative to the camera. Then, we used the LiDAR to obtain a global waste location relative to the robot, transforming the coordinates of the center of each trash instance. Our detection approach was tested in outdoor environments, obtaining a mAP@0.5 of around 0.99, a mAP@0.5:0.95 of over 0.84, and an average detection time of less than 40 ms, making real-time operation possible. The location method was also tested in the presence of objects at a maximum distance of 8 m, obtaining an average error smaller than 0.25 m. This research was funded by the Spanish Government through the project RTI2018-094279-B-I00. Computer facilities were provided by the Valencian Government and FEDER through the IDIFEFER/2020/003 project.
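    As an illustration of the location step described in this abstract, the following minimal sketch picks the LiDAR points that project into a detected bounding box, averages them, and transforms the result into the robot frame. All names, the pre-projected point input, and the use of a simple centroid are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def locate_waste(bbox, lidar_points_xyz, lidar_points_uv, T_robot_from_lidar):
    """Estimate the 3D position of a detected waste item in the robot frame.

    bbox: (u_min, v_min, u_max, v_max) image box from the detector.
    lidar_points_xyz: (N, 3) LiDAR points in the LiDAR frame.
    lidar_points_uv: (N, 2) the same points projected into the image plane.
    T_robot_from_lidar: (4, 4) homogeneous transform from LiDAR to robot base.
    """
    u_min, v_min, u_max, v_max = bbox
    u, v = lidar_points_uv[:, 0], lidar_points_uv[:, 1]
    inside = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    if not inside.any():
        return None  # no LiDAR return falls on this detection

    # Take the centroid of the LiDAR points covering the detection.
    p_lidar = lidar_points_xyz[inside].mean(axis=0)

    # Transform to the robot frame using homogeneous coordinates.
    p_robot = T_robot_from_lidar @ np.append(p_lidar, 1.0)
    return p_robot[:3]
```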

    Assistance Robotics and Biosensors 2019

    This Special Issue is focused on breakthrough developments in the field of assistive and rehabilitation robotics. The selected contributions include current scientific progress from biomedical signal processing and cover applications to myoelectric prostheses, lower-limb and upper-limb exoskeletons and assistive robotics

    Using a RGB-D camera for 6DoF SLAM

    This paper presents a method for the fast calculation of the egomotion performed by a robot using visual features. The method is part of a complete system for automatic map building and Simultaneous Localization and Mapping (SLAM). The method uses optical flow to determine whether the robot has moved. If so, visual features that do not satisfy several criteria (such as intersection, uniqueness, etc.) are deleted, and then the egomotion is calculated. We use a state-of-the-art algorithm (TORO) to rectify the map and solve the SLAM problem. The proposed method provides better efficiency than other current methods. The authors want to express their gratitude to the Spanish Ministry of Science and Technology (MYCIT) and the Research and Innovation Vice-president Office of the University of Alicante for their financial support through the projects DPI2009-07144 and GRE10-16, respectively.
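    The optical-flow check described above (only computing egomotion once the robot has actually moved) can be sketched as follows with OpenCV's dense Farnebäck flow; the flow method, parameters and threshold are assumptions for illustration, not necessarily what the paper used.

```python
import cv2
import numpy as np

def robot_has_moved(prev_gray, curr_gray, flow_threshold=0.5):
    """Decide whether the robot moved between two frames using dense optical flow.

    prev_gray, curr_gray: consecutive grayscale images (uint8).
    flow_threshold: mean flow magnitude (pixels) above which motion is assumed;
                    the value here is an arbitrary placeholder.
    """
    # Dense Farnebäck optical flow between the two frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    return float(magnitude.mean()) > flow_threshold
```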

    Detection and depth estimation for domestic waste in outdoor environments by sensors fusion

    In this work, we estimate the depth at which domestic waste items are located in space from a mobile robot in outdoor scenarios. As this estimation is carried out over a broad range of space (0.3 - 6.0 m), we use RGB-D camera and LiDAR fusion. With this aim and range, we compare several methods, such as average, nearest, median and center point, applied to the points inside a reduced or non-reduced Bounding Box (BB). These BBs are obtained from segmentation and detection methods that are representative of these techniques, such as Yolact, SOLO, You Only Look Once (YOLO)v5, YOLOv6 and YOLOv7. Results show that applying a detection method with the average technique and a BB reduction of 40% returns the same output as segmenting the object and applying the average method. Moreover, the detection method is faster and lighter in comparison with the segmentation one. The median error committed in the conducted experiments was 0.0298 ± 0.0544 m. Comment: This work has been submitted to IFAC WC 2023 for possible publication.
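    A minimal sketch of the best-performing strategy reported above (shrinking the detection bounding box by 40% and averaging the depths inside it) could look like the following; the function name, the fused depth-image input and the handling of invalid returns are assumptions for illustration.

```python
import numpy as np

def depth_from_reduced_bb(depth_image, bbox, reduction=0.40):
    """Average the depths inside a bounding box shrunk around its centre.

    depth_image: (H, W) array of metric depths (e.g., from camera/LiDAR fusion).
    bbox: (u_min, v_min, u_max, v_max) detection box in pixels.
    reduction: fraction by which the box width/height is reduced (0.40 = 40%).
    """
    u_min, v_min, u_max, v_max = bbox
    cu, cv = (u_min + u_max) / 2.0, (v_min + v_max) / 2.0
    half_w = (u_max - u_min) * (1.0 - reduction) / 2.0
    half_h = (v_max - v_min) * (1.0 - reduction) / 2.0

    # Crop the reduced box and keep only valid (finite, positive) depths.
    region = depth_image[int(cv - half_h):int(cv + half_h),
                         int(cu - half_w):int(cu + half_w)]
    valid = region[np.isfinite(region) & (region > 0)]
    return float(valid.mean()) if valid.size else None
```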

    Vision and Tactile Robotic System to Grasp Litter in Outdoor Environments

    The accumulation of litter is increasing in many places and is consequently becoming a problem that must be dealt with. In this paper, we present a manipulator robotic system to collect litter in outdoor environments. This system has three functionalities. Firstly, it uses colour images to detect and recognise litter comprising different materials. Secondly, depth data are combined with pixels of waste objects to compute a 3D location and segment three-dimensional point clouds of the litter items in the scene. The grasp in 3 Degrees of Freedom (DoFs) is then estimated for a robot arm with a gripper for the segmented cloud of each instance of waste. Finally, two tactile-based algorithms are implemented and then employed in order to provide the gripper with a sense of touch. This work uses two low-cost visual-based tactile sensors at the fingertips. One of them addresses the detection of contact (which is obtained from tactile images) between the gripper and solid waste, while the other has been designed to detect slippage in order to prevent the objects grasped from falling. Our proposal was successfully tested by carrying out extensive experimentation with different objects varying in size, texture, geometry and materials in different outdoor environments (a tiled pavement, a surface of stone/soil, and grass). In terms of overall performance, our system achieved an average detection and Collection Success Rate (CSR) of 94%, and a rate of 80% for collecting items of litter at the first attempt. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research work was funded by the Valencian Regional Government and FEDER through the PROMETEO/2021/075 project. The computer facilities were provided through the IDIFEFER/2020/003 project.
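    The two tactile functionalities (contact detection and slip detection from tactile images) can be illustrated with the toy sketch below, which simply thresholds frame-to-frame differences of the tactile images; the thresholds and the differencing approach are placeholders, not the algorithms actually implemented on the sensors described in the paper.

```python
import numpy as np

def contact_detected(tactile_img, baseline_img, contact_threshold=8.0):
    """Flag contact when the tactile image deviates enough from a no-contact baseline."""
    diff = np.abs(tactile_img.astype(np.float32) - baseline_img.astype(np.float32))
    return float(diff.mean()) > contact_threshold  # threshold is a placeholder

def slip_detected(prev_img, curr_img, slip_threshold=3.0):
    """Flag slippage when consecutive tactile frames change too much during a hold."""
    diff = np.abs(curr_img.astype(np.float32) - prev_img.astype(np.float32))
    return float(diff.mean()) > slip_threshold  # threshold is a placeholder
```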

    Assistance Robotics and Biosensors

    This Special Issue is focused on breakthrough developments in the field of biosensors and current scientific progress in biomedical signal processing. The papers address innovative solutions in assistance robotics based on bioelectrical signals, including: Affordable biosensor technology, affordable assistive-robotics devices, new techniques in myoelectric control and advances in brain–machine interfacing

    LiLO: Lightweight and low-bias LiDAR Odometry method based on spherical range image filtering

    In unstructured outdoor environments, robotics requires accurate and efficient odometry with low computational time. Existing low-bias LiDAR odometry methods are often computationally expensive. To address this problem, we present a lightweight LiDAR odometry method that converts unorganized point cloud data into a spherical range image (SRI) and filters out surface, edge, and ground features in the image plane. This substantially reduces the computation time and the number of features required for odometry estimation in LOAM-based algorithms. Our odometry estimation method does not rely on global maps or loop closure algorithms, which further reduces computational costs. Experimental results yield a translation error of 0.86% and a rotation error of 0.0036°/m on the KITTI dataset, with an average runtime of 78 ms. In addition, we tested the method with our own data, obtaining an average closed-loop error of 0.8 m and a runtime of 27 ms over eight loops covering 3.5 km. Comment: This paper is under review at the journal "Autonomous Robots" (Springer).
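    The core preprocessing step, projecting an unorganized point cloud into a spherical range image, can be sketched as follows; the angular resolutions and vertical field of view are placeholder values (roughly HDL-64-like) and are not taken from the paper.

```python
import numpy as np

def to_spherical_range_image(points, h_res_deg=0.2, v_res_deg=2.0,
                             v_fov_deg=(-24.8, 2.0)):
    """Project an unorganized (N, 3) point cloud into a spherical range image (SRI).

    Each point is binned by azimuth (columns) and elevation (rows);
    each pixel keeps the range of the closest return.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.degrees(np.arctan2(y, x))                      # [-180, 180]
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))

    cols = ((azimuth + 180.0) / h_res_deg).astype(int)
    rows = ((elevation - v_fov_deg[0]) / v_res_deg).astype(int)

    height = int((v_fov_deg[1] - v_fov_deg[0]) / v_res_deg) + 1
    width = int(360.0 / h_res_deg)
    sri = np.full((height, width), np.inf)

    # Keep only points that fall inside the image and the closest return per pixel.
    valid = (rows >= 0) & (rows < height) & (cols >= 0) & (cols < width)
    np.minimum.at(sri, (rows[valid], cols[valid]), r[valid])
    return sri
```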

    Detection and depth estimation for domestic waste in outdoor environments by sensors fusion

    In this work, we estimate the depth at which domestic waste items are located in space from a mobile robot in outdoor scenarios. As this estimation is carried out over a broad range of space (0.3 - 6.0 m), we use RGB-D camera and LiDAR fusion. With this aim and range, we compare several methods, such as average, nearest, median and center point, applied to the points inside a reduced or non-reduced Bounding Box (BB). These BBs are obtained from segmentation and detection methods that are representative of these techniques, such as Yolact, SOLO, You Only Look Once (YOLO)v5, YOLOv6 and YOLOv7. Results show that applying a detection method with the average technique and a BB reduction of 40% returns the same output as segmenting the object and applying the average method. Moreover, the detection method is faster and lighter in comparison with the segmentation one. The median error committed in the conducted experiments was 0.0298 ± 0.0544 m. Research work was funded by the Valencian Regional Government and FEDER through the PROMETEO/2021/075 project and the Spanish Government through the Formación del Personal Investigador [Research Staff Formation (FPI)] under Grant PRE2019-088069. The computer facilities were provided through the IDIFEFER/2020/003 project.

    Visual Servoing NMPC Applied to UAVs for Photovoltaic Array Inspection

    The photovoltaic (PV) industry is seeing a significant shift toward large-scale solar plants, where traditional inspection methods have proven to be time-consuming and costly. Currently, the predominant approach to PV inspection using unmanned aerial vehicles (UAVs) is based on photogrammetry. However, the photogrammetry approach presents limitations, such as an increased amount of useless data during flights, potential issues related to image resolution, and the detection process during high-altitude flights. In this work, we develop a visual servoing control system applied to a UAV with dynamic compensation using a nonlinear model predictive control (NMPC) capable of accurately tracking the middle of the underlying PV array at different frontal velocities and height constraints, ensuring the acquisition of detailed images during low-altitude flights. The visual servoing controller is based on the extraction of features using RGB-D images and a Kalman filter to estimate the edges of the PV arrays. Furthermore, this work demonstrates the proposal in both simulated and real-world environments using a commercial aerial vehicle (DJI Matrice 100), with the purpose of showcasing the results of the architecture. Our approach is available to the scientific community at: https://github.com/EPVelasco/VisualServoing_NMPC. Comment: This paper is under review at the journal "IEEE Robotics and Automation Letters".
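    The edge-estimation idea (smoothing a noisy PV-array edge measurement with a Kalman filter before feeding it to the visual servoing controller) can be illustrated with a simple constant-velocity filter on one image coordinate; the state model and noise values below are assumptions for illustration, not the paper's tuning.

```python
import numpy as np

class EdgeKalman1D:
    """Constant-velocity Kalman filter for one PV-array edge coordinate (pixels).

    Process and measurement noise values are arbitrary placeholders.
    """
    def __init__(self, dt=0.05, q=1.0, r=4.0):
        self.x = np.zeros(2)                       # [edge position, edge velocity]
        self.P = np.eye(2) * 100.0                 # initial uncertainty
        self.F = np.array([[1.0, dt], [0.0, 1.0]]) # constant-velocity model
        self.H = np.array([[1.0, 0.0]])            # we only measure the position
        self.Q = np.eye(2) * q
        self.R = np.array([[r]])

    def update(self, measured_edge):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the measured edge position (skip if detection failed).
        if measured_edge is not None:
            y = measured_edge - (self.H @ self.x)[0]
            S = self.H @ self.P @ self.H.T + self.R
            K = (self.P @ self.H.T) / S[0, 0]
            self.x = self.x + K.flatten() * y
            self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```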

    Virtualization of Robotic Hands Using Mobile Devices

    This article presents a multiplatform application for the tele-operation of a robot hand using virtualization in Unity 3D. This approach grants usability to users who need to control a robotic hand, allowing supervision in a collaborative way. This paper focuses on a user application designed for the 3D virtualization of a robotic hand and the tele-operation architecture. The designed system allows for the simulation of any robotic hand. It has been tested with the virtualization of the four-fingered Allegro Hand of SimLab with 16 degrees of freedom, and the Shadow hand with 24 degrees of freedom. The system allows for the control of the position of each finger by means of joint and Cartesian coordinates. All user control interfaces are designed using Unity 3D, such that a multiplatform philosophy is achieved. The server side allows the user application to connect to a ROS (Robot Operating System) server through a TCP/IP socket, to control a real hand or to share a simulation of it among several users. If a real robot hand is used, real-time control and feedback of all the joints of the hand are communicated to the set of users. Finally, the system has been tested with a set of users, with satisfactory results. This research was funded by Ministerio de Ciencia, Innovación y Universidades, grant number RTI2018-094279-B-I00.
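    The server-side bridge (a TCP/IP socket that forwards commands from the Unity client to ROS) can be sketched as follows; the newline-delimited JSON message layout and the topic name are assumptions made for illustration, not the protocol used in the article.

```python
import json
import socket

import rospy
from sensor_msgs.msg import JointState

def run_bridge(host="0.0.0.0", port=5005, topic="/hand/joint_cmd"):
    """Accept one TCP client and republish its JSON joint commands as JointState.

    The JSON layout {"names": [...], "positions": [...]} and the topic name
    are placeholders, not taken from the article.
    """
    rospy.init_node("unity_hand_bridge")
    pub = rospy.Publisher(topic, JointState, queue_size=1)

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen(1)
    conn, _ = server.accept()

    buf = b""
    while not rospy.is_shutdown():
        data = conn.recv(4096)
        if not data:
            break
        buf += data
        while b"\n" in buf:                      # one JSON command per line
            line, buf = buf.split(b"\n", 1)
            cmd = json.loads(line)
            msg = JointState()
            msg.header.stamp = rospy.Time.now()
            msg.name = cmd["names"]
            msg.position = cmd["positions"]
            pub.publish(msg)
    conn.close()
```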