677 research outputs found

    Detecting trash and valuables with machine vision in passenger vehicles

    This research determines the feasibility of a machine-vision-based detection system to identify the presence of trash or valuables in passenger vehicles, using a custom-designed in-car camera module. The system captures images of the rear seating compartment of a car intended for use in shared vehicle fleets. Onboard processing of the image was done by a Raspberry Pi computer, while image classification was done by a remote server. Two vision-based algorithmic models were created for classifying the images: a convolutional neural network (CNN) and a background subtraction model. The CNN was a fine-tuned VGG16 model and produced a final prediction accuracy of 91.43% on a batch of 140 test images. For the output analysis, a confusion matrix was used to relate correct and false predictions, and the certainties of the three classes for each classified image were also examined. The estimated execution time of the system, from image capture to displaying the results, ranged between 5.7 and 11.5 seconds. The background subtraction model failed for this application due to its inability to form a stable background estimate. The CNN's incorrect classifications were attributable to external sources of variation in the images, such as extreme shadows and a lack of contrast between the objects and their neighbouring background. Changing the camera location and expanding the training image set were proposed as possible future research.
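    The background-subtraction idea mentioned in the abstract can be sketched as a running-average background estimate compared against the current frame. This is a minimal illustration, not the paper's implementation; all function names, the smoothing factor and the threshold are assumptions:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: the background estimate slowly
    adapts toward the current frame."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30.0):
    """Pixels deviating strongly from the background estimate are
    flagged as candidate objects (trash or valuables)."""
    return np.abs(frame.astype(float) - bg) > thresh

# Toy grayscale "rear seat" scene: flat background, one bright object.
bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[2:4, 2:4] = 200.0          # object left on the seat
mask = foreground_mask(bg, frame)
```

    A strong shadow shifts many pixel values at once, so large parts of the mask fire spuriously, which is consistent with the unstable background estimate reported above.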

    Garbage Collection and Sorting with a Mobile Manipulator using Deep Learning and Whole-Body Control

    Domestic garbage management is an important aspect of a sustainable environment. This paper presents a novel garbage classification and localization system for grasping and placement in the correct recycling bin, integrated on a mobile manipulator. In particular, we first introduce and train a deep neural network (namely, GarbageNet) to detect different types of recyclable garbage. Secondly, we use a grasp localization method to identify a suitable grasp pose to pick the garbage from the ground. Finally, we perform grasping and sorting of the objects by the mobile robot through a whole-body control framework. We experimentally validate the method, both on visual RGB-D data and indoors on a real full-size mobile manipulator, for collection and recycling of garbage items placed on the ground.
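    The sorting step of such a detect-then-sort pipeline reduces to routing a detected class to a bin. The class labels and bin mapping below are illustrative assumptions, not GarbageNet's actual label set:

```python
# Hypothetical label-to-bin routing: a detector (GarbageNet in the
# paper) outputs a class label, and the manipulator places the item
# in the matching recycling bin.
BIN_FOR_CLASS = {
    "plastic_bottle": "plastics",
    "soda_can": "metals",
    "newspaper": "paper",
    "banana_peel": "organics",
}

def target_bin(label, default="residual"):
    """Map a detected garbage class to a recycling bin; unknown
    classes fall back to the residual-waste bin."""
    return BIN_FOR_CLASS.get(label, default)
```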

    An Energy Saving Road Sweeper Using Deep Vision for Garbage Detection

    Road sweepers are ubiquitous machines that help preserve our cities' cleanliness and health by collecting road garbage and sweeping dirt from our streets and sidewalks. They are often very mechanical instruments, needing to operate in harsh conditions and to deal with all sorts of abandoned trash and natural garbage. They are usually composed of rotating brushes, collector belts and bins, and sometimes water or air streams. All of these mechanical tools are usually high in power demand and strongly subject to wear and tear. Moreover, due to the simple working logic these cleaning machines often employ, these tools work in an “always on”/“max power” state, and any further regulation is left to the pilot. Therefore, adding artificial intelligence able to correctly operate these tools in a semi-automatic way would be greatly beneficial. In this paper, we propose an automatic road garbage detection system, able to locate with great precision most types of road waste, and to correctly instruct a road sweeper in order to handle them. With this simple addition to an existing sweeper, we are able to save more than 80% of the electrical power currently absorbed by the cleaning systems and to reduce brush wear by the same amount (prolonging brush lifetime). This is done by choosing when to use the brushes and when not to, with how much strength, and where. The only hardware components needed by the system are a camera and a PC board able to read the camera output (and communicate via CanBus). The software of the system is mainly composed of a deep neural network for semantic segmentation of images, and a real-time program to control the sweeper actuators with the appropriate timings. To prove the claimed results, we ran extensive tests onboard such a truck, as well as benchmark tests for accuracy, sensitivity, specificity and inference speed of the system.
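    The "when, how strong, and where" brush gating described above could look roughly like the per-sector logic below. The sector layout, minimum power level and coverage scaling are illustrative assumptions, not the paper's controller:

```python
import numpy as np

def brush_commands(mask, n_brushes=3, full_power=100.0):
    """Split a binary garbage mask (H x W, True = waste pixel) into
    vertical sectors, one brush each. A brush runs only when its
    sector contains waste, at a power that grows with coverage,
    instead of the always-on/max-power default."""
    sectors = np.array_split(mask, n_brushes, axis=1)
    powers = []
    for s in sectors:
        coverage = s.mean()          # fraction of waste pixels
        # idle when clean; otherwise at least 20% power, up to full
        powers.append(0.0 if coverage == 0 else
                      full_power * min(1.0, 0.2 + coverage))
    return powers

# Waste only in the left third of the camera frame.
mask = np.zeros((6, 9), dtype=bool)
mask[:, 0:3] = True
powers = brush_commands(mask)
```

    Gating power this way is what makes the claimed energy and wear savings possible: brushes over clean road sectors simply stay off.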

    Visual Material Characteristics Learning for Circular Healthcare

    The linear take-make-dispose paradigm at the foundations of our traditional economy is proving to be unsustainable due to waste pollution and material supply uncertainties. Hence, increasing the circularity of material flows is necessary. In this paper, we make a step towards circular healthcare by developing several vision systems targeting three main circular economy tasks: resources mapping and quantification, waste sorting, and disassembly. The performance of our systems demonstrates that representation-learning vision can improve the recovery chain, where autonomous systems are key enablers due to the contamination risks. We also published two fully annotated datasets, for image segmentation and for key-point tracking in disassembly operations of inhalers and glucose meters. The datasets and source code are publicly available.

    Vision and Tactile Robotic System to Grasp Litter in Outdoor Environments

    The accumulation of litter is increasing in many places and is consequently becoming a problem that must be dealt with. In this paper, we present a manipulator robotic system to collect litter in outdoor environments. This system has three functionalities. Firstly, it uses colour images to detect and recognise litter comprising different materials. Secondly, depth data are combined with pixels of waste objects to compute a 3D location and to segment three-dimensional point clouds of the litter items in the scene. The grasp in 3 Degrees of Freedom (DoFs) is then estimated for a robot arm with a gripper from the segmented cloud of each instance of waste. Finally, two tactile-based algorithms are implemented and employed to provide the gripper with a sense of touch. This work uses two low-cost vision-based tactile sensors at the fingertips. One of them addresses the detection of contact (obtained from tactile images) between the gripper and solid waste, while the other has been designed to detect slippage in order to prevent the grasped objects from falling. Our proposal was successfully tested by carrying out extensive experimentation with objects varying in size, texture, geometry and material in different outdoor environments (a tiled pavement, a surface of stone/soil, and grass). Our system achieved an average score of 94% for the detection and Collection Success Rate (CSR) as regards its overall performance, and of 80% for the collection of items of litter at the first attempt. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research work was funded by the Valencian Regional Government and FEDER through the PROMETEO/2021/075 project. The computer facilities were provided through the IDIFEFER/2020/003 project.
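    The slip-detection idea (a camera-based tactile sensor watching the fingertip imprint) can be sketched as frame differencing over consecutive tactile images: if the imprint moves while the object is held, the object is sliding and the grip should be tightened. Thresholds and names here are assumptions, not the paper's algorithm:

```python
import numpy as np

def slipping(prev_img, curr_img, thresh=10.0, min_frac=0.03):
    """Flag slip when a sufficient fraction of tactile pixels changes
    between consecutive frames while grasping: the imprint moving
    across the sensor suggests the object is sliding."""
    diff = np.abs(curr_img.astype(float) - prev_img.astype(float))
    return (diff > thresh).mean() > min_frac

# Stable grasp: imprint stays put.  Slip: imprint shifts one row.
prev = np.zeros((10, 10)); prev[4:6, 4:6] = 255.0
stable = prev.copy()
slipped = np.zeros((10, 10)); slipped[5:7, 4:6] = 255.0
```

    A controller would poll this flag at the tactile frame rate and increase gripper closure force whenever it fires.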

    A Multi-Level Approach to Waste Object Segmentation

    We address the problem of localizing waste objects from a color image and an optional depth image, which is a key perception component for robotic interaction with such objects. Specifically, our method integrates the intensity and depth information at multiple levels of spatial granularity. Firstly, a scene-level deep network produces an initial coarse segmentation, based on which we select a few potential object regions to zoom in on and perform fine segmentation. The results of the above steps are further integrated into a densely connected conditional random field that learns to respect the appearance, depth, and spatial affinities with pixel-level accuracy. In addition, we create a new RGBD waste object segmentation dataset, MJU-Waste, that is made public to facilitate future research in this area. The efficacy of our method is validated on both MJU-Waste and the Trash Annotation in Context (TACO) dataset. Paper appears in Sensors 2020, 20(14), 381
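    The coarse-to-fine step (pick uncertain regions of the scene-level segmentation and re-segment them) amounts to selecting low-confidence regions to revisit. The grid size and threshold below are illustrative assumptions, not the paper's region-proposal mechanism:

```python
import numpy as np

def zoom_regions(confidence, cell=4, thresh=0.7):
    """Tile a coarse per-pixel confidence map into cells and return
    the (row, col) origins of cells whose mean confidence is low;
    these are the regions a finer segmentation pass would revisit."""
    h, w = confidence.shape
    out = []
    for r in range(0, h, cell):
        for c in range(0, w, cell):
            if confidence[r:r + cell, c:c + cell].mean() < thresh:
                out.append((r, c))
    return out

conf = np.ones((8, 8))
conf[4:8, 0:4] = 0.3        # one uncertain corner of the scene
regions = zoom_regions(conf)
```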

    Trash and recyclable material identification using convolutional neural networks (CNN)

    The aim of this research is to improve municipal trash collection using image processing algorithms and deep learning technologies for detecting trash in public spaces. This research will help to improve trash management systems and create a smart city. Two Convolutional Neural Networks (CNN), both based on the AlexNet network architecture, were developed to search for trash objects in an image and to separate recyclable items from landfill trash objects, respectively. The two-stage CNN system was first trained and tested on the benchmark TrashNet indoor image dataset and achieved strong performance, proving the concept. The system was then trained and tested on outdoor images taken by the authors in the intended usage environment. Using the outdoor image dataset, the first CNN achieved a preliminary 93.6% accuracy in identifying trash and non-trash items on an image database of assorted trash items. A second CNN was then trained to distinguish trash destined for landfill from recyclable items, with an accuracy ranging from 89.7% to 93.4% and 92% overall. A future goal is to integrate this image-processing-based trash identification system in a smart trashcan robot with a camera taking real-time photos, so that it can detect and collect the trash all around it.
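    The two-stage cascade described above (trash vs. non-trash first, then recyclable vs. landfill) reduces to a short routing function. The stub predicates below stand in for the two AlexNet-based CNNs and are assumptions for illustration:

```python
def classify_item(image, stage1_is_trash, stage2_is_recyclable):
    """Two-stage cascade: stage 1 filters out non-trash; only images
    flagged as trash reach the recyclable/landfill classifier."""
    if not stage1_is_trash(image):
        return "non-trash"
    return "recyclable" if stage2_is_recyclable(image) else "landfill"

# Stub predicates for illustration (the real system uses two CNNs).
is_trash = lambda img: img != "bench"
is_recyclable = lambda img: img in {"bottle", "can"}
```

    Cascading keeps the second, finer-grained classifier from ever seeing non-trash inputs, which is what lets each stage be trained on its own narrower task.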

    An Automatic Detection of River Garbage Using 360-degree Camera
