11 research outputs found

    Collaborative mobile industrial manipulator : a review of system architecture and applications

    This paper provides a comprehensive review of the development of the Collaborative Mobile Industrial Manipulator (CMIM), a technology currently in high demand. Such a review is needed for an overall understanding of CMIM technology, and this is the first review to combine system architecture with applications, both of which are necessary for a full understanding of the system. The classical CMIM framework is discussed first, covering hardware and software. The hardware subsystems typically involved, such as the mobile platform, manipulator, end-effector and sensors, are presented; on the software side, the planner, controller, perception and interaction modules are described. The common industrial applications (logistics, manufacturing and assembly) are then surveyed. Finally, trends are predicted and open issues identified as references for CMIM researchers. Specifically, more research is needed on interaction, fully autonomous control, coordination and standards. In addition, more experiments in real environments are expected, novel collaborative robotic systems will be proposed, and advanced technology from other areas will be applied to the system. Overall, the system will become more intelligent, collaborative and autonomous.

    Searching for Uncollected Litter with Computer Vision

    This study combines photo metadata and computer vision to quantify where uncollected litter is present. Images from the Trash Annotations in Context (TACO) dataset were used to train an algorithm to detect 10 categories of garbage. Although the model worked well on smartphone photos, it struggled to process images from vehicle-mounted cameras; increasing the variety of perspectives and backgrounds in the dataset should improve its performance in unfamiliar situations. The detections are plotted onto a map which, as accuracy improves, could be used to evaluate waste management strategies and quantify trends.
    Comment: 17 pages, 6 figures
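    The mapping step described in this abstract, aggregating per-photo detections by their GPS metadata into map cells, can be sketched in a few lines. This is an illustrative assumption of how such a map could be built, not the paper's actual pipeline; the function names and grid size are invented.

```python
from collections import Counter

def litter_heatmap(detections, cell_deg=0.001):
    """Aggregate per-photo litter detections into a lat/lon grid.

    `detections` is a list of (lat, lon, n_items) tuples, where the
    coordinates come from each photo's EXIF metadata and n_items is
    the number of litter objects the detector found in that photo.
    Returns a Counter mapping grid-cell keys to total item counts.
    (Cell size and representation are illustrative assumptions.)
    """
    grid = Counter()
    for lat, lon, n_items in detections:
        # Snap coordinates to a grid cell roughly 100 m on a side.
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        grid[cell] += n_items
    return grid

photos = [
    (51.5074, -0.1278, 3),    # three detections in one photo
    (51.50744, -0.12782, 2),  # nearby photo, same cell
    (51.52, -0.1, 1),         # a different neighbourhood
]
heatmap = litter_heatmap(photos)
```

    Cells with high counts would then be rendered as hotspots on the map the abstract describes.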

    An Automatic Detection of River Garbage Using 360-degree Camera


    IMPLEMENTACIÓN DE REDES NEURONALES PARA LA CLASIFICACIÓN DE DESECHOS DENTRO DE UN CESTO INTELIGENTE (Implementation of neural networks for waste classification in a smart bin)

    This article introduces the implementation of hardware components (Raspberry Pi, cameras, sensors, motors, drivers) and software (convolutional neural networks, a mobile app) for waste classification. In the future, the proposed collector and classifier will contribute to environmental care and environmental education. The innovation of the project lies in the automation of the waste classification process through the integration of neural networks; the automatic generation of notifications by the prototype, transmitted through the web server to the mobile application, when a container is full; and the flexibility of the prototype, which allows it to be deployed in a variety of environments, from schools and offices to industrial settings. The advances presented are: creation of a mobile application for visualizing container fill levels, the prototype design, training results for the selected neural networks, and evaluation of the final network on test images.
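    The container-full notification logic described in this abstract can be sketched minimally, assuming an ultrasonic distance sensor pointed down into the bin. The function names, the 50 cm depth, and the 80% threshold are illustrative assumptions, not values from the article.

```python
def fill_level(distance_cm, bin_depth_cm):
    """Convert an ultrasonic sensor's distance-to-contents reading
    into a fill fraction (0.0 = empty, 1.0 = full)."""
    level = 1.0 - distance_cm / bin_depth_cm
    return max(0.0, min(1.0, level))  # clamp out-of-range readings

def should_notify(distance_cm, bin_depth_cm=50.0, threshold=0.8):
    """Decide whether to push a 'container full' notification.
    Depth and threshold are assumed values for illustration."""
    return fill_level(distance_cm, bin_depth_cm) >= threshold
```

    In the prototype, a True result would trigger the push from the web server to the mobile application.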

    DESIGN AND IMPLEMENTATION OF AN AUTONOMOUS VEHICLE FOR WASTE MATERIAL COLLECTION AND FIRE DETECTION

    Autonomous vehicles are becoming increasingly popular in a variety of applications, including waste collection and fire detection. In this work, we present the design and implementation of an autonomous vehicle for these tasks in urban environments. The vehicle is equipped with sensors and control algorithms to navigate, detect and collect plastic bottle waste, and detect fires in real time. The system uses an off-the-shelf, small-sized, battery-operated vehicle, a simple conveyor belt, and a vision-based, computerized system. Machine learning (ML)-based vision tasks direct the vehicle to waste locations and initiate the waste removal process. A fire detection and alarm system is also incorporated, using a camera and machine learning algorithms to detect flames automatically. The vehicle was tested in a simulated urban environment, and the results demonstrate its effectiveness in waste material collection and fire detection. The proposed system has the potential to improve the efficiency and safety of such tasks in urban areas.
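    The step of directing the vehicle toward a detected bottle can be sketched as a simple decision on where the detection sits in the camera frame. This is a hypothetical illustration, not the paper's controller; the names and the deadband value are assumptions.

```python
def steering_command(bbox_center_x, frame_width, deadband=0.1):
    """Map the detected bottle's horizontal position in the camera
    frame to a coarse steering command for the collection vehicle.
    `deadband` (an assumed value) keeps the vehicle driving straight
    when the target is roughly centered."""
    # Normalized offset in [-1, 1]: negative = target left of center.
    offset = (bbox_center_x - frame_width / 2) / (frame_width / 2)
    if offset < -deadband:
        return "left"
    if offset > deadband:
        return "right"
    return "forward"
```

    A real controller would scale the turn rate with the offset rather than emit discrete commands, but the routing logic is the same.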

    Mobile robot navigation in dynamic environments using reinforcement learning

    The navigation system is one of the most important and crucial concerns in mobile robot research. Perception, cognition, action, human-robot interaction, and control are among the difficulties that must be resolved, and each navigation system must handle these common design elements to ensure that all tasks can be completed. The navigation system is built on learning techniques that provide the ability to reason under environmental uncertainty. However, the design is difficult to build due to a number of factors, including the inherent uncertainties of an unstructured environment, which demand a higher design cost, more computational resources, and larger memory. Navigating an autonomous robot in an uncontrolled environment is difficult because it requires the cooperation of several subsystems. Mobile robots must be intelligent enough to adapt to navigation in unfamiliar environments, through environmental cognition, behavioral decisions, and learning; the robot can then navigate around obstacles without colliding and arrive at a specified destination point.
    Combining two processes, environmental mapping and robot behaviors, results in behavior-based navigation; obstacle avoidance, wall following, corridor following, and target seeking are some examples. If only one of the two processes is used, the system must operate in two separate modes. When this approach is used, two major issues arise: (i) the combination of two simple behaviors to form a complex one, and (ii) the integration of more than two behaviors. Behavior induced by multiple concurrent goals can then be smoothly blended into a dynamic sequence of control actions. This study is concerned with the automatic navigation of a mobile robot from its starting point to its destination, solving several sub-problems associated with automatic navigation in an uncontrolled environment.
    Monte Carlo simulation is used to evaluate the algorithm's performance and to show under which conditions it performs better or worse. A reinforcement learning framework is used to obtain a position mapping that optimizes the mobile robot's actions. Reinforcement learning requires a large number of training samples, making it difficult to apply directly to real-world mobile robot navigation; to address this, the robot is trained in a Gazebo simulation environment built on the Robot Operating System (ROS) middleware, followed by Q-learning training on the mobile robots.
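    The Q-learning setup described above can be illustrated with a tabular toy version: a small grid world standing in for the Gazebo/ROS environment, with the same ingredients (discrete states, epsilon-greedy exploration, the standard bootstrapped update). Everything here, grid size, rewards, and hyperparameters, is an illustrative assumption, not the thesis's actual configuration.

```python
import random

# Toy 4x4 grid: states are cells, one obstacle, goal in a corner.
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
SIZE, GOAL, OBSTACLE = 4, (3, 3), (1, 1)

def step(state, action):
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < SIZE and 0 <= c < SIZE) or (r, c) == OBSTACLE:
        return state, -1.0, False          # bumped a wall or the obstacle
    if (r, c) == GOAL:
        return (r, c), 10.0, True          # reached the destination
    return (r, c), -0.1, False             # small cost per move

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        state, done = (0, 0), False
        for _ in range(100):
            # Epsilon-greedy action selection.
            a = random.randrange(4) if random.random() < eps \
                else max(range(4), key=lambda i: q[state][i])
            nxt, reward, done = step(state, ACTIONS[a])
            # Standard Q-learning update toward the bootstrapped target.
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
            if done:
                break
    return q
```

    After training, following the greedy policy (argmax over the Q-values) from the start cell leads around the obstacle to the goal; the Gazebo version replaces the toy `step` with simulated sensing and motion.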

    A Multi-Level Approach to Waste Object Segmentation

    We address the problem of localizing waste objects from a color image and an optional depth image, which is a key perception component for robotic interaction with such objects. Specifically, our method integrates the intensity and depth information at multiple levels of spatial granularity. Firstly, a scene-level deep network produces an initial coarse segmentation, based on which we select a few potential object regions to zoom in and perform fine segmentation. The results of the above steps are further integrated into a densely connected conditional random field that learns to respect the appearance, depth, and spatial affinities with pixel-level accuracy. In addition, we create a new RGBD waste object segmentation dataset, MJU-Waste, that is made public to facilitate future research in this area. The efficacy of our method is validated on both MJU-Waste and the Trash Annotation in Context (TACO) dataset.
    Comment: Paper appears in Sensors 2020, 20(14), 381
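    The step of selecting potential object regions to zoom into can be sketched as connected-component analysis on the coarse mask. This is a generic illustration of region proposal from a coarse segmentation, not the authors' actual selection criterion; all names and the area threshold are invented.

```python
from collections import deque

def object_regions(mask, min_area=2):
    """Find connected foreground regions in a coarse binary mask and
    return their bounding boxes (top, left, bottom, right) -- the crops
    a second, fine-grained stage would zoom into."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill of one connected component.
                queue, cells = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    cr, cc = queue.popleft()
                    cells.append((cr, cc))
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        nr, nc = cr + dr, cc + dc
                        if 0 <= nr < h and 0 <= nc < w \
                                and mask[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                if len(cells) >= min_area:  # drop spurious specks
                    rows = [p[0] for p in cells]
                    cols = [p[1] for p in cells]
                    boxes.append((min(rows), min(cols), max(rows), max(cols)))
    return boxes
```

    Each returned box would be cropped from the original image (with some padding) and fed to the fine segmentation network.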

    Trash and recyclable material identification using convolutional neural networks (CNN)

    The aim of this research is to improve municipal trash collection using image processing algorithms and deep learning technologies for detecting trash in public spaces. This research will help to improve trash management systems and contribute to creating a smart city. Two Convolutional Neural Networks (CNNs), both based on the AlexNet architecture, were developed: one to search for trash objects in an image, and one to separate recyclable items from landfill trash. The two-stage CNN system was first trained and tested on the benchmark TrashNet indoor image dataset and performed well as a proof of concept. The system was then trained and tested on outdoor images taken by the authors in the intended usage environment. On this outdoor dataset, the first CNN achieved a preliminary 93.6% accuracy in identifying trash and non-trash items in a database of assorted trash images. The second CNN was trained to distinguish landfill trash from recyclable items, with accuracy ranging from 89.7% to 93.4% and 92% overall. A future goal is to integrate this image-processing-based trash identification system into a smart trashcan robot with a camera that takes real-time photos, allowing it to detect and collect the trash around it.
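    The two-stage pipeline can be sketched as a simple cascade. The stub predicates below stand in for the two trained AlexNet-based CNNs; all names and the material list are illustrative assumptions, not details from the paper.

```python
def classify_item(image, is_trash, is_recyclable):
    """Two-stage cascade: stage one separates trash from non-trash,
    stage two routes trash to recycling or landfill. In the paper each
    stage is an AlexNet-based CNN; here they are injected predicates."""
    if not is_trash(image):
        return "non-trash"
    return "recyclable" if is_recyclable(image) else "landfill"

# Toy stand-ins for the trained networks (illustrative only).
is_trash = lambda img: img["label"] != "background"
is_recyclable = lambda img: img.get("material") in {"plastic", "metal",
                                                    "paper", "glass"}

items = [
    {"label": "bottle", "material": "plastic"},
    {"label": "food-wrapper", "material": "mixed"},
    {"label": "background"},
]
routed = [classify_item(img, is_trash, is_recyclable) for img in items]
```

    The cascade structure means the second network only ever sees images the first network has already flagged as trash, which is what lets each stage specialize.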