
    Classification and Grip of Occluded Objects

    This paper presents a system for the detection, classification, and gripping of occluded objects using machine vision, artificial intelligence, and an anthropomorphic robot, providing a solution for grasping elements that present occlusions. The deep learning algorithms used are based on Convolutional Neural Networks (CNNs), specifically Fast R-CNN (Fast Region-Based CNN) and DAG-CNN (Directed Acyclic Graph CNN) for pattern recognition; three-dimensional information about the environment was collected with a Kinect V1, and the simulations were tested with the VRML tool. A sequence of detection, classification, and gripping was programmed to determine which elements present occlusions and which type of tool generates each occlusion. According to the user's requirements, the desired elements are delivered (occluded or not) and the unwanted elements are removed. A program was developed with 88.89% accuracy in gripping and delivering occluded objects, using the Fast R-CNN and DAG-CNN networks, which achieved 70.9% and 96.2% accuracy respectively: the first network detects elements without occlusions, and the second classifies the objects into five tools (scalpel, scissors, screwdriver, spanner, and pliers). Gripping an occluded object requires accurate detection of the element located at the top of the pile, so that it can be removed without disturbing the rest of the environment. Additionally, the detection process requires that part of the occluded tool be visible in order to determine the existence of occlusions in the stack
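The top-of-pile selection step described in this abstract can be sketched as follows. The detection format (label, confidence, Kinect depth at the detection's centroid) and the use of smallest depth as the selection key are illustrative assumptions, not the paper's published code.

```python
# Minimal sketch of the top-of-pile selection step: among confident
# detections, the element nearest the camera (smallest depth) is taken
# as the top of the pile, so it can be removed without disturbing the
# objects below it. The detection tuple layout is an assumption.

def pick_top_of_pile(detections, min_confidence=0.5):
    """Return the detection closest to the sensor, or None."""
    candidates = [d for d in detections if d[1] >= min_confidence]
    if not candidates:
        return None
    return min(candidates, key=lambda d: d[2])

stack = [
    ("spanner", 0.91, 842),   # deepest: partially occluded
    ("scissors", 0.87, 810),
    ("scalpel", 0.95, 795),   # closest to the camera: top of the pile
]
print(pick_top_of_pile(stack))  # ('scalpel', 0.95, 795)
```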

    Visual control system for grip of glasses oriented to assistance robotics

    Assistance robotics is presented as a means of improving the quality of life of people with disabilities; an application case in assisted feeding is presented. This paper describes the development of a system based on artificial intelligence techniques for gripping a glass with a robotic arm so that it does not slip during manipulation as the liquid level varies. A Faster R-CNN is used to detect the glass and the arm's gripper; from the data obtained by the network, the mass of the beverage and the distance delta between the gripper and the liquid are estimated. These estimated values are the inputs of a fuzzy system whose output is the torque that the motor driving the gripper must exert. An accuracy of 97.3% was obtained in detecting the elements of interest in the environment with the Faster R-CNN, and a 76% success rate in gripping the glass with the fuzzy algorithm

    Algorithm of detection, classification and gripping of occluded objects by CNN techniques and Haar classifiers

    The following paper presents the development of an algorithm for detecting, classifying, and grasping occluded objects, using artificial intelligence techniques and machine vision for recognition of the environment, and an anthropomorphic manipulator for handling the elements. Five types of tools were used for detection and classification: the user selects one of them, and the program searches for it in the work environment and delivers it to a specific area, overcoming difficulties such as occlusions of up to 70%. The tools were classified using two CNN (convolutional neural network) architectures: a Fast R-CNN (fast region-based CNN) for the detection and classification of occlusions, and a DAG-CNN (directed acyclic graph CNN) for classifying the tools. Furthermore, a Haar classifier was trained in order to compare its ability to recognize occlusions with that of the Fast R-CNN. The Fast R-CNN and DAG-CNN achieved 70.9% and 96.2% accuracy respectively, the Haar classifiers about 50%, and the application achieved 90% accuracy in gripping and delivering occluded objects

    Object gripping algorithm for robotic assistance by means of deep learning

    This paper explores recent state-of-the-art deep learning techniques that are little addressed in robotic applications, and presents a new algorithm based on Faster R-CNN and CNN regression. Machine vision systems implemented for this task tend to require multiple stages to locate an object and allow a robot to grasp it, increasing the noise in the system and the processing time. Region-based convolutional networks solve this problem; two convolutional architectures are used, one for the classification and localization of three types of objects, and one to determine the grip angle for a robotic gripper. In the established virtual environment, the grip algorithm runs at up to 5 frames per second with 100% object classification. The Faster R-CNN implementation achieves 100% accuracy on the classifications of the test database and over 97% average precision in locating the bounding boxes generated for each element, gripping the objects successfully

    Paper biological risk detection through deep learning and fuzzy system

    Given recent worldwide events caused by viral diseases that affect human health, automatic monitoring systems are a research area that has gained strength, in which the detection of sanitary biohazardous waste related to viral diseases stands out. In this field it is essential to generate developments aimed at saving lives, where robotic systems can operate as assistants in various settings. In this work an artificial intelligence algorithm based on two stages is presented: one is the recognition of paper debris using a ResNet-50, chosen for its object localization capacity, and the other is a fuzzy inference system that generates biological-risk alarm states for such debris, where fuzzy logic helps to establish a model for a non-predictive system like the one described. A biohazard detection algorithm for paper waste is described, intended to run on an assistive robot in a residential environment. The training parameters of the network are presented, which achieve 100% accuracy, with confidence levels between 82% for very small waste and 100% for waste in direct view. Timing cycles validate the exposure time of the waste, and the fuzzy system generates risk alarms, yielding a system with an average reliability of 98%
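The alarm stage described above, which turns waste exposure time into a risk state, can be sketched as a simple thresholded stand-in. The thresholds, levels, and confidence gate are assumptions for illustration; the paper uses a fuzzy inference system rather than these hard cut-offs.

```python
# Illustrative stand-in for the exposure-time alarm stage. All thresholds
# and alarm levels here are assumptions, not the paper's fuzzy rule base.

def alarm_level(exposure_s, confidence):
    """Map waste exposure time and detection confidence to an alarm state."""
    if confidence < 0.5:
        return "none"      # detection too weak to raise an alarm
    if exposure_s < 30:
        return "low"       # waste only recently detected
    if exposure_s < 120:
        return "medium"
    return "high"          # waste left exposed for a long period
```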

    Virtual environment for assistant mobile robot

    This paper shows the development of a virtual environment for a mobile robotic system with the ability to recognize basic voice commands, oriented to recognizing a valid command to bring or take an object to or from a specific destination in residential spaces. The recognition of the objects with which the robot assists the user is performed by a machine vision system based on capturing the scene where the robot is located. For each captured image, a region-based convolutional network with transfer learning is used to identify the objects of interest. For human-robot interaction through voice, a convolutional neural network (CNN) with 6 convolution layers is used, trained to recognize commands to carry and bring specific objects within the residential virtual environment. The use of convolutional networks allowed adequate recognition of words and objects, which, by means of the associated robot kinematics, gives rise to the execution of carry/bring commands, yielding a navigation algorithm that operates successfully, with object manipulation exceeding 90% success and allowing the robot to move in the virtual environment even with objects obstructing the navigation path

    Robotic navigation algorithm with machine vision

    In the field of robotics, it is essential to know the work area in which the agent will operate; for that reason, different methods of mapping and spatial localization have been developed for different applications. In this article, a machine vision algorithm is proposed that identifies objects of interest within a work area and determines their polar coordinates relative to the observer, applicable either with a fixed camera or on a mobile agent such as the one presented in this document. The developed algorithm was evaluated in two situations, determining the positions of six objects around the mobile agent. The results were compared with the real position of each object, reaching a high level of accuracy, with an average error of 1.3271% in distance and 2.8998% in angle
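Reporting an object in polar coordinates relative to the observer, as described above, can be sketched with a pinhole camera model: the pixel column and a depth reading give the lateral offset, from which range and bearing follow. The focal length and principal point below are assumed calibration values, not the paper's.

```python
# Sketch: pixel column + depth -> (range, bearing) relative to the camera.
# FOCAL_PX and CX are assumed calibration constants for a 640x480 frame.
import math

FOCAL_PX = 525.0   # assumed horizontal focal length in pixels
CX = 320.0         # assumed principal point column

def to_polar(u_px, depth_m):
    """Convert a detection's pixel column and depth to (range_m, bearing_deg)."""
    # Lateral offset from the optical axis, via similar triangles.
    x = (u_px - CX) * depth_m / FOCAL_PX
    r = math.hypot(x, depth_m)                     # range to the object
    theta = math.degrees(math.atan2(x, depth_m))   # bearing, 0 deg = straight ahead
    return r, theta

print(to_polar(320, 2.0))  # object on the optical axis: (2.0, 0.0)
```

An object to the right of the image centre yields a positive bearing, matching the observer-relative convention the abstract implies.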

    Obstacle Evasion Algorithm Using Convolutional Neural Networks and Kinect-V1

    The following paper presents the development of an algorithm for evading static obstacles while gripping a desired object, using an anthropomorphic robot, artificial intelligence, and machine vision systems. The algorithm was developed to detect a variable number of obstacles (between 1 and 15) and to grip the desired element, using a robot with 3 degrees of freedom (DoF). A Kinect V1 was used to capture the RGB-D information of the environment, and Convolutional Neural Networks were used for the detection and classification of each element. Capturing the three-dimensional information of the detected objects allows the distances between the obstacles and the robot to be compared, in order to make decisions about the movement of the gripper so that it evades the elements in its path and holds the desired object without colliding. Obstacles less than 18 cm in height above the ground were avoided with a 0% probability of collision under specific environmental conditions, moving the robot from an initial straight-line path to the desired object, a path which changes according to the obstacles present in it. Function tests were carried out according to the manipulator's ability to evade obstacles of different heights located between the robot and the desired object
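The evasion decision described above can be sketched as a height check against the 18 cm clearance: obstacles under it are passed over with a safety margin, taller ones force re-planning. The waypoint representation and margin are illustrative assumptions, not the paper's planner.

```python
# Sketch of a clearance-based evasion decision along a straight approach
# path. The 18 cm figure comes from the abstract; the 4 cm safety margin
# and the waypoint scheme are assumptions for illustration.

CLEARANCE_CM = 18.0  # obstacles below this height were avoided in the paper

def plan_waypoints(obstacle_heights_cm, approach_height_cm=5.0):
    """Return gripper heights along the path; None marks a blocked segment.

    obstacle_heights_cm: heights of detected obstacles, in path order.
    """
    waypoints = []
    for h in obstacle_heights_cm:
        if h < CLEARANCE_CM:
            waypoints.append(h + 4.0)  # pass above with a safety margin
        else:
            waypoints.append(None)     # too tall: flag for re-planning
    waypoints.append(approach_height_cm)  # descend onto the target object
    return waypoints

print(plan_waypoints([10.0, 16.0]))  # [14.0, 20.0, 5.0]
```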