
    Interactive Perception Based on Gaussian Process Classification for House-Hold Objects Recognition and Sorting

    We present an interactive perception model for object sorting based on Gaussian Process (GP) classification that is capable of recognizing object categories from point cloud data. In our approach, FPFH features are extracted from point clouds to describe the local 3D shape of objects, and a Bag-of-Words coding method is used to obtain an object-level vocabulary representation. Multi-class Gaussian Process classification is employed to provide a probabilistic estimate of the identity of the object and plays a key role in the interactive perception cycle – modelling perception confidence. We show results on simulated input data for both SVM- and GP-based multi-class classifiers to validate the recognition accuracy of our proposed perception model. Our results demonstrate that by using a GP-based classifier we obtain true positive classification rates of up to 80%. Our semi-autonomous object sorting experiments show that the proposed GP-based interactive sorting approach outperforms random sorting by up to 30% when applied to scenes comprising configurations of household objects.
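    The Bag-of-Words step described above can be sketched in a few lines: local descriptors (the paper uses FPFH; toy 2-D vectors stand in here) are quantized against a learned codebook and pooled into one object-level histogram, which would then be fed to the multi-class GP (or SVM) classifier. All names, the codebook, and the data below are illustrative assumptions, not taken from the paper.

```python
def nearest_word(descriptor, codebook):
    """Index of the codebook entry closest to the descriptor (squared L2)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(descriptor, codebook[i]))

def bow_histogram(descriptors, codebook):
    """Normalized word-frequency histogram: the object-level representation."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Toy 2-D "vocabulary" and four toy local descriptors for one object.
codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
descriptors = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.05, 0.95]]
print(bow_histogram(descriptors, codebook))  # → [0.25, 0.5, 0.25]
```

    In a real pipeline the codebook would be learned (e.g. by k-means over many FPFH descriptors) and the histograms would form the GP classifier's input features.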

    Self-organization via active exploration in robotic applications

    We describe a neural-network-based robotic system. Unlike traditional robotic systems, our approach focuses on non-stationary problems. We argue that self-organization capability is necessary for any system to operate successfully in a non-stationary environment, and we suggest that self-organization should be based on an active exploration process. We investigated neural architectures having novelty sensitivity, selective attention, reinforcement learning, habit formation, and flexible-criteria categorization properties, and analyzed the resulting behavior (consisting of an intelligent initiation of exploration) by computer simulations. While various computer vision researchers have recently acknowledged the importance of active processes (Swain and Stricker, 1991), the approaches proposed within this new framework still suffer from a lack of self-organization (Aloimonos and Bandyopadhyay, 1987; Bajcsy, 1988). A self-organizing, neural-network-based robot (MAVIN) has recently been proposed (Baloch and Waxman, 1991). This robot has the capability of position-, size-, and rotation-invariant pattern categorization, recognition, and Pavlovian conditioning. Our robot does not initially have invariant processing properties, because of the emphasis we put on active exploration. We maintain the point of view that such invariant properties emerge from an internalization of exploratory sensory-motor activity. Rather than coding the equilibria of such mental capabilities, we seek to capture their dynamics, in order to understand, on the one hand, how the emergence of such invariances is possible and, on the other hand, the dynamics that lead to these invariances. The second point is crucial for an adaptive robot that must acquire new invariances in non-stationary environments, as demonstrated by the inverting-glass experiments of Helmholtz. In future work we will introduce Pavlovian conditioning circuits, with the precise objective of achieving the generation, coordination, and internalization of sequences of actions.
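    Two of the properties named in the abstract, novelty sensitivity and habit formation, can be sketched together: novelty decays as a stimulus is revisited (habituation), and exploration repeatedly attends to whatever is currently most novel. The decay rule and the toy stimuli below are illustrative assumptions, not the authors' equations.

```python
def novelty(visit_counts, stimulus):
    """Novelty decays with repeated exposure (habituation)."""
    return 1.0 / (1.0 + visit_counts.get(stimulus, 0))

def explore(stimuli, steps):
    """Repeatedly attend to the currently most novel stimulus."""
    counts = {}
    trace = []
    for _ in range(steps):
        s = max(stimuli, key=lambda x: novelty(counts, x))
        counts[s] = counts.get(s, 0) + 1
        trace.append(s)
    return trace

print(explore(["A", "B", "C"], 6))  # → ['A', 'B', 'C', 'A', 'B', 'C']
```

    Because each visit habituates the attended stimulus, attention cycles through the environment instead of fixating, which is the basic behavior an intelligent initiation of exploration requires.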

    Computer Vision-based Robotic Arm for Object Color, Shape, and Size Detection

    Various aspects of the human workplace have been influenced by robotics because of its precision and accessibility. Nowadays, industrial activities have become more automated, increasing efficiency while reducing production time, human labor, and risk. Electronic technology has advanced over time, and the ultimate goal of such advances is to make robotic systems as human-like as possible; as a result, robots can perform jobs far more efficiently than humans in challenging situations. In this paper, an automatic computer vision-based robotic gripper has been built that can select and arrange objects to complete various tasks. This study utilizes the image processing methodology of the PixyCMU camera sensor to distinguish multiple objects according to their distinct colors (red, yellow, and green). Next, a preprogrammed command is generated in the robotic arm to pick up the item, employing an Arduino Mega and four MG996R servo motors. Finally, the device releases the object at a fixed position determined by its color. The proposed system can also detect objects' geometrical shapes (circle, triangle, square, rectangle, pentagon, and star) and sizes (large, medium, and small) by utilizing OpenCV image processing libraries in Python. Empirical results demonstrate that the designed robotic arm detects colored objects with 80% accuracy and performs size and shape recognition in real time with 100% accuracy.
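    The color-sorting decision at the heart of such a system can be sketched independently of the hardware: classify an object's average RGB color as red, yellow, or green by its hue angle. The hue thresholds below are illustrative assumptions; the actual system relies on the PixyCMU sensor's on-board color signatures rather than this computation.

```python
import colorsys

def classify_color(r, g, b):
    """Map an 8-bit RGB triple to one of the three sorted colors by hue."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    if hue_deg < 30 or hue_deg >= 330:   # hues near 0° wrap around
        return "red"
    if hue_deg < 75:
        return "yellow"
    if hue_deg < 165:
        return "green"
    return "other"

print(classify_color(200, 30, 30))   # → red
print(classify_color(220, 200, 40))  # → yellow
print(classify_color(40, 180, 60))   # → green
```

    Working in hue rather than raw RGB makes the decision more robust to brightness changes, which is why color-tracking sensors and OpenCV pipelines alike typically threshold in HSV space.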

    Preliminary technology utilization assessment of the robotic fruit harvester

    This report summarizes the results of an analysis whose purpose was to examine the history and progress of mechanical fruit harvesting, to determine the significance of a robotic fruit-tree harvester, and to assess the available market for such a product. Background information that can be used in determining the benefit of a proof-of-principle demonstration is provided; such a demonstration could be a major step toward the transfer of this NASA technology.

    Vision-based Robot Manipulator for Industrial Applications

    This paper presents the multi-stage development of a vision-based object sorting robot manipulator for industrial applications. The main aim of this research is to integrate a vision system with the existing Scorbot in order to widen the capability of the integrated camera-robot system in industrial applications. The modern industrial robot Scorbot-ER 9 Pro is the focus of this research. Currently, the robot does not have an integrated vision system; thus, a camera has been mounted on the robot gripper to achieve the target objectives. The main difficulties include establishing a relevant sequence of operations, developing proper communication between the camera and the robot, and integrating system components such as Matlab, Visual Basic, and SCORBASE.
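    One step any such camera-robot integration must solve is converting a detected object's pixel position into robot workspace coordinates. A minimal sketch under a simple scale-plus-offset calibration assumption follows; the real Scorbot-ER 9 Pro / Matlab pipeline is more involved, and every constant below is hypothetical.

```python
def make_pixel_to_robot(px_origin, robot_origin, mm_per_px):
    """Return a mapper from image pixel coordinates to robot-frame mm,
    assuming the camera axes are aligned with the robot axes."""
    def mapper(px, py):
        x = robot_origin[0] + (px - px_origin[0]) * mm_per_px
        y = robot_origin[1] + (py - px_origin[1]) * mm_per_px
        return (x, y)
    return mapper

# Hypothetical calibration: pixel (320, 240) maps to robot (150, 0) mm,
# at 0.5 mm per pixel.
to_robot = make_pixel_to_robot((320, 240), (150.0, 0.0), 0.5)
print(to_robot(340, 240))  # → (160.0, 0.0)
print(to_robot(320, 200))  # → (150.0, -20.0)
```

    In practice the camera is rarely axis-aligned with the robot, so a full affine or homography calibration fitted from several known pixel/robot point pairs would replace this two-parameter mapping.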