23 research outputs found

    Manipulation monitoring and robot intervention in complex manipulation sequences

    Compared to machines, humans are intelligent and dexterous, and they remain indispensable for many complex tasks in areas such as flexible manufacturing or scientific experimentation. However, they are also subject to fatigue and inattention, which can cause errors. This motivates automated monitoring systems that verify the correct execution of manipulation sequences. To be practical, such a monitoring system should not require laborious programming.

    No Clamp Robotic Assembly with Use of Point Cloud Data from Low-Cost Triangulation Scanner

    The paper presents clamp-less assembly as an important idea in modern assembly. Fixtures such as clamps represent a significant group of industrial equipment in manufacturing plants, and their number can be effectively reduced. The article presents the concept of using an industrial robot equipped with a triangulation scanner in the assembly process in order to minimize the number of clamps that hold units in a particular position in space. It also shows how the system searches for objects in the point cloud using a multi-step processing algorithm proposed in this work, then picks them up, transports them, and positions them in the correct assembly locations with an industrial robot manipulator. The positioning accuracy of the parts was examined, as was the impact of the number of iterations of the model-search algorithm on the accuracy of the determined object positions. The tests show that the presented system is suitable for the assembly of various items, such as plastic packaging, and for palletizing of products. Such a system forms the basis for modern, fully flexible assembly systems.
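    The abstract does not detail the iterative model-search algorithm, but the general idea of iteratively refining an object's pose in a point cloud can be sketched with a minimal ICP-style loop (nearest-neighbour matching plus a Kabsch/SVD rigid-transform solve). This is only an illustration of the generic technique, not the authors' method; all function names are hypothetical:

    ```python
    import numpy as np

    def nearest(scene, pts):
        """Brute-force nearest neighbour: for each point in pts, the closest scene point."""
        d = np.linalg.norm(pts[:, None, :] - scene[None, :, :], axis=2)
        return scene[d.argmin(axis=1)]

    def icp_align(model, scene, iterations=30):
        """Iteratively match model points to their nearest scene points and solve
        the best rigid transform (Kabsch/SVD), refining the estimated pose."""
        est = model.copy()
        for _ in range(iterations):
            target = nearest(scene, est)
            mu_e, mu_t = est.mean(0), target.mean(0)
            H = (est - mu_e).T @ (target - mu_t)      # cross-covariance of matches
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                  # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_e
            est = est @ R.T + t                        # apply refined rigid transform
        return est
    ```

    As in the paper's experiments, more iterations generally mean a more accurate pose estimate, at the cost of computation time.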

    Automatic coarse co-registration of point clouds from diverse scan geometries: a test of detectors and descriptors

    Point clouds are nowadays collected from a plethora of sensors, some with higher accuracy and higher cost, others with lower accuracy but also lower cost. Not only is there a large choice of sensors, but they can also be carried by different platforms, which provide different scan geometries. In this work we test four keypoint detectors and three feature descriptors. We benchmark their calculation time and assess their accuracy in the coarse automatic co-registration of two point clouds collected with different sensors, platforms, and scan geometries. One cloud, which we define as the more accurate and therefore use as reference, was surveyed via a UAV flight with a Riegl MiniVUX-3; the other was collected from a bicycle with a Livox Horizon over a walking path with uneven ground. The novelty of this work lies in comparing several strategies for fast alignment of point clouds from very different surveying geometries, as the drone has a bird's-eye view and the bicycle a ground-based view. An added challenge comes from the lower cost of the bicycle sensor ensemble, which, together with the rough terrain, reasonably results in a lower-accuracy survey. The main idea is to use range images to capture a simplified version of the geometry of the surveyed area and then find the best features to match keypoints. Results show that NARF features detected more keypoints and led to a faster co-registration procedure in this scenario, whereas the accuracy of the co-registration is similar across all combinations of keypoint detectors and descriptors.
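    The core idea here, reducing a 3-D cloud to a range image before detecting keypoints, can be illustrated with a simple spherical projection: each point is binned by azimuth and elevation, and each pixel stores the nearest return. This is a generic sketch with assumed bin sizes, not the paper's exact pipeline:

    ```python
    import numpy as np

    def range_image(points, h=64, w=360):
        """Project a 3-D point cloud (N x 3, sensor at the origin) to a spherical
        range image: rows = elevation bins, cols = azimuth bins, values = range."""
        x, y, z = points.T
        r = np.linalg.norm(points, axis=1)
        az = np.arctan2(y, x)                                   # azimuth in [-pi, pi)
        el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))  # elevation
        col = ((az + np.pi) / (2 * np.pi) * w).astype(int) % w
        row = ((el + np.pi / 2) / np.pi * h).astype(int).clip(0, h - 1)
        img = np.full((h, w), np.inf)
        np.minimum.at(img, (row, col), r)                       # keep nearest return per pixel
        return img
    ```

    Keypoint detectors such as NARF operate on exactly this kind of 2-D range representation, which is much cheaper to process than the raw cloud.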

    CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark

    A long-term visual object tracking performance evaluation methodology and a benchmark are proposed. Performance measures are designed by following a long-term tracking definition to maximize the analysis probing strength. The new measures outperform existing ones in interpretation potential and in better distinguishing between different tracking behaviors. We show that these measures generalize the short-term performance measures, thus linking the two tracking problems. Furthermore, the new measures are highly robust to temporal annotation sparsity and allow annotation of sequences hundreds of times longer than those in current datasets without increasing manual annotation labor. A new challenging dataset of carefully selected sequences with many target disappearances is proposed. A new tracking taxonomy is proposed to position trackers on the short-term/long-term spectrum. The benchmark contains an extensive evaluation of the largest number of long-term trackers to date and a comparison to state-of-the-art short-term trackers. We analyze the influence of tracking architecture implementations on long-term performance and explore various re-detection strategies as well as the influence of visual model update strategies on long-term tracking drift. The methodology is integrated in the VOT toolkit to automate experimental analysis and benchmarking and to facilitate future development of long-term trackers.
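    Long-term measures of this kind are commonly built around tracking precision (averaged over frames where the tracker reports the target) and tracking recall (averaged over frames where the target is actually visible), combined into an F-score. The sketch below is a simplified single-threshold version of that idea, not the paper's exact definition, which integrates over confidence thresholds:

    ```python
    import numpy as np

    def lt_f_score(overlaps, pred_present, gt_present):
        """Simplified long-term tracking F-score.
        overlaps: per-frame overlap with ground truth (0 where the target is absent).
        pred_present: per-frame flag, tracker reports the target.
        gt_present: per-frame flag, target is actually visible."""
        overlaps = np.asarray(overlaps, float)
        pred = np.asarray(pred_present, bool)
        gt = np.asarray(gt_present, bool)
        pr = overlaps[pred].mean() if pred.any() else 0.0   # precision
        re = overlaps[gt].mean() if gt.any() else 0.0       # recall
        return 2 * pr * re / (pr + re) if (pr + re) else 0.0
    ```

    Because recall averages only over frames where the target is visible, a tracker is not penalized for staying silent during genuine disappearances, which is what distinguishes long-term from short-term evaluation.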

    Multi-View Object Instance Recognition in an Industrial Context

    We present a fast object recognition system that codes shape by viewpoint-invariant geometric relations and appearance information. In our advanced industrial work-cell, the system can observe the robot's work space through three pairs of Kinect and stereo cameras, allowing for reliable and complete object information. From these sensors we derive global viewpoint-invariant shape features and robust color features that make use of color normalization techniques. We show that in such a set-up our system achieves high performance already with a very low number of training samples, which is crucial for user acceptance, and that the use of multiple views is crucial for performance. This indicates that our approach can be used in controlled but realistic industrial contexts that require, besides high reliability, fast processing and intuitive, easy use at the end-user side.
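    The abstract does not say which color normalization is used; one standard technique of this family is chromaticity normalization, which divides each channel by the total intensity so that color features become robust to illumination strength. A minimal sketch, offered only as an example of the general class of methods:

    ```python
    import numpy as np

    def chromaticity(rgb):
        """Normalize RGB to chromaticity coordinates (r, g) = (R, G) / (R + G + B).
        Dividing by total intensity discards overall brightness, so the result is
        invariant to uniform illumination scaling (b = 1 - r - g is redundant)."""
        rgb = np.asarray(rgb, float)
        s = rgb.sum(axis=-1, keepdims=True)
        return np.divide(rgb[..., :2], np.where(s == 0, 1, s))  # avoid division by zero
    ```

    A pixel and a twice-as-bright copy of it map to the same chromaticity, which is exactly the robustness property such color features rely on.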

    Krüger, Norbert
