84,446 research outputs found

    Graph-based Object Understanding

    Get PDF
    Computer vision algorithms are becoming increasingly prevalent in our everyday lives. Recognition systems in particular are often employed to automate certain tasks (e.g. quality control). State-of-the-art approaches leverage global shape characteristics, discarding nuanced shape variation in the individual parts of an object. These systems therefore fall short on both learning and exploiting the inherent part structures of objects. By recognizing common substructures between known and queried objects, part-based systems may identify objects more robustly in the presence of occlusion or redundant parts. There are theories that such part-based processing is indeed present in human perception. Leveraging abstracted representations of decomposed objects may additionally offer better generalization from less training data. Enabling computer systems to reason about objects on the basis of their parts is the focus of this dissertation. Any part-based method first requires a segmentation approach to assign object regions to individual parts. Therefore, a 2D multi-view segmentation approach is extended to 3D mesh segmentation. The approach uses the normal and depth information of the objects to reliably extract part boundary contours. It significantly reduces the training time of the segmentation model compared to other segmentation approaches while still providing good segmentation results on the test data. To explore the benefits of part-based systems, a symbolic object classification dataset is created that inherently adheres to underlying rules composed of spatial relations between part entities. This abstract data is also transformed into 3D point clouds, which makes it possible to benchmark conventional 3D point cloud classification models against a newly developed model that utilizes ground-truth symbol segmentations for the classification task. The new model achieves improved classification performance, offering empirical evidence that part segmentation may boost classification accuracy if the data obey part-based rules. Additionally, predictions of the model on segmented 3D data are compared against those of a modified variant that directly uses the underlying symbols, quantifying the perception gap, i.e. the issues that arise when extracting the symbols from the segmented point clouds. Furthermore, a framework for 3D object classification on real-world objects is developed. The designed pipeline automatically segments an object into its parts, creates the corresponding part graph and predicts the object class based on the similarity to graphs in the training dataset. The advantage of subgraph similarity is exploited in a second experiment, where out-of-distribution samples of objects containing redundant parts are created. Whereas traditional classification methods working on the global shape may misinterpret the extracted feature vectors, the part-graph model produces robust predictions. Lastly, the task of object repair is considered, in which a single part of a given object is compromised by a certain manipulation. As human-made objects follow an underlying part structure, a system is developed that exploits this structure to mend the object. Given the global 3D point cloud of a compromised object, the object is automatically segmented, shape features are extracted from the individual part clouds, and these are fed into a Graph Neural Network that predicts a manipulation action for each part.
    In conclusion, the opportunities of part-graph based methods for object understanding to improve 3D classification and regression tasks are explored. These approaches may enhance robotic computer vision pipelines in the future.
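
    To make the part-graph idea concrete, the following is a minimal sketch (PyTorch) of classifying an object from a graph of its segmented parts. The node features (centroid plus bounding-box extent), the part adjacency, and all layer sizes are illustrative assumptions, not the dissertation's actual architecture.

        import torch
        import torch.nn as nn

        def part_features(part_clouds):
            """One feature vector per part: centroid plus axis-aligned extent (6-D)."""
            feats = []
            for pts in part_clouds:                  # pts: (N_i, 3) points of one part
                centroid = pts.mean(dim=0)
                extent = pts.max(dim=0).values - pts.min(dim=0).values
                feats.append(torch.cat([centroid, extent]))
            return torch.stack(feats)                # (P, 6), one row per part

        class PartGraphClassifier(nn.Module):
            def __init__(self, in_dim=6, hidden=64, num_classes=10, layers=2):
                super().__init__()
                self.embed = nn.Linear(in_dim, hidden)
                self.msg = nn.ModuleList(
                    [nn.Linear(2 * hidden, hidden) for _ in range(layers)])
                self.head = nn.Linear(hidden, num_classes)

            def forward(self, x, edges):
                # x: (P, in_dim) part features; edges: (E, 2) long tensor of
                # directed part-adjacency pairs (source, destination)
                h = torch.relu(self.embed(x))
                src, dst = edges[:, 0], edges[:, 1]
                for lin in self.msg:
                    # mean-aggregate messages from adjacent parts, then update nodes
                    m = torch.zeros_like(h).index_add_(0, dst, h[src])
                    deg = torch.zeros(h.size(0), 1).index_add_(
                        0, dst, torch.ones(dst.size(0), 1)).clamp(min=1)
                    h = torch.relu(lin(torch.cat([h, m / deg], dim=-1)))
                return self.head(h.mean(dim=0))      # pool parts -> class logits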

    3DMOTFormer: Graph Transformer for Online 3D Multi-Object Tracking

    Full text link
    Tracking 3D objects accurately and consistently is crucial for autonomous vehicles, enabling more reliable downstream tasks such as trajectory prediction and motion planning. Building on the substantial progress in object detection in recent years, the tracking-by-detection paradigm has become a popular choice due to its simplicity and efficiency. State-of-the-art 3D multi-object tracking (MOT) approaches typically rely on non-learned, model-based algorithms such as the Kalman filter, which require many manually tuned parameters. Learning-based approaches, on the other hand, face the problem of adapting the training to the online setting, leading to an inevitable distribution mismatch between training and inference as well as suboptimal performance. In this work, we propose 3DMOTFormer, a learned geometry-based 3D MOT framework built upon the transformer architecture. We use an Edge-Augmented Graph Transformer to reason on the track-detection bipartite graph frame by frame and conduct data association via edge classification. To reduce the distribution mismatch between training and inference, we propose a novel online training strategy with an autoregressive and recurrent forward pass as well as sequential batch optimization. Using CenterPoint detections, our approach achieves 71.2% and 68.2% AMOTA on the nuScenes validation and test splits, respectively. In addition, a trained 3DMOTFormer model generalizes well across different object detectors. Code is available at: https://github.com/dsx0511/3DMOTFormer. (17 pages, 8 figures; accepted at ICCV 2023.)
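
    A hedged sketch (PyTorch) of the edge-classification idea behind this kind of association: score every track-detection edge of the bipartite graph, then match greedily. The MLP edge scorer, the 3-D centre-offset feature, and the greedy matcher are assumptions for illustration; 3DMOTFormer itself uses an Edge-Augmented Graph Transformer and a more elaborate training scheme.

        import torch
        import torch.nn as nn

        class EdgeScorer(nn.Module):
            def __init__(self, dim=64):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(2 * dim + 3, dim), nn.ReLU(), nn.Linear(dim, 1))

            def forward(self, tracks, dets, centers_t, centers_d):
                # tracks: (T, dim), dets: (D, dim) embeddings; edge feature =
                # both node embeddings plus the 3-D offset between box centres
                T, D = tracks.size(0), dets.size(0)
                offs = centers_d[None, :, :] - centers_t[:, None, :]      # (T, D, 3)
                pair = torch.cat([tracks[:, None, :].expand(T, D, -1),
                                  dets[None, :, :].expand(T, D, -1), offs], -1)
                return self.mlp(pair).squeeze(-1)                         # (T, D) logits

        def greedy_match(logits, thresh=0.0):
            """Pick highest-scoring edges first; each track/detection used once."""
            pairs, used_t, used_d = [], set(), set()
            flat = [(logits[t, d].item(), t, d)
                    for t in range(logits.size(0)) for d in range(logits.size(1))]
            for s, t, d in sorted(flat, reverse=True):
                if s > thresh and t not in used_t and d not in used_d:
                    pairs.append((t, d)); used_t.add(t); used_d.add(d)
            return pairs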

    MLGCN: An Ultra Efficient Graph Convolution Neural Model For 3D Point Cloud Analysis

    Full text link
    The analysis of 3D point clouds has diverse applications in robotics, vision and graphics. Processing them presents specific challenges since they are naturally sparse, can vary in spatial resolution and are typically unordered. Graph-based networks for feature abstraction have emerged as a promising alternative to convolutional neural networks for their analysis, but these can be computationally heavy as well as memory inefficient. To address these limitations we introduce a novel Multi-level Graph Convolution Neural (MLGCN) model, which uses Graph Neural Network (GNN) blocks to extract features from 3D point clouds at specific locality levels. Our approach employs precomputed KNN graphs, where each KNN graph is shared between the GCN blocks inside a GNN block, making it both efficient and effective compared to existing models. We demonstrate the efficacy of our approach on point cloud based object classification and part segmentation tasks on benchmark datasets, showing that it produces results comparable to those of state-of-the-art models while requiring up to a thousand times fewer floating-point operations (FLOPs) and having significantly reduced storage requirements. Our MLGCN model could thus be particularly relevant to point cloud based 3D shape analysis in industrial applications where computing resources are scarce.
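
    An illustrative sketch (PyTorch) of the key efficiency idea described here: build each KNN graph once and reuse the same neighbour indices across every graph-convolution block at that locality level. Layer shapes and the edge feature are assumptions, not the published MLGCN.

        import torch
        import torch.nn as nn

        def knn_indices(points, k):
            """points: (N, 3) -> (N, k) neighbour indices, computed once."""
            dist = torch.cdist(points, points)               # (N, N) pairwise distances
            return dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self-match

        class SharedKNNConv(nn.Module):
            """One graph-conv block; several of these share one index tensor."""
            def __init__(self, in_dim, out_dim):
                super().__init__()
                self.lin = nn.Linear(2 * in_dim, out_dim)

            def forward(self, x, idx):
                # x: (N, in_dim) features; idx: (N, k) precomputed neighbours
                nbrs = x[idx]                                # (N, k, in_dim)
                center = x[:, None, :].expand_as(nbrs)
                edge = torch.cat([center, nbrs - center], dim=-1)
                return torch.relu(self.lin(edge)).max(dim=1).values   # (N, out_dim)

        pts = torch.rand(1024, 3)
        idx = knn_indices(pts, k=16)          # computed once...
        block1 = SharedKNNConv(3, 64)
        block2 = SharedKNNConv(64, 64)
        h = block2(block1(pts, idx), idx)     # ...then shared by both conv blocks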

    PolarMOT: How Far Can Geometric Relations Take Us in 3D Multi-Object Tracking?

    Full text link
    Most (3D) multi-object tracking methods rely on appearance-based cues for data association. By contrast, we investigate how far we can get by only encoding geometric relationships between objects in 3D space as cues for data-driven data association. We encode 3D detections as nodes in a graph, where spatial and temporal pairwise relations among objects are encoded via localized polar coordinates on graph edges. This representation makes our geometric relations invariant to global transformations and smooth trajectory changes, especially under non-holonomic motion. It allows our graph neural network to learn to effectively encode temporal and spatial interactions and to fully leverage contextual and motion cues to obtain a final scene interpretation by posing data association as edge classification. We establish a new state of the art on the nuScenes dataset and, more importantly, show that our method, PolarMOT, generalizes remarkably well across different locations (Boston, Singapore, Karlsruhe) and datasets (nuScenes and KITTI). (ECCV 2022; 17 pages, 5 pages of supplementary material, 3 figures.)
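
    A rough sketch of encoding one pairwise relation in localized polar coordinates, the idea PolarMOT builds its edge features on. The exact feature set here (range, bearing in the source object's own frame, relative heading, time gap) is an assumption based on the abstract, not the paper's full parameterization.

        import torch

        def polar_edge_feature(pos_i, yaw_i, pos_j, yaw_j, dt):
            """pos: (2,) BEV centre, yaw: scalar heading tensor, dt: frame gap."""
            delta = pos_j - pos_i
            rng = delta.norm()                                  # distance between objects
            bearing = torch.atan2(delta[1], delta[0]) - yaw_i   # angle in i's local frame
            rel_yaw = yaw_j - yaw_i                             # relative heading
            # wrap angles to (-pi, pi] so the feature stays smooth under rotation,
            # making it invariant to any global transformation of the scene
            wrap = lambda a: torch.atan2(torch.sin(a), torch.cos(a))
            return torch.stack([rng, wrap(bearing), wrap(rel_yaw),
                                torch.as_tensor(float(dt))])    # (4,) edge feature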

    Graph attention networks for point cloud processing

    Get PDF
    Three-dimensional point cloud datasets are becoming ubiquitous due to the availability of consumer-grade 3D sensors such as Light Detection and Ranging (LiDAR) and RGB-D cameras. Recent advancements in 3D deep learning have dramatically improved the ability to recognize physical objects and interpret indoor and outdoor scenes using point clouds acquired through different sensors. This thesis focuses on deep learning based techniques for point cloud processing. We propose novel architectures leveraging graph attention networks for point cloud-based object detection, classification, and segmentation. The proposed architectures work on point cloud scans directly by constructing a connected graph. For point cloud detection, we use the concatenation of the relative geometric difference and the feature difference between each pair of neighbouring points in the graph, and we introduce a distance-aware down-sampling scheme to improve object detection performance. For point cloud segmentation and classification, we employ a globally aware attention module that uses global, local, and self-feature information. Experiments on several datasets (KITTI, ShapeNet, ModelNet, and Semantic3D) show that our methods yield comparable results for object detection, part segmentation, semantic segmentation, and classification.
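
    A minimal sketch (PyTorch) of neighbourhood attention where the attention input concatenates the relative geometric difference and the feature difference, as the abstract describes. The dimensions and the softmax-over-MLP form are assumptions, not the thesis's exact module.

        import torch
        import torch.nn as nn

        class PointGraphAttention(nn.Module):
            def __init__(self, feat_dim, out_dim):
                super().__init__()
                self.att = nn.Sequential(
                    nn.Linear(3 + feat_dim, out_dim), nn.ReLU(),
                    nn.Linear(out_dim, 1))
                self.proj = nn.Linear(feat_dim, out_dim)

            def forward(self, xyz, feat, idx):
                # xyz: (N, 3) coords, feat: (N, F), idx: (N, k) neighbour indices
                rel_geo = xyz[idx] - xyz[:, None, :]             # (N, k, 3)
                rel_feat = feat[idx] - feat[:, None, :]          # (N, k, F)
                a = self.att(torch.cat([rel_geo, rel_feat], -1)) # (N, k, 1) scores
                w = torch.softmax(a, dim=1)                      # normalize per point
                return (w * self.proj(feat[idx])).sum(dim=1)     # weighted neighbour sum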

    CheckerPose: Progressive Dense Keypoint Localization for Object Pose Estimation with Graph Neural Network

    Full text link
    Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task. Recent studies have shown the great potential of dense correspondence-based solutions, yet improvements are still needed to reach practical deployment. In this paper, we propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects. Firstly, CheckerPose densely samples 3D keypoints from the surface of the 3D object and finds their 2D correspondences progressively in the 2D image. Compared to previous solutions that conduct dense sampling in the image space, our strategy enables correspondence search over a 2D grid (i.e., pixel coordinates). Secondly, for our 3D-to-2D correspondences, we design a compact binary code representation for 2D image locations. This representation not only allows for progressive correspondence refinement but also converts correspondence regression into a more efficient classification problem. Thirdly, we adopt a graph neural network to explicitly model the interactions among the sampled 3D keypoints, further boosting the reliability and accuracy of the correspondences. Together, these novel components make CheckerPose a strong pose estimation algorithm. When evaluated on the popular Linemod, Linemod-O, and YCB-V object pose estimation benchmarks, CheckerPose clearly boosts the accuracy of correspondence-based methods and achieves state-of-the-art performance.
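
    A sketch of the binary-code idea: a 2D pixel location inside a grid of side 2^bits is written as one bit string per axis, so each bit can be predicted as an independent binary classification instead of regressing coordinates. The grid size and bit layout are illustrative assumptions; CheckerPose additionally refines the code progressively, which this sketch omits.

        import torch

        def encode_xy(x, y, bits=7):
            """Map integer pixel coords in [0, 2**bits) to a (2*bits,) {0,1} code."""
            code = [(x >> b) & 1 for b in reversed(range(bits))] + \
                   [(y >> b) & 1 for b in reversed(range(bits))]
            return torch.tensor(code, dtype=torch.float32)

        def decode_xy(code, bits=7):
            """Invert encode_xy from thresholded per-bit probabilities."""
            b = (code > 0.5).long().tolist()
            x = int("".join(map(str, b[:bits])), 2)
            y = int("".join(map(str, b[bits:])), 2)
            return x, y

        code = encode_xy(37, 100)          # 14-bit target for one 3D keypoint
        assert decode_xy(code) == (37, 100)
        # training then reduces to binary cross-entropy on each predicted bit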

    Affordance-Based Grasping Point Detection Using Graph Convolutional Networks for Industrial Bin-Picking Applications

    Get PDF
    Grasping point detection has traditionally been a core robotics and computer vision problem. In recent years, deep learning based methods have been widely used to predict grasping points, and have shown strong generalization capabilities under uncertainty. In particular, approaches that aim to predict object affordances without relying on the object identity have obtained promising results in random bin-picking applications. However, most of them rely on RGB/RGB-D images, and it is not clear to what extent 3D spatial information is used. Graph Convolutional Networks (GCNs) have been successfully used for object classification and scene segmentation in point clouds, and also to predict grasping points in simple laboratory experiments. In the present proposal, we adapted the Deep Graph Convolutional Network model with the intuition that learning from n-dimensional point clouds would boost performance in predicting object affordances. To the best of our knowledge, this is the first time that GCNs have been applied to predict affordances for suction and gripper end effectors in an industrial bin-picking environment. Additionally, we designed a bin-picking oriented data preprocessing pipeline that helps ease the learning process and yields a flexible solution for any bin-picking application. To train our models, we created a highly accurate RGB-D/3D dataset which is openly available on demand. Finally, we benchmarked our method against a 2D Fully Convolutional Network based method, improving the top-1 precision score by 1.8% and 1.7% for suction and gripper, respectively. This project received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 780488.
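
    A hedged sketch (PyTorch) of per-point affordance prediction on a bin-picking point cloud: one EdgeConv-style layer, the building block of the Deep Graph CNN family the authors adapted, followed by a per-point score head. All sizes, the single-layer depth, and the two-channel output are illustrative assumptions, not the paper's model.

        import torch
        import torch.nn as nn

        class AffordanceNet(nn.Module):
            def __init__(self, k=16, hidden=64):
                super().__init__()
                self.k = k
                self.edge = nn.Sequential(nn.Linear(6, hidden), nn.ReLU())
                self.head = nn.Linear(hidden, 2)  # logits: suction / gripper scores

            def forward(self, pts):
                # pts: (N, 3) scene points; KNN graph from pairwise distances
                idx = torch.cdist(pts, pts).topk(
                    self.k + 1, largest=False).indices[:, 1:]        # (N, k)
                center = pts[:, None, :].expand(-1, self.k, -1)
                e = torch.cat([center, pts[idx] - center], dim=-1)   # (N, k, 6)
                h = self.edge(e).max(dim=1).values                   # EdgeConv max-pool
                return self.head(h)                                  # (N, 2) per point

        scores = AffordanceNet()(torch.rand(2048, 3))
        best_suction = scores[:, 0].argmax()   # candidate suction point index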