
    Geometric Pattern Matching Reduces to k-SUM

    We prove that some exact geometric pattern matching problems reduce in linear time to k-SUM when the pattern has a fixed size k. This holds in the real RAM model for searching for a similar copy of a set of k ≥ 3 points within a set of n points in the plane, and for searching for an affine image of a set of k ≥ d+2 points within a set of n points in d-space. As corollaries, we obtain improved real RAM algorithms and decision trees for the two problems. In particular, they can be solved by algebraic decision trees of near-linear height.
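
    For orientation, k-SUM asks whether some k of n given real numbers sum to a target (typically zero). The sketch below is only a brute-force statement of that target problem, not the paper's reduction from pattern matching:

```python
from itertools import combinations

def k_sum_exists(values, k, target=0.0, eps=1e-9):
    """Brute-force k-SUM decision: do any k of the given reals sum to `target`?

    O(n^k) illustration of the problem the paper reduces to; the point of the
    reduction is that faster k-SUM algorithms and shallow algebraic decision
    trees then carry over to the geometric matching problems.
    """
    return any(abs(sum(c) - target) < eps for c in combinations(values, k))
```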

    dARTMAP: A Neural Network for Fast Distributed Supervised Learning

    Distributed coding at the hidden layer of a multi-layer perceptron (MLP) endows the network with memory compression and noise tolerance capabilities. However, an MLP typically requires slow off-line learning to avoid catastrophic forgetting in an open input environment. An adaptive resonance theory (ART) model is designed to guarantee stable memories even with fast on-line learning. However, ART stability typically requires winner-take-all coding, which may cause category proliferation in a noisy input environment. Distributed ARTMAP (dARTMAP) seeks to combine the computational advantages of MLP and ART systems in a real-time neural network for supervised learning. An implementation algorithm presented here describes one class of dARTMAP networks. This system incorporates elements of the unsupervised dART model as well as new features, including a content-addressable memory (CAM) rule for improved contrast control at the coding field. A dARTMAP system reduces to fuzzy ARTMAP when coding is winner-take-all. Simulations show that dARTMAP retains fuzzy ARTMAP accuracy while significantly improving memory compression. National Science Foundation (IRI-94-01659); Office of Naval Research (N00014-95-1-0409, N00014-95-0657)
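
    The winner-take-all versus distributed coding contrast can be made concrete with the fuzzy ART choice function; the sketch below is a hypothetical simplification for illustration only (dARTMAP's CAM rule and the dART dynamics are more involved than this plain normalization):

```python
import numpy as np

def category_activations(x, w, alpha=0.01):
    """Fuzzy ART choice function T_j = |x ^ w_j| / (alpha + |w_j|),
    where ^ is the component-wise minimum and |.| the L1 norm."""
    match = np.minimum(x, w).sum(axis=1)           # |x ^ w_j| for each category j
    return match / (alpha + w.sum(axis=1))

def code(x, w, distributed=True):
    """Winner-take-all vs distributed coding of the activation pattern.

    Hypothetical illustration: with winner-take-all coding the system behaves
    like fuzzy ARTMAP; distributed coding spreads activation across categories.
    """
    T = category_activations(x, w)
    if not distributed:                            # fuzzy ARTMAP limit
        y = np.zeros_like(T)
        y[np.argmax(T)] = 1.0
        return y
    return T / T.sum()                             # spread activation across categories
```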

    Aggregated Deep Local Features for Remote Sensing Image Retrieval

    Remote Sensing Image Retrieval remains a challenging topic due to the special nature of Remote Sensing Imagery. Such images contain many different semantic objects, which clearly complicates the retrieval task. In this paper, we present an image retrieval pipeline that uses attentive, local convolutional features and aggregates them using the Vector of Locally Aggregated Descriptors (VLAD) to produce a global descriptor. We study various system parameters such as the multiplicative and additive attention mechanisms and descriptor dimensionality. We propose a query expansion method that requires no external inputs. Experiments demonstrate that even without training, the local convolutional features and global representation outperform other systems. After system tuning, we can achieve state-of-the-art or competitive results. Furthermore, we observe that our query expansion method increases overall system performance by about 3%, using only the top-three retrieved images. Finally, we show how dimensionality reduction produces compact descriptors with increased retrieval performance and fast retrieval computation times, e.g. 50% faster than the current systems. Comment: Published in Remote Sensing. The first two authors have equal contribution.
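
    As a rough sketch of the aggregation step, VLAD sums the residuals of local descriptors to their nearest visual words and normalizes the result; the optional attention weights below are an assumption standing in for the paper's attentive local features, not its exact mechanism:

```python
import numpy as np

def vlad(descriptors, centroids, weights=None):
    """Aggregate local descriptors into a VLAD global descriptor.

    descriptors: (N, D) local features; centroids: (K, D) visual words;
    weights: optional (N,) attention scores (hypothetical stand-in for the
    attentive features described in the abstract).
    """
    N, D = descriptors.shape
    K = centroids.shape[0]
    if weights is None:
        weights = np.ones(N)
    # assign each descriptor to its nearest visual word
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    v = np.zeros((K, D))
    for i in range(N):
        v[assign[i]] += weights[i] * (descriptors[i] - centroids[assign[i]])
    v = np.sign(v) * np.sqrt(np.abs(v))            # signed square-root (power) normalization
    v = v.reshape(-1)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v             # final L2 normalization
```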

    DeepICP: An End-to-End Deep Neural Network for 3D Point Cloud Registration

    We present DeepICP - a novel end-to-end learning-based 3D point cloud registration framework that achieves comparable registration accuracy to prior state-of-the-art geometric methods. Unlike other keypoint-based methods, where a RANSAC procedure is usually needed, we use various deep neural network structures to establish an end-to-end trainable network. Our keypoint detector is trained through this end-to-end structure and enables the system to avoid the interference of dynamic objects, leverage sufficiently salient features on stationary objects, and, as a result, achieve high robustness. Rather than searching for corresponding points among existing points, the key contribution is that we innovatively generate them based on learned matching probabilities among a group of candidates, which can boost the registration accuracy. Our loss function incorporates both the local similarity and the global geometric constraints to ensure that all of the above network designs can converge towards the right direction. We comprehensively validate the effectiveness of our approach using both the KITTI dataset and the Apollo-SouthBay dataset. Results demonstrate that our method achieves comparable or better performance than the state-of-the-art geometry-based methods. Detailed ablation and visualization analyses are included to further illustrate the behavior and insights of our network. The low registration error and high robustness of our method make it attractive for substantial applications relying on the point cloud registration task. Comment: 10 pages, 6 figures, 3 tables, typos corrected, experimental results updated, accepted by ICCV 2019
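
    For context on the alignment step: once corresponding points and weights are available, the rigid transform has a closed-form weighted least-squares solution (the standard Kabsch/SVD solve sketched below). DeepICP's contribution lies in generating those correspondences from learned matching probabilities, which this sketch simply takes as given:

```python
import numpy as np

def rigid_align(src, dst, weights=None):
    """Recover rotation R and translation t minimizing the weighted error
    sum_i w_i ||R @ src_i + t - dst_i||^2 via the SVD (Kabsch) solution.

    Illustration of the final alignment step only; the network-generated
    correspondences `dst` and confidence `weights` are assumed inputs here.
    """
    if weights is None:
        weights = np.ones(len(src))
    w = weights / weights.sum()
    mu_s = (w[:, None] * src).sum(axis=0)          # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t
```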

    Tree-based Coarsening and Partitioning of Complex Networks

    Many applications produce massive complex networks whose analysis would benefit from parallel processing. Parallel algorithms, in turn, often require a suitable network partition. For solving optimization tasks such as graph partitioning on large networks, multilevel methods are preferred in practice. Yet, complex networks pose challenges to established multilevel algorithms, in particular to their coarsening phase. One way to specify a (recursive) coarsening of a graph is to rate its edges and then contract the edges as prioritized by the rating. In this paper we (i) define weights for the edges of a network that express the edges' importance for connectivity, (ii) compute a minimum weight spanning tree T^m with respect to these weights, and (iii) rate the network edges based on the conductance values of T^m's fundamental cuts. To this end, we also (iv) develop the first optimal linear-time algorithm to compute the conductance values of all fundamental cuts of a given spanning tree. We integrate the new edge rating into a leading multilevel graph partitioner and equip the latter with a new greedy postprocessing for optimizing the maximum communication volume (MCV). Experiments on bipartitioning frequently used benchmark networks show that the postprocessing already reduces MCV by 11.3%. Our new edge rating further reduces MCV by 10.3% compared to the previously best rating, with the postprocessing in place for both ratings. In total, with a modest increase in running time, our new approach reduces the MCV of complex network partitions by 20.4%.
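
    For intuition on step (iv): removing a single tree edge splits the spanning tree into a side S and its complement, and the conductance of that fundamental cut is cut(S) / min(vol(S), vol(complement)), with vol summing vertex degrees. The brute-force O(n·m) sketch below only makes the quantity concrete; it is not the paper's linear-time algorithm:

```python
def fundamental_cut_conductances(n, graph_edges, tree_edges):
    """Conductance of each fundamental cut of a spanning tree.

    graph_edges / tree_edges: lists of (u, v) pairs over vertices 0..n-1,
    with the tree edges a subset of the graph edges. Unweighted for
    simplicity; a weighted variant would sum edge weights instead of counting.
    """
    deg = [0] * n
    for u, v in graph_edges:
        deg[u] += 1
        deg[v] += 1
    total_vol = sum(deg)

    # tree adjacency for the flood fill below
    adj = [[] for _ in range(n)]
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    conductances = {}
    for a, b in tree_edges:
        # collect the side S containing a once edge (a, b) is removed
        S, stack = {a}, [a]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if (x, y) in ((a, b), (b, a)):     # skip the removed tree edge
                    continue
                if y not in S:
                    S.add(y)
                    stack.append(y)
        cut = sum(1 for u, v in graph_edges if (u in S) != (v in S))
        vol_S = sum(deg[u] for u in S)
        conductances[(a, b)] = cut / min(vol_S, total_vol - vol_S)
    return conductances
```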