
    Graph Convolutional Neural Networks based on Quantum Vertex Saliency

    This paper proposes a new Quantum Spatial Graph Convolutional Neural Network (QSGCNN) model that can directly learn a classification function for graphs of arbitrary sizes. Unlike state-of-the-art Graph Convolutional Neural Network (GCNN) models, the proposed QSGCNN model incorporates the process of identifying transitively aligned vertices between graphs, and transforms arbitrarily sized graphs into fixed-sized aligned vertex grid structures. To learn representative graph characteristics, a new quantum spatial graph convolution is proposed and employed to extract multi-scale vertex features, in terms of quantum information propagation between the grid vertices of each graph. Since the quantum spatial convolution preserves the grid structure of the input vertices (i.e., the convolution layer does not change the original spatial sequence of vertices), the proposed QSGCNN model can directly employ a traditional convolutional neural network to further learn from the global graph topology. This yields an end-to-end deep learning architecture that integrates graph representation and learning in the quantum spatial graph convolution layer and the traditional convolutional layer for graph classification. The QSGCNN model addresses the information loss and imprecise information representation arising in existing GCNN models that rely on SortPooling or SumPooling layers. Experiments on benchmark graph classification datasets demonstrate the effectiveness of the proposed QSGCNN model in relation to existing state-of-the-art methods.
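
    As a rough illustration of the pipeline this abstract describes (not the authors' implementation), the sketch below assumes graphs have already been transitively aligned into fixed-size vertex grids. The propagation matrix `trans` stands in for the quantum information-propagation operator, and all names are hypothetical; because the graph convolution preserves vertex order, a conventional 1D CNN can be stacked directly on top.

```python
import torch
import torch.nn as nn

class SpatialGraphConv(nn.Module):
    """One spatial graph convolution: propagate vertex features over a
    propagation matrix while preserving the original vertex ordering."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, trans: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_vertices, in_dim); trans: (batch, n_vertices, n_vertices)
        return torch.relu(self.lin(trans @ x))

class QSGCNNSketch(nn.Module):
    """Aligned-grid graph conv layers followed by a conventional 1D CNN."""
    def __init__(self, in_dim: int, hidden: int, n_classes: int):
        super().__init__()
        self.g1 = SpatialGraphConv(in_dim, hidden)
        self.g2 = SpatialGraphConv(hidden, hidden)
        self.cnn = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, trans):
        h = self.g2(self.g1(x, trans), trans)        # multi-scale vertex features
        h = self.cnn(h.transpose(1, 2)).squeeze(-1)  # grid order is left intact
        return self.head(h)

# Usage with random stand-in data: 8 graphs, 32 aligned grid vertices.
x = torch.randn(8, 32, 5)
trans = torch.softmax(torch.randn(8, 32, 32), dim=-1)  # row-stochastic propagation
print(QSGCNNSketch(5, 16, 3)(x, trans).shape)  # torch.Size([8, 3])
```

    The design point the abstract emphasizes is visible in `forward`: no pooling step reorders or discards vertices before the CNN, in contrast to SortPooling or SumPooling.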

    Relational Graph Representation Learning for Predicting Object Affordances

    We address the problem of affordance classification for class-agnostic objects over an open set of actions, by unsupervised learning of object interactions that induces object affordance classes. A novel qualitative spatial representation incorporating depth information is used to construct Activity Graphs that encode object interactions. These Activity Graphs are clustered to obtain interaction classes, from which classes of object affordances are subsequently extracted. Our experiments demonstrate that the method learns object affordances without being scene- or object-specific.
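
    A minimal sketch of the unsupervised clustering step described above, assuming each Activity Graph has already been summarized as a fixed-length descriptor (e.g., a histogram of qualitative spatial relations over time); the descriptor construction, clustering algorithm, and class count are illustrative stand-ins, not the authors' exact method.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_interactions(descriptors: np.ndarray, n_classes: int) -> np.ndarray:
    """Group Activity Graph descriptors into interaction classes."""
    return SpectralClustering(
        n_clusters=n_classes, affinity="nearest_neighbors", random_state=0
    ).fit_predict(descriptors)

# Usage with random stand-in descriptors for 40 Activity Graphs:
rng = np.random.default_rng(0)
labels = cluster_interactions(rng.random((40, 16)), n_classes=4)
print(labels)  # interaction class per Activity Graph
```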

    Spherical Message Passing for 3D Graph Networks

    We consider representation learning from 3D graphs in which each node is associated with a spatial position in 3D. This is an underexplored area of research, and a principled framework is currently lacking. In this work, we propose a generic framework, known as the 3D graph network (3DGN), to provide a unified interface at different levels of granularity for 3D graphs. Built on 3DGN, we propose spherical message passing (SMP) as a novel and specific scheme for realizing the 3DGN framework in the spherical coordinate system (SCS). We conduct formal analyses and show that the relative location of each node in a 3D graph is uniquely defined under the SMP scheme. Thus, SMP represents a complete and accurate architecture for learning from 3D graphs in the SCS. We derive physically-based representations of geometric information and propose SphereNet for learning representations of 3D graphs. We show that existing 3D deep models can be viewed as special cases of SphereNet. Experimental results demonstrate that the use of complete and accurate 3D information in 3DGN and SphereNet leads to significant performance improvements in prediction tasks.
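
    For intuition only, the snippet below computes a per-edge change to spherical coordinates over relative node positions. The actual SMP scheme additionally fixes angles against reference neighbors (distance, angle, torsion) so that each node's location is uniquely determined; this simplified version is an assumption-laden sketch of the coordinate representation, not the paper's construction.

```python
import torch

def spherical_rep(pos_i: torch.Tensor, pos_j: torch.Tensor):
    """Relative position of node j w.r.t. node i as (d, theta, phi):
    distance, polar angle from the z-axis, and azimuth in the x-y plane."""
    rel = pos_j - pos_i                        # (..., 3)
    d = rel.norm(dim=-1)
    cos_theta = rel[..., 2] / d.clamp(min=1e-9)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    phi = torch.atan2(rel[..., 1], rel[..., 0])
    return d, theta, phi

# Example: two nodes of a toy 3D graph.
pi = torch.tensor([0.0, 0.0, 0.0])
pj = torch.tensor([1.0, 1.0, 1.0])
print(spherical_rep(pi, pj))  # d ~ 1.732, theta ~ 0.955, phi ~ 0.785
```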

    Graph-Based Spatio-Temporal Feature Learning for Neuromorphic Vision Sensing

    Neuromorphic vision sensing (NVS) devices represent visual information as sequences of asynchronous discrete events (a.k.a. “spikes”) in response to changes in scene reflectance. Unlike conventional active pixel sensing (APS), NVS allows for significantly higher event sampling rates at substantially increased energy efficiency and robustness to illumination changes. However, feature representation for NVS lags far behind its APS-based counterparts, resulting in lower performance on high-level computer vision tasks. To fully utilize its sparse and asynchronous nature, we propose a compact graph representation for NVS, which allows for end-to-end learning with graph convolutional neural networks. We couple this with a novel end-to-end feature learning framework that accommodates both appearance-based and motion-based tasks. The core of our framework is a spatial feature learning module, which utilizes residual graph convolutional neural networks (RG-CNN) for end-to-end learning of appearance-based features directly from graphs. We extend this with our proposed Graph2Grid block and a temporal feature learning module for efficiently modelling temporal dependencies over multiple graphs and a long temporal extent. We show how our framework can be configured for object classification, action recognition, and action similarity labeling. Importantly, our approach preserves the spatial and temporal coherence of spike events while requiring less computation and memory. Experimental validation shows that our proposed framework outperforms all recent methods on standard datasets. Finally, to address the absence of large real-world NVS datasets for complex recognition tasks, we introduce, evaluate, and make available the American Sign Language letters dataset (ASL-DVS), as well as the human action datasets UCF101-DVS, HMDB51-DVS, and ASLAN-DVS.
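
    The following is a hedged sketch of how a compact graph might be built from raw events, in the spirit of the representation described above; the radius search, neighbor cap, and choice of polarity as the node feature are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def events_to_graph(events: np.ndarray, radius: float, max_neighbors: int = 8):
    """events: (N, 4) rows of (x, y, t, polarity), with t pre-scaled so that
    spatial and temporal distances are comparable. Connect each event to its
    nearest spatio-temporal neighbors within `radius`."""
    coords = events[:, :3]
    edges = []
    for i in range(len(events)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        for j in np.argsort(d)[1 : max_neighbors + 1]:  # skip self at index 0
            if d[j] <= radius:
                edges.append((i, int(j)))
    node_features = events[:, 3:4]  # polarity as the node feature
    return node_features, np.asarray(edges)

# Usage with random stand-in events:
rng = np.random.default_rng(0)
ev = np.column_stack([rng.random((50, 3)), rng.choice([-1.0, 1.0], 50)])
feats, edges = events_to_graph(ev, radius=0.3)
print(feats.shape, edges.shape)
```

    Because only events within a local spatio-temporal neighborhood are linked, the graph stays sparse, which is what keeps downstream graph convolutions cheap relative to dense frame-based processing.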

    Spatial vs. Graph-Based Formula Retrieval

    Recently, math formula search engines have become a useful tool for novice users learning a new topic. While systems with formula retrieval capabilities already exist, they rely on prefix matching and typed query entry. This can be an obstacle for novice users who are not proficient with languages used to express formulas, such as LaTeX, who do not remember the left end of a formula, or who wish to match a formula at multiple locations (e.g., using `\int \quad\quad dx' as a query). We generalize a one-dimensional spatial encoding for word spotting in handwritten document images, the Pyramidal Histogram of Characters (PHOC), to obtain the two-dimensional XY-PHOC, providing robust spatial embeddings with modest storage requirements and without the costly operations used to generate graphs. The spatial representation captures the relative positions of symbols without needing to store explicit edges between symbols, and it is able to match queries that form disjoint subgraphs within indexed formulas. Existing graph- and tree-based formula retrieval models are not designed to handle disjoint graphs: relationships may be added to a query that do not exist in the final formula, making it less similar for matching. XY-PHOC embeddings provide a simple spatial embedding with competitive results in formula similarity search and autocompletion, and they support queries composed of symbols in two dimensions, without the need to form a connected graph for search.
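
    A toy sketch of the XY-PHOC idea as described: at each pyramid level the formula region is split into that many bins along x and along y, and each symbol label records which bins it occupies, so no edges between symbols are ever stored. The level set and output encoding here are assumptions for illustration.

```python
from collections import defaultdict

def xy_phoc(symbols, levels=(1, 2, 3, 4)):
    """symbols: iterable of (label, x, y) with x, y normalized to [0, 1].
    At each pyramid level, the region is split into `level` bins along
    each axis; every symbol label records the bins it falls into."""
    emb = defaultdict(set)
    for label, x, y in symbols:
        for lvl in levels:
            emb[label].add(("x", lvl, min(int(x * lvl), lvl - 1)))
            emb[label].add(("y", lvl, min(int(y * lvl), lvl - 1)))
    return dict(emb)

# Example: four symbols of a one-line formula, all at mid-height.
print(xy_phoc([("\\int", 0.05, 0.5), ("f", 0.4, 0.5),
               ("d", 0.7, 0.5), ("x", 0.85, 0.5)]))
```

    Matching can then score the overlap between a query symbol's bin sets and those indexed for a formula, which is why queries that form disjoint subgraphs pose no problem for this representation.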