2,098 research outputs found

    Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective

    Neural-symbolic computing has become a subject of interest for both academic and industrial research laboratories. Graph Neural Networks (GNNs) are widely used in relational and symbolic domains, with applications in combinatorial optimization, constraint satisfaction, relational reasoning, and other scientific domains. The need for improved explainability, interpretability, and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state of the art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.
    Comment: Updated version, draft of accepted IJCAI2020 Survey Paper
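    A minimal sketch of the message-passing computation that the GNNs surveyed here build on: each node aggregates its neighbours' states and updates its own state through a shared learned map. The neighbour-sum aggregation, tanh update, and the weight matrices W_self and W_neigh are illustrative choices, not a model taken from the paper.

```python
import numpy as np

def gnn_layer(H, adj, W_self, W_neigh):
    """One round of message passing.

    H: (n, d) matrix of node states; adj: (n, n) 0/1 adjacency matrix;
    W_self, W_neigh: (d, d) placeholder learned parameters.
    """
    messages = adj @ H                           # sum of neighbour states per node
    return np.tanh(H @ W_self + messages @ W_neigh)

# Toy usage on a 4-node cycle graph with random states and weights.
rng = np.random.default_rng(0)
n, d = 4, 8
H = rng.normal(size=(n, d))
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
H = gnn_layer(H, adj, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```

    Stacking several such layers lets information propagate along longer paths of the graph, which is what makes GNNs a natural fit for the relational and symbolic tasks the survey discusses.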

    Computing k-Modal Embeddings of Planar Digraphs

    Given a planar digraph G and a positive even integer k, an embedding of G in the plane is k-modal if every vertex of G is incident to at most k pairs of consecutive edges with opposite orientations, i.e., the incoming and the outgoing edges at each vertex are grouped by the embedding into at most k sets of consecutive edges with the same orientation. In this paper, we study the k-Modality problem, which asks for the existence of a k-modal embedding of a planar digraph. This combinatorial problem is at the very core of a variety of constrained embedding questions for planar digraphs and flat clustered networks.
    First, since the 2-Modality problem can be easily solved in linear time, we consider the general k-Modality problem for any value of k>2 and show that the problem is NP-complete for planar digraphs of maximum degree Delta <= k+3. We relate its computational complexity to that of two notions of planarity for flat clustered networks: Planar Intersection-Link and Planar NodeTrix representations. This allows us to answer in the strongest possible way an open question by Di Giacomo [https://doi.org/10.1007/978-3-319-73915-1_37], concerning the complexity of constructing planar NodeTrix representations of flat clustered networks with small clusters, and to address a research question by Angelini et al. [https://doi.org/10.7155/jgaa.00437], concerning intersection-link representations based on geometric objects that determine complex arrangements. On the positive side, we provide a simple FPT algorithm for partial 2-trees of arbitrary degree, whose running time is exponential in k and linear in the input size.
    Second, motivated by the recently introduced planar L-drawings of planar digraphs [https://doi.org/10.1007/978-3-319-73915-1_36], which require the computation of a 4-modal embedding, we focus our attention on k=4. On the algorithmic side, we show a complexity dichotomy for the 4-Modality problem with respect to Delta by providing a linear-time algorithm for planar digraphs with Delta <= 6. This algorithmic result is based on decomposing the input digraph into its blocks via BC-trees and each of these blocks into its triconnected components via SPQR-trees. In particular, we show that the constraints imposed on the embedding by the rigid triconnected components can be tackled by means of a small set of reduction rules, and we discover that the algorithmic core of the problem lies in special instances of NAESAT, which we prove to be always NAE-satisfiable, a result of independent interest that improves on Porschen et al. [https://doi.org/10.1007/978-3-540-24605-3_14].
    Finally, on the combinatorial side, we consider outerplanar digraphs and show that any such digraph always admits a k-modal embedding with k=4, and that this value of k is best possible for the digraphs in this family.
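    To make the definition concrete, here is a small sketch (not from the paper) that checks whether a given embedding is k-modal. The embedding is represented as a rotation system mapping each vertex to the cyclic order of the orientations of its incident edges; a vertex violates k-modality exactly when that cyclic sequence has more than k orientation changes, i.e., more than k maximal blocks of equal orientation.

```python
def orientation_runs(cyclic_orientations):
    """Count maximal blocks of equal orientation in cyclic order.

    `cyclic_orientations` is a list like ['in', 'in', 'out', 'in'] giving
    the orientations of the edges around one vertex in clockwise order.
    """
    n = len(cyclic_orientations)
    if n == 0:
        return 0
    # Each cyclic position where the orientation flips starts a new block.
    changes = sum(
        cyclic_orientations[i] != cyclic_orientations[(i + 1) % n]
        for i in range(n)
    )
    return changes if changes > 0 else 1  # no flips: one single block

def is_k_modal(rotation_system, k):
    """`rotation_system` maps each vertex to its cyclic orientation list."""
    return all(orientation_runs(rots) <= k for rots in rotation_system.values())

# A vertex whose edges alternate in/out/in/out has 4 blocks, so an
# embedding containing it is 4-modal at that vertex but not 2-modal.
print(is_k_modal({'v': ['in', 'out', 'in', 'out']}, 2))  # False
print(is_k_modal({'v': ['in', 'out', 'in', 'out']}, 4))  # True
```

    Verifying a given embedding is straightforward, as above; the hardness results of the paper concern deciding whether any k-modal embedding exists.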

    The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision

    We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns simply by looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable symbolic programs. To bridge the learning of the two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogous to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the search over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model in learning visual concepts, word representations, and semantic parsing of sentences. Further, our method generalizes easily to new object attributes, compositions, language concepts, scenes, and questions, and even to new program domains. It also enables applications including visual question answering and bidirectional image-text retrieval.
    Comment: ICLR 2019 (Oral). Project page: http://nscl.csail.mit.edu
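    A schematic sketch of the program-execution idea described above, not the authors' implementation: a symbolic program parsed from a question is run step by step over an object-based scene representation. The operator names (filter, count) and the per-object concept scores are illustrative assumptions; NS-CL itself uses learned concept embeddings and a fully differentiable executor over a larger program domain.

```python
# Toy scene: each object carries soft concept scores in [0, 1].
scene = [
    {"red": 0.9, "cube": 0.8, "sphere": 0.1},   # object 0
    {"red": 0.2, "cube": 0.1, "sphere": 0.9},   # object 1
]

def filter_concept(scene, mask, concept):
    # Soft-select objects matching a concept: intersect the current
    # selection mask with each object's score for that concept.
    return [m * obj.get(concept, 0.0) for m, obj in zip(mask, scene)]

def count(mask):
    # Differentiable count: the sum of the soft selection mask.
    return sum(mask)

# Program for "How many red cubes are there?" (parsed from the question):
mask = [1.0] * len(scene)                  # start with all objects selected
mask = filter_concept(scene, mask, "red")
mask = filter_concept(scene, mask, "cube")
print(count(mask))                         # ~0.74: roughly one red cube
```

    Because every operator is a smooth function of the concept scores, the answer signal from question-answer pairs can be backpropagated through the executed program into the perception module, which is what lets the concepts be learned without direct supervision.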