
    A survey of statistical network models

    Networks are ubiquitous in science and have become a focal point for discussion in everyday life. Formal statistical models for the analysis of network data have emerged as a major topic of interest in diverse areas of study, most of which involve some form of graphical representation. Probability models on graphs date back to 1959. Along with empirical studies in social psychology and sociology from the 1960s, these early works generated an active network community and a substantial literature in the 1970s. This effort moved into the statistical literature in the late 1970s and 1980s, and the past decade has seen a burgeoning network literature in statistical physics and computer science. The growth of the World Wide Web, the emergence of online networking communities such as Facebook, MySpace, and LinkedIn, and a host of more specialized professional network communities have intensified interest in the study of networks and network data. Our goal in this review is to provide the reader with an entry point to this burgeoning literature. We begin with an overview of the historical development of statistical network modeling and then introduce a number of examples that have been studied in the network literature. Our subsequent discussion focuses on a number of prominent static and dynamic network models and their interconnections. We emphasize formal model descriptions and pay special attention to the interpretation of parameters and their estimation. We end with a description of some open problems and challenges for machine learning and statistics.
    Comment: 96 pages, 14 figures, 333 references
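    The abstract notes that probability models on graphs date back to 1959 — the classic example from that era is the Erdős–Rényi G(n, p) random graph. As an illustration (not taken from the survey itself), a minimal sketch of sampling such a graph:

    ```python
    import random

    def erdos_renyi(n, p, seed=None):
        """Sample an undirected Erdos-Renyi G(n, p) random graph.

        Each of the n*(n-1)/2 possible edges is included
        independently with probability p.
        """
        rng = random.Random(seed)
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if rng.random() < p]

    # With p = 1 every possible edge appears; with p = 0 none do.
    assert len(erdos_renyi(5, 1.0)) == 10
    assert erdos_renyi(5, 0.0) == []
    ```

    Richer models discussed in the survey (exponential random graph models, stochastic block models) replace this single edge probability with parameters tied to node attributes or latent structure.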

    A Survey on Knowledge Graphs: Representation, Acquisition and Applications

    Human knowledge provides a formal understanding of the world. Knowledge graphs, which represent structural relations between entities, have become an increasingly popular research direction towards cognition and human-level intelligence. In this survey, we provide a comprehensive review of knowledge graphs covering the overall research landscape: 1) knowledge graph representation learning, 2) knowledge acquisition and completion, 3) temporal knowledge graphs, and 4) knowledge-aware applications, and we summarize recent breakthroughs and perspective directions to facilitate future research. We propose a full-view categorization and new taxonomies on these topics. Knowledge graph embedding is organized along four aspects: representation space, scoring function, encoding models, and auxiliary information. For knowledge acquisition, especially knowledge graph completion, embedding methods, path inference, and logical rule reasoning are reviewed. We further explore several emerging topics, including meta relational learning, commonsense reasoning, and temporal knowledge graphs. To facilitate future research on knowledge graphs, we also provide a curated collection of datasets and open-source libraries for different tasks. In the end, we offer a thorough outlook on several promising research directions.
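    One of the scoring functions the survey's taxonomy covers is the translational TransE score, which rates a triple (head, relation, tail) by how close head + relation lands to tail in embedding space. A minimal sketch with toy 2-d embeddings (the vectors here are hypothetical, chosen only for illustration):

    ```python
    import math

    def transe_score(h, r, t):
        """TransE plausibility score: negative L2 distance ||h + r - t||.

        Scores closer to 0 mean the triple (h, r, t) is more plausible;
        training pushes true triples' scores above corrupted ones.
        """
        return -math.sqrt(sum((hi + ri - ti) ** 2
                              for hi, ri, ti in zip(h, r, t)))

    # Toy embeddings: head + relation lands exactly on tail.
    h, r, t = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
    assert transe_score(h, r, t) == 0.0
    # A mismatched tail scores strictly worse (more negative).
    assert transe_score(h, r, [3.0, 1.0]) < 0.0
    ```

    Other scoring families in the survey's categorization (bilinear models such as DistMult, neural encoders) swap this distance for a learned similarity function over the same triple structure.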

    Neural Diagrammatic Reasoning

    Diagrams have been shown to be effective tools for humans to represent and reason about complex concepts. They have been widely used to represent concepts in science teaching, to communicate workflows in industry, and to measure human fluid intelligence. Mechanised reasoning systems typically encode diagrams into symbolic representations that can be easily processed with rule-based expert systems. This relies on human experts to define the framework of diagram-to-symbol mapping and the set of rules to reason with the symbols. As a result, such reasoning systems cannot be easily adapted to other diagrams without a new set of human-defined representation mappings and reasoning rules. Moreover, such systems cannot cope with diagram inputs as raw and possibly noisy images. The need for human input and the lack of robustness to noise significantly limit the applications of mechanised diagrammatic reasoning systems. A key research question then arises: can we develop human-like reasoning systems that learn to reason robustly without predefined reasoning rules? To answer this question, I propose Neural Diagrammatic Reasoning, a new family of diagrammatic reasoning systems which does not have the drawbacks of mechanised reasoning systems. The new systems are based on deep neural networks, a recently popular machine learning method that has achieved human-level performance on a range of perception tasks such as object detection, speech recognition and natural language processing. The proposed systems learn both the diagram-to-symbol mapping and the implicit reasoning rules from data alone, with no prior human input about the symbols and rules in the reasoning tasks. Specifically, I developed EulerNet, a novel neural network model that solves Euler diagram syllogism tasks with 99.5% accuracy. Experiments show that EulerNet learns useful representations of the diagrams and tasks, and is robust to noise and deformation in the input data.
    I also developed MXGNet, a novel multiplex graph neural architecture that solves Raven Progressive Matrices (RPM) tasks. MXGNet achieves state-of-the-art accuracies on two popular RPM datasets. In addition, I developed Discrete-AIR, an unsupervised learning architecture that learns semi-symbolic representations of diagrams without any labels. Lastly, I designed a novel inductive bias module that can be readily used in today's deep neural networks to improve their generalisation capability on relational reasoning tasks.
    Funding: EPSRC Studentship and Cambridge Trust Scholarship
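    The mechanised systems this abstract contrasts with encode a diagram into symbols and then apply hand-written inference rules. As an illustration (not from the thesis), a minimal rule-based step for chaining two "All X are Y" premises — the symbolic counterpart of the Euler diagram syllogisms EulerNet learns directly from raw images — might look like:

    ```python
    def chain_syllogism(premise1, premise2):
        """One hand-coded syllogism rule over 'All X are Y' statements.

        Premises are (subset, superset) pairs, e.g. ("greeks", "humans")
        for 'All greeks are humans'. Returns the derived pair when the
        middle term matches (transitivity of set inclusion), else None.
        This is the kind of rule a human expert must write for each
        diagram type in a mechanised reasoner.
        """
        a, b = premise1
        b2, c = premise2
        if b == b2:          # All a are b, All b are c
            return (a, c)    # => All a are c
        return None

    assert chain_syllogism(("greeks", "humans"),
                           ("humans", "mortals")) == ("greeks", "mortals")
    assert chain_syllogism(("cats", "mammals"),
                           ("birds", "animals")) is None
    ```

    The thesis's point is that both the symbol mapping and rules like this one are learned implicitly by the neural models, rather than supplied by hand.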