17 research outputs found

    Training Graph Neural Networks on Growing Stochastic Graphs

    Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data. Because they are based on matrix multiplications, these convolutions incur high computational costs, leading to scalability limitations in practice. To overcome these limitations, existing methods train GNNs on a smaller number of nodes and then transfer the GNN to larger graphs. Although these methods can bound the difference between the outputs of the GNN on graphs of different sizes, they provide no guarantees relative to the optimal GNN on the very large graph. In this paper, we propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon. We grow the size of the graph as we train, and we show that our proposed methodology -- learning by transference -- converges to a neighborhood of a first-order stationary point on the graphon data. A numerical experiment validates our proposed approach.
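    The core object in the abstract above is a graphon: a symmetric function W(u, v) on [0, 1]^2 from which graphs of any size can be sampled, so the graph can grow between training stages while the GNN weights carry over. A minimal sketch of that sampling-and-growing loop, with a hypothetical graphon chosen only for illustration (the paper's models and normalizations may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(n, graphon):
    """Sample an n-node graph from a graphon: draw latent points
    uniformly on [0, 1], then connect each node pair (i, j) with
    probability graphon(u_i, u_j)."""
    u = rng.uniform(0.0, 1.0, size=n)
    probs = graphon(u[:, None], u[None, :])
    A = (rng.uniform(size=(n, n)) < probs).astype(float)
    A = np.triu(A, 1)          # keep upper triangle: no self-loops
    return A + A.T             # symmetrize

# Hypothetical smooth graphon, for illustration only.
W = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))

# Grow the graph between training stages, as the paper proposes.
for n in (50, 100, 200):
    A = sample_graph(n, W)
    S = A / n                  # shift operator normalized as in graphon limits
    x = rng.standard_normal(n)
    y = S @ x                  # one graph convolution (single filter tap)
    # ...a GNN training step would consume y here; the filter weights
    # are graph-size independent, so they transfer to the next stage.
```

    The key structural point is that the learnable weights multiply S @ x rather than depending on n, which is what makes transference across graph sizes possible.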

    Increase and Conquer: Training Graph Neural Networks on Growing Graphs

    Graph neural networks (GNNs) use graph convolutions to exploit network invariances and learn meaningful features from network data. On large-scale graphs, however, convolutions incur a high computational cost, leading to scalability limitations. Leveraging the graphon -- the limit object of a sequence of graphs -- we consider the problem of learning a graphon neural network (WNN) -- the limit object of a GNN -- by training GNNs on graphs sampled from the graphon via Bernoulli sampling. Under smoothness conditions, we show that: (i) the expected distance between the learning steps on the GNN and on the WNN decreases asymptotically with the size of the graph, and (ii) when training on a sequence of growing graphs, gradient descent follows the learning direction of the WNN. Inspired by these results, we propose a novel algorithm for learning GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training. The algorithm is benchmarked on both a recommendation system and a decentralized control problem, where it retains performance comparable to its large-scale counterpart at a reduced computational cost.
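    The training schedule described above can be sketched end to end on a toy problem: learn a single scalar filter tap h for the graph convolution h * (S x) by gradient descent, growing the graph at fixed intervals. This is a deliberately simplified stand-in (Erdos-Renyi graphs for the Bernoulli sampling, a one-tap filter instead of a GNN, synthetic noiseless targets); the point is only that h is graph-size independent and transfers across stages:

```python
import numpy as np

rng = np.random.default_rng(1)

def bernoulli_graph(n, p_edge=0.3):
    """Erdos-Renyi graph: a simple stand-in for Bernoulli sampling
    from a constant graphon W = p_edge."""
    A = (rng.uniform(size=(n, n)) < p_edge).astype(float)
    A = np.triu(A, 1)
    return A + A.T

h, h_true, lr = 0.0, 2.0, 0.1   # learned tap, ground truth, step size
for n in (32, 64, 128):          # growing-graph schedule
    A = bernoulli_graph(n)
    S = A / n                    # normalization matching the graphon limit
    for _ in range(50):
        x = rng.standard_normal(n)
        y = h_true * (S @ x)     # synthetic targets from the "true" filter
        err = h * (S @ x) - y
        h -= lr * 2 * (err @ (S @ x))   # gradient of the squared-error loss
    # h carries over unchanged to the next, larger graph
```

    Because h has the same dimension at every stage, the expensive early iterations run on small graphs and only the final refinement touches the large one, which is the source of the computational savings claimed above.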

    Multi-task Bias-Variance Trade-off Through Functional Constraints

    Multi-task learning aims to acquire a set of functions, either regressors or classifiers, that perform well across diverse tasks. At its core, the idea behind multi-task learning is to exploit the intrinsic similarity across data sources to aid the learning process in each individual domain. In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and task-specific functions that ignore the other tasks -- to propose a bias-variance trade-off. To control the relationship between the variance (governed by the number of i.i.d. samples) and the bias (introduced by data from the other tasks), we introduce a constrained learning formulation that enforces domain-specific solutions to be close to a central function. We solve this problem in the dual domain, for which we propose a stochastic primal-dual algorithm. Experimental results on a multi-domain classification problem with real data show that the proposed procedure outperforms both the task-specific and the single-classifier approaches.
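    The constrained formulation above can be sketched concretely: minimize the sum of per-task losses subject to each task solution lying within distance eps of a central function, and alternate stochastic primal steps on the Lagrangian with projected dual ascent on the constraint slacks. The sketch below uses two hypothetical linear-regression tasks and an unweighted average as the central-function update; the paper's classifiers, losses and update rules may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two linear-regression "tasks" with similar but distinct ground truths.
d, eps = 3, 0.5
w_true = [np.array([1.0, 0.5, -0.3]), np.array([1.2, 0.4, -0.1])]
X = [rng.standard_normal((200, d)) for _ in w_true]
Y = [x @ wt + 0.1 * rng.standard_normal(200) for x, wt in zip(X, w_true)]

w = [np.zeros(d) for _ in w_true]   # task-specific (primal) variables
w_bar = np.zeros(d)                 # central function
lam = np.zeros(len(w_true))         # dual variables, one per constraint
lr_p, lr_d = 0.05, 0.05

for _ in range(400):
    # Primal step: stochastic gradient of the Lagrangian in each w_i.
    for i in range(len(w)):
        idx = rng.integers(0, 200, size=32)             # minibatch
        g = X[i][idx].T @ (X[i][idx] @ w[i] - Y[i][idx]) / 32
        w[i] -= lr_p * (g + 2 * lam[i] * (w[i] - w_bar))
    # Central function: here simply the average of the task solutions.
    w_bar = sum(w) / len(w)
    # Dual ascent on the constraint slack, projected onto lam >= 0.
    for i in range(len(w)):
        lam[i] = max(0.0, lam[i] + lr_d * (np.sum((w[i] - w_bar) ** 2) - eps))
```

    Shrinking eps toward zero recovers the single-function extreme (high bias, low variance), while a large eps leaves the duals at zero and recovers fully task-specific learning, which is exactly the trade-off the constraint is meant to control.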

    Intrinsically motivated graph exploration using network theories of human curiosity

    Intrinsically motivated exploration has proven useful for reinforcement learning, even without additional extrinsic rewards. When the environment is naturally represented as a graph, how best to guide exploration remains an open question. In this work, we propose a novel approach for exploring graph-structured data motivated by two theories of human curiosity: the information gap theory and the compression progress theory. The theories view curiosity as an intrinsic motivation to optimize for topological features of subgraphs induced by the visited nodes in the environment. We use these proposed features as rewards for graph neural-network-based reinforcement learning. On multiple classes of synthetically generated graphs, we find that trained agents generalize to larger environments and to longer exploratory walks than are seen during training. Our method computes more efficiently than the greedy evaluation of the relevant topological properties. The proposed intrinsic motivations bear particular relevance for recommender systems. We demonstrate that curiosity-based recommendations are more predictive of human behavior than PageRank centrality for several real-world graph datasets, including MovieLens, Amazon Books, and Wikispeedia.
    Comment: 14 pages, 5 figures in main text; 15 pages, 8 figures in supplement.
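    The reward signal described above is a topological feature of the subgraph induced by the visited nodes. As a simple illustrative proxy (not the paper's exact feature set), the sketch below counts independent cycles of the induced subgraph via the first Betti number, E - V + C, computed with a union-find over the visited nodes:

```python
def induced_cycle_reward(edges, visited):
    """Curiosity-style reward: the number of independent cycles
    (first Betti number, E - V + C) of the subgraph induced by the
    visited nodes. A toy proxy for the topological features the
    paper optimizes; the exact features there differ."""
    visited = set(visited)
    sub = [(u, v) for u, v in edges if u in visited and v in visited]
    parent = {v: v for v in visited}

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = len(visited)
    for u, v in sub:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return len(sub) - len(visited) + components

# A 4-cycle plus a dangling node: the reward jumps from 0 to 1 only
# once the walk has visited all four nodes that close the cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]
```

    Used as a per-step reward, a feature like this pays the agent for closing structural "gaps" in what it has seen, rather than merely for visiting new nodes.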

    Changes in visual function and optical and tear film quality in computer users

    Purpose: To assess changes in visual function and optical and tear film quality in computer users.
    Methods: Forty computer workers and 40 controls were evaluated at the beginning and end of a working day. Symptoms were assessed using the Quality of Vision questionnaire (QoV), the 5-item Dry Eye Questionnaire (DEQ-5) and the Symptom Assessment in Dry Eye version II (SANDE II). Tear film quality was evaluated using the Medmont E300 dynamic corneal topography tool to measure the tear film surface quality (TFSQ), TFSQ area and auto tear break-up time (TBUT). Optical quality was assessed by measuring high, low and total ocular aberrations with a Hartmann-Shack wavefront sensor. Visual performance was assessed by measuring photopic and mesopic visual acuity, photopic and mesopic contrast sensitivity and light disturbance.
    Results: Poorer DEQ-5, QoV and SANDE II scores were obtained in computer workers at the end of the working day compared with controls (p = 0.19) or ocular aberrations were observed (p >= 0.09). Additionally, both light disturbance (p = 0.07). In contrast, control subjects exhibited no decrease in any variable during the day.
    Conclusions: While visual acuity remained unchanged, several aspects of visual function and quality of vision decreased over a day of computer use. These changes were accompanied by greater dry eye symptoms and tear film changes, which are likely to have played a fundamental role. The present study provides insight into new metrics to assess digital eye strain.
    Funding: Conselleria d'Educacio, Investigacio, Cultura i Esport de la Generalitat Valenciana, Grant/Award Number: GV/2018/059; Spanish Ministry of Universities, Grant/Award Number: FPU17/0366