
    Training Graph Neural Networks on Growing Stochastic Graphs

    Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data. Because convolutions are based on matrix multiplications, they incur high computational costs, leading to scalability limitations in practice. To overcome these limitations, existing methods train GNNs on graphs with a smaller number of nodes and then transfer the GNN to larger graphs. Although these methods can bound the difference between the outputs of the GNN on graphs of different sizes, they provide no guarantees with respect to the optimal GNN on the very large graph. In this paper, we propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon. We propose to grow the size of the graph as we train, and we show that our proposed methodology -- learning by transference -- converges to a neighborhood of a first-order stationary point on the graphon data. A numerical experiment validates our proposed approach.
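
    The abstract describes growing the graph during training by sampling from the graphon. Below is a minimal sketch of that loop, assuming the graphon is given as a callable W(x, y); the helper names (sample_graph_from_graphon, train_epoch) and the doubling schedule are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def sample_graph_from_graphon(W, n, rng):
            # Draw latent points u_i ~ Uniform[0, 1] and connect nodes i, j
            # independently with probability W(u_i, u_j).
            u = rng.uniform(size=n)
            probs = W(u[:, None], u[None, :])
            upper = np.triu(rng.uniform(size=(n, n)) < probs, k=1)
            adj = upper.astype(float)
            return adj + adj.T  # symmetric adjacency, no self-loops

        def learn_by_transference(W, model, train_epoch, sizes=(100, 200, 400, 800), seed=0):
            # Grow the graph as training proceeds: each stage trains on a larger
            # sample from the same graphon, warm-starting from the previous
            # stage's parameters.
            rng = np.random.default_rng(seed)
            for n in sizes:
                adj = sample_graph_from_graphon(W, n, rng)
                train_epoch(model, adj)
            return model

    For instance, W = lambda x, y: np.exp(-abs(x - y)) defines a smooth graphon on which this loop can be exercised with any GNN and training routine.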

    Increase and Conquer: Training Graph Neural Networks on Growing Graphs

    Graph neural networks (GNNs) use graph convolutions to exploit network invariances and learn meaningful features from network data. However, on large-scale graphs convolutions incur a high computational cost, leading to scalability limitations. Leveraging the graphon -- the limit object of a sequence of graphs -- in this paper we consider the problem of learning a graphon neural network (WNN) -- the limit object of a GNN -- by training GNNs on graphs sampled from the graphon via Bernoulli sampling. Under smoothness conditions, we show that: (i) the expected distance between the learning steps on the GNN and on the WNN decreases asymptotically with the size of the graph, and (ii) when training on a sequence of growing graphs, gradient descent follows the learning direction of the WNN. Inspired by these results, we propose a novel algorithm for learning GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training. This algorithm is benchmarked on both a recommendation system and a decentralized control problem, where it is shown to retain performance comparable to its large-scale counterpart at a reduced computational cost.
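
    A hedged sketch of the growth schedule the abstract outlines: train on the current graph until progress stalls, then increase the number of nodes and continue from the same parameters. The stopping rule, the doubling factor, and the function names are illustrative assumptions.

        def increase_and_conquer(model, sample_graph, train_step, n0=100, n_max=10000, tol=1e-3, max_steps=500):
            # sample_graph(n): Bernoulli-sample an n-node graph from the graphon.
            # train_step(model, graph): one gradient-descent pass, returns the loss.
            n = n0
            while n <= n_max:
                graph = sample_graph(n)
                prev = float("inf")
                for _ in range(max_steps):
                    loss = train_step(model, graph)
                    if abs(prev - loss) < tol:  # learning has stalled at this size
                        break
                    prev = loss
                n *= 2  # grow the graph and keep training the same model
            return model

    The point of the schedule is that, per result (ii), the cheap early iterations on small graphs already follow the learning direction of the WNN, so the expensive large-graph iterations are only needed near convergence.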

    Multi-task Bias-Variance Trade-off Through Functional Constraints

    Multi-task learning aims to acquire a set of functions, either regressors or classifiers, that perform well for diverse tasks. At its core, the idea behind multi-task learning is to exploit the intrinsic similarity across data sources to aid the learning process in each individual domain. In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores dependencies on the other tasks -- to propose a bias-variance trade-off. To control the relationship between the variance (given by the number of i.i.d. samples) and the bias (coming from data from the other tasks), we introduce a constrained learning formulation that enforces domain-specific solutions to be close to a central function. This problem is solved in the dual domain, for which we propose a stochastic primal-dual algorithm. Experimental results for a multi-domain classification problem with real data show that the proposed procedure outperforms both the task-specific classifiers and the single shared classifier.
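
    A minimal sketch of the constrained formulation and a stochastic primal-dual solver, using least-squares losses for concreteness: each task keeps its own weights w_t, constrained to stay within eps of a central vector c, with one dual variable per constraint. All names and the choice of loss are illustrative assumptions.

        import numpy as np

        def primal_dual_multitask(tasks, dim, eps=1.0, lr=1e-2, lr_dual=1e-2, iters=1000, seed=0):
            # tasks: list of (X, y) pairs, one dataset per task.
            rng = np.random.default_rng(seed)
            W = np.zeros((len(tasks), dim))   # task-specific weights
            c = np.zeros(dim)                 # central function
            lam = np.zeros(len(tasks))        # duals for ||w_t - c||^2 <= eps
            for _ in range(iters):
                for t, (X, y) in enumerate(tasks):
                    i = rng.integers(len(y))  # one stochastic sample
                    grad = (X[i] @ W[t] - y[i]) * X[i]
                    W[t] -= lr * (grad + 2 * lam[t] * (W[t] - c))
                c -= lr * np.sum(2 * lam[:, None] * (c - W), axis=0)
                # Dual ascent: tighten tasks that violate their proximity constraint.
                lam = np.maximum(0.0, lam + lr_dual * (np.sum((W - c) ** 2, axis=1) - eps))
            return W, c

    The radius eps is the knob for the trade-off: eps = 0 recovers the single shared function (high bias, low variance), while a large eps decouples the tasks entirely (low bias, high variance).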

    FastSample: Accelerating Distributed Graph Neural Network Training for Billion-Scale Graphs

    Training Graph Neural Networks (GNNs) on a large monolithic graph presents unique challenges: the graph cannot fit within a single machine, and it cannot be decomposed into smaller disconnected components. Distributed sampling-based training distributes the graph across multiple machines and trains the GNN on small parts of the graph that are randomly sampled at every training iteration. We show that in a distributed environment, the sampling overhead is a significant component of the training time for large-scale graphs. We propose FastSample, which is composed of two synergistic techniques that greatly reduce the distributed sampling time: 1) a new graph partitioning method that eliminates most of the communication rounds in distributed sampling, and 2) a novel, highly optimized sampling kernel that reduces memory movement during sampling. We test FastSample on large-scale graph benchmarks and show that it speeds up distributed sampling-based GNN training by up to 2x with no loss in accuracy.
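
    A toy illustration of why partitioning matters, assuming a partition table mapping each node to a machine: every sampled neighbor that lives on a remote partition implies fetching data over the network, so a partitioning that co-locates neighborhoods removes most communication rounds. This is a sketch of the cost structure only, not the FastSample implementation.

        import random

        def sample_neighbors(node, adj, partition_of, my_part, fanout=10):
            # adj: dict mapping node -> list of neighbors.
            # Uniformly sample up to `fanout` neighbors, then report which
            # of them would require a remote fetch this iteration.
            nbrs = adj[node]
            picked = random.sample(nbrs, min(fanout, len(nbrs)))
            remote = [v for v in picked if partition_of[v] != my_part]
            return picked, remote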

    Intrinsically motivated graph exploration using network theories of human curiosity

    Intrinsically motivated exploration has proven useful for reinforcement learning, even without additional extrinsic rewards. When the environment is naturally represented as a graph, how best to guide exploration remains an open question. In this work, we propose a novel approach for exploring graph-structured data motivated by two theories of human curiosity: the information gap theory and the compression progress theory. These theories view curiosity as an intrinsic motivation to optimize for topological features of subgraphs induced by the visited nodes in the environment. We use the proposed features as rewards for graph-neural-network-based reinforcement learning. On multiple classes of synthetically generated graphs, we find that trained agents generalize to larger environments and to longer exploratory walks than those seen during training. Our method computes more efficiently than the greedy evaluation of the relevant topological properties. The proposed intrinsic motivations bear particular relevance for recommender systems. We demonstrate that curiosity-based recommendations are more predictive of human behavior than PageRank centrality for several real-world graph datasets, including MovieLens, Amazon Books, and Wikispeedia. Comment: 14 pages, 5 figures in main text, and 15 pages, 8 figures in supplement.
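
    A minimal sketch of the reward structure, assuming the intrinsic reward is the change in a topological feature of the subgraph induced by the visited nodes; graph density is used here as a stand-in for the paper's curiosity-based features, and all function names are illustrative.

        import random
        import networkx as nx

        def intrinsic_reward(G, visited, feature=nx.density):
            # Evaluate a topological feature on the visited-node subgraph.
            return feature(G.subgraph(visited))

        def explore(G, start, steps=20):
            # Random walk that accrues progress-style rewards: the change in
            # the subgraph feature after each step. Assumes every visited
            # node has at least one neighbor.
            visited, node, total, prev = [start], start, 0.0, 0.0
            for _ in range(steps):
                node = random.choice(list(G.neighbors(node)))
                visited.append(node)
                cur = intrinsic_reward(G, visited)
                total += cur - prev
                prev = cur
            return visited, total

    In the paper these rewards train a graph-neural-network agent rather than a random walker; the sketch only shows how a subgraph feature turns exploration into a reward signal.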

    Prevalence of burnout syndrome in nursing staff during the COVID-19 pandemic

    Theoretical framework: Nursing work has been considered a potential source of burnout, with consequences in the personal sphere and in the work and family environments. The objective of this study was to estimate the prevalence of burnout syndrome among nursing staff in two health centers in the metropolitan region of Buenos Aires during the COVID-19 pandemic. Method: Observational and analytical study, with anonymous and voluntary participation, that included 89 nurses. Data were collected during April-June 2021 through the Maslach Burnout Inventory (MBI) questionnaire. Results: The sample was composed mostly of women (83.2%), with a high level of training among Professional Nurses and Graduates (92.2%); ages ranged from 23 to 63 years, with varying years of professional experience. The prevalence of burnout syndrome was high (89.9%), whether considered globally or for each of the three dimensions of the MBI. Conclusions: Nine out of ten nurses are affected by burnout syndrome. This high prevalence may be associated with the performance of extra tasks during the COVID-19 pandemic. The implementation of prevention and early detection programs is proposed, along with the design of strategies for the proper management of work stress.

    Hispania : (geografía de España) : primer curso

    On the title page: This work is adapted to the official syllabus of the Ministerio de Educación Nacional. Plan 1957. Approved by Ministerial Order ... Date taken from the verso of the title page and the colophon.