    The expressive power of pooling in Graph Neural Networks

    In Graph Neural Networks (GNNs), hierarchical pooling operators generate local summaries of the data by coarsening the graph structure and the vertex features. Considerable attention has been devoted to analyzing the expressive power of message-passing (MP) layers in GNNs, while a study of how graph pooling affects the expressiveness of a GNN is still lacking. Additionally, despite the recent advances in the design of pooling operators, there is no principled criterion to compare them. In this work, we derive sufficient conditions for a pooling operator to fully preserve the expressive power of the MP layers before it. These conditions serve as a universal and theoretically grounded criterion for choosing among existing pooling operators or designing new ones. Based on our theoretical findings, we analyze several existing pooling operators and identify those that fail to satisfy the expressiveness conditions. Finally, we introduce an experimental setup to empirically verify the expressive power of a GNN equipped with pooling layers, in terms of its capability to perform a graph isomorphism test.
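
    The empirical check described here compares the representations produced by a GNN with pooling against the graph isomorphism test that serves as the reference for expressiveness, namely 1-WL color refinement. The plain-Python sketch below only illustrates that reference test on graphs given as adjacency dictionaries; the function name and the fixed-round refinement are simplifications made for this example, not the paper's experimental setup.

        from collections import Counter

        def wl_test(adj_a, adj_b, rounds=None):
            """1-WL color refinement run jointly on two graphs (adjacency dicts).
            Returns True if the test distinguishes them (different color histograms)."""
            # Refine the disjoint union so both graphs share one color palette.
            union = {("a", v): [("a", u) for u in nbrs] for v, nbrs in adj_a.items()}
            union.update({("b", v): [("b", u) for u in nbrs] for v, nbrs in adj_b.items()})
            colors = {v: 0 for v in union}                 # uniform initial coloring
            for _ in range(rounds or len(union)):
                # Signature: own color plus the multiset of neighbor colors.
                sig = {v: (colors[v], tuple(sorted(colors[u] for u in union[v]))) for v in union}
                palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
                colors = {v: palette[sig[v]] for v in union}
            hist_a = Counter(c for (g, _), c in colors.items() if g == "a")
            hist_b = Counter(c for (g, _), c in colors.items() if g == "b")
            return hist_a != hist_b

    For instance, the test fails to separate a 6-cycle from two disjoint triangles (both are 2-regular), which is exactly the kind of pair that a GNN bounded by 1-WL, with or without pooling, also cannot distinguish.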

    Impact of the COVID-19 outbreaks on the Italian Twitter vaccination debate: a network-based analysis

    Vaccine hesitancy, or the reluctance to be vaccinated, is a phenomenon that has recently become particularly significant, in conjunction with the vaccination campaign against COVID-19. During the lockdown period, necessary to control the spread of the virus, social networks played an important role in the Italian debate on vaccination, generally representing the easiest and safest way to exchange opinions and maintain some form of sociability. Among social network platforms, Twitter has assumed a strategic role in driving public opinion, creating compact groups of users sharing similar views on the utility, uselessness or even dangerousness of vaccines. In this paper, we present a new, publicly available dataset of Italian tweets, TwitterVax, collected in the period January 2019–May 2022. Considering monthly data, gathered into forty-one retweet networks, where nodes identify users and edges connect users who have retweeted each other, we performed community detection within the networks, analyzing their evolution and polarization with respect to NoVax and ProVax users over time. This allowed us to clearly discover debate trends, as well as to identify potential key moments and actors in opinion flows, characterizing the main features and tweeting behavior of the two communities.
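
    As a rough illustration of the pipeline sketched above, the snippet below builds one monthly retweet network with networkx and runs a community detection step. The pandas column names (user, retweeted_user, month) and the choice of greedy modularity maximization are assumptions made for the example; the paper's actual data schema and algorithm may differ.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        def monthly_communities(retweets, month):
            """retweets: pandas DataFrame of retweet records; returns the communities for one month."""
            edges = retweets[retweets["month"] == month]
            g = nx.Graph()
            g.add_edges_from(zip(edges["user"], edges["retweeted_user"]))
            # Each community is a set of users; the largest ones can then be inspected
            # for their polarization (e.g., NoVax- vs. ProVax-leaning).
            return greedy_modularity_communities(g)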

    Weisfeiler–Lehman goes Dynamic: An Analysis of the Expressive Power of Graph Neural Networks for Attributed and Dynamic Graphs

    Graph Neural Networks (GNNs) are a large class of relational models for graph processing. Recent theoretical studies on the expressive power of GNNs have focused on two issues. On the one hand, it has been proven that GNNs are as powerful as the Weisfeiler-Lehman test (1-WL) in their ability to distinguish graphs. Moreover, it has been shown that the equivalence enforced by 1-WL equals unfolding equivalence. On the other hand, GNNs turned out to be universal approximators on graphs modulo the constraints enforced by 1-WL/unfolding equivalence. However, these results only apply to Static Undirected Homogeneous Graphs with node attributes. In contrast, real-life applications often involve a variety of graph properties, such as dynamics or node and edge attributes. In this paper, we conduct a theoretical analysis of the expressive power of GNNs for two graph types that are of particular interest. Dynamic graphs are widely used in modern applications, and their theoretical analysis requires new approaches. The attributed type acts as a standard form for all graph types, since it has been shown that all graph types can be transformed without loss into Static Undirected Homogeneous Graphs with attributes on nodes and edges (SAUHGs). The study considers generic GNN models and proposes appropriate 1-WL tests for those domains. Then, the results on the expressive power of GNNs are extended by proving that GNNs have the same capability as the 1-WL test in distinguishing dynamic and attributed graphs, that the 1-WL equivalence equals unfolding equivalence, and that GNNs are universal approximators modulo 1-WL/unfolding equivalence. Moreover, the proof of the approximation capability holds for SAUHGs, which include most of those used in practical applications, and it is constructive in nature, allowing hints to be deduced about the architecture of GNNs that can achieve the desired accuracy.
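
    To make the idea of a 1-WL test for attributed graphs concrete, the sketch below seeds the initial colors with node attributes and folds edge attributes into the neighborhood multisets. This is a generic illustration of the principle, under the assumption that attributes are hashable and comparable; it is not the exact test defined in the paper.

        from collections import Counter

        def attributed_wl(nodes, edges, rounds=3):
            """nodes: {v: node_attribute}; edges: {(u, v): edge_attribute}, with both
            directions listed for undirected graphs."""
            nbrs = {v: [] for v in nodes}
            for (u, v), a in edges.items():
                nbrs[u].append((v, a))
            # Node attributes seed the initial coloring.
            colors = {v: hash(("init", attr)) for v, attr in nodes.items()}
            for _ in range(rounds):
                colors = {
                    v: hash((colors[v], tuple(sorted((colors[u], a) for u, a in nbrs[v]))))
                    for v in nodes
                }
            return Counter(colors.values())

    Two attributed graphs are deemed distinguishable by this test when their returned color histograms differ (compared within the same Python process, so that hashing is consistent).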

    A Mixed Statistical and Machine Learning Approach for the Analysis of Multimodal Trail Making Test Data

    Eye-tracking can offer a novel clinical practice and a non-invasive tool to detect neuropathological syndromes. In this paper, we present some analyses of data obtained from the visual sequential search test. Such a test can be used to evaluate the capacity to look at objects in a specific order, and its successful execution requires the optimization of the perceptual resources of foveal and extrafoveal vision. The main objective of this work is to detect whether patterns can be found within the data that discern among people with chronic pain, extrapyramidal patients and healthy controls. We employed statistical tests to evaluate differences among groups, considering three novel indicators: blinking rate, average blinking duration and maximum pupil size variation. Additionally, to divide the three patient groups based on scan-path images, which appear very noisy and all similar to each other, we applied deep learning techniques to embed them into a larger transformed space. We then applied a clustering approach to correctly detect and classify the three cohorts. Preliminary experiments show promising results.
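
    A hedged sketch of the statistical comparison: the three indicators are computed per subject and then compared across the three cohorts. The input layout and the use of the non-parametric Kruskal-Wallis test are assumptions made for illustration; the abstract does not specify which statistical tests were employed.

        import numpy as np
        from scipy.stats import kruskal

        def indicators(blink_durations, pupil_sizes, recording_seconds):
            """Return (blinking rate, average blinking duration, maximum pupil size variation)."""
            blinking_rate = len(blink_durations) / recording_seconds
            avg_blink_duration = float(np.mean(blink_durations)) if len(blink_durations) else 0.0
            max_pupil_variation = float(np.max(pupil_sizes) - np.min(pupil_sizes))
            return blinking_rate, avg_blink_duration, max_pupil_variation

        # One indicator value per subject, grouped by cohort, e.g.:
        # stat, p_value = kruskal(chronic_pain_values, extrapyramidal_values, control_values)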

    Graph Neural Networks for temporal graphs: State of the art, open challenges, and opportunities

    Graph Neural Networks (GNNs) have become the leading paradigm for learning on (static) graph-structured data. However, many real-world systems are dynamic in nature, since the graph and the node/edge attributes change over time. In recent years, GNN-based models for temporal graphs have emerged as a promising area of research to extend the capabilities of GNNs. In this work, we provide the first comprehensive overview of the current state of the art of temporal GNNs, introducing a rigorous formalization of learning settings and tasks and a novel taxonomy categorizing existing approaches in terms of how the temporal aspect is represented and processed. We conclude the survey with a discussion of the most relevant open challenges for the field, from both research and application perspectives.
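
    One branch of such a taxonomy, discrete-time (snapshot-based) models, can be pictured as a static GNN encoder applied to each snapshot followed by a recurrent layer over the sequence of graph embeddings. The module below is a generic PyTorch illustration of that pattern, with the graph encoder left as a placeholder; it is not a model proposed in the survey.

        import torch
        import torch.nn as nn

        class SnapshotTemporalGNN(nn.Module):
            """Encode each snapshot with a (placeholder) static GNN, then model the
            temporal dimension with a GRU over the resulting graph embeddings."""
            def __init__(self, gnn_encoder, embed_dim, hidden_dim):
                super().__init__()
                self.gnn_encoder = gnn_encoder        # hypothetical: graph -> (embed_dim,) tensor
                self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

            def forward(self, snapshots):
                # snapshots: list of graphs ordered in time
                embeddings = torch.stack([self.gnn_encoder(g) for g in snapshots], dim=0)
                out, _ = self.rnn(embeddings.unsqueeze(0))   # shape (1, T, hidden_dim)
                return out[:, -1]                            # state after the last snapshot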

    Multi-stage Synthetic Image Generation for the Semantic Segmentation of Medical Images

    Recently, deep learning methods have had a tremendous impact on computer vision applications, from image classification and semantic segmentation to object detection and face recognition. Nevertheless, the training of state-of-the-art neural network models usually relies on the availability of large sets of supervised data. Indeed, deep neural networks have a huge number of parameters which, to be properly trained, require a fairly large dataset of supervised examples. This problem is particularly relevant in the medical field, due to privacy issues and the high cost of image tagging by medical experts. In this chapter, we present a new approach that alleviates this limitation by generating synthetic images with their corresponding supervision. In particular, this approach can be applied in semantic segmentation, where the generated images (and label-maps) can be used to augment real datasets during network training. The main characteristic of our method, differently from other existing techniques, lies in a generation procedure carried out in multiple steps, based on the intuition that, by splitting the procedure into multiple phases, the overall generation task is simplified. The effectiveness of the proposed multi-stage approach has been evaluated on two different domains, retinal fundus and chest X-ray images. In both domains, the multi-stage approach has been compared with a single-stage generation procedure. The results suggest that generating images in multiple steps is more effective and computationally cheaper, while still allowing high-resolution, realistic images to be used for training deep networks.
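
    The augmentation step described above can be sketched as mixing real (image, label-map) pairs with pairs drawn from a synthetic generator when building the training loader for the segmentation network. The generator interface below is a placeholder assumption; only the mixing logic is illustrated, not the chapter's actual implementation.

        from torch.utils.data import ConcatDataset, DataLoader, Dataset

        class SyntheticPairs(Dataset):
            """Wraps a (hypothetical) callable returning one (image, label_map) tensor pair."""
            def __init__(self, generate_pair, size):
                self.generate_pair = generate_pair
                self.size = size

            def __len__(self):
                return self.size

            def __getitem__(self, idx):
                return self.generate_pair()      # a fresh synthetic pair each time

        def mixed_loader(real_dataset, generate_pair, n_synthetic, batch_size=4):
            """Training loader mixing real and synthetic supervision."""
            mixed = ConcatDataset([real_dataset, SyntheticPairs(generate_pair, n_synthetic)])
            return DataLoader(mixed, batch_size=batch_size, shuffle=True)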

    A Two-Stage GAN for High-Resolution Retinal Image Generation and Segmentation

    In this paper, we use Generative Adversarial Networks (GANs) to synthesize high-quality retinal images along with the corresponding semantic label-maps, to be used instead of real images during training of a segmentation network. Differently from previous proposals, we employ a two-step approach: first, a progressively growing GAN is trained to generate the semantic label-maps, which describe the blood vessel structure (i.e., the vasculature); second, an image-to-image translation approach is used to obtain realistic retinal images from the generated vasculature. The adoption of a two-stage process simplifies the generation task, so that the network training requires fewer images, with consequently lower memory usage. Moreover, learning is effective and, with only a handful of training samples, our approach generates realistic high-resolution images, which can be successfully used to enlarge small available datasets. Comparable results were obtained by employing only synthetic images in place of real data during training. The practical viability of the proposed approach was demonstrated on two well-established benchmark sets for retinal vessel segmentation, both containing a very small number of training samples, obtaining better performance with respect to state-of-the-art techniques.
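
    The two stages can be pictured as a sampling step followed by a translation step, as in the sketch below. Both networks are placeholders standing in for the progressively growing GAN and the image-to-image translation model mentioned in the abstract; the function signature and the latent size are assumptions, not the paper's code.

        import torch

        @torch.no_grad()
        def generate_pair(labelmap_gan, image_translator, latent_dim=512, device="cpu"):
            """Stage 1: sample a vessel label-map; stage 2: translate it into a retinal image."""
            z = torch.randn(1, latent_dim, device=device)    # random latent code
            label_map = labelmap_gan(z)                      # stage 1: vasculature label-map
            image = image_translator(label_map)              # stage 2: label-map -> retinal image
            return image, label_map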

    A Neural Network Approach for the Analysis of Reproducible Ribo–Seq Profiles

    In recent years, the ribosome profiling technique (Ribo–seq) has emerged as a powerful method for globally monitoring the translation process in vivo at single-nucleotide resolution. Based on deep sequencing of mRNA fragments, Ribo–seq makes it possible to obtain profiles that reflect the time spent by ribosomes in translating each part of an open reading frame. Unfortunately, the profiles produced by this method can vary significantly across different experimental setups, being characterized by poor reproducibility. To address this problem, we have employed a statistical method for the identification of highly reproducible Ribo–seq profiles, which was tested on a set of E. coli genes. State-of-the-art artificial neural network models have been used to validate the quality of the produced sequences. Moreover, new insights into the dynamics of ribosome translation have been provided through a statistical analysis of the obtained sequences.
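
    One simple way to operationalize the selection of highly reproducible profiles is to keep genes whose per-position profiles agree across all pairs of experiments, as sketched below. The correlation measure (Spearman) and the threshold are illustrative assumptions; the paper's exact statistical criterion is not given in the abstract.

        from itertools import combinations
        from scipy.stats import spearmanr

        def is_reproducible(profiles, min_rho=0.7):
            """profiles: per-position ribosome coverage arrays for one gene, one per experiment."""
            rhos = []
            for a, b in combinations(profiles, 2):
                rho, _ = spearmanr(a, b)
                rhos.append(rho)
            return bool(rhos) and min(rhos) >= min_rho

        def reproducible_genes(profiles_by_gene, min_rho=0.7):
            """Keep genes whose profiles agree across every pair of experiments."""
            return [g for g, ps in profiles_by_gene.items() if is_reproducible(ps, min_rho)]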