10 research outputs found

    Multi-stage generation for segmentation of medical images

    Get PDF

    Guest Editorial: Non-Euclidean Machine Learning

    Get PDF
Over the past decade, deep learning has had a revolutionary impact on a broad range of fields, such as computer vision and image processing, computational photography, medical imaging, and speech and language analysis and synthesis. Deep learning technologies are estimated to have added billions in business value, created new markets, and transformed entire industrial segments. Most of today's successful deep learning methods, such as Convolutional Neural Networks (CNNs), rely on classical signal processing models that limit their applicability to data with an underlying Euclidean grid-like structure, e.g., images or acoustic signals. Yet, many applications deal with non-Euclidean (graph- or manifold-structured) data. For example, in social network analysis the users and their attributes are generally modeled as signals on the vertices of graphs; in biology, protein-to-protein interactions are modeled as graphs; and in computer vision and graphics, 3D objects are modeled as meshes or point clouds. Furthermore, a graph representation is a very natural way to describe interactions between objects or signals. The classical deep learning paradigm on Euclidean domains falls short in providing appropriate tools for this kind of data. Until recently, the lack of deep learning models capable of correctly dealing with non-Euclidean data has been a major obstacle in these fields. This special section addresses the need to bring together leading efforts in non-Euclidean deep learning across all communities. From the papers that the special section received, twelve were selected for publication. The selected papers fall naturally into three distinct categories: (a) methodologies that advance machine learning on data represented as graphs, (b) methodologies that advance machine learning on manifold-valued data, and (c) applications of machine learning methodologies on non-Euclidean spaces in computer vision and medical imaging. We briefly review the accepted papers in each of these groups.
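    As a brief illustration (not part of the editorial): much of the graph-based learning discussed in this special section builds on some form of neighborhood aggregation. The NumPy sketch below shows one graph-convolution step in the spirit of the widely used GCN layer of Kipf and Welling; the toy graph, feature sizes, and function names are illustrative only.

    ```python
    import numpy as np

    def gcn_layer(A, X, W):
        """One graph-convolution step: add self-loops, symmetrically normalize the
        adjacency, average neighbor features, then apply a linear map and ReLU."""
        A_hat = A + np.eye(A.shape[0])            # adjacency with self-loops
        deg = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
        return np.maximum(A_norm @ X @ W, 0.0)    # propagate + transform + ReLU

    # Toy graph: 4 nodes on a path, 3-dimensional node features, 2-dimensional output.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    X = np.random.randn(4, 3)
    W = np.random.randn(3, 2)
    H = gcn_layer(A, X, W)   # -> (4, 2) node embeddings
    ```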

    From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI

    Full text link
    Reconstructing observed images from fMRI brain recordings is challenging. Unfortunately, acquiring sufficient "labeled" pairs of {Image, fMRI} (i.e., images with their corresponding fMRI responses) to span the huge space of natural images is prohibitive for many reasons. We present a novel approach which, in addition to the scarce labeled data (training pairs), allows training fMRI-to-image reconstruction networks also on "unlabeled" data (i.e., images without fMRI recordings, and fMRI recordings without images). The proposed model utilizes both an Encoder network (image-to-fMRI) and a Decoder network (fMRI-to-image). Concatenating these two networks back-to-back (Encoder-Decoder & Decoder-Encoder) allows augmenting the training with both types of unlabeled data. Importantly, it allows training on the unlabeled test-fMRI data. This self-supervision adapts the reconstruction network to the new input test data, despite its deviation from the statistics of the scarce training data. Comment: First two authors contributed equally. NeurIPS 201
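    As a rough sketch of how the two networks can be wired for this kind of self-supervision (not the authors' code: the real model uses CNN encoders/decoders and weighted loss terms, and every module name and shape below is a placeholder):

    ```python
    import torch
    import torch.nn as nn

    # Placeholder networks: linear stand-ins only illustrate how the losses are wired.
    enc = nn.Linear(784, 128)   # Encoder E: image -> (predicted) fMRI
    dec = nn.Linear(128, 784)   # Decoder D: fMRI  -> reconstructed image
    mse = nn.MSELoss()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

    def training_step(img_l, fmri_l, img_u, fmri_u):
        """One combined step: supervised terms on labeled {image, fMRI} pairs, plus
        two self-supervised cycle terms on unlabeled images and unlabeled fMRI."""
        loss = (
            mse(enc(img_l), fmri_l)          # supervised: E(image) ~ fMRI
            + mse(dec(fmri_l), img_l)        # supervised: D(fMRI)  ~ image
            + mse(dec(enc(img_u)), img_u)    # unlabeled images: Encoder-Decoder cycle
            + mse(enc(dec(fmri_u)), fmri_u)  # unlabeled fMRI (incl. test fMRI): Decoder-Encoder cycle
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Toy batch: 8 flattened 28x28 "images" and 128-dimensional "fMRI" vectors.
    step_loss = training_step(torch.randn(8, 784), torch.randn(8, 128),
                              torch.randn(8, 784), torch.randn(8, 128))
    ```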

    Drug Side Effect Prediction with Deep Learning Molecular Embedding in a Graph-of-Graphs Domain

    Get PDF
    Drug side effects (DSEs), or adverse drug reactions (ADRs), constitute an important health risk, given the approximately 197,000 annual DSE deaths in Europe alone. Therefore, during the drug development process, DSE detection is of utmost importance, and the occurrence of ADRs prevents many candidate molecules from going through clinical trials. Thus, early prediction of DSEs has the potential to massively reduce drug development times and costs. In this work, data are represented in a non-Euclidean manner, in the form of a graph-of-graphs domain. In such a domain, each molecule's structure is represented by a molecular graph, which becomes a node in the higher-level graph. In the latter, nodes stand for drugs and genes, and arcs represent their relationships. This relational nature represents an important novelty for the DSE prediction task, and it is directly exploited during prediction. For this purpose, the MolecularGNN model is proposed. This new classifier is based on graph neural networks, a connectionist model capable of processing data in the form of graphs. The approach improves on a previous method, called DruGNN, as it is also capable of extracting information from the graph-based molecular structures, producing a task-based neural fingerprint (NF) of the molecule that is adapted to the specific task. The architecture has been compared with other GNN models in terms of performance, showing that the proposed approach is very promising.
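    To make the graph-of-graphs representation concrete, here is a minimal sketch of the two-level data structure (hypothetical container types and field names, not the MolecularGNN implementation): each drug node carries its own molecular graph, and the higher-level graph links drugs and genes through typed relations.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class MolecularGraph:             # lower level: one molecule
        atoms: list                   # node labels, e.g. atom types
        bonds: list                   # edges as (atom_index, atom_index) pairs

    @dataclass
    class GraphOfGraphs:              # higher level: drugs and genes
        drug_molecules: dict = field(default_factory=dict)   # drug name -> MolecularGraph
        gene_nodes: set = field(default_factory=set)
        relations: list = field(default_factory=list)        # (source, relation, target) arcs

    gog = GraphOfGraphs()
    gog.drug_molecules["aspirin"] = MolecularGraph(
        atoms=["C", "C", "O"], bonds=[(0, 1), (1, 2)])        # toy fragment, not the real molecule
    gog.gene_nodes.add("PTGS1")
    gog.relations.append(("aspirin", "targets", "PTGS1"))     # drug-gene arc in the higher-level graph
    ```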

    Graph Neural Networks for Molecular Data

    Get PDF

    Contribution to Graph-based Manifold Learning with Application to Image Categorization.

    Get PDF
    Graph-based manifold learning algorithms have proven to be powerful tools for feature extraction and dimensionality reduction in the fields of pattern recognition, computer vision, and machine learning. These algorithms use pairwise sample similarities and the resulting weighted graph to reveal the intrinsic geometric structure of the manifold.
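    As one representative example of this family of methods (Laplacian eigenmaps, chosen for illustration rather than taken from the thesis): build a weighted k-nearest-neighbor similarity graph from pairwise sample distances, then embed the samples with eigenvectors of the graph Laplacian.

    ```python
    import numpy as np

    def knn_similarity_graph(X, k=5, sigma=1.0):
        """Symmetric k-NN graph with Gaussian edge weights from pairwise distances."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances
        W = np.zeros_like(d2)
        for i in range(len(X)):
            nbrs = np.argsort(d2[i])[1:k + 1]                  # skip the point itself
            W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma**2))
        return np.maximum(W, W.T)                              # symmetrize

    def laplacian_eigenmaps(W, dim=2):
        """Embed nodes with the Laplacian eigenvectors of smallest nonzero eigenvalues
        (assuming a connected graph, the first eigenvector is constant and dropped)."""
        L = np.diag(W.sum(axis=1)) - W
        vals, vecs = np.linalg.eigh(L)
        return vecs[:, 1:dim + 1]

    X = np.random.randn(100, 10)                        # toy high-dimensional samples
    Y = laplacian_eigenmaps(knn_similarity_graph(X))    # 2-D embedding
    ```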

    Inductive–Transductive Learning with Graph Neural Networks

    No full text
    Graphs are a natural choice to encode data in many real-world applications. In fact, a graph can describe a given pattern as a complex structure made up of parts (the nodes) and relationships between them (the edges). Despite their rich representational power, most machine learning approaches cannot deal directly with inputs encoded as graphs. To address this limitation, Graph Neural Networks (GNNs) have been devised as an extension of recursive models, able to process general graphs, possibly undirected and cyclic. In particular, GNNs can be trained to approximate all the "practically useful" functions on the graph space, based on the classical inductive learning approach, realized within the supervised framework. However, the information encoded in the edges can actually be used in a more refined way, to switch from inductive to transductive learning. In this paper, we present an inductive-transductive learning scheme based on GNNs. The proposed approach is evaluated on both artificial and real-world datasets, showing promising results. The recently released GNN software, based on the TensorFlow library, is made available for interested users.
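    As a schematic illustration of the inductive-transductive idea (not the paper's GNN implementation or its TensorFlow software; in particular, feeding known labels as extra node features is an assumption of this sketch): target labels of a subset of nodes are appended to the node inputs so they can spread through message passing, while supervision would be applied only to the remaining nodes.

    ```python
    import numpy as np

    def message_passing(A, H, W, steps=3):
        """Repeatedly average neighbor states and apply a shared linear map + tanh."""
        A_hat = A + np.eye(len(A))                       # self-loops keep each node's own state
        A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
        for _ in range(steps):
            H = np.tanh(A_norm @ H @ W)
        return H

    n, f, c = 20, 8, 3                                   # nodes, feature size, classes
    A = (np.random.rand(n, n) < 0.2).astype(float)
    A = np.maximum(A, A.T)                               # random undirected toy graph
    X = np.random.randn(n, f)
    y = np.eye(c)[np.random.randint(0, c, n)]            # one-hot node labels

    known = np.random.rand(n) < 0.5                      # transductive nodes: labels revealed
    label_channel = np.where(known[:, None], y, 0.0)     # zeros for nodes still to be predicted
    H0 = np.concatenate([X, label_channel], axis=1)      # features + (partially) known labels
    W = np.random.randn(f + c, f + c) * 0.1
    H = message_passing(A, H0, W)                        # states mix features and known labels
    # A readout layer trained with a loss over the nodes where `known` is False
    # (inductive supervision) would complete the model.
    ```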

    On inductive-transductive learning with Graph Neural Networks

    No full text
    Many real-world domains involve information naturally represented by graphs, where nodes denote basic patterns while edges stand for relationships among them. The Graph Neural Network (GNN) is a machine learning model capable of directly managing graph-structured data. In the original framework, GNNs are inductively trained, adapting their parameters based on a supervised learning environment. However, GNNs can also take advantage of transductive learning, thanks to the natural way they make information flow and spread across the graph, using relationships among patterns. In this paper, we propose a mixed inductive-transductive GNN model, study its properties, and introduce an experimental strategy that allows us to understand and distinguish the roles of inductive and transductive learning. The preliminary experimental results show interesting properties of the mixed model, highlighting how the peculiarities of the problems and the data can impact the two learning strategies.