48,215 research outputs found

    Latent Patient Network Learning for Automatic Diagnosis

    Recently, Graph Convolutional Networks (GCNs) have proven to be a powerful machine learning tool for Computer Aided Diagnosis (CADx) and disease prediction. A key component in these models is to build a population graph, where the graph adjacency matrix represents pair-wise patient similarities. Until now, the similarity metrics have been defined manually, usually based on meta-features like demographics or clinical scores. The definition of the metric, however, needs careful tuning, as GCNs are very sensitive to the graph structure. In this paper, we demonstrate for the first time in the CADx domain that it is possible to learn a single, optimal graph towards the GCN's downstream task of disease classification. To this end, we propose a novel, end-to-end trainable graph learning architecture for dynamic and localized graph pruning. Unlike commonly employed spectral GCN approaches, our GCN is spatial and inductive, and can thus infer on previously unseen patients as well. We demonstrate significant classification improvements with our learned graph on two CADx problems in medicine. We further explain and visualize this result using an artificial dataset, underlining the importance of graph learning for more accurate and robust inference with GCNs in medical applications.
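
    A minimal sketch of the general idea described above (not the authors' code): pairwise patient affinities are learned from meta-features such as demographics, softly pruned, and used as the adjacency of a spatial aggregation layer. All layer sizes, names, and the sigmoid-based soft pruning are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LearnedGraphGCNLayer(nn.Module):
    """Illustrative layer: learn a patient graph from meta-features,
    then aggregate per-patient features over that graph (spatial GCN)."""
    def __init__(self, meta_dim, feat_dim, out_dim, threshold=0.5):
        super().__init__()
        self.embed = nn.Linear(meta_dim, 16)      # embed meta-features
        self.lin = nn.Linear(feat_dim, out_dim)   # feature transform
        self.threshold = threshold

    def forward(self, x, meta):
        z = self.embed(meta)                        # [N, 16] learned embeddings
        dist = torch.cdist(z, z)                    # [N, N] pairwise distances
        adj = torch.sigmoid(self.threshold - dist)  # soft pruning: distant pairs -> ~0
        adj = adj / adj.sum(dim=1, keepdim=True)    # row-normalise the learned graph
        return torch.relu(adj @ self.lin(x))        # neighbourhood aggregation

# Usage: x = per-patient features [N, 64], meta = demographics/scores [N, 3]
layer = LearnedGraphGCNLayer(meta_dim=3, feat_dim=64, out_dim=32)
out = layer(torch.randn(10, 64), torch.randn(10, 3))   # -> [10, 32]
```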

    Filter-informed Spectral Graph Wavelet Networks for Multiscale Feature Extraction and Intelligent Fault Diagnosis

    Intelligent fault diagnosis has been increasingly improved with the evolution of deep learning (DL) approaches. Recently, emerging graph neural networks (GNNs) have also been introduced to fault diagnosis with the goal of making better use of the inductive bias of the interdependencies between different sensor measurements. However, these GNN-based fault diagnosis methods have some limitations. First, they cannot perform multiscale feature extraction due to the fixed receptive field of GNNs. Second, they eventually encounter the over-smoothing problem as model depth increases. Finally, the features extracted by these GNNs are hard to interpret owing to their black-box nature. To address these issues, a filter-informed spectral graph wavelet network (SGWN) is proposed in this paper. In SGWN, the spectral graph wavelet convolutional (SGWConv) layer is built upon the spectral graph wavelet transform, which decomposes a graph signal into scaling function coefficients and spectral graph wavelet coefficients. With SGWConv, SGWN can prevent the over-smoothing problem caused by long-range low-pass filtering by simultaneously extracting low-pass and band-pass features. Furthermore, to speed up computation, the scaling kernel function and graph wavelet kernel function in SGWConv are approximated by Chebyshev polynomials. The effectiveness of the proposed SGWN is evaluated on a collected solenoid valve dataset and an aero-engine intershaft bearing dataset. The experimental results show that SGWN outperforms the comparative methods in both diagnostic accuracy and the ability to prevent over-smoothing. Moreover, its extracted features are interpretable with domain knowledge.
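
    The core operation described above, applying a spectral scaling or wavelet kernel to a graph signal via a Chebyshev polynomial approximation (avoiding an eigendecomposition for the filtering step), can be sketched as follows. This is an illustrative reconstruction, not the paper's SGWConv implementation; the kernel shapes, scales, and the dense eigenvalue bound are assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_graph_filter(L, x, kernel, K=10):
    """Apply the spectral filter kernel(lambda) to graph signal x using a
    degree-K Chebyshev approximation of the kernel on [0, lambda_max]."""
    lmax = np.linalg.eigvalsh(L).max()              # fine for small dense L
    grid = np.linspace(0.0, lmax, 200)
    coeffs = C.chebfit(2.0 * grid / lmax - 1.0, kernel(grid), K)
    L_hat = 2.0 * L / lmax - np.eye(L.shape[0])     # Laplacian rescaled to [-1, 1]
    # Chebyshev recurrence: T_0 x = x, T_1 x = L_hat x, T_k x = 2 L_hat T_{k-1} x - T_{k-2} x
    T_prev, T_curr = x, L_hat @ x
    out = coeffs[0] * T_prev + coeffs[1] * T_curr
    for k in range(2, K + 1):
        T_prev, T_curr = T_curr, 2.0 * L_hat @ T_curr - T_prev
        out = out + coeffs[k] * T_curr
    return out

# One multiscale step: a low-pass scaling kernel plus band-pass wavelet kernels
scaling = lambda lam: np.exp(-lam)                            # low-pass
wavelet = lambda s: (lambda lam: s * lam * np.exp(-s * lam))  # band-pass at scale s
# features = [cheb_graph_filter(L, x, scaling)] \
#          + [cheb_graph_filter(L, x, wavelet(s)) for s in (0.5, 1.0, 2.0)]
```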

    Multimodal Earth observation data fusion: Graph-based approach in shared latent space

    Multiple and heterogeneous Earth observation (EO) platforms are broadly used for a wide array of applications, and integrating these diverse modalities facilitates better extraction of information than using them individually. The detection capability of multispectral unmanned aerial vehicle (UAV) and satellite imagery can be significantly improved by fusing it with ground hyperspectral data. However, variability in spatial and spectral resolution can affect the efficiency of fusing such datasets. In this study, to address the modality bias, the input data were projected to a shared latent space using cross-modal generative approaches or a guided unsupervised transformation. The proposed adversarial-network and variational-encoder-based strategies used bi-directional transformations to model the cross-domain correlation without using cross-domain correspondence. An interpolation-based convolution was adopted instead of normal convolution for learning the features of the point spectral data (ground spectra). The proposed generative adversarial network-based approach employed dynamic time warping based layers along with a cyclic consistency constraint, so that a minimal number of unlabeled, cross-domain-correlated samples suffices to compute a cross-modal generative latent space. The proposed variational-encoder-based transformation also addressed the cross-modal resolution differences and the limited availability of cross-domain samples by using a mixture-of-experts strategy, cross-domain constraints, and adversarial learning. In addition, the latent space was modelled as composed of modality-independent and modality-dependent subspaces, further reducing the number of training samples required and addressing the cross-modality biases. An unsupervised covariance-guided transformation was also proposed to transform the labelled samples without using a cross-domain correlation prior. The proposed latent-space transformations resolve the requirement for cross-domain samples, which has been a critical issue in the fusion of multi-modal Earth observation data. This study also proposed a latent graph generation and graph convolutional approach to predict the labels, resolving the domain discrepancy and cross-modality biases. In experiments over standard benchmark airborne datasets and real-world UAV datasets, the developed approaches outperformed prominent hyperspectral panchromatic sharpening, image fusion, and domain adaptation approaches. Owing to specific constraints and regularizations, the developed network was less sensitive to network parameters than similar implementations and showed improved generalizability compared with prominent existing approaches. In addition to the fusion-based classification of multispectral and hyperspectral datasets, the proposed approach was extended to the classification of hyperspectral airborne datasets, where the latent graph generation and convolution were employed to resolve the domain bias with a small number of training samples.
Overall, the developed transformations and architectures will be useful for the semantic interpretation and analysis of multimodal data and are applicable to signal processing, manifold learning, video analysis, data mining, and time series analysis, to name a few. This research was partly supported by the Hebrew University of Jerusalem Intramural Research Fund for Career Development, the Association of Field Crop Farmers in Israel, and the Chief Scientist of the Israeli Ministry of Agriculture and Rural Development (projects 20-02-0087 and 12-01-0041).
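
    A minimal sketch of the bi-directional, cycle-consistent shared latent space the abstract describes, assuming two toy modalities and omitting the adversarial terms, dynamic time warping based layers, interpolation-based convolution, and mixture-of-experts components; all dimensions and names are hypothetical.

```python
import torch
import torch.nn as nn

# Modality A (e.g. UAV multispectral) and modality B (e.g. ground hyperspectral)
# mapped to a shared latent space; a cycle-consistency loss lets unpaired
# samples constrain the bi-directional transformations.
enc_a, dec_a = nn.Linear(10, 8), nn.Linear(8, 10)     # A <-> latent
enc_b, dec_b = nn.Linear(200, 8), nn.Linear(8, 200)   # B <-> latent

def cycle_loss(xa, xb):
    # A -> latent -> B -> latent -> A should reconstruct A (and symmetrically for B)
    xa_cycled = dec_a(enc_b(dec_b(enc_a(xa))))
    xb_cycled = dec_b(enc_a(dec_a(enc_b(xb))))
    return nn.functional.l1_loss(xa_cycled, xa) + nn.functional.l1_loss(xb_cycled, xb)

xa, xb = torch.randn(4, 10), torch.randn(4, 200)      # unpaired mini-batches
loss = cycle_loss(xa, xb)                             # adversarial terms would be added here
loss.backward()
```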

    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper, we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent progress on basic GSP tools, including methods for sampling, filtering, and graph learning. Next, we review progress in several application areas of GSP, including the processing and analysis of sensor network data and biological data, as well as applications to image processing and machine learning. We finish by providing a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. (Comment: to appear in Proceedings of the IEEE.)
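
    A toy illustration of the core GSP viewpoint summarized above (not taken from the paper): the graph Fourier basis is the Laplacian eigenbasis, and filtering reshapes a signal's graph spectrum. The graph, signal, and frequency response below are made up.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # small example graph
L = np.diag(A.sum(axis=1)) - A              # combinatorial Laplacian
lam, U = np.linalg.eigh(L)                  # graph frequencies and Fourier basis

x = np.array([1.0, 0.9, 1.1, -2.0])         # graph signal with one outlier node
x_hat = U.T @ x                             # graph Fourier transform
h = np.exp(-2.0 * lam)                      # low-pass frequency response
x_smooth = U @ (h * x_hat)                  # filter in the spectral domain and invert
print(x_smooth)                             # the outlier is pulled towards its neighbours
```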

    Geometric deep learning: going beyond Euclidean data

    Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems in computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into the networks used to model them. Geometric deep learning is an umbrella term for emerging techniques that attempt to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and to present available solutions, key difficulties, applications, and future research directions in this nascent field.
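
    To make the contrast concrete (an illustrative sketch, not from the paper): on a grid, convolution applies a fixed, ordered stencil, whereas on a graph the analogous operation is a permutation-invariant aggregation over unordered, variable-size neighbourhoods with shared weights.

```python
import torch

x = torch.randn(6, 8)                            # 6 nodes, 8 features each
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5)]
A = torch.zeros(6, 6)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A = A + torch.eye(6)                             # add self-loops
deg_inv = 1.0 / A.sum(dim=1, keepdim=True)       # mean over each neighbourhood
W = torch.randn(8, 4)                            # shared weights, like a conv kernel
h = torch.relu((deg_inv * A) @ x @ W)            # one graph "convolution" step -> [6, 4]
```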

    Geometric deep learning

    The goal of these course notes is to describe the main mathematical ideas behind geometric deep learning and to provide implementation details for several applications in shape analysis and synthesis, computer vision, and computer graphics. The text in the course materials is primarily based on previously published work. With these notes, we gather and provide a clear picture of the key concepts and techniques that fall under the umbrella of geometric deep learning and illustrate the applications they enable. We also aim to provide practical implementation details for the methods presented in these works, as well as to suggest further readings and extensions of these ideas.