
    Towards Better Generalization with Flexible Representation of Multi-Module Graph Neural Networks

Graph neural networks (GNNs) have become compelling models designed to perform learning and inference on graph-structured data. However, little work has been done to understand the fundamental limitations of GNNs for scaling to larger graphs and generalizing to out-of-distribution (OOD) inputs. In this paper, we use a random graph generator to systematically investigate how graph size and structural properties affect the predictive performance of GNNs. We present specific evidence that the average node degree is a key feature in determining whether GNNs can generalize to unseen graphs, and that using multiple node update functions improves the generalization performance of GNNs on graphs with multimodal degree distributions. Accordingly, we propose a multi-module GNN framework that allows the network to adapt flexibly to new graphs by generalizing a single canonical nonlinear transformation over aggregated inputs. Our results show that multi-module GNNs improve OOD generalization on a variety of inference tasks across diverse structural features.
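As a rough illustration of the idea, here is a minimal PyTorch sketch of a message-passing layer that routes each node to one of several update MLPs according to its degree. The bin edges, module count, and dense-adjacency representation are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiModuleGNNLayer(nn.Module):
    """Toy message-passing layer: nodes falling in different degree bins
    are updated by different MLPs (bin edges are illustrative)."""
    def __init__(self, dim, num_modules=3, degree_bins=(2.0, 8.0)):
        super().__init__()
        assert num_modules == len(degree_bins) + 1
        self.register_buffer("degree_bins", torch.tensor(degree_bins))
        self.updates = nn.ModuleList(
            [nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
             for _ in range(num_modules)]
        )

    def forward(self, x, adj):
        # x: (N, dim) node features; adj: (N, N) dense adjacency matrix.
        agg = adj @ x                      # sum-aggregate neighbor features
        deg = adj.sum(dim=1)               # node degrees
        module_idx = torch.bucketize(deg, self.degree_bins)
        h = torch.cat([x, agg], dim=-1)
        out = torch.zeros_like(x)
        for m, update in enumerate(self.updates):
            mask = module_idx == m         # nodes handled by module m
            if mask.any():
                out[mask] = update(h[mask])
        return out

# Example: 5 nodes with 4-dimensional features.
x = torch.randn(5, 4)
adj = (torch.rand(5, 5) > 0.5).float()
print(MultiModuleGNNLayer(dim=4)(x, adj).shape)  # torch.Size([5, 4])
```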

    Osteoporosis Prediction from Hand and Wrist X-rays using Image Segmentation and Self-Supervised Learning

Osteoporosis is a widespread and chronic metabolic bone disease that often remains undiagnosed and untreated due to limited access to bone mineral density (BMD) tests such as dual-energy X-ray absorptiometry (DXA). In response to this challenge, current advancements are pivoting towards detecting osteoporosis from alternative indicators in peripheral bone areas, with the goal of increasing screening rates without added expense or time. In this paper, we present a method to predict osteoporosis from hand and wrist X-ray images, which are both widely accessible and affordable, though their link to DXA-based data has not been thoroughly explored. Our method first segments the ulna, radius, and metacarpal bones using a foundation model for image segmentation. We then use a self-supervised learning approach to extract meaningful representations without explicit labels, and classify osteoporosis in a supervised manner. The method is evaluated on a dataset of 192 individuals, cross-referencing their verified osteoporosis conditions against the standard DXA test. With a notable classification score (AUC = 0.83), our model represents a pioneering effort in leveraging vision-based techniques for osteoporosis identification from peripheral skeleton sites.
Comment: Extended Abstract presented at the Machine Learning for Health (ML4H) symposium 2023, December 10th, 2023, New Orleans, United States. 10 pages.
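The three-stage structure (segment, embed, classify) can be sketched as follows. This is not the paper's code: `segment_bones`, `ssl_encoder`, and the synthetic data are hypothetical stand-ins for the foundation segmentation model, the self-supervised encoder, and the 192-patient cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def segment_bones(xray: np.ndarray) -> np.ndarray:
    """Placeholder: crop the ulna, radius, and metacarpals with a
    foundation segmentation model (identity stand-in here)."""
    return xray

def ssl_encoder(region: np.ndarray) -> np.ndarray:
    """Placeholder: embed the segmented region with an encoder
    pretrained by self-supervision, i.e., without labels."""
    return region.reshape(-1)[:128]  # fake 128-d embedding

# Synthetic stand-in for 192 X-rays with DXA-verified labels.
rng = np.random.default_rng(0)
images = rng.random((192, 16, 16))
labels = rng.integers(0, 2, size=192)

# Stage 1 + 2: segmentation followed by self-supervised features.
features = np.stack([ssl_encoder(segment_bones(im)) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)

# Stage 3: supervised classification, scored by AUC as in the abstract.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```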

    Hierarchical Joint Graph Learning and Multivariate Time Series Forecasting

Multivariate time series are prevalent in many scientific and industrial domains. Modeling multivariate signals is challenging due to their long-range temporal dependencies and intricate interactions, both direct and indirect. To confront these complexities, we introduce a method that represents multivariate signals as nodes in a graph whose edges indicate interdependencies between them. Specifically, we leverage graph neural networks (GNNs) and attention mechanisms to efficiently learn the underlying relationships within the time series data. Moreover, we suggest employing hierarchical signal decompositions running over the graphs to capture multiple spatial dependencies. The effectiveness of the proposed model is evaluated across various real-world benchmark datasets designed for long-term forecasting tasks. The results consistently showcase the superiority of our model, achieving an average 23% reduction in mean squared error (MSE) compared to existing models.
Comment: Temporal Graph Learning Workshop @ NeurIPS 2023, New Orleans, United States.
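A minimal sketch of the core mechanism, under stated assumptions: attention over per-series embeddings acts as a learned soft adjacency, and a GNN-style aggregation mixes neighbor histories before forecasting. The layer sizes and single-level design are illustrative; the paper's hierarchical decompositions are omitted.

```python
import torch
import torch.nn as nn

class GraphAttnForecaster(nn.Module):
    """Toy forecaster: each series becomes a graph node; attention scores
    between node embeddings serve as a learned (soft) adjacency."""
    def __init__(self, window, horizon, dim=32):
        super().__init__()
        self.embed = nn.Linear(window, dim)    # encode each series' history
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.head = nn.Linear(2 * dim, horizon)

    def forward(self, x):
        # x: (batch, num_series, window) -> (batch, num_series, horizon)
        h = self.embed(x)
        scores = self.q(h) @ self.k(h).transpose(1, 2) / h.size(-1) ** 0.5
        attn = torch.softmax(scores, dim=-1)   # soft adjacency matrix
        agg = attn @ h                         # message passing over the graph
        return self.head(torch.cat([h, agg], dim=-1))

# Example: batch of 4, 8 series, 96-step history, 24-step horizon.
model = GraphAttnForecaster(window=96, horizon=24)
print(model(torch.randn(4, 8, 96)).shape)  # torch.Size([4, 8, 24])
```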

    Increased lactate dehydrogenase reflects the progression of COVID-19 pneumonia on chest computed tomography and predicts subsequent severe disease

Chest computed tomography (CT) is effective for assessing the severity of coronavirus disease 2019 (COVID-19). However, the clinical factors that reflect the progression of COVID-19 pneumonia on chest CT and predict subsequent exacerbation remain controversial. We conducted a retrospective cohort study of 450 COVID-19 patients. We used an automated image-processing tool to quantify the extent of COVID-19 pneumonia lesions on chest CT at admission. Factors associated with lesion extent were estimated by multiple regression analysis. After adjusting for background factors by propensity score matching, we conducted a multivariate Cox proportional hazards analysis to identify factors associated with severe disease after admission. The multiple regression analysis identified body mass index (BMI), lactate dehydrogenase (LDH), C-reactive protein (CRP), and albumin as continuous variables associated with lesion extent on chest CT, with standardized partial regression coefficients of 1.76, 2.42, 1.54, and 0.71, respectively. The multivariate Cox proportional hazards analysis identified LDH (hazard ratio, 1.003; 95% confidence interval, 1.001–1.005) as a factor independently associated with the development of severe COVID-19 pneumonia. Increased serum LDH at admission may be useful in real-world clinical practice for simple screening of COVID-19 patients at high risk of developing subsequent severe disease.
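For readers unfamiliar with the reported statistic, a minimal sketch of a Cox proportional hazards fit on synthetic data (not the study cohort) shows how a per-unit hazard ratio for LDH, like the reported 1.003, is obtained. The toy data generator and variable names are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 450
ldh = rng.normal(300, 80, n)          # synthetic serum LDH (U/L) at admission
# Toy generator: higher LDH shortens the time until severe disease.
time_to_severe = rng.exponential(scale=np.exp(5 - 0.003 * ldh))
observed = rng.integers(0, 2, n)      # 1 = progressed to severe disease

df = pd.DataFrame({"LDH": ldh, "time": time_to_severe, "severe": observed})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="severe")
print(cph.hazard_ratios_)             # HR per 1 U/L LDH, cf. reported 1.003
```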