    H2G2-Net: A Hierarchical Heterogeneous Graph Generative Network Framework for Discovery of Multi-Modal Physiological Responses

    Discovering human cognitive and emotional states from multi-modal physiological signals has drawn attention across various research applications. Physiological responses of the human body are influenced by human cognition and are commonly used to analyze cognitive states. From a network science perspective, the interactions of these heterogeneous physiological modalities in a graph structure may provide insightful information to support prediction of cognitive states. However, the exact connectivity between heterogeneous modalities is not known a priori, and the sub-modalities form a hierarchical structure. Existing graph neural networks are designed to learn on non-hierarchical homogeneous graphs with pre-defined graph structures; they fail to learn from hierarchical, multi-modal physiological data without a pre-defined graph structure. To this end, we propose a hierarchical heterogeneous graph generative network (H2G2-Net) that automatically learns a graph structure without domain knowledge, as well as a powerful representation on the hierarchical heterogeneous graph, in an end-to-end fashion. We validate the proposed method on the CogPilot dataset, which consists of multi-modal physiological signals. Extensive experiments demonstrate that our proposed method outperforms state-of-the-art GNNs by 5%-20% in prediction accuracy. Comment: Paper accepted at the Human-Centric Representation Learning workshop at AAAI 2024 (https://hcrl-workshop.github.io/2024/).
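    The abstract's core idea, learning a graph structure over modality nodes end-to-end rather than assuming a pre-defined one, can be sketched in a few lines. The snippet below is an illustrative assumption, not the authors' H2G2-Net code: module names, dimensions, and the dot-product scoring are placeholders.

    ```python
    # Minimal sketch (not the authors' implementation): a layer that learns a soft
    # adjacency over physiological-modality nodes and propagates messages over it,
    # trained end-to-end with whatever downstream loss sits on top.
    import torch
    import torch.nn as nn

    class LearnedGraphLayer(nn.Module):
        def __init__(self, in_dim: int, hid_dim: int):
            super().__init__()
            self.query = nn.Linear(in_dim, hid_dim)   # node projections for similarity scoring
            self.key = nn.Linear(in_dim, hid_dim)
            self.msg = nn.Linear(in_dim, hid_dim)     # message transform for propagation

        def forward(self, x: torch.Tensor):
            # x: (num_nodes, in_dim) -- one embedding per modality / sub-modality
            q, k = self.query(x), self.key(x)
            scores = q @ k.T / k.shape[-1] ** 0.5
            adj = torch.softmax(scores, dim=-1)       # soft, learned adjacency (no pre-defined graph)
            h = torch.relu(adj @ self.msg(x))         # one round of message passing
            return h, adj

    if __name__ == "__main__":
        nodes = torch.randn(6, 32)                    # e.g. 6 modality nodes, 32-dim features
        h, adj = LearnedGraphLayer(32, 64)(nodes)
        print(h.shape, adj.shape)                     # (6, 64) node features, (6, 6) learned adjacency
    ```

    Because the adjacency is produced by differentiable operations, gradients from the prediction loss shape the graph structure itself, which is the property the abstract highlights.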

    Integration of Machine Learning and Mechanistic Models Accurately Predicts Variation in Cell Density of Glioblastoma Using Multiparametric MRI

    Glioblastoma (GBM) is a heterogeneous and lethal brain cancer. These tumors are followed using magnetic resonance imaging (MRI), which is unable to precisely identify tumor cell invasion, impairing effective surgery and radiation planning. We present a novel hybrid model, based on multiparametric intensities, which combines machine learning (ML) with a mechanistic model of tumor growth to provide spatially resolved tumor cell density predictions. The ML component is an imaging-data-driven, graph-based semi-supervised learning model, and the mechanistic component is the Proliferation-Invasion (PI) tumor growth model; we therefore refer to the hybrid model as the ML-PI model. The hybrid model was trained using 82 image-localized biopsies from 18 primary GBM patients with pre-operative MRI, under a leave-one-patient-out cross-validation framework. A Relief algorithm was developed to quantify the relative contributions of the data sources. The ML-PI model statistically significantly outperformed (p < 0.001) both individual models, ML and PI, achieving a mean absolute predicted error (MAPE) of 0.106 ± 0.125 versus 0.199 ± 0.186 (ML) and 0.227 ± 0.215 (PI), respectively. Associated Pearson correlation coefficients for ML-PI, ML, and PI were 0.838, 0.518, and 0.437, respectively. The Relief algorithm showed that the PI model had the greatest contribution to the result, emphasizing the importance of the hybrid model in achieving the high accuracy.
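    The evaluation protocol described here, leave-one-patient-out cross-validation scored by mean absolute prediction error and Pearson correlation, is straightforward to reproduce in outline. The sketch below uses synthetic data and a placeholder regressor purely to illustrate the protocol; it is not the study's ML-PI pipeline, and the feature columns, regressor choice, and group sizes are assumptions.

    ```python
    # Minimal sketch (illustrative only): leave-one-patient-out CV with
    # per-biopsy absolute error and Pearson correlation, mirroring the
    # evaluation described in the abstract.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)
    X = rng.normal(size=(82, 5))                 # 82 biopsies x 5 placeholder MRI intensity features
    y = rng.uniform(0, 1, size=82)               # tumor cell density targets in [0, 1]
    patient = rng.integers(0, 18, size=82)       # 18 patients -> cross-validation groups

    preds = np.empty_like(y)
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=patient):
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X[train_idx], y[train_idx])    # fit on all patients except the held-out one
        preds[test_idx] = model.predict(X[test_idx])

    abs_err = np.abs(preds - y)                  # per-biopsy absolute prediction error
    print(f"mean absolute error: {abs_err.mean():.3f} +/- {abs_err.std():.3f}")
    print(f"Pearson r: {pearsonr(preds, y)[0]:.3f}")
    ```

    Grouping the folds by patient rather than by biopsy prevents biopsies from the same tumor appearing in both train and test sets, which is what makes the reported errors an estimate of performance on unseen patients.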