
    Triamese-ViT: A 3D-Aware Method for Robust Brain Age Estimation from MRIs

    The integration of machine learning in medicine has significantly improved diagnostic precision, particularly in the interpretation of complex structures like the human brain. Diagnosing challenging conditions such as Alzheimer's disease has prompted the development of brain age estimation techniques. These methods often leverage three-dimensional Magnetic Resonance Imaging (MRI) scans, with recent studies emphasizing the efficacy of 3D convolutional neural networks (CNNs) such as 3D ResNet. However, the potential of Vision Transformers (ViTs), known for their accuracy and interpretability, remains largely untapped in this domain due to limitations in their 3D versions. This paper introduces Triamese-ViT, an innovative adaptation of the ViT model for brain age estimation. Our model uniquely combines ViTs from three different orientations to capture 3D information, significantly enhancing accuracy and interpretability. Tested on a dataset of 1351 MRI scans, Triamese-ViT achieves a Mean Absolute Error (MAE) of 3.84, a Spearman correlation coefficient of 0.9 with chronological age, and a Spearman correlation coefficient of -0.29 between the brain age gap (BAG) and chronological age, significantly better than previous methods for brain age estimation. A key innovation of Triamese-ViT is its capacity to generate a comprehensive 3D-like attention map, synthesized from the 2D attention maps of each orientation-specific ViT. This feature is particularly beneficial for in-depth brain age analysis and disease diagnosis, offering deeper insights into brain health and the mechanisms of age-related neural changes.
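    Below is a minimal, hypothetical sketch of the three-orientation idea described in the abstract (not the authors' implementation): three small 2D ViT-style branches encode the axial, coronal, and sagittal mid-slices of an MRI volume, and an MLP head fuses their features into a scalar age estimate. The class names, the tiny backbone, the mid-slice sampling, and the fusion head are all assumptions made for illustration.

```python
# Hypothetical sketch of a three-orientation ViT fusion for age regression.
import torch
import torch.nn as nn

class TinyViT2D(nn.Module):
    """Minimal 2D ViT-style encoder: patchify -> transformer -> mean pool."""
    def __init__(self, img_size=96, patch=16, dim=128, depth=4, heads=4):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):                      # x: (B, 1, H, W)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.encoder(tokens + self.pos).mean(dim=1)       # (B, dim)

class TriameseSketch(nn.Module):
    """Fuse axial/coronal/sagittal mid-slices; output a scalar age estimate."""
    def __init__(self, img_size=96, dim=128):
        super().__init__()
        self.branches = nn.ModuleList(TinyViT2D(img_size, dim=dim) for _ in range(3))
        self.head = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, vol):                    # vol: (B, D, H, W) cubic MRI volume
        mid = vol.shape[1] // 2
        slices = [vol[:, mid, :, :], vol[:, :, mid, :], vol[:, :, :, mid]]
        feats = [b(s.unsqueeze(1)) for b, s in zip(self.branches, slices)]
        return self.head(torch.cat(feats, dim=1)).squeeze(-1)

model = TriameseSketch()
age = model(torch.randn(2, 96, 96, 96))        # predicted ages, shape (2,)
```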

    BM2CP: Efficient Collaborative Perception with LiDAR-Camera Modalities

    Collaborative perception enables agents to share complementary perceptual information with nearby agents. This improves perception performance and alleviates the issues of single-view perception, such as occlusion and sparsity. Most existing approaches focus mainly on a single modality (especially LiDAR) and do not fully exploit the advantages of multi-modal perception. We propose a collaborative perception paradigm, BM2CP, which employs LiDAR and camera to achieve efficient multi-modal perception. It utilizes LiDAR-guided modal fusion, cooperative depth generation, and modality-guided intermediate fusion to acquire deep interactions among the modalities of different agents. Moreover, it can cope with the special case where one of the sensors, of the same or a different type, is missing for any agent. Extensive experiments validate that our approach outperforms state-of-the-art methods with 50X lower communication volume in both simulated and real-world autonomous driving scenarios. Our code is available at https://github.com/byzhaoAI/BM2CP. Comment: 14 pages, 8 figures. Accepted by CoRL 202
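    A hypothetical sketch (not the BM2CP code) of how a fusion block can tolerate a missing sensor: each modality's BEV feature map gets a per-pixel confidence score, missing modalities are masked out before a softmax over modalities, so an agent that loses its camera or LiDAR still produces a valid fused feature. The module name, channel sizes, and masking scheme are illustrative assumptions.

```python
# Hypothetical masked fusion of per-modality BEV features (illustration only).
import torch
import torch.nn as nn

class MaskedModalFusion(nn.Module):
    """Weight each available modality's BEV feature map and combine them; a missing
    modality (mask=0) contributes nothing instead of corrupting the fusion."""
    def __init__(self, channels=64):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)    # per-pixel confidence

    def forward(self, feats, mask):
        # feats: (B, M, C, H, W) BEV features for M modalities (e.g. LiDAR, camera)
        # mask:  (B, M) with 1 for available sensors, 0 for missing ones
        B, M, C, H, W = feats.shape
        logits = self.score(feats.flatten(0, 1)).view(B, M, 1, H, W)
        logits = logits.masked_fill(mask[:, :, None, None, None] == 0, float("-inf"))
        weights = torch.softmax(logits, dim=1)                # normalize over modalities
        return (weights * feats).sum(dim=1)                   # (B, C, H, W)

fusion = MaskedModalFusion()
feats = torch.randn(2, 2, 64, 100, 100)
mask = torch.tensor([[1, 1], [1, 0]])                         # second agent lost its camera
fused = fusion(feats, mask)                                   # (2, 64, 100, 100)
```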

    EEG-based Deep Emotional Diagnosis: A Comparative Study

    Emotion is an important part of people's daily life and is particularly relevant to mental health. Emotional state is closely related to the nervous system and can reflect people's mental condition in response to the surrounding environment or the development of various neurodegenerative diseases. Emotion recognition can therefore assist the medical diagnosis of mental health. In recent years, EEG-based emotion recognition has attracted the attention of many researchers, accompanying the continuous development of artificial intelligence and brain-computer interface technology. In this paper, we compare the performance of three deep learning techniques for EEG classification: DNN, CNN, and CNN-LSTM. The DEAP dataset was used in our experiments. EEG signals were first transformed from the time domain to the frequency domain, and features were then extracted to classify emotions. Our results show that these deep learning techniques can achieve good accuracy for emotional diagnosis.
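    As a concrete illustration of the time-domain-to-frequency-domain step mentioned above, the sketch below computes per-channel band-power features (theta/alpha/beta/gamma) with Welch's PSD, the kind of frequency-domain features commonly fed to DNN/CNN/CNN-LSTM classifiers. The band edges, window length, and DEAP-style trial shape are assumptions for illustration, not the paper's exact pipeline.

```python
# Hypothetical frequency-domain feature extraction for EEG emotion classification.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(eeg, fs=128):
    """eeg: (channels, samples) array -> (channels * n_bands,) feature vector."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, idx].mean(axis=-1))               # mean power per channel
    return np.concatenate(feats)

# Example: one DEAP-style trial (32 channels, 60 s at 128 Hz; sizes assumed here)
trial = np.random.randn(32, 60 * 128)
x = band_power_features(trial)                                # shape (128,) = 32 ch * 4 bands
```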

    User-Centric Democratization towards Social Value Aligned Medical AI Services

    Democratic AI, aiming at developing AI systems aligned with human values, holds promise for making AI services accessible to people. However, concerns have been raised regarding the participation of non-technical individuals, potentially undermining the carefully designed values of AI systems built by experts. In this paper, we investigate Democratic AI, define it mathematically, and propose a user-centric evolutionary democratic AI (u-DemAI) framework. This framework maximizes the social values of cloud-based AI services by incorporating user feedback and emulating human behavior in a community via a user-in-the-loop iteration. We apply our framework to a medical AI service for brain age estimation and demonstrate that non-expert users can consistently contribute to improving AI systems through a natural democratic process. The u-DemAI framework presents a mathematical interpretation of democracy for AI, conceptualizing it as a natural computing process. Our experiments successfully show that involving non-technical individuals can improve performance and simultaneously mitigate bias in AI models developed by AI experts, showcasing the potential of Democratic AI to benefit end users and give them back control over AI services that shape various aspects of our lives, including our health.
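    A hypothetical sketch of one user-in-the-loop evolutionary iteration in the spirit of u-DemAI (not the paper's algorithm): candidate model configurations are scored by simulated community feedback, the highest-rated survive, and mutated copies form the next generation. The configuration fields, fitness function, and mutation scheme are invented for illustration.

```python
# Hypothetical user-in-the-loop evolutionary iteration (illustration only).
import random

def user_feedback(config):
    # Stand-in for aggregated community ratings of a deployed model configuration.
    return -abs(config["bias_penalty"] - 0.5) - abs(config["accuracy_weight"] - 0.8)

def mutate(config, scale=0.05):
    # Small random perturbation of each configuration value.
    return {k: v + random.gauss(0, scale) for k, v in config.items()}

population = [{"accuracy_weight": random.random(), "bias_penalty": random.random()}
              for _ in range(20)]

for generation in range(50):
    population.sort(key=user_feedback, reverse=True)          # community vote as fitness
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(population, key=user_feedback))
```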