275 research outputs found

    CVTNet: A Cross-View Transformer Network for Place Recognition Using LiDAR Data

    LiDAR-based place recognition (LPR) is one of the most crucial components of autonomous vehicles to identify previously visited places in GPS-denied environments. Most existing LPR methods use a single representation of the input point cloud without considering different views, which may not fully exploit the information from LiDAR sensors. In this paper, we propose a cross-view transformer-based network, dubbed CVTNet, to fuse the range image views (RIVs) and bird's eye views (BEVs) generated from the LiDAR data. It extracts correlations within each view using intra-transformers and between the two different views using inter-transformers. Based on that, our proposed CVTNet generates a yaw-angle-invariant global descriptor for each laser scan end-to-end online and retrieves previously seen places by descriptor matching between the current query scan and a pre-built database. We evaluate our approach on three datasets collected with different sensor setups and environmental conditions. The experimental results show that our method outperforms state-of-the-art LPR methods with strong robustness to viewpoint changes and long time spans. Furthermore, our approach achieves good real-time performance, running faster than the typical LiDAR frame rate. The implementation of our method is released as open source at: https://github.com/BIT-MJY/CVTNet.
    Comment: accepted by IEEE Transactions on Industrial Informatics 202
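
    The retrieval stage described above boils down to nearest-neighbor search over global descriptors. Below is a minimal Python sketch of that matching step only (not the CVTNet network itself); the descriptor dimension, database size, and use of cosine similarity are illustrative assumptions:

        import numpy as np

        def build_database(descriptors: np.ndarray) -> np.ndarray:
            # L2-normalize rows so an inner product equals cosine similarity.
            return descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)

        def retrieve(query: np.ndarray, database: np.ndarray, top_k: int = 1):
            # Return indices and similarities of the top_k most similar scans.
            q = query / np.linalg.norm(query)
            sims = database @ q
            order = np.argsort(-sims)[:top_k]
            return order, sims[order]

        # Hypothetical usage: 1000 stored scans with 256-D descriptors.
        db = build_database(np.random.randn(1000, 256))
        idx, scores = retrieve(np.random.randn(256), db, top_k=5)

    Because the descriptors are yaw-angle-invariant, a plain similarity search like this suffices; no rotation alignment is needed at query time.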

    LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition

    Place recognition is one of the most crucial modules for autonomous vehicles to identify places that were previously visited in GPS-denied environments. Sensor fusion is considered an effective method to overcome the weaknesses of individual sensors. In recent years, multimodal place recognition fusing information from multiple sensors has attracted increasing attention. However, most existing multimodal place recognition methods only use limited field-of-view camera images, which leads to an imbalance between features from different modalities and limits the effectiveness of sensor fusion. In this paper, we present a novel neural network named LCPR for robust multimodal place recognition, which fuses LiDAR point clouds with multi-view RGB images to generate discriminative and yaw-rotation-invariant representations of the environment. A multi-scale attention-based fusion module is proposed to fully exploit the panoramic views of the environment from the different modalities and their correlations. We evaluate our method on the nuScenes dataset, and the experimental results show that our method can effectively utilize multi-view camera and LiDAR data to improve place recognition performance while maintaining strong robustness to viewpoint changes. Our open-source code and pre-trained models are available at https://github.com/ZhouZijie77/LCPR.
    Comment: Accepted by IEEE Robotics and Automation Letters (RAL) 202
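
    As a rough illustration of attention-based cross-modal fusion, the PyTorch sketch below lets LiDAR features attend to camera features and pools the result into one descriptor. The layer sizes, single fusion scale, and max-pooling are assumptions for illustration, not LCPR's actual architecture:

        import torch
        import torch.nn as nn

        class AttentionFusion(nn.Module):
            """Fuse LiDAR and camera feature sequences with cross-attention."""
            def __init__(self, dim: int = 256, heads: int = 4):
                super().__init__()
                self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.norm = nn.LayerNorm(dim)

            def forward(self, lidar_feats, cam_feats):
                # lidar_feats: (B, N_l, dim); cam_feats: (B, N_c, dim)
                fused, _ = self.attn(lidar_feats, cam_feats, cam_feats)
                fused = self.norm(lidar_feats + fused)  # residual connection
                # Max-pool along the panoramic axis for a yaw-robust descriptor.
                return fused.max(dim=1).values

        fusion = AttentionFusion()
        desc = fusion(torch.randn(2, 64, 256), torch.randn(2, 180, 256))  # (2, 256)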

    SeqOT: A Spatial-Temporal Transformer Network for Place Recognition Using Sequential LiDAR Data

    Place recognition is an important component for autonomous vehicles to achieve loop closing or global localization. In this paper, we tackle the problem of place recognition based on sequential 3D LiDAR scans obtained by an onboard LiDAR sensor. We propose a transformer-based network named SeqOT to exploit the temporal and spatial information provided by sequential range images generated from the LiDAR data. It uses multi-scale transformers to generate a global descriptor for each sequence of LiDAR range images in an end-to-end fashion. During online operation, our SeqOT finds similar places by matching such descriptors between the current query sequence and those stored in the map. We evaluate our approach on four datasets collected with different types of LiDAR sensors in different environments. The experimental results show that our method outperforms state-of-the-art LiDAR-based place recognition methods and generalizes well across different environments. Furthermore, our method operates online faster than the frame rate of the sensor. The implementation of our method is released as open source at: https://github.com/BIT-MJY/SeqOT.
    Comment: Submitted to IEEE Transactions on Industrial Electronics
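
    To make the sequence-level idea concrete, here is a hedged PyTorch sketch that aggregates per-scan descriptors of a short sequence into a single sequence descriptor with temporal self-attention. The descriptor dimension, layer counts, and mean-pooling are illustrative assumptions rather than SeqOT's actual design:

        import torch
        import torch.nn as nn

        class SeqDescriptor(nn.Module):
            """Aggregate per-scan descriptors of a sequence into one vector."""
            def __init__(self, dim: int = 256, heads: int = 4, layers: int = 2):
                super().__init__()
                layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

            def forward(self, frame_descs):
                # frame_descs: (B, T, dim) -- one descriptor per range image
                x = self.encoder(frame_descs)  # temporal self-attention
                return x.mean(dim=1)           # pool over time -> (B, dim)

        model = SeqDescriptor()
        seq_desc = model(torch.randn(4, 10, 256))  # sequences of 10 scans

    Sequence descriptors built this way are then matched against the map exactly as in the single-scan case.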

    Immunological characterization of stroke-heart syndrome and identification of inflammatory therapeutic targets

    Acute cardiac dysfunction caused by stroke-heart syndrome (SHS) is the second leading cause of stroke-related death. The inflammatory response plays a significant role in the pathophysiological process of cardiac damage. However, the mechanisms underlying the brain–heart interaction are poorly understood. Therefore, we aimed to characterize the immunological features of SHS and identify inflammatory therapeutic targets. We analyzed gene expression data of heart tissue 24 hours after induction of ischemic stroke by middle cerebral artery occlusion (MCAO) or sham surgery in a publicly available dataset (GSE102558) from the Gene Expression Omnibus (GEO). Bioinformatics analysis revealed 138 differentially expressed genes (DEGs) in the myocardium of MCAO-treated compared with sham-treated mice, among which immune and inflammatory pathways were enriched. Analysis of immune cell infiltration showed that the natural killer cell populations differed significantly between the two groups. We identified five differentially expressed immune-related genes (DIREGs), Aplnr, Ccrl2, Cdkn1a, Irak2, and Serpine1, and found that their expression correlated with specific populations of infiltrating immune cells in the cardiac tissue. RT-qPCR and Western blot confirmed significant changes in the expression levels of Aplnr, Cdkn1a, Irak2, and Serpine1 after MCAO; these genes may serve as therapeutic targets to prevent cardiovascular complications after stroke.
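
    For readers unfamiliar with the workflow, the Python sketch below shows the generic shape of a differential-expression screen like the one described: a per-gene test between condition and sham plus a fold-change filter. It is a simplified illustration, not the study's actual pipeline (which would typically use dedicated tools such as limma on GSE102558), and the thresholds are assumptions:

        import numpy as np
        from scipy import stats

        def find_degs(expr_cond, expr_sham, genes, p_cut=0.05, lfc_cut=1.0):
            # expr_cond, expr_sham: (n_samples, n_genes) expression matrices.
            # Welch's t-test per gene (no multiple-testing correction here).
            _, pvals = stats.ttest_ind(expr_cond, expr_sham, axis=0,
                                       equal_var=False)
            # log2 fold change of mean expression (+1 avoids log of zero).
            lfc = (np.log2(expr_cond.mean(axis=0) + 1)
                   - np.log2(expr_sham.mean(axis=0) + 1))
            keep = (pvals < p_cut) & (np.abs(lfc) > lfc_cut)
            return [(g, f, p) for g, f, p, k
                    in zip(genes, lfc, pvals, keep) if k]

    A real analysis would also correct p-values for multiple testing (e.g., Benjamini-Hochberg) before calling a gene differentially expressed.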

    Granular Space Reduction to a β Multigranulation Fuzzy Rough Set

    The multigranulation rough set is an extension of the classical rough set, with optimistic and pessimistic multigranulation as two special cases. The β multigranulation rough set is a more general form of the multigranulation rough set. In this paper, we first introduce fuzzy rough theory into the β multigranulation rough set to construct a β multigranulation fuzzy rough set, which can handle continuous data; we then discuss some of its properties. Reduction is an important issue for multigranulation rough sets, and we propose an algorithm for granular space reduction of the β multigranulation fuzzy rough set that preserves the positive region. To test the algorithm, experiments are conducted on five UCI data sets with different values of β. The results show the effectiveness of the proposed algorithm.
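
    The β threshold itself is easy to illustrate in the crisp (non-fuzzy) case: an object belongs to the β-multigranulation lower approximation when at least a β fraction of the granular structures support it. The Python sketch below shows this crisp case only; the fuzzy variant in the paper would replace the set inclusion with a fuzzy-implication-based membership degree:

        def beta_lower_approx(universe, target, partitions, beta):
            # universe:   iterable of objects
            # target:     set of objects (the concept X)
            # partitions: list of dicts mapping each object to its
            #             equivalence class (a set) under one granulation
            # beta:       required support fraction; beta = 1 recovers
            #             the pessimistic multigranulation case
            lower = set()
            for x in universe:
                support = sum(1 for part in partitions if part[x] <= target)
                if support / len(partitions) >= beta:
                    lower.add(x)
            return lower

        # Tiny example with two granulations over four objects.
        U = [1, 2, 3, 4]
        X = {1, 2}
        P1 = {1: {1}, 2: {2}, 3: {3, 4}, 4: {3, 4}}
        P2 = {1: {1, 2}, 2: {1, 2}, 3: {3}, 4: {4}}
        print(beta_lower_approx(U, X, [P1, P2], beta=0.5))  # -> {1, 2}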

    ConCare: Personalized Clinical Feature Embedding via Capturing the Healthcare Context

    Predicting a patient's clinical outcome from historical electronic medical records (EMR) is a fundamental research problem in medical informatics. Most deep learning-based solutions for EMR analysis concentrate on learning the clinical visit embedding and exploring the relations between visits. Although those works have shown superior performance in healthcare prediction, they fail to thoroughly explore the personal characteristics during the clinical visits. Moreover, existing work usually assumes that a more recent record carries a larger weight in the prediction, but this assumption does not hold for certain clinical features. In this paper, we propose ConCare to handle the irregular EMR data and extract feature interrelationships to perform individualized healthcare prediction. Our solution embeds the feature sequences separately by modeling the time-aware distribution. ConCare further improves multi-head self-attention via cross-head decorrelation, so that the inter-dependencies among dynamic features and static baseline information can be diversely captured to form the personal health context. Experimental results on two real-world EMR datasets demonstrate the effectiveness of ConCare. More importantly, ConCare is able to extract medical findings that can be confirmed by human experts and medical literature.
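
    The time-aware weighting mentioned above can be pictured as attention scores penalized by the time elapsed since each visit. The PyTorch sketch below is one simple way to realize that idea, not ConCare's exact formulation; the decay parameterization and dimensions are assumptions:

        import torch
        import torch.nn as nn

        class TimeAwareAttention(nn.Module):
            """Attention over one feature's history with learned time decay."""
            def __init__(self, dim: int = 64):
                super().__init__()
                self.query = nn.Linear(dim, dim)
                self.key = nn.Linear(dim, dim)
                self.decay = nn.Parameter(torch.tensor(0.1))  # learned rate

            def forward(self, hidden, deltas):
                # hidden: (B, T, dim) per-visit states of one clinical feature
                # deltas: (B, T) time elapsed since each visit (e.g., days)
                q = self.query(hidden[:, -1:])               # query = latest state
                k = self.key(hidden)
                scores = (q @ k.transpose(1, 2)).squeeze(1)  # (B, T)
                scores = scores - torch.relu(self.decay) * deltas  # older -> less
                w = torch.softmax(scores, dim=-1)
                return (w.unsqueeze(-1) * hidden).sum(dim=1)  # (B, dim)

        attn = TimeAwareAttention()
        ctx = attn(torch.randn(2, 8, 64), torch.rand(2, 8) * 30)  # (2, 64)

    Because the decay is learned per feature, recency can matter a lot for some biomarkers and very little for others, which is exactly the assumption the abstract relaxes.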

    Explicit Interaction for Fusion-Based Place Recognition

    Fusion-based place recognition is an emerging technique that jointly utilizes multi-modal perception data to recognize previously visited places in GPS-denied scenarios for robots and autonomous vehicles. Recent fusion-based place recognition methods combine multi-modal features in implicit manners. While achieving remarkable results, they do not explicitly consider what each individual modality contributes to the fusion system, so the benefit of multi-modal feature fusion may not be fully explored. In this paper, we propose a novel fusion-based network, dubbed EINet, to achieve explicit interaction between the two modalities. EINet uses LiDAR ranges to supervise more robust vision features over long time spans, and simultaneously uses camera RGB data to improve the discrimination of LiDAR point clouds. In addition, we develop a new benchmark for the place recognition task based on the nuScenes dataset. To establish this benchmark for future research with comprehensive comparisons, we introduce both supervised and self-supervised training schemes alongside evaluation protocols. We conduct extensive experiments on the proposed benchmark, and the results show that our EINet exhibits better recognition performance as well as solid generalization ability compared to state-of-the-art fusion-based place recognition approaches. Our open-source code and benchmark are released at: https://github.com/BIT-XJY/EINet.
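
    The "explicit interaction" idea can be sketched as a training objective in which each modality directly supervises the other. The PyTorch snippet below is purely illustrative: the depth head, the consistency term, and the equal loss weights are assumptions, not EINet's actual losses:

        import torch.nn.functional as F

        def explicit_interaction_loss(vis_feat, lidar_feat,
                                      vis_depth_pred, lidar_depth,
                                      desc_q, desc_pos, desc_neg,
                                      margin=0.5):
            # LiDAR geometry supervises a depth head on the vision branch.
            depth_loss = F.l1_loss(vis_depth_pred, lidar_depth)
            # Triplet loss on fused descriptors drives place discrimination.
            triplet = F.triplet_margin_loss(desc_q, desc_pos, desc_neg,
                                            margin=margin)
            # Encourage agreement between the per-modality features.
            consistency = 1 - F.cosine_similarity(vis_feat, lidar_feat,
                                                  dim=-1).mean()
            return depth_loss + triplet + consistency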

    AdaCare: Explainable Clinical Health Status Representation Learning via Scale-Adaptive Feature Extraction and Recalibration

    Deep learning-based health status representation learning and clinical prediction have attracted much research interest in recent years. Existing models have shown superior performance, but several major issues have not been fully taken into consideration. First, the historical variation pattern of a biomarker at diverse time scales plays an important role in indicating health status, but it has not been explicitly extracted by existing works. Second, the key factors that strongly indicate health risk differ among patients. It remains challenging to adaptively make use of the features for patients in diverse conditions. Third, using the prediction model as a black box limits its reliability in clinical practice, yet none of the existing works provide satisfying interpretability while also achieving high prediction performance. In this work, we develop a general health status representation learning model, named AdaCare. It captures the long- and short-term variations of biomarkers as clinical features to depict health status at multiple time scales. It also models the correlation between clinical features to enhance the ones that strongly indicate health status, and thus maintains state-of-the-art prediction accuracy while providing qualitative interpretability. We conduct health risk prediction experiments on two real-world datasets. Experimental results indicate that AdaCare outperforms state-of-the-art approaches and provides effective interpretability that is verifiable by clinical experts.
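
    The scale-adaptive extraction plus recalibration described above can be pictured as multi-scale temporal convolutions followed by squeeze-and-excitation style channel reweighting. The PyTorch sketch below is in that spirit only; the dilation rates, channel counts, and pooling are illustrative assumptions rather than AdaCare's exact architecture:

        import torch
        import torch.nn as nn

        class MultiScaleRecalibration(nn.Module):
            """Multi-scale temporal convolutions with feature recalibration."""
            def __init__(self, n_feats: int, channels: int = 16):
                super().__init__()
                # Dilated convolutions capture short- and long-term trends.
                self.convs = nn.ModuleList([
                    nn.Conv1d(n_feats, channels, kernel_size=3,
                              padding=d, dilation=d)
                    for d in (1, 2, 4)
                ])
                total = 3 * channels
                self.squeeze = nn.Sequential(   # recalibrate channel weights
                    nn.Linear(total, total // 2), nn.ReLU(),
                    nn.Linear(total // 2, total), nn.Sigmoid(),
                )

            def forward(self, x):
                # x: (B, n_feats, T) biomarker series
                h = torch.cat([c(x) for c in self.convs], dim=1)  # (B, 3C, T)
                w = self.squeeze(h.mean(dim=-1))                  # (B, 3C)
                return h * w.unsqueeze(-1)

        m = MultiScaleRecalibration(n_feats=10)
        out = m(torch.randn(2, 10, 48))  # 48 time steps

    The learned channel weights can also be inspected per scale, which is one way such a model can expose which temporal patterns drove a prediction.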