
    FE-Fusion-VPR: Attention-based Multi-Scale Network Architecture for Visual Place Recognition by Fusing Frames and Events

    Traditional visual place recognition (VPR), which usually relies on standard cameras, is prone to failure under glare or high-speed motion. By contrast, event cameras offer low latency, high temporal resolution, and high dynamic range, which can cope with these issues. Nevertheless, event cameras tend to fail in weakly textured or motionless scenes, where standard cameras can still provide appearance information. Exploiting the complementarity of standard and event cameras can therefore effectively improve VPR performance. In this paper, we propose FE-Fusion-VPR, an attention-based multi-scale network architecture for VPR that fuses frames and events. First, the intensity frame and the event volume are fed into a two-stream feature extraction network for shallow feature fusion. Next, features at three scales are obtained through the multi-scale fusion network and aggregated into three sub-descriptors using a VLAD layer. Finally, the weight of each sub-descriptor is learned through a descriptor re-weighting network to obtain the final refined descriptor. Experimental results on the Brisbane-Event-VPR and DDD20 datasets show that the Recall@1 of FE-Fusion-VPR is 29.26% and 33.59% higher than Event-VPR and Ensemble-EventVPR, and 7.00% and 14.15% higher than MultiRes-NetVLAD and NetVLAD. To our knowledge, this is the first end-to-end network that fuses frames and events directly for VPR and surpasses the existing event-based and frame-based SOTA methods.
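    The VLAD-style aggregation step mentioned above can be sketched in plain NumPy. This is a minimal illustration of hard-assignment VLAD with made-up feature and cluster shapes, not FE-Fusion-VPR's actual (learnable, soft-assignment) layer:

```python
import numpy as np

def vlad_aggregate(features, centers):
    """Aggregate local features into a VLAD descriptor.

    features: (N, D) local feature vectors
    centers:  (K, D) cluster centres
    Returns a flattened, L2-normalised (K*D,) descriptor.
    """
    # Hard-assign each feature to its nearest cluster centre.
    dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)                      # (N,)
    K, D = centers.shape
    vlad = np.zeros((K, D))
    for k in range(K):
        members = features[assign == k]
        if len(members):
            # Sum of residuals between member features and their centre.
            vlad[k] = (members - centers[k]).sum(axis=0)
    vlad = vlad.ravel()
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad
```

In the paper's architecture three such sub-descriptors (one per scale) are further re-weighted by a learned network before concatenation.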

    A Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks

    Multisensor fusion and consensus filtering are two fascinating subjects in the research of sensor networks. In this survey, we cover both classic results and recent advances in these two topics. First, we recall some important results in the development of multisensor fusion technology, paying particular attention to fusion with unknown correlations, which exist ubiquitously in most distributed filtering problems. Next, we give a systematic review of several widely used consensus filtering approaches. Furthermore, some of the latest progress on multisensor fusion and consensus filtering is also presented. Finally, conclusions are drawn and several potential future research directions are outlined. This work was supported by the Royal Society of the UK, the National Natural Science Foundation of China under Grants 61329301, 61374039, 61304010, 11301118, and 61573246, the Hujiang Foundation of China under Grants C14002 and D15009, the Alexander von Humboldt Foundation of Germany, and the Innovation Fund Project for Graduate Student of Shanghai under Grant JWCXSL140
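    The consensus filtering idea surveyed above can be illustrated with its simplest instance, average consensus: each node repeatedly replaces its estimate with a weighted average of its neighbours' estimates. The 3-node weight matrix below is a hypothetical example, not taken from the survey:

```python
import numpy as np

def consensus_step(x, W):
    """One consensus iteration: each node averages with its neighbours.

    x: (n,) current node estimates
    W: (n, n) doubly-stochastic weight matrix respecting the graph topology
    """
    return W @ x

# Illustrative fully connected 3-node network with symmetric weights.
W = np.array([[0.5 , 0.25, 0.25],
              [0.25, 0.5 , 0.25],
              [0.25, 0.25, 0.5 ]])
x = np.array([1.0, 5.0, 9.0])
for _ in range(50):
    x = consensus_step(x, W)
# All estimates converge to the average of the initial values (5.0 here).
```

Because W is doubly stochastic and symmetric, the deviation from the mean shrinks by the second-largest eigenvalue magnitude (0.25 here) at every step.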

    Improving 3D U-Net for Brain Tumor Segmentation by Utilizing Lesion Prior

    We propose a novel, simple, and effective method that integrates a lesion prior with a 3D U-Net to improve brain tumor segmentation. First, we use the ground-truth brain tumor lesions from a group of patients to generate heatmaps of the different lesion types. These heatmaps are used to create a volume-of-interest (VOI) map that encodes prior information about brain tumor lesions. The VOI map is then combined with the multimodal MR images and fed into a 3D U-Net for segmentation. The proposed method is evaluated on a public benchmark dataset, and the experimental results show that the proposed feature fusion method achieves an improvement over the baseline methods. In addition, our method achieves competitive performance compared to state-of-the-art methods. Comment: 5 pages, 4 figures, 1 table, LNCS format
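    The lesion-prior construction described above (group-level heatmaps attached to the MR input) can be sketched as follows; the array shapes and the simple averaging/stacking choices are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def lesion_heatmap(masks):
    """Average binary lesion masks from a patient group into a prior heatmap.

    masks: (P, X, Y, Z) binary ground-truth lesion masks for P patients
    Returns an (X, Y, Z) lesion-frequency map with values in [0, 1].
    """
    return np.mean(masks.astype(np.float64), axis=0)

def attach_voi_channel(mri, heatmap):
    """Stack the VOI prior as an extra input channel for the 3D U-Net.

    mri: (C, X, Y, Z) multimodal MR volume; heatmap: (X, Y, Z)
    Returns a (C + 1, X, Y, Z) network input.
    """
    return np.concatenate([mri, heatmap[None]], axis=0)
```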

    [68Ga]-DOTATOC-PET/CT for meningioma IMRT treatment planning

    Purpose: The observation that human meningioma cells strongly express the somatostatin receptor (SSTR 2) was the rationale for retrospectively analyzing to what extent DOTATOC-PET/CT helps to improve target volume delineation for intensity-modulated radiotherapy (IMRT). Patients and Methods: In 26 consecutive patients with predominantly skull base meningioma, diagnostic magnetic resonance imaging (MRI) and planning computed tomography (CT) were complemented with data from [68Ga]-DOTA-D-Phe1-Tyr3-Octreotide (DOTATOC)-PET/CT. Image fusion of PET/CT, diagnostic CT, MRI, and the radiotherapy planning CT, as well as target volume delineation, was performed with OTP-Masterplan®. The initial gross tumor volume (GTV) definition was based on MRI data only and was secondarily complemented with the DOTATOC-PET information. Irradiation was performed as EUD-based IMRT using the Hyperion software package. Results: The integration of the DOTATOC data provided additional information on tumor extension in 17 of 26 patients (65%). There were major changes of the clinical target volume (CTV), modifying the PTV, in 14 patients; minor changes were made in 3 patients. Overall, the GTV-MRI/CT was larger than the GTV-PET in 10 patients (38%), smaller in 13 patients (50%), and almost the same in 3 patients (12%). Most of the adaptations were performed in close vicinity to bony skull base structures or after complex surgery. The median GTV was 18.1 cc based on MRI and 25.3 cc based on PET; the subsequent CTV was 37.4 cc. Radiation planning and treatment of the DOTATOC-adapted volumes was feasible. Conclusion: DOTATOC-PET/CT information can strongly complement patho-anatomical data from MRI and CT in complex meningioma cases and is thus helpful for improved target volume delineation, especially for skull base manifestations and recurrent disease after surgery.

    Efficient Online Surface Correction for Real-time Large-Scale 3D Reconstruction

    State-of-the-art methods for large-scale 3D reconstruction from RGB-D sensors usually reduce drift in camera tracking by globally optimizing the estimated camera poses in real-time, without simultaneously updating the reconstructed surface on pose changes. We propose an efficient on-the-fly surface correction method for globally consistent dense 3D reconstruction of large-scale scenes. Our approach uses a dense visual RGB-D SLAM system that estimates the camera motion in real-time on a CPU and refines it in a global pose graph optimization. Consecutive RGB-D frames are locally fused into keyframes, which are incorporated into a sparse voxel-hashed Signed Distance Field (SDF) on the GPU. On pose graph updates, the SDF volume is corrected on-the-fly using a novel keyframe re-integration strategy with reduced GPU-host streaming. We demonstrate in an extensive quantitative evaluation that our method is up to 93% more runtime efficient compared to the state of the art and requires significantly less memory, with only negligible loss of surface quality. Overall, our system requires only a single GPU and allows for real-time surface correction of large environments. Comment: British Machine Vision Conference (BMVC), London, September 201
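    The keyframe re-integration idea rests on the fact that weighted-average SDF fusion is invertible: an observation fused under an outdated pose can be subtracted out and re-fused under the corrected pose. A single-voxel sketch under that assumption (the actual system operates on a sparse voxel-hashed volume on the GPU):

```python
def integrate(value, weight, d, w=1.0):
    """Fuse one signed-distance observation d into a weighted-average voxel."""
    new_w = weight + w
    return (value * weight + d * w) / new_w, new_w

def deintegrate(value, weight, d, w=1.0):
    """Remove a previously fused observation; the exact inverse of integrate()."""
    new_w = weight - w
    if new_w <= 0:
        return 0.0, 0.0          # voxel reverts to the unobserved state
    return (value * weight - d * w) / new_w, new_w

# Fuse two observations, then retract the first after a pose-graph update.
v, w = integrate(0.0, 0.0, d=0.4)
v, w = integrate(v, w, d=0.8)    # running average: v == 0.6
v, w = deintegrate(v, w, d=0.4)  # only the second remains: v == 0.8
```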

    Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging

    Many analyses of neuroimaging data involve studying one or more regions of interest (ROIs) in a brain image. In order to do so, each ROI must first be identified. Since every brain is unique, the location, size, and shape of each ROI vary across subjects. Thus, each ROI in a brain image must either be manually identified or (semi-)automatically delineated, a task referred to as segmentation. Automatic segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each ROI is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms either employ voting procedures or impose prior structure and subsequently find the maximum a posteriori estimator (i.e., the posterior mode) through optimization. We propose a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making the entire posterior distribution accessible. We discuss the implementation of our model via Markov chain Monte Carlo and illustrate the procedure through both simulation and application to segmentation of the hippocampus, an anatomical structure known to be associated with Alzheimer's disease. Comment: 24 pages, 10 figures
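    As a point of reference for the voting procedures the abstract contrasts against, a per-voxel majority-vote label fusion baseline can be sketched as follows (toy array shapes; the paper's contribution is the Bayesian spatial regression model, not this):

```python
import numpy as np

def majority_vote_fusion(labels):
    """Fuse propagated atlas labels by per-voxel majority vote.

    labels: (A, V) integer ROI labels from A registered atlases over V voxels
    Returns (V,) fused labels; ties are broken toward the smallest label.
    """
    A, V = labels.shape
    n_classes = labels.max() + 1
    # Count votes per class at each voxel.
    votes = np.zeros((n_classes, V), dtype=int)
    for a in range(A):
        votes[labels[a], np.arange(V)] += 1
    return votes.argmax(axis=0)
```

Unlike the Bayesian model, this yields only a point estimate per voxel, with no posterior uncertainty and no way to incorporate covariates.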

    In-Situ Defect Detection in Laser Powder Bed Fusion by Using Thermography and Optical Tomography—Comparison to Computed Tomography

    Among additive manufacturing (AM) technologies, laser powder bed fusion (L-PBF) is one of the most important for producing metallic components. The layer-wise build-up of components and the complex process conditions increase the probability of defects occurring. However, due to the iterative nature of its manufacturing process, and in contrast to conventional manufacturing technologies such as casting, L-PBF offers unique opportunities for in-situ monitoring. In this study, two cameras were successfully tested simultaneously as a machine-manufacturer-independent process monitoring setup: a high-frequency infrared camera and a long-exposure camera working in the visible and infrared spectrum, equipped with a near-infrared filter. An AISI 316L stainless steel specimen with integrated artificial defects was monitored during the build. The acquired camera data was compared to data obtained by computed tomography. A promising and easy-to-use examination method for data analysis was developed, and correlations between measured signals and defects were identified. Moreover, sources of possible data misinterpretation were specified. Lastly, attempts at automatic data analysis by data integration are presented.