Efficient Online Surface Correction for Real-time Large-Scale 3D Reconstruction
State-of-the-art methods for large-scale 3D reconstruction from RGB-D sensors
usually reduce drift in camera tracking by globally optimizing the estimated
camera poses in real-time without simultaneously updating the reconstructed
surface on pose changes. We propose an efficient on-the-fly surface correction
method for globally consistent dense 3D reconstruction of large-scale scenes.
Our approach uses a dense Visual RGB-D SLAM system that estimates the camera
motion in real-time on a CPU and refines it in a global pose graph
optimization. Consecutive RGB-D frames are locally fused into keyframes, which
are incorporated into a sparse voxel hashed Signed Distance Field (SDF) on the
GPU. On pose graph updates, the SDF volume is corrected on-the-fly using a
novel keyframe re-integration strategy with reduced GPU-host streaming. We
demonstrate in an extensive quantitative evaluation that our method is up to
93% more runtime efficient compared to the state-of-the-art and requires
significantly less memory, with only negligible loss of surface quality.
Overall, our system requires only a single GPU and allows for real-time surface
correction of large environments.
Comment: British Machine Vision Conference (BMVC), London, September 201
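The re-integration idea in this abstract can be sketched in a few lines: because SDF fusion is a weighted running average per voxel, a keyframe's contribution can be subtracted (de-integrated) under its stale pose and added back under the corrected pose after a pose-graph update. The sketch below is illustrative only; the function names and the single scalar-voxel view are assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): one voxel stores a signed
# distance as a weighted running average.  Integrating a keyframe adds its
# observation; de-integrating exactly undoes it, which is what enables
# on-the-fly correction after a pose-graph update.

def integrate(sdf, weight, obs_dist, obs_weight=1.0):
    """Fuse one observed distance into a voxel (weighted running mean)."""
    new_weight = weight + obs_weight
    new_sdf = (sdf * weight + obs_dist * obs_weight) / new_weight
    return new_sdf, new_weight

def deintegrate(sdf, weight, obs_dist, obs_weight=1.0):
    """Exactly undo a previous integration of the same observation."""
    new_weight = weight - obs_weight
    if new_weight <= 0.0:
        return 0.0, 0.0
    new_sdf = (sdf * weight - obs_dist * obs_weight) / new_weight
    return new_sdf, new_weight

# A keyframe observed distance 0.30 under its old pose; after the pose
# graph moves the keyframe, the corrected observation is 0.20.
sdf, w = 0.0, 0.0
sdf, w = integrate(sdf, w, 0.30)     # initial fusion
sdf, w = deintegrate(sdf, w, 0.30)   # undo the stale observation
sdf, w = integrate(sdf, w, 0.20)     # re-integrate with corrected pose
print(round(sdf, 2))  # 0.2
```

The same subtract-then-add pattern, batched per keyframe instead of per frame, is what keeps the GPU-host streaming cost low.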
A Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks
Multisensor fusion and consensus filtering are two fascinating subjects in the research of sensor networks. In this survey, we cover both classic results and recent advances in these two topics. First, we recall some important results in the development of multisensor fusion technology. In particular, we pay close attention to fusion with unknown correlations, which arise ubiquitously in distributed filtering problems. Next, we give a systematic review of several widely used consensus filtering approaches. Furthermore, some of the latest progress on multisensor fusion and consensus filtering is also presented. Finally, conclusions are drawn and several potential future research directions are outlined.
Supported by the Royal Society of the UK, the National Natural Science Foundation of China under Grants 61329301, 61374039, 61304010, 11301118, and 61573246, the Hujiang Foundation of China under Grants C14002 and D15009, the Alexander von Humboldt Foundation of Germany, and the Innovation Fund Project for Graduate Student of Shanghai under Grant JWCXSL140
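The consensus filtering idea the survey reviews can be illustrated with the simplest member of the family, average consensus: each sensor node repeatedly nudges its estimate toward its neighbours' estimates, and the whole network converges to the mean of the initial measurements. This toy sketch is an assumption-laden illustration, not any specific filter from the survey.

```python
import numpy as np

def consensus_step(x, adjacency, eps):
    """One consensus iteration: x_i += eps * sum_j a_ij * (x_j - x_i)."""
    # adjacency @ x sums neighbour values; adjacency.sum(axis=1) * x is
    # each node's own value weighted by its degree.
    return x + eps * (adjacency @ x - adjacency.sum(axis=1) * x)

# Ring of four sensors, each holding a noisy local measurement.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, 3.0, 5.0, 7.0])
for _ in range(100):
    x = consensus_step(x, A, eps=0.25)
print(x)  # every entry approaches the network mean, 4.0
```

The step size `eps` must stay below one over the maximum node degree for the iteration to be stable, which is why 0.25 works on this degree-2 ring.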
Real-time virtual sonography in gynecology & obstetrics: literature analysis and case series
Fusion Imaging is a latest-generation diagnostic technique, designed to combine ultrasonography with a second-tier technique such as magnetic resonance imaging and computed tomography. Until now it has mainly been used in urology and hepatology. Concerning gynecology and obstetrics, the studies mostly focus on the diagnosis of prenatal disease, benign pathology, and cervical cancer. We provide a systematic review of the literature covering the latest publications on the role of Fusion technology in the gynecological and obstetric fields, and we also describe a case series of six emblematic patients enrolled from the Gynecology Department of Sant'Andrea Hospital, "la Sapienza", Rome, evaluated with Esaote Virtual Navigator equipment. We consider that Fusion Imaging could add value to the diagnosis of various gynecological and obstetric conditions, but further studies are needed to better define and improve the role of this fascinating diagnostic tool.
A two-step fusion process for multi-criteria decision applied to natural hazards in mountains
Mountain river torrents and snow avalanches cause human and material damage with dramatic consequences. Knowledge about these natural phenomena is often lacking, and expertise is required for decision and risk management purposes using multi-disciplinary quantitative or qualitative approaches. Expertise is considered as a decision process based on imperfect information coming from more or less reliable and conflicting sources. A methodology mixing the Analytic Hierarchy Process (AHP), a multi-criteria decision-aid method, and information fusion using Belief Function Theory is described. Fuzzy Set and Possibility theories allow quantitative and qualitative criteria to be transformed into a common frame of discernment for decision in the Dempster-Shafer Theory (DST) and Dezert-Smarandache Theory (DSmT) contexts. The main issues are the elicitation of basic belief assignments, conflict identification and management, the choice of fusion rules, and the validation of results, as well as the specific need to distinguish between importance, reliability, and uncertainty in the fusion process.
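The DST combination step at the heart of this kind of fusion can be shown in miniature with Dempster's rule: masses from two sources are multiplied over intersecting hypothesis sets, and the mass assigned to contradictory pairs (the conflict) is renormalised away. The frame, the expert masses, and the function name below are invented for illustration; they are not taken from the paper.

```python
# Toy Dempster's rule of combination over the frame {A, B}.
# Mass functions are dicts mapping frozenset hypotheses to masses;
# frozenset({"A", "B"}) represents ignorance (mass on the whole frame).

def dempster_combine(m1, m2):
    """Combine two mass functions; return (fused masses, conflict k)."""
    combined = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # contradictory pair, e.g. {A} vs {B}
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}, conflict

A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B
expert1 = {A: 0.6, B: 0.1, AB: 0.3}   # hypothetical expert opinions
expert2 = {A: 0.5, B: 0.2, AB: 0.3}
fused, k = dempster_combine(expert1, expert2)
print(fused[A], k)  # mass on A rises above either expert's; k = 0.17
```

High conflict `k` is exactly the situation that motivates the alternative fusion rules (and the DSmT framework) discussed in the abstract, since renormalising large conflict can produce counter-intuitive results.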
Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging
Many analyses of neuroimaging data involve studying one or more regions of
interest (ROIs) in a brain image. In order to do so, each ROI must first be
identified. Since every brain is unique, the location, size, and shape of each
ROI varies across subjects. Thus, each ROI in a brain image must either be
manually identified or (semi-) automatically delineated, a task referred to as
segmentation. Automatic segmentation often involves mapping a previously
manually segmented image to a new brain image and propagating the labels to
obtain an estimate of where each ROI is located in the new image. A more recent
approach to this problem is to propagate labels from multiple manually
segmented atlases and combine the results using a process known as label
fusion. To date, most label fusion algorithms either employ voting procedures
or impose prior structure and subsequently find the maximum a posteriori
estimator (i.e., the posterior mode) through optimization. We propose using a
fully Bayesian spatial regression model for label fusion that facilitates
direct incorporation of covariate information while making accessible the
entire posterior distribution. We discuss the implementation of our model via
Markov chain Monte Carlo and illustrate the procedure through both simulation
and application to segmentation of the hippocampus, an anatomical structure
known to be associated with Alzheimer's disease.
Comment: 24 pages, 10 figures
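The voting baseline that the abstract contrasts with its Bayesian model can be sketched directly: each propagated atlas segmentation votes per voxel, and the most frequent label wins. This is a minimal illustration of majority-vote label fusion only, not the proposed spatial regression model; the array shapes and names are assumptions.

```python
import numpy as np

def majority_vote(atlas_labels):
    """Majority-vote label fusion.

    atlas_labels: (n_atlases, n_voxels) integer label maps propagated
    from manually segmented atlases.  Returns the per-voxel winner.
    """
    atlas_labels = np.asarray(atlas_labels)
    n_labels = atlas_labels.max() + 1
    # Count votes per label for every voxel, then take the argmax.
    counts = np.stack([(atlas_labels == lab).sum(axis=0)
                       for lab in range(n_labels)])
    return counts.argmax(axis=0)

# Three atlases, five voxels, labels 0 = background, 1 = hippocampus.
atlases = [[0, 1, 1, 0, 1],
           [0, 1, 0, 0, 1],
           [1, 1, 1, 0, 0]]
print(majority_vote(atlases))  # [0 1 1 0 1]
```

Unlike this hard vote, the fully Bayesian approach described above yields an entire posterior distribution per voxel, so uncertainty in the fused segmentation can be carried into downstream analyses.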