Outlier Detection for Shape Model Fitting
Medical image analysis applications often benefit from having a statistical shape model in the background. Statistical shape models are generative models that can generate shapes from the same family and assign a likelihood to each generated shape. In an analysis-by-synthesis approach to medical image analysis, the target shape to be segmented, registered or completed must first be reconstructed by the statistical shape model. Shape models accomplish this either by acting as regression models, used to obtain the reconstruction, or as regularizers, used to limit the space of possible reconstructions. However, the accuracy of these models is not guaranteed for targets that lie outside the modeled distribution of the statistical shape model. Targets with pathologies are an example of out-of-distribution data: the target shape to be reconstructed has deformations caused by pathologies that do not exist in the healthy data used to build the model. Added and missing regions may lead to false correspondences, which act as outliers and influence the reconstruction result. Robust fitting is necessary to decrease the influence of outliers on the fitting solution, but often comes at the cost of decreased accuracy in the inlier region. Robust techniques typically presuppose either knowledge of the outlier characteristics, to build a robust cost function, or knowledge of the correct regressed function, to filter the outliers.
This thesis proposes strategies to obtain the outliers and the reconstruction simultaneously, without prior knowledge of either. The assumptions are that a statistical shape model representing the healthy variations of the target organ is available, and that some landmarks exist on the model reference annotating locations with correspondence to the target. The first strategy uses an EM-like algorithm to obtain the sampling posterior. This is a global reconstruction approach that requires classical noise assumptions on the outlier distribution. The second strategy uses Bayesian optimization to infer the closed-form predictive posterior distribution and estimate a label map of the outliers. The underlying regression model is a Gaussian Process Morphable Model (GPMM). To make the reconstruction obtained through Bayesian optimization robust, a novel acquisition function is proposed. The acquisition function uses the posterior and predictive posterior distributions to avoid choosing outliers as the next query points. The algorithms output a label map and a posterior distribution that can be used to choose the most likely reconstruction. To obtain the label map, the first strategy uses Bayesian classification to separate inliers from outliers, while the second strategy annotates all query points as inliers and unused model vertices as outliers. The proposed solutions are compared to the literature, evaluated through their sensitivity and breakdown points, and tested on publicly available datasets and in-house clinical examples.
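The inlier/outlier separation behind the first strategy can be illustrated with a minimal sketch: a two-component mixture (Gaussian inliers, uniform outliers) fitted with an EM-style loop on correspondence residuals, followed by Bayesian classification via the posterior responsibilities. This is an illustrative toy, not the thesis's actual algorithm; the function name, the fixed `outlier_density`, and the 1-D residual setting are all assumptions.

```python
import numpy as np

def em_outlier_labels(residuals, outlier_density=0.01, n_iter=20):
    """Classify correspondence residuals as inliers/outliers with a
    two-component mixture: Gaussian (inliers) vs. uniform (outliers)."""
    r = np.asarray(residuals, dtype=float)
    sigma = r.std() + 1e-9          # initial noise-scale guess
    pi_in = 0.9                     # initial inlier mixing weight
    for _ in range(n_iter):
        # E-step: posterior probability that each residual is an inlier
        g = pi_in * np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        u = (1.0 - pi_in) * outlier_density
        w = g / (g + u)
        # M-step: re-estimate noise scale and mixing weight from soft labels
        sigma = np.sqrt(np.sum(w * r ** 2) / np.sum(w)) + 1e-9
        pi_in = min(w.mean(), 0.99)  # cap to keep the outlier component alive
    return w > 0.5                   # label map: True = inlier

# Synthetic residuals: 200 inliers (Gaussian noise) + 20 gross outliers
rng = np.random.default_rng(0)
res = np.concatenate([rng.normal(0.0, 0.5, 200), rng.uniform(5.0, 30.0, 20)])
labels = em_outlier_labels(res)
```

The cap on `pi_in` is a pragmatic guard: without it, EM can drive the outlier weight to zero and absorb everything into the Gaussian component.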
The thesis contributes to shape model fitting to pathological targets by showing that:
- accurate inlier reconstruction and outlier detection can be performed simultaneously, without case-specific manual thresholds or input label maps.
- outlier detection makes the algorithms agnostic to pathology type, i.e. they are suitable for both sparse and grouped outliers, which appear as holes and bumps whose severity influences the results.
- using the GPMM-based sequential Bayesian optimization approach, the closed-form predictive posterior distribution can be obtained despite the presence of outliers, because the Gaussian noise assumption is valid for the query points.
- using sequential Bayesian optimization instead of traditional optimization for shape model fitting brings several previously unexplored advantages. Fitting can be driven by different reconstruction goals such as speed, location-dependent accuracy, or robustness.
- defining pathologies as outliers opens the door for general pathology segmentation solutions for medical data. Segmentation algorithms do not need to be dependent on imaging modality, target pathology type, or training datasets for pathology labeling.
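As a toy illustration of the robustness-aware acquisition idea above, candidate query points can be scored by predictive uncertainty, discounted by a Gaussian-residual inlier weight so that likely outliers are not selected as the next query. This sketch is hypothetical and far simpler than the acquisition function proposed in the thesis; all names and parameters are assumptions.

```python
import numpy as np

def robust_acquisition(pred_mean, pred_std, observed, inlier_scale=1.0):
    """Score candidates: prefer high predictive uncertainty, but discount
    points whose observation disagrees strongly with the prediction."""
    resid = np.abs(observed - pred_mean)
    # Gaussian-residual inlier weight; a wide predictive std loosens the gate
    inlier_w = np.exp(-0.5 * (resid / (inlier_scale + pred_std)) ** 2)
    return pred_std * inlier_w

mu = np.zeros(4)                       # predictive means at 4 candidates
sd = np.array([1.0, 2.0, 2.0, 0.1])    # predictive standard deviations
obs = np.array([0.5, 0.5, 9.0, 0.0])   # candidate 2 looks like an outlier
scores = robust_acquisition(mu, sd, obs)
next_idx = int(np.argmax(scores))      # picks the uncertain, consistent candidate
```

Candidates 1 and 2 have equal uncertainty, but candidate 2's large residual suppresses its score, so the query goes to candidate 1.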
The thesis highlights the importance of outlier-based definitions of pathologies in medical data, independent of pathology type and imaging modality. Developing such standards would not only simplify the comparison of different pathology segmentation algorithms on unlabeled datasets, but also push forward general algorithms able to deal with arbitrary pathologies instead of relying on data-driven definitions of them. This comes with theoretical as well as clinical advantages. Practical applications are shown on shape reconstruction and labeling tasks. Publicly available challenge datasets are used: one for cranium implant reconstruction, one for kidney tumor detection, and one for liver shape reconstruction. Further clinical applications are shown on in-house examples of a femur and a mandible with artifacts and missing parts. The results focus on shape modeling but can be extended in future work to include intensity information and inner-volume pathologies.
Image segmentation with adaptive region growing based on a polynomial surface model
A new method for segmenting intensity images into smooth surface segments is presented. The main idea is to divide the image into flat, planar, convex, concave, and saddle patches that coincide as well as possible with meaningful object features in the image. To this end, we propose an adaptive region growing algorithm based on low-degree polynomial fitting. The algorithm uses a new adaptive thresholding technique with the L∞ fitting cost as the segmentation criterion. The polynomial degree and the fitting error are automatically adapted during the region growing process. The main contribution is that the algorithm detects outliers and edges, distinguishes between strong and smooth intensity transitions, and finds surface segments that are bent in a particular way. As a result, the surface segments corresponding to meaningful object features, and the contours separating them, coincide with real object edges in the image. Moreover, the curvature-based surface shape information facilitates many image analysis tasks, such as object recognition performed on the polynomial representation. The polynomial representation provides a good image approximation while preserving all the necessary details of the objects in the reconstructed images. The method outperforms existing techniques when segmenting images of objects with diffusely reflecting surfaces.
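In one dimension, the region growing idea can be sketched as follows: grow each segment while a low-degree polynomial fit keeps the L∞ (maximum) error below a threshold, and start a new segment when the bound is violated. This is a simplified 1-D analogue of the paper's 2-D adaptive surface method; the function and its parameters are illustrative only.

```python
import numpy as np

def grow_segments(signal, degree=2, tol=0.05):
    """Greedy 1-D region growing: extend each segment while a polynomial
    fit of at most `degree` keeps the L-infinity error below `tol`."""
    y = np.asarray(signal, dtype=float)
    segments, start = [], 0
    while start < len(y):
        end = min(start + degree + 1, len(y))  # smallest window that pins the fit
        best_end = end                         # always accept the minimal window
        while end <= len(y):
            xs = np.arange(start, end)
            c = np.polyfit(xs, y[start:end], min(degree, len(xs) - 1))
            if np.max(np.abs(np.polyval(c, xs) - y[start:end])) > tol:
                break                          # L-infinity bound violated: stop growing
            best_end = end
            end += 1
        segments.append((start, best_end))     # half-open segment [start, best_end)
        start = best_end
    return segments

# Two polynomial pieces with a jump at index 50 are separated at the edge
t = np.arange(100.0)
y = np.where(t < 50, 0.001 * t ** 2, 5.0 - 0.02 * (t - 50))
segs = grow_segments(y, degree=2, tol=1e-6)
```

The jump acts like an image edge: the L∞ criterion refuses to bridge it, so the segment boundary lands exactly on the discontinuity.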
Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects
In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple model fitting approach where each object can move independently from the background and still be effectively tracked and its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically considered moving regions as outliers, and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system can enable a robot to maintain a scene description at the object level, which has the potential to allow interactions with its working environment, even in the case of dynamic scenes.
Comment: International Conference on Robotics and Automation (ICRA) 2017, http://visual.cs.ucl.ac.uk/pubs/cofusion, https://github.com/martinruenz/co-fusio
Recovery of Outliers in Water Environment Monitoring Data
Water environment monitoring data are time sequences containing outliers that degrade data quality, so outlier detection and recovery play an important role in applications such as knowledge acquisition and prediction modelling of water environment indicators. To detect outliers, a short-term chain comparison with a sliding window, based on the characteristics of the time sequence, is adopted. To recover outliers with values closer to the real data at that time, the sub-sequences are divided dynamically according to the change characteristics of the dataset; the similarity between sub-sequences is then measured by shape distance, and the outliers are recovered according to the change trend of the corresponding data in the most similar sub-sequences. The monitoring data of a water station are selected for the study. The experimental results show that the recovery method is superior to the commonly used prediction-based and fitting-based recovery methods: the recovered data are smoother and the short-term trend is more obvious.
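A much-simplified stand-in for the detect-then-recover pipeline can be sketched with a sliding-window median rule (flag points deviating from the local median by more than k MADs) and interpolation-based recovery. The paper's actual method uses short-term chain comparison and shape-distance similarity between sub-sequences; everything below is an assumption for illustration.

```python
import numpy as np

def detect_and_recover(series, window=5, k=3.0):
    """Flag points deviating from the sliding-window median by more than
    k * MAD, then recover them by interpolating the surrounding inliers."""
    y = np.asarray(series, dtype=float)
    n, half = len(y), window // 2
    outlier = np.zeros(n, dtype=bool)
    for i in range(n):
        w = y[max(0, i - half): i + half + 1]   # local window (clipped at edges)
        med = np.median(w)
        mad = np.median(np.abs(w - med)) + 1e-9  # robust local spread
        outlier[i] = abs(y[i] - med) > k * mad
    good = ~outlier
    recovered = y.copy()
    # recovery: linear interpolation across the flagged positions
    recovered[outlier] = np.interp(np.flatnonzero(outlier),
                                   np.flatnonzero(good), y[good])
    return outlier, recovered

# A smooth indicator series with two injected spikes
t = np.linspace(0.0, 2.0 * np.pi, 100)
clean = np.sin(t)
noisy = clean.copy()
noisy[30] += 5.0
noisy[70] -= 4.0
flags, recovered = detect_and_recover(noisy)
```

Because the median and MAD are robust statistics, a single spike inside a window does not mask itself, which is the same motivation behind the paper's chain comparison.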
Direct measurement of dark matter halo ellipticity from two-dimensional lensing shear maps of 25 massive clusters
We present new measurements of dark matter distributions in 25 X-ray luminous clusters, making full use of the two-dimensional (2D) weak lensing signals obtained from high-quality Subaru/Suprime-Cam imaging data. Our approach of directly comparing the measured lensing shear pattern with elliptical model predictions allows us to extract new information on the mass distributions of individual clusters, such as the halo ellipticity and mass centroid. We find that these cluster shape parameters are only weakly degenerate with the cluster mass and concentration parameters. By combining the 2D fitting results for a subsample of 18 clusters, the elliptical shape of dark matter haloes is detected at the 7σ significance level. The mean ellipticity is found to be e = 0.46 ± 0.04 (1σ), in excellent agreement with the standard collisionless CDM model prediction. The mass centroid can be constrained with a typical accuracy of ~20" (~50 kpc/h) in radius for each cluster, with some significant outliers, enabling us to assess one of the most important systematic errors inherent in the stacked cluster weak lensing technique: the mass centroid uncertainty. In addition, the shape of the dark mass distribution is found to be only weakly correlated with that of the member galaxy distribution. We carefully examine possible sources of systematic errors in our measurements, finding none of them to be significant. Our results demonstrate the power of high-quality imaging data for exploring the detailed spatial distribution of dark matter (Abridged).
Comment: 17 pages, 10 figures, MNRAS in press
Robust Non-Rigid Registration with Reweighted Position and Transformation Sparsity
Non-rigid registration is challenging because it is ill-posed with high degrees of freedom and is thus sensitive to noise and outliers. We propose a robust non-rigid registration method using reweighted sparsities on position and transformation to estimate the deformations between 3-D shapes. We formulate the energy function with position and transformation sparsity on both the data term and the smoothness term, and define the smoothness constraint using local rigidity. The double sparsity based non-rigid registration model is enhanced with a reweighting scheme, and solved by transferring the model into four alternately-optimized subproblems which have exact solutions and guaranteed convergence. Experimental results on both public datasets and real scanned datasets show that our method outperforms the state-of-the-art methods and is more robust to noise and outliers than conventional non-rigid registration methods.
Comment: IEEE Transactions on Visualization and Computer Graphics
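The reweighting idea can be shown in miniature with iteratively reweighted least squares (IRLS), which approximates a sparsity-promoting L1 objective by repeatedly solving weighted least-squares problems. The sketch below applies it to a robust line fit rather than the paper's alternating registration solver; the function and data are illustrative.

```python
import numpy as np

def irls_line_fit(x, y, n_iter=30, eps=1e-6):
    """Approximate an L1 (least-absolute-deviations) line fit via IRLS:
    each pass reweights points by 1/(|residual| + eps) and re-solves."""
    A = np.column_stack([x, np.ones_like(x)])
    coef = np.linalg.lstsq(A, y, rcond=None)[0]      # plain least-squares start
    for _ in range(n_iter):
        # small residual -> large weight; gross outliers are down-weighted
        w = 1.0 / (np.abs(y - A @ coef) + eps)
        sw = np.sqrt(w)
        coef = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return coef

# Line y = 2x + 1 with small noise and five gross outliers
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, 50)
y[::10] += 20.0
slope, intercept = irls_line_fit(x, y)
```

Each reweighted pass shrinks the influence of large residuals, so the fit converges toward the clean points despite 10% gross corruption.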
Deformable face ensemble alignment with robust grouped-L1 anchors
Many methods currently exist for deformable face fitting. A drawback of nearly all these approaches is that (i) the landmark positions they produce are noisy, and (ii) the noise is biased across frames (i.e. the misalignment is toward common directions across all frames). In this paper we propose a grouped L1-norm anchored method for simultaneously aligning an ensemble of deformable face images stemming from the same subject, given noisy heterogeneous landmark estimates. Impressive alignment improvement and refinement are obtained using very weak initialization as "anchors"
PointCleanNet: Learning to Denoise and Remove Outliers from Dense Point Clouds
Point clouds obtained with 3D scanners or by image-based reconstruction techniques are often corrupted with significant amounts of noise and outliers. Traditional methods for point cloud denoising largely rely on local surface fitting (e.g., jets or MLS surfaces), local or non-local averaging, or statistical assumptions about the underlying noise model. In contrast, we develop a simple data-driven method for removing outliers and reducing noise in unordered point clouds. We base our approach on a deep learning architecture adapted from PCPNet, which was recently proposed for estimating local 3D shape properties in point clouds. Our method first classifies and discards outlier samples, and then estimates correction vectors that project noisy points onto the original clean surfaces. The approach is efficient and robust to varying amounts of noise and outliers, while being able to handle large densely-sampled point clouds. In our extensive evaluation, on both synthetic and real data, we show increased robustness to strong noise levels compared to various state-of-the-art methods, enabling accurate surface reconstruction from extremely noisy real data obtained by range scans. Finally, the simplicity and universality of our approach make it very easy to integrate into any existing geometry processing pipeline.
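A classical (non-learned) analogue of this classify-then-project pipeline can be sketched with statistical outlier removal by mean k-NN distance, followed by projecting each surviving point onto the best-fit PCA plane of its neighborhood. PointCleanNet learns both steps from data; the code below is only a rough stand-in with hypothetical parameters.

```python
import numpy as np

def clean_point_cloud(points, k=10, std_ratio=2.0):
    """(1) Drop statistical outliers by mean k-NN distance; (2) denoise the
    rest by projecting each point onto a local PCA plane of its neighbors."""
    P = np.asarray(points, dtype=float)
    # brute-force pairwise distances (fine for small clouds; use a KD-tree at scale)
    d = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    mean_d = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    Q = P[keep]
    d2 = np.linalg.norm(Q[:, None] - Q[None, :], axis=-1)
    idx = np.argsort(d2, axis=1)[:, :k + 1]          # neighborhood incl. the point
    out = np.empty_like(Q)
    for i, nb in enumerate(idx):
        nbrs = Q[nb]
        c = nbrs.mean(axis=0)
        # plane normal = direction of least variance in the neighborhood
        _, _, vt = np.linalg.svd(nbrs - c)
        n = vt[-1]
        out[i] = Q[i] - np.dot(Q[i] - c, n) * n      # project onto local plane
    return keep, out

# Noisy planar patch plus a distant clump of outliers
rng = np.random.default_rng(2)
plane = np.column_stack([rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.02, 200)])
cloud = np.vstack([plane, rng.uniform(3.0, 4.0, (10, 3))])
keep, denoised = clean_point_cloud(cloud)
```

The two stages mirror the paper's design: an outlier classifier first, then per-point correction vectors, here replaced by a geometric plane projection.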