Simulation of Ground-Truth Validation Data Via Physically- and Statistically-Based Warps
Abstract. The problem of scarcity of ground-truth expert delineations of medical image data is a serious one that impedes the training and validation of medical image analysis techniques. We develop an algorithm for the automatic generation of large databases of annotated images from a single reference dataset. We provide a web-based interface through which users can upload a reference dataset (an image and its corresponding segmentation and landmark points), provide custom settings of parameters, and, following server-side computations, generate and download an arbitrary number of novel ground-truth data, including segmentations, displacement vector fields, intensity non-uniformity maps, and point correspondences. To produce realistic simulated data, we use variational (statistically-based) and vibrational (physically-based) spatial deformations, nonlinear radiometric warps mimicking imaging non-homogeneity, and additive random noise with different underlying distributions. We outline the algorithmic details, present sample results, and provide the web address to readers for immediate evaluation and usage.
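The abstract describes three ingredients for simulating annotated data: a spatial deformation applied identically to image and segmentation, a multiplicative intensity non-uniformity (radiometric) warp, and additive noise. A minimal sketch of that pipeline is given below; the sinusoidal displacement field, the function name, and all parameter values are illustrative assumptions, not the paper's actual variational/vibrational formulations.

```python
import numpy as np

def simulate_ground_truth(image, seg, amp=1.5, freq=0.1,
                          bias=0.2, noise_sd=0.05, seed=0):
    """Toy simulator (assumed design, not the paper's algorithm):
    warp image + segmentation with a smooth displacement field,
    apply a low-frequency multiplicative bias field, add noise.
    Returns (noisy_image, warped_segmentation, displacement_field)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Smooth, stand-in displacement vector field (sinusoidal)
    dx = amp * np.sin(2 * np.pi * freq * ys)
    dy = amp * np.cos(2 * np.pi * freq * xs)
    # Nearest-neighbour resampling keeps segmentation labels crisp
    src_y = np.clip(np.rint(ys + dy), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + dx), 0, w - 1).astype(int)
    warped = image[src_y, src_x]
    warped_seg = seg[src_y, src_x]
    # Radiometric warp: multiplicative low-frequency non-uniformity
    bias_field = 1.0 + bias * np.sin(np.pi * xs / w) * np.sin(np.pi * ys / h)
    noisy = warped * bias_field + rng.normal(0.0, noise_sd, size=image.shape)
    dvf = np.stack([dy, dx])   # displacement vector field, shape (2, h, w)
    return noisy, warped_seg, dvf

# Toy reference data: a gradient image and its threshold segmentation
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
seg = (img > 0.5).astype(int)
new_img, new_seg, dvf = simulate_ground_truth(img, seg)
```

For production-quality warping one would interpolate with something like `scipy.ndimage.map_coordinates` rather than rounding to the nearest pixel, but the structure — one deformation shared by image and labels, so the simulated segmentation remains exact ground truth — is the point of the sketch.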
Statistical shape modelling: automatic shape model building
Statistical Shape Models (SSM) have wide applications in image segmentation, surface
registration and morphometry. This thesis deals with an important issue in SSM, which
is establishing correspondence between a set of shape surfaces in either 2D or 3D.
Current methods involve manual annotation of the data (the current "gold standard");
establishing correspondences by using segmentation or registration algorithms; or
using an information-theoretic criterion, Minimum Description Length (MDL), as an objective
function that measures the utility of a model (the state of the art). This thesis presents
an alternative framework for establishing correspondences completely automatically
by treating it as a learning process. Shannon theory is used extensively to develop an
objective function, which measures the performance of a model along each eigenvector
direction, and a proper weighting is automatically calculated for each energy component.
Correspondence finding can then be treated as optimizing the objective function. An
efficient optimization method is also incorporated by deriving the gradient of the cost
function. Experimental results on various datasets are presented in both 2D and 3D. In the
end, a quantitative evaluation between the proposed algorithm and MDL shows that the
proposed model has better Generalization Ability, Specificity and similar Compactness.
It also shows good potential for solving the so-called "Pile Up" problem that
exists in MDL. In terms of application, I used the proposed algorithm to help build a
facial contour classifier. First, correspondence points across facial contours are found
automatically, and classifiers are trained using the correspondence points found by
MDL, by the proposed method, and by direct human annotation. These classification
schemes are then used to perform gender prediction on facial contours. The experiments
conclude that the classification scheme built from correspondence points found by MEM
(the proposed method) yields a relatively more accurate gender prediction.
Although we have explored the potential of the proposed method to some extent, this is
not the end of the research on this topic. Future work is also clearly stated and
includes further validation on various 3D datasets; discrimination analysis between
normal and abnormal subjects as a direct application of the proposed algorithm; and
extension to model building using appearance information.
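The thesis evaluates shape models by Compactness, Generalization Ability and Specificity. These are standard SSM measures with well-known definitions: compactness is the variance captured by the first m modes of a PCA shape model, and generalization is the leave-one-out reconstruction error. A minimal sketch of the first two (the function names and the flat `(n_samples, n_coords)` shape layout are my assumptions):

```python
import numpy as np

def compactness(shapes, m):
    """Fraction of total variance captured by the first m PCA modes
    of a statistical shape model. shapes: (n_samples, n_coords)."""
    X = shapes - shapes.mean(axis=0)
    # Singular values of the centred data give the mode variances
    s = np.linalg.svd(X, full_matrices=False, compute_uv=False)
    var = s ** 2
    return float(var[:m].sum() / var.sum())

def generalization(shapes, m):
    """Leave-one-out reconstruction error with m modes: lower means
    the model describes unseen shapes of the same class better."""
    errs = []
    for i in range(len(shapes)):
        train = np.delete(shapes, i, axis=0)
        mu = train.mean(axis=0)
        _, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
        P = Vt[:m]                        # top-m eigenvector directions
        b = P @ (shapes[i] - mu)          # project the left-out shape
        recon = mu + P.T @ b
        errs.append(np.linalg.norm(shapes[i] - recon))
    return float(np.mean(errs))

# Toy data standing in for aligned landmark vectors
shapes = np.random.default_rng(0).normal(size=(10, 6))
c = compactness(shapes, 3)
g = generalization(shapes, 3)
```

Specificity (not sketched) is measured in the opposite direction: shapes are sampled from the model and compared against the training set. Better generalization and specificity with similar compactness is exactly the comparison the thesis reports against MDL.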
A ground truth correspondence measure for benchmarking
Automatic localisation of correspondences for the construction of Statistical Shape Models from examples has been the focus of intense research during the last decade. Several algorithms are available, and benchmarking is needed to rank them. Prior work has focused on evaluating the quality of the models produced by the algorithms by measuring compactness, generality and specificity. In this paper, problems with these standard measures are discussed. We propose that a ground truth correspondence measure (gcm) be used for benchmarking, and benchmarking is performed on several state-of-the-art algorithms. Minimum Description Length (MDL) with a curvature cost comes out as the winner among the automatic methods. Hand-marked models turn out to be best, but a semi-automatic method is shown to lie between the best automatic method and the hand-built models in performance.
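The abstract gives gcm only at a high level: instead of judging the model (compactness, generality, specificity), it compares the correspondences an algorithm produces against hand-marked ground-truth landmarks. A minimal sketch of one plausible reading — mean Euclidean landmark error — follows; the function name, the `(n_shapes, n_landmarks, 2)` layout, and the averaging scheme are assumptions, not the paper's exact definition.

```python
import numpy as np

def gcm(pred_landmarks, true_landmarks):
    """Hypothetical simplification of a ground truth correspondence
    measure: mean Euclidean distance between the landmark positions
    induced by a correspondence algorithm and the hand-marked
    ground-truth positions, averaged over shapes and landmarks."""
    pred = np.asarray(pred_landmarks, float)   # (n_shapes, n_landmarks, 2)
    true = np.asarray(true_landmarks, float)
    return float(np.linalg.norm(pred - true, axis=-1).mean())

# One shape with two ground-truth landmarks
truth = np.array([[[0.0, 0.0], [1.0, 0.0]]])
# An algorithm that places every landmark off by the vector (3, 4)
shifted = truth + np.array([3.0, 4.0])
```

Under this reading, a perfect algorithm scores 0 and the shifted one scores 5 (the length of the (3, 4) offset), which makes rankings between automatic, semi-automatic and hand-marked models directly comparable.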