Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images
This paper investigates, using prior shape models and the concept of ball
scale (b-scale), ways of automatically recognizing objects in 3D images without
performing elaborate searches or optimization. That is, the goal is to place
the model in a single shot close to the right pose (position, orientation, and
scale) in a given image so that the model boundaries fall in the close vicinity
of object boundaries in the image. This is achieved via the following set of
key ideas: (a) A semi-automatic way of constructing a multi-object shape model
assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship
between objects in the training images and their intensity patterns captured in
b-scale images. (c) A hierarchical mechanism of positioning the model, in a
one-shot way, in a given image from a knowledge of the learnt pose relationship
and the b-scale image of the given image to be segmented. The evaluation
results on a set of 20 routine clinical abdominal female and male CT data sets
indicate the following: (1) Incorporating a large number of objects improves
the recognition accuracy dramatically. (2) The recognition algorithm can be
thought of as a hierarchical framework in which quick placement of the model
assembly constitutes coarse recognition and delineation itself constitutes the
finest recognition. (3) Scale yields useful information about the relationship
between the model assembly and any given image, such that recognition results
in a placement of the model close to the actual pose without any elaborate
search or optimization. (4) Effective object recognition can make delineation
considerably more accurate.

Comment: This paper was published and presented in SPIE Medical Imaging 201
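The one-shot placement idea above, learning a pose relationship between the model assembly and a scale-derived image cue during training and then applying it without any search at recognition time, can be sketched roughly as follows. The intensity-weighted centroid used as the pose cue, and the simple mean-offset model, are illustrative assumptions, not the paper's actual b-scale encoding:

```python
import numpy as np

def feature_centroid(scale_img):
    """Intensity-weighted centroid of a (b-)scale image; a stand-in
    for the pose cue derived from b-scale values (assumption)."""
    coords = np.indices(scale_img.shape).reshape(scale_img.ndim, -1)
    w = scale_img.reshape(-1).astype(float)
    return (coords * w).sum(axis=1) / w.sum()

def learn_offset(train_scale_imgs, train_model_centroids):
    """Mean offset between model pose and scale-image centroid
    over the training set."""
    offsets = [np.asarray(c) - feature_centroid(s)
               for s, c in zip(train_scale_imgs, train_model_centroids)]
    return np.mean(offsets, axis=0)

def recognize(test_scale_img, mean_offset):
    """One-shot placement: no search, just centroid plus learned offset."""
    return feature_centroid(test_scale_img) + mean_offset
```

The point of the sketch is the division of labor: all pose knowledge is distilled offline, so recognition itself is a single arithmetic step.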
Content-based Propagation of User Markings for Interactive Segmentation of Patterned Images
Efficient and easy segmentation of images and volumes is of great practical
importance. Segmentation problems that motivate our approach originate from
microscopy imaging commonly used in materials science, medicine, and biology.
We formulate image segmentation as a probabilistic pixel classification
problem, and we apply segmentation as a step towards characterising image
content. Our method allows the user to define structures of interest by
interactively marking a subset of pixels. Thanks to the real-time feedback, the
user can place new markings strategically, depending on the current outcome.
The final pixel classification may be obtained from a very modest user input.
An important ingredient of our method is a graph that encodes image content.
This graph is built in an unsupervised manner during initialisation and is
based on clustering of image features. Since we combine a limited amount of
user-labelled data with the clustering information obtained from the unlabelled
parts of the image, our method fits in the general framework of semi-supervised
learning. We demonstrate how this can be a very efficient approach to
segmentation through pixel classification.

Comment: 9 pages, 7 figures, PDFLaTe
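The semi-supervised scheme described above, an unsupervised clustering of pixel features followed by propagation of sparse user markings, can be sketched as follows. The plain k-means clustering and majority-vote propagation are simplifying assumptions; the paper's graph-based encoding of image content is richer than this:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means over pixel feature vectors (the unsupervised
    initialisation step; deterministic init here, real code would use
    k-means++)."""
    centers = X[:k].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return assign

def propagate_markings(X, k, marked_idx, marked_labels):
    """Give every pixel the majority label of the user-marked pixels that
    share its cluster; unmarked clusters stay at -1 (no label reaches them)."""
    assign = kmeans(X, k)
    out = np.full(len(X), -1)
    for j in range(k):
        votes = [lab for i, lab in zip(marked_idx, marked_labels)
                 if assign[i] == j]
        if votes:
            out[assign == j] = max(set(votes), key=votes.count)
    return out
```

Because the clustering is computed once at initialisation, re-running the propagation after each new user marking is cheap, which is what makes the real-time feedback loop feasible.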
Model-Based Visualization for Intervention Planning
Computer support for intervention planning is often a two-stage process: in a first stage, the relevant segmentation target structures are identified and delineated; in a second stage, the image analysis results are employed for the actual planning. In the first stage, model-based segmentation techniques are often used to reduce the interaction effort and increase reproducibility. A similar argument holds for employing model-based techniques for the visualization as well. With ever more visualization options, users have many parameters to adjust in order to generate expressive visualizations. Surface models may be smoothed with a variety of techniques and parameters, and surface visualization and illustrative rendering techniques are controlled by a large set of additional parameters. Although interactive 3D visualizations should be flexible and support individual planning tasks, appropriate selection of visualization techniques, and of presets for their parameters, is needed. In this chapter, we discuss this kind of visualization support. We use the term model-based visualization to denote the selection and parameterization of visualization techniques based on a priori knowledge concerning visual perception, the shapes of anatomical objects, and intervention planning tasks.
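In the spirit of the chapter's argument, a preset mechanism might map the role an anatomical structure plays in a planning task to rendering parameters, while still letting the user override individual values. All names and values below are purely illustrative assumptions, not the chapter's actual presets:

```python
# Hypothetical preset table: the role a structure plays in a planning task
# determines its default rendering parameters (all values illustrative).
PRESETS = {
    "target":  {"smoothing": "low",  "opacity": 1.00, "silhouette": True},
    "risk":    {"smoothing": "med",  "opacity": 0.40, "silhouette": True},
    "context": {"smoothing": "high", "opacity": 0.15, "silhouette": False},
}

def visualization_params(role, overrides=None):
    """Select a preset from a priori task knowledge, while keeping every
    parameter individually adjustable by the user."""
    params = dict(PRESETS.get(role, PRESETS["context"]))  # unknown -> context
    params.update(overrides or {})
    return params
```

The design choice this illustrates is exactly the one the chapter motivates: sensible defaults derived from a priori knowledge reduce the parameter-tuning burden without removing flexibility.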
Ultrasound imaging system combined with multi-modality image analysis algorithms to monitor changes in anatomical structures
This dissertation concerns the development and validation of an ultrasound imaging system and novel image analysis algorithms applicable to multiple imaging modalities. The ultrasound imaging system will include a framework for 3D volume reconstruction of freehand ultrasound; a mechanism to register the 3D volumes across time and subjects, as well as with other imaging modalities; and a playback mechanism to view image slices concurrently from different acquisitions that correspond to the same anatomical region. The novel image analysis algorithms include a noise-reduction method that clusters pixels into homogeneous patches using a directed graph of edges between neighboring pixels; a segmentation method that creates a hierarchical graph structure, using statistical analysis and a voting system to determine the similarity between homogeneous patches given their neighborhood; and, finally, a hybrid atlas-based registration method that makes use of intensity corrections induced at anatomical landmarks to regulate deformable registration. The combination of the ultrasound imaging system and the image analysis algorithms will provide the ability to monitor nerve regeneration in patients undergoing regenerative, repair, or transplant strategies in a sequential, non-invasive manner, including visualization of registered real-time and pre-acquired data, thus enabling preventive and therapeutic strategies for nerve regeneration in Composite Tissue Allotransplantation (CTA). The registration algorithm is also applied to MR images of the brain to obtain reliable and efficient segmentation of the hippocampus, a prominent structure in the study of diseases of the elderly such as vascular dementia, Alzheimer’s disease, and late-life depression. Experimental results on 2D and 3D images, including simulated and real images, with illustrations visualizing the intermediate outcomes and the final results, are presented.
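The noise-reduction idea described above, clustering pixels into homogeneous patches via a directed graph of edges between neighboring pixels, can be sketched as follows. Linking each pixel to its single most similar 4-neighbour and averaging over the resulting components is a simplifying assumption, not the dissertation's exact construction:

```python
import numpy as np

def denoise_patches(img):
    """Link each pixel to its most similar 4-neighbour, take the connected
    components of those links as homogeneous patches, and replace every
    pixel by its patch mean (simplified sketch of the patch idea)."""
    h, w = img.shape
    parent = list(range(h * w))

    def find(a):                       # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for y in range(h):
        for x in range(w):
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            ny, nx = min(nbrs, key=lambda p: abs(img[p] - img[y, x]))
            ra, rb = find(y * w + x), find(ny * w + nx)
            parent[ra] = rb            # merge pixel with best-matching neighbour

    labels = np.array([find(i) for i in range(h * w)]).reshape(h, w)
    out = img.astype(float).copy()
    for lab in np.unique(labels):
        out[labels == lab] = img[labels == lab].mean()
    return out
```

Averaging within patches rather than over fixed windows is what lets this kind of smoothing suppress noise without blurring across patch boundaries.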
Dynamical models and machine learning for supervised segmentation
This thesis is concerned with the problem of how to outline regions of interest in medical images, when
the boundaries are weak or ambiguous and the region shapes are irregular. The focus on machine learning
and interactivity leads to a common theme of the need to balance conflicting requirements. First,
any machine learning method must strike a balance between how much it can learn and how well it
generalises. Second, interactive methods must balance minimal user demand with maximal user control.
To address the problem of weak boundaries, methods of supervised texture classification are investigated
that do not use explicit texture features. These methods enable prior knowledge about the image to
benefit any segmentation framework. A chosen dynamic contour model, based on probabilistic boundary
tracking, combines these image priors with efficient modes of interaction. We show the benefits of the
texture classifiers over intensity and gradient-based image models, in both classification and boundary
extraction.
To address the problem of irregular region shape, we devise a new type of statistical shape model
(SSM) that does not use explicit boundary features or assume high-level similarity between region
shapes. First, the models are used for shape discrimination, to constrain any segmentation framework
by way of regularisation. Second, the SSMs are used for shape generation, allowing probabilistic segmentation
frameworks to draw shapes from a prior distribution. The generative models also include
novel methods to constrain shape generation according to information from both the image and user
interactions.
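For contrast with the thesis's approach, the conventional statistical shape model it departs from, a PCA-based point-distribution model, can be sketched as follows. Note that this baseline relies on explicit boundary point features, which is precisely what the thesis's SSMs avoid:

```python
import numpy as np

def fit_pdm(shapes):
    """Conventional point-distribution model: mean shape plus PCA modes.
    Each shape is a flattened vector of boundary point coordinates."""
    X = np.asarray(shapes, float)          # (n_shapes, 2 * n_points)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s**2 / (len(X) - 1)              # variance captured by each mode
    return mean, Vt, var

def sample_shape(mean, modes, var, b=None, rng=None):
    """Draw a shape from the prior: mean + sum_i b_i * mode_i, with
    b_i ~ N(0, var_i) when coefficients are not given explicitly."""
    rng = rng or np.random.default_rng(0)
    if b is None:
        b = rng.normal(0.0, np.sqrt(var))
    return mean + b @ modes
```

Generating shapes by sampling mode coefficients from the learned distribution is the same "draw shapes from a prior" role the thesis's generative SSMs play inside probabilistic segmentation frameworks.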
The shape models are first evaluated in terms of discrimination capability, and shown to outperform
other shape descriptors. Experiments also show that the shape models can benefit a standard type of
segmentation algorithm by providing shape regularisers. We finally show how to exploit the shape
models in supervised segmentation frameworks, and evaluate their benefits in user trials.