Deformable organisms for automatic medical image analysis
We introduce a new approach to medical image analysis that combines deformable model methodologies with concepts from the field of artificial life. In particular, we propose ‘deformable organisms’, autonomous agents whose task is the automatic segmentation, labeling, and quantitative analysis of anatomical structures in medical images. Analogous to natural organisms capable of voluntary movement, our artificial organisms possess deformable bodies with distributed sensors, as well as (rudimentary) brains with motor, perception, behavior, and cognition centers. Deformable organisms are perceptually aware of the image analysis process. Their behaviors, which manifest themselves in voluntary movement and alteration of body shape, are based upon sensed image features, pre-stored anatomical knowledge, and a deliberate cognitive plan. We demonstrate several prototype deformable organisms based on a multiscale axisymmetric body morphology, including a ‘corpus callosum worm’ that can overcome noise, incomplete edges, considerable anatomical variation, and interference from collateral structures to segment and label the corpus callosum in 2D mid-sagittal MR brain images.
Automatic Affine and Elastic Registration Strategies for Multi-dimensional Medical Images
Medical images have been used increasingly for diagnosis, treatment planning, monitoring disease processes, and other medical applications. A large variety of medical imaging modalities exists, including CT, X-ray, MRI, Ultrasound, etc. Frequently a group of images needs to be compared to one another and/or combined for research or cumulative purposes. In many medical studies, multiple images are acquired from subjects at different times or with different imaging modalities. Misalignment inevitably occurs, causing anatomical and/or functional feature shifts within the images. Computerized image registration (alignment) approaches can offer automatic and accurate image alignments without extensive user involvement and provide tools for visualizing combined images. This dissertation focuses on providing automatic image registration strategies. After a thorough review of existing image registration techniques, we identified two registration strategies that enhance the current field: (1) an automated rigid body and affine registration using voxel similarity measurements based on a sequential hybrid genetic algorithm, and (2) an automated deformable registration approach based upon a linear elastic finite element formulation. Both methods streamlined the registration process. They are completely automatic and require no user intervention. The proposed registration strategies were evaluated with numerous 2D and 3D MR images with a variety of tissue structures, orientations and dimensions. Multiple registration pathways were provided with guidelines for their applications. The sequential genetic algorithm mimics the pathway of an expert manually doing registration. Experiments demonstrated that the sequential genetic algorithm registration provides high alignment accuracy and is reliable for brain tissues.
It avoids the local minima/maxima traps of conventional optimization techniques, and does not require any preprocessing such as thresholding, smoothing, segmentation, or definition of base points or edges. The elastic model was shown to be highly effective at accurately aligning areas of interest that are automatically extracted from the images, such as brains. Using a finite element method to obtain the displacement of each element node by applying a boundary mapping, this method provides accurate image registration with excellent boundary alignment of each pair of slices and consequently aligns the entire volume automatically. This dissertation presents numerous volume alignments. Surface geometries were created directly from the aligned segmented images using the Multiple Material Marching Cubes algorithm. Using the proposed registration strategies, multiple subjects were aligned to a standard MRI reference, which is aligned to a segmented reference atlas. Consequently, multiple subjects are aligned to the segmented atlas and a full fMRI analysis is possible.
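The dissertation's sequential hybrid genetic algorithm is not reproduced here, but the core idea of evolving transform parameters against a voxel-similarity fitness can be sketched. Below is a minimal, mutation-only toy GA that recovers an integer translation between two synthetic images; the Gaussian test image, population size, and operators are illustrative assumptions, not the actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fixed image: a smooth Gaussian blob, so the similarity
# landscape has a usable slope toward the correct alignment.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 8.0 ** 2))
true_shift = (5, -3)
moving = np.roll(fixed, true_shift, axis=(0, 1))

def fitness(shift):
    # Negative sum-of-squared-differences after undoing the candidate
    # shift (higher is better; 0 means a perfect match).
    aligned = np.roll(moving, (-shift[0], -shift[1]), axis=(0, 1))
    return -np.sum((aligned - fixed) ** 2)

# Toy, mutation-only genetic algorithm over integer (dy, dx) shifts.
pop = rng.integers(-10, 11, size=(20, 2))
for _ in range(40):
    scores = np.array([fitness(s) for s in pop])
    elite = pop[np.argsort(scores)[-10:]]                 # selection
    kids = elite[rng.integers(0, 10, size=10)] + rng.integers(-1, 2, size=(10, 2))
    pop = np.vstack([elite, kids])                        # elitism keeps the best

best = pop[np.argmax([fitness(s) for s in pop])]
print(best)  # at or very near the true shift (5, -3)
```

A real hybrid scheme would add crossover and a local-refinement step, and would optimize full rigid/affine parameters rather than pure translation.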
High performance computing for 3D image segmentation
Digital image processing is a very popular and still very promising field of science, which has been successfully applied to numerous areas and problems, reaching fields like forensic analysis, security systems, multimedia processing, aerospace, automotive, and many more.
A very important part of the image processing area is image segmentation. This refers to the task of partitioning a given image into multiple regions and is typically used to locate and mark objects and boundaries in input scenes. After segmentation the image represents a set of data far more suitable for further algorithmic processing and decision making. Image segmentation algorithms are a very broad field and they have received a significant amount of research interest. A good example of an area in which image processing plays a constantly growing role is the field of medical solutions. The expectations and demands presented in this branch of science are very high and difficult to meet for the applied technology. The problems are challenging and the potential benefits are significant and clearly visible. For over thirty years image processing has been applied to different problems and questions in medicine, and practitioners have exploited the rich possibilities that it offered. As a result, the field of medicine has seen significant improvements in the interpretation of examined medical data. Clearly, medical knowledge has also evolved significantly over these years, as has the medical equipment that serves doctors and researchers. The common computer hardware present in homes, offices and laboratories is likewise constantly evolving and changing. All of these factors have sculpted the shape of modern image processing techniques and established the ways in which they are currently used and developed. Modern medical image processing is centered around 3D images with high spatial and temporal resolution, which can bring a tremendous amount of data to medical practitioners. Processing of such large sets of data is not an easy task, requiring high computational power.
Furthermore, computational power no longer grows as readily as in past years: the performance of a single processing unit is improving only slowly, and a clear trend towards multi-unit processing and parallelization of the workload is visible. Therefore, in order to continue the development of more complex and more advanced image processing techniques, a new direction is necessary.
A very interesting family of image segmentation algorithms, which has been gaining a lot of attention over the last three decades, is called Deformable Models. They are based on the concept of placing a geometrical object in the scene of interest and deforming it until it assumes the shape of the objects of interest. This process is usually guided by several forces, which originate in mathematical functions, features of the input images and other constraints of the deformation process, like object curvature or continuity. Highly desirable features of Deformable Models include their great capacity for customization and specialization for different tasks, as well as their extensibility with various approaches for incorporating prior knowledge. This set of characteristics makes Deformable Models a very efficient approach, capable of delivering results in competitive times and with very good segmentation quality, robust to noisy and incomplete data.
However, despite the large amount of work carried out in this area, Deformable Models still suffer from a number of drawbacks. Those that have received the most attention include sensitivity to the initial position and shape of the model, sensitivity to noise and flawed input data, and the need for user supervision over the process.
The work described in this thesis aims at addressing the problems of modern image segmentation, which have arisen from the combination of the above-mentioned factors: the significant growth of image volume sizes and of the complexity of image processing algorithms, coupled with the change in processor development and the turn towards multi-processing units instead of growing bus speeds and the number of operations per second of a single processing unit. We present our innovative model for 3D image segmentation, called the Whole Mesh Deformation model, which holds a set of highly desirable features that successfully address the above-mentioned requirements. Our model has been designed specifically for execution on parallel architectures and with the purpose of working well with the very large 3D images that are created by modern medical acquisition devices.
Our solution is based on Deformable Models and is characterized by a very effective and precise segmentation capability. The proposed Whole Mesh Deformation (WMD) model uses a 3D mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times. The model offers a very good ability to handle topology changes and allows effective parallelization of the workflow, which makes it a very good choice for large data-sets. In this thesis we present a precise model description, followed by experiments on artificial images and real medical data.
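As background to the force-driven deformation process this family of models relies on, the sketch below evolves a 2D contour toward an object boundary: an external force descends a distance-to-edge field while an internal force keeps the contour smooth. The synthetic disk image, force definitions, and parameters are illustrative assumptions, not the thesis implementation (the WMD model itself operates on a full 3D mesh).

```python
import numpy as np
from scipy import ndimage

# Synthetic image: a bright disk of radius 20 in a 100x100 frame.
yy, xx = np.mgrid[0:100, 0:100]
disk = ((yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2)

# Edge map: the boundary pixels of the disk.
edges = ndimage.binary_dilation(disk) ^ ndimage.binary_erosion(disk)

# External energy: distance to the nearest edge pixel; descending its
# gradient pulls contour points toward the boundary.
dist = ndimage.distance_transform_edt(~edges)
gy, gx = np.gradient(dist)

# Initialize the contour as a circle of radius 40 around the disk.
t = np.linspace(0, 2 * np.pi, 80, endpoint=False)
pts = np.stack([50 + 40 * np.sin(t), 50 + 40 * np.cos(t)], axis=1)  # (y, x)

alpha, step = 0.3, 1.0
for _ in range(200):
    iy = np.clip(pts[:, 0].astype(int), 0, 99)
    ix = np.clip(pts[:, 1].astype(int), 0, 99)
    # External force: move downhill in the distance-to-edge field.
    ext = -np.stack([gy[iy, ix], gx[iy, ix]], axis=1)
    # Internal force: pull each point toward the midpoint of its
    # neighbours (a discrete continuity/curvature term).
    internal = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) - pts
    pts = pts + step * ext + alpha * internal

radii = np.hypot(pts[:, 0] - 50, pts[:, 1] - 50)
print(round(float(radii.mean()), 1))  # settles close to the true radius of 20
```

In practice the external force is usually derived from image gradients or region statistics rather than a precomputed edge mask, but the update loop has the same structure.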
Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.
Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method with respect to training set size, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set size of 5 scans in corrective learning. The method was also robust to differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap by only <0.01.
These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
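The Dice coefficient reported throughout this study is straightforward to compute from two binary masks; a minimal sketch (the toy masks are illustrative, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 6x6 square masks, offset by one voxel in each direction.
m1 = np.zeros((10, 10), bool); m1[2:8, 2:8] = True   # 36 voxels
m2 = np.zeros((10, 10), bool); m2[3:9, 3:9] = True   # 36 voxels, shifted
print(dice(m1, m2))  # 2*25 / (36+36) ≈ 0.694
```

The same formula extends unchanged to 3D label volumes such as the cerebellum and brainstem masks described above.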
Registration and Analysis of Developmental Image Sequences
Mapping images into the same anatomical coordinate system via image registration is a fundamental step when studying physiological processes, such as brain development. Standard registration methods are applicable when biological structures are mapped to the same anatomy and their appearance remains constant across the images or changes spatially uniformly. However, image sequences of animal or human development often do not follow these assumptions, and thus standard registration methods are unsuited for their analysis. In response, this dissertation tackles the problems of i) registering developmental image sequences with spatially non-uniform appearance change and ii) reconstructing a coherent 3D volume from serially sectioned images with non-matching anatomies between the sections. There are three major contributions presented in this dissertation. First, I develop a similarity metric that incorporates a time-dependent appearance model into the registration framework. The proposed metric allows for longitudinal image registration in the presence of spatially non-uniform appearance change over time—a common medical imaging problem for longitudinal magnetic resonance images of the neonatal brain. Next, a method is introduced for registering longitudinal developmental datasets with missing time points using an appearance atlas built from a population. The proposed method is applied to a longitudinal study of young macaque monkeys with incomplete image sequences. The final contribution is a template-free registration method to reconstruct images of serially sectioned biological samples into a coherent 3D volume. The method is applied to confocal fluorescence microscopy images of serially sectioned embryonic mouse brains.
Automatic reconstruction of 3D neuron structures using a graph-augmented deformable model
Motivation: Digital reconstruction of 3D neuron structures is an important step toward reverse engineering the wiring and functions of a brain. However, despite a number of existing studies, this task is still challenging, especially when a 3D microscopic image has a low signal-to-noise ratio and discontinuous segments of neurite patterns.
Image registration and visualization of in situ gene expression images.
In the age of high-throughput molecular biology techniques, scientists have incorporated the methodology of in situ hybridization to map spatial patterns of gene expression. In order to compare expression patterns within a common tissue structure, these images need to be registered, i.e. organized into a common coordinate system, for alignment to a reference or atlas image. We use three different image registration methodologies (manual; correlation based; mutual information based) to determine the common coordinate system for the reference and in situ hybridization images. All three methodologies are incorporated into a Matlab tool to visualize the results in a user-friendly way and save them for future work. Our results suggest that the user-defined landmark method is best when considering images from different modalities; automated landmark detection is best when the images are expected to have a high degree of consistency; and the mutual information methodology is useful when the images are from the same modality.
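Of the three registration criteria, the mutual-information measure can be sketched directly from the joint intensity histogram of the two images. The described tool is in Matlab; the Python sketch below, with an illustrative bin count and synthetic test images, only shows the measure itself, not the tool's implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information I(A;B) estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability p(a, b)
    px = pxy.sum(axis=1)               # marginal p(a)
    py = pxy.sum(axis=0)               # marginal p(b)
    nz = pxy > 0                       # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shuffled = rng.permutation(img.ravel()).reshape(img.shape)

print(mutual_information(img, img))       # high: intensities perfectly correspond
print(mutual_information(img, shuffled))  # near zero: no spatial correspondence
```

A registration loop would maximize this quantity over transform parameters; MI's advantage is that it rewards any consistent intensity mapping, which is why it works across modalities with different contrast.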
Visual Quality Enhancement in Optoacoustic Tomography using Active Contour Segmentation Priors
Segmentation of biomedical images is essential for studying and characterizing anatomical structures and for the detection and evaluation of pathological tissues. Segmentation has further been shown to enhance the reconstruction performance in many tomographic imaging modalities by accounting for heterogeneities of the excitation field and tissue properties in the imaged region. This is particularly relevant in optoacoustic tomography, where discontinuities in the optical and acoustic tissue properties, if not properly accounted for, may result in deterioration of the imaging performance. Efficient segmentation of optoacoustic images is often hampered by the relatively low intrinsic contrast of large anatomical structures, which is further impaired by the limited angular coverage of some commonly employed tomographic imaging configurations. Herein, we analyze the performance of active contour models for boundary segmentation in cross-sectional optoacoustic tomography. The segmented mask is employed to construct a two-compartment model for the acoustic and optical parameters of the imaged tissues, which is subsequently used to improve the accuracy of the image reconstruction routines. The performance of the suggested segmentation and modeling approach is showcased in tissue-mimicking phantoms and small animal imaging experiments.
Comment: Accepted for publication in IEEE Transactions on Medical Imaging
A Comprehensive Corpus Callosum Segmentation Tool for Detecting Callosal Abnormalities and Genetic Associations from Multi Contrast MRIs
Structural alterations of the midsagittal corpus callosum (midCC) have been associated with a wide range of brain disorders. The midCC is visible on most MRI contrasts and in many acquisitions with a limited field-of-view. Here, we present an automated tool for segmenting and assessing the shape of the midCC from T1w, T2w, and FLAIR images. We train a UNet on images from multiple public datasets to obtain midCC segmentations. A quality control algorithm, trained on the midCC shape features, is also built in. We calculate intraclass correlations (ICC) and average Dice scores in a test-retest dataset to assess segmentation reliability. We test our segmentation on poor-quality and partial brain scans. We highlight the biological significance of our extracted features using data from over 40,000 individuals from the UK Biobank; we classify clinically defined shape abnormalities and perform genetic analyses.
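For reference, the intraclass correlation used for test-retest reliability can be sketched as ICC(2,1), computed from a two-way ANOVA decomposition; the choice of this particular variant and the toy volumes below are illustrative assumptions, as the abstract does not specify the exact ICC form.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.

    data: array of shape (n_subjects, k_sessions), e.g. one shape or volume
    feature measured per subject in each of k repeated scans.
    """
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means
    ms_r = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subject MS
    ms_c = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-session MS
    resid = data - row_means[:, None] - col_means[None, :] + grand
    ms_e = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual MS
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Toy test-retest data: a structure's volume (ml) for 4 subjects, 2 sessions.
volumes = np.array([[130.1, 129.8],
                    [142.5, 143.0],
                    [118.7, 119.2],
                    [135.0, 134.6]])
print(round(icc_2_1(volumes), 3))  # close to 1: highly reproducible measurements
```

Values near 1 indicate that between-subject variation dominates session-to-session measurement noise, which is the property a segmentation tool needs before its features can support group analyses like the ones described above.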