Improvements in the registration of multimodal medical imaging: application to intensity inhomogeneity and partial volume corrections
Alignment or registration of medical images plays a relevant role in clinical diagnostic and treatment decisions as well as in research settings. With the advent of new technologies for multimodal imaging, robust registration of functional and anatomical information is still a challenge, particularly in small-animal imaging, given the lesser structural content of certain anatomical parts, such as the brain, than in humans. Besides, patient-dependent and acquisition artefacts affecting the images' information content further complicate registration, as is the case of intensity inhomogeneities (IIH) appearing in MRI and the partial volume effect (PVE) attached to PET imaging. Reference methods exist for accurate image registration, but their performance is severely deteriorated in situations involving little image overlap. While several approaches to IIH and PVE correction exist, these methods either do not guarantee robust registration or rely on it. This thesis focuses on overcoming current limitations of registration to enable novel IIH and PVE correction methods.
White Blood Cell Segmentation by Circle Detection Using Electromagnetism-Like Optimization
Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from the fields of medicine and computer vision alike. Since WBCs can be approximated by a quasi-circular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded in complicated and cluttered smear images that treats the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm, a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of this objective function, the set of encoded candidate circles is evolved using EMO so that they fit the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
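The objective function described in the abstract can be illustrated in a few lines: score a candidate circle by the fraction of its sampled perimeter that lands on edge pixels of the edge map. This is a minimal sketch of the idea; the function name, the perimeter-sampling scheme, and the `n_points` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def circle_fitness(edge_map, cx, cy, r, n_points=64):
    """Fraction of a candidate circle's sampled perimeter that lies
    on edge pixels of a binary edge map. Illustrative stand-in for
    the objective function guiding the EMO search; sampling scheme
    and parameters are assumptions, not the paper's formulation."""
    h, w = edge_map.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = np.round(cx + r * np.cos(angles)).astype(int)
    ys = np.round(cy + r * np.sin(angles)).astype(int)
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    hits = edge_map[ys[inside], xs[inside]].sum()
    return hits / n_points  # 1.0 = perfect fit to an edge circle
```

A candidate that coincides with a cell boundary in the edge map scores near 1.0, while a misplaced candidate scores near 0, giving EMO a smooth-enough landscape to evolve the encoded circles.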
Doctor of Philosophy dissertation
Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, volumetric datasets used by domain users have evolved from univariate to multivariate. Volume datasets are typically explored and classified via transfer function design and visualized using direct volume rendering. To improve classification results and to enable the exploration of multivariate volume datasets, multivariate transfer functions have emerged. In this dissertation, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) or two-dimensional (2D) transfer function spaces have been proposed; however, these methods work on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain to make complex multivariate volume data visualization more accessible for domain users. However, this method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides the user an easy-to-use, slice-based interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then automatically generated by optimizing a response function. Throughout the exploration process, the user does not need to interact with the parameter views at all.
Finally, real-world multivariate volume datasets are also usually of large size, often larger than the GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations.
Composition-guided image acquisition
To make a picture more appealing, professional photographers apply a wealth of photographic composition rules, of which amateur photographers are often unaware. This dissertation aims at providing in-camera feedback to the amateur photographer while taking pictures. The proposed algorithms do not depend on prior knowledge of the indoor/outdoor setting or scene, and are amenable to software implementation on fixed-point programmable digital signal processors available in digital still cameras.
The key enabling step in automating photographic composition rules is to locate the main subject. Digital still image acquisition maps the 3-D world onto a 2-D picture. By using the 2-D picture alone, segmenting the main subject without prior knowledge of the scene is ill-posed. Even with prior knowledge, segmentation is often computationally intensive and error prone.
This dissertation defends the idea that reliable main subject segmentation without prior knowledge of scene and setting may be achieved by acquiring a single picture, in which the optical system blurs objects not in the plane of
focus. After segmentation, photographic composition rules may be automated. In this context, segmentation only needs to approximately and not precisely locate the main subject.
In this dissertation, I combine optical and digital image processing to perform the segmentation of the main subject without prior knowledge of the scene. In particular, I propose to acquire a picture in which the main subject is in focus, and the shutter aperture is fully open. The lens optics will blur any object not in the plane of focus. For the acquired picture, I develop a computationally simple one-pass algorithm to segment the main subject.
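The idea above can be sketched with a block-wise Laplacian energy as the sharpness measure: regions in the focal plane keep high local contrast, while the optically blurred background does not. This is a simplified illustration, not the dissertation's actual one-pass algorithm; `block` and `thresh_ratio` are hypothetical parameters.

```python
import numpy as np

def segment_in_focus(gray, block=8, thresh_ratio=0.5):
    """Rough main-subject mask from a single picture taken with the
    aperture fully open. A simplified sketch (block-wise Laplacian
    energy), not the dissertation's exact one-pass algorithm;
    `block` and `thresh_ratio` are illustrative parameters."""
    # Discrete Laplacian magnitude as a per-pixel sharpness indicator
    lap = np.abs(4 * gray[1:-1, 1:-1]
                 - gray[:-2, 1:-1] - gray[2:, 1:-1]
                 - gray[1:-1, :-2] - gray[1:-1, 2:])
    nh, nw = lap.shape[0] // block, lap.shape[1] // block
    energy = np.zeros((nh, nw))
    for i in range(nh):
        for j in range(nw):
            energy[i, j] = lap[i * block:(i + 1) * block,
                               j * block:(j + 1) * block].mean()
    # Blocks with high mean sharpness approximate the main subject
    return energy > thresh_ratio * energy.max()
```

As the dissertation notes, the mask only needs to locate the main subject approximately, so a coarse block grid is sufficient for the composition rules that follow.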
The post segmentation objective is to automate selected photographic composition rules. The algorithms can either be applied on the picture taken with the objects not in the plane of focus blurred, or on a user-intended picture with the same focal length settings. This way, in-camera feedback can be provided to the amateur photographer, in the form of alternate compositions of the same scene.
I automate three photographic composition rules: (1) placement of the main subject obeying the rule-of-thirds, (2) background blurring to simulate the main subject being in motion or decrease the depth-of-field of the picture, and (3) merger detection and mitigation when equally focused main subject and background objects merge as one object.
The primary contributions of the dissertation are in digital still image processing. The first is the automation of segmentation of the main subject in a single still picture assisted by optical pre-processing. The second is the automation of main subject placement, artistic background blur, and merger detection and mitigation to try to improve photographic composition.
Electrical and Computer Engineering
Development of a tool for automatic segmentation of the cerebellum in MR images of children
The human cerebellar cortex is a highly foliated structure that supports both motor and complex cognitive functions in humans. Magnetic Resonance Imaging (MRI) is commonly used to explore structural alterations in patients with psychiatric and neurological diseases. The ability to detect regional structural differences in cerebellar lobules may provide valuable insights into disease biology, progression, and response to treatment, but has been hampered by the lack of appropriate tools for performing automated structural cerebellar segmentation and morphometry. In this thesis, time-intensive manual tracings by an expert neuroanatomist of 16 cerebellar regions on high-resolution T1-weighted MR images of 18 children aged 9-13 years were used to generate the Cape Town Pediatric Cerebellar Atlas (CAPCA18) in the age-appropriate National Institute of Health Pediatric Database (NIHPD) asymmetric template space. An automated pipeline was developed to process the MR images and generate lobule-wise segmentations, as well as a measure of the uncertainty of the label assignments. Validation in an independent group of children, with ages similar to those of the children used in the construction of the atlas, yielded spatial overlaps with manual segmentations greater than 70% in all lobules except lobules VIIb and X. Average spatial overlap of the whole cerebellar cortex was 86%, compared to 78% using the alternative Spatially Unbiased Infra-tentorial Template (SUIT), which was developed using adult images.
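The abstract does not name the exact overlap index used for validation; the Dice coefficient sketched below is the usual choice when comparing automated against manual segmentations, and is offered only as a plausible illustration of how such overlap percentages are computed.

```python
import numpy as np

def overlap(seg_a, seg_b):
    """Dice overlap between two binary label masks: twice the shared
    voxel count over the sum of both mask sizes. The abstract does not
    specify its overlap index; Dice is assumed here for illustration."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())
```

Identical masks score 1.0 (100%); the reported per-lobule figures above 70% would correspond to values above 0.7 on this scale.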
A Novel Adaptive Probabilistic Nonlinear Denoising Approach for Enhancing PET Data Sinogram
We propose filtering the PET sinograms with a constrained curvature motion diffusion. The edge-stopping function is computed in terms of edge probability under the assumption of contamination by Poisson noise. We show that the Chi-square distribution is the appropriate prior for finding the edge probability in the noise-free sinogram gradient. Since the sinogram noise is uncorrelated and follows a Poisson distribution, we then propose an adaptive probabilistic diffusivity function where the edge probability is computed at each pixel. The filter is applied to the 2D sinogram prior to reconstruction. The PET images are reconstructed using Ordered Subset Expectation Maximization (OSEM). We demonstrate through simulations with images contaminated by Poisson noise that the performance of the proposed method substantially surpasses that of recently published methods, both visually and in terms of statistical measures.
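As a rough illustration of edge-preserving sinogram smoothing, the sketch below runs a classic Perona-Malik-style nonlinear diffusion, with a gradient-dependent diffusivity standing in for the per-pixel edge probability. The Chi-square edge-probability prior of the proposed method is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def diffuse_sinogram(sino, n_iter=20, dt=0.15, k=None):
    """Edge-preserving smoothing of a 2-D sinogram by nonlinear
    diffusion. The classic Perona-Malik diffusivity is used as a
    stand-in for the per-pixel edge probability of the abstract's
    adaptive probabilistic diffusivity; parameters are illustrative."""
    u = sino.astype(float).copy()
    if k is None:
        k = 0.1 * (u.max() - u.min() + 1e-12)  # heuristic edge scale
    def g(d):
        # Low diffusivity across strong edges, high in flat regions
        return 1.0 / (1.0 + (d / k) ** 2)
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Noise in flat sinogram regions is averaged away, while strong edges see a near-zero diffusivity and survive, which is the behaviour the edge-stopping function is designed to achieve before OSEM reconstruction.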
Digital Image Analysis of Vitiligo for Monitoring of Vitiligo Treatment
Vitiligo is an acquired pigmentary skin disorder characterized by depigmented macules
that result from damage to and destruction of epidermal melanocytes. Visually, the
vitiligous areas are paler in contrast to normal skin or completely white due to the lack of
pigment melanin. The course of vitiligo is unpredictable where the vitiligous skin lesions
may remain stable for years before worsening.
Vitiligo treatments have two objectives: to arrest disease progression and to re-pigment
the vitiligous skin lesions. To monitor the efficacy of the treatment, dermatologists
observe the disease directly, or indirectly using digital photos. Currently there is no
objective method to determine the efficacy of vitiligo treatment. The Physician's Global
Assessment (PGA) scale is the current scoring system used by dermatologists to evaluate
the treatment. The scale is based on the degree of repigmentation within lesions over
time. This quantitative tool, however, may not help to detect slight changes due to
treatment, as it still depends largely on the human eye and judgment to
produce the scores. In addition, the PGA score is also subjective, as it varies
between dermatologists.
The progression of vitiligo treatment can be very slow and can take more than 6 months.
It is observed that dermatologists find it visually hard to determine the areas of skin
repigmentation due to this slow progress and as a result the observations are made after a
longer time frame. The objective of this research is to develop a tool that enables
dermatologists to determine and quantify areas of repigmentation objectively over a
shorter time frame during treatment. The approaches towards achieving this objective are
based on digital image processing techniques.
Skin color is due to the combination of skin histological parameters, namely pigment
melanin and haemoglobin. However, in digital imaging, color is produced by combining three different spectral bands, namely red, green, and blue (RGB). It is believed that the
spatial distribution of melanin and haemoglobin in a skin image can be separated.
It is found that skin color distribution lies on a two-dimensional melanin-haemoglobin
color subspace. In order to determine repigmentation (due to pigment melanin) it is
necessary to perform a conversion from RGB skin image to this two-dimensional color
subspace. Using principal component analysis (PCA) as a dimensional reduction tool,
the two-dimensional subspace can be represented by its first and second principal
components. Independent component analysis is employed to convert the two-dimensional
subspace into a skin image that represents skin areas due to melanin and
haemoglobin only.
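The PCA step described above can be sketched as follows: projecting the RGB pixels onto the first two principal components yields the two-dimensional melanin-haemoglobin colour subspace, after which ICA (e.g. FastICA) would be applied to separate the two channels. Only the PCA projection is shown; the function name and return layout are illustrative.

```python
import numpy as np

def rgb_to_skin_subspace(rgb):
    """Project an H x W x 3 RGB skin image onto its first two
    principal components, i.e. the 2-D melanin-haemoglobin colour
    subspace described above. Independent component analysis would
    then separate melanin from haemoglobin; only the PCA dimension
    reduction is sketched here (illustrative name and layout)."""
    h, w, _ = rgb.shape
    x = rgb.reshape(-1, 3).astype(float)
    x -= x.mean(axis=0)                   # centre the pixel cloud
    cov = np.cov(x, rowvar=False)         # 3 x 3 colour covariance
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues ascending
    basis = vecs[:, ::-1][:, :2]          # top two components
    return (x @ basis).reshape(h, w, 2)   # H x W x 2 subspace image
```

Because skin colour variation is dominated by the two chromophores, the first two components capture nearly all the variance, which is what justifies the dimensionality reduction before ICA.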
In the skin image that represents skin areas due to melanin, vitiligous skin lesions are
identified as skin areas that lack melanin. Segmentation is performed to separate the
healthy skin and the vitiligous lesions. The difference in the vitiligous surface areas
between skin images before and after treatment will be expressed as a percentage of
repigmentation in each vitiligo lesion. This percentage will represent the repigmentation
progression of a particular body region.
Results of preliminary and pre-clinical trial studies show that our vitiligo monitoring
system is able to determine repigmentation progression objectively, and thus
treatment efficacy, over a shorter time cycle. An intensive clinical trial is currently
being undertaken in Hospital Kuala Lumpur using our developed system.
Entropy in Image Analysis III
Image analysis can be applied to rich and assorted scenarios; the aim of this recent research field is therefore not only to mimic the human vision system. Image analysis is one of the main methods computers use today, and there is a body of knowledge that they will be able to manage in a totally unsupervised manner in the future, thanks to artificial intelligence. The articles published in this book clearly point to such a future.
Advances in Detection and Classification of Underwater Targets using Synthetic Aperture Sonar Imagery
In this PhD thesis, the problem of underwater mine detection and classification using
synthetic aperture sonar (SAS) imagery is considered. The automatic detection and
automatic classification (ADAC) system is applied to images obtained by SAS systems.
The ADAC system contains four steps, namely mine-like object (MLO) detection, image
segmentation, feature extraction, and mine type classification. This thesis focuses
on the last three steps.
In the mine-like object detection step, a template-matching technique based on a
priori knowledge of mine shapes is applied to scan the sonar imagery for the detection
of MLOs. Regions containing MLOs are called regions of interest (ROI). They are
extracted and forwarded to the subsequent steps, i.e. image segmentation and feature
extraction.
In the image segmentation step, a modified expectation-maximization (EM) approach
is proposed. For the sake of acquiring the shape information of the MLO in the ROI, the
SAS images are segmented into highlights, shadows, and backgrounds. A generalized
mixture model is adopted to approximate the statistics of the image data. In addition,
a Dempster-Shafer theory-based clustering technique is used to consider the spatial
correlation between pixels so that the clutters in background regions can be removed.
Optimal parameter settings for the proposed EM approach are found with the help of
quantitative numerical studies.
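The core EM iteration for a mixture model can be sketched for one-dimensional pixel intensities as below. Note this uses a plain Gaussian mixture for illustration; the thesis's generalized mixture model and its Dempster-Shafer spatial clustering step are not reproduced, and the quantile initialization is an assumption.

```python
import numpy as np

def em_gmm_segment(pixels, n_classes=3, n_iter=50):
    """Segment pixel intensities into classes (e.g. highlight, shadow,
    background) with a plain Gaussian-mixture EM. Illustrative sketch
    of the EM core only; the thesis uses a *generalized* mixture model
    plus a Dempster-Shafer spatial clustering step, not shown here."""
    x = np.asarray(pixels, dtype=float).ravel()
    # Initialise: means from quantiles, shared variance, equal weights
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))
    var = np.full(n_classes, x.var())
    w = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: responsibility of each class for every pixel
        d = x[:, None] - mu[None, :]
        p = w * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk + 1e-9
    return r.argmax(axis=1), mu
```

The hard labels from `argmax` play the role of the highlight/shadow/background map from which the MLO shape information is extracted.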
In the feature extraction step, features are extracted and will be used as the inputs
for the mine type classification step. Both the geometrical features and the texture
features are applied. However, numerous features have been proposed in the
literature to describe object shape and texture.
Due to the curse of dimensionality, it is indispensable to perform feature selection during
the design of an ADAC system. A sophisticated filter method is developed to choose
optimal features for the classification purpose. This filter method utilizes a novel
feature relevance measure that is a combination of the mutual information, the modified
Relief weight, and the Shannon entropy. The selected features demonstrate a higher
generalizability. Compared with other filter methods, the features selected by our
method can lead to superior classification accuracy, and their performance variation
over different classifiers is decreased.
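Of the three ingredients of the relevance measure, the mutual information term is the easiest to illustrate: the sketch below estimates I(F;Y) for one continuous feature against class labels by histogram binning. The modified Relief weight and the exact combination rule are not given in the abstract and are therefore not reproduced; the bin count and function name are illustrative.

```python
import numpy as np

def mutual_information(feature, labels, bins=10):
    """Histogram estimate of I(F;Y) between a continuous feature and
    discrete class labels, in bits. One building block of the filter
    method's relevance measure; the modified Relief weight and the
    combination rule are not shown (bin count is an assumption)."""
    edges = np.histogram_bin_edges(feature, bins)
    f = np.digitize(feature, edges[1:-1])       # bin index 0..bins-1
    classes = {c: i for i, c in enumerate(sorted(set(labels)))}
    joint = np.zeros((bins, len(classes)))
    for fi, yi in zip(f, labels):
        joint[fi, classes[yi]] += 1
    p = joint / joint.sum()                     # joint distribution
    pf = p.sum(axis=1, keepdims=True)           # feature marginal
    py = p.sum(axis=0, keepdims=True)           # class marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (pf * py))
    return np.nansum(terms)                     # zero cells contribute 0
```

A feature that cleanly separates the classes scores close to the label entropy, while a feature independent of the labels scores close to zero, which is the ordering a filter method ranks by.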
In the mine type classification step, the prediction of the types of MLO is considered. In
order to take advantage of the complementary information among different classifiers, a classifier combination scheme is developed in the framework of the Dempster-Shafer
theory. The outputs of individual classifiers are combined according to this classifier
combination scheme. The resulting classification accuracy is better than those of
individual classifiers.
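Fusing two classifier outputs in the Dempster-Shafer framework reduces, in its simplest form, to Dempster's rule of combination, sketched below with focal elements encoded as frozensets of hypotheses. This is the standard rule, not necessarily the thesis's full combination scheme; the hypothesis names in the usage example are invented for illustration.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment
    with Dempster's rule: multiply masses of intersecting focal
    elements and renormalize by the non-conflicting mass. Standard
    rule only; the thesis's full scheme is not reproduced here."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass on empty intersection
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

For example, one classifier assigning mass 0.6 to {mine} and 0.4 to {mine, not_mine}, combined with another assigning 0.5/0.3/0.2 to {mine}/{not_mine}/{mine, not_mine}, concentrates most of the fused mass on {mine}, which is how complementary classifier evidence sharpens the final decision.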
All of the proposed methods are evaluated using SAS data. Finally, conclusions are
drawn, and suggestions for future work are offered as well.