Methods for Analysing Endothelial Cell Shape and Behaviour in Relation to the Focal Nature of Atherosclerosis
The aim of this thesis is to develop automated methods for the analysis of the
spatial patterns and the functional behaviour of endothelial cells, viewed under
microscopy, with applications to the understanding of atherosclerosis.
Initially, a radial search approach to segmentation was attempted in order to
trace the cell and nuclei boundaries using a maximum likelihood algorithm; it
proved inadequate for detecting the weak cell boundaries present in the available
data. A parametric cell shape model was then introduced to fit an equivalent
ellipse to the cell boundary by matching phase-invariant orientation fields of the
image and a candidate cell shape. This approach succeeded on good quality
images, but failed on images with weak cell boundaries. Finally, a support
vector machine-based method, relying on a rich set of visual features and a
small but high-quality training dataset, was found to work well on large numbers
of cells even in the presence of strong intensity variations and imaging noise.
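The SVM stage described above can be sketched in miniature. The per-pixel features, labels, and training rule below are synthetic placeholders (the abstract does not specify them); a simple linear SVM trained by Pegasos-style subgradient descent on the hinge loss stands in for the full method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-pixel visual features (e.g. intensity,
# gradient magnitude, texture response); labels +1 = cell boundary,
# -1 = background. The labelling rule is purely illustrative.
X = rng.normal(size=(200, 3))
y = np.where(X[:, 1] + 0.5 * X[:, 0] > 0, 1.0, -1.0)

# Minimal linear SVM via stochastic subgradient descent on the
# regularised hinge loss (no bias term: the toy labels are centred
# on the origin).
w, lam = np.zeros(3), 0.01
for t in range(1, 2001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)
    if y[i] * (X[i] @ w) < 1:
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:
        w = (1 - eta * lam) * w

pred = np.sign(X @ w)
print(f"training accuracy: {np.mean(pred == y):.2f}")
```

In practice a kernel SVM with a much richer feature set would be used; the point here is only the shape of the training loop.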
Using the segmentation results, several standard shear-stress-dependent parameters
of cell morphology were studied, and evidence of similar behaviour
in some cell shape parameters was obtained for in vivo cells and their nuclei.
Nuclear and cell orientations around immature and mature aortas were broadly
similar, suggesting that the pattern of flow direction near the wall stayed approximately
constant with age. The relation was less strong for the cell and
nuclear length-to-width ratios.
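The orientation and length-to-width ratio mentioned above are standard moment-based shape parameters. A minimal sketch, assuming a binary cell or nucleus mask, fits the equivalent ellipse from the region's central second moments:

```python
import numpy as np

# Sketch: orientation (degrees) and length-to-width ratio of the
# ellipse equivalent to a binary mask, via central second moments.
def equivalent_ellipse(mask):
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mxx, myy, mxy = (x * x).mean(), (y * y).mean(), (x * y).mean()
    # Eigenvalues of the covariance give the squared semi-axes (x4).
    common = np.sqrt((mxx - myy) ** 2 + 4 * mxy ** 2)
    major = np.sqrt(2 * (mxx + myy + common))
    minor = np.sqrt(2 * (mxx + myy - common))
    angle = 0.5 * np.degrees(np.arctan2(2 * mxy, mxx - myy))
    return angle, major / minor

# Toy elongated mask: a horizontal bar, so the orientation should be
# near 0 degrees and the ratio well above 1.
mask = np.zeros((20, 40), dtype=bool)
mask[8:12, 5:35] = True
angle, ratio = equivalent_ellipse(mask)
print(round(angle, 1), round(ratio, 2))
```

For a 30x4 bar the moment-equivalent ellipse recovers an aspect ratio close to the true 7.5.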
Two novel shape analysis approaches were attempted to find other properties
of cell shape which could be used to annotate or characterise patterns, since a
wide variability in cell and nuclear shapes was observed which did not appear
to fit the standard parameterisations. Although no firm conclusions can yet be
drawn, the work lays the foundation for future studies of cell morphology.
To draw inferences about patterns in the functional response of cells to flow,
which may play a role in the progression of disease, single-cell analysis was performed
using calcium-sensitive fluorescence probes. Calcium transient rates were
found to change with flow, but more importantly, local patterns of synchronisation
in multi-cellular groups were discernible and appear to change with flow.
The patterns suggest a new functional mechanism in flow-mediation of cell-cell
calcium signalling.
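One simple way to quantify the synchronisation of calcium transients across a multi-cellular group (the abstract does not specify the measure used) is pairwise correlation of per-cell fluorescence traces, with highly correlated pairs counted as synchronised:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)

# Two hypothetical cells sharing a common calcium oscillation plus
# noise, and one independent cell (all traces are synthetic).
common = np.sin(2 * np.pi * 0.5 * t)
traces = np.stack([
    common + 0.2 * rng.normal(size=t.size),
    common + 0.2 * rng.normal(size=t.size),
    rng.normal(size=t.size),
])

corr = np.corrcoef(traces)     # pairwise correlation matrix
sync = corr > 0.8              # synchronised pairs (threshold is an assumption)
print(sync[0, 1], sync[0, 2])
```

The first pair comes out synchronised, the independent cell does not; restricting such comparisons to spatial neighbours would give a local synchronisation map.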
A Two-stage Classification Method for High-dimensional Data and Point Clouds
High-dimensional data classification is a fundamental task in machine
learning and imaging science. In this paper, we propose a two-stage multiphase
semi-supervised classification method for classifying high-dimensional data and
unstructured point clouds. To begin with, a fuzzy classification method such as
the standard support vector machine is used to generate a warm initialization.
We then apply a two-stage approach named SaT (smoothing and thresholding) to
improve the classification. In the first stage, an unconstrained convex
variational model is implemented to purify and smooth the initialization,
followed by the second stage which is to project the smoothed partition
obtained at stage one to a binary partition. These two stages can be repeated,
with the latest result as a new initialization, to keep improving the
classification quality. We show that the convex model of the smoothing stage
has a unique solution and can be solved by a specifically designed primal-dual
algorithm whose convergence is guaranteed. We test our method and compare it
with the state-of-the-art methods on several benchmark data sets. The
experimental results clearly demonstrate that our method is superior in both
classification accuracy and computation speed for high-dimensional data and
point clouds.

Comment: 21 pages, 4 figures
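The smoothing-and-thresholding loop can be caricatured on a 2-D label field. This is not the paper's convex model or primal-dual solver; simple neighbour averaging stands in for the variational smoothing stage, followed by thresholding to a binary partition, repeated with the latest result as the new initialization:

```python
import numpy as np

def smooth(u, iters=10):
    # Neighbour averaging as a stand-in for the variational smoothing
    # stage (periodic boundaries, for brevity).
    for _ in range(iters):
        u = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                    + np.roll(u, 1, 1) + np.roll(u, -1, 1))
    return u

def sat(u, rounds=2):
    # Smooth, then threshold back to {0, 1}; repeat to keep improving.
    for _ in range(rounds):
        u = (smooth(u) > 0.5).astype(float)
    return u

rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0
warm = np.clip(truth + 0.4 * rng.normal(size=truth.shape), 0, 1)  # noisy warm start
result = sat(warm)
print(f"pixel agreement with truth: {np.mean(result == truth):.2f}")
```

Even this crude smoother recovers most of the square from a noisy fuzzy initialization, which is the intuition behind the two-stage design.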
Advanced planning and intra-operative validation for robot-assisted keyhole neurosurgery in ROBOCAST
Imaging White Blood Cells using a Snapshot Hyper-Spectral Imaging System
Automated white blood cell (WBC) counting systems process an extracted whole blood sample and provide a cell count, a step that is not ideal for on-site screening of individuals in triage or at a security gate. Snapshot hyperspectral imaging systems are capable of capturing several spectral bands simultaneously, offering co-registered images of a target. With appropriate optics, these systems are potentially able to image blood cells in vivo as they flow through a vessel, eliminating the need for a blood draw and sample staining. Our group has evaluated the capability of a commercial snapshot hyperspectral imaging system, specifically the Arrow system from Rebellion Photonics, in differentiating between white and red blood cells on unstained and sealed blood smear slides. We evaluated the imaging capabilities of this hyperspectral camera as a platform on which to build an automated blood cell counting system. Hyperspectral data consisting of 25 bands of 443x313 pixels each, with ~3 nm spacing, were captured over the range of 419 to 494 nm. Open-source hyperspectral datacube analysis tools, used primarily in Geographic Information Systems (GIS) applications, indicate that white blood cells' features are most prominent in the 428-442 nm band for blood samples viewed under 20x and 50x magnification over a varying range of illumination intensities. The system has been shown to successfully segment blood cells based on their spectral-spatial information. These images could potentially be used in subsequent automated white blood cell segmentation and counting algorithms for performing in vivo white blood cell counting.
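The band-selection idea above can be sketched on a synthetic datacube. The data layout, shapes, and threshold below are assumptions, not the Arrow system's actual format: average the 428-442 nm bands highlighted in the abstract and threshold the result to mask candidate white blood cells:

```python
import numpy as np

rng = np.random.default_rng(0)
# 25 bands over 419-494 nm (~3 nm spacing), synthetic background.
wavelengths = np.linspace(419, 494, 25)
cube = rng.uniform(0.4, 0.6, size=(25, 64, 64))

# Plant a bright synthetic "cell" in the informative bands only.
band_sel = (wavelengths >= 428) & (wavelengths <= 442)
cube[band_sel, 20:30, 20:30] += 0.4

# Average the selected bands, then threshold (2-sigma rule is an
# assumption) to obtain a candidate WBC mask.
feature = cube[band_sel].mean(axis=0)
mask = feature > feature.mean() + 2 * feature.std()
print(mask[25, 25], mask[5, 5])
```

A real pipeline would replace the threshold with a proper spectral-spatial classifier, but the band averaging step is the same.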
Quantitative magnetic resonance image analysis via the EM algorithm with stochastic variation
Quantitative Magnetic Resonance Imaging (qMRI) provides researchers with insight
into pathological and physiological alterations of living tissue, with the help
of which they hope to predict (local) therapeutic efficacy early and
determine optimal treatment schedules. However, the analysis of qMRI has been
limited to ad hoc heuristic methods. Our research provides a powerful
statistical framework for image analysis and sheds light on future localized
adaptive treatment regimes tailored to the individual's response. We assume that,
in an imperfect world, we observe only a blurred and noisy version of the
underlying pathological/physiological changes via qMRI, due to measurement
errors or unpredictable influences. We use a hidden Markov random field to
model the spatial dependence in the data and develop a maximum likelihood
approach via the Expectation--Maximization algorithm with stochastic variation.
An important improvement over previous work is the assessment of variability in
parameter estimation, which is the valid basis for statistical inference. More
importantly, we focus on the expected changes rather than image segmentation.
Our research has shown that the approach is powerful in both simulation studies
and on a real dataset, while remaining quite robust in the presence of some model
assumption violations.

Comment: Published at http://dx.doi.org/10.1214/07-AOAS157 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
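The E- and M-steps at the core of such an approach can be shown on a plain two-component Gaussian mixture over pixel intensities. This is a simplified stand-in: the paper's method additionally couples the labels through a hidden Markov random field and adds a stochastic variation step, both omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "pixel intensities" from two tissue classes.
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 300)])

pi = 0.5
mu = np.array([x.min(), x.max()])
sigma = np.array([1.0, 1.0])
for _ in range(50):
    # E-step: posterior responsibility of component 1 for each pixel.
    p0 = (1 - pi) * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
    p1 = pi * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
    r = p1 / (p0 + p1)
    # M-step: update mixing weight, means, and standard deviations.
    pi = r.mean()
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])
    sigma = np.sqrt(np.array([
        np.sum((1 - r) * (x - mu[0]) ** 2) / np.sum(1 - r),
        np.sum(r * (x - mu[1]) ** 2) / np.sum(r),
    ]))
print(np.round(np.sort(mu), 1))
```

Adding an MRF prior would replace the independent E-step with one that also weighs each pixel's neighbours, which is what induces spatial smoothness in the labels.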
Accurate Image Analysis of the Retina Using Hessian Matrix and Binarisation of Thresholded Entropy with Application of Texture Mapping
In this paper, we demonstrate a comprehensive method for segmenting the retinal vasculature in camera images of the fundus. This is of interest in the area of diagnostics for eye diseases that affect the blood vessels in the eye. In a departure from other state-of-the-art methods, vessels are first pre-grouped together with graph partitioning, using a spectral clustering technique based on morphological features. Local curvature is estimated over the whole image using the eigenvalues of the Hessian matrix in order to enhance the vessels, which appear as ridges in images of the retina. The result is combined with a binarized image, obtained using a threshold that maximizes entropy, to extract the retinal vessels from the background. Speckle-type noise is reduced by applying a connectivity constraint on the extracted curvature-based enhanced image. This constraint is varied over the image according to each region's predominant blood vessel size. The resultant image exhibits the central light reflex of retinal arteries and veins, which prevents the segmentation of whole vessels. To address this, the earlier entropy-based binarization technique is repeated on the original image, but crucially, with a different threshold to incorporate the central reflex vessels. The final segmentation is achieved by combining the segmented vessels with and without the central light reflex. We carry out our approach on DRIVE and REVIEW, two publicly available collections of retinal images for research purposes. The obtained results are compared with state-of-the-art methods in the literature using metrics such as sensitivity (true positive rate), selectivity (false positive rate) and accuracy rates for the DRIVE images, and measured vessel widths for the REVIEW images. Our approach outperforms the methods in the literature.

Xiaoxia Yin, Brian W-H Ng, Jing He, Yanchun Zhang, Derek Abbot
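The entropy-maximising binarisation step in this pipeline can be sketched concretely (a Kapur-style histogram threshold; the Hessian-eigenvalue enhancement and spectral clustering stages are omitted, and the test image is synthetic):

```python
import numpy as np

def entropy_threshold(img, bins=256):
    # Kapur-style threshold: pick the grey level that maximises the sum
    # of the entropies of the below- and above-threshold distributions.
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        q0, q1 = p[:t] / w0, p[t:] / w1
        h = (-np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
             - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0])))
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

rng = np.random.default_rng(0)
# Synthetic fundus-like image: dark background, brighter "vessel" band.
img = np.clip(rng.normal(0.2, 0.05, (64, 64)), 0, 1)
img[30:34, :] = np.clip(rng.normal(0.8, 0.05, (4, 64)), 0, 1)
thresh = entropy_threshold(img)
vessels = img > thresh
print(f"threshold: {thresh:.2f}, vessel fraction: {vessels.mean():.3f}")
```

Running the same routine with a second, lower threshold on the original image is one way to recover vessels split by the central light reflex, as the abstract describes.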