Colour, texture, and motion in level set based segmentation and tracking
This paper introduces an approach for the extraction and combination of different cues in a level set based image segmentation framework. Apart from the image grey value or colour, we suggest adding its spatial and temporal variations, which may provide important further characteristics. It often turns out that the combination of colour, texture, and motion makes it possible to distinguish object regions that cannot be separated by any one cue alone. We propose a two-step approach. In the first stage, the input features are extracted and enhanced by applying coupled nonlinear diffusion. This ensures coherence between the channels and deals with outliers. We use a nonlinear diffusion technique that is closely related to total variation flow but strictly edge enhancing. The resulting features are then employed for a vector-valued front propagation based on level sets and statistical region models that approximate the distributions of each feature. The application of this approach to two-phase segmentation is followed by an extension to the tracking of multiple objects in image sequences.
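The statistical region competition at the heart of such level set methods can be sketched in a few lines. The toy below is our own illustration (the function name and simplifications are not from the paper): samples are iteratively relabelled by the nearest region mean, which is the data term of a Chan-Vese style two-phase model with the curvature/length regularizer omitted.

```python
def two_phase_segment(values, labels, iters=10):
    """Relabel samples by nearest region mean, iterating until stable.

    This is only the region-statistics (data) term of a two-phase
    level set model; a real implementation adds a curvature/length
    regularizer and evolves an embedding function."""
    labels = list(labels)
    for _ in range(iters):
        inside = [v for v, l in zip(values, labels) if l == 1]
        outside = [v for v, l in zip(values, labels) if l == 0]
        if not inside or not outside:
            break
        m1 = sum(inside) / len(inside)
        m0 = sum(outside) / len(outside)
        labels = [1 if (v - m1) ** 2 < (v - m0) ** 2 else 0 for v in values]
    return labels

# Even from a rough initialization, the bright samples gather in one region:
seg = two_phase_segment([0.1, 0.2, 0.15, 0.9, 0.85, 0.95, 0.1],
                        [0, 0, 0, 1, 1, 0, 0])
# seg == [0, 0, 0, 1, 1, 1, 0]
```

In the paper's vector-valued setting, the scalar means would be replaced by per-feature statistical region models over colour, texture, and motion channels.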
Annotating Synapses in Large EM Datasets
Reconstructing neuronal circuits at the level of synapses is a central problem in neuroscience and is becoming a focus of the emerging field of connectomics. To date, electron microscopy (EM) is the most proven technique for identifying and quantifying synaptic connections. As advances in EM make acquiring larger datasets possible, subsequent manual synapse identification (i.e., proofreading) for deciphering a connectome becomes a major time bottleneck. Here we introduce a large-scale, high-throughput, and semi-automated methodology to efficiently identify synapses. We successfully applied our methodology to the Drosophila medulla optic lobe, annotating many more synapses than previous connectome efforts. Our approaches are extensible and will make the often complicated process of synapse identification accessible to a wider community of potential proofreaders.
Computational models for structural analysis of retinal images
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
The evaluation of retinal structures has been of great interest because it can serve as a non-intrusive diagnostic tool in modern ophthalmology to detect many important eye diseases as well as cardiovascular disorders. A variety of retinal image analysis tools have been developed to assist ophthalmologists and eye disease experts by reducing the time required for eye screening, optimising costs, and providing efficient disease treatment and management systems. A key component of these tools is the segmentation and quantification of retinal structures. However, imaging artefacts such as noise, intensity inhomogeneity, and the overlapping tissue of retinal structures can significantly degrade the performance of these automated image analysis tools. This thesis aims to provide robust and reliable automated retinal image analysis techniques that allow for early detection of various retinal and other diseases. In particular, five innovative segmentation methods are proposed: two for retinal vessel network segmentation, two for optic disc segmentation, and one for retinal nerve fibre layer detection. First, three pre-processing operations are combined in the segmentation method to remove noise and enhance the appearance of the blood vessels in the image, and a Mixture of Gaussians is used to extract the blood vessel tree. Second, a graph cut segmentation approach is introduced, which incorporates a vector flux mechanism into the graph formulation to allow for the segmentation of very narrow blood vessels. Third, optic disc segmentation is performed using two alternative methods: a Markov random field image reconstruction approach, which detects the optic disc by removing the blood vessels from the optic disc area, and a graph cut with compensation factor method, which achieves this using prior information about the blood vessels. Fourth, the boundaries of the retinal nerve fibre layer (RNFL) are detected by adapting a graph cut segmentation technique that includes a kernel-induced space and a continuous multiplier based max-flow algorithm. Our retinal blood vessel segmentation methods, Mixture of Gaussians and Graph Cut, achieved average accuracies of 94.33% and 94.27% respectively. Our optic disc segmentation methods, Markov Random Field and Compensation Factor, achieved average sensitivities of 92.85% and 85.70% respectively. These results, obtained on several public datasets and compared with existing methods, show that the proposed methods are robust and efficient in segmenting retinal structures such as the blood vessels and the optic disc.
Brunel University London
http://bura.brunel.ac.uk/bitstream/2438/10387/1/FulltextThesis.pd
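The Mixture-of-Gaussians step can be illustrated with a tiny 1-D EM fitter. This sketch is our own, not the thesis implementation, and assumes just two intensity modes (background vs. vessel); means start at the data extremes and variances start small, so the first E-step already gives near-hard assignments.

```python
import math

def gaussian(x, mu, var):
    """Univariate normal density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_1d(data, iters=50):
    """Two-component EM on 1-D intensities: one mode for background
    pixels, one for vessel pixels."""
    mu = [min(data), max(data)]
    var = [0.01, 0.01]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        resp = []
        for x in data:
            p = [pi[k] * gaussian(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[k] / s for k in range(2)] if s > 0 else [0.5, 0.5])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
            pi[k] = nk / len(data)
    return mu, var, pi

mu, var, pi = em_1d([0.1, 0.12, 0.09, 0.11, 0.8, 0.82, 0.79])
# mu[0] ~ 0.105 (dark mode), mu[1] ~ 0.803 (bright mode)
```

A pixel would then be assigned to the vessel class when the corresponding component's responsibility dominates.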
Optic flow estimation inside a bounded domain
Bibliography: p. 27-28. Partially supported by I.N.R.I.A. (Institut National de la Recherche en Informatique et Automatique), Le Chesnay, France; National Science Foundation grant ECS-83-12921; Army Research Office grant DAAG-29-84-K-0005.
Anne Rougée, Bernard C. Levy, Alan S. Willsky
Structure analysis and lesion detection from retinal fundus images
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
Ocular pathology is one of the main health problems worldwide. The number of people with retinopathy symptoms has increased considerably in recent years. Early, adequate treatment has been demonstrated to be effective in avoiding loss of vision, and the analysis of fundus images is a non-intrusive option for periodic retinal screening.
Many models designed for the analysis of retinal images are based on supervised methods, which require hand-labelled images and processing time as part of the training stage. Moreover, most methods have been designed around specific characteristics of the retinal images (e.g. field of view, resolution), which limits their performance to a reduced group of retinal images with similar features.
For these reasons, an unsupervised model for the analysis of retinal images is required: one that can work without human supervision or interaction, and that can perform on retinal images with different characteristics. In this research, we have worked on the development of such a model. The system first locates the eye structures (e.g. optic disc and blood vessels). These structures are then masked out from the retinal image in order to create a clear field in which to perform lesion detection.
We selected the graph cut technique as the basis for designing the retinal structure segmentation methods. This choice allows prior knowledge to be incorporated to constrain the search for the optimal segmentation. Different link weight assignments were formulated to address the specific needs of the retinal structures (e.g. shape).
This research project has brought together the fields of image processing and ophthalmology to create a novel system that contributes significantly to the state of the art in medical image analysis. This new knowledge provides an alternative way to approach the analysis of medical images and opens a new panorama for researchers exploring this area.
Mexican National Council of Science and Technology
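The graph cut formulation used throughout this thesis minimizes a binary energy of unary (regional) and pairwise (boundary) terms. The toy below is our own illustration, with names of our choosing: it brute-forces a tiny 1-D chain instead of building the s-t graph, but the energy is exactly the one a max-flow cut would minimize.

```python
from itertools import product

def energy(labels, unary, w):
    """E(x) = sum_i U_i(x_i) + w * #{neighbouring pixels with different
    labels}: the standard binary MRF energy that a graph cut solves
    exactly via max-flow/min-cut."""
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += w * sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return e

def min_energy_labeling(unary, w):
    # Brute force over all labelings of a tiny 1-D "image";
    # a real implementation builds the graph and runs max-flow.
    n = len(unary)
    return min(product([0, 1], repeat=n), key=lambda ls: energy(ls, unary, w))

# Unary costs favour labels 0,0,1,1; the pairwise term charges w per boundary.
best = min_energy_labeling([[0, 5], [0, 4], [4, 0], [5, 0]], w=1.0)
# best == (0, 0, 1, 1): one boundary, paid for by the smoothness term
```

Prior knowledge (e.g. vessel shape, or the compensation factor for the optic disc) enters such models by adjusting the unary costs or the link weights.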
Improving statistical power of glaucoma clinical trials using an ensemble of cyclical generative adversarial networks
Although spectral-domain OCT (SDOCT) is now in clinical use for glaucoma management, published clinical trials relied on time-domain OCT (TDOCT), which is characterized by a low signal-to-noise ratio, leading to low statistical power. For this reason, such trials require large numbers of patients observed over long intervals and become more costly. We propose a probabilistic ensemble model and a cycle-consistent perceptual loss for improving the statistical power of trials utilizing TDOCT. TDOCT images are converted to synthesized SDOCT images and segmented via Bayesian fusion of an ensemble of GANs. The final retinal nerve fibre layer segmentation is obtained automatically on an averaged synthesized image using label fusion. We benchmark different networks using i) GAN, ii) Wasserstein GAN (WGAN), iii) GAN + perceptual loss, and iv) WGAN + perceptual loss. For training and validation, an independent dataset is used, while testing is performed on the UK Glaucoma Treatment Study (UKGTS), i.e. a TDOCT-based trial. We quantify the statistical power of the measurements obtained with our method, as compared with those derived from the original TDOCT. The results provide new insights into the UKGTS, showing a significantly better separation between treatment arms, while bringing the statistical power of TDOCT on par with visual field measurements.
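The label-fusion step can be illustrated with a minimal pixel-wise majority vote across ensemble predictions. This is a simplified stand-in for the Bayesian fusion the abstract describes, with names of our choosing.

```python
def fuse_labels(predictions):
    """Pixel-wise majority vote over an ensemble of binary segmentations:
    a pixel is foreground when more than half the members say so."""
    n = len(predictions)
    return [1 if 2 * sum(votes) > n else 0 for votes in zip(*predictions)]

# Three ensemble members vote on four pixels:
fused = fuse_labels([[1, 0, 1, 0],
                     [1, 1, 0, 0],
                     [0, 0, 1, 1]])
# fused == [1, 0, 1, 0]
```

A Bayesian fusion would additionally weight each member's vote by its estimated reliability rather than counting votes equally.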
Towards Complete Ocular Disease Diagnosis in Color Fundus Image
Non-invasive assessment of the retinal fundus image is well suited for early detection of ocular disease and is facilitated further by advancements in computer vision and machine learning. Most deep learning based diagnosis systems give just a diagnosis (absence or presence) of a certain number of diseases without hinting at the underlying pathological abnormalities. In this thesis, we attempt to extract such pathological markers, as an ophthalmologist would, and pave the way for explainable diagnosis and assistance. Such abnormalities can be present in various regions of a fundus image, including the vasculature, the optic nerve disc/cup, or even the non-vascular region. This thesis consists of a series of novel techniques, starting from robust retinal vessel segmentation, through complete vascular topology extraction, to better artery-vein classification. Finally, we compute two of the most important vascular anomalies: the artery-vein ratio and vessel tortuosity. While most research focuses on vessel segmentation and artery-vein classification, we have successfully advanced this line of research one step further. We believe this can be a very valuable framework for future researchers working on automated retinal disease diagnosis.
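Of the two anomalies mentioned, vessel tortuosity has a particularly compact standard definition: arc length of the centreline divided by the length of its chord. A minimal sketch (ours, not the thesis implementation):

```python
import math

def tortuosity(points):
    """Distance-metric tortuosity of a vessel centreline:
    arc length over chord length; equals 1.0 for a straight segment
    and grows as the vessel winds."""
    arc = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    return arc / math.dist(points[0], points[-1])

straight = tortuosity([(0, 0), (1, 0), (2, 0)])   # 1.0
bent = tortuosity([(0, 0), (1, 1), (2, 0)])       # sqrt(2) ~ 1.414
```

In practice the centreline points would come from the extracted vascular topology rather than being given by hand.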
Study of Image Local Scale Structure Using Nonlinear Diffusion
Multi-scale representation and local scale extraction of images are important in computer vision research, as in general the structures within images are unknown. Traditionally, multi-scale analysis is based on linear diffusion (i.e. heat diffusion), with known limitations in edge distortion. In addition, the term scale, which is used widely in multi-scale and local scale analysis, does not have a consistent definition, and this can pose difficulties in real image analysis, especially for the proper interpretation of scale as a geometric measure. In this study, in order to overcome the limitations of linear diffusion, we focus on multi-scale analysis based on the total variation minimization model. This model has been used in image denoising for its power to preserve edge structures. Based on the total variation model, we construct the multi-scale space and propose a definition for image local scale. The new definition of local scale incorporates both pixel-wise and orientation information. It can be interpreted with a clear geometrical meaning and applied in general image analysis. The potential applications of the total variation model in retinal fundus image analysis are also explored. The existence of blood vessel and drusen structures within a single fundus image makes the image analysis a challenging problem. A multi-scale model based on total variation is used, showing its capabilities in both drusen and blood vessel detection. The performance of vessel detection is compared with publicly available methods, showing improvements both quantitatively and qualitatively. This study provides better insight into local scale analysis and shows the potential of the total variation model in medical image analysis.
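The edge-preserving behaviour of total variation minimization can be illustrated with a small 1-D ROF-style denoiser. The sketch below is our own (gradient descent on a smoothed TV energy, not the exact minimization or the study's multi-scale construction): small within-region oscillations are flattened while the large jump between regions survives.

```python
import math

def tv_denoise_1d(f, lam=0.2, step=0.05, iters=2000, eps=1e-3):
    """Gradient descent on a smoothed 1-D ROF energy
    E(u) = 1/2 sum (u_i - f_i)^2 + lam * sum sqrt((u_{i+1}-u_i)^2 + eps).

    The eps term makes the absolute value differentiable; unlike linear
    (heat) diffusion, the TV term penalizes a jump by its height only,
    so edges are preserved rather than blurred."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(n)]     # data-fidelity gradient
        for i in range(n - 1):                   # smoothed TV gradient
            d = u[i + 1] - u[i]
            s = lam * d / math.sqrt(d * d + eps)
            g[i] -= s
            g[i + 1] += s
        u = [u[i] - step * g[i] for i in range(n)]
    return u

# A noisy step signal: the plateaus flatten, the edge between them is kept.
u = tv_denoise_1d([0.0, 0.1, -0.1, 0.0, 1.0, 0.9, 1.1, 1.0])
```

Sweeping the regularization weight lam yields the kind of multi-scale family of increasingly simplified images on which a local scale definition can be built.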
Investigating and Improving Latent Density Segmentation Models for Aleatoric Uncertainty Quantification in Medical Imaging
Data uncertainties, such as sensor noise or occlusions, can introduce irreducible ambiguities in images, which result in varying, yet plausible, semantic hypotheses. In machine learning, this ambiguity is commonly referred to as aleatoric uncertainty. Latent density models can be utilized to address this problem in image segmentation. The most popular approach is the Probabilistic U-Net (PU-Net), which uses latent Normal densities to optimize the conditional data log-likelihood Evidence Lower Bound. In this work, we demonstrate that the PU-Net latent space is severely inhomogeneous. As a result, the effectiveness of gradient descent is inhibited and the model becomes extremely sensitive to the localization of the latent space samples, resulting in defective predictions. To address this, we present the Sinkhorn PU-Net (SPU-Net), which uses the Sinkhorn Divergence to promote homogeneity across all latent dimensions, effectively improving gradient-descent updates and model robustness. Our results show that, applied to public datasets of various clinical segmentation problems, the SPU-Net achieves up to 11% performance gains over preceding latent variable models for probabilistic segmentation on the Hungarian-Matched metric. The results indicate that by encouraging a homogeneous latent space, one can significantly improve latent density modeling for medical image segmentation.
Comment: 12 pages incl. references, 11 figures
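The Sinkhorn machinery behind the SPU-Net builds on entropically regularized optimal transport. The toy below is our own sketch of the core Sinkhorn matrix-scaling iteration; note it computes the plain entropic transport cost, not the debiased Sinkhorn Divergence S(a,b) = OT(a,b) - OT(a,a)/2 - OT(b,b)/2 used in the paper.

```python
import math

def sinkhorn_cost(a, b, cost, reg=0.1, iters=200):
    """Entropic optimal transport between discrete distributions a and b:
    alternately rescale the rows and columns of K = exp(-C/reg) until the
    transport plan P = diag(u) K diag(v) matches both marginals, then
    return the transport cost <P, C>."""
    n, m = len(a), len(b)
    K = [[math.exp(-cost[i][j] / reg) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return sum(u[i] * K[i][j] * v[j] * cost[i][j]
               for i in range(n) for j in range(m))

# Moving all mass from the first point to the second costs ~1.0;
# identical distributions cost (almost) nothing.
move = sinkhorn_cost([1.0, 0.0], [0.0, 1.0], [[0.0, 1.0], [1.0, 0.0]])
same = sinkhorn_cost([0.5, 0.5], [0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]])
```

The debiasing step matters because the entropic cost is nonzero even for identical inputs at larger reg; the divergence subtracts that self-transport bias before it is used as a training loss.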