
    Retinal vessel segmentation using multi-scale textons derived from keypoints

    This paper presents a retinal vessel segmentation algorithm which uses a texton dictionary to classify vessel/non-vessel pixels. However, in contrast to previous work, where filter parameters are learnt from manually labelled image pixels, our filter parameters are derived from a smaller set of image features that we call keypoints. A Gabor filter bank, parameterised empirically by ROC analysis, is used to extract keypoints representing significant scale-specific vessel features, using an approach inspired by the SIFT algorithm. We first determine keypoints using a validation set and then derive seeds from these points to initialise a k-means clustering algorithm, which builds a texton dictionary from a separate training set. During testing we use a simple 1-NN classifier to identify vessel/non-vessel pixels, and we evaluate our system on the DRIVE database. We achieve average sensitivity, specificity and accuracy of 78.12%, 96.68% and 95.05%, respectively. We find that clusters of filter responses from keypoints are more robust than those derived from hand-labelled pixels. This, in turn, yields textons that are more representative of the vessel/non-vessel classes and mitigates problems arising from intra- and inter-observer variability.
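    A minimal sketch of a texton pipeline along these lines is shown below, assuming NumPy, scikit-image and scikit-learn; the Gabor frequencies, the number of textons and the keypoint-derived seed pixels are illustrative stand-ins for the ROC-tuned parameters and SIFT-inspired keypoint detection described in the paper, not the authors' implementation.

        # Sketch: Gabor filter responses -> k-means texton dictionary -> 1-NN
        # vessel/non-vessel classification. Parameters are illustrative.
        import numpy as np
        from skimage.filters import gabor
        from sklearn.cluster import KMeans
        from sklearn.neighbors import KNeighborsClassifier

        def filter_responses(image, frequencies=(0.1, 0.2, 0.3), n_orient=6):
            """Stack Gabor magnitude responses into an (H, W, D) feature volume."""
            responses = []
            for f in frequencies:
                for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
                    real, imag = gabor(image, frequency=f, theta=theta)
                    responses.append(np.hypot(real, imag))
            return np.stack(responses, axis=-1)

        def build_texton_dictionary(feats, seed_pixels, n_textons=20):
            """Cluster filter responses at keypoint-derived seed pixels into textons."""
            samples = feats[seed_pixels[:, 0], seed_pixels[:, 1]]
            return KMeans(n_clusters=n_textons, n_init=10, random_state=0).fit(samples)

        def classify_pixels(feats, textons, texton_labels):
            """Label every pixel vessel/non-vessel with a 1-NN texton classifier."""
            h, w, d = feats.shape
            knn = KNeighborsClassifier(n_neighbors=1)
            knn.fit(textons.cluster_centers_, texton_labels)  # one label per texton
            return knn.predict(feats.reshape(-1, d)).reshape(h, w)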

    MedicalSeg: a medical GUI application for image segmentation management

    In the field of medical imaging, the division of an image into meaningful structures using image segmentation is an essential pre-processing step for analysis. Many studies have been carried out to address the general problem of evaluating image segmentation results. One of the main focuses in the computer vision field is artificial intelligence algorithms for segmentation and classification, including machine learning and deep learning approaches. The main drawback of supervised segmentation approaches is that a large dataset of ground truth validated by medical experts is required. Consequently, many research groups have developed their own segmentation approaches according to their specific needs. However, a generalised application aimed at visualizing, assessing and comparing the results of different methods, while facilitating the generation of a ground-truth repository, is not found in the recent literature. In this paper, a new graphical user interface application (MedicalSeg) for the management of medical image pre-processing and segmentation is presented. The objective is twofold: first, to create a test platform for comparing segmentation approaches, and secondly, to generate segmented images that can serve as ground truths for future artificial intelligence tools. An experimental demonstration and a discussion of the performance analysis are presented in this paper.
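    As a hint of the kind of quantitative comparison such a platform can support, the sketch below computes standard overlap metrics between a candidate segmentation and a reference mask; the function names are illustrative and are not part of MedicalSeg's interface.

        # Overlap metrics for comparing a segmentation against a ground-truth mask.
        import numpy as np

        def dice_coefficient(pred, truth):
            """Dice overlap between two binary masks (1.0 = perfect agreement)."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            intersection = np.logical_and(pred, truth).sum()
            return 2.0 * intersection / (pred.sum() + truth.sum() + 1e-8)

        def jaccard_index(pred, truth):
            """Intersection-over-union between two binary masks."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            intersection = np.logical_and(pred, truth).sum()
            union = np.logical_or(pred, truth).sum()
            return intersection / (union + 1e-8)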

    Retinal vessel segmentation using textons

    Segmenting vessels from retinal images, like segmentation in many other medical image domains, is a challenging task, as there is no unified way to extract the vessels accurately. However, it is the most critical stage in the automatic assessment of various diseases (e.g. glaucoma, age-related macular degeneration, diabetic retinopathy and cardiovascular diseases). Our research aims to investigate retinal image segmentation approaches based on textons, as they provide a compact description of texture that can be learnt from a training set. This thesis presents a brief review of those diseases, including their current situations, future trends and the techniques used for their automatic diagnosis in routine clinical applications. The importance of retinal vessel segmentation is particularly emphasized in such applications. An extensive review of previous work on retinal vessel segmentation and salient texture analysis methods is presented. Five automatic retinal vessel segmentation methods are proposed in this thesis. The first method focuses on removing pathological anomalies (drusen, exudates) for retinal vessel segmentation, which have been identified by other researchers as a common source of error. The results show that the modified method offers some improvement over a previously published method. The second, novel supervised segmentation method employs textons. We propose a new filter bank (MR11) that includes bar detectors for vascular feature extraction and other kernels to detect edges and photometric variations in the image (see the sketch after this abstract). The k-means clustering algorithm is adopted for texton generation based on the vessel and non-vessel elements identified by ground truth. The third, improved supervised method is developed from the second, in which textons are generated by k-means clustering and texton maps representing vessels are derived by back-projecting pixel clusters onto hand-labelled ground truth. A further step ensures that the best combinations of textons are represented in the map and subsequently used to identify vessels in the test set. The experimental results on two benchmark datasets show that our proposed method performs well compared to other published work and to the results of human experts. A further test of our system on an independent set of optical fundus images verified its consistent performance. The statistical analysis of the experimental results also reveals that it is possible to train unified textons for retinal vessel segmentation. In the fourth method a novel scheme using a Gabor filter bank for vessel feature extraction is proposed. The method is inspired by the human visual system, and machine learning is used to optimize the Gabor filter parameters. The experimental results demonstrate that our method significantly enhances the true positive rate while maintaining a level of specificity that is comparable with other approaches. Finally, we propose a new unsupervised texton-based retinal vessel segmentation method using a derivative of SIFT and multi-scale Gabor filters. The lack of sufficient quantities of hand-labelled ground truth and the high variability in ground-truth labels amongst experts provide the motivation for this approach. The evaluation results reveal that our unsupervised segmentation method is comparable with the best supervised methods and other state-of-the-art approaches.
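    The sketch below illustrates the general idea of an oriented bar/edge filter bank of the kind the MR11 bank builds on, using anisotropic Gaussian derivatives; the kernel sizes, scales and the Gaussian/Laplacian-of-Gaussian additions are illustrative assumptions, not the exact kernels defined in the thesis.

        # Oriented edge (1st derivative) and bar (2nd derivative) kernels, plus
        # isotropic kernels for photometric variation; parameters are illustrative.
        import numpy as np
        from scipy import ndimage

        def oriented_kernel(size, sigma_x, sigma_y, theta, order):
            """Anisotropic Gaussian derivative: order=1 gives an edge, order=2 a bar."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            g = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
            if order == 1:
                k = -yr / sigma_y**2 * g                         # edge detector
            else:
                k = (yr**2 / sigma_y**4 - 1.0 / sigma_y**2) * g  # bar detector
            return k - k.mean()

        def filter_bank(size=25, scales=((3, 1), (6, 2)), n_orient=6):
            """Oriented edge/bar kernels at several scales, plus a Gaussian and a LoG."""
            kernels = []
            for sx, sy in scales:
                for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
                    kernels.append(oriented_kernel(size, sx, sy, theta, order=1))
                    kernels.append(oriented_kernel(size, sx, sy, theta, order=2))
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            gauss = np.exp(-(x**2 + y**2) / (2.0 * 4.0**2))
            kernels.append(gauss / gauss.sum())                  # Gaussian
            kernels.append(ndimage.laplace(gauss / gauss.sum())) # Laplacian of Gaussian
            return kernels

        def apply_bank(image, kernels):
            """Convolve the image with every kernel; stack responses as (H, W, D)."""
            return np.stack([ndimage.convolve(image, k, mode='reflect')
                             for k in kernels], axis=-1)

    The stacked responses can then be clustered with k-means, as in the keypoint-based pipeline sketched earlier, to form the texton dictionary.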

    On the Stability of Region Count in the Parameter Space of Image Analysis Methods

    In this dissertation a novel bottom-up computer vision approach is proposed. This approach is based upon quantifying the stability of the region count in a multi-dimensional parameter scale-space. The stability analysis derives from the properties of flat areas in the region-count space generated by bottom-up algorithms such as thresholding and region growing, hysteresis thresholding, and variance-based region growing. The parameters used can be thresholds, region-growth criteria, intensity statistics and other low-level parameters. The advantages and disadvantages of top-down, bottom-up and hybrid computational models are discussed. The scale-space, perceptual organization and clustering approaches in computer vision are also analyzed, and the differences between our approach and these approaches are clarified. An overview of our stable-count idea and the implementation of three algorithms derived from it are presented. The algorithms are applied to real-world images as well as simulated signals. We have developed three experiments based upon our framework of stable region count, using a flower detector, a peak detector and a retinal image lesion detector, respectively, to process images and signals. The results from these experiments all suggest that our computer vision framework can solve different image and signal problems and provide satisfactory solutions. Finally, future research directions and improvements are proposed.
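    A minimal sketch of the stable-count idea, assuming SciPy, is given below: one low-level parameter (a global threshold) is swept, the number of connected regions is counted at each value, and the longest flat plateau in the count-versus-parameter curve is taken as the stable answer. Thresholding stands in here for the other bottom-up algorithms (region growing, hysteresis thresholding) covered in the dissertation.

        # Sweep a threshold, count connected regions, and find the flattest plateau.
        import numpy as np
        from scipy import ndimage

        def region_counts(image, thresholds):
            """Number of connected foreground regions at each threshold value."""
            return np.array([ndimage.label(image > t)[1] for t in thresholds])

        def most_stable_count(counts, thresholds):
            """Region count with the longest constant run, plus its threshold range."""
            best = (1, counts[0], (thresholds[0], thresholds[0]))  # (run, count, range)
            run_start = 0
            for i in range(1, len(counts)):
                if counts[i] != counts[run_start]:
                    run_start = i
                run = (i - run_start + 1, counts[i],
                       (thresholds[run_start], thresholds[i]))
                if run[0] > best[0]:
                    best = run
            return best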

    The image ray transform

    Image feature extraction is a fundamental area of image processing and computer vision. There are many ways to create techniques that extract features, and particularly novel techniques can be developed by taking inspiration from the physical world. This thesis presents the Image Ray Transform (IRT), a technique based upon an analogy to light, using the mechanisms that define how light travels through different media, and an analogy to optical fibres, to extract structural features within an image. By treating the image as a transparent medium we can use refraction and reflection to cast many rays inside the image and guide them towards features, transforming the image in order to emphasise tubular and circular structures. The power of the transform for structural feature detection is shown empirically in a number of applications, especially through its ability to highlight curvilinear structures. The IRT is used as a preprocessor to enhance the accuracy of circle detection, highlighting circles to a greater extent than conventional edge detection methods. The transform is also shown to be well suited to enrolment for ear biometrics, providing a high detection and recognition rate with PCA, comparable to manual enrolment. Vascular features such as those found in medical images are also shown to be emphasised by the transform, and the IRT is used for detection of the vasculature in retinal fundus images. Extensions to the basic image ray transform allow higher-level features to be detected. A method is shown for expressing rays in an invariant form to describe the structures of an object, and hence the object itself, with a bag-of-visual-words model. These ray features provide a description of objects complementary to other patch-based descriptors and have been tested on a number of object categorisation databases. Finally, a different analysis of rays is provided that can produce information on both bilateral (reflectional) and rotational symmetry within the image, allowing a deeper understanding of image structure. The IRT is a flexible technique, capable of detecting a range of high- and low-level image features, and open to further use and extension across a range of applications.
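    The sketch below conveys only the flavour of the transform: pixel intensity is mapped to a refractive index, rays are marched through the image, and total internal reflection off the local gradient direction tends to trap rays inside bright tubular structures. The update rules and parameters are a coarse simplification for illustration, not the formulation used in the thesis.

        # Highly simplified image-ray accumulation with intensity-derived
        # refractive indices and total internal reflection.
        import numpy as np

        def image_ray_transform(image, n_rays=5000, n_max=2.0, steps=256, seed=None):
            rng = np.random.default_rng(seed)
            img = (image - image.min()) / (image.max() - image.min() + 1e-8)
            n_map = 1.0 + (n_max - 1.0) * img          # refractive index per pixel
            gy, gx = np.gradient(img)                  # approximate boundary normals
            acc = np.zeros_like(img)
            h, w = img.shape
            for _ in range(n_rays):
                pos = rng.uniform([0.0, 0.0], [h - 1.0, w - 1.0])  # random start
                ang = rng.uniform(0.0, 2.0 * np.pi)
                d = np.array([np.sin(ang), np.cos(ang)])           # unit direction
                for _ in range(steps):
                    if not (0 <= pos[0] < h - 1 and 0 <= pos[1] < w - 1):
                        break                          # ray has left the image
                    y, x = int(pos[0]), int(pos[1])
                    acc[y, x] += 1.0                   # accumulate ray density
                    nxt = pos + d
                    yn = int(np.clip(nxt[0], 0, h - 1))
                    xn = int(np.clip(nxt[1], 0, w - 1))
                    n1, n2 = n_map[y, x], n_map[yn, xn]
                    normal = np.array([gy[y, x], gx[y, x]])
                    norm = np.linalg.norm(normal)
                    if n2 < n1 and norm > 1e-6:
                        normal /= norm
                        cos_i = -np.dot(d, normal)
                        sin_i = np.sqrt(max(0.0, 1.0 - cos_i**2))
                        if sin_i > n2 / n1:            # total internal reflection
                            d = d + 2.0 * cos_i * normal
                    pos = pos + d
            return acc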

    Towards automated three-dimensional tracking of nephrons through stacked histological image sets

    A dissertation submitted to the Faculty of Engineering and the Built Environment, University of Witwatersrand, for the degree of Master of Science in Engineering, August 2015. The three-dimensional microarchitecture of the mammalian kidney is of keen interest in the fields of cell biology and biomedical engineering, as it plays a crucial role in renal function. This study presents a novel approach to the automatic tracking of individual nephrons through three-dimensional histological image sets of mouse and rat kidneys. The image database forms part of a previous study carried out at the University of Aarhus, Denmark. That study involved manually tracking a few hundred nephrons through the image sets in order to explore the renal microarchitecture, and its results form the gold standard for this study. The purpose of the current research is to develop methods which contribute towards creating an automated, intelligent system as a standard tool for such image sets. This would reduce the excessive time and human effort previously required for the tracking task, enabling a larger sample of nephrons to be tracked. It would also be desirable, in future, to explore the renal microstructure of various species and diseased specimens. The developed algorithm is robust, able to isolate closely packed nephrons and track their convoluted paths despite a number of non-ideal conditions such as local image distortions, artefacts and connective tissue interference. The system consists of initial image pre-processing steps such as background removal, adaptive histogram equalisation and image segmentation. A feature extraction stage achieves data abstraction and information concentration by extracting shape descriptors, radial shape profiles and key coordinates for each nephron cross-section. A custom graph-based tracking algorithm is implemented to track the nephrons using the extracted coordinates. A rule-base and machine learning algorithms, including an Artificial Neural Network and a Support Vector Machine, are used to evaluate the shape features and other information to validate the algorithm's results through each of its iterations. The validation steps prove to be highly effective in rejecting incorrect tracking moves, with the rule-base having greater than 90% accuracy and the Artificial Neural Network and Support Vector Machine both producing 93% classification accuracies. Comparison of a selection of automatically and manually tracked nephrons yielded results of 95% accuracy and 98% tracking extent for the proximal convoluted tubule, proximal straight tubule and ascending thick limb of the loop of Henle. The ascending and descending thin limbs of the loop of Henle pose a challenge, having low accuracy and low tracking extent due to the low resolution, narrow diameter and high density of cross-sections in the inner medulla. Limited manual intervention is proposed as a solution to these limitations, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron. The developed semi-automatic system saves a considerable amount of time and effort in comparison with the manual task. Furthermore, the developed methodology forms a foundation for future development towards a fully automated tracking system for nephrons.
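    A simplified sketch of the tracking step is given below: each cross-section is reduced to a centroid, and the nearest centroid in the next slice is accepted only if an optional validation callback (standing in for the rule-base and ANN/SVM checks described above) approves the move. The distance threshold and the function signature are illustrative assumptions, not the dissertation's actual interface.

        # Greedy slice-to-slice linking of one tubule's cross-section centroids.
        import numpy as np

        def track_nephron(centroids_per_slice, start, max_jump=15.0, validate=None):
            """centroids_per_slice: list of (N_i, 2) arrays, one per slice.
            start: (y, x) centroid of the tracked nephron in the first slice."""
            path = [np.asarray(start, dtype=float)]
            for candidates in centroids_per_slice[1:]:
                if len(candidates) == 0:
                    break
                dists = np.linalg.norm(candidates - path[-1], axis=1)
                best = int(np.argmin(dists))
                move_ok = dists[best] <= max_jump
                if validate is not None:               # rule-base / classifier check
                    move_ok = move_ok and validate(path[-1], candidates[best])
                if not move_ok:
                    break                              # flag for manual correction
                path.append(candidates[best].astype(float))
            return path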

    Computational Methods for Image Acquisition and Analysis with Applications in Optical Coherence Tomography

    The computational approach to image acquisition and analysis plays an important role in medical imaging and optical coherence tomography (OCT). This thesis is dedicated to the development and evaluation of algorithmic solutions for better image acquisition and analysis, with a focus on OCT retinal imaging. For image acquisition, we first developed, implemented and systematically evaluated a compressive sensing approach to image/signal acquisition for single-pixel camera architectures and an OCT system. Our evaluation provides detailed insight into implementing compressive data acquisition in those imaging systems. We further proposed a convolutional neural network model, LSHR-Net, as the first deep-learning imaging solution for the single-pixel camera. This method achieves more accurate and hardware-efficient image acquisition and reconstruction than conventional compressive sensing algorithms. Three image analysis methods were proposed to achieve retinal OCT image analysis with high accuracy and robustness. We first proposed a framework for healthy retinal layer segmentation. Our framework consists of several image processing algorithms specifically aimed at segmenting a total of 12 thin retinal cell layers, outperforming other segmentation methods. Furthermore, we proposed two deep-learning-based models to segment retinal oedema lesions in OCT images, with particular attention to small-scale datasets. The first model leverages transfer learning to implement oedema segmentation and achieves better accuracy than comparable methods. Based on the meta-learning concept, a second model was designed as a solution for general medical image segmentation. The results of this work indicate that our model can be applied to retinal OCT images and other small-scale medical image data, such as skin cancer images, as demonstrated in this thesis.
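    For the compressive-sensing baseline mentioned above, a minimal single-pixel acquisition and reconstruction sketch is given below, using a random +/-1 measurement matrix and ISTA with a DCT sparsity prior; it illustrates the kind of conventional algorithm the thesis compares against, not LSHR-Net itself, and the measurement model and parameters are illustrative.

        # Simulate single-pixel measurements y = Phi @ x and recover the image
        # by l1-regularised least squares (ISTA) on its DCT coefficients.
        import numpy as np
        from scipy.fft import dctn, idctn

        def measure(image, m, seed=None):
            """m compressive measurements with a random +/-1 mask per row."""
            rng = np.random.default_rng(seed)
            x = image.ravel().astype(float)
            phi = rng.choice([-1.0, 1.0], size=(m, x.size)) / np.sqrt(m)
            return phi @ x, phi

        def ista_reconstruct(y, phi, shape, lam=0.01, n_iter=200):
            """Recover an image of the given shape from y and phi."""
            A = lambda s: phi @ idctn(s.reshape(shape), norm='ortho').ravel()
            At = lambda r: dctn((phi.T @ r).reshape(shape), norm='ortho').ravel()
            # Estimate the Lipschitz constant of the gradient by power iteration.
            v = np.random.default_rng(0).standard_normal(int(np.prod(shape)))
            L = 1.0
            for _ in range(10):
                v = At(A(v))
                L = np.linalg.norm(v)
                v = v / L
            s = np.zeros(int(np.prod(shape)))
            for _ in range(n_iter):
                s = s + At(y - A(s)) / L                               # gradient step
                s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)  # soft threshold
            return idctn(s.reshape(shape), norm='ortho')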

    Recent trends, technical concepts and components of computer-assisted orthopedic surgery systems: A comprehensive review

    Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging types of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases, employing modern clinical navigation systems and surgical tools. This paper presents a comprehensive review of recent trends in, and possibilities of, CAOS systems. There are three types of surgical planning systems: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound images), systems that utilize 2D or 3D fluoroscopic images, and systems that utilize kinetic information about the joints and morphological information about the target bones. This review is focused on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools used in CAOS systems. We also outline the possibilities of using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.