
    The image ray transform for structural feature detection

    The use of analogies to physical phenomena is an exciting paradigm in computer vision that allows unorthodox approaches to feature extraction, creating new techniques with unique properties. A technique known as the "image ray transform" has been developed based upon an analogy to the propagation of light as rays. The transform analogises an image to a set of glass blocks with refractive index linked to pixel properties and then casts a large number of rays through the image. The course of these rays is accumulated into an output image. The technique can successfully extract tubular and circular features, and we show successful circle detection, ear biometrics and retinal vessel extraction. The transform has also been extended through the use of multiple rays arranged as a beam to increase robustness to noise, and we show quantitative results for fully automatic ear recognition, achieving 95.2% rank-one recognition across 63 subjects.
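The ray-casting idea behind the transform can be sketched in a few lines. The following is a simplified, hypothetical re-implementation, not the authors' exact formulation: the index mapping n = 1 + (k - 1) * intensity and the use of the local intensity gradient as the refracting surface normal are assumptions made here for illustration.

```python
import numpy as np

def image_ray_transform(image, n_rays=500, k=10.0, max_steps=300, seed=0):
    """Simplified image ray transform: treat each pixel as a medium with
    refractive index n = 1 + (k - 1) * intensity, cast randomly started
    rays through the image and accumulate their paths. Direction changes
    use a vector form of Snell's law, with total internal reflection when
    refraction is impossible; the intensity gradient stands in for the
    surface normal at index boundaries (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    n_map = 1.0 + (k - 1.0) * image / image.max()
    gy, gx = np.gradient(image.astype(float))
    acc = np.zeros_like(image, dtype=float)
    for _ in range(n_rays):
        pos = rng.uniform([0, 0], [h - 1, w - 1])   # random start
        ang = rng.uniform(0, 2 * np.pi)             # random direction
        d = np.array([np.sin(ang), np.cos(ang)])
        for _ in range(max_steps):
            iy, ix = int(round(pos[0])), int(round(pos[1]))
            if not (0 <= iy < h and 0 <= ix < w):
                break                               # ray left the image
            acc[iy, ix] += 1.0                      # accumulate the path
            nxt = pos + d
            jy, jx = int(round(nxt[0])), int(round(nxt[1]))
            if 0 <= jy < h and 0 <= jx < w and n_map[jy, jx] != n_map[iy, ix]:
                n1, n2 = n_map[iy, ix], n_map[jy, jx]
                normal = np.array([gy[iy, ix], gx[iy, ix]])
                norm = np.linalg.norm(normal)
                if norm > 1e-9:
                    normal /= norm
                    cos_i = -normal @ d
                    if cos_i < 0:                   # orient normal against ray
                        normal, cos_i = -normal, -cos_i
                    r = n1 / n2
                    s2 = r * r * (1.0 - cos_i ** 2)
                    if s2 > 1.0:                    # total internal reflection
                        d = d + 2.0 * cos_i * normal
                    else:                           # refraction (Snell's law)
                        d = r * d + (r * cos_i - np.sqrt(1.0 - s2)) * normal
                    d /= np.linalg.norm(d)
            pos = pos + d
    return acc
```

Tubular and circular structures trap rays by repeated internal reflection, so their pixels accumulate far more ray traversals than the background.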

    Optic nerve head segmentation

    Reliable and efficient optic disk localization and segmentation are important tasks in automated retinal screening. General-purpose edge detection algorithms often fail to segment the optic disk due to fuzzy boundaries, inconsistent image contrast or missing edge features. This paper presents an algorithm for the localization and segmentation of the optic nerve head boundary in low-resolution images (about 20 µm/pixel). Optic disk localization is achieved using specialized template matching, and segmentation by a deformable contour model. The latter uses a global elliptical model and a local deformable model with variable edge-strength-dependent stiffness. The algorithm is evaluated against a randomly selected database of 100 images from a diabetic screening programme. Ten images were classified as unusable; the others were of variable quality. The localization algorithm succeeded on all bar one of the usable images; the contour estimation algorithm was qualitatively assessed by an ophthalmologist as having Excellent-Fair performance in 83% of cases, and performs well even on blurred images.
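Template matching for disc localization can be illustrated with plain normalized cross-correlation. This is a toy stand-in, not the paper's specialized template; the synthetic image, disc position and template size are all invented for the example.

```python
import numpy as np

def match_template_ncc(image, template):
    """Locate a template in an image by normalized cross-correlation (NCC).
    Returns the (row, col) of the best-matching window's top-left corner
    and the NCC score. A brute-force sketch of template-based disc
    localization, not the specialized matcher from the paper."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom < 1e-12:
                continue                      # flat window, undefined NCC
            score = (w * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# synthetic fundus-like image: a bright disc at a known location
img = np.zeros((40, 40))
yy, xx = np.mgrid[:40, :40]
img[(yy - 25) ** 2 + (xx - 12) ** 2 < 16] = 1.0
tmpl = np.zeros((9, 9))
ty, tx = np.mgrid[:9, :9]
tmpl[(ty - 4) ** 2 + (tx - 4) ** 2 < 16] = 1.0
(r, c), score = match_template_ncc(img, tmpl)
```

Because NCC normalizes both the window and the template, the match is robust to the inconsistent image contrast the abstract mentions, which is one reason template matching is preferred over raw correlation here.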

    Recognition of License Plates and Optical Nerve Pattern Detection Using Hough Transform

    The Hough transform is a global feature-detection technique used in image processing, computer vision and image analysis. Its main purpose is to detect the prominent lines of the object under consideration, which it achieves through a voting process. The first part of this work uses the Hough transform as a feature vector, tested on the Indian license plate system with fonts of UK standard and UK standard 3D, which has ten slots for characters and numbers, so ten sub-images are obtained. These sub-images are fed to the Hough transform and Hough peaks to extract the peak information, and the first two Hough peaks are taken into account for recognition. Edge detection, together with image rotation, is applied prior to the Hough transform in order to obtain the edges of the grayscale image; the image rotation angle is varied and the best results are retained. The second part of this work uses the Hough transform and Hough peaks to examine the optic nerve patterns of the eye, using the available RIM-ONE database. The optic nerve pattern is unique to every human being and remains almost unchanged throughout life, so the aim is to detect changes in the pattern and report abnormalities, making automatic systems so capable that they can replace the experts of the field. For this detection the Hough transform and Hough peaks are used, and the fact that these nerve patterns are unique is confirmed.
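The voting process and the "first two Hough peaks" feature can be sketched directly: every edge pixel votes for all (rho, theta) lines passing through it, and the accumulator cells with the most votes are the dominant lines. A minimal NumPy implementation (the parameterisation rho = x·cos θ + y·sin θ is the standard one; the toy image is invented for the example):

```python
import numpy as np

def hough_peaks(edge_img, n_peaks=2, n_theta=180):
    """Minimal Hough line transform: vote each edge pixel into an
    (rho, theta) accumulator and return the strongest peaks, mirroring
    the 'first two Hough peaks' feature vector used in the paper."""
    ys, xs = np.nonzero(edge_img)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(*edge_img.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in zip(ys, xs):
        # each pixel votes once per theta: rho = x*cos(theta) + y*sin(theta)
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    peaks = []
    for idx in acc.ravel().argsort()[::-1][:n_peaks]:
        r, t = np.unravel_index(idx, acc.shape)
        peaks.append((int(acc[r, t]), int(r) - diag, float(np.rad2deg(thetas[t]))))
    return peaks  # list of (votes, rho, theta_in_degrees)

# a horizontal edge at y = 5 should peak near theta = 90 deg, rho = 5
img = np.zeros((20, 20), dtype=int)
img[5, :] = 1
peaks = hough_peaks(img)
```

Note that with a coarsely quantised accumulator, neighbouring theta bins can tie for the maximum, which is why practical systems apply neighbourhood suppression around each extracted peak.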

    Ophthalmologic Image Registration Based on Shape-Context: Application to Fundus Autofluorescence (FAF) Images

    A novel registration algorithm, developed to facilitate ophthalmologic image processing, is presented in this paper. It has been evaluated on FAF images, which present a low signal-to-noise ratio (SNR) and variations in dynamic grayscale range. These characteristics complicate the registration process and cause area-based registration techniques [1, 2] to fail. Our method is based on shape-context theory [3]. In the first step, images are enhanced by Gaussian-model-based histogram modification. Features are extracted in the next step by morphological operators, which detect an approximation of the vascular tree in both the reference and floating images. The simplified medial axis of the vessels is then calculated. From each image, a set of control points called bifurcation points (BPs) is extracted from the medial axis by a new fast algorithm. A radial histogram is formed for each BP using the medial axis, and the chi-squared distance is measured between the two sets of BPs based on these radial histograms. The Hungarian algorithm is applied to assign the correspondence between BPs from the reference and floating images. The robustness of the algorithm is evaluated by a mutual-information criterion between manual registration, considered as ground truth, and the automatic one.
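The matching step, pairing bifurcation points by minimising the total chi-squared distance between their radial histograms, can be sketched as follows. The toy histograms are invented, and for this tiny example the assignment is solved by brute force over permutations rather than by the Hungarian algorithm the paper uses (which gives the same optimum but scales polynomially).

```python
import numpy as np
from itertools import permutations

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two (radial) histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def match_bifurcation_points(hists_ref, hists_flt):
    """Pair bifurcation points (BPs) from the reference and floating images
    by minimising the total chi-squared distance between their radial
    histograms. Brute-force assignment stands in here for the Hungarian
    algorithm used in the paper (fine for a handful of points)."""
    n = len(hists_ref)
    cost = np.array([[chi2_distance(hr, hf) for hf in hists_flt]
                     for hr in hists_ref])
    best_perm, best_cost = None, np.inf
    for perm in permutations(range(n)):
        c = cost[np.arange(n), perm].sum()
        if c < best_cost:
            best_perm, best_cost = perm, c
    return list(best_perm), best_cost

# toy radial histograms for three BPs; the floating set is a permutation
ref = [np.array([5., 1., 0.]), np.array([0., 4., 2.]), np.array([2., 2., 2.])]
flt = [ref[2], ref[0], ref[1]]
perm, cost = match_bifurcation_points(ref, flt)
```

A correct matcher recovers the permutation exactly (ref BP 0 pairs with floating BP 1, and so on) at zero total cost, since the histograms are identical up to reordering.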

    A new method of vascular point detection using artificial neural network

    Vascular intersection is an important feature in retina fundus images (RFIs). It can be used to monitor the progress of diabetes, hence accurately determining vascular points is of utmost importance. In this work a new method of vascular point detection using an artificial neural network model is proposed. The method uses a 5×5 window to detect the combination of bifurcation and crossover points in a retina fundus image. Simulated images are used to train the artificial neural network and, on convergence, the network is used to test RFIs from the DRIVE database. Performance analysis shows that the ANN-based technique achieves 100% accuracy on simulated images and a minimum of 92% accuracy on RFIs from the DRIVE database.
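The train-on-simulated-patches idea can be illustrated with a deliberately tiny stand-in: a single logistic neuron classifying 5×5 binary patches as "plain vessel" versus "bifurcation". The patch shapes, network size and training schedule are all assumptions of this sketch; the paper's actual network and data are richer.

```python
import numpy as np

def make_patches():
    """Synthetic 5x5 binary patches: negatives are single vessel segments,
    positives add a third branch leaving the centre (a bifurcation).
    These toy shapes are invented for the example."""
    neg, pos = [], []
    base = np.zeros((5, 5), int); base[2, :] = 1      # horizontal vessel
    neg.append(base.copy())
    v = np.zeros((5, 5), int); v[:, 2] = 1            # vertical vessel
    neg.append(v)
    neg.append(np.eye(5, dtype=int))                  # diagonal vessel
    for branch in ([(3, 2), (4, 2)], [(1, 2), (0, 2)], [(3, 3), (4, 4)]):
        p = base.copy()
        for r, c in branch:                           # add a third branch
            p[r, c] = 1
        pos.append(p)
    X = np.array([p.ravel() for p in neg + pos], float)
    y = np.array([0] * len(neg) + [1] * len(pos))
    return X, y

def train_logistic(X, y, lr=0.5, epochs=3000):
    """Single-neuron 'ANN' trained by gradient descent on the 25 pixels
    of each patch: a toy stand-in for the network in the paper."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid activation
        grad = p - y                                   # cross-entropy gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

X, y = make_patches()
w, b = train_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

In deployment the trained classifier would be slid over every 5×5 window of a segmented fundus image, flagging windows classified as bifurcation or crossover points.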

    FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING

    The medical equipment used to capture retinal fundus images is generally expensive. With the development of technology and the emergence of smartphones, new portable screening options have appeared, one of them being the D-Eye device. This and similar smartphone-based devices capture retinal video of lower quality than specialized equipment, yet of sufficient quality for medical pre-screening; if necessary, individuals can then be referred for specialized screening to obtain a medical diagnosis. This dissertation contributes a framework, a tool that groups a set of developed and explored methods applied to low-quality retinal videos. Three areas of intervention were defined: extraction of relevant regions from video sequences; creation of mosaicing images to obtain a summary image of each retinal video; and development of a graphical interface to accommodate the previous contributions. To extract the relevant regions (the retinal zone) from these videos, two methods were proposed: one based on classical image-processing approaches such as thresholding and the Hough circle transform, and one that locates the retina with a neural network, YOLOv4, one of the methods reported in the literature with good performance for object detection. The mosaicing process was divided into two stages: in the first, the GLAMpoints neural network was applied to extract relevant points, from which transformations are computed to bring the overlapping regions of the images into a common reference frame; in the second, a smoothing process was applied to the transitions between images. A graphical interface was developed to encompass all the above methods and facilitate access to them. In addition, other features were implemented, such as comparing results with ground truth and exporting videos containing only regions of interest.
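The "same reference frame" step of the mosaicing stage reduces to composing pairwise homographies into transforms that map every frame into the first frame's coordinates. A minimal sketch, assuming the pairwise homographies are already estimated (the dissertation obtains them from GLAMpoints correspondences; here they are given translations invented for the example):

```python
import numpy as np

def apply_homography(H, pts):
    """Map an Nx2 array of (x, y) points through a 3x3 homography
    using homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]

def to_global(pairwise_Hs):
    """Compose pairwise homographies H_i (mapping frame-i coordinates to
    frame-(i+1) coordinates) into transforms that take every frame into
    frame 0's reference, as in the first mosaicing stage."""
    Hs = [np.eye(3)]
    for H in pairwise_Hs:
        # frame k+1 -> frame k -> ... -> frame 0: accumulate the inverses
        Hs.append(Hs[-1] @ np.linalg.inv(H))
    return Hs

# frame 1 is frame 0 shifted by (10, 5); frame 2 shifts frame 1 by (3, 0)
H01 = np.array([[1., 0., 10.], [0., 1., 5.], [0., 0., 1.]])
H12 = np.array([[1., 0., 3.], [0., 1., 0.], [0., 0., 1.]])
Hs = to_global([H01, H12])
corner = apply_homography(Hs[2], np.array([[0.0, 0.0]]))
```

Once every frame carries a transform into the common reference, the frames can be warped and blended there; the smoothing stage then hides the seams between overlapping frames.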

    Generalizable automated pixel-level structural segmentation of medical and biological data

    Over the years, the rapid expansion in imaging techniques and equipment has driven the demand for more automation in handling large medical and biological data sets. A wealth of approaches have been suggested as optimal solutions for their respective imaging types. These solutions span various image resolutions, modalities and contrast (staining) mechanisms, yet few approaches generalise well across multiple image types, contrasts or resolutions. This thesis proposes an automated pixel-level framework that addresses 2D, 2D+t and 3D structural segmentation in a more generalizable manner, yet has enough adaptability to address a number of specific image modalities, spanning retinal funduscopy, sequential fluorescein angiography and two-photon microscopy. The pixel-level segmentation scheme involves: (i) constructing a phase-invariant orientation field of the local spatial neighbourhood; (ii) combining local feature maps with intensity-based measures in a structural patch context; (iii) using a complex supervised learning process to interpret the combination of all the elements in the patch in order to reach a classification decision. This has the advantage of transferability from retinal blood vessels in 2D to neural structures in 3D. To process the temporal components in non-standard 2D+t retinal angiography sequences, we first introduce a co-registration procedure: at the pairwise level, we combine projective RANSAC with a quadratic homography transformation to map the coordinate systems between any two frames; at the joint level, we construct a hierarchical approach so that each individual frame is registered to the global reference intra- and inter-sequence(s). We then take a non-training approach that searches both the spatial neighbourhood of each pixel and the filter output across varying scales to locate and link microvascular centrelines to (sub-)pixel accuracy.
In essence, this "link while extract" piece-wise segmentation approach combines the local phase-invariant orientation field information with additional local phase estimates to obtain a soft classification of the centreline (sub-)pixel locations. Unlike retinal segmentation problems where vasculature is the main focus, 3D neural segmentation requires additional flexibility, allowing a variety of structures of anatomical importance, yet with different geometric properties, to be differentiated both from the background and from other structures. Notably, cellular structures such as Purkinje cells, neural dendrites and interneurons all display a certain elongation along their medial axes, yet each class has a characteristic shape captured by an orientation field that distinguishes it from other structures. To take this into consideration, we introduce a 5D orientation mapping to capture these orientation properties. This mapping is incorporated into the local feature map description prior to a learning machine. Extensive performance evaluation and validation of each of the techniques presented in this thesis is carried out. For retinal fundus images, we compute Receiver Operating Characteristic (ROC) curves on existing public databases (DRIVE & STARE) to assess and compare our algorithms against other benchmark methods. For 2D+t retinal angiography sequences, we compute error metrics ("centreline error") of our scheme against other benchmark methods. For microscopic cortical data stacks, we present segmentation results on both surrogate data with known ground truth and experimental rat cerebellar cortex two-photon microscopy tissue stacks.
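The ROC evaluation used for the retinal experiments reduces to sweeping a threshold over the soft pixel classifications and tracking true- and false-positive rates. A minimal sketch (the toy scores and labels are invented; ties between equal scores are not handled specially here):

```python
import numpy as np

def roc_curve(scores, labels):
    """Compute (FPR, TPR) points of a ROC curve by sweeping the decision
    threshold down through the soft pixel classifications, as done to
    evaluate vessel segmentations against ground truth on DRIVE & STARE."""
    order = np.argsort(-scores)          # descending by score
    labels = labels[order]
    tps = np.cumsum(labels)              # true positives at each threshold
    fps = np.cumsum(1 - labels)          # false positives at each threshold
    tpr = tps / labels.sum()
    fpr = fps / (1 - labels).sum()
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve by trapezoidal integration."""
    fpr = np.concatenate([[0.0], fpr])
    tpr = np.concatenate([[0.0], tpr])
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

# perfectly separated toy scores: every vessel pixel outranks every
# background pixel, so the curve hugs the axes and the AUC is 1.0
scores = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 0, 0])
fpr, tpr = roc_curve(scores, labels)
```

Comparing methods by AUC rather than by a single accuracy figure is what makes the cross-database benchmarking in the thesis threshold-independent.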