
    Registration and Fusion of the Autofluorescent and Infrared Retinal Images

    This article deals with the registration and fusion of multimodal ophthalmologic images obtained by means of a laser scanning device (Heidelberg retina angiograph). The registration framework has been designed and tested for the combination of autofluorescent and infrared images. This process is a necessary step for subsequent pixel-level fusion and analysis utilizing information from both modalities. Two fusion methods are presented and compared.
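
    As a concrete illustration of the pixel-level fusion step, the sketch below combines two already-registered grayscale modalities by weighted averaging and by per-pixel maximum. These two generic strategies are stand-ins, not the specific fusion methods evaluated in the article; the image names and the weight are assumptions.

```python
# Minimal sketch: pixel-level fusion of two registered grayscale retinal
# images (e.g., autofluorescent and infrared). Generic strategies only.
import numpy as np

def fuse_average(img_a: np.ndarray, img_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted-average fusion of two registered images scaled to [0, 1]."""
    return w * img_a + (1.0 - w) * img_b

def fuse_maximum(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Per-pixel maximum fusion: keep the brighter response of either modality."""
    return np.maximum(img_a, img_b)

if __name__ == "__main__":
    # Random arrays stand in for registered autofluorescent / infrared frames.
    af = np.random.rand(256, 256)
    ir = np.random.rand(256, 256)
    fused = fuse_average(af, ir, w=0.6)
    print(fused.shape, float(fused.min()), float(fused.max()))
```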

    Retinal Fundus Image Registration via Vascular Structure Graph Matching

    Motivated by the observation that a retinal fundus image may contain some unique geometric structures within its vascular trees which can be utilized for feature matching, in this paper we propose a graph-based registration framework called GM-ICP to align pairwise retinal images. First, the retinal vessels are automatically detected and represented as vascular structure graphs. Graph matching is then performed to find global correspondences between vascular bifurcations. Finally, a revised ICP algorithm incorporating a quadratic transformation model is used at the fine level to register vessel shape models. In order to eliminate incorrect matches from the global correspondence set obtained via graph matching, we propose a structure-based sample consensus (STRUCT-SAC) algorithm. The advantages of our approach are threefold: (1) a globally optimal solution can be achieved with graph matching; (2) our method is invariant to linear geometric transformations; and (3) heavy local feature descriptors are not required. The effectiveness of our method is demonstrated by experiments with 48 pairs of retinal images collected from clinical patients.
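
    A minimal sketch of the fine-registration idea, i.e., ICP-style refinement under a quadratic transformation model, is given below. The graph-matching and STRUCT-SAC stages are not reproduced; the point sets, iteration count, and least-squares fitting are illustrative assumptions rather than the authors' implementation.

```python
# ICP-style refinement with a 12-parameter quadratic transform (6 coefficients
# per output coordinate), sketched for 2-D vessel points.
import numpy as np
from scipy.spatial import cKDTree

def quad_basis(pts: np.ndarray) -> np.ndarray:
    """Quadratic design matrix [1, x, y, x^2, xy, y^2] for 2-D points."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def icp_quadratic(src: np.ndarray, dst: np.ndarray, n_iter: int = 20) -> np.ndarray:
    """Alternate nearest-neighbour matching and least-squares refitting."""
    tree = cKDTree(dst)
    moved = src.copy()
    coeffs = None
    for _ in range(n_iter):
        _, idx = tree.query(moved)                    # correspondences by proximity
        A = quad_basis(src)
        coeffs, *_ = np.linalg.lstsq(A, dst[idx], rcond=None)
        moved = A @ coeffs                            # re-apply fitted transform
    return coeffs                                     # shape (6, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dst = rng.random((200, 2)) * 100
    src = dst + rng.normal(scale=0.5, size=dst.shape)  # noisy copy as toy data
    print(icp_quadratic(src, dst).shape)
```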

    Automated Quantitative Analysis of Blood Flow in Extracranial–Intracranial Arterial Bypass Based on Indocyanine Green Angiography

    Microvascular imaging based on indocyanine green is an important tool for surgeons who carry out extracranial–intracranial arterial bypass surgery. In terms of blood perfusion, indocyanine green images contain abundant information, which cannot be effectively interpreted by humans or currently available commercial software. In this paper, an automatic processing framework for perfusion assessment based on indocyanine green videos is proposed, consisting of three stages: vessel segmentation based on the UNet deep neural network, preoperative and postoperative image registration based on scale-invariant feature transform (SIFT) features, and blood flow evaluation based on the Horn–Schunck optical flow method. This automatic processing flow can reveal the blood flow direction and intensity curve of any vessel, as well as the blood perfusion changes before and after an operation. Commercial software embedded in a microscope is used as a reference to evaluate the effectiveness of the algorithm in this study. A total of 120 patients from multiple centers were sampled for the study. For blood vessel segmentation, a Dice coefficient of 0.80 and a Jaccard coefficient of 0.73 were obtained. For image registration, the success rate was 81%. In preoperative and postoperative video processing, the coincidence rates between the automatic processing method and the commercial software were 89% and 87%, respectively. The proposed framework not only achieves blood perfusion analysis similar to that of commercial software but also automatically detects and matches blood vessels before and after an operation, thus quantifying the flow direction and enabling surgeons to intuitively evaluate the perfusion changes caused by bypass surgery.
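
    The registration stage lends itself to a short sketch: matching SIFT keypoints between a preoperative and a postoperative frame and estimating a RANSAC homography with OpenCV. The ratio-test threshold, the warp model, and the file names are illustrative choices, not the paper's exact configuration.

```python
# Sketch of SIFT-based preoperative/postoperative frame registration (OpenCV).
import cv2
import numpy as np

def register_frames(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Return a 3x3 homography mapping the preoperative frame onto the
    postoperative one, estimated from ratio-tested SIFT matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(pre, None)
    kp2, des2 = sift.detectAndCompute(post, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]        # Lowe ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# Usage (file names are placeholders):
# pre = cv2.imread("icg_pre.png", cv2.IMREAD_GRAYSCALE)
# post = cv2.imread("icg_post.png", cv2.IMREAD_GRAYSCALE)
# warped = cv2.warpPerspective(pre, register_frames(pre, post), post.shape[::-1])
```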

    Feature-Based Retinal Image Registration Using D-Saddle Feature


    Fingerprint Matching with Self Organizing Maps


    A novel automated approach of multi-modality retinal image registration and fusion

    Biomedical image registration and fusion are usually scene dependent and require intensive computational effort. A novel automated approach of feature-based control point detection and area-based registration and fusion of retinal images has been successfully designed and developed. The new algorithm, which is reliable and time-efficient, has an automatic adaptation from frame to frame with few tunable threshold parameters. The reference and the to-be-registered images are from two different modalities, i.e., angiogram grayscale images and fundus color images. The combined study of retinal images enhances the information in the fundus image by superimposing information contained in the angiogram image. Through the thesis research, two new contributions have been made to the biomedical image registration and fusion area. The first contribution is the automatic control point detection at global direction-change pixels using an adaptive exploratory algorithm. Shape similarity criteria are employed to match the control points. The second contribution is the heuristic optimization algorithm that maximizes the Mutual-Pixel-Count (MPC) objective function. The initially selected control points are adjusted during the optimization at the sub-pixel level. A result equivalent to the global maximum is achieved by calculating MPC local maxima at an efficient computational cost. The iteration stops either when MPC reaches the maximum value or when the maximum allowable loop count is reached. To our knowledge, it is the first time that the MPC concept has been introduced into the biomedical image fusion area as a measurement criterion for fusion accuracy. The fused image is generated based on the current control point coordinates when the iteration stops. The comparative study of the presented automatic registration and fusion scheme against the Centerline Control Point Detection Algorithm, the Genetic Algorithm, the RMSE objective function, and other existing data fusion approaches has shown the advantage of the new approach in terms of accuracy, efficiency, and novelty.
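
    Since the abstract does not spell out the Mutual-Pixel-Count definition, the sketch below assumes MPC counts the pixels that are vessel foreground in both the reference mask and the warped to-be-registered mask; the masks themselves are toy data. An optimizer in the spirit of the described heuristic would perturb the control-point coordinates, rewarp the mask, and keep an adjustment whenever this count increases.

```python
# Assumed Mutual-Pixel-Count (MPC) objective: overlap of two binary vessel masks.
import numpy as np

def mutual_pixel_count(ref_mask: np.ndarray, warped_mask: np.ndarray) -> int:
    """Number of pixels that are foreground in both binary masks."""
    return int(np.logical_and(ref_mask > 0, warped_mask > 0).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.random((128, 128)) > 0.9     # toy vessel masks
    mov = rng.random((128, 128)) > 0.9
    print("MPC:", mutual_pixel_count(ref, mov))
```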

    Segmentation, registration, and selective watermarking of retinal images

    In this dissertation, I investigated some fundamental issues related to medical image segmentation, registration, and watermarking. I used color retinal fundus images to perform my study because of the rich representation of different objects (blood vessels, microaneurysms, hemorrhages, exudates, etc.) that are pathologically important and have close resemblance in shapes and colors. To attack this complex subject, I developed a divide-and-conquer strategy to address the related issues step by step and to optimize the parameters of the different algorithm steps. Most, if not all, objects in our discussion are related. The algorithms for detection, registration, and protection of different objects need to consider how to differentiate the foreground from the background and be able to correctly characterize the features of the image objects and their geometric properties. To address these problems, I characterized the shapes of blood vessels in retinal images and proposed algorithms to extract the features of blood vessels. A tracing algorithm was developed for the detection of blood vessels along the vascular network. Due to noise interference and varying image quality, robust segmentation techniques were used for the accurate characterization and verification of the objects' shapes. Based on the segmentation results, a registration algorithm was developed, which uses the bifurcation and cross-over points of blood vessels to establish the correspondence between the images and derive the transformation that aligns them. A Region-of-Interest (ROI) based watermarking scheme was proposed for image authenticity. It uses linear segments extracted from the image as reference locations for embedding and detecting the watermark. Global and locally randomized synchronization schemes were proposed for bit-sequence synchronization of the watermark. The scheme is robust against common image processing and geometric distortions (rotation and scaling), and it can detect alterations such as moving or removing image content.
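
    The registration step reduces to fitting a transform to matched bifurcation and cross-over points; a least-squares affine fit is sketched below. The point lists are placeholders, and outlier handling and the watermarking stage are omitted.

```python
# Least-squares affine fit from matched bifurcation/cross-over points.
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine matrix mapping src (N, 2) onto dst (N, 2)."""
    A = np.column_stack([src, np.ones(len(src))])      # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # shape (3, 2)
    return params.T                                    # [[a, b, tx], [c, d, ty]]

def apply_affine(M: np.ndarray, pts: np.ndarray) -> np.ndarray:
    return pts @ M[:, :2].T + M[:, 2]

if __name__ == "__main__":
    src = np.array([[10.0, 20.0], [200.0, 40.0], [120.0, 180.0], [60.0, 90.0]])
    M_true = np.array([[0.98, -0.05, 12.0], [0.04, 1.01, -7.0]])
    dst = apply_affine(M_true, src)
    print(np.round(fit_affine(src, dst), 3))           # recovers M_true
```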

    Advanced retinal imaging: Feature extraction, 2-D registration, and 3-D reconstruction

    In this dissertation, we have studied feature extraction and multiple-view geometry in the context of retinal imaging. Specifically, this research involves three components: feature extraction, 2-D registration, and 3-D reconstruction. First, the problem of feature extraction is investigated. Features are significantly important in motion estimation techniques because they are the input to the algorithms. We have proposed a feature extraction algorithm for retinal images in which bifurcations/crossovers are used as features. A modified local entropy thresholding algorithm based on a new definition of the co-occurrence matrix is proposed. Then, we consider 2-D retinal image registration, which is the problem of estimating a 2-D-to-2-D transformation. Both linear and nonlinear models are incorporated to account for motions and distortions. A hybrid registration method has been introduced in order to take advantage of what both feature-based and area-based methods offer, along with relevant decision-making criteria. Area-based binary mutual information is proposed for translation estimation. A feature-based hierarchical registration technique, which involves the affine and quadratic transformations, is developed. After that, the 3-D retinal surface reconstruction issue is addressed. To generate a 3-D scene from 2-D images, camera projection (3-D-to-2-D transformation) techniques have been investigated. We choose an affine camera model for 3-D retinal reconstruction. We introduce a constrained optimization procedure which incorporates a geometric penalty function and lens distortion into the cost function. The procedure simultaneously optimizes all of the parameters: the camera parameters, the 3-D points, the physical shape of the human retina, and the lens distortion. Then, a point-based spherical fitting method is introduced. The proposed retinal imaging techniques will pave the path toward a comprehensive visual 3-D retinal model for many medical applications.
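
    As an example of the final step, the sketch below fits a sphere to reconstructed 3-D retinal points by linear least squares. It is a generic point-based sphere fit offered for illustration; the feature extraction, registration, and bundle-adjustment stages are not shown, and the data are synthetic.

```python
# Point-based spherical fitting: recover (center, radius) by linear least squares.
import numpy as np

def fit_sphere(pts: np.ndarray):
    """Solve ||x - c||^2 = r^2 rewritten as 2*c.x + (r^2 - |c|^2) = |x|^2."""
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    dirs = rng.normal(size=(500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = np.array([1.0, -2.0, 0.5]) + 12.0 * dirs + rng.normal(scale=0.01, size=dirs.shape)
    print(fit_sphere(pts))   # center ~ (1, -2, 0.5), radius ~ 12
```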

    Multimodal registration of retinal images using self organizing maps

    In this paper, an automatic method for registering multimodal retinal images is presented. The method consists of three steps: vessel centerline detection and extraction of bifurcation points in the reference image only, automatic correspondence of the bifurcation points in the two images using a novel implementation of self-organizing maps, and extraction of the parameters of the affine transform using the previously obtained correspondences. The proposed registration algorithm was tested on 24 multimodal retinal pairs, and the obtained results show favorable accuracy compared with manual registration.
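
    The paper's SOM variant is not detailed in this abstract, so the sketch below shows one generic way a self-organizing map can be used for point correspondence: nodes are seeded at the reference-image bifurcations, trained on the bifurcations of the second image, and each trained node is then matched to its nearest input point. The neighbourhood is defined in image space for simplicity, and the learning schedule is arbitrary.

```python
# Generic SOM-style correspondence between two 2-D bifurcation point sets.
import numpy as np

def som_correspondences(ref_pts, other_pts, n_iter=200, lr0=0.5, sigma0=20.0):
    rng = np.random.default_rng(0)
    nodes = np.asarray(ref_pts, dtype=float).copy()   # one node per reference point
    other = np.asarray(other_pts, dtype=float)
    for t in range(n_iter):
        lr = lr0 * (1.0 - t / n_iter)                 # decaying learning rate
        sigma = sigma0 * (1.0 - t / n_iter) + 1e-3    # decaying neighbourhood width
        x = other[rng.integers(len(other))]           # random training sample
        d2 = ((nodes - x) ** 2).sum(axis=1)
        winner = nodes[np.argmin(d2)]
        h = np.exp(-((nodes - winner) ** 2).sum(axis=1) / (2 * sigma ** 2))
        nodes += lr * h[:, None] * (x - nodes)        # pull neighbourhood toward x
    # match each trained node to its nearest point in the second image
    dists = ((nodes[:, None, :] - other[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(dists, axis=1)                   # index into other_pts per ref point

# The returned correspondences could then feed a least-squares affine estimate,
# as in the third step described above.
```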