
    Multimodal retinal image registration using a fast principal component analysis hybrid-based similarity measure

    Multimodal retinal images (RI) are extensively used for analysing various eye diseases and conditions such as myopia and diabetic retinopathy. Combining two or more RI modalities provides complementary structural information in the presence of non-uniform illumination and low-contrast homogeneous regions, but it also presents significant challenges for retinal image registration (RIR). This paper investigates how the Expectation Maximization for Principal Component Analysis with Mutual Information (EMPCA-MI) algorithm can effectively achieve multimodal RIR. This iterative hybrid-based similarity measure combines spatial features with mutual information to provide enhanced registration without recourse to either segmentation or feature extraction. Experimental results for clinical multimodal RI datasets comprising colour fundus and scanning laser ophthalmoscope images confirm that EMPCA-MI consistently affords superior numerical and qualitative registration performance compared with existing RIR techniques, such as the bifurcation structures method.
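
    The mutual-information component of such a hybrid similarity measure can be sketched as follows. This is a generic plug-in MI estimate over a joint intensity histogram (the bin count and test images are illustrative), not the paper's EMPCA-MI, which additionally folds PCA-derived spatial features into the measure.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Plug-in mutual information between two images of equal size,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img_b
    nz = pxy > 0                            # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# An image carries maximal information about itself, so MI(a, a)
# exceeds MI(a, b) for an unrelated image b.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))
print(mutual_information(a, a) > mutual_information(a, b))  # True
```

    In registration, this score is evaluated for each candidate transform of the moving image and maximized: aligned anatomy makes the joint histogram sharply peaked, which raises the MI.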

    Retinal Fundus Image Registration via Vascular Structure Graph Matching

    Motivated by the observation that a retinal fundus image may contain unique geometric structures within its vascular trees which can be utilized for feature matching, in this paper we propose a graph-based registration framework called GM-ICP to align pairwise retinal images. First, the retinal vessels are automatically detected and represented as vascular structure graphs. Graph matching is then performed to find global correspondences between vascular bifurcations. Finally, a revised ICP algorithm incorporating a quadratic transformation model is used at the fine level to register vessel shape models. In order to eliminate incorrect matches from the global correspondence set obtained via graph matching, we propose a structure-based sample consensus (STRUCT-SAC) algorithm. The advantages of our approach are threefold: (1) a globally optimal solution can be achieved with graph matching; (2) our method is invariant to linear geometric transformations; and (3) heavy local feature descriptors are not required. The effectiveness of our method is demonstrated by experiments with 48 pairs of retinal images collected from clinical patients.
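
    The fine-level match-then-fit loop can be illustrated with a minimal ICP sketch. Note that GM-ICP as described uses a quadratic transformation model seeded by graph-matched bifurcations; for brevity this sketch fits a plain rigid transform (the closed-form Kabsch solution) at each iteration instead.

```python
import numpy as np

def icp_rigid(src, dst, iters=20):
    """Minimal ICP: alternate nearest-neighbour matching with a
    closed-form rigid (Kabsch) fit. Illustrates only the iterative
    match-then-fit structure, not the paper's quadratic model."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest-neighbour correspondences
        d = np.linalg.norm(moved[:, None] - dst[None], axis=2)
        pairs = dst[d.argmin(axis=1)]
        # closed-form rigid fit between src and its matched targets
        mu_s, mu_p = src.mean(0), pairs.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (pairs - mu_p))
        S = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T           # reflection-free rotation
        t = mu_p - R @ mu_s
    return R, t

# Recover a small known rotation + translation of a point grid.
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, 0.5])
src = np.array([[x, y] for x in range(-20, 21, 10)
                       for y in range(-20, 21, 10)], dtype=float)
dst = src @ R_true.T + t_true
R_est, t_est = icp_rigid(src, dst)
print(np.allclose(R_est, R_true) and np.allclose(t_est, t_true))  # True
```

    ICP only converges to the nearest local optimum, which is why the framework first establishes global correspondences by graph matching before the fine-level refinement.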

    A new method of vascular point detection using artificial neural network

    Vascular intersections are an important feature in retina fundus images (RFI). They can be used to monitor the progression of diabetes; hence, accurately determining vascular points is of utmost importance. In this work a new method of vascular point detection using an artificial neural network model is proposed. The method uses a 5×5 window to detect the combination of bifurcation and crossover points in a retina fundus image. Simulated images are used to train the artificial neural network and, on convergence, the network is used to test RFIs from the DRIVE database. Performance analysis of the system shows that the ANN-based technique achieves 100% accuracy on simulated images and a minimum of 92% accuracy on RFIs obtained from the DRIVE database.
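
    What the trained network learns to discriminate can be illustrated with the classical crossing-number rule, shown here as a simple stand-in for the 5×5-window ANN (the rule assumes a one-pixel-wide vessel skeleton, an assumption the learned approach does not need).

```python
import numpy as np

def crossing_number(skel, r, c):
    """Classical crossing-number test on a binary, one-pixel-wide
    skeleton: half the number of 0/1 transitions around the 8
    neighbours. 2 = ordinary vessel pixel, 3 = bifurcation,
    4 = crossover."""
    # 8-neighbours visited in circular order
    nb = [skel[r-1, c-1], skel[r-1, c], skel[r-1, c+1], skel[r, c+1],
          skel[r+1, c+1], skel[r+1, c], skel[r+1, c-1], skel[r, c-1]]
    return sum(abs(nb[i] - nb[(i + 1) % 8]) for i in range(8)) // 2

# A 'Y'-shaped junction: three branches meeting at the centre pixel.
skel = np.zeros((7, 7), dtype=int)
skel[3, 1:4] = 1                # horizontal branch into the centre
skel[2, 4] = skel[1, 5] = 1     # upper-right branch
skel[4, 4] = skel[5, 5] = 1     # lower-right branch
print(crossing_number(skel, 3, 3))  # 3 -> bifurcation
```

    A learned 5×5-window classifier generalizes this idea: instead of a hand-written rule on a thinned skeleton, the network maps raw local patches directly to the junction/non-junction decision.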

    A novel automated approach of multi-modality retinal image registration and fusion

    Biomedical image registration and fusion are usually scene dependent and require intensive computational effort. A novel automated approach to feature-based control point detection and area-based registration and fusion of retinal images has been designed and developed. The new algorithm, which is reliable and time-efficient, adapts automatically from frame to frame with few tunable threshold parameters. The reference and to-be-registered images come from two different modalities, i.e. grayscale angiogram images and color fundus images. The joint study of the two modalities enhances the information in the fundus image by superimposing information contained in the angiogram image. Through the thesis research, two new contributions have been made to the biomedical image registration and fusion area. The first contribution is automatic control point detection at global direction-change pixels using an adaptive exploratory algorithm; shape similarity criteria are employed to match the control points. The second contribution is a heuristic optimization algorithm that maximizes a Mutual-Pixel-Count (MPC) objective function. The initially selected control points are adjusted during the optimization at the sub-pixel level. A result equivalent to the global maximum is achieved by calculating MPC local maxima at an efficient computational cost. The iteration stops either when MPC reaches its maximum value or when the maximum allowable loop count is reached. To our knowledge, this is the first time the MPC concept has been introduced into the biomedical image fusion area as a measurement criterion for fusion accuracy. The fusion image is generated from the current control point coordinates when the iteration stops.
    A comparative study of the presented automatic registration and fusion scheme against the Centerline Control Point Detection Algorithm, the Genetic Algorithm, the RMSE objective function, and other existing data fusion approaches has shown the advantage of the new approach in terms of accuracy, efficiency, and novelty.
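
    The MPC objective described above is simple to state: the count of pixels that are "on" in both the reference and the registered image. A minimal sketch on binary vessel masks follows; the thesis's exact pixel criterion may differ, and the surrounding control-point optimizer is not reproduced.

```python
import numpy as np

def mutual_pixel_count(vessels_ref, vessels_reg):
    """Mutual-Pixel-Count: number of pixels that are vessel pixels in
    both the reference mask and the registered (warped) mask. Better
    alignment of the vasculature yields a higher count."""
    return int(np.logical_and(vessels_ref, vessels_reg).sum())

a = np.zeros((5, 5), dtype=bool); a[2, :] = True   # horizontal vessel
b = np.zeros((5, 5), dtype=bool); b[:, 2] = True   # vertical vessel
print(mutual_pixel_count(a, b))  # 1: overlap only at the centre pixel
print(mutual_pixel_count(a, a))  # 5: perfect alignment
```

    The heuristic optimizer then perturbs the control point coordinates at sub-pixel level and keeps any adjustment that increases this count, stopping at the maximum or at the loop-count limit.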

    Color Fundus Image Registration Using a Learning-Based Domain-Specific Landmark Detection Methodology

    Medical imaging, and particularly retinal imaging, allows accurate diagnosis of many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or across a population. Currently, this field is dominated by complex classical methods because novel deep learning methods cannot yet compete in terms of results and commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images based on previous works which employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the requirement to calculate complex descriptors. Our method was tested using the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P and 0.660 for category A).
    Therefore, our proposal can compete with complex classical methods and beat the state-of-the-art deep learning methods. This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the predoctoral grant contract ref. ED481A 2021/147 and Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%). The funding institutions had no involvement in the study design; in the collection, analysis and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication. Funding for open access charge: Universidade da Coruña/CISUG.
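
    The descriptor-free RANSAC matching stage can be sketched as follows. All parameters here (3-point affine samples, 200 iterations, a 2-pixel inlier tolerance) are illustrative assumptions, not the paper's settings, and the putative keypoint correspondences are taken as given.

```python
import numpy as np

def ransac_affine(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC sketch for keypoint matching: repeatedly fit an affine
    transform to 3 random putative correspondences and keep the model
    with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])   # homogeneous coords
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)
        A, *_ = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)
        err = np.linalg.norm(src_h @ A - dst, axis=1)  # reprojection error
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the consensus set
    A, *_ = np.linalg.lstsq(src_h[best_inliers], dst[best_inliers], rcond=None)
    return A, best_inliers

# 20 putative matches, 5 of them gross outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(20, 2))
A_true = np.array([[0.9, -0.1], [0.1, 0.9], [5.0, -3.0]])
dst = np.hstack([src, np.ones((20, 1))]) @ A_true
dst[:5] += 50.0                       # corrupt the first 5 matches
A_est, inliers = ransac_affine(src, dst)
print(inliers.sum())  # 15: exactly the uncorrupted matches survive
```

    Because the minimal sample is only 3 points, consensus is reached quickly even with a sizeable fraction of wrong matches, which is what makes descriptor-free matching of bifurcation keypoints viable.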

    Deep Learning in Cardiology

    The medical field is creating a large amount of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply in medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables.