
    Optimal Transport-based Graph Matching for 3D retinal OCT image registration

    Registration of longitudinal optical coherence tomography (OCT) images assists disease monitoring and is essential in image fusion applications. Mouse retinal OCT images are often collected for longitudinal study of eye disease models such as uveitis, but their quality is often poor compared with human imaging. This paper presents a novel and efficient framework involving an optimal transport-based graph matching (OT-GM) method for 3D mouse OCT image registration. We first perform registration of fundus-like images obtained by projecting all B-scans of a volume onto a plane orthogonal to them, hereafter referred to as the x-y plane. We introduce Adaptive Weighted Vessel Graph Descriptors (AWVGD) and 3D Cube Descriptors (CD) to identify the correspondence between nodes of graphs extracted from segmented vessels within the OCT projection images. The AWVGD comprises scaling, translation and rotation operations, which are computationally efficient, whereas the CD exploits 3D spatial and frequency-domain information. The OT-GM method subsequently performs the correct alignment in the x-y plane. Finally, registration along the direction orthogonal to the x-y plane (the z-direction) is guided by the segmentation of two important anatomical features peculiar to mouse B-scans, the Internal Limiting Membrane (ILM) and the hyaloid remnant (HR). Both subjective and objective evaluation results demonstrate that our framework outperforms other well-established methods on mouse OCT images within a reasonable execution time.
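
    A minimal sketch of the x-y alignment step described above, assuming node descriptors (e.g., AWVGD/CD features) have already been computed for both vessel graphs: optimal transport (here via the POT library) supplies node correspondences, and a closed-form Umeyama fit recovers the scaling, rotation and translation. The function names and the Euclidean cost are illustrative choices, not the paper's implementation.

```python
# Illustrative sketch of optimal-transport graph matching between two vessel graphs;
# graph extraction and the AWVGD/CD descriptors are assumed to be computed elsewhere.
import numpy as np
import ot  # POT: Python Optimal Transport
from scipy.spatial.distance import cdist

def match_graph_nodes(desc_fixed, desc_moving):
    """Match nodes of two graphs via their descriptor vectors using exact OT."""
    a = np.full(len(desc_fixed), 1.0 / len(desc_fixed))    # uniform mass per node
    b = np.full(len(desc_moving), 1.0 / len(desc_moving))
    M = cdist(desc_fixed, desc_moving)                      # descriptor distance cost
    plan = ot.emd(a, b, M)                                  # optimal transport plan
    return plan.argmax(axis=1)                              # hard match per fixed node

def estimate_similarity(pts_fixed, pts_moving):
    """Closed-form scaling + rotation + translation (Umeyama) from matched 2D points."""
    mu_f, mu_m = pts_fixed.mean(0), pts_moving.mean(0)
    Y, X = pts_fixed - mu_f, pts_moving - mu_m
    U, S, Vt = np.linalg.svd(Y.T @ X / len(X))              # cross-covariance SVD
    R = U @ Vt                                              # rotation (reflection ignored)
    s = S.sum() / X.var(0).sum()                            # isotropic scale
    t = mu_f - s * (R @ mu_m)                               # translation
    return s, R, t
```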

    Back to Basics: Fast Denoising Iterative Algorithm

    We introduce Back to Basics (BTB), a fast iterative algorithm for noise reduction. Our method is computationally efficient, does not require training or ground truth data, and can be applied in the presence of independent noise as well as correlated (coherent) noise, where the noise level is unknown. We examine three study cases: natural image denoising in the presence of additive white Gaussian noise, Poisson-distributed image denoising, and speckle suppression in optical coherence tomography (OCT). Experimental results demonstrate that the proposed approach can effectively improve image quality in challenging noise settings. Theoretical guarantees are provided for convergence stability.
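
    The abstract does not spell out the BTB update rule, so the following is only a generic, training-free iterative denoising loop in the same spirit: a cheap smoothing operator is applied repeatedly with a relaxation step. All parameter values are placeholders, not the published algorithm.

```python
# Generic training-free iterative denoising loop; NOT the published BTB update rule.
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_denoise(noisy, n_iter=30, step=0.5, sigma=1.0):
    x = noisy.astype(float).copy()
    for _ in range(n_iter):
        smoothed = gaussian_filter(x, sigma=sigma)   # cheap denoising operator
        x = (1 - step) * x + step * smoothed         # relaxed fixed-point update
    return x
```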

    Task adapted reconstruction for inverse problems

    The paper considers the problem of performing a task defined on a model parameter that is only observed indirectly through noisy data in an ill-posed inverse problem. A key aspect is to formalize the steps of reconstruction and task as appropriate estimators (non-randomized decision rules) in statistical estimation problems. The implementation makes use of (deep) neural networks to provide a differentiable parametrization of the family of estimators for both steps. These networks are combined and jointly trained against suitable supervised training data in order to minimize a joint differentiable loss function, resulting in an end-to-end task-adapted reconstruction method. The suggested framework is generic, yet adaptable, with a plug-and-play structure for adjusting both the inverse problem and the task at hand. More precisely, the data model (forward operator and statistical model of the noise) associated with the inverse problem is exchangeable, e.g., by using a neural network architecture given by a learned iterative method. Furthermore, any task that is encodable as a trainable neural network can be used. The approach is demonstrated on joint tomographic image reconstruction and classification, and on joint tomographic image reconstruction and segmentation.
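
    A minimal PyTorch sketch of the joint training idea: a reconstruction network and a task network are chained and optimized against a weighted sum of a reconstruction loss and a task loss. The tiny architectures, the weighting C, and the assumption that the network input is already a crude back-projection of the measurements are illustrative, not the paper's setup.

```python
# Sketch of jointly training a reconstruction network and a task network (here a
# 2-class segmentation head) against a combined, differentiable loss.
import torch
import torch.nn as nn

recon_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
task_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 2, 3, padding=1))
opt = torch.optim.Adam(list(recon_net.parameters()) + list(task_net.parameters()), 1e-4)
C = 0.7  # trade-off between reconstruction and task terms

def training_step(data, x_true, task_true):
    # data: crude back-projection of the noisy measurements into image space (assumed)
    x_hat = recon_net(data)                      # reconstruction estimate
    t_hat = task_net(x_hat)                      # task output on the reconstruction
    loss = (1 - C) * nn.functional.mse_loss(x_hat, x_true) \
           + C * nn.functional.cross_entropy(t_hat, task_true)
    opt.zero_grad()
    loss.backward()                              # gradients flow through both networks
    opt.step()
    return loss.item()
```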

    A compactness based saliency approach for leakages detection in fluorescein angiogram

    This study has developed a novel saliency detection method based on a compactness feature for detecting three common types of leakage in retinal fluorescein angiograms: large focal, punctate focal, and vessel segment leakage. Leakage from retinal vessels occurs in a wide range of retinal diseases, such as diabetic maculopathy and paediatric malarial retinopathy. The proposed framework consists of three major steps: saliency detection, saliency refinement and leakage detection. First, Retinex theory is adapted to address the illumination inhomogeneity problem. Then two saliency cues, intensity and compactness, are proposed for estimating the saliency map of each individual superpixel at each level. The saliency maps at different levels over the same cues are fused using an averaging operator. Finally, the leaking sites can be detected by masking out the vessel and optic disc regions. The effectiveness of this framework has been evaluated by applying it to images exhibiting the different types of leakage in cerebral malaria. The sensitivity in detecting large focal, punctate focal and vessel segment leakage is 98.1%, 88.2% and 82.7%, respectively, when compared to a reference standard of manual annotations by expert human observers. The developed framework will become a new powerful tool for studying retinal conditions involving retinal leakage.
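
    An illustrative sketch of the multi-level superpixel saliency idea, assuming a grayscale angiogram and precomputed vessel and optic disc masks: per-superpixel intensity and compactness cues are combined, maps at several superpixel scales are averaged, and vessel/disc regions are masked out. The actual cues and the Retinex preprocessing in the paper are more elaborate than this.

```python
# Toy multi-level superpixel saliency fusion for leakage detection.
import numpy as np
from skimage.segmentation import slic

def saliency_map(image, n_segments):
    labels = slic(image, n_segments=n_segments, channel_axis=None)
    sal = np.zeros_like(image, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        ys, xs = np.nonzero(mask)
        intensity_cue = image[mask].mean()              # leaking regions appear bright
        spread = np.sqrt(ys.var() + xs.var())           # spatial spread of the superpixel
        compactness_cue = 1.0 / (1.0 + spread)          # compact regions are more salient
        sal[mask] = intensity_cue * compactness_cue
    return sal

def detect_leakage(image, vessel_mask, disc_mask, levels=(100, 200, 400), thresh=0.5):
    fused = np.mean([saliency_map(image, n) for n in levels], axis=0)   # averaging fusion
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
    return (fused > thresh) & ~vessel_mask & ~disc_mask                 # mask vessels/disc
```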

    Color Fundus Image Registration Using a Learning-Based Domain-Specific Landmark Detection Methodology

    [Abstract] Medical imaging, and particularly retinal imaging, allows many eye pathologies as well as some systemic diseases such as hypertension or diabetes to be accurately diagnosed. Registering these images is crucial to correctly compare key structures, not only within a patient, but also to contrast data with a model or across a population. Currently, this field is dominated by complex classical methods, because novel deep learning methods cannot yet compete in terms of results and commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, building on previous works which employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the requirement to calculate complex descriptors. Our method was tested on the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and beats the deep learning methods in the state of the art. This research was funded by Instituto de Salud Carlos III, Government of Spain, research project DTS18/00136; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, research project RTI2018-095894-B-I00; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the predoctoral grant contract ref. ED481A 2021/147 and Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; and CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, which receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and the Secretaría Xeral de Universidades (20%). The funding institutions had no involvement in the study design; in the collection, analysis and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication. Funding for open access charge: Universidade da Coruña/CISUG.
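
    A small sketch of the matching-and-fitting stage, assuming the neural network has already produced corresponding bifurcation/crossover keypoints in both images: RANSAC with a similarity model (via scikit-image) rejects outliers and yields the registering transform. The detector itself and the paper's exact matching strategy are not reproduced here.

```python
# RANSAC-based registration from detected vessel bifurcation/crossover keypoints.
from skimage.measure import ransac
from skimage.transform import SimilarityTransform, warp

def register_from_keypoints(kps_moving, kps_fixed, moving_image):
    """kps_*: (N, 2) arrays of (x, y) = (column, row) keypoints in corresponding order."""
    model, inliers = ransac((kps_moving, kps_fixed), SimilarityTransform,
                            min_samples=3, residual_threshold=5, max_trials=2000)
    registered = warp(moving_image, model.inverse)   # map moving image into fixed frame
    return registered, model, inliers
```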

    Landmark Localization, Feature Matching and Biomarker Discovery from Magnetic Resonance Images

    The work presented in this thesis proposes several methods that can be roughly divided into three categories: I) landmark localization in medical images, II) feature matching for image registration, and III) biomarker discovery in neuroimaging. The first part deals with the identification of anatomical landmarks. The motivation stems from the fact that the manual identification and labeling of these landmarks is very time consuming and prone to observer errors, especially when large datasets must be analyzed. In this thesis we present three methods to tackle this challenge: a landmark descriptor based on local self-similarities (SS), a subspace building framework based on manifold learning, and a sparse coding landmark descriptor based on a data-specific learned dictionary basis. The second part of this thesis deals with finding matching features between a pair of images. These matches can be used to perform a registration between them. Registration is a powerful tool that allows mapping images into a common space in order to aid their analysis. Accurate registration can be challenging to achieve using intensity-based registration algorithms. Here, a framework is proposed for learning correspondences in pairs of images by matching SS features, and random sample consensus (RANSAC) is employed as a robust model estimator to learn a deformation model based on the feature matches. Finally, the third part of the thesis deals with biomarker discovery using machine learning. In this section a framework is proposed for feature extraction from learned low-dimensional subspaces that represent inter-subject variability. The manifold subspace is built using data-driven regions of interest (ROIs). These regions are learned via sparse regression with stability selection. Probabilistic distribution models for different stages in the disease trajectory are also estimated for different class populations in the low-dimensional manifold and used to construct a probabilistic scoring function.
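
    A toy version of a local self-similarity (SS) descriptor, illustrating the kind of feature matched in the registration part: a small central patch is compared against patches in a surrounding window and the resulting similarity surface is used as the descriptor. The thesis' descriptor (including any log-polar binning and normalization) is more involved.

```python
# Toy local self-similarity descriptor: correlation surface of a central patch
# against its surrounding window. Assumes (row, col) lies far enough from the border.
import numpy as np

def self_similarity_descriptor(image, row, col, patch=5, window=21):
    half_p, half_w = patch // 2, window // 2
    center = image[row - half_p:row + half_p + 1, col - half_p:col + half_p + 1]
    desc = np.zeros((window, window))
    for dr in range(-half_w, half_w + 1):
        for dc in range(-half_w, half_w + 1):
            r, c = row + dr, col + dc
            cand = image[r - half_p:r + half_p + 1, c - half_p:c + half_p + 1]
            ssd = np.sum((center - cand) ** 2)             # sum of squared differences
            desc[dr + half_w, dc + half_w] = np.exp(-ssd)  # similarity in (0, 1]
    return desc.ravel()
```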

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, while proposing certain directions as the most viable for clinical use.

    Surface Denoising based on The Variation of Normals and Retinal Shape Analysis

    Starting from the curvature tensor, this thesis studies the variation of tangent vectors to define a shape analysis operator and establishes a relationship between the classical shape operator and the curvature tensor on a triangulated surface. The first part of the thesis analyzes the variation of surface normals and introduces a shape analysis operator, which is then used for mesh and point set denoising. The second part introduces mathematical modeling and shape quantification algorithms for retinal shape analysis. In the first half, the thesis follows the concept of the variation of surface normals, termed the normal voting tensor, and derives a relation between the shape operator and the normal voting tensor. The concept of directional and mean curvatures is extended to the dual representation of a triangulated surface. A normal voting tensor is defined on each triangle of a geometry and termed the element-based normal voting tensor (ENVT). A deformation tensor is then extracted from the ENVT; it captures the anisotropy of the surface, and the mean curvature vector is defined from this deformation tensor. An ENVT-based mesh denoising algorithm is introduced, in which the ENVT is used as a shape operator. A binary optimization technique applied to the spectral components of the ENVT helps the algorithm retain sharp features in the geometry and improves its convergence rate. A stochastic analysis of the effect of noise on the triangular mesh, based on the minimum edge length of its elements, gives an upper bound on the noise standard deviation below which the probability of flipped element normals is minimal. The ENVT-based mesh denoising concept is extended to point set denoising, where noisy vertex normals are filtered using the vertex-based NVT and the binary optimization. For the vertex update stage in point set denoising, different constraints are added to the quadratic error metric depending on whether points are features (edges and corners) or non-features (planar regions). The thesis also investigates a robust statistics framework for face normal bilateral filtering and proposes a robust, high-fidelity two-stage mesh denoising method using Tukey's biweight function as a robust estimator, which stops the diffusion at sharp features and produces smooth umbilical regions. This algorithm introduces a novel vertex update scheme that uses a differential coordinate-based Laplace operator along with an edge-face normal orthogonality constraint to produce a high-quality mesh without face normal flips, which also makes the algorithm more robust against high-intensity noise. The second half of the thesis focuses on applying the proposed geometric processing algorithms to OCT (optical coherence tomography) scan data for quantification of the human retinal shape. The retina is part of the central nervous system and has a cellular composition similar to that of the brain. Many neurological disorders therefore affect the retinal shape, and these neuroinflammatory conditions are known to cause modifications to two important regions of the retina: the fovea and the optic nerve head (ONH). The thesis presents accurate and robust shape modeling of these regions to aid the diagnosis of several neurological disorders by detecting shape changes. For the fovea, a parametric modeling algorithm based on cubic Bézier curves is introduced; it derives several 3D shape parameters that quantify the foveal shape with high accuracy. For the ONH, a 3D shape analysis algorithm is introduced to measure shape variation across different neurological disorders. The proposed algorithm uses triangulated manifold surfaces of two different layers of the retina to derive several 3D shape parameters. The experimental results of the fovea and ONH morphometry confirm that these algorithms can aid the diagnosis of several neurological disorders.
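
    A minimal sketch of the element-based normal voting tensor (ENVT) computation described above, assuming a plain numpy vertex/face representation of the mesh: each face accumulates area-weighted outer products of the normals of faces that share a vertex with it. The spectral binary optimization and the denoising steps are omitted.

```python
# Element-based normal voting tensor on a triangle mesh (illustrative sketch).
import numpy as np

def face_normals_and_areas(verts, faces):
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    cross = np.cross(v1 - v0, v2 - v0)
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / (2.0 * areas[:, None] + 1e-12)        # unit face normals
    return normals, areas

def element_normal_voting_tensors(verts, faces):
    normals, areas = face_normals_and_areas(verts, faces)
    # faces sharing at least one vertex are treated as neighbours
    vert_to_faces = [[] for _ in range(len(verts))]
    for f_idx, f in enumerate(faces):
        for v in f:
            vert_to_faces[v].append(f_idx)
    tensors = np.zeros((len(faces), 3, 3))
    for f_idx, f in enumerate(faces):
        neigh = {g for v in f for g in vert_to_faces[v]}
        for g in neigh:
            n = normals[g]
            tensors[f_idx] += areas[g] * np.outer(n, n)      # area-weighted normal vote
    return tensors  # eigen-decomposition of each tensor describes the local shape
```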

    Patch-based methods for variational image processing problems

    Image processing problems are notoriously difficult. To name a few of these difficulties: they are usually ill-posed, they involve a huge number of unknowns (from one to several per pixel!), and images cannot be considered as the linear superposition of a few physical sources, as they contain many different scales and non-linearities. However, if one considers small blocks (or patches) inside the pictures instead of the images as a whole, many of these hurdles vanish and the problems become much easier to solve, at the cost of increasing again the dimensionality of the data to process. Following the seminal NL-means algorithm in 2005-2006, methods that consider only the visual correlation between patches and ignore their spatial relationship are called non-local methods. While powerful, non-local methods are arduous to define without resorting to heuristic formulations or complex mathematical frameworks. On the other hand, another powerful property has brought global image processing algorithms one step further: the sparsity of images in well-chosen representation bases. However, this property is difficult to embed naturally in non-local methods, yielding algorithms that are usually inefficient or convoluted. In this thesis, we explore alternative approaches to non-locality, with the goals of i) developing universal approaches that can handle local and non-local constraints and ii) leveraging the qualities of both non-locality and sparsity. For the first point, we will see that embedding the patches of an image into a graph-based framework can yield a simple algorithm that can switch from local to non-local diffusion, which we will apply to the problem of large-area image inpainting. For the second point, we will first study a fast patch preselection process that is able to group patches according to their visual content. This preselection operator will then serve as input to a social sparsity enforcing operator that will create sparse groups of jointly sparse patches, thus exploiting all the redundancies present in the data within a simple mathematical framework. Finally, we will study the problem of reconstructing plausible patches from a few binarized measurements. We will show that this task can be achieved in the case of popular binarized image keypoint descriptors, demonstrating a potential privacy issue in mobile visual recognition applications, but also opening a promising way to the design and construction of a new generation of smart cameras.
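
    For context, a small NL-means-style sketch of the non-local principle the thesis builds on: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar, regardless of where they sit in the image. Parameter values and the brute-force search are illustrative only.

```python
# Brute-force NL-means-style estimate for a single pixel (illustrative sketch).
import numpy as np

def nl_means_pixel(image, row, col, patch=3, search=10, h=0.1):
    half = patch // 2
    ref = image[row - half:row + half + 1, col - half:col + half + 1]
    weights, values = [], []
    for r in range(row - search, row + search + 1):
        for c in range(col - search, col + search + 1):
            cand = image[r - half:r + half + 1, c - half:c + half + 1]
            if cand.shape != ref.shape:
                continue                              # skip out-of-bounds patches
            dist2 = np.mean((ref - cand) ** 2)        # patch (dis)similarity
            w = np.exp(-dist2 / (h * h))              # similar patches get large weights
            weights.append(w)
            values.append(image[r, c])
    return np.sum(np.array(weights) * np.array(values)) / np.sum(weights)
```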