
    Saliency Driven Vasculature Segmentation with Infinite Perimeter Active Contour Model

    Automated detection of retinal blood vessels plays an important role in advancing the understanding of the mechanism, diagnosis, and treatment of cardiovascular disease and many systemic diseases, such as diabetic retinopathy and age-related macular degeneration. Here, we propose a new framework for precisely segmenting retinal vasculature. The framework consists of three steps. First, a non-local total variation model adapted to Retinex theory addresses intensity inhomogeneities and the relatively low contrast of thin vessels against the background. Second, the image is divided into superpixels, and a compactness-based saliency detection method locates the object of interest. Third, a new infinite-perimeter active contour model segments the vessels within each superpixel for better overall segmentation performance. The proposed framework has wide applications, and the results show that our model outperforms its competitors.
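    To make the superpixel-plus-saliency stage concrete, the sketch below partitions a fundus image into SLIC superpixels and scores each by spatial compactness. It is a minimal illustration assuming scikit-image; the compactness score is a simplified proxy for the paper's saliency measure, not the authors' exact formulation.

```python
# Minimal sketch: SLIC superpixels + a compactness-based saliency proxy.
# Assumes scikit-image; the score below is a simplified stand-in for the
# paper's compactness saliency, not the exact model.
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import slic

def compactness_saliency(image_path, n_segments=400):
    img = img_as_float(io.imread(image_path))
    labels = slic(img, n_segments=n_segments, compactness=10)
    ys, xs = np.mgrid[0:labels.shape[0], 0:labels.shape[1]]
    saliency = np.zeros(labels.shape, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        # Spatially compact superpixels (low coordinate variance relative
        # to their area) are scored as more likely to be salient.
        var = ys[mask].var() + xs[mask].var()
        saliency[mask] = 1.0 / (1.0 + var / mask.sum())
    return saliency / saliency.max()
```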

    Advanced Visual Computing for Image Saliency Detection

    Saliency detection is a category of computer vision algorithms that aims to identify the most salient object in a given image. Existing saliency detection methods can generally be categorized as bottom-up or top-down, and deep neural networks (DNNs) have begun to show their value in saliency detection in recent years. However, challenges in existing methods, such as problematic pre-assumptions, inefficient feature integration, and the absence of high-level feature learning, prevent them from achieving superior performance. In this thesis, to address these limitations, we propose multiple novel models with favorable performance. Specifically, we first systematically review the development of saliency detection and related work, and then propose four new methods: two based on low-level image features and two based on DNNs. The regularized random walks ranking method (RR) and its reversion-correction-improved version (RCRR) are based on conventional low-level image features, and exhibit higher accuracy and robustness in extracting image-boundary-based foreground/background queries; the background search and foreground estimation (BSFE) and dense and sparse labeling (DSL) methods are based on DNNs, which have shown dominant advantages in high-level image feature extraction as well as the combined strength of multi-dimensional features. Each of the proposed methods is evaluated by extensive experiments, and all of them perform favorably against the state of the art; in particular, the DSL method achieves remarkably higher performance than sixteen state-of-the-art methods (ten conventional and six learning-based) on six well-recognized public datasets. The success of the proposed methods reveals further potential and meaningful applications of saliency detection in real-life computer vision tasks.
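    The RR/RCRR family builds on graph-based ranking with boundary queries. The sketch below shows the closed-form ranking step f = (D - alpha*W)^(-1) y on a toy graph; the affinity construction and the regularized random-walk refinement of the thesis are not reproduced, so treat the graph and parameters as illustrative assumptions.

```python
# Toy sketch of graph-based saliency ranking with a boundary query:
# relevance f = (D - alpha*W)^(-1) y, the closed form used by
# manifold-ranking style methods. Graph and parameters are illustrative.
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Rank all nodes against binary query vector y on affinity matrix W."""
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)

# Five nodes in a chain; node 0 plays the role of an image-boundary
# (background) query, so low background relevance means high saliency.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
background = manifold_rank(W, np.array([1.0, 0, 0, 0, 0]))
saliency = 1.0 - background / background.max()
print(saliency)
```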

    Retinal vascular segmentation using superpixel-based line operator and its application to vascular topology estimation

    Purpose: Automatic methods for analyzing retinal vascular networks, such as retinal blood vessel detection, vascular network topology estimation, and arteries/veins classification, are of great assistance to the ophthalmologist in the diagnosis and treatment of a wide spectrum of diseases. Methods: We propose a new framework for precisely segmenting retinal vasculature, constructing the retinal vascular network topology, and separating the arteries and veins. A non-local total variation inspired Retinex model is employed to remove image intensity inhomogeneities and relatively poor contrast. For better generalizability and segmentation performance, a superpixel-based line operator is proposed to distinguish lines from edges, allowing more tolerance in the position of the respective contours. The concept of dominant sets clustering is adopted to estimate retinal vessel topology and classify the vessel network into arteries and veins. Results: The proposed segmentation method yields competitive results on three public datasets (STARE, DRIVE, and IOSTAR), and it has superior performance compared with unsupervised segmentation methods, with accuracies of 0.954, 0.957, and 0.964, respectively. The topology estimation approach has been applied to five public databases (DRIVE, STARE, INSPIRE, IOSTAR, and VICAVR) and achieved high accuracies of 0.830, 0.910, 0.915, 0.928, and 0.889, respectively. The accuracies of arteries/veins classification based on the estimated vascular topology on three public databases (INSPIRE, DRIVE, and VICAVR) are 0.909, 0.910, and 0.907, respectively. Conclusions: The experimental results show that the proposed framework effectively addresses the crossover problem, a bottleneck issue in segmentation and vascular topology reconstruction. The vascular topology information significantly improves the accuracy of arteries/veins classification.
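    As a rough illustration of the line-operator idea (the response of the best-oriented line through a pixel measured against its surrounding window), here is a minimal sketch in the style of classical line detectors for fundus images. The paper's superpixel restriction and dominant-sets topology estimation are not reproduced, and the rotation-based averaging is an implementation assumption.

```python
# Rough sketch of a line operator for vessel detection: the response is the
# mean intensity along the best-oriented line through each pixel minus the
# mean of its square neighborhood, computed on the inverted green channel
# (vessels appear bright). Rotation-based averaging is an approximation.
import numpy as np
from scipy.ndimage import rotate, uniform_filter

def line_operator(green, length=15, n_angles=12):
    img = green.max() - green.astype(float)   # invert: vessels become bright
    window_mean = uniform_filter(img, size=length)
    best = np.full(img.shape, -np.inf)
    for k in range(n_angles):
        angle = 180.0 * k / n_angles
        # Mean along an oriented line: rotate, average along rows, rotate back.
        r = rotate(img, angle, reshape=False, order=1, mode='nearest')
        line_mean = uniform_filter(r, size=(1, length))
        best = np.maximum(
            best, rotate(line_mean, -angle, reshape=False, order=1,
                         mode='nearest'))
    return best - window_mean   # large where an oriented line (vessel) passes
```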

    Variational segmentation of vector-valued images with gradient vector flow

    In this paper, we extend the gradient vector flow field for robust variational segmentation of vector-valued images. Rather than using scalar edge information, we define a vectorial edge map derived from a weighted local structure tensor of the image that enables the diffusion of the gradient vectors in accurate directions through the 4DGVF equation. To reduce the contribution of noise in the structure tensor, image channels are weighted according to a blind estimator of contrast. The method is applied to biological volume delineation in dynamic PET imaging, and validated on realistic Monte Carlo simulations of numerical phantoms as well as on real images.
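    For reference, the scalar gradient vector flow that this work generalizes diffuses the gradient of an edge map into homogeneous regions by iterating v <- v + mu*laplacian(v) - |grad f|^2 (v - grad f). A minimal sketch of that classical iteration follows; the paper's 4DGVF extension with the structure-tensor edge map is not reproduced here.

```python
# Minimal sketch of classical (scalar) gradient vector flow: iterate
# v <- v + mu * laplacian(v) - |grad f|^2 * (v - grad f) on an edge map f
# normalized to [0, 1]. The 4DGVF vector-valued extension is not shown.
import numpy as np
from scipy.ndimage import laplace

def gvf(edge_map, mu=0.2, iters=200):
    f = edge_map.astype(float)
    fy, fx = np.gradient(f)
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2   # data-term weight |grad f|^2
    for _ in range(iters):
        # Diffuse gradients into homogeneous regions while staying close
        # to grad f near strong edges.
        u = u + mu * laplace(u) - mag2 * (u - fx)
        v = v + mu * laplace(v) - mag2 * (v - fy)
    return u, v
```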

    Part decomposition of 3D surfaces

    This dissertation describes a general algorithm that automatically decomposes real-world scenes and objects into visual parts. The input to the algorithm is a 3D triangle mesh that approximates the surfaces of a scene or object. This geometric mesh completely specifies the shape of interest. The output of the algorithm is a set of boundary contours that dissect the mesh into parts that agree with human perception. In this algorithm, shape alone defines the location of a boundary contour for a part. The algorithm leverages a human vision theory known as the minima rule, which states that human visual perception tends to decompose shapes into parts along lines of negative curvature minima. Specifically, the minima rule governs the location of part boundaries, and as a result the algorithm is known as the Minima Rule Algorithm. Previous computer vision methods have attempted to implement this rule but have used pseudo-measures of surface curvature; thus, these prior methods are not true implementations of the rule. The Minima Rule Algorithm is a three-step process consisting of curvature estimation, mesh segmentation, and quality evaluation. These steps have led to three novel algorithms known as Normal Vector Voting, Fast Marching Watersheds, and the Part Saliency Metric, respectively. For each algorithm, this dissertation presents both the supporting theory and experimental results. The results demonstrate the effectiveness of the algorithm using both synthetic and real data and include comparisons with previous methods from the research literature. Finally, the dissertation concludes with a summary of the contributions to the state of the art.
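    The curvature-estimation step can be illustrated with a much simpler estimator than the dissertation's Normal Vector Voting: the angle-deficit formula, where per-vertex Gaussian curvature is approximated by 2*pi minus the sum of incident triangle angles. The sketch below assumes a (vertices, faces) mesh representation; vertices in regions of negative curvature would then be candidates for minima-rule boundaries.

```python
# Per-vertex Gaussian curvature by angle deficit: 2*pi minus the sum of the
# triangle angles incident at the vertex. A simple stand-in for the
# dissertation's Normal Vector Voting estimator.
import numpy as np

def angle_deficit_curvature(vertices, faces):
    """vertices: (n, 3) float array; faces: (m, 3) int vertex indices."""
    deficit = np.full(len(vertices), 2.0 * np.pi)
    for tri in faces:
        p = vertices[tri]
        for i in range(3):
            u = p[(i + 1) % 3] - p[i]
            v = p[(i + 2) % 3] - p[i]
            cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            deficit[tri[i]] -= np.arccos(np.clip(cos, -1.0, 1.0))
    # Positive: elliptic (convex/concave); negative: hyperbolic (saddle-like),
    # where minima-rule part boundaries tend to lie.
    return deficit
```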

    Recent Advances in Signal Processing

    Signal processing is a critical task in most new technological inventions and in a wide variety of applications across science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    Machine learning in cardiovascular radiology: ESCR position statement on design requirements, quality assessment, current applications, opportunities, and challenges

    Machine learning offers great opportunities to streamline and improve clinical care from the perspective of cardiac imagers, patients, and industry, and it is a very active field of scientific research. In light of these advances, the European Society of Cardiovascular Radiology (ESCR), a non-profit medical society dedicated to advancing cardiovascular radiology, has assembled a position statement regarding the use of machine learning (ML) in cardiovascular imaging. The purpose of this statement is to provide guidance on the requirements for successful development and implementation of ML applications in cardiovascular imaging. In particular, recommendations are provided on how to adequately design ML studies and how to report and interpret their results. Finally, we identify opportunities and challenges ahead. While the focus of this position statement is ML development in cardiovascular imaging, most considerations are relevant to ML in radiology in general. KEY POINTS: • Development and clinical implementation of machine learning in cardiovascular imaging is a multidisciplinary pursuit. • Based on existing study quality frameworks such as SPIRIT and STARD, we propose a list of quality criteria for ML studies in radiology. • The cardiovascular imaging research community should strive to compile multicenter datasets for the development, evaluation, and benchmarking of ML algorithms.

    Three Dimensional Nonlinear Statistical Modeling Framework for Morphological Analysis

    This dissertation describes a novel three-dimensional (3D) morphometric analysis framework for building statistical shape models and identifying shape differences between populations. This research generalizes the use of anatomical atlases to more complex anatomy, such as irregular flat bones and bones with deformity and irregular growth. The foundations of this framework are: 1) anatomical atlases, which allow the creation of homologous anatomical models across populations; 2) statistical representation of output models in a compact form that captures both local and global shape variation across populations; and 3) shape analysis using automated 3D landmarking and surface matching. The proposed framework has various applications in the clinical, forensic, and physical anthropology fields, and extensive research has been published in peer-reviewed image processing, forensic anthropology, physical anthropology, biomedical engineering, and clinical orthopedics conferences and journals. The discussion of existing methods for morphometric analysis, including manual and semi-automatic methods, addresses the need for automation of morphometric analysis and statistical atlases. Explanations of these existing methods for constructing statistical shape models, including the benefits and limitations of each, provide evidence of the necessity for such a novel algorithm. A novel approach was taken to achieve accurate point correspondence in the case of irregular and deformed anatomy. This was achieved using a scale-space approach to detect prominent scale-invariant features; these features were then matched and registered using a novel multi-scale method utilizing both coordinate data and shape descriptors, followed by an overall surface deformation using a new constrained free-form deformation. Applications of the output statistical atlases are discussed, including forensic applications such as skull sexing, physical anthropology applications such as asymmetry in clavicles, and clinical applications in pelvis reconstruction and in studying lumbar kinematics and bone and soft-tissue thickness.
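    Once point correspondence is established, the statistical shape model itself is typically a principal component analysis of the stacked landmark coordinates. The sketch below is a generic point-distribution-model construction under that assumption; the array shapes and function names are illustrative, not taken from the dissertation.

```python
# Generic point-distribution shape model: PCA over corresponded landmarks.
# Array shapes and names are illustrative assumptions.
import numpy as np

def build_shape_model(landmarks, n_modes=5):
    """landmarks: (n_subjects, n_points, 3) corresponded 3D landmarks."""
    X = landmarks.reshape(len(landmarks), -1)     # flatten each shape
    mean = X.mean(axis=0)
    # PCA via SVD of the centered data matrix.
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = s ** 2 / (len(X) - 1)
    return mean, vt[:n_modes], variances[:n_modes]

def synthesize(mean, modes, variances, b):
    """New shape from mode coefficients b, given in standard-deviation units."""
    return (mean + (b * np.sqrt(variances)) @ modes).reshape(-1, 3)
```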