
    Detection of curved lines with B-COSFIRE filters: A case study on crack delineation

    The detection of curvilinear structures is an important step for various computer vision applications, ranging from medical image analysis for the segmentation of blood vessels, to remote sensing for the identification of roads and rivers, and to biometrics and robotics, among others. It is a nontrivial task, especially for the detection of thin or incomplete curvilinear structures surrounded by noise. We propose a general-purpose curvilinear structure detector that uses the brain-inspired trainable B-COSFIRE filters. It consists of four main steps, namely nonlinear filtering with B-COSFIRE, thinning with non-maximum suppression, hysteresis thresholding, and morphological closing. We demonstrate its effectiveness on a data set of noisy images of cracked pavements, where we achieve state-of-the-art results (F-measure = 0.865). The proposed method can be employed in any computer vision methodology that requires the delineation of curvilinear and elongated structures. Comment: Accepted at Computer Analysis of Images and Patterns (CAIP) 201
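    The four-step pipeline above maps naturally onto standard image-processing primitives. The following is a minimal sketch, not the authors' implementation: the B-COSFIRE response is replaced by scikit-image's Sato ridge filter as a stand-in, thinning is approximated by skeletonization (so hysteresis is applied before thinning rather than after), and all parameter values are illustrative assumptions.

```python
import numpy as np
from skimage import filters, morphology

def delineate_cracks(gray, low=0.05, high=0.15):
    """Binary map of curvilinear (crack-like) structures in a grayscale image."""
    # 1) Nonlinear ridge filtering (stand-in for the B-COSFIRE response);
    #    black_ridges=True because cracks are darker than the pavement.
    response = filters.sato(gray, black_ridges=True)
    response = response / (response.max() + 1e-12)       # normalise to [0, 1]
    # 2) Hysteresis thresholding of the filter response.
    mask = filters.apply_hysteresis_threshold(response, low, high)
    # 3) Thinning of the binary map (approximating non-maximum suppression).
    thin = morphology.skeletonize(mask)
    # 4) Morphological closing to bridge small gaps in the delineation.
    return morphology.binary_closing(thin, morphology.disk(2))
```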

    Vessel tractography using an intensity based tensor model with branch detection

    In this paper, we present a tubular structure segmentation method that utilizes a second-order tensor constructed from directional intensity measurements, inspired by diffusion tensor imaging (DTI) modeling. The constructed anisotropic tensor, which is fit inside a vessel, drives the segmentation analogously to a tractography approach in DTI. Our model is initialized at a single seed point and is capable of capturing whole vessel trees by an automatic branch detection algorithm developed in the same framework. The centerline of the vessel as well as its thickness is extracted. Performance results within the Rotterdam Coronary Artery Algorithm Evaluation framework are provided for comparison with existing techniques. A 96.4% average overlap with ground truth delineated by experts is obtained, in addition to other measures reported in the paper. Moreover, we demonstrate further quantitative results on synthetic vascular datasets, and we provide quantitative experiments for branch detection on patient Computed Tomography Angiography (CTA) volumes, as well as qualitative evaluations on the same CTA datasets from visual scores by an expert cardiologist.
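    The core idea, a second-order tensor built from directional intensity measurements whose principal eigenvector drives tracking from a seed point, can be sketched as follows. This is a hedged illustration, not the paper's model: the random ray directions, the intensity-based weighting, and the step size are all assumptions made for brevity, and branch detection is omitted.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def directional_tensor(volume, p, radius=3.0, n_dirs=64, rng=None):
    """Second-order tensor from intensities sampled along 3D directions around p."""
    rng = np.random.default_rng(0) if rng is None else rng
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Mean intensity along each ray serves as that direction's weight
    # (a bright vessel lumen gives large weights along the vessel axis).
    ts = np.linspace(0.5, radius, 8)
    samples = p[None, None, :] + ts[None, :, None] * dirs[:, None, :]
    vals = map_coordinates(volume, samples.reshape(-1, 3).T, order=1)
    w = vals.reshape(n_dirs, -1).mean(axis=1)
    return (w[:, None, None] * dirs[:, :, None] * dirs[:, None, :]).sum(axis=0)

def track(volume, seed, n_steps=200, step=0.5):
    """Follow the tensor's principal eigenvector from a single seed point."""
    p, prev = np.asarray(seed, float), None
    path = [p.copy()]
    for _ in range(n_steps):
        T = directional_tensor(volume, p)
        _, vecs = np.linalg.eigh(T)
        d = vecs[:, -1]                       # principal eigenvector
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                            # keep a consistent tracking direction
        p = p + step * d
        prev = d
        path.append(p.copy())
    return np.array(path)
```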

    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1 → MT and V1 → V2 → MT are made, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions. Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)
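    The aperture problem referred to above can be illustrated numerically without any of the Motion BCS machinery: a single local measurement on a contour only constrains the velocity component normal to that contour, while pooling constraints from several differently oriented contours determines the full 2D velocity. The least-squares pooling below is a generic textbook illustration, not the model's cooperative grouping mechanism.

```python
import numpy as np

def pooled_velocity(normals, normal_speeds):
    """Least-squares velocity v from normal-flow constraints n_i . v = s_i."""
    N = np.asarray(normals, float)         # shape (k, 2), unit contour normals
    s = np.asarray(normal_speeds, float)   # shape (k,), measured normal speeds
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

# Example: the true motion is v = (1.0, 0.5); each oriented contour reports
# only the projection of v onto its normal, yet pooling recovers the full v.
true_v = np.array([1.0, 0.5])
angles = np.deg2rad([0, 45, 90, 135])
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
speeds = normals @ true_v
print(pooled_velocity(normals, speeds))    # -> approximately [1.0, 0.5]
```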

    Invertible Orientation Scores of 3D Images

    The enhancement and detection of elongated structures in noisy image data is relevant for many biomedical applications. To handle complex crossing structures in 2D images, 2D orientation scores were introduced, which have already shown their use in a variety of applications. Here we extend this work to 3D orientation scores. First, we construct the orientation score from a given dataset, which is achieved by an invertible coherent state type of transform. For this transformation we introduce 3D versions of the 2D cake-wavelets, which are complex wavelets that can simultaneously detect oriented structures and oriented edges. For efficient implementation of the different steps in the wavelet creation we use a spherical harmonic transform. Finally, we show some first results of practical applications of 3D orientation scores. Comment: The SSVM 2015 published version in LNCS contains a mistake (a switched notation of the spherical angles) that is corrected in this arXiv version
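    The basic construction of an orientation score, stacking the responses of an image to a rotated filter, one layer per sampled orientation, can be sketched in 2D as below. This is only an illustrative analogue of the idea: the paper builds the 3D case with cake-wavelets and a spherical harmonic transform, whereas the Gabor kernel used here is a placeholder, so summing over orientations gives only an approximate reconstruction rather than the exact inverse.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel

def orientation_score_2d(image, n_orientations=12, frequency=0.2):
    """Stack of oriented filter responses: U(x, theta), shape (H, W, n_orientations)."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    layers = []
    for theta in thetas:
        k = np.real(gabor_kernel(frequency, theta=theta))   # placeholder wavelet
        layers.append(convolve(image, k, mode='nearest'))
    return np.stack(layers, axis=-1), thetas

def approximate_reconstruction(score):
    # Summing over the orientation axis approximately inverts the transform
    # when the oriented kernels (nearly) cover the frequency plane.
    return score.sum(axis=-1)
```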

    Locally Adaptive Frames in the Roto-Translation Group and their Applications in Medical Imaging

    Locally adaptive differential frames (gauge frames) are a well-known effective tool in image analysis, used in differential invariants and PDE-flows. However, at complex structures such as crossings or junctions, these frames are not well-defined. Therefore, we generalize the notion of gauge frames on images to gauge frames on data representations $U:\mathbb{R}^{d} \rtimes S^{d-1} \to \mathbb{R}$ defined on the extended space of positions and orientations, which we relate to data on the roto-translation group $SE(d)$, $d=2,3$. This allows us to define multiple frames per position, one per orientation. We compute these frames via exponential curve fits in the extended data representations in $SE(d)$. These curve fits minimize first- or second-order variational problems which are solved by spectral decomposition of, respectively, a structure tensor or Hessian of data on $SE(d)$. We include these gauge frames in differential invariants and crossing-preserving PDE-flows acting on the extended data representation $U$, and we show their advantage compared to the standard left-invariant frame on $SE(d)$. Applications include crossing-preserving filtering and improved segmentations of the vascular tree in retinal images, and new 3D extensions of coherence-enhancing diffusion via invertible orientation scores.
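    The paper fits its frames on $SE(d)$ via exponential curve fits; the sketch below shows only the familiar image-domain analogue of the same spectral idea: a locally adaptive (gauge) frame obtained by eigendecomposition of the structure tensor, with the small-eigenvalue eigenvector aligned along the local elongated structure. The smoothing scales are assumptions chosen for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gauge_frames_2d(image, sigma_grad=1.0, sigma_avg=3.0):
    """Per-pixel orthonormal frame (e_along, e_across) from the 2D structure tensor."""
    # Gaussian-derivative gradients.
    gx = gaussian_filter(image, sigma_grad, order=(0, 1))
    gy = gaussian_filter(image, sigma_grad, order=(1, 0))
    # Structure tensor components, averaged over a neighbourhood.
    Jxx = gaussian_filter(gx * gx, sigma_avg)
    Jxy = gaussian_filter(gx * gy, sigma_avg)
    Jyy = gaussian_filter(gy * gy, sigma_avg)
    J = np.stack([np.stack([Jxx, Jxy], -1), np.stack([Jxy, Jyy], -1)], -2)
    vals, vecs = np.linalg.eigh(J)            # eigenvalues in ascending order
    e_along = vecs[..., :, 0]                 # small eigenvalue: along the structure
    e_across = vecs[..., :, 1]                # large eigenvalue: across the structure
    return e_along, e_across
```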

    Trainable COSFIRE filters for vessel delineation with application to retinal images

    Retinal imaging provides a non-invasive opportunity for the diagnosis of several medical pathologies. The automatic segmentation of the vessel tree is an important pre-processing step which facilitates subsequent automatic processes that contribute to such diagnosis. We introduce a novel method for the automatic segmentation of vessel trees in retinal fundus images. We propose a filter that selectively responds to vessels and that we call B-COSFIRE, with B standing for bar, which is an abstraction for a vessel. It is based on the existing COSFIRE (Combination Of Shifted Filter Responses) approach. A B-COSFIRE filter achieves orientation selectivity by computing the weighted geometric mean of the output of a pool of Difference-of-Gaussians filters, whose supports are aligned in a collinear manner. It achieves rotation invariance efficiently by simple shifting operations. The proposed filter is versatile as its selectivity is determined from any given vessel-like prototype pattern in an automatic configuration process. We configure two B-COSFIRE filters, namely symmetric and asymmetric, that are selective for bars and bar-endings, respectively. We achieve vessel segmentation by summing up the responses of the two rotation-invariant B-COSFIRE filters followed by thresholding. The results that we achieve on three publicly available data sets (DRIVE: Se = 0.7655, Sp = 0.9704; STARE: Se = 0.7716, Sp = 0.9701; CHASE_DB1: Se = 0.7585, Sp = 0.9587) are better than those of many state-of-the-art methods. The proposed segmentation approach is also very efficient, with a time complexity that is significantly lower than that of existing methods.
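    A much-simplified sketch of the symmetric B-COSFIRE idea follows: the response is the geometric mean of a Difference-of-Gaussians response, blurred and shifted to a few collinear positions, and rotation invariance is obtained by taking the maximum over rotated versions of the shift configuration. The scales, shift distances, uniform weights, and threshold below are illustrative assumptions, not the published configuration, and the asymmetric (bar-ending) filter is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def dog_response(image, sigma=2.0):
    """Center-on Difference-of-Gaussians response, half-wave rectified.
    Assumes bright bars on a darker background; invert the input (e.g. the
    green channel of a fundus image) to respond to dark vessels."""
    d = gaussian_filter(image, 0.5 * sigma) - gaussian_filter(image, sigma)
    return np.maximum(d, 0.0)

def b_cosfire_like(image, sigma=2.0, rho_list=(0, 4, 8), n_rot=12):
    dog = dog_response(image, sigma)
    best = np.zeros_like(dog)
    for theta in np.linspace(0, np.pi, n_rot, endpoint=False):
        responses = []
        for rho in rho_list:
            blurred = gaussian_filter(dog, 1.0 + 0.1 * rho)   # tolerance blur
            for sign in ((+1, -1) if rho > 0 else (+1,)):
                dy, dx = sign * rho * np.sin(theta), sign * rho * np.cos(theta)
                responses.append(nd_shift(blurred, (dy, dx), order=1) + 1e-9)
        # Geometric mean of the shifted, blurred DoG responses (uniform weights).
        geo = np.exp(np.mean(np.log(np.stack(responses)), axis=0))
        best = np.maximum(best, geo)           # rotation invariance via maximum
    return best

# Segmentation as in the pipeline described above: threshold the
# rotation-invariant response; the threshold value is an assumption.
# vessel_mask = b_cosfire_like(inverted_green_channel) > 0.05
```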