
    Delineation of line patterns in images using B-COSFIRE filters

    Delineation of line patterns in images is a basic step required in various applications such as blood vessel detection in medical images, segmentation of rivers or roads in aerial images, and detection of cracks in walls or pavements. In this paper we present trainable B-COSFIRE filters, a model of some neurons in area V1 of the primary visual cortex, and apply them to the delineation of line patterns in different kinds of images. B-COSFIRE filters are trainable in that their selectivity is determined in an automatic configuration process given a prototype pattern of interest. They can be configured to detect any preferred line structure (e.g. segments, corners, cross-overs), and are therefore usable for automatic data-representation learning. We carried out experiments on two data sets, namely a line-network data set from INRIA and a data set of retinal fundus images named IOSTAR. The results that we achieved confirm the robustness of the proposed approach and its effectiveness in the delineation of line structures in different kinds of images.
    Comment: International Work Conference on Bioinspired Intelligence, July 10-13, 201
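The two-stage idea behind B-COSFIRE filters (automatic configuration on a prototype pattern, then application of the configured filter) can be sketched as follows. This is a heavily simplified illustration, not the published implementation: the Difference-of-Gaussians contour detector, the sampling radii, and the blur and shift parameters are all assumptions, and the function names are ours.

```python
import numpy as np
from scipy import ndimage as ndi

def configure(prototype, center, radii=(2, 4, 6)):
    """Record (rho, phi) tuples where a Difference-of-Gaussians response
    around `center` is strong -- a much-simplified stand-in for the
    automatic B-COSFIRE configuration step."""
    dog = np.maximum(ndi.gaussian_filter(prototype, 1.0)
                     - ndi.gaussian_filter(prototype, 2.0), 0)
    tuples = []
    angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for rho in radii:
        # sample the DoG response on a ring of radius rho around the center
        vals = np.array([dog[int(round(center[0] + rho * np.sin(a))),
                             int(round(center[1] + rho * np.cos(a)))]
                         for a in angles])
        for a in angles[vals >= 0.9 * vals.max()]:
            tuples.append((rho, a))
    return tuples

def respond(image, tuples):
    """Filter response: geometric mean of blurred, shifted DoG responses,
    so the filter fires only where all configured parts are present."""
    dog = np.maximum(ndi.gaussian_filter(image, 1.0)
                     - ndi.gaussian_filter(image, 2.0), 0)
    prod = np.ones_like(dog)
    for rho, phi in tuples:
        blurred = ndi.gaussian_filter(dog, 1.0 + 0.1 * rho)
        shifted = ndi.shift(blurred, (-rho * np.sin(phi), -rho * np.cos(phi)),
                            order=1)
        prod *= np.maximum(shifted, 1e-9)
    return prod ** (1.0 / len(tuples))
```

Configuring on a vertical-line prototype and applying the result to another image gives strong responses only along vertical lines, which is the "trainable" aspect the abstract describes.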

    Detection of curved lines with B-COSFIRE filters: A case study on crack delineation

    The detection of curvilinear structures is an important step for various computer vision applications, ranging from medical image analysis for the segmentation of blood vessels, to remote sensing for the identification of roads and rivers, and to biometrics and robotics, among others. The visual system of the brain has remarkable abilities to detect curvilinear structures in noisy images. This is a nontrivial task, especially for the detection of thin or incomplete curvilinear structures surrounded by noise. We propose a general-purpose curvilinear structure detector that uses the brain-inspired trainable B-COSFIRE filters. It consists of four main steps, namely nonlinear filtering with B-COSFIRE, thinning with non-maximum suppression, hysteresis thresholding, and morphological closing. We demonstrate its effectiveness on a data set of noisy images of cracked pavements, where we achieve state-of-the-art results (F-measure = 0.865). The proposed method can be employed in any computer vision methodology that requires the delineation of curvilinear and elongated structures.
    Comment: Accepted at Computer Analysis of Images and Patterns (CAIP) 201
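The four-step pipeline listed in the abstract can be sketched as follows. A small oriented-line filter bank stands in for the trainable B-COSFIRE filter, and the non-maximum suppression is reduced to a 3x3 local-maximum test; the thresholds and kernel sizes are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def delineate(image, low=0.2, high=0.5):
    """Sketch of the four-step pipeline: filtering, thinning,
    hysteresis thresholding, morphological closing."""
    # 1. nonlinear filtering: maximum response over oriented line kernels
    base = np.zeros((9, 9))
    base[4, :] = 1.0 / 9.0
    responses = []
    for angle in range(0, 180, 15):
        k = ndi.rotate(base, angle, reshape=False, order=1)
        responses.append(ndi.correlate(image, k))
    resp = np.max(responses, axis=0)
    resp /= resp.max() + 1e-9
    # 2. thinning by (simplified) non-maximum suppression
    thin = np.where(resp == ndi.maximum_filter(resp, size=3), resp, 0.0)
    # 3. hysteresis thresholding: weak pixels survive only if their
    #    connected component also contains a strong pixel
    weak = thin > low
    strong = thin > high
    labels, _ = ndi.label(weak)
    keep = np.unique(labels[strong])
    mask = np.isin(labels, keep[keep > 0])
    # 4. morphological closing to bridge small gaps
    return ndi.binary_closing(mask, structure=np.ones((3, 3)))
```

On a synthetic image with an interrupted line, the closing step bridges the gap left by the earlier stages, which is the role it plays for incomplete crack structures.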

    A Connected-Tube MPP Model for Object Detection with Application to Materials and Remotely-Sensed Images

    In this paper, we propose a connected-tube model based on a Marked Point Process (MPP) for strip feature extraction in images. This model incorporates a connection prior that favors certain connections between tubes based on their mutual positional relationship. Moreover, the model can easily be combined with other geometric models to form a mixed MPP model for more complex detection tasks. The proposed tube model is applied to fiber detection in microscopy images by combining connected-tube and ellipse models: the ellipse model is used for detecting short fibers, while longer fibers are detected by the tube model. We also test the model on road and building detection in remotely sensed images.
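A minimal sketch of what a connection prior between two tubes might look like: the paper's exact functional form is not reproduced here, so the endpoint-distance and alignment terms below are assumptions.

```python
import numpy as np

def connection_prior(tube_a, tube_b, d_max=5.0):
    """Hypothetical connection prior: reward (negative energy) when an
    endpoint of tube_a lies close to an endpoint of tube_b and the two
    tubes are roughly aligned. Tubes are given as endpoint pairs."""
    a0, a1 = np.asarray(tube_a, float)
    b0, b1 = np.asarray(tube_b, float)
    # distance between the closest pair of endpoints
    d = min(np.linalg.norm(p - q) for p in (a0, a1) for q in (b0, b1))
    if d > d_max:
        return 0.0                       # too far apart: no interaction
    da = (a1 - a0) / np.linalg.norm(a1 - a0)
    db = (b1 - b0) / np.linalg.norm(b1 - b0)
    align = abs(float(da @ db))          # 1 = parallel, 0 = perpendicular
    return -(1.0 - d / d_max) * align    # negative energy = favoured
```

A prior of this shape makes chains of near-collinear tubes cheaper than arbitrary configurations, which is what lets the mixed model trace long fibers as sequences of connected tubes.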

    Adversarially Tuned Scene Generation

    Generalization performance of trained computer vision systems that use computer graphics (CG) generated data is not yet effective due to 'domain shift' between virtual and real data. Although simulated data augmented with a few real-world samples has been shown to mitigate domain shift and improve the transferability of trained models, guiding or bootstrapping the virtual data generation with distributions learnt from the target real-world domain is desirable, especially in fields where annotating even a few real images is laborious (such as semantic labeling and intrinsic images). In order to address this problem in an unsupervised manner, our work combines recent advances in CG (which aims to generate stochastic scene layouts coupled with large collections of 3D object models) and generative adversarial training (which aims to train generative models by measuring the discrepancy between generated and real data in terms of their separability in the space of a deep, discriminatively trained classifier). Our method uses iterative estimation of the posterior density of the prior distributions for a generative graphical model, within a rejection sampling framework. Initially, we assume uniform distributions as priors on the parameters of a scene described by a generative graphical model. As iterations proceed, the prior distributions are updated to distributions that are closer to the (unknown) distributions of the target data. We demonstrate the utility of adversarially tuned scene generation on two real-world benchmark datasets (CityScapes and CamVid) for traffic scene semantic labeling with a deep convolutional net (DeepLab). We realized performance improvements of 2.28 and 3.14 points (using the IoU metric) between the DeepLab models trained on simulated sets prepared from the scene generation models before and after tuning to CityScapes and CamVid, respectively.
    Comment: 9 pages, accepted at CVPR 201
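The iterative prior update via rejection sampling can be illustrated with a one-dimensional toy problem, where the "scene parameter" is the mean of a Gaussian generator and a kernel density estimate of the real data stands in for the adversarially trained discriminator (an assumption; the paper uses a deep classifier over rendered scenes).

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(3.0, 1.0, 500)  # target-domain "scenes" (1-D stand-in)

def kde(x, data, h=0.3):
    # Gaussian kernel density of the real data: our stand-in for a
    # trained discriminator scoring how "real" a generated sample looks
    return np.mean(np.exp(-0.5 * ((x - data[:, None]) / h) ** 2), axis=0)

# start from a broad uniform prior over the scene parameter (the mean)
mu, sd = None, None
for it in range(5):
    if mu is None:
        theta = rng.uniform(-10.0, 10.0, 2000)     # initial uniform prior
    else:
        theta = rng.normal(mu, sd, 2000)           # updated prior
    samples = rng.normal(theta, 1.0)               # generator: one scene per theta
    score = kde(samples, real)
    # rejection sampling: accept parameters whose scenes score well
    accept = rng.uniform(0, score.max(), theta.shape) < score
    kept = theta[accept]
    mu, sd = kept.mean(), kept.std() + 1e-3        # posterior becomes next prior
```

With each iteration the prior over the parameter tightens around the value that generates data resembling the target domain, mirroring the tuning loop described in the abstract.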

    An Embedded Marked Point Process Framework for Three-Level Object Population Analysis

    In this paper we introduce a probabilistic approach for extracting complex hierarchical object structures from digital images used by various vision applications. The proposed framework extends conventional Marked Point Process (MPP) models by (i) admitting object-subobject ensembles in parent-child relationships and (ii) allowing corresponding objects to form coherent object groups through a Bayesian segmentation of the population. Different from earlier, highly domain-specific attempts at MPP generalization, the proposed model is defined at an abstract level, providing clear interfaces for applications in various domains. We also introduce a global optimization process for the multi-layer framework that finds optimal entity configurations, considering the observed data, prior knowledge, and interactions between neighboring and hierarchically related objects. The proposed method is demonstrated in three different application areas: built-up area analysis in remotely sensed images, traffic monitoring on airborne and mobile laser scanning (Lidar) data, and optical circuit inspection. A new benchmark database is published for the three test cases, and the model's performance is quantitatively evaluated.
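A toy version of a multi-level MPP energy with a parent-child interaction term might look like the following; the containment penalty and its weight are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def config_energy(parents, children, data_term, lam=1.0):
    """Hypothetical hierarchical energy: per-object data terms plus a
    parent-child prior that penalises children lying outside their parent.
    Objects are circles (x, y, r); children carry their parent's index."""
    e = sum(data_term(p) for p in parents)
    for child, parent_idx in children:
        e += data_term(child)
        px, py, pr = parents[parent_idx]
        cx, cy, cr = child
        dist = np.hypot(cx - px, cy - py)
        if dist + cr > pr:              # child not contained in its parent
            e += lam * (dist + cr - pr)  # hierarchical interaction penalty
    return e
```

Minimizing such an energy over both levels jointly is what distinguishes the embedded framework from running two independent single-level MPP models.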

    Road Marking Extraction Using a Model- and Data-Driven RJ-MCMC


    Marked point processes for the automatic detection of bomb craters in aerial wartime images

    Many countries were the target of air strikes during the Second World War. The aftermath of such attacks is still felt today, as numerous unexploded bombs or duds remain in the ground. Typically, such areas are documented in so-called impact maps, which are based on detected bomb craters. This paper proposes a stochastic approach to automatically detect bomb craters in aerial wartime images taken during World War II. One aspect we investigate is the type of object model for the crater: we compare circles with ellipses. The respective models are embedded in the probabilistic framework of marked point processes. By means of stochastic sampling, the most likely configuration of objects within the scene is determined. Each configuration is evaluated using an energy function which describes its conformity with a predefined model: high gradient magnitudes along the border of an object are favoured and overlapping objects are penalized. In addition, a term that requires the grey values inside the object to be homogeneous is investigated. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global optimum of the energy function. Afterwards, a probability map is generated from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively, which results in an impact map. Our results, based on 22 aerial wartime images, show the general potential of the method for the automated detection of bomb craters and the subsequent automatic generation of an impact map. © Authors 2019
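The sampling scheme described above can be illustrated with a toy birth-death sampler on a synthetic image: circles of a fixed radius, a data term that favours dark crater interiors against a brighter border ring (a simplification of the paper's gradient-based term), an overlap penalty, and a geometric cooling schedule. All constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic aerial image: two dark "craters" of radius 4 on brighter ground
img = np.ones((40, 40))
yy, xx = np.mgrid[0:40, 0:40]
for cy, cx in [(10, 10), (28, 30)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 16] = 0.0

R = 4  # fixed circle radius (the paper also compares an ellipse model)

def unary(c):
    """Data term: favour circles darker inside than on their border ring."""
    cy, cx = c
    d2 = (yy - cy) ** 2 + (xx - cx) ** 2
    inside = d2 <= R * R
    ring = (d2 <= (R + 2) ** 2) & ~inside
    return img[inside].mean() - img[ring].mean()

def energy(conf):
    e = sum(unary(c) for c in conf) + 0.05 * len(conf)  # small per-object cost
    for i in range(len(conf)):
        for j in range(i + 1, len(conf)):
            if np.hypot(conf[i][0] - conf[j][0],
                        conf[i][1] - conf[j][1]) < 2 * R:
                e += 1.0                                 # overlap penalty
    return e

# birth-death sampling with Metropolis acceptance and simulated annealing
conf, T = [], 1.0
for step in range(3000):
    prop = list(conf)
    if not prop or rng.random() < 0.5:
        prop.append((int(rng.integers(4, 36)), int(rng.integers(4, 36))))  # birth
    else:
        prop.pop(int(rng.integers(len(prop))))                             # death
    dE = energy(prop) - energy(conf)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        conf = prop
    T *= 0.998                                           # cooling schedule
```

After cooling, `conf` typically holds one circle per crater; a kernel density estimate over many such detections would then yield the probability map described in the abstract.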

    Using redundant information from multiple aerial images for the detection of bomb craters based on marked point processes

    Many countries were the target of air strikes during World War II. Numerous unexploded bombs still exist in the ground. These duds can be tracked down with the help of bomb craters, which indicate areas where unexploded bombs may be located. Such areas are documented in so-called impact maps based on detected bomb craters. In this paper, a stochastic approach based on marked point processes (MPPs) for the automatic detection of bomb craters in aerial images taken during World War II is presented. As most areas are covered by multiple images, the influence of redundant image information on the object detection result is investigated: We compare the results generated based on single images with those obtained by our new approach that combines the individual detection results of multiple images covering the same location. The object model for the bomb craters is represented by circles. Our MPP approach determines the most likely configuration of objects within the scene. This is achieved by minimizing an energy function that describes the conformity with a predefined model, using Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing. Afterwards, a probability map is generated from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively, which results in an impact map. Our results show a significant improvement in quality when redundant image information is used. © 2020 Copernicus GmbH. All rights reserved
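The fusion step (pooling the individual detection results of multiple images covering the same location, then deriving an impact map via kernel density estimation and thresholding) can be sketched as follows; the coordinates, bandwidth, and threshold are made-up illustrative values.

```python
import numpy as np
from scipy.stats import gaussian_kde

# per-image crater detections (x, y); the true craters appear in all images,
# while the false alarm in image 2 is supported by only one image
dets_img1 = np.array([[10.2, 10.5], [30.1, 28.0]])
dets_img2 = np.array([[ 9.8, 10.1], [29.7, 28.4], [50.0, 5.0]])
dets_img3 = np.array([[10.0,  9.9], [30.3, 27.6]])

# combine the individual detection results of all images covering the scene
all_dets = np.vstack([dets_img1, dets_img2, dets_img3])

# kernel density estimation over the pooled detections
kde = gaussian_kde(all_dets.T, bw_method=0.3)

# impact map: classify grid cells as contaminated where density exceeds
# a threshold; redundantly confirmed craters clear it, lone detections do not
xs, ys = np.mgrid[0:60:60j, 0:40:40j]
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
impact = density > 0.5 * density.max()
```

Pooling before the density estimate is what exploits the redundancy: sites confirmed in several images accumulate density, whereas single-image false alarms fall below the threshold.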