2 research outputs found

    Neuro-inspired edge feature fusion using Choquet integrals

    It is known that the human visual system performs hierarchical information processing in which early vision cues (or primitives) are fused in the visual cortex to compose complex shapes and descriptors. While some aspects of this process, such as lens adaptation or feature detection, have been extensively studied, others, such as feature fusion, have been mostly left aside. In this work, we elaborate on the fusion of early vision primitives using generalizations of the Choquet integral, novel aggregation operators that have been extensively studied in recent years. We propose to use these generalizations to sensibly fuse elementary edge cues, in an attempt to model the behaviour of neurons in the early visual cortex. Our proposal leads to a fully-framed edge detection algorithm whose performance is put to the test on state-of-the-art edge detection datasets.

    The authors gratefully acknowledge the financial support of the Spanish Ministry of Science and Technology (project PID2019-108392GB-I00, AEI/10.13039/501100011033), the Research Services of Universidad Pública de Navarra, CNPq (307781/2016-0, 301618/2019-4), FAPERGS (19/2551-0001660) and PNPD/CAPES (464880/2019-00).
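    As a concrete reference point, the standard discrete Choquet integral (of which the paper studies generalizations) can be sketched as follows. The cardinality-based measure `mu` and the cue values are illustrative assumptions, not taken from the paper:

    ```python
    def choquet(values, measure):
        """Discrete Choquet integral of `values` w.r.t. a symmetric
        (cardinality-based) fuzzy measure: measure(k) depends only on
        the number k of cues in the coalition."""
        x = sorted(values)                     # ascending order
        n = len(x)
        total, prev = 0.0, 0.0
        for i, xi in enumerate(x):
            # coalition A_i = cues whose value is >= x[i]; its size is n - i
            total += (xi - prev) * measure(n - i)
            prev = xi
        return total

    # Hypothetical power measure mu(A) = (|A|/n)^q; q = 1 recovers the arithmetic mean
    n, q = 3, 2.0
    mu = lambda k: (k / n) ** q

    # Fuse three edge cues (e.g. gradient responses in [0, 1]) at one pixel;
    # the fused value always lies between the smallest and largest cue
    fused = choquet([0.2, 0.5, 0.9], mu)
    ```

    Generalized forms (e.g. CF- or CC-integrals) replace the product `(xi - prev) * measure(n - i)` with other fusion functions while keeping this sorted-accumulation structure.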

    Contour detection based on anisotropic edge strength and hierarchical superpixel contrast

    Contour detection is a fundamental problem in computer vision, yet existing methods usually suffer from the interference of noise and textures. To address this problem, we present an unsupervised contour detection method based on anisotropic edge strength and hierarchical superpixel contrast. The anisotropic edge strength is obtained through the first derivative of anisotropic Gaussian kernels, which incorporate an adaptive anisotropy factor. The anisotropic kernel improves robustness to noise, while the adaptive anisotropy factor attenuates the anisotropy stretch effect. Using a method based on region merging, we obtain a hierarchical set of superpixel maps and thus compute superpixel contrast maps at different hierarchy levels. The contour strength map is then obtained by multiplying the anisotropic edge strength map by the average of the hierarchical superpixel contrast maps. Experimental results on two publicly available datasets validate the superiority of the proposed method over the competing methods. On the Berkeley Segmentation Dataset & Benchmark 300 and the Berkeley Segmentation Dataset & Benchmark 500, our method obtains (optimal dataset scale) F-measure values of 0.63 and 0.67, respectively, an improvement of at least 0.06 over the competing methods.
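    The pipeline described above can be sketched pointwise. The kernel parametrization here (long-axis standard deviation `rho*sigma`, short-axis `sigma/rho`, derivative taken across the long axis) and all parameter names are illustrative assumptions, not the paper's exact definitions:

    ```python
    import math

    def aniso_gaussian_dx(x, y, sigma=1.0, rho=2.0, theta=0.0):
        """Pointwise value of the first derivative, taken across the long
        axis, of an anisotropic Gaussian oriented at `theta`. The
        area-preserving scaling (rho*sigma along, sigma/rho across) is an
        illustrative choice, not the paper's kernel."""
        u = x * math.cos(theta) + y * math.sin(theta)    # along the edge
        v = -x * math.sin(theta) + y * math.cos(theta)   # across the edge
        su, sv = rho * sigma, sigma / rho
        g = math.exp(-0.5 * ((u / su) ** 2 + (v / sv) ** 2)) / (2 * math.pi * sigma ** 2)
        return -(v / sv ** 2) * g                        # d/dv of the Gaussian

    def edge_strength(oriented_responses):
        """Anisotropic edge strength at a pixel: the maximum response
        magnitude over a bank of kernel orientations."""
        return max(abs(r) for r in oriented_responses)

    def contour_strength(edge, contrasts):
        """Contour strength at a pixel: edge strength multiplied by the
        mean of the hierarchical superpixel contrast values there."""
        return edge * sum(contrasts) / len(contrasts)
    ```

    In a full implementation the kernel would be sampled on a grid and convolved with the image at several orientations; the multiplication step suppresses strong gradients that fall inside textured regions, since those pixels get low superpixel contrast.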