
    A blind definition of shape

    In this note, we propose a general definition of shape that is compatible both with the one proposed in phenomenology (gestaltism) and with a computer vision implementation. We reverse the usual order in computer vision: rather than defining “shape recognition” as a task requiring a “model” pattern to be searched for in all images of a certain kind, we give a “blind” definition of shapes relying only on invariance and repetition arguments. Given a set of images I, we call a shape of this set any spatial pattern which can be found at several locations of some image, or in several different images of I. (This means that the shapes of a set of images are defined without any a priori assumption or knowledge.) The definition is powerful when it is invariant, and we prove that the following invariance requirements can be met both in theory and in practice: local contrast invariance, and robustness to blur, noise, sampling, and affine deformations. We display experiments with single images and image pairs, showing the detected shapes in each case. Surprisingly, but in accordance with Gestalt theory, the repetition of shapes is so frequent in human environments that many shapes can even be learned from single images.
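    The blind definition above can be illustrated with a toy sketch of ours (not the paper's algorithm): contrast-normalize small patches, then call a "shape" any patch that recurs at a sufficiently distant location of the same image.

```python
import numpy as np

def normalize(patch, eps=1e-8):
    # Local contrast invariance: subtract the mean, divide by the norm.
    p = patch - patch.mean()
    return p / (np.linalg.norm(p) + eps)

def repeated_patches(img, size=3, thresh=0.99):
    # Collect every size x size patch with its top-left position.
    H, W = img.shape
    patches = [((i, j), normalize(img[i:i + size, j:j + size]))
               for i in range(H - size + 1)
               for j in range(W - size + 1)]
    # A "shape" is any patch whose normalized version recurs at a
    # distant (non-overlapping-ish) location.
    shapes = []
    for a in range(len(patches)):
        for b in range(a + 1, len(patches)):
            (pa, va), (pb, vb) = patches[a], patches[b]
            far = abs(pa[0] - pb[0]) + abs(pa[1] - pb[1]) >= size
            if far and np.dot(va.ravel(), vb.ravel()) > thresh:
                shapes.append((pa, pb))
    return shapes
```

A constant image yields no shapes (flat patches normalize to the zero vector), while a motif pasted twice is detected as a repetition.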

    Efficient joint noise removal and multi exposure fusion

    Multi-exposure fusion (MEF) is a technique for combining different images of the same scene, acquired with different exposure settings, into a single image. Existing MEF algorithms combine the set of images by selecting, from each one, the best-exposed parts. We propose a novel multi-exposure image fusion chain that takes noise removal into account. The method exploits DCT processing and the multi-image nature of the MEF problem: a joint fusion and denoising strategy based on spatio-temporal patch selection and collaborative 3D thresholding. The overall strategy denoises and fuses the set of images without recovering each denoised exposure separately, leading to a very efficient procedure.
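    A heavily simplified sketch of the ingredients (our illustration, not the paper's exact chain, which works on spatio-temporal 3D patch stacks): take a DCT along the exposure axis, hard-threshold the non-DC coefficients, invert, and fuse with well-exposedness weights. The threshold and weight formula are assumptions for the demo.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis (rows = frequencies, columns = samples).
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1 / n)
    M[1:] *= np.sqrt(2 / n)
    return M

def fuse_denoise(stack, thresh=0.1):
    # stack: (K, H, W) exposures scaled to [0, 1].
    K = stack.shape[0]
    D = dct_matrix(K)
    c = np.tensordot(D, stack, axes=(1, 0))      # DCT along exposure axis
    c[1:][np.abs(c[1:]) < thresh] = 0.0          # hard threshold, keep DC
    den = np.tensordot(D.T, c, axes=(1, 0))      # inverse (D is orthonormal)
    w = np.exp(-((den - 0.5) ** 2) / 0.08)       # well-exposedness weights
    return (w * den).sum(axis=0) / w.sum(axis=0)
```

On a noise-free bracket of constant images {0.3, 0.5, 0.7} the reconstruction is exact and the symmetric weights fuse to the mid-grey 0.5.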

    ON THE THEORY OF PLANAR SHAPE

    One of the aims of computer vision over the past 30 years has been to recognize shapes by numerical algorithms. What, then, are the geometric features on which shape recognition can be based? In this paper, we review the mathematical arguments leading to a unique definition of planar shape elements. This definition is derived from the requirement of invariance to no fewer than five classes of perturbations, namely noise, affine distortion, contrast changes, occlusion, and background. This leads to a single possibility: shape elements as the normalized, affine-smoothed pieces of the level lines of the image. As a main application, we show the existence of a generic image comparison technique able to find all shape elements common to two images.
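    Level lines are the boundaries of upper level sets, which makes them invariant to increasing contrast changes. A minimal sketch of ours (the paper additionally normalizes and affine-smooths the extracted pieces) records the "edgels" a level line crosses:

```python
import numpy as np

def level_line_edgels(img, level):
    # An edgel is a pair of 4-adjacent pixels on opposite sides of the
    # level: the boundary of the upper set {img >= level} passes
    # between them.
    up = img >= level
    H, W = img.shape
    edgels = []
    for i in range(H):
        for j in range(W):
            if j + 1 < W and up[i, j] != up[i, j + 1]:
                edgels.append(((i, j), (i, j + 1)))
            if i + 1 < H and up[i, j] != up[i + 1, j]:
                edgels.append(((i, j), (i + 1, j)))
    return edgels
```

Applying an increasing contrast change to the image (and the level) leaves the edgel set unchanged, which is the invariance the abstract relies on.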

    Conditional image diffusion

    In this paper, a theoretical framework for the conditional diffusion of digital images is presented. Different approaches have been proposed to solve this problem by extrapolating the idea of anisotropic diffusion for grey-level images to vector-valued images. There, the diffusion of each channel is conditioned to a direction which normally takes into account information from all channels. In our approach, the diffusion model assumes a priori knowledge of the diffusion direction throughout the process. The consistency of the model is shown by proving existence and uniqueness of the solution of the proposed equation within the theory of viscosity solutions. A numerical scheme adapted to this equation, based on the neighborhood filter, is also proposed. Finally, we discuss several applications and compare the corresponding numerical schemes for the proposed model.
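    The neighborhood filter the scheme builds on can be sketched as follows (our illustration of the classical Yaroslavsky filter, not the paper's conditional scheme): each pixel is replaced by an average of its spatial neighbors, weighted by grey-level similarity.

```python
import numpy as np

def neighborhood_filter(img, h=0.1, radius=1):
    # One iteration: average nearby pixels weighted by
    # exp(-(I(p) - I(q))^2 / h^2), so dissimilar pixels barely
    # contribute and edges are preserved.
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            nb = img[i0:i1, j0:j1].astype(float)
            w = np.exp(-((nb - img[i, j]) ** 2) / h ** 2)
            out[i, j] = (w * nb).sum() / w.sum()
    return out
```

With a small filtering parameter h, a sharp step survives the iteration almost untouched, while flat regions are left exactly constant.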

    Automatic color palette

    We present a method for the automatic estimation of the minimum set of colors needed to describe an image. We call this minimal set the “color palette”. The proposed method combines the well-known K-Means clustering technique with a thorough analysis of the color information of the image: the initial set of cluster seeds used in K-Means is automatically inferred from this analysis. Color information is analyzed by studying the 1D histograms associated with the hue, saturation and intensity components of the image colors. In order to parse these 1D histograms properly, a new histogram segmentation technique is proposed. The experimental results support the capacity of the method to obtain the most significant colors in the image, even when they belong to small details in the scene. The obtained palette can be combined with a dictionary of color names in order to provide a qualitative image description.
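    A 1D stand-in for the idea (our sketch; the paper uses its own histogram segmentation technique on the hue, saturation and intensity histograms): take the local maxima of a 1D histogram as seeds, then refine them with plain K-Means.

```python
import numpy as np

def histogram_peaks(values, bins=16):
    # Local maxima of a 1D histogram: candidate cluster seeds.
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.array([centers[k] for k in range(bins)
                     if hist[k] > 0
                     and (k == 0 or hist[k] >= hist[k - 1])
                     and (k == bins - 1 or hist[k] > hist[k + 1])])

def kmeans_1d(values, seeds, iters=10):
    # Plain K-Means on scalars, started from the histogram peaks, so
    # the number and placement of clusters come from the data.
    centers = seeds.astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]),
                           axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return centers
```

Seeding from histogram modes avoids the arbitrary random initialization of vanilla K-Means, which is the role the seed-inference step plays in the abstract.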