7 research outputs found

    REGION HOMOGENEITY IN THE LOGARITHMIC IMAGE PROCESSING FRAMEWORK: APPLICATION TO REGION GROWING ALGORITHMS

    In order to create an image segmentation method robust to lighting changes, two novel homogeneity criteria of an image region were studied. Both were defined using the Logarithmic Image Processing (LIP) framework, whose laws model lighting changes. The first criterion estimates the LIP-additive homogeneity and is based on the LIP-additive law. It is theoretically insensitive to lighting changes caused by variations of the camera exposure time or source intensity. The second, the LIP-multiplicative homogeneity criterion, is based on the LIP-multiplicative law and is insensitive to changes due to variations of the object thickness or opacity. Each criterion is then applied in Revol and Jourlin’s (1997) region growing method, which is based on the homogeneity of an image region. The region growing method therefore becomes robust to the lighting changes specific to each criterion. Experiments on simulated and on real images presenting lighting variations demonstrate the robustness of the criteria to those variations. Compared to a state-of-the-art method based on the image component-tree, ours is more robust. These results open the way to numerous applications where the lighting is uncontrolled or partially controlled.
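The two LIP laws the abstract refers to have standard closed forms in the LIP literature. A minimal sketch, assuming 8-bit images with gray-level bound M = 256 (the function names are illustrative, not from the paper):

```python
M = 256.0  # gray-level upper bound assumed for 8-bit images

def lip_add(f, g):
    """LIP-additive law: models superposition of absorbing media,
    e.g. a change of exposure time or source intensity."""
    return f + g - f * g / M

def lip_mul(lam, f):
    """LIP-multiplicative law: models scaling the thickness or
    opacity of the observed object by a factor lam."""
    return M - M * (1.0 - f / M) ** lam
```

For example, LIP-adding a gray level of 100 to itself yields about 161 rather than 200, reflecting the saturating, bounded nature of the model.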

    QBRIX : a quantile-based approach to retinex

    In this paper, we introduce a novel probabilistic version of retinex, based on a probabilistic formalization of random spray retinex sampling, which contributes to the investigation of the spatial properties of the model. The various available versions of the retinex algorithm are characterized by different procedures for exploring the image content (so as to obtain, for each pixel, a reference white value), which is then used to rescale the pixel lightness. Here we propose an alternative procedure, which computes the reference white value from the percentile values of the pixel population. We formalize two versions of the algorithm, one with global and one with local behavior, characterized by different computational costs.
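The global variant can be pictured as taking a single percentile of the whole pixel population as the reference white. A minimal sketch of that idea (the function name and the default percentile q = 99 are assumptions, not the paper's exact parameters):

```python
import numpy as np

def quantile_white_rescale(channel, q=99.0):
    """Global-behavior sketch: use the q-th percentile of the pixel
    population as the reference white, then rescale lightness by it."""
    white = np.percentile(channel, q)
    return np.clip(channel / max(white, 1e-12), 0.0, 1.0)
```

Using a high percentile rather than the maximum makes the reference white robust to a few outlier bright pixels; the local variant would repeat this computation over neighborhoods at a higher computational cost.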

    High Dynamic Range Image Rendering Using a Retinex-Based Adaptive Filter

    We propose a new method to render high dynamic range images that models global and local adaptation of the human visual system. Our method is based on the center-surround Retinex model. Its first novelty is the use of an adaptive surround, whose shape follows the image's high-contrast edges, thus reducing the halo artifacts common to other methods. Secondly, only the luminance channel is processed, defined as the first component of a principal component analysis. Principal component analysis provides orthogonality between channels and thus reduces the chromatic changes caused by the modification of luminance. We show that our method efficiently renders high dynamic range images, and we compare our results with the current state of the art.
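Extracting a luminance channel as the first principal component of the RGB pixel cloud can be sketched as follows (an illustrative implementation, not the authors' code; the sign convention is an assumption):

```python
import numpy as np

def pca_luminance(rgb):
    """Project RGB pixels onto the first principal component of the
    pixel cloud to obtain a luminance channel decorrelated from the
    remaining two chromatic components."""
    pixels = rgb.reshape(-1, 3).astype(float)
    centered = pixels - pixels.mean(axis=0)
    # Right singular vectors are the principal axes of the pixel cloud
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0] if vt[0].sum() >= 0 else -vt[0]  # fix arbitrary PCA sign
    return (centered @ axis).reshape(rgb.shape[:2])
```

Because the principal axes are orthogonal, modifying only this first component leaves the other two (chromatic) components unchanged, which is the decorrelation property the abstract relies on.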

    Tone mapping for high dynamic range images

    Tone mapping is an essential step for the reproduction of "nice looking" images. It provides the mapping from the luminances of the original scene to the output device's display values. When the dynamic range of the captured scene is smaller or larger than that of the display device, tone mapping expands or compresses the luminance ratios. We address the problem of tone mapping high dynamic range (HDR) images to standard displays (CRT, LCD) and to HDR displays. With standard displays, the dynamic range of the captured HDR scene must be compressed significantly, which can induce a loss of contrast resulting in a loss of detail visibility. Local tone mapping operators can be used in addition to the global compression to increase the local contrast and thus improve detail visibility, but this tends to create artifacts. We developed a local tone mapping method that solves the problems generally encountered by local tone mapping algorithms. Namely, it creates no halo artifacts, does not gray out low-contrast areas, and provides good color rendition. We then investigated specifically the rendition of color and confirmed that local tone mapping algorithms must be applied to the luminance channel only. We showed that the correlation between luminance and chrominance plays a role in the appearance of the final image, but a perfect decorrelation is not necessary. Recently developed HDR monitors enable the display of HDR images with hardly any compression of their dynamic range. The arrival of these displays on the market creates the need for new tone mapping algorithms. In particular, legacy images that were mapped to SDR displays must be re-rendered to HDR displays, taking best advantage of the increase in dynamic range. This operation can be seen as the reverse of the tone mapping to SDR. We propose a piecewise linear tone scale function that enhances the brightness of specular highlights so that the sensation of naturalness is improved.
    Our tone scale algorithm is based on the segmentation of the image into its diffuse and specular components, as well as on the range of display luminance allocated to the specular and diffuse components, respectively. We performed a psychovisual experiment to validate the benefit of our tone scale. The results showed that, with HDR displays, allocating more luminance range to the specular component than was allocated in the image rendered to SDR displays provides more natural-looking images.
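A piecewise linear tone scale of the kind described can be sketched as two linear segments meeting at a knee: luminances up to the diffuse maximum get one share of the display range, and specular highlights above it get the rest. This is a hypothetical illustration of the idea; the function name, parameters, and the default split are assumptions, not the paper's values:

```python
import numpy as np

def piecewise_tone_scale(lum, diffuse_max, display_max, diffuse_share=0.7):
    """Map the diffuse range [0, diffuse_max] linearly onto the first
    diffuse_share of the display range, and the specular range above
    diffuse_max linearly onto the remainder."""
    knee = diffuse_share * display_max
    spec_span = lum.max() - diffuse_max + 1e-12  # avoid division by zero
    return np.where(
        lum <= diffuse_max,
        lum / diffuse_max * knee,
        knee + (lum - diffuse_max) / spec_span * (display_max - knee),
    )
```

Lowering `diffuse_share` allocates more display luminance to the specular segment, which is the direction the psychovisual experiment found to look more natural on HDR displays.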

    Reduced Complexity Retinex Algorithm via the Variational Approach

    Retinex theory addresses the problem of separating the illumination from the reflectance in a given image, and thereby compensating for non-uniform lighting. In a previous paper (Kimmel et al., 2003), a variational model for the Retinex problem was introduced. This model was shown to unify previous methods, leading to a new illumination estimation algorithm. The main drawback of the above approach is its numerical implementation. The computational complexity of the illumination reconstruction algorithm is relatively high, since in the obtained Quadratic Programming (QP) problem, the whole image is the unknown. In addition, the process requirements for obtaining the optimal solution are not chosen a priori based on hardware/software constraints. In this paper we propose a way to compromise between the full-fledged solution of the theoretical model and a variety of efficient yet limited computational methods for which we develop optimal solutions. For computational methods parameterized linearly by a small set of free parameters, it is shown that a reduced-size QP problem is obtained with a unique solution. Several special cases of this general solution are presented and analyzed: a Look-Up-Table (LUT), linear or nonlinear Volterra filters, and expansion using a truncated set of basis functions. The proposed solutions are sub-optimal compared to the original Retinex algorithm, yet their numerical implementations are much more efficient. Results indicate that the proposed methodology can enhance images for a reduced computational effort.
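The complexity reduction in the LUT special case can be illustrated by replacing the per-pixel unknown with a small set of LUT entries: a quadratic objective over K values instead of over the whole image, giving a K-by-K linear system with a unique solution. The sketch below is a simplified stand-in for the paper's actual objective (its function name, binning, and smoothness penalty are assumptions chosen only to show the reduced problem size):

```python
import numpy as np

def lut_illumination(s, K=32, lam=10.0):
    """Fit a K-entry LUT T minimizing
        sum_i (T[bin(s_i)] - s_i)^2 + lam * sum_k (T[k+1] - T[k])^2,
    so the illumination l = T[bin(s)] is found by solving a K x K
    system instead of a QP over every pixel."""
    shape = np.shape(s)
    s = np.asarray(s, dtype=float).ravel()
    bins = ((s - s.min()) / (np.ptp(s) + 1e-12) * K).astype(int)
    bins = np.clip(bins, 0, K - 1)
    counts = np.bincount(bins, minlength=K).astype(float)
    sums = np.bincount(bins, weights=s, minlength=K)
    D = np.diff(np.eye(K), axis=0)           # (K-1) x K first differences
    A = np.diag(counts) + lam * D.T @ D      # positive definite for lam > 0
    T = np.linalg.solve(A, sums)             # unique minimizer
    return T[bins].reshape(shape)
```

Since the smoothness penalty makes the system matrix positive definite whenever any pixel is observed, the reduced problem always has a unique solution, mirroring the uniqueness result stated in the abstract.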