2,148 research outputs found

    Physical-based optimization for non-physical image dehazing methods

    Images captured under hazy conditions (e.g. fog, air pollution) usually present faded colors and loss of contrast. To improve their visibility, a process called image dehazing can be applied. Some of the most successful image dehazing algorithms are based on image processing methods but do not follow any physical image formation model, which limits their performance. In this paper, we propose a post-processing technique that alleviates this handicap by forcing the output of the original method to be consistent with a popular physical model of image formation under haze. Our results improve upon those of the original methods both qualitatively and according to several metrics, and they have also been validated via psychophysical experiments. These results are particularly striking in terms of avoiding over-saturation and reducing color artifacts, which are the most common shortcomings of image dehazing methods.
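
    For context, the "popular physical model" invoked here is presumably the standard Koschmieder-type haze formation model (an assumption; the abstract does not name it explicitly):

        I(x) = J(x)\,t(x) + A\,(1 - t(x)), \qquad t(x) = e^{-\beta d(x)},

    where I is the observed hazy image, J the haze-free scene radiance, A the global atmospheric light, and t the transmission determined by the scene depth d and the scattering coefficient \beta. A post-processing step in this spirit would adjust a dehazed result so that it can be explained by some plausible pair (t, A).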

    Variational models for color image correction inspired by visual perception and neuroscience

    Reproducing the perception of a real-world scene on a display device is a very challenging task which requires an understanding of the camera processing pipeline, the display process, and the way the human visual system processes the light it captures. Mathematical models based on psychophysical and physiological laws of color vision, named Retinex, provide efficient tools to handle degradations produced during the camera processing pipeline, such as the reduction of contrast. In particular, Batard and Bertalmío [J. Math. Imaging Vis. 60(6), 849-881 (2018)] described some psychophysical laws on brightness perception as covariant derivatives, included them in a variational model, and observed that the quality of the color image correction is correlated with the accuracy of the vision model it includes. Based on this observation, we postulate that this model can be improved by including more accurate data on vision, with special attention here to visual neuroscience. Then, inspired by the presence in area V1 of the visual cortex of neurons responding to different visual attributes, such as orientation, color, or movement, to name a few, and of horizontal connections modeling the interactions between those neurons, we construct two variational models to process both local (edges, textures) and global (contrast) features. This is an improvement with respect to the model of Batard and Bertalmío, as the latter cannot process local and global features independently and simultaneously. Finally, we conduct experiments on color images which corroborate the improvement provided by the new models.
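
    As a rough, hedged illustration of the kind of energy being generalized (the exact functional of Batard and Bertalmío differs in its details), a covariant-derivative variational model for a corrected image u given an input u_0 can be written as

        E(u) = \int_\Omega \|\nabla^A u\|^2 \, dx + \frac{\lambda}{2} \int_\Omega (u - u_0)^2 \, dx,

    where \nabla^A is a covariant derivative (a gradient corrected by a connection term encoding a psychophysical law on brightness perception) and \lambda balances correction against fidelity to the input. The two models constructed here go beyond this form so that local features (edges, textures) and global features (contrast) can be processed independently and simultaneously.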

    Manipulating Attributes of Natural Scenes via Hallucination

    In this study, we explore building a two-stage framework for enabling users to directly manipulate high-level attributes of a natural scene. The key to our approach is a deep generative network which can hallucinate images of a scene as if they were taken in a different season (e.g. during winter), weather condition (e.g. on a cloudy day) or time of day (e.g. at sunset). Once the scene is hallucinated with the given attributes, the corresponding look is transferred to the input image while keeping the semantic details intact, giving a photo-realistic manipulation result. As the proposed framework hallucinates what the scene will look like, it does not require any reference style image, as is commonly needed by most appearance or style transfer approaches. Moreover, it allows a given scene to be manipulated according to a diverse set of transient attributes within a single model, eliminating the need to train a separate network for each translation task. Our comprehensive set of qualitative and quantitative results demonstrates the effectiveness of our approach against competing methods. Comment: Accepted for publication in ACM Transactions on Graphics.
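
    A minimal sketch of the two-stage idea in Python follows. All names are hypothetical, the random "generator" stands in for the paper's trained deep generative network, and per-channel statistics matching is only a crude proxy for its photo-realistic style transfer.

    import numpy as np

    def hallucinate(layout, attributes, rng):
        # Stage 1 stand-in: a trained conditional generator would synthesise the
        # scene under the target transient attributes; random values stand in here.
        h, w = layout.shape
        return rng.random((h, w, 3)) * attributes.get("brightness", 1.0)

    def transfer_look(content, hallucinated):
        # Stage 2 stand-in: move the hallucinated colour statistics onto the input
        # image while keeping its spatial detail (per-channel mean/std matching).
        out = np.empty_like(content)
        for c in range(3):
            src, ref = content[..., c], hallucinated[..., c]
            out[..., c] = (src - src.mean()) / (src.std() + 1e-8) * ref.std() + ref.mean()
        return np.clip(out, 0.0, 1.0)

    rng = np.random.default_rng(0)
    photo = rng.random((128, 128, 3))    # photo to be edited
    layout = np.zeros((128, 128))        # stand-in for a semantic layout
    edited = transfer_look(photo, hallucinate(layout, {"brightness": 0.6}, rng))
    print(edited.shape, float(edited.min()), float(edited.max()))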

    Variational Principles and Perceptual Color Correction of Digital Images

    Variational principles appear in a vast number of scientific disciplines. They often provide a 'view from above' which permits one to better comprehend and analyse problems. In this paper we show how variational techniques can be used to model perceptual color correction of digital images. We will show that the basic human visual phenomenology defines a unique class of functionals whose minimization gives rise to color enhancement. This framework provides a unified home for notable models of perceptual color correction, such as Retinex.
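
    A representative member of this class of functionals, stated as a hedged illustration of the general form used in perceptual color correction (not necessarily the exact expression in the paper), is

        E(I) = \frac{\alpha}{2} \sum_x \left(I(x) - \tfrac{1}{2}\right)^2 + \frac{\beta}{2} \sum_x \left(I(x) - I_0(x)\right)^2 - \frac{\gamma}{2} \sum_{x,y} w(x,y)\,\big|I(x) - I(y)\big|,

    where I_0 is the input channel, the first two terms penalise departures from middle grey and from the original data, and the last term rewards local contrast weighted by a spatial kernel w; minimizing E enhances contrast in a Retinex-like, perceptually inspired way.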

    Generative Compression

    Traditional image and video compression algorithms rely on hand-crafted encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the data being compressed. Here we describe the concept of generative compression, the compression of data using generative models, and suggest that it is a direction worth pursuing to produce more accurate and visually pleasing reconstructions at much deeper compression levels for both image and video data. We also demonstrate that generative compression is orders of magnitude more resilient to bit error rates (e.g. from noisy wireless channels) than traditional variable-length coding schemes.
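
    The data flow of generative compression can be sketched in a few lines of Python. The random linear encoder and pseudo-inverse decoder below are only stand-ins for trained networks (a trained generative decoder is what makes the reconstructions accurate); the point is that only a short, coarsely quantised code is transmitted.

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random(32 * 32)                              # flattened toy "image"
    enc = rng.standard_normal((64, image.size)) / np.sqrt(image.size)  # stand-in encoder
    dec = np.linalg.pinv(enc)                                # stand-in generative decoder

    latent = enc @ image                                     # analysis: image -> short code
    bits = 6
    lo, hi = latent.min(), latent.max()
    q = np.round((latent - lo) / (hi - lo) * (2**bits - 1))  # quantise each entry to 6 bits
    latent_hat = q / (2**bits - 1) * (hi - lo) + lo          # dequantise at the receiver

    recon = dec @ latent_hat                                 # synthesis: code -> image
    print("reconstruction MSE:", float(np.mean((recon - image) ** 2)))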

    Visibility recovery on images acquired in attenuating media. Application to underwater, fog, and mammographic imaging

    When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing the visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal, and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition of three-dimensional structures onto a bidimensional image produces low-contrast images in which structures of interest suffer from diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by the atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually pleasant and useful output image, with better contrast and increased visibility. In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.
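
    As a hedged, generic illustration of the variational machinery described above (the dissertation's energies are image-dependent and tailored to each degradation model; this is only the simplest fidelity-plus-smoothness example), an energy made of integral terms can be minimized by explicit gradient descent:

    import numpy as np

    def restore(f, lam=1.0, step=0.1, iters=200):
        # Gradient descent on E(u) = 0.5*||u - f||^2 + 0.5*lam*||grad u||^2;
        # the gradient of the smoothness term is -lam * Laplacian(u)
        # (periodic boundary conditions via np.roll).
        u = f.copy()
        for _ in range(iters):
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
            u -= step * ((u - f) - lam * lap)
        return u

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    clean = 0.5 + 0.25 * np.outer(np.sin(x), np.cos(x))   # smooth periodic toy image
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    restored = restore(noisy)
    print("MSE before:", float(np.mean((noisy - clean) ** 2)),
          "after:", float(np.mean((restored - clean) ** 2)))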