19 research outputs found

    Preserving Perceptual Contrast in Decolorization with Optimized Color Orders

    Get PDF
    Converting a color image to a grayscale image, namely decolorization, is an important process for many real-world applications. Previous methods build contrast loss functions to minimize the contrast differences between the color images and the resultant grayscale images. In this paper, we improve upon a widely used decolorization method with two extensions. First, we relax the need for heuristics on color orders, which the baseline method relies on when computing contrast differences: in our method, the color orders are incorporated into the loss function and determined through optimization. Second, we apply a nonlinear function to the grayscale contrast to better model human perception of contrast. Both qualitative and quantitative results on the standard benchmark demonstrate the effectiveness of our two extensions.
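    To make the two extensions concrete, the following is a minimal sketch of a contrast-preserving decolorization energy under the common linear-mapping setup (grayscale as a weighted sum of the RGB channels). The function names, the power-function nonlinearity, and the joint optimization of the color orders are illustrative assumptions, not the authors' implementation.

        # Minimal sketch of a contrast-preserving decolorization energy.
        # Assumptions (not from the paper): linear grayscale mapping,
        # a compressive power function as the perceptual nonlinearity,
        # and color orders s in {-1, +1} optimized jointly with the weights.
        import numpy as np

        def decolorization_energy(w, s, rgb, pairs, delta, gamma=0.5):
            """w: (3,) channel weights; s: (P,) color orders in {-1, +1};
            rgb: (N, 3) pixel colors; pairs: (P, 2) pixel index pairs;
            delta: (P,) color-contrast magnitudes (e.g., CIELab distances)."""
            g = rgb @ w                                  # linear grayscale mapping
            d = g[pairs[:, 0]] - g[pairs[:, 1]]          # signed grayscale contrast
            perceived = np.sign(d) * np.abs(d) ** gamma  # nonlinear perceived contrast
            return np.sum((perceived - s * delta) ** 2)  # contrast-difference loss

    Minimizing this energy over both the weights w and the orders s is what removes the need for hand-crafted color-order heuristics.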

    Two-level joint local Laplacian texture filtering

    Get PDF

    Learning a self-supervised tone mapping operator via feature contrast masking loss

    Get PDF
    High Dynamic Range (HDR) content is becoming ubiquitous due to the rapid development of capture technologies. Nevertheless, the dynamic range of common display devices is still limited, so tone mapping (TM) remains a key challenge for image visualization. Recent work has demonstrated that neural networks can achieve remarkable performance in this task compared to traditional methods; however, the quality of the results of these learning-based methods is limited by the training data. Most existing works use as their training set a curated selection of best-performing results from existing traditional tone-mapping operators (often guided by a quality metric); the quality of newly generated results is therefore fundamentally limited by the performance of such operators. This quality may be further limited by the pool of HDR content used for training. In this work we propose a learning-based self-supervised tone mapping operator that is trained at test time specifically for each HDR image and does not need any data labeling. The key novelty of our approach is a carefully designed loss function built upon fundamental knowledge of contrast perception that allows for directly comparing the content in the HDR and tone-mapped images. We achieve this goal by reformulating classic VGG feature maps into feature contrast maps that normalize local feature differences by their average magnitude in a local neighborhood, allowing our loss to account for contrast masking effects. We perform extensive ablation studies and exploration of parameters and demonstrate that our solution outperforms existing approaches with a single set of fixed parameters, as confirmed by both objective and subjective metrics.
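    As a rough illustration of the feature contrast maps described above, the sketch below normalizes local VGG feature differences by the average feature magnitude in a local neighborhood; the kernel size, the epsilon, and the L1 comparison are illustrative choices, not the paper's exact formulation.

        # Sketch of feature contrast maps for a contrast-masking-aware loss.
        # feat is assumed to be a (B, C, H, W) VGG activation tensor.
        import torch.nn.functional as F

        def feature_contrast_map(feat, ksize=5, eps=1e-6):
            pad = ksize // 2
            local_mean = F.avg_pool2d(feat, ksize, stride=1, padding=pad)
            diff = feat - local_mean                 # local feature differences
            # Normalize by the average feature magnitude in the neighborhood,
            # so strong local activity masks small differences.
            norm = F.avg_pool2d(feat.abs(), ksize, stride=1, padding=pad)
            return diff / (norm + eps)

        def feature_contrast_loss(feat_hdr, feat_tm):
            # Directly compare HDR and tone-mapped content in contrast space.
            return F.l1_loss(feature_contrast_map(feat_hdr),
                             feature_contrast_map(feat_tm))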

    StyleDiffusion: Controllable Disentangled Style Transfer via Diffusion Models

    Full text link
    Content and style (C-S) disentanglement is a fundamental problem and critical challenge of style transfer. Existing approaches based on explicit definitions (e.g., the Gram matrix) or implicit learning (e.g., GANs) are neither interpretable nor easy to control, resulting in entangled representations and less satisfying results. In this paper, we propose a new C-S disentangled framework for style transfer without using previous assumptions. The key insight is to explicitly extract the content information and implicitly learn the complementary style information, yielding interpretable and controllable C-S disentanglement and style transfer. A simple yet effective CLIP-based style disentanglement loss, coordinated with a style reconstruction prior, is introduced to disentangle C-S in the CLIP image space. By further leveraging the powerful style removal and generative ability of diffusion models, our framework achieves results superior to the state of the art together with flexible C-S disentanglement and trade-off control. Our work provides new insights into C-S disentanglement in style transfer and demonstrates the potential of diffusion models for learning well-disentangled C-S characteristics. (Comment: Accepted by ICCV 2023.)
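    As a hedged sketch of what an objective in CLIP image space can look like (the paper's actual disentanglement loss and style reconstruction prior are more involved), the following measures style distance as cosine distance between CLIP image embeddings; clip.load and encode_image are the public OpenAI CLIP API, the rest is illustrative.

        # Generic CLIP-image-space style distance (illustrative, not the
        # paper's exact loss). Inputs are image batches already preprocessed
        # for CLIP and placed on the same device as the model.
        import torch
        import torch.nn.functional as F
        import clip  # https://github.com/openai/CLIP

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model, preprocess = clip.load("ViT-B/32", device=device)

        def clip_style_distance(stylized, style_ref):
            e_sty = model.encode_image(stylized)     # (B, D) image embeddings
            e_ref = model.encode_image(style_ref)
            return 1.0 - F.cosine_similarity(e_sty, e_ref, dim=-1).mean()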

    Efficient and effective objective image quality assessment metrics

    Get PDF
    The acquisition, transmission, and storage of images and videos have increased substantially in recent years. At the same time, there has been an increasing demand for high-quality images and videos that provide a satisfactory quality of experience for viewers. In this respect, high dynamic range (HDR) imaging with more than 8-bit depth is an appealing approach for capturing more realistic images and videos. Objective image and video quality assessment plays a significant role in monitoring and enhancing image and video quality in applications such as image acquisition, compression, multimedia streaming, restoration, enhancement, and display. The main contributions of this work are efficient features and similarity maps that can be used to design perceptually consistent image quality assessment tools. In this thesis, perceptually consistent full-reference image quality assessment (FR-IQA) metrics are proposed to assess the quality of natural, synthetic, photo-retouched, and tone-mapped images. In addition, efficient no-reference image quality metrics are proposed to assess JPEG-compressed and contrast-distorted images. Finally, we propose a perceptually consistent color-to-gray conversion method, perform a subjective rating, and evaluate existing color-to-gray assessment metrics.
    Existing FR-IQA metrics have two main limitations: their performance is not consistent across distortions and datasets, and the better-performing metrics usually have high complexity. We therefore propose an efficient and reliable full-reference image quality evaluator based on new gradient and color similarities. We derive a general deviation pooling formulation and use it to compute a final quality score from the similarity maps, as sketched below. Extensive experimental results verify the high accuracy and consistent performance of the proposed metric on natural, synthetic, and photo-retouched datasets, as well as its low complexity.
    To visualize HDR images on standard low dynamic range (LDR) displays, tone-mapping operators are used to convert HDR into LDR. Given the different bit depths of HDR and LDR, traditional FR-IQA metrics cannot assess the quality of tone-mapped images. The existing full-reference metric for tone-mapped images, TMQI, converts both HDR and LDR to an intermediate color space and measures their similarity in the spatial domain. We propose a feature-similarity full-reference metric in which the local phase of the HDR image is compared with the local phase of the LDR image. Phase carries important image information, and previous studies have shown that the human visual system responds strongly to points in an image where the phase information is ordered. Experimental results on two available datasets show the very promising performance of the proposed metric.
    No-reference image quality assessment (NR-IQA) metrics are of high interest because, in most current and emerging practical real-world applications, reference signals are not available. We propose two perceptually consistent distortion-specific NR-IQA metrics for JPEG-compressed and contrast-distorted images. Based on the edge statistics of JPEG-compressed images, an efficient NR-IQA metric for the blockiness artifact is proposed that is robust to block size and misalignment. We then consider the quality assessment of contrast-distorted images, a common type of distortion. Higher orders of the Minkowski distance and a power transformation are used to train a low-complexity model that assesses contrast distortion with high accuracy. For the first time, the proposed model is also used to classify the type of contrast distortion, which is very useful additional information for image contrast enhancement.
    Beyond its traditional use in the assessment of distortions, objective IQA can be applied to other tasks, such as the quality assessment of image fusion, color-to-gray conversion, inpainting, and background subtraction. In the last part of this thesis, a real-time and perceptually consistent color-to-gray image conversion methodology is proposed. The proposed correlation-based method and state-of-the-art methods are compared by subjective and objective evaluation, and a conclusion is drawn on the choice of objective quality assessment metric for color-to-gray conversion. The conducted subjective ratings can be used in the development of quality assessment metrics for color-to-gray conversion and to test their performance.
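    To illustrate two of the ingredients named above, the sketch below computes an SSIM-style gradient-similarity map between a reference and a distorted image and pools it with a mean-absolute-deviation rule; the Sobel gradients and the constant c are illustrative choices, not the thesis's exact formulation.

        # Sketch: gradient-similarity map plus deviation pooling.
        import numpy as np
        from scipy import ndimage

        def gradient_similarity_map(ref, dist, c=170.0):
            """ref, dist: float grayscale images of equal shape."""
            def grad_mag(img):
                gx = ndimage.sobel(img, axis=1)
                gy = ndimage.sobel(img, axis=0)
                return np.hypot(gx, gy)
            g1, g2 = grad_mag(ref), grad_mag(dist)
            return (2.0 * g1 * g2 + c) / (g1**2 + g2**2 + c)

        def deviation_pooling(sim_map):
            # Mean absolute deviation: spatially uneven quality is
            # penalized instead of being averaged away.
            return np.mean(np.abs(sim_map - sim_map.mean()))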

    Variational image fusion

    Get PDF
    The main goal of this work is the fusion of multiple images into a single composite that offers more information than the individual input images. We approach those fusion tasks within a variational framework. First, we present iterative schemes that are well suited for such variational problems and related tasks; they lead to efficient algorithms that are simple to implement and parallelise well. Next, we design a general fusion technique that aims for an image with optimal local contrast. This is the key to a versatile method that performs well in many application areas such as multispectral imaging, decolourisation, and exposure fusion. To handle motion within an exposure set, we present the following two-step approach: first, we introduce the complete rank transform (sketched below) to design an optic flow approach that is robust against severe illumination changes; second, we eliminate remaining misalignments by means of brightness transfer functions that relate the brightness values between frames. Additional knowledge about the exposure set enables us to propose the first fully coupled method that jointly computes an aligned high dynamic range image and dense displacement fields. Finally, we present a technique that infers depth information from differently focused images. In this context, we additionally introduce a novel second-order regulariser that adapts to the image structure in an anisotropic way.
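    As an illustration of the complete rank transform mentioned above, the sketch below describes each pixel by the ranks of all intensities in its neighbourhood, a descriptor that is invariant under monotonically increasing illumination changes; the border handling and tie-breaking here are simplifications.

        # Sketch of the complete rank transform for illumination-robust
        # optic flow matching (simplified: borders cropped, ties broken
        # by scan order).
        import numpy as np

        def complete_rank_transform(img, radius=1):
            """img: 2D float array. Returns an (H-2r, W-2r, K) descriptor
            with K = (2*radius+1)**2 ranks per pixel."""
            h, w = img.shape
            k = 2 * radius + 1
            out = np.zeros((h - 2 * radius, w - 2 * radius, k * k), dtype=np.int32)
            for i in range(radius, h - radius):
                for j in range(radius, w - radius):
                    patch = img[i - radius:i + radius + 1,
                                j - radius:j + radius + 1].ravel()
                    # argsort of argsort yields the 0-based rank of each value.
                    out[i - radius, j - radius] = patch.argsort().argsort()
            return out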

    Versatile whole-organ/body staining and imaging based on electrolyte-gel properties of biological tissues

    Get PDF
    Whole-organ/body three-dimensional (3D) staining and imaging have been enduring challenges in histology. By dissecting the complex physicochemical environment of the staining system, we developed a highly optimized 3D staining and imaging pipeline based on CUBIC. Building on our precise characterization of biological tissues as an electrolyte gel, we experimentally evaluated a broad range of 3D staining conditions using an artificial tissue-mimicking material. The combination of optimized conditions allows a bottom-up design of a superior 3D staining protocol that can uniformly label whole adult mouse brains, an adult marmoset brain hemisphere, an ~1 cm³ tissue block of a postmortem adult human cerebellum, and an entire infant marmoset body with dozens of antibodies and cell-impermeant nuclear stains. The whole-organ 3D images collected by light-sheet microscopy are used for computational analyses and for comparative whole-organ analysis between species. This pipeline, named CUBIC-HistoVIsion, thus offers advanced opportunities for organ- and organism-scale histological analysis of multicellular systems.

    A Cookbook of Self-Supervised Learning

    Full text link
    Self-supervised learning (SSL), dubbed the dark matter of intelligence, is a promising path to advancing machine learning. Yet, much like cooking, training SSL methods is a delicate art with a high barrier to entry. While many components are familiar, successfully training an SSL method involves a dizzying set of choices, from the pretext tasks to the training hyper-parameters. Our goal is to lower the barrier to entry into SSL research by laying out the foundations and the latest SSL recipes in the style of a cookbook. We hope to empower the curious researcher to navigate the terrain of methods, understand the role of the various knobs, and gain the know-how required to explore how delicious SSL can be.
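    As one concrete recipe of the kind such a cookbook covers, here is a minimal sketch of the InfoNCE/NT-Xent contrastive loss over two augmented views; the temperature tau is one of the "knobs" in question, and everything here is a generic illustration rather than code from the paper.

        # Minimal NT-Xent (SimCLR-style) contrastive loss sketch.
        import torch
        import torch.nn.functional as F

        def nt_xent(z1, z2, tau=0.5):
            """z1, z2: (N, D) embeddings of two augmented views of N images."""
            z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D)
            sim = z @ z.t() / tau                         # scaled cosine similarities
            n = z1.size(0)
            mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
            sim = sim.masked_fill(mask, float("-inf"))    # exclude self-similarity
            # The positive for row i is the other view of the same image.
            targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
            return F.cross_entropy(sim, targets)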