
    Colour displays for categorical images

    We propose a method for identifying a set of colours for displaying 2-D and 3-D categorical images when the categories are unordered labels. The principle is to find maximally distinct sets of colours. We either generate colours sequentially, to maximise the dissimilarity or distance between a new colour and the set of colours already chosen, or use a simulated annealing algorithm to find a set of colours of a specified size. In both cases, we use a Euclidean metric on the perceptual colour space, CIE-LAB, to specify distances.
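
    A minimal sketch of the sequential strategy, assuming a coarse RGB candidate grid and scikit-image's CIELAB conversion (both illustrative choices, not the authors' implementation; the simulated-annealing variant is omitted):

        import numpy as np
        from itertools import product
        from skimage.color import rgb2lab

        def distinct_colours(n, grid=8):
            # Candidate RGB colours on a coarse grid over the unit cube.
            cands = np.array(list(product(np.linspace(0.0, 1.0, grid), repeat=3)))
            labs = rgb2lab(cands.reshape(-1, 1, 3)).reshape(-1, 3)
            chosen = [0]  # seed with an arbitrary candidate
            for _ in range(n - 1):
                # Each candidate's CIELAB distance to its nearest chosen colour;
                # pick the candidate that maximises it (maximin / farthest-point).
                d = np.linalg.norm(labs[:, None] - labs[chosen][None], axis=2).min(axis=1)
                chosen.append(int(np.argmax(d)))
            return cands[chosen]

        palette = distinct_colours(12)  # 12 maximally distinct display colours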

    Controlling Perceptual Factors in Neural Style Transfer

    Neural Style Transfer has shown very exciting results enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information, and spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation and helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to produce large, high-quality stylisations more efficiently. Finally, we show how the introduced control measures can be applied in recent methods for Fast Neural Style Transfer. Comment: Accepted at CVPR 2017.
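
    One of the paper's colour controls restricts stylisation to luminance; a hedged sketch of the related recombination step, assuming float RGB arrays in [0, 1] and the standard YIQ transform (array names are illustrative, not the paper's exact formulation):

        import numpy as np

        # Standard RGB -> YIQ transform; Y carries luminance, I/Q carry chrominance.
        RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                            [0.596, -0.274, -0.322],
                            [0.211, -0.523, 0.312]])

        def preserve_content_colour(stylised, content):
            # Keep the stylised luminance (Y) but the content image's chrominance (I, Q).
            s_yiq = stylised @ RGB2YIQ.T
            c_yiq = content @ RGB2YIQ.T
            out = np.concatenate([s_yiq[..., :1], c_yiq[..., 1:]], axis=-1)
            return np.clip(out @ np.linalg.inv(RGB2YIQ).T, 0.0, 1.0)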

    A Pipeline for Lenslet Light Field Quality Enhancement

    In recent years, light fields have become a major research topic and their applications span the entire spectrum of classical image processing. Among the different methods used to capture a light field are lenslet cameras, such as those developed by Lytro. While these cameras give the user a lot of freedom, they also create light field views that suffer from a number of artefacts. As a result, it is common to ignore a significant subset of these views when doing high-level light field processing. We propose a pipeline to process light field views, first with an enhanced processing of RAW images to extract sub-aperture images, then a colour correction process using a recent colour transfer algorithm, and finally a denoising process using a state-of-the-art light field denoising approach. We show that our method improves the light field quality on many levels, by reducing ghosting artefacts and noise, as well as retrieving more accurate and homogeneous colours across the sub-aperture images. Comment: IEEE International Conference on Image Processing 2018, 5 pages, 7 figures.
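
    The paper relies on a recent colour transfer algorithm for the correction stage; as an illustrative stand-in only, a Reinhard-style mean/std matching of each sub-aperture view to a reference (e.g. the central) view in CIELAB:

        import numpy as np
        from skimage.color import rgb2lab, lab2rgb

        def match_to_reference(view, ref):
            # Match per-channel Lab statistics of `view` to those of `ref`.
            v, r = rgb2lab(view), rgb2lab(ref)
            for c in range(3):
                v[..., c] = (v[..., c] - v[..., c].mean()) / (v[..., c].std() + 1e-8) \
                            * r[..., c].std() + r[..., c].mean()
            return np.clip(lab2rgb(v), 0.0, 1.0)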

    Quantifying the specificity of near-duplicate image classification functions

    There are many published methods for detecting similar and near-duplicate images. Here, we consider their use in the context of unsupervised near-duplicate detection, where the task is to find a (relatively small) near-duplicate intersection of two large candidate sets. Such scenarios are of particular importance in forensic near-duplicate detection. The essential properties of such a function are: performance, sensitivity, and specificity. We show that, as collection sizes increase, specificity becomes the most important of these, as without very high specificity huge numbers of false positive matches will be identified. This makes even very fast, highly sensitive methods completely useless. Until now, to our knowledge, no attempt has been made to measure the specificity of near-duplicate finders, or even to compare them with each other. Recently, a benchmark set of near-duplicate images has been established which allows such assessment by giving a near-duplicate ground truth over a large general image collection. Using this, we establish a methodology for calculating specificity. A number of the most likely candidate functions are compared with each other, and accurate measurements of sensitivity versus specificity are given. We believe these are the first such figures to be calculated for any such function.
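
    A minimal sketch of the counting behind such a measurement, assuming a ground-truth set of near-duplicate pairs and a hypothetical `is_near_dup` predicate for the function under test:

        def specificity_sensitivity(pairs, truth, is_near_dup):
            tp = fp = tn = fn = 0
            for a, b in pairs:
                pred = is_near_dup(a, b)
                actual = (a, b) in truth or (b, a) in truth
                if pred and actual: tp += 1
                elif pred and not actual: fp += 1
                elif not pred and actual: fn += 1
                else: tn += 1
            specificity = tn / (tn + fp)  # true-negative rate
            sensitivity = tp / (tp + fn)  # true-positive rate
            return specificity, sensitivity

    The scale argument follows directly: comparing two collections of a million images each yields on the order of 10^12 candidate pairs, so even a specificity of 0.9999 would admit roughly 10^8 false positives.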

    Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer

    The long-standing theory that a colour-naming system evolves under the dual pressures of efficient communication and perceptual mechanism is supported by a growing number of linguistic studies, including analyses of four decades of diachronic data from the Nafaanra language. This inspires us to explore whether machine learning could evolve and discover a similar colour-naming system by optimising the communication efficiency represented by high-level recognition performance. Here, we propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining the accuracy of machine recognition on the quantised images. Given an RGB image, the Annotation Branch maps it into an index map before generating the quantised image with a colour palette; meanwhile, the Palette Branch uses a key-point detection approach to find suitable colours for the palette within the whole colour space. By interacting with colour annotation, CQFormer is able to balance machine-vision accuracy and perceptual colour structure, such as a distinct and stable colour distribution for the discovered colour system. Very interestingly, we even observe a consistent evolution pattern between our artificial colour system and basic colour terms across human languages. Our colour quantisation method also effectively compresses image storage while maintaining high performance in high-level recognition tasks such as classification and detection. Extensive experiments demonstrate the superior performance of our method with extremely low-bit-rate colours, showing potential for integration into quantisation networks, from images to network activations. The source code is available at https://github.com/ryeocthiv/CQFormer
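
    Not CQFormer's learned quantisation, but a k-means baseline that makes the index-map-plus-palette representation concrete (function names and the choice of k-means are illustrative):

        import numpy as np
        from sklearn.cluster import KMeans

        def quantise(image, k=8):
            # Quantise a float RGB image (H, W, 3) to k colours.
            h, w, _ = image.shape
            km = KMeans(n_clusters=k, n_init=4).fit(image.reshape(-1, 3))
            palette = km.cluster_centers_          # (k, 3) colour palette
            index_map = km.labels_.reshape(h, w)   # per-pixel palette indices
            return palette, index_map

        # Reconstruction: palette[index_map] gives the quantised image.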

    DualVAE: Controlling Colours of Generated and Real Images

    Colour-controlled image generation and manipulation are of interest to artists and graphic designers. Vector Quantised Variational AutoEncoders (VQ-VAEs) with an autoregressive (AR) prior are able to produce high-quality images, but lack an explicit representation mechanism to control colour attributes. We introduce DualVAE, a hybrid representation model that provides such control by learning disentangled representations for colour and geometry. The geometry is represented by an image intensity mapping that identifies structural features. The disentangled representation is obtained by two novel mechanisms: (i) a dual-branch architecture that separates image colour attributes from geometric attributes, and (ii) a new ELBO that trains the combined colour and geometry representations. DualVAE can control the colour of generated images, and recolour existing images by transferring the colour latent representation obtained from an exemplar image. We demonstrate that DualVAE generates images with an FID nearly two times better than VQ-GAN's on a diverse collection of datasets, including animated faces, logos, and artistic landscapes.
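
    Hedged pseudocode for the exemplar-based recolouring described above; `encode_geometry`, `encode_colour` and `decode` are hypothetical handles on the two branches, not DualVAE's actual API:

        def recolour(model, image, exemplar):
            z_geom = model.encode_geometry(image)  # structure latent from the input image
            z_col = model.encode_colour(exemplar)  # colour latent from the exemplar
            return model.decode(z_geom, z_col)     # recombine into a recoloured output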

    Optimising Light Source Spectrum to Reduce the Energy Absorbed by Objects

    Light is used to illuminate objects in the built environment. Humans can only observe light reflected from an object; light absorbed by an object turns into heat and does not contribute to visibility. Since the spectral output of new lighting technologies can be tuned, it is possible to imagine a lighting system that detects the colours of objects and emits customised light to minimise the absorbed energy. Previous optimisation studies investigated the use of narrowband LEDs to maximise the efficiency and colour quality of a light source. While those studies aimed to tune a white light source for general use, the lighting system proposed here minimises the energy consumed by lighting by detecting the colours of objects and emitting customised light onto each coloured part of the object. This thesis investigates the feasibility of absorption-minimising light source spectra and their impact on the colour appearance of objects and on energy consumption. Two computational studies were undertaken to form the theoretical basis of the absorption-minimising light source spectra. Computational simulations show that theoretical single-peak spectra can lower energy consumption by around 38% to 62%, and double-peak test spectra can yield energy savings of up to 71%, without causing colour shifts. In these studies, standard reference illuminants, theoretical test spectra and coloured test samples were used. These studies are followed by empirical evidence collected from two psychophysical experiments. Data from the experiments show that observers find the colour appearance of objects equally natural and attractive under spectrally optimised light sources and reference white light sources. An increased colour difference, to a certain extent, is found acceptable, which allows even higher energy savings. However, the translucent nature of some objects may negatively affect the results.
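
    The quantity being minimised can be made concrete: absorbed power is the source spectrum weighted by (1 - spectral reflectance), integrated over wavelength. A sketch under assumed Gaussian spectra and reflectances (all names and shapes illustrative):

        import numpy as np

        def absorbed_power(wavelengths, spd, reflectance):
            # Absorbed power = integral of SPD x (1 - reflectance) over wavelength
            # (uniform grid, simple rectangle rule).
            dw = wavelengths[1] - wavelengths[0]
            return np.sum(spd * (1.0 - reflectance)) * dw

        wl = np.linspace(380.0, 780.0, 401)
        red_surface = np.exp(-0.5 * ((wl - 650.0) / 40.0) ** 2)  # reflects long wavelengths
        matched = np.exp(-0.5 * ((wl - 650.0) / 15.0) ** 2)      # peak where reflectance is high
        mismatched = np.exp(-0.5 * ((wl - 450.0) / 15.0) ** 2)   # peak where reflectance is low

        # The matched narrowband source deposits far less heat in the surface.
        assert absorbed_power(wl, matched, red_surface) < absorbed_power(wl, mismatched, red_surface)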