557 research outputs found

    Print engine color management using customer image content

    The production of quality color prints requires that color accuracy and reproducibility be maintained within very tight tolerances when transferred to different media. Variations in the printing process commonly produce color shifts that result in poor color reproduction. The primary function of a color management system is to maintain color quality and consistency. Currently these systems are tuned in the factory by printing a large set of test color patches, measuring them, and making the necessary adjustments. This time-consuming procedure must be repeated as needed once the printer leaves the factory. In this work, a color management system is proposed that compensates for print color shifts in real time using feedback from an in-line full-width sensor. Instead of printing test patches, this novel approach to color management utilizes the output pixels already rendered on production pages for continuous printer characterization. The printed pages are scanned in-line and the results are used to update the process by which colorimetric image content is translated into engine-specific color separations (e.g. CIELAB→CMYK). The proposed system provides a means to perform automatic printer characterization by simply printing a set of images that cover the gamut of the printer. Moreover, all of the color conversion features currently utilized in production systems (such as Gray Component Replacement, Gamut Mapping, and Color Smoothing) can be achieved with the proposed system.
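The feedback loop the abstract describes can be caricatured in a few lines: a coarse CIELAB→CMYK table updated from (measured colour, printed separation) pairs harvested from scanned production pages. The class below is a hypothetical numpy sketch — the grid resolution, nearest-node lookup, and exponential-smoothing update are illustrative assumptions, not the authors' system.

```python
import numpy as np

class FeedbackLUT:
    """Toy incremental CIELAB->CMYK characterization (hypothetical sketch).

    Each scanned production page yields (measured LAB, printed CMYK) pairs;
    the nearest LAB grid node's CMYK entry is pulled toward the observation
    with an exponential moving average, so the table can track engine drift
    without printing dedicated test patches.
    """

    def __init__(self, nodes_per_axis=5, alpha=0.2):
        L = np.linspace(0, 100, nodes_per_axis)
        ab = np.linspace(-128, 127, nodes_per_axis)
        grid = np.stack(np.meshgrid(L, ab, ab, indexing="ij"), axis=-1)
        self.grid = grid.reshape(-1, 3)                # (N, 3) LAB nodes
        self.cmyk = np.full((len(self.grid), 4), 0.5)  # start mid-scale
        self.alpha = alpha                             # smoothing factor

    def update(self, lab_measured, cmyk_printed):
        """Fold one scanned (LAB, CMYK) observation into the table."""
        i = np.argmin(np.linalg.norm(self.grid - np.asarray(lab_measured, float), axis=1))
        self.cmyk[i] = (1 - self.alpha) * self.cmyk[i] + self.alpha * np.asarray(cmyk_printed, float)
        return i

    def lookup(self, lab):
        """Nearest-node LAB->CMYK conversion (a real engine would interpolate)."""
        i = np.argmin(np.linalg.norm(self.grid - np.asarray(lab, float), axis=1))
        return self.cmyk[i]
```

With enough observations near a node, its CMYK entry converges to the separation the engine actually needs for that colour, which is the essence of patch-free characterization.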

    Implementing an ICC printer profile visualization software

    Device color gamut plays a crucial role in ICC-based color management systems. Accurately visualizing a device's gamut boundary is important in the analysis of color conversion and gamut mapping. ICC profiles contain all the information needed to understand the capabilities of a device. This thesis project implemented printer-profile visualization software. The project uses the A2B1 tag in a printer profile as the gamut data source, then renders the gamut of the device the profile represents in CIELAB space with a convex-hull algorithm. The gamut can be viewed interactively from any viewpoint. The software can also obtain the gamut data set by using a CMM with different rendering intents to perform color conversion from a specified printer profile to a generic CIELAB profile (A2B conversion for short) or from a generic CIELAB profile to a specified printer profile and back to the generic CIELAB profile (B2A2B for short). The gamut can be rendered as points, wireframe, or a solid surface. Two-dimensional a*b* and L*C* gamut-slice analysis tools were also developed. The 2D gamut-slice algorithm divides the gamut into small sections according to lightness and hue angle; the point with maximum chroma in each section is used to represent the a*b* gamut slice on a constant-lightness plane or the L*C* gamut slice on a constant-hue-angle plane. Gamut models from two or more device profiles can be viewed in the same window; through such comparison, we can better understand device reproduction capabilities and proofing problems. This thesis also explains printer profiles in detail and examines which gamut data source is best for gamut visualization. Several gamut boundary descriptor algorithms are discussed; a convex-hull algorithm and a device-space-to-CIELAB mapping algorithm were chosen to render the 3D gamut. Finally, an experiment was developed to validate the gamut data generated by the software. The experiment used the same method as the visualization software to obtain a gamut data set from Photoshop 6.0, and the results showed that the data set derived from the visualization software was consistent with that from Photoshop 6.0.
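The 2D gamut-slice algorithm the abstract describes — keep the maximum-chroma point per lightness/hue-angle section — is easy to sketch. The function below is a hypothetical numpy illustration of that idea only; the function name, hue-bin count, and lightness tolerance are assumptions, not code from the thesis.

```python
import numpy as np

def ab_slice(lab_points, L_target, L_tol=5.0, n_hue_bins=36):
    """a*b* gamut slice on a constant-lightness plane.

    From the LAB samples inside the lightness band, keep the
    maximum-chroma point in each hue-angle sector; the surviving
    points trace the gamut boundary at that lightness.
    """
    lab_points = np.asarray(lab_points, float)
    pts = lab_points[np.abs(lab_points[:, 0] - L_target) < L_tol]
    a, b = pts[:, 1], pts[:, 2]
    hue = np.mod(np.arctan2(b, a), 2 * np.pi)          # hue angle in [0, 2*pi)
    chroma = np.hypot(a, b)                            # C* = sqrt(a*^2 + b*^2)
    bins = (hue / (2 * np.pi) * n_hue_bins).astype(int) % n_hue_bins
    boundary = []
    for k in range(n_hue_bins):
        sel = bins == k
        if sel.any():
            boundary.append(pts[sel][np.argmax(chroma[sel])])
    return np.array(boundary)
```

An L*C* slice at constant hue angle follows the same pattern with the roles of lightness and hue swapped.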

    Crowd-sourced data and its applications for new algorithms in photographic imaging

    This thesis comprises two main themes. The first is concerned primarily with the validity and utility of data acquired from web-based psychophysical experiments. In recent years web-based experiments, and the crowd-sourced data they can deliver, have been rising in popularity among the research community for several key reasons – primarily ease of administration and easy access to a large population of diverse participants. However, the level of control with which traditional experiments are performed, and the severe lack of control we have over web-based alternatives, may lead us to believe that these benefits come at the cost of reliable data. Indeed, the results reported early in this thesis support this assumption. However, we proceed to show that it is entirely possible to crowd-source data that is comparable with lab-based results. The second theme of the thesis explores the possibilities presented by the use of crowd-sourced data, taking a popular colour naming experiment as an example. After using the crowd-sourced data to construct a model for computational colour naming, we consider the value of colour names as image descriptors, with particular relevance to illuminant estimation and object indexing. We discover that colour names represent a particularly useful quantisation of colour space, allowing us to construct compact image descriptors for object indexing. We show that these descriptors are somewhat tolerant to errors in illuminant estimation and that their perceptual relevance offers even further utility. We go on to develop a novel algorithm which delivers perceptually relevant, illumination-invariant image descriptors based on colour names.
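As an illustration of colour names as a quantisation of colour space, one can map each pixel to its nearest colour-name prototype and use the normalised name histogram as a compact image descriptor. The prototypes and the nearest-neighbour rule below are hypothetical stand-ins for the crowd-sourced naming model the thesis builds; this is a sketch of the general idea only.

```python
import numpy as np

# Hypothetical RGB prototypes for the 11 basic colour terms; a real model
# would be learned from crowd-sourced naming data rather than hand-picked.
PROTOTYPES = {
    "black":  (0, 0, 0),       "white":  (255, 255, 255),
    "red":    (255, 0, 0),     "green":  (0, 128, 0),
    "blue":   (0, 0, 255),     "yellow": (255, 255, 0),
    "orange": (255, 165, 0),   "purple": (128, 0, 128),
    "pink":   (255, 192, 203), "brown":  (139, 69, 19),
    "grey":   (128, 128, 128),
}
NAMES = list(PROTOTYPES)
CENTRES = np.array([PROTOTYPES[n] for n in NAMES], dtype=float)

def colour_name_descriptor(image_rgb):
    """Quantise every pixel to its nearest colour-name prototype and
    return the normalised 11-bin histogram as an image descriptor."""
    px = np.asarray(image_rgb, float).reshape(-1, 3)
    d = np.linalg.norm(px[:, None, :] - CENTRES[None, :, :], axis=2)
    labels = d.argmin(axis=1)                     # one colour name per pixel
    hist = np.bincount(labels, minlength=len(NAMES)).astype(float)
    return hist / hist.sum()
```

Because neighbouring shades tend to fall into the same name bin, small illuminant-estimation errors often leave the histogram almost unchanged, which is the tolerance property the abstract mentions.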

    Novel workflow for image-guided gamut mapping


    Human-centered display design: balancing technology & perception


    Estimating varying illuminant colours in images

    Colour constancy is the ability to perceive colours independently of varying illumination colour. A human could tell that a white t-shirt was indeed white, even under blue or red illumination, although these illuminant colours would actually make the reflected colour of the t-shirt bluish or reddish. Humans can, to a good extent, perceive colours as constant. Getting a computer to achieve the same goal with a high level of accuracy has proven problematic, particularly if we want to use colour as a main cue in object recognition: if we trained a system on object colours under one illuminant and then tried to recognise the objects under another illuminant, the system would likely fail. Early colour constancy algorithms assumed that an image contains a single uniform illuminant. They would then attempt to estimate the colour of that illuminant and apply a single correction to the entire image. It is not hard to imagine a scene lit by more than one illuminant. In an outdoor scene on a typical summer's day, we would see some objects brightly lit by sunlight and others in shadow. The ambient light in shadow is known to be a different colour from that of direct sunlight (bluish and yellowish respectively), so there are at least two illuminant colours to recover in this scene. This thesis focuses on the harder case of recovering the illuminant colours when more than one is present in a scene. Early work on this subject made the empirical observation that illuminant colours are actually very predictable compared with surface colours: real-world illuminants tend not to be greens or purples, but rather blues, yellows and reds. We can think of an illuminant mapping as the function which takes a scene from some unknown illuminant to a known illuminant. We model this mapping as a simple multiplication of the red, green and blue channels of a pixel. It turns out that the set of realistic mappings approximately lies on a line segment in chromaticity space. We propose an algorithm that uses this knowledge and only requires two pixels of the same surface under two illuminants as input. We can then recover an estimate of the surface reflectance colour, and subsequently the two illuminants. Additionally, we propose a more robust algorithm that can use varying surface reflectance data in a scene. One of the most successful colour constancy algorithms, known as Gamut Mapping, was developed by Forsyth (1990). He argued that the illuminant colour of a scene naturally constrains the surface colours that are possible to perceive; we could not perceive a very chromatic red under a deep blue illuminant. We introduce our multiple-illuminant constraint in a Gamut Mapping context and are able to further improve its performance. The final piece of work proposes a method for detecting shadow edges, so that we can automatically recover estimates of the illuminant colours in and out of shadow. We also formulate our illuminant estimation algorithm as a voting scheme that probabilistically chooses an illuminant estimate on each side of the shadow edge. We test the performance of all our algorithms experimentally on well-known datasets, as well as on our newly proposed shadow datasets.
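Under the diagonal (per-channel multiplicative) illuminant model described above, two pixels of the same surface under two illuminants fix the ratio of the two illuminants, and a constrained set of plausible illuminants can disambiguate the rest. The sketch below is a hypothetical toy version of that reasoning — the brute-force candidate search and the function name are illustrative assumptions, not the thesis algorithm.

```python
import numpy as np

def estimate_two_illuminants(p1, p2, candidates):
    """Recover two illuminants and the shared surface reflectance from two
    RGB pixels of the same surface.

    Under the diagonal model p = e * r, the observed per-channel ratio
    p2 / p1 equals e2 / e1 regardless of the surface. We search a discrete
    set of plausible illuminant RGBs (e.g. sampled along the line segment
    of realistic illuminant chromaticities) for the pair whose ratio best
    matches, then read off the reflectance r = p1 / e1.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    observed = p2 / p1                       # equals e2 / e1 under the model
    best, best_err = None, np.inf
    for e1 in candidates:
        for e2 in candidates:
            e1a, e2a = np.asarray(e1, float), np.asarray(e2, float)
            err = np.linalg.norm(e2a / e1a - observed)
            if err < best_err:
                best, best_err = (e1a, e2a), err
    e1, e2 = best
    return e1, e2, p1 / e1                   # illuminants and reflectance
```

Restricting the candidates to a line segment in chromaticity space is what makes the otherwise under-determined recovery well-posed.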

    Colour constancy beyond the classical receptive field

    The problem of removing illuminant variations to preserve the colours of objects (colour constancy) has already been solved by the human brain using mechanisms that rely largely on centre-surround computations of local contrast. In this paper we adopt some of these biological solutions, described by long-known physiological findings, into a simple, fully automatic, functional model (termed Adaptive Surround Modulation, or ASM). In ASM, the size of a visual neuron's receptive field (RF), as well as its relationship with its surround, varies according to the local contrast within the stimulus, which in turn determines the nature of the centre-surround normalisation of cortical neurons higher up in the processing chain. We modelled colour constancy by means of two overlapping asymmetric Gaussian kernels whose sizes are adapted based on the contrast of the surround pixels, resembling the change of RF size. We simulated contrast-dependent surround modulation by weighting the contribution of each Gaussian according to the centre-surround contrast. Finally, we obtained an estimate of the illuminant from the set of the most activated RF outputs. Our results on three single-illuminant and one multi-illuminant benchmark datasets show that ASM is highly competitive with the state of the art and even outperforms learning-based algorithms in one case. Moreover, the robustness of our model is more tangible if we consider that our results were obtained using the same parameters for all datasets, that is, mimicking how the human visual system operates. These results suggest that dynamic adaptation mechanisms contribute to achieving higher accuracy in computational colour constancy.
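A loose sketch of the centre-surround idea: blur each channel with a narrow "centre" and a wide "surround" Gaussian, blend the two per pixel according to local contrast, and read the illuminant off the most activated responses. Everything below — the function names, the contrast weighting, the top-percentile pooling — is a simplified assumption in the spirit of ASM, not the published model.

```python
import numpy as np

def _gauss_blur(channel, sigma):
    """Separable Gaussian blur with a truncated kernel ('same'-size output)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, channel)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def asm_illuminant(image, sigma_c=1.0, sigma_s=4.0, top=0.01):
    """Toy contrast-weighted centre-surround pooling (ASM-inspired sketch).

    Per channel: where local contrast is high, lean on the narrow centre
    kernel; where it is low, lean on the wide surround kernel. The
    illuminant colour is pooled from the strongest responses.
    """
    img = np.asarray(image, float)
    est = []
    for ch in range(3):
        centre = _gauss_blur(img[..., ch], sigma_c)
        surround = _gauss_blur(img[..., ch], sigma_s)
        contrast = np.abs(centre - surround) / (centre + surround + 1e-6)
        response = contrast * centre + (1 - contrast) * surround
        k = max(1, int(top * response.size))
        est.append(np.sort(response.ravel())[-k:].mean())
    est = np.array(est)
    return est / est.max()            # illuminant colour, max-normalised
```

Dividing the pooled estimate by its maximum yields a relative illuminant colour that can be applied as a diagonal (von Kries style) correction.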