    Spectral print reproduction modeling and feasibility

    Spectral methods for multimodal data analysis

    Spectral methods have proven themselves as an important and versatile tool in a wide range of problems in the fields of computer graphics, machine learning, pattern recognition, and computer vision, where many important problems boil down to constructing a Laplacian operator and finding a few of its eigenvalues and eigenfunctions. Classical examples include the computation of diffusion distances on manifolds in computer graphics, Laplacian eigenmaps, and spectral clustering in machine learning. In many cases, one has to deal with multiple data spaces simultaneously. For example, clustering multimedia data in machine learning applications involves various modalities or "views" (e.g., text and images), and finding correspondence between shapes in computer graphics problems is an operation performed between two or more modalities. In this thesis, we develop a generalization of spectral methods to deal with multiple data spaces and apply them to problems from the domains of computer graphics, machine learning, and image processing. Our main construction is based on simultaneous diagonalization of Laplacian operators. We present an efficient numerical technique for computing joint approximate eigenvectors of two or more Laplacians in challenging noisy scenarios, which also appears to be the first general non-smooth manifold optimization method. Finally, we use the relation between joint approximate diagonalizability and approximate commutativity of operators to define a structural similarity measure for images. We use this measure to perform structure-preserving color manipulations of a given image.
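    The link between joint diagonalizability and commutativity can be illustrated in a few lines: two operators admit a common eigenbasis exactly when they commute, so the norm of their commutator measures how "structurally similar" the underlying graphs are. The sketch below is a minimal pure-Python illustration of that idea with tiny graph Laplacians, not the thesis's joint-diagonalization algorithm.

```python
# Minimal illustration (not the thesis's algorithm): the closer two graph
# Laplacians are to commuting, the closer they are to being jointly
# diagonalizable, i.e. the more structurally similar the graphs.

def laplacian(n, edges):
    """Unnormalised graph Laplacian L = D - A of an undirected graph."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    return L

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator_norm(A, B):
    """Frobenius norm of AB - BA; zero iff A and B commute exactly."""
    AB, BA = matmul(A, B), matmul(B, A)
    n = len(A)
    return sum((AB[i][j] - BA[i][j]) ** 2
               for i in range(n) for j in range(n)) ** 0.5

# A path graph commutes with itself; a path and a star do not.
path = laplacian(4, [(0, 1), (1, 2), (2, 3)])
star = laplacian(4, [(0, 1), (0, 2), (0, 3)])
print(commutator_norm(path, path))  # 0.0
print(commutator_norm(path, star))  # > 0
```

In practice the thesis works with approximate joint eigenvectors of large sparse Laplacians; the dense matrices here are only to keep the example self-contained.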

    N-colour separation methods for accurate reproduction of spot colours

    In packaging, spot colours are used to print key information like brand logos and elements for which the colour accuracy is critical. The present study investigates methods to aid the accurate reproduction of these spot colours with the n-colour printing process. Typical n-colour printing systems consist of supplementary inks in addition to the usual CMYK inks. Adding these inks to the traditional CMYK set increases the attainable colour gamut, but the added complexity creates several challenges in generating suitable colour separations for rendering colour images. In this project, the n-colour separation is achieved by the use of additional sectors for intermediate inks. Each sector contains four inks, with the achromatic ink (black) common to all sectors. This allows the extension of the principles of the CMYK printing process to these additional sectors. The methods developed in this study can be generalised to any number of inks. The project explores various aspects of the n-colour printing process, including forward characterisation methods, gamut prediction of the n-colour process and the inverse characterisation to calculate the n-colour separation for target spot colours. The scope of the study covers different printing technologies, including lithographic offset, flexographic, thermal sublimation and inkjet printing. A new method is proposed to characterise the printing devices. This method, the spot colour overprint (SCOP) model, was evaluated for the n-colour printing process with different printing technologies. In addition, a set of real-world spot colours were converted to n-colour separations and printed with the 7-colour printing process to evaluate against the original spot colours. The results show that the proposed methods can be effectively used to replace spot-colour inks with the n-colour printing process. This can save significant material, time and costs in the packaging industry.
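    The sector idea can be sketched as a hue-based lookup: the hue circle is divided into sectors of chromatic inks, each sharing black so that CMYK-style separation principles apply within every sector. The sector boundaries and ink groupings below are purely illustrative assumptions for a hypothetical 7-colour (CMYK + Orange, Green, Violet) set, not the definitions used in the thesis.

```python
# Hypothetical sketch of sector selection for a 7-colour (CMYK + O, G, V)
# process. Each sector holds three chromatic inks plus the shared black (K).
# The hue boundaries and ink groupings here are illustrative assumptions.

# (start_hue_deg, end_hue_deg) -> chromatic inks for that sector.
SECTORS = [
    ((0, 60), ("M", "O", "Y")),     # red/orange region
    ((60, 180), ("Y", "G", "C")),   # yellow/green region
    ((180, 300), ("C", "V", "M")),  # blue/violet region
    ((300, 360), ("M", "O", "Y")),  # wraps back to red
]

def select_inks(hue_deg):
    """Return the four-ink set (three chromatic inks + K) for a target hue."""
    h = hue_deg % 360.0
    for (lo, hi), inks in SECTORS:
        if lo <= h < hi:
            return inks + ("K",)
    return SECTORS[-1][1] + ("K",)  # unreachable with the table above

print(select_inks(30))   # ('M', 'O', 'Y', 'K')
print(select_inks(200))  # ('C', 'V', 'M', 'K')
```

Restricting each target colour to one four-ink sector is what lets a standard CMYK-style forward model and inversion be reused per sector.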

    Computer mediated colour fidelity and communication

    Developments in technology have meant that computer-controlled imaging devices are becoming more powerful and more affordable. Despite their increasing prevalence, computer-aided design and desktop publishing software has failed to keep pace, leading to disappointing colour reproduction across different devices. Although there has been a recent drive to incorporate colour management functionality into modern computer systems, in general this is limited in scope and fails to properly consider the way in which colours are perceived. Furthermore, differences in viewing conditions or representation severely impede the communication of colour between groups of users. The approach proposed here is to provide WYSIWYG colour across a range of imaging devices through a combination of existing device characterisation and colour appearance modelling techniques. In addition, to further facilitate colour communication, various common colour notation systems are defined by a series of mathematical mappings. This enables both the implementation of computer-based colour atlases (which have a number of practical advantages over physical specifiers) and also the interrelation of colour represented in hitherto incompatible notations. Together with the proposed solution, details are given of a computer system which has been implemented. The system was used by textile designers for a real task. Prior to undertaking this work, designers were interviewed in order to ascertain where colour played an important role in their work and where it was found to be a problem. A summary of the findings of these interviews, together with a survey of existing approaches to the problems of colour fidelity and communication in colour computer systems, is also given. As background to this work, the topics of colour science and colour imaging are introduced.
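    A concrete flavour of the "mathematical mappings" that chain notation systems together is the published sRGB to CIE XYZ transform (gamma linearisation followed by a fixed 3x3 matrix). This is the standard IEC sRGB transform, shown here only as an example of such a mapping; it is not one of the thesis's own notation mappings.

```python
# Standard sRGB -> CIE XYZ (D65) mapping, as published in IEC 61966-2-1.
# Shown as an example of a colour-notation mapping, not the thesis's own.

def srgb_to_xyz(r, g, b):
    """Convert sRGB components in [0, 1] to CIE XYZ (D65 white point)."""
    def linearise(u):  # undo the sRGB gamma encoding
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    r, g, b = linearise(r), linearise(g), linearise(b)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

# sRGB white maps (to rounding) onto the D65 white point.
print(srgb_to_xyz(1.0, 1.0, 1.0))  # approx (0.9505, 1.0000, 1.0890)
```

Composing mappings like this one (device space to a connection space, then on to another notation) is what makes hitherto incompatible notations interrelatable.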

    High Dynamic Range Image Rendering Using a Retinex-Based Adaptive Filter

    We propose a new method to render high dynamic range images that models global and local adaptation of the human visual system. Our method is based on the center-surround Retinex model. The first novelty of our method is an adaptive surround, whose shape follows the image's high-contrast edges, thus reducing the halo artifacts common to other methods. Second, only the luminance channel is processed, which is defined by the first component of a principal component analysis. Principal component analysis provides orthogonality between channels and thus reduces the chromatic changes caused by the modification of luminance. We show that our method efficiently renders high dynamic range images and we compare our results with the current state of the art.
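    The PCA-based luminance definition can be sketched compactly: compute the 3x3 covariance of the RGB pixels, take its dominant eigenvector, and project each pixel onto it. The pure-Python power-iteration version below is a minimal sketch of that definition, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of defining luminance as the first
# principal component of the RGB pixel distribution, via power iteration on
# the 3x3 covariance matrix.

def pca_luminance(pixels, iters=200):
    """pixels: list of (r, g, b). Returns each pixel's projection onto PC1."""
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    centred = [[p[c] - mean[c] for c in range(3)] for p in pixels]
    # 3x3 covariance matrix of the centred pixels.
    cov = [[sum(q[i] * q[j] for q in centred) / n for j in range(3)]
           for i in range(3)]
    # Power iteration converges to the dominant eigenvector (PC1).
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(q[i] * v[i] for i in range(3)) for q in centred]

# For achromatic pixels PC1 aligns with the grey axis, so the projection
# preserves the intensity ordering (up to the eigenvector's arbitrary sign).
pix = [(0.1, 0.1, 0.1), (0.5, 0.5, 0.5), (0.9, 0.9, 0.9)]
lum = pca_luminance(pix)
print(lum[0] < lum[1] < lum[2] or lum[0] > lum[1] > lum[2])  # True
```

Because PC1 is orthogonal to the remaining components, processing only this channel leaves the chromatic components untouched, which is the paper's stated motivation.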

    High-fidelity colour reproduction for high-dynamic-range imaging

    The aim of this thesis is to develop a colour reproduction system for high-dynamic-range (HDR) imaging. Classical colour reproduction systems fail to reproduce HDR images because current characterisation methods and colour appearance models fail to cover the dynamic range of luminance present in HDR images. HDR tone-mapping algorithms have been developed to reproduce HDR images on low-dynamic-range media such as LCD displays. However, most of these models have only considered luminance compression from a photographic point of view and have not explicitly taken into account colour appearance. Motivated by the idea to bridge the gap between cross-media colour reproduction and HDR imaging, this thesis investigates the fundamentals and the infrastructure of cross-media colour reproduction. It restructures cross-media colour reproduction with respect to HDR imaging, and develops a novel cross-media colour reproduction system for HDR imaging. First, our HDR characterisation method enables us to measure HDR radiance values to a high accuracy that rivals spectroradiometers. Second, our colour appearance model enables us to predict human colour perception under high luminance levels. We first built a high-luminance display in order to establish a controllable high-luminance viewing environment. We conducted a psychophysical experiment on this display device to measure perceptual colour attributes. A novel numerical model for colour appearance was derived from our experimental data, which covers the full working range of the human visual system. Our appearance model predicts colour and luminance attributes under high luminance levels. In particular, our model predicts perceived lightness and colourfulness to a significantly higher accuracy than other appearance models. Finally, a complete colour reproduction pipeline is proposed using our novel HDR characterisation and colour appearance models.
Results indicate that our reproduction system outperforms other reproduction methods with statistical significance. Our colour reproduction system provides high-fidelity colour reproduction for HDR imaging, and successfully bridges the gap between cross-media colour reproduction and HDR imaging.
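    As background to the tone-mapping algorithms the thesis contrasts itself with, the classic global operator L/(1 + L) (Reinhard et al.) shows the basic idea of compressing an unbounded HDR luminance range into a display range, purely photographically and with no colour appearance modelling. This sketch is that textbook operator, not the thesis's perceptual pipeline.

```python
# Classic global tone-mapping operator (Reinhard-style), shown only as the
# kind of photographic luminance compression the thesis goes beyond.

def tonemap(luminances):
    """Map HDR luminance values in [0, inf) into the display range [0, 1)."""
    return [L / (1.0 + L) for L in luminances]

hdr = [0.01, 1.0, 100.0, 10000.0]
print(tonemap(hdr))  # compresses highlights while preserving ordering
```

The operator is monotone, so relative brightness ordering survives, but it says nothing about perceived lightness or colourfulness, which is exactly the gap the thesis's appearance model addresses.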

    Particle Filters for Colour-Based Face Tracking Under Varying Illumination

    Automatic human face tracking is the basis of robotic and active vision systems used for facial feature analysis, automatic surveillance, video conferencing, intelligent transportation, human-computer interaction and many other applications. Superior human face tracking will allow future safety surveillance systems which monitor drowsy drivers, or patients and elderly people at risk of seizure or sudden falls, and will perform with a lower risk of failure in unexpected situations. This area has been actively researched in the current literature in an attempt to make automatic face trackers more stable in challenging real-world environments. To detect faces in video sequences, features like colour, texture, intensity, shape or motion are used. Among these features, colour has been the most popular because of its insensitivity to orientation and size changes and its fast processability. The challenge for colour-based face trackers, however, has been dealing with the instability of trackers when colours change due to drastic variation in environmental illumination. Probabilistic tracking and the employment of particle filters as powerful Bayesian stochastic estimators, on the other hand, are increasing in the visual tracking field thanks to their ability to handle multi-modal distributions in cluttered scenes. Traditional particle filters utilise the transition prior as the importance sampling function, but this can result in poor posterior sampling. The objective of this research is to investigate and propose a stable face tracker capable of dealing with challenges like rapid and random head motion, scale changes when people move closer to or further from the camera, motion of multiple people with close skin tones in the vicinity of the model person, presence of clutter, and occlusion of the face. The main focus has been on investigating an efficient method to address the sensitivity of colour-based trackers to gradual or drastic illumination variations.
The particle filter is used to overcome the instability of face trackers due to nonlinear and random head motions. To increase the traditional particle filter's sampling efficiency, an improved version of the particle filter is introduced that considers the latest measurements. This improved particle filter employs a new colour-based bottom-up approach that leads particles to generate an effective proposal distribution. The colour-based bottom-up approach is a classification technique for fast skin colour segmentation. This method is independent of the distribution shape and does not require excessive memory storage or exhaustive prior training. Finally, to make the colour-based face tracker adaptive to illumination changes, an original likelihood model is proposed based on spatial rank information, which considers both the illumination-invariant colour ordering of a face's pixels in an image or video frame and the spatial interaction between them. The original contribution of this work lies in the unique mixture of existing and proposed components to improve colour-based recognition and tracking of faces in complex scenes, especially where drastic illumination changes occur. Experimental results of the final version of the proposed face tracker, which combines the methods developed, are provided in the last chapter of this manuscript.
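    The baseline the thesis improves on, the bootstrap particle filter that samples from the transition prior and weights by the measurement likelihood, fits in a few lines. The 1-D random-walk sketch below illustrates that predict/weight/resample cycle only; the thesis replaces the transition-prior proposal with a colour-driven, measurement-aware one.

```python
# Bare-bones 1-D bootstrap particle filter: predict from the transition
# prior, weight by the measurement likelihood, resample proportionally.
# This is the textbook baseline, not the thesis's improved proposal.
import math
import random

def particle_filter_step(particles, measurement, motion_std=1.0, meas_std=1.0):
    """One predict/weight/resample cycle for a 1-D state."""
    # Predict: sample from the transition prior (random-walk motion model).
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian measurement likelihood for each particle.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_std) ** 2)
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    return random.choices(predicted, weights=weights, k=len(predicted))

random.seed(0)
particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
for z in [2.0, 2.1, 1.9, 2.0]:  # noisy measurements of a target near x = 2
    particles = particle_filter_step(particles, z)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # concentrates near the true state
```

Because the proposal ignores the current measurement, many particles land in low-likelihood regions and are wasted, which is precisely the sampling inefficiency the thesis's measurement-aware proposal targets.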

    Color to gray conversions for stereo matching

    This thesis belongs to the fields of Computer Graphics and Computer Vision; it addresses the problem of converting colour images to grayscale with the aim of improving results in the context of stereo matching. Many state-of-the-art colour-to-grayscale conversion algorithms have been evaluated, implemented and tested in the stereo matching context, and a new ad-hoc algorithm is proposed that optimises the conversion process by evaluating the whole set of images to be matched simultaneously.
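    The standard baseline such work compares against is a fixed luminance weighting (ITU-R BT.601). The sketch below shows why a fixed weighting can hurt stereo matching: two colours that are clearly distinct in RGB can collapse to nearly the same grey value, destroying the contrast a matcher relies on. This is the textbook conversion, not the thesis's jointly optimised one.

```python
# Fixed-weight colour-to-grey baseline (ITU-R BT.601 luma weights).
# The thesis instead optimises the conversion jointly over the image set.

def to_gray(pixels):
    """pixels: list of (r, g, b) in [0, 255] -> list of grey values."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]

# Two clearly distinct colours collapse to almost the same grey value,
# which is exactly the ambiguity that degrades stereo correspondence.
print(to_gray([(255, 0, 0), (0, 130, 0)]))  # both near 76
```

Evaluating all images of the stereo set simultaneously lets the conversion keep such colour pairs separated in the grey channel wherever they actually need to be matched.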

    Illumination Invariant Outdoor Perception

    This thesis proposes the use of a multi-modal sensor approach to achieve illumination invariance in images taken in outdoor environments. The approach is automatic in that it does not require user input for initialisation, and is not reliant on the input of atmospheric radiative transfer models. While it is common to use pixel colour and intensity as features in high-level vision algorithms, their performance is severely limited by the uncontrolled lighting and complex geometric structure of outdoor scenes. The appearance of a material is dependent on the incident illumination, which can vary due to spatial and temporal factors. This variability causes identical materials to appear differently depending on their location. Illumination-invariant representations of the scene can potentially improve the performance of high-level vision algorithms, as they allow discrimination between pixels to occur based on the underlying material characteristics. The proposed approach to obtaining illumination invariance utilises fused image and geometric data. An approximation of the outdoor illumination is used to derive per-pixel scaling factors. This has the effect of relighting the entire scene using a single illuminant that is common in terms of colour and intensity for all pixels. The approach is extended to radiometric normalisation and the multi-image scenario, meaning that the resultant dataset is both spatially and temporally illumination invariant. The proposed illumination invariance approach is evaluated on several datasets and shows that spatial and temporal invariance can be achieved without loss of spectral dimensionality. The system requires very few tuning parameters, meaning that expert knowledge is not required for it to operate. This has potential implications for robotics and remote sensing applications, where perception systems play an integral role in developing a rich understanding of the scene.
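    The relighting idea (divide out an illuminant estimate so the scene appears lit by a neutral light) can be sketched with the simplest possible illuminant estimate, the grey-world assumption, followed by a per-channel von Kries-style scaling. This is only a global, image-only stand-in; the thesis derives per-pixel scaling factors from fused image and geometric data.

```python
# Minimal sketch of illuminant division: estimate one scene illuminant via
# the grey-world assumption and rescale every pixel (von Kries correction).
# A global, image-only stand-in for the thesis's per-pixel, geometry-aware
# scaling factors.

def grey_world_correct(pixels):
    """pixels: list of (r, g, b) -> pixels rescaled to a neutral illuminant."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3.0
    scale = [grey / m if m else 1.0 for m in means]
    return [tuple(p[c] * scale[c] for c in range(3)) for p in pixels]

# A reddish cast is divided out: channel means become equal afterwards.
scene = [(0.8, 0.4, 0.2), (0.4, 0.2, 0.1)]
print(grey_world_correct(scene))
```

Because the correction is a per-channel multiplication, it preserves the number of spectral channels, mirroring the thesis's point that invariance is achieved without loss of spectral dimensionality.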