
    Towards a color prediction model for printed patches

    A novel color prediction model is presented which unifies, within a framework based on matrices, the phenomena of surface reflection, light absorption, diffuse light sources, superposition of multiple ink layers, and other

    Tracking by Prediction: A Deep Generative Model for Multi-Person Localisation and Tracking

    Current multi-person localisation and tracking systems rely heavily on appearance models for target re-identification, and almost no approaches employ a complete deep learning solution for both objectives. We present a novel, complete deep learning framework for multi-person localisation and tracking. In this context we first introduce a lightweight sequential Generative Adversarial Network architecture for person localisation, which overcomes issues related to occlusions and noisy detections typically found in a multi-person environment. In the proposed tracking framework we build upon recent advances in pedestrian trajectory prediction and propose a novel data association scheme based on predicted trajectories. This removes the need for computationally expensive person re-identification systems based on appearance features and generates human-like trajectories with minimal fragmentation. The proposed method is evaluated on multiple public benchmarks, including both static and dynamic cameras, and achieves outstanding performance, especially among other recently proposed deep neural network based approaches.
    Comment: To appear in IEEE Winter Conference on Applications of Computer Vision (WACV), 201
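    The core of the data association scheme above is matching each track's predicted next position to the current frame's detections by proximity, instead of comparing appearance features. A minimal sketch of that idea follows; the exact cost function and gating used by the paper are not given in the abstract, so the Euclidean cost, brute-force optimal assignment, and gating radius here are illustrative assumptions:

    ```python
    from itertools import permutations
    from math import dist, inf

    def associate(predicted, detections, gate=50.0):
        """Match predicted track positions to detections by minimum total distance.

        predicted:  list of (x, y) positions forecast by a trajectory model
        detections: list of (x, y) detected positions in the current frame
        Returns (track_index, detection_index) pairs within the gating radius.
        """
        # Brute-force optimal assignment; fine for the handful of targets per frame.
        best, best_cost = [], inf
        n = min(len(predicted), len(detections))
        for perm in permutations(range(len(detections)), n):
            cost = sum(dist(predicted[t], detections[d]) for t, d in enumerate(perm))
            if cost < best_cost:
                best_cost, best = cost, [(t, d) for t, d in enumerate(perm)]
        # Drop matches that are too far away to be plausible (gating).
        return [(t, d) for t, d in best if dist(predicted[t], detections[d]) <= gate]

    preds = [(10.0, 10.0), (100.0, 50.0)]
    dets = [(98.0, 52.0), (12.0, 9.0)]
    print(associate(preds, dets))  # [(0, 1), (1, 0)]
    ```

    Because the cost depends only on predicted geometry, a fragmented track can be re-linked without ever running an appearance-based re-identification network.
    
    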

    Spectral printing of paintings using a seven-color digital press

    The human visual system is trichromatic and therefore reduces higher-dimensional spectral data to three dimensions. Two stimuli with different spectral power distributions can produce the same cone response and therefore match each other. Color reproduction systems take advantage of this effect and match color by creating the same cone response as the original but with different colorants. ICC color management transforms all colors into a three-dimensional reference color space that is independent of any input or output device. This concept works well for a single defined observer and illumination condition, but in practice it is not possible to control viewing conditions, leading to severe color mismatches, particularly for paintings. Paintings pose unique challenges because the large variety of available colorants results in a very large color gamut and considerable spectral variability. This research explored spectral color reproduction using a seven-color electrophotographic printing process, the HP Indigo 7000. Because of the restriction to seven inks from the 12 basic inks supplied with the press, the research identified both the optimal seven inks and a set of eight artist paints which can be spectrally reproduced. The set of inks was Cyan, Magenta, Yellow, Black, Reflex Blue, Violet and Orange. The eight paints were Cadmium Red Medium, Cadmium Orange, Cadmium Yellow Light, Dioxazine Purple, Phthalo Blue Green Shade, Ultramarine Blue, Quinacridone Crimson and Carbon Black. The selection was based on both theoretical and experimental analyses. The final testing was computational, indicating the possibility of both spectral and colorimetric color reproduction of paintings.
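    The trichromatic reduction that motivates this work can be illustrated with a toy metamerism example: two different "spectra" that integrate to identical sensor responses under one set of sensitivities, and would therefore match for that observer while potentially mismatching under another. All numbers below are invented for illustration; they are not real cone fundamentals or measured reflectances:

    ```python
    def response(spectrum, sensitivities):
        # Sensor response = sum over wavelength bands of reflectance * sensitivity.
        return tuple(round(sum(s * w for s, w in zip(spectrum, sens)), 6)
                     for sens in sensitivities)

    # Three toy "cone" sensitivity curves over five wavelength bands (hypothetical).
    cones = [(0.0, 0.1, 0.4, 0.4, 0.1),   # long-wavelength
             (0.1, 0.4, 0.4, 0.1, 0.0),   # medium-wavelength
             (0.5, 0.4, 0.1, 0.0, 0.0)]   # short-wavelength

    # Two distinct reflectance curves constructed to be metameric for these cones.
    a = (0.2, 0.50, 0.30, 0.30, 0.50)
    b = (0.2, 0.51, 0.26, 0.42, 0.17)

    print(response(a, cones), response(b, cones))  # both (0.34, 0.37, 0.33)
    ```

    Spectral reproduction aims to match the full curves (so `a` would be reproduced as `a`, not as any metamer), which is what makes the match hold across observers and illuminants.
    
    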

    N-colour separation methods for accurate reproduction of spot colours

    In packaging, spot colours are used to print key information, such as brand logos, and elements for which colour accuracy is critical. The present study investigates methods to aid the accurate reproduction of these spot colours with the n-colour printing process. Typical n-colour printing systems consist of supplementary inks in addition to the usual CMYK inks. Adding these inks to the traditional CMYK set increases the attainable colour gamut, but the added complexity creates several challenges in generating suitable colour separations for rendering colour images. In this project, n-colour separation is achieved by the use of additional sectors for intermediate inks. Each sector contains four inks, with the achromatic ink (black) common to all sectors. This allows the extension of the principles of the CMYK printing process to these additional sectors. The methods developed in this study can be generalised to any number of inks. The project explores various aspects of the n-colour printing process, including forward characterisation methods, gamut prediction of the n-colour process, and the inverse characterisation to calculate the n-colour separation for target spot colours. The scope of the study covers different printing technologies, including lithographic offset, flexographic, thermal sublimation and inkjet printing. A new method is proposed to characterise the printing devices. This method, the spot colour overprint (SCOP) model, was evaluated for the n-colour printing process with different printing technologies. In addition, a set of real-world spot colours were converted to n-colour separations and printed with the 7-colour printing process to evaluate against the original spot colours. The results show that the proposed methods can be effectively used to replace the spot-colour inks with the n-colour printing process. This can save significant material, time and costs in the packaging industry.
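    One way to picture the sector idea is that the chromatic inks partition the hue circle, and a target colour is separated using only the two chromatic inks that bracket its hue (plus black, which is common to all sectors), so each sector behaves like a small CMYK-style subsystem. The sketch below shows only the sector-selection step; the hue angles are hypothetical placeholders, not measured values for any real ink set or for the inks used in the study:

    ```python
    # Hypothetical hue angles (degrees) for a set of chromatic inks.
    INK_HUES = {"Yellow": 90, "Orange": 45, "Magenta": 350, "Violet": 300,
                "ReflexBlue": 270, "Cyan": 210}

    def pick_sector(target_hue):
        """Return the pair of inks whose hue angles bracket the target hue."""
        inks = sorted(INK_HUES.items(), key=lambda kv: kv[1])
        for (name_a, hue_a), (name_b, hue_b) in zip(inks, inks[1:]):
            if hue_a <= target_hue < hue_b:
                return (name_a, name_b)
        # Hues past the last ink wrap around to the first one.
        return (inks[-1][0], inks[0][0])

    print(pick_sector(60))   # ('Orange', 'Yellow')
    print(pick_sector(355))  # wrap-around sector: ('Magenta', 'Orange')
    ```

    Within the chosen sector, ink amounts can then be computed with the same kind of characterisation model used for conventional CMYK, which is what lets the approach generalise to any number of inks.
    
    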

    Test Targets 8.0: A collaborative effort exploring the use of scientific methods for color imaging and process control

    Publishing is both a journey and a destination. In the case of Test Targets, the act of creating and editing content, paginating, and managing digital assets represents the journey. The hard copy is the result, or destination, that readers can see and touch. It is like the space exploration program: everyone saw the spacecraft that landed on the moon, but it was the rocket booster that made the journey from the earth to the moon possible. This article portrays the process of capturing ideas in the form of digital data. It also describes the process of managing digital assets that produces the Test Targets publication.

    Modeling Color Appearance in Augmented Reality

    Augmented reality (AR) is a developing technology that is expected to become the next interface between humans and computers. One of the most common designs of AR devices is the optical see-through head-mounted display (HMD). In this design, the virtual content presented on the displays embedded inside the device is optically superimposed on the real world, which results in the virtual content being transparent. Color appearance in see-through designs of AR is a complicated subject, because it depends on many factors, including the ambient light, the color appearance of the virtual content, and the color appearance of the real background. As with display technology, it is vital to control the color appearance of content for many applications of AR. In this research, color appearance in the see-through design of the augmented reality environment was studied and modeled. Using a bench-top optical mixing apparatus as an AR simulator, objective measurements of mixed colors in AR were performed to study light behavior in the AR environment. Psychophysical color matching experiments were performed to understand color perception in AR. These experiments were performed first for simple 2D stimuli with a single color as both background and foreground, and later for more visually complex stimuli to better represent real content presented in AR. Color perception in the AR environment was compared to color perception on a display, which showed that the two differ. The applicability of the CAM16 color appearance model, one of the most comprehensive current color appearance models, was evaluated in the AR environment. The results showed that CAM16 is not accurate in predicting color appearance in the AR environment.
    To model color appearance in the AR environment, four approaches were developed using modifications in tristimulus and color appearance spaces. The best performance was found for Approach 2, which predicts the tristimulus values of the mixed content from the background and foreground colors.
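    The abstract does not give the exact formulation of Approach 2, but a common first-order model for optical see-through mixing is additive superposition: light from the embedded display adds to light transmitted from the real background, so the mixed tristimulus can be predicted as a weighted sum. The sketch below illustrates that generic model only; the transmittance value is an illustrative assumption, not a measured property of any HMD or of the paper's apparatus:

    ```python
    def mix_xyz(foreground, background, transmittance=0.7):
        """Predict the XYZ tristimulus of virtual content seen against a real background.

        foreground:    XYZ of the light emitted by the display
        background:    XYZ of the real-world scene behind the content
        transmittance: fraction of background light passing through the combiner
        """
        return tuple(round(f + transmittance * b, 6)
                     for f, b in zip(foreground, background))

    virtual = (20.0, 25.0, 10.0)   # XYZ emitted by the display
    real    = (40.0, 42.0, 38.0)   # XYZ of the real background
    print(mix_xyz(virtual, real))  # (48.0, 54.4, 36.6)
    ```

    This additive behavior is also why the virtual content appears transparent: the background term never drops out, so a dark virtual pixel simply shows the real world behind it.
    
    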