
    The effect of image size on the color appearance of image reproductions

    Original and reproduced art are usually viewed under quite different viewing conditions. One of the notable differences is image size. The main focus of this research was to investigate the effect of image size on the color perception of rendered images. This research had several goals. The first goal was to develop an experimental paradigm for measuring the effect of image size on color appearance. The second was to identify the image attributes most affected by changes of image size. The final goal was to design and evaluate algorithms that compensate for the change of visual angle (size). To achieve the first goal, an exploratory experiment was performed using a colorimetrically characterized digital projector and an LCD. Both the projector and the LCD are light-emitting devices and in this sense are similar soft-copy media, yet the physical sizes of the images they reproduce can be very different. Soft-copy reproduction devices also offer flexibility such as real-time image rendering, which is essential for adjustment experiments. The ability of the experimental paradigm to reveal changes of appearance with changes of visual angle (size) was demonstrated in a paired-comparison experiment. Through contrast-matching experiments, achromatic contrast, chromatic contrast, and the mean luminance of an image were identified as the attributes most affected by changes of image size. The extent and trend of the change in each attribute were measured using matching experiments. Algorithms to compensate for the image-size effect were then designed and evaluated. The correction algorithms were tested against traditional colorimetric image rendering using a paired-comparison technique, and the results confirmed the superiority of the algorithms over traditional colorimetric image rendering for size-effect compensation.
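
    To make the compensation idea concrete, the following is a minimal sketch, not the thesis's actual algorithm: it assumes a simple global model in which achromatic contrast is rescaled about the image mean and the mean lightness is shifted to offset a change of visual angle. The function name and the gain/shift values are illustrative assumptions.

        import numpy as np

        def compensate_for_size(lightness, contrast_gain=1.08, luminance_shift=0.0):
            """Hypothetical size-effect compensation (illustrative only):
            rescales achromatic contrast about the image mean and shifts the
            mean lightness of a reproduction shown at a different visual angle.

            lightness       -- 2D array of CIELAB L* values (0-100)
            contrast_gain   -- >1 boosts contrast, e.g. for a smaller copy
            luminance_shift -- additive correction to the mean L*
            """
            mean_l = lightness.mean()
            out = mean_l + contrast_gain * (lightness - mean_l) + luminance_shift
            return np.clip(out, 0.0, 100.0)

        # Example: prepare a small-format reproduction of a larger original
        original = np.random.uniform(20.0, 80.0, size=(480, 640))
        small_copy = compensate_for_size(original, contrast_gain=1.1, luminance_shift=2.0)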

    Color to gray conversions for stereo matching

    This thesis belongs to the fields of Computer Graphics and Computer Vision; it addresses the problem of converting colour images to grayscale with the intent of improving results in the context of stereo matching. Many state-of-the-art colour-to-grayscale conversion algorithms were evaluated, implemented, and tested in the stereo-matching context, and a new ad-hoc algorithm is proposed that optimises the conversion process by evaluating the whole set of images to be matched simultaneously.
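
    As an illustration of the joint-optimisation idea, the sketch below picks a single RGB weighting for the whole stereo set by maximising total gradient energy, so that discriminative detail survives the conversion consistently across views. The criterion and function name are assumed stand-ins, not the algorithm proposed in the thesis.

        import numpy as np

        def joint_grayscale_weights(images, steps=21):
            """Pick one RGB weight vector (wr, wg, wb), wr + wg + wb = 1,
            for the WHOLE image set by maximising total gradient energy.

            images -- list of HxWx3 float arrays in [0, 1]
            """
            best_w, best_score = None, -np.inf
            for wr in np.linspace(0.0, 1.0, steps):
                for wg in np.linspace(0.0, 1.0 - wr, max(int(steps * (1.0 - wr)), 1)):
                    wb = 1.0 - wr - wg
                    score = 0.0
                    for img in images:
                        grey = wr * img[..., 0] + wg * img[..., 1] + wb * img[..., 2]
                        gy, gx = np.gradient(grey)
                        score += float(np.sum(gx * gx + gy * gy))
                    if score > best_score:
                        best_w, best_score = (wr, wg, wb), score
            return best_w

        # Apply the SAME weights to every view so grey levels stay comparable,
        # which is what correspondence search between the images needs.
        views = [np.random.rand(120, 160, 3) for _ in range(2)]
        wr, wg, wb = joint_grayscale_weights(views)
        greys = [wr * v[..., 0] + wg * v[..., 1] + wb * v[..., 2] for v in views]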

    Contours and contrast

    Contrast in photographic and computer-generated imagery communicates colour and lightness differences that would be perceived when viewing the represented scene. Due to depiction constraints, the amount of displayable contrast is limited, reducing the image's ability to accurately represent the scene. A local contrast enhancement technique called unsharp masking can overcome these constraints by adding high-frequency contours to an image that increase its apparent contrast. In three novel algorithms inspired by unsharp masking, specialized local contrast enhancements are shown to overcome the constraints of a limited dynamic range, overcome an achromatic palette, and improve the rendering of 3D shapes and scenes. The Beyond Tone Mapping approach restores the original HDR contrast to its tone-mapped LDR counterpart by adding high-frequency colour contours to the LDR image while preserving its luminance. Apparent Greyscale is a multi-scale, two-step technique that first converts colour images and video to greyscale according to their chromatic lightness, then restores diminished colour contrast with high-frequency luminance contours. Finally, 3D Unsharp Masking performs scene-coherent enhancement by introducing 3D high-frequency luminance contours to emphasize the details, shapes, tonal range and spatial organization of a 3D scene within the rendering pipeline. As a perceptual justification, it is argued that a local contrast enhancement made with unsharp masking is related to the Cornsweet illusion, and that this may explain its effect on apparent contrast.
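
    All three algorithms build on classic unsharp masking, which can be summarised in a few lines. The sketch below operates on a single luminance channel with illustrative parameter values; the thesis's variants instead inject colour contours, greyscale-restoring contours, and 3D surface contours respectively.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(luminance, sigma=3.0, amount=0.5):
            """Classic unsharp masking: add the high-frequency residual
            (image minus its blur) back to the image to raise apparent
            local contrast; sigma and amount are illustrative values."""
            low_pass = gaussian_filter(luminance, sigma=sigma)
            high_freq = luminance - low_pass      # Cornsweet-like contours
            return np.clip(luminance + amount * high_freq, 0.0, 1.0)

        # e.g. re-sharpen a tone-mapped LDR luminance channel in [0, 1]
        enhanced = unsharp_mask(np.random.rand(256, 256), sigma=5.0, amount=0.7)
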
    For many years, the realistic creation of virtual characters has been a central part of computer graphics research. Nevertheless, several problems have so far remained unsolved. Among them is the creation of character animations, which remain time-consuming when using traditional, skeleton-based approaches. A further challenge is the passive capture of actors wearing everyday clothing. Moreover, in contrast to the numerous skeleton-based approaches, only a few methods exist for processing and editing mesh animations. In this work we present algorithms that solve each of these tasks. Our first approach consists of two mesh-based methods for simplifying character animation. Although the kinematic skeleton is set aside, both methods can be integrated directly into the traditional pipeline, enabling the creation of animations with lifelike body deformations. We then present three passive capture methods for body motion and acting performance that use a deformable 3D model to represent the scene. These methods can be used to jointly reconstruct spatio-temporally coherent geometry, motion, and surface textures, which may also vary over time. Recordings of loose, everyday clothing pose no problem. Furthermore, the high-quality reconstructions enable realistic rendering of 3D video sequences. Finally, two novel algorithms for processing mesh animations are described: the first enables the fully automatic conversion of mesh animations into skeleton-based animations, while the second allows the automatic conversion of mesh animations into so-called animation collages, a new art style for presenting animations.
    The methods described in this dissertation can be regarded as solutions to specific problems, but also as important building blocks of larger applications. Taken together, they form a powerful system for the accurate capture, manipulation, and realistic rendering of artistic performances, with capabilities that go beyond those of many related capture techniques. In this way we can capture an actor's motion, time-varying details, and texture information and convert them into a fully specified character animation that can be used directly, but is also suitable for rendering the actor realistically from arbitrary viewpoints.

    The LLAB model for quantifying colour appearance

    A reliable colour appearance model is desired by industry to achieve high colour fidelity between images produced using a range of different imaging devices. The aim of this study was to derive a reliable colour appearance model capable of predicting the change of perceived attributes of colour appearance under a wide range of media/viewing conditions. The research was divided into three parts: characterising imaging devices, conducting a psychophysical experiment, and developing a colour appearance model. Various imaging devices were characterised, including a graphic art scanner, a Cromalin proofing system, an IRIS ink jet printer, and a Barco Calibrator monitor. For the former three devices, each colour is described by four primaries: cyan (C), magenta (M), yellow (Y), and black (K). Three sets of characterisation samples (120 and 31 black-printer data sets, and a cube data set) were produced and measured for deriving and testing the printing characterisation models. Four black printer algorithms (BPAs) were derived, each including both forward and reverse processes. The second BPA printing model, which takes additivity failure into account via a grey component replacement (GCR) algorithm, predicted the characterisation data sets more accurately than the other BPA models. The PLCC (Piecewise Linear interpolation assuming Constant Chromaticity coordinates) model was implemented to characterise the Barco monitor. The psychophysical experiment compared Cromalin hardcopy images viewed in a viewing cabinet with softcopy images presented on a monitor under a wide range of illuminants (white points), including D93, D65, D50, and A. Two scaling methods, category judgement and paired comparison, were employed by viewing pairs of images. Three classes of colour models were evaluated: uniform colour spaces, colour appearance models, and chromatic adaptation transforms. Six images were selected and processed via each colour model. The results indicated that the BFD chromatic transform gave the most accurate predictions of the visual results. Finally, a colour appearance model, LLAB, was developed. It combines the BFD chromatic transform with a modified version of the CIELAB uniform colour space, fitted to the previously accumulated LUTCHI Colour Appearance Data. The LLAB model is much simpler in form and fits the experimental data more precisely than the other models.
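
    For orientation, the sketch below shows a linearised von Kries-style adaptation built on the Bradford (BFD) matrix. The full BFD transform also applies a non-linear correction to the short-wavelength channel, which is omitted here, so this is an approximation rather than the exact transform used in LLAB; the function name and example values are illustrative.

        import numpy as np

        # Bradford (BFD) sharpened-sensor matrix used by linearised chromatic
        # adaptation transforms; the full BFD transform adds a non-linear
        # correction on the blue channel that this sketch omits.
        M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                          [-0.7502,  1.7135,  0.0367],
                          [ 0.0389, -0.0685,  1.0296]])

        def adapt_xyz(xyz, white_src, white_dst):
            """Von Kries-style adaptation of an XYZ colour from a source white
            point to a destination white point (linearised approximation)."""
            rgb     = M_BFD @ np.asarray(xyz, dtype=float)
            rgb_src = M_BFD @ np.asarray(white_src, dtype=float)
            rgb_dst = M_BFD @ np.asarray(white_dst, dtype=float)
            adapted = rgb * (rgb_dst / rgb_src)   # scale each sensor channel
            return np.linalg.inv(M_BFD) @ adapted

        # Example: adapt a colour from illuminant A to D65 (XYZ whites, Y = 100)
        xyz_under_d65 = adapt_xyz([30.0, 25.0, 10.0],
                                  white_src=[109.85, 100.0, 35.58],
                                  white_dst=[95.047, 100.0, 108.883])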

    Conference Proceedings of the Euroregio / BNAM 2022 Joint Acoustic Conference


    Digital neuromorphic auditory systems

    This dissertation presents several digital neuromorphic auditory systems. Neuromorphic systems can run in real time at a smaller computing cost, and consume less power, than widely available general-purpose computers. These auditory systems are considered neuromorphic because they are modelled after computational models of the mammalian auditory pathway and can run on digital hardware, more specifically on a field-programmable gate array (FPGA). The models introduced are categorised into three parts: a cochlear model, an auditory pitch model, and a functional primary auditory cortical (A1) model. The cochlear model is the primary interface for an input sound signal and transmits a 2D time-frequency representation of the sound to the pitch model and the A1 model. The pitch model extracts pitch information from the sound signal in the form of a fundamental frequency. The A1 model extracts timbre information in the form of the time-frequency envelope of the sound signal. Since these computational auditory models must be implemented on FPGAs, which possess fewer computational resources than general-purpose computers, the algorithms are optimised so that each model fits on a single FPGA. The optimisation includes using simplified, hardware-implementable signal processing algorithms. Computational resource information for each model on the FPGA is extracted to establish the minimum resources required to run it, including the number of logic modules, the number of registers utilised, and the power consumption. Similarity comparisons are also made between the output responses of the models in software and in hardware, using pure tones, chirp signals, frequency-modulated signals, moving ripple signals, and musical signals as input. The limitations of the models' responses to musical signals at multiple intensity levels are also presented, along with the use of an automatic gain control algorithm to alleviate them. With real-world musical signals as input, the model responses are further tested using classifiers: the response of the auditory pitch model is used to classify monophonic musical notes, and the response of the A1 model is used to classify musical instruments from their respective monophonic signals. Classification accuracy results are shown for model responses in both software and hardware. With the hardware-implementable auditory pitch model, classification accuracy is 100% for musical notes from the 4th and 5th octaves, covering 24 classes of notes. With the hardware-implementable auditory timbre (A1) model, classification accuracy is 92% for 12 classes of musical instruments. Also presented is the difference in memory requirements between the software and hardware model responses: the pitch and timbre responses used for classification require 24 and 2 times less memory, respectively, in hardware than in software.
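
    The automatic gain control mentioned above can be illustrated with a simple envelope-following sketch; the time constants, target level, and function name are assumptions for demonstration, not the dissertation's actual design.

        import numpy as np

        def automatic_gain_control(signal, target_level=0.1, attack=0.01, release=0.001):
            """Simple envelope-following AGC: track the input level and scale
            each sample toward a target level, so sounds at very different
            intensities stay inside the models' working range.
            All constants are assumptions, not the dissertation's parameters."""
            eps = 1e-8
            env = eps
            out = np.empty_like(signal, dtype=float)
            for i, x in enumerate(signal):
                level = abs(x)
                # fast attack when the level rises, slow release when it falls
                coef = attack if level > env else release
                env += coef * (level - env)
                out[i] = x * (target_level / (env + eps))
            return out

        # e.g. normalise a quiet musical snippet before the cochlear front end
        tone = 0.05 * np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000.0)
        normalised = automatic_gain_control(tone)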