
    A Paradigm for color gamut mapping of pictorial images

    In this thesis, a paradigm was developed for color gamut mapping of pictorial images. This involved the development and testing of: 1) a hue-corrected version of the CIELAB color space, 2) an image-dependent sigmoidal-lightness-rescaling process, 3) an image-gamut-based chromatic-compression process, and 4) a gamut-expansion process. The gamut-mapping paradigm was tested against several gamut-mapping strategies published in the literature. Reproductions generated by gamut mapping in the hue-corrected CIELAB color space preserved the perceived hue of the original scenes more accurately than reproductions generated using the standard CIELAB color space. The results of three gamut-mapping experiments showed that the contrast-preserving nature of the sigmoidal-lightness-remapping strategy produced gamut-mapped reproductions that matched the originals better than reproductions generated using linear lightness-compression functions. In addition, chromatic-scaling functions that compressed colors at a higher rate near the gamut surface and less near the achromatic axis produced better matches to the originals than algorithms that applied linear chroma compression throughout color space. Finally, a constrained gamut-expansion process, similar to the inverse of the best gamut-compression process found in these experiments, produced reproductions that were preferred over those from an unconstrained linear-expansion process.
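
    The two operations at the heart of this abstract can be pictured with a short sketch. The Python code below is a minimal illustration, not the thesis's actual algorithms: a normalized logistic (sigmoidal) lightness rescaling, and a knee-based chroma compression that leaves colors near the achromatic axis untouched and compresses progressively harder toward the gamut surface. The function names and parameter values (midpoint, contrast, knee) are illustrative assumptions.

        import numpy as np

        def sigmoidal_lightness_rescale(L, midpoint=50.0, contrast=0.05):
            """Map CIELAB L* (0-100) through a normalized logistic curve so that
            midtone contrast is preserved while shadows and highlights are
            compressed. `midpoint` and `contrast` stand in for the thesis's
            image-dependent parameters."""
            s = 1.0 / (1.0 + np.exp(-contrast * (L - midpoint)))
            s0 = 1.0 / (1.0 + np.exp(-contrast * (0.0 - midpoint)))
            s1 = 1.0 / (1.0 + np.exp(-contrast * (100.0 - midpoint)))
            return 100.0 * (s - s0) / (s1 - s0)

        def knee_chroma_compress(C, C_gamut, knee=0.7):
            """Leave chroma below knee * C_gamut unchanged and compress the
            region near the destination gamut boundary C_gamut at an
            increasing rate, so out-of-gamut chroma lands inside the gamut."""
            C_knee = knee * C_gamut
            t = np.maximum(C - C_knee, 0.0) / np.maximum(C_gamut - C_knee, 1e-6)
            return np.minimum(C, C_knee + (C_gamut - C_knee) * t / (1.0 + t))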

    Harnessing Collaborative Technologies: Helping Funders Work Together Better

    This report was produced through a joint research project of the Monitor Institute and the Foundation Center. The research included an extensive literature review on collaboration in philanthropy, detailed analysis of trends from a recent Foundation Center survey of the largest U.S. foundations, interviews with 37 leading philanthropy professionals and technology experts, and a review of over 170 online tools. The report is a story about how new tools are changing the way funders collaborate. It includes three primary sections: an introduction to emerging technologies and the changing context for philanthropic collaboration; an overview of collaborative needs and tools; and recommendations for improving the collaborative technology landscape. A "Key Findings" executive summary serves as a companion piece to this full report.

    Algorithms for compression of high dynamic range images and video

    Recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Furthermore, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. Current solutions to this problem include tone mapping the HDR content to fit SDR; however, this approach leads to image-quality problems when strong dynamic-range compression is applied. Although some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given these observations, a research gap was identified: the need for efficient compression algorithms for still images and video that can store the full dynamic range and colour gamut of HDR images while remaining backward compatible with the existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithm accommodate different tone-mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Furthermore, a universal and computationally efficient approximation of the tone-mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data improves the compression efficiency of the algorithms. The proposed novel approaches to compressing the tone-mapping operator's metadata are also shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design-space-exploration flow and integrating the high-level systems-design framework with domain-specific tools for the synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
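
    As a rough illustration of the two-layer idea described above (not the thesis's actual CODEC), the Python sketch below produces a backward-compatible base layer with an arbitrary tone-mapping operator and an enhancement layer holding the log ratio needed to recover the original luminance; a legacy SDR decoder uses the base layer alone. The function names and the log-ratio residual form are assumptions made for illustration.

        import numpy as np

        def encode_two_layer(hdr_luma, tmo):
            """hdr_luma: linear HDR luminance (positive floats); tmo: any
            tone-mapping operator (possibly spatially varying) returning SDR
            luminance in [0, 1]."""
            hdr = np.maximum(hdr_luma, 1e-6)             # avoid log of zero
            sdr = np.clip(tmo(hdr), 1e-4, 1.0)           # base layer: SDR-compatible image
            ratio = np.log2(hdr / sdr)                   # enhancement layer: log residual
            return sdr, ratio                            # both layers would then be compressed

        def decode_two_layer(sdr, ratio=None):
            """A legacy SDR decoder uses `sdr` alone; an HDR-aware decoder
            applies the enhancement layer to recover the full dynamic range."""
            return sdr if ratio is None else sdr * np.exp2(ratio)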

    First Steps Towards an Ethics of Robots and Artificial Intelligence

    This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance, given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognise that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.

    Unravelling semiotics in 2022: A year in review

    Unravelling semiotics in 2022: A year in review.

    A methodology for generating High Dynamic Range images under Roman lighting

    Paper presented at the International Association for the Scientific Knowledge - InterTIC'07, Porto, 2007. In the very near future, the way we view content on any display device will undergo profound changes. The light captured by the human eye during a simple walk on the beach on a bright sunny day can reach truly astronomical values of intensity and chromaticity. However, much of that dynamic range cannot be represented in the RGB model used in practically all current display devices. High Dynamic Range (HDR) imaging is a research area devoted to the study of ways and methods of filling this gap. To that end, new techniques have been developed for generating, storing, and representing images that preserve the wide dynamic range captured by the Human Visual System. In this paper we present a working methodology that applies this new visualization paradigm in a field where its potential is truly appropriate: archaeology. The Casa dos Repuxos (House of the Fountains) is the most beautiful and imposing space in the ruins of Conimbriga (Portugal), and it still preserves some of the original frescoes and mosaics. Our goal is to generate HDR images of these frescoes and mosaics illuminated by lamps of that period, so that the visual experience is as close as possible to that of an inhabitant of the house.
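
    HDR radiance maps of the kind described here are typically generated by merging bracketed exposures of the same scene. The Python sketch below shows that generic multi-exposure merge, not the paper's specific methodology; the hat-shaped weighting and the function name are illustrative assumptions.

        import numpy as np

        def merge_exposures(images, exposure_times):
            """images: list of linearized exposures (float arrays in [0, 1]);
            exposure_times: matching exposure times in seconds.
            Returns a per-pixel radiance estimate weighted toward well-exposed
            pixels in each frame."""
            acc = np.zeros_like(images[0], dtype=np.float64)
            weights = np.zeros_like(acc)
            for img, t in zip(images, exposure_times):
                w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones most
                acc += w * (img / t)                # radiance contribution of this frame
                weights += w
            return acc / np.maximum(weights, 1e-6)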

    High-fidelity colour reproduction for high-dynamic-range imaging

    The aim of this thesis is to develop a colour reproduction system for high-dynamic-range (HDR) imaging. Classical colour reproduction systems fail to reproduce HDR images because current characterisation methods and colour appearance models do not cover the dynamic range of luminance present in HDR images. HDR tone-mapping algorithms have been developed to reproduce HDR images on low-dynamic-range media such as LCD displays; however, most of these models consider only luminance compression from a photographic point of view and do not explicitly take colour appearance into account. Motivated by the idea of bridging the gap between cross-media colour reproduction and HDR imaging, this thesis investigates the fundamentals and the infrastructure of cross-media colour reproduction, restructures cross-media colour reproduction with respect to HDR imaging, and develops a novel cross-media colour reproduction system for HDR imaging. First, our HDR characterisation method enables us to measure HDR radiance values to an accuracy that rivals that of spectroradiometers. Second, our colour appearance model enables us to predict human colour perception under high luminance levels. We first built a high-luminance display in order to establish a controllable high-luminance viewing environment. We conducted a psychophysical experiment on this display device to measure perceptual colour attributes. A novel numerical model for colour appearance was derived from our experimental data, covering the full working range of the human visual system. Our appearance model predicts colour and luminance attributes under high luminance levels; in particular, it predicts perceived lightness and colourfulness to a significantly higher accuracy than other appearance models. Finally, a complete colour reproduction pipeline is proposed using our novel HDR characterisation and colour appearance models. Results indicate that our reproduction system outperforms other reproduction methods with statistical significance. Our colour reproduction system provides high-fidelity colour reproduction for HDR imaging and successfully bridges the gap between cross-media colour reproduction and HDR imaging.
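
    The pipeline described here can be summarised structurally as characterisation, a forward appearance transform under the scene viewing condition, an inverse transform under the display condition, and display encoding. The Python sketch below shows only that structure; the callables, their interfaces, and the viewing-condition values are assumptions, and the thesis's actual characterisation and appearance models are not reproduced.

        import numpy as np

        def reproduce_hdr(hdr_rgb, characterise, appearance_fwd, appearance_inv, encode_display):
            """All four arguments are user-supplied callables (assumed interfaces):
            characterise(rgb)           -> absolute XYZ radiance (cd/m^2)
            appearance_fwd(xyz, cond)   -> perceptual attributes (lightness, colourfulness, hue)
            appearance_inv(attrs, cond) -> XYZ under the target viewing condition
            encode_display(xyz)         -> display RGB code values"""
            scene_cond = {"adapting_luminance": 2000.0, "surround": "average"}   # illustrative values
            display_cond = {"adapting_luminance": 100.0, "surround": "dim"}      # illustrative values
            xyz_scene = characterise(np.asarray(hdr_rgb, dtype=np.float64))
            attributes = appearance_fwd(xyz_scene, scene_cond)
            xyz_display = appearance_inv(attributes, display_cond)
            return encode_display(xyz_display)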

    Bayesian Methods for Radiometric Calibration in Motion Picture Encoding Workflows

    A method for estimating the Camera Response Function (CRF) of an electronic motion picture camera is presented in this work. Accurate estimation of the CRF allows camera exposures to be properly encoded into motion picture post-production workflows such as the Academy Color Encoding Specification (ACES); this is a necessary step to correctly combine images from different capture sources into one cohesive final production and to minimize non-creative manual adjustments. Although there are well-known standard CRFs implemented in typical video camera workflows, motion picture workflows and newer High Dynamic Range (HDR) imaging workflows have introduced new standard CRFs as well as custom and proprietary CRFs that need to be known for proper post-production encoding of the camera footage. Current methods to estimate this function rely on measurement charts, use multiple static images taken under different exposures or lighting conditions, or assume a simplistic model of the function's shape. All of these methods are problematic and difficult to fit into motion picture production and post-production workflows, where the use of test charts and varying camera or scene setups is impractical and where a method based solely on camera footage, comprising a single image or a series of images, would be advantageous. This work presents a methodology, initially based on the work of Lin, Gu, Yamazaki and Shum, that takes into account edge color mixtures in an image or image sequence, which are affected by the non-linearity introduced by a CRF. In addition, a novel feature based on image noise is introduced to overcome some of the limitations of edge color mixtures. These features provide the information used in the likelihood distribution of a Bayesian framework, and the CRF is estimated as the expected value of the posterior distribution, which is itself approximated by a Markov Chain Monte Carlo (MCMC) sampling algorithm. This allows for a more complete description of the CRF than methods such as Maximum Likelihood (ML) and Maximum A Posteriori (MAP) estimation. The CRF is modeled by Principal Component Analysis (PCA) of the Database of Response Functions (DoRF) compiled by Grossberg and Nayar, and the prior distribution is modeled by a Gaussian Mixture Model (GMM) of the PCA coefficients of the responses in the DoRF. CRF estimation results are presented for an ARRI electronic motion picture camera, showing the improved estimation accuracy and practicality of this method over previous methods for motion picture post-production workflows.
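
    A compact way to picture the estimation scheme: the CRF is parameterised by a handful of PCA coefficients, a GMM prior and a feature-based likelihood define an unnormalised posterior over those coefficients, and the estimate is the mean of MCMC samples drawn from that posterior. The Python sketch below uses a generic random-walk Metropolis sampler with the prior and likelihood supplied as callables; it illustrates the general approach rather than the work's implementation, and the step size, dimensionality, and burn-in fraction are assumed values.

        import numpy as np

        def estimate_crf_coefficients(log_prior, log_likelihood, dim=5,
                                      n_samples=20000, step=0.05, seed=0):
            """Random-walk Metropolis sampler over the CRF's PCA coefficients.
            log_prior(x):      log density of the GMM prior at coefficient vector x
            log_likelihood(x): log likelihood of the observed image features given x
            Returns the posterior-mean coefficient vector."""
            rng = np.random.default_rng(seed)
            x = np.zeros(dim)                                   # start at the mean response
            log_p = log_prior(x) + log_likelihood(x)
            samples = []
            for _ in range(n_samples):
                proposal = x + step * rng.standard_normal(dim)  # symmetric random-walk proposal
                log_p_new = log_prior(proposal) + log_likelihood(proposal)
                if np.log(rng.uniform()) < log_p_new - log_p:   # Metropolis acceptance test
                    x, log_p = proposal, log_p_new
                samples.append(x.copy())
            burn = n_samples // 4                               # discard burn-in samples
            return np.mean(samples[burn:], axis=0)              # posterior-mean estimate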

    Perception and Mitigation of Artifacts in a Flat Panel Tiled Display System

    Flat panel displays continue to dominate the display market. Larger, higher-resolution flat panel displays are now in demand for scientific, business, and entertainment purposes. Manufacturing such large displays is currently difficult and expensive. Alternatively, larger displays can be constructed by tiling smaller flat panel displays. While this approach may prove more cost-effective, appropriate measures must be taken to achieve visual seamlessness and uniformity. In this project we conducted a set of experiments to study the perception and mitigation of image artifacts in tiled display systems. In the first experiment we used a prototype tiled display to investigate its current viability and to understand which critical visual artifacts are perceptible in this system. Based on word frequencies in the survey responses, the most disruptive perceived artifacts were ranked. On the basis of these findings, we conducted a second experiment to test the effectiveness of image processing algorithms designed to mitigate some of the most distracting artifacts without changing the physical properties of the display system. Still images were processed using several algorithms and evaluated by observers using magnitude scaling. Participants in the experiment noticed a statistically significant improvement in image quality from one of the two algorithms. Similar testing should be conducted to evaluate the effectiveness of the algorithms on video content. While much work remains to be done, the contributions of this project should enable the development of an image processing pipeline to mitigate perceived artifacts in flat panel display systems and provide the groundwork for extending such a pipeline to real-time applications.
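
    One simple example of the kind of software-only mitigation evaluated in such studies is per-tile luminance matching, which reduces visible brightness steps at panel seams without altering the hardware. The Python sketch below is a generic illustration under assumed inputs (a single-channel float image divided evenly into tiles), not one of the project's algorithms.

        import numpy as np

        def equalize_tile_luminance(image, tile_rows, tile_cols):
            """Scale each tile of a single-channel float image so its mean
            luminance matches the dimmest tile; matching down avoids clipping
            the brighter panels."""
            h, w = image.shape
            th, tw = h // tile_rows, w // tile_cols
            means = np.array([[image[r * th:(r + 1) * th, c * tw:(c + 1) * tw].mean()
                               for c in range(tile_cols)] for r in range(tile_rows)])
            means = np.maximum(means, 1e-6)             # guard against all-black tiles
            target = means.min()                        # dimmest tile sets the target level
            out = image.astype(np.float64)
            for r in range(tile_rows):
                for c in range(tile_cols):
                    out[r * th:(r + 1) * th, c * tw:(c + 1) * tw] *= target / means[r, c]
            return out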