
    A Study of Colour Rendering in the In-Camera Imaging Pipeline

    Consumer cameras such as digital single-lens reflex (DSLR) and smartphone cameras have onboard hardware that applies a series of processing steps to transform the initial captured raw sensor image into the final output image provided to the user. These processing steps collectively make up the in-camera image processing pipeline. This dissertation aims to study the processing steps related to colour rendering, which can be categorized into two stages. The first stage converts an image's sensor-specific raw colour space to a device-independent perceptual colour space. The second stage further processes the image into a display-referred colour space and includes photo-finishing routines to make the image appear visually pleasing to a human. This dissertation makes four contributions towards the study of camera colour rendering. The first contribution is the development of a software-based research platform that closely emulates the in-camera image processing pipeline hardware. This platform allows the examination of the various image states of the captured image as it is processed from the sensor response to the final display output. Our second contribution is to demonstrate the advantage of having access to intermediate image states within the in-camera pipeline, which provide more accurate colourimetric consistency among multiple cameras. Our third contribution is to analyze the current colourimetric method used by consumer cameras and to propose a modification that improves its colour accuracy. Our fourth contribution is to describe how to customize a camera imaging pipeline using machine vision cameras to produce high-quality perceptual images for dermatological applications. The dissertation concludes with a summary and future directions.
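
    The two-stage colour rendering described above can be illustrated with a minimal sketch; the white-balance gains, the camera colour matrix and the tone curve below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

# Minimal sketch of the two colour-rendering stages described above.
# The white-balance gains, the camera matrix and the tone curve are
# illustrative placeholders, not values from the dissertation.

def stage1_raw_to_xyz(raw_rgb, wb_gains, cam_to_xyz):
    """Stage 1: sensor-specific raw -> device-independent (CIE XYZ)."""
    balanced = raw_rgb * wb_gains                 # per-channel white balance
    return balanced @ cam_to_xyz.T                # 3x3 colourimetric transform

def stage2_xyz_to_display(xyz, xyz_to_srgb, gamma=1 / 2.2):
    """Stage 2: perceptual space -> display-referred, with photo-finishing."""
    linear_rgb = np.clip(xyz @ xyz_to_srgb.T, 0.0, 1.0)
    return linear_rgb ** gamma                    # simple tone curve / gamma

# Example usage with placeholder numbers.
raw = np.array([[0.20, 0.35, 0.15]])              # one raw pixel, normalised
wb = np.array([2.0, 1.0, 1.6])                    # hypothetical AWB gains
cam2xyz = np.array([[0.41, 0.36, 0.18],
                    [0.21, 0.72, 0.07],
                    [0.02, 0.12, 0.95]])          # hypothetical camera matrix
xyz2srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                     [-0.9689,  1.8758,  0.0415],
                     [ 0.0557, -0.2040,  1.0570]])  # standard XYZ -> sRGB matrix

display_rgb = stage2_xyz_to_display(stage1_raw_to_xyz(raw, wb, cam2xyz), xyz2srgb)
print(display_rgb)
```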

    Stereoscopic high dynamic range imaging

    Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of both eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, and provides better contrast and more natural-looking scenes. The combination of the two technologies, in order to gain the advantages of both, has been until now mostly unexplored due to current limitations in the imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline, outlines the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline. Capturing SHDR images directly would potentially require two HDR cameras and introduce ghosting; these problems are mitigated by capturing an HDR and LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images. Results demonstrated that one of the methods may produce images perceptually indistinguishable from the ground truth. Insights obtained while developing static image operators guided the design of SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos. Results showed little overall error and identified the method with the least error. Once captured, SHDR content needs to be efficiently compressed. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content at a size only slightly larger than that of a traditional single LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format. The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to improved and more realistic representation of captured scenes.
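
    One plausible building block for generating the missing HDR view from an HDR-LDR pair is to linearise the LDR image and rescale it against its HDR counterpart; the sketch below illustrates that idea only and is not the method evaluated in the thesis (the gamma value and matching rule are assumptions):

```python
import numpy as np

# Hedged sketch: one simple way to lift the LDR view of a stereo pair towards
# the range of its HDR counterpart (linearise an assumed gamma, then rescale).
# This is not the thesis's method, just an illustration of the problem setup.

def expand_ldr_to_hdr(ldr, hdr_reference, gamma=2.2, eps=1e-6):
    """Return an HDR-range estimate of `ldr`, matched to `hdr_reference`."""
    linear = np.clip(ldr, 0.0, 1.0) ** gamma             # undo display gamma
    # Scale so the mean luminance matches the reference HDR view.
    scale = (hdr_reference.mean() + eps) / (linear.mean() + eps)
    return linear * scale

# Example with random stand-in images (height x width x 3).
rng = np.random.default_rng(0)
hdr_left = rng.uniform(0.0, 50.0, (480, 640, 3))          # HDR-range values
ldr_right = rng.uniform(0.0, 1.0, (480, 640, 3))          # 0..1 display values
shdr_right = expand_ldr_to_hdr(ldr_right, hdr_left)
print(shdr_right.max(), hdr_left.max())
```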

    Investigations into colour constancy by bridging human and computer colour vision

    The mechanism of colour constancy within the human visual system has long been of great interest to researchers within the psychophysical and image processing communities. With the maturation of colour imaging techniques for both scientific and artistic applications, the importance of colour capture accuracy has consistently increased. Colour offers a great deal more information for the viewer than grayscale imagery, supporting tasks ranging from object detection to food ripeness and health estimation, amongst many others. However, these tasks rely upon the colour constancy process to discount scene illumination. Psychophysical studies have attempted to uncover the inner workings of this mechanism, which would allow it to be reproduced algorithmically and thereby enable the development of devices that can eventually capture and perceive colour in the same manner as a human viewer. The two communities have approached this challenge from opposite ends and, as such, with very different and largely unconnected approaches. This thesis investigates the development of studies and algorithms which bridge the two communities. Findings from psychophysical studies are first used as inspiration to improve an existing image enhancement algorithm, and the results are compared to state-of-the-art methods. Further knowledge of, and inspiration from, the human visual system is then used to develop a novel colour constancy approach, which attempts to mimic the mechanism of colour constancy by investigating the use of a physiological colour space and specific scene contents to estimate the illumination. The performance of the colour constancy mechanism within the visual system itself is then investigated, testing it across different scenes and across commonly and uncommonly encountered illuminations. The importance of being able to bridge these two communities with a successful colour constancy method is further illustrated with a case study investigating human visual perception of the agricultural produce of tomatoes. EPSRC DTA: Institute of Neuroscience, Newcastle University.
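
    As a rough illustration of illuminant estimation in a physiological colour space, the sketch below applies a grey-world estimate and a von Kries correction in an LMS-like space; the matrices and the estimator are standard textbook choices assumed for illustration, not the thesis's actual method:

```python
import numpy as np

# Hedged sketch of illuminant estimation and correction (grey-world style),
# performed in an LMS cone-response space via a von Kries transform. The
# estimator is illustrative; the thesis's approach (which also uses specific
# scene contents) is not reproduced here.

RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])       # linear sRGB -> XYZ
XYZ_TO_LMS = np.array([[ 0.38971, 0.68898, -0.07868],
                       [-0.22981, 1.18340,  0.04641],
                       [ 0.0,     0.0,      1.0    ]])  # Hunt-Pointer-Estevez
RGB_TO_LMS = XYZ_TO_LMS @ RGB_TO_XYZ

def correct_colour_constancy(image_rgb):
    """Estimate the illuminant as the mean LMS response and discount it."""
    lms = image_rgb.reshape(-1, 3) @ RGB_TO_LMS.T
    illuminant = lms.mean(axis=0)                        # grey-world estimate
    lms_corrected = lms / illuminant                     # von Kries scaling
    rgb_corrected = lms_corrected @ np.linalg.inv(RGB_TO_LMS).T
    return rgb_corrected.reshape(image_rgb.shape)

rng = np.random.default_rng(1)
img = rng.uniform(0.05, 1.0, (240, 320, 3)) * np.array([1.0, 0.8, 0.6])  # warm cast
print(correct_colour_constancy(img).reshape(-1, 3).mean(axis=0))
```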

    An Investigation Of The Relationship Between Visual Effects And Object Identification Using Eye-tracking

    The visual content presented on information displays used in training environments prescribes display attributes such as brightness, color, contrast, and motion blur, but the cognitive processes corresponding to these visual features require further attention in order to optimize displays for training applications. This dissertation describes an empirical study in which information display features, specifically color and motion blur reduction, were investigated to assess their impact in a training scenario involving visual search and threat detection. Presented in this document is a review of the theory and literature describing display technology, its applications to training, and how eye-tracking systems can be used to objectively measure cognitive activity. The experiment required participants to complete a threat identification task, with the display settings altered beforehand, to assess the utility of the display capabilities. The data obtained led to the conclusion that motion blur had a stronger impact on perceptual load than the addition of color. The increased perceptual load resulted in approximately 8-10% longer fixation durations for all display conditions and a similar decrease in the number of saccades, but only when motion blur reduction was used. No differences were found in terms of threat location or threat identification accuracy, so it was concluded that the effects of perceptual load were independent of germane cognitive load.

    Inverse tone mapping

    The introduction of High Dynamic Range Imaging in computer graphics has produced a change in imaging comparable to, if not greater than, the introduction of colour photography. Light can now be captured, stored, processed, and finally visualised without losing information. Moreover, new applications that can exploit the physical values of the light have been introduced, such as re-lighting of synthetic/real objects and enhanced visualisation of scenes. However, these new processing and visualisation techniques cannot be applied to the movies and pictures that photography and cinematography have produced over more than one hundred years. This thesis introduces a general framework for expanding legacy content into High Dynamic Range content. The expansion is achieved while avoiding artefacts, producing images suitable for visualisation and for re-lighting of synthetic/real objects. Moreover, a methodology based on psychophysical experiments and computational metrics is presented for measuring the performance of expansion algorithms. Finally, a compression scheme for High Dynamic Range textures, inspired by the framework, is proposed and evaluated.
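
    The core idea of expanding legacy LDR content can be sketched as a simple inverse tone mapping ("expansion") operator; the linearisation gamma, boost exponent and peak luminance below are illustrative assumptions rather than the thesis's expansion framework:

```python
import numpy as np

# Hedged sketch of a basic inverse tone mapping (expansion) operator: the LDR
# pixel values are linearised and the brightest regions are boosted towards a
# chosen peak luminance. This is a generic illustration, not the thesis's
# expansion framework.

def expand_ldr(ldr, peak_luminance=1000.0, gamma=2.2):
    """Expand an LDR image (values in 0..1) to an HDR estimate in cd/m^2."""
    linear = np.clip(ldr, 0.0, 1.0) ** gamma           # undo display encoding
    # Non-linear boost: highlights are stretched more than mid-tones, which
    # helps avoid amplifying quantisation artefacts in dark regions.
    return peak_luminance * linear ** 1.5

rng = np.random.default_rng(2)
ldr = rng.uniform(0.0, 1.0, (480, 640, 3))
hdr = expand_ldr(ldr)
print(hdr.min(), hdr.max())                            # roughly 0 .. 1000
```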

    Efficient streaming for high fidelity imaging

    Researchers and practitioners of graphics, visualisation and imaging have an ever-expanding list of technologies to account for, including (but not limited to) HDR, VR, 4K, 360°, light field and wide colour gamut. As these technologies move from theory to practice, the methods of encoding and transmitting this information need to become more advanced and capable year on year, placing greater demands on latency, bandwidth, and encoding performance. High dynamic range (HDR) video is still in its infancy; the tools for capture, transmission and display of true HDR content are still restricted to professional technicians. Meanwhile, computer graphics are nowadays near-ubiquitous, but to achieve the highest fidelity in real or even reasonable time, a user must be located at or near a supercomputer or other specialist workstation. These physical requirements mean that it is not always possible to demonstrate these graphics in any given place at any time, and when the graphics in question are intended to provide a virtual reality experience, the constraints on performance and latency are even tighter. This thesis presents an overall framework for adapting upcoming imaging technologies for efficient streaming, constituting novel work across three areas of imaging technology. Over the course of the thesis, high dynamic range capture, transmission and display are considered, before specifically focusing on the transmission and display of high-fidelity rendered graphics, including HDR graphics. Finally, this thesis considers the technical challenges posed by upcoming head-mounted displays (HMDs). In addition, a full literature review is presented across all three of these areas, detailing state-of-the-art methods for approaching all three problem sets. In the area of high dynamic range capture, transmission and display, a framework is presented and evaluated for efficient processing, streaming and encoding of high dynamic range video using general-purpose graphics processing unit (GPGPU) technologies. For remote rendering, state-of-the-art methods of augmenting a streamed graphical render are adapted to incorporate HDR video and high-fidelity graphics rendering, specifically with regard to path tracing. Finally, a novel method is proposed for streaming graphics to an HMD for virtual reality (VR). This method utilises 360° projections to transmit and reproject stereo imagery to an HMD with minimal latency, with an adaptation for the rapid local production of depth maps.
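
    The reprojection step when streaming a 360° frame to an HMD can be sketched as follows; the equirectangular lookup and nearest-neighbour sampling below are generic assumptions, not the thesis's implementation:

```python
import numpy as np

# Hedged sketch of reprojecting a streamed 360-degree (equirectangular) frame
# for an HMD: each display ray is converted to longitude/latitude and used to
# look up the transmitted panorama. Projection model and sampling are
# illustrative assumptions.

def sample_equirect(panorama, directions):
    """Sample an equirectangular image for unit view directions (N x 3)."""
    h, w, _ = panorama.shape
    x, y, z = directions[:, 0], directions[:, 1], directions[:, 2]
    lon = np.arctan2(x, z)                     # -pi .. pi
    lat = np.arcsin(np.clip(y, -1.0, 1.0))     # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    v = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return panorama[v, u]                      # nearest-neighbour lookup

rng = np.random.default_rng(3)
pano = rng.uniform(0.0, 1.0, (512, 1024, 3))   # stand-in 360-degree frame
dirs = rng.normal(size=(5, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(sample_equirect(pano, dirs))
```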

    Displaying colourimetrically calibrated images on a high dynamic range display

    When colourimetrically characterising a high dynamic range (HDR) display built from an LCD panel and an LED backlight, one is faced with several problems: the channels may not be constant; they may not be independent; and there may be significant radiant output at the black level. But crucially, colour transforms are underdetermined, meaning that the number of colourimetric dimensions is smaller than the number of device channels. While the first three problems are associated with the LCD, the fourth stems from the additional channel in the HDR display: the LED backlight. A 37" flat-panel Brightside DR37-P HDR display was characterised. Using a spectroradiometer, we recorded spectral radiance, chromaticities and luminance, and estimated the true increase in the display's gamut due to the additional LED layer. We present a basic characterisation, propose a method for accurately presenting a desired luminance and chromaticity output despite the underdetermined problem, and give an estimate of the available gamut.
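
    The underdetermined problem can be illustrated with a toy model in which a shared backlight level and three LCD drives (four device values) map to three colourimetric values; the matrix and the rule for picking a unique solution below are illustrative assumptions, not the paper's characterisation method:

```python
import numpy as np

# Hedged sketch of why the forward transform is underdetermined: with a shared
# LED backlight level b and LCD drive values rgb, the simplified model
# XYZ = b * M @ rgb maps 4 device values to 3 colourimetric ones, so a target
# XYZ has many solutions. One way to pick a unique solution is the smallest
# backlight level that keeps the LCD drives within 0..1. The matrix M is an
# illustrative placeholder, not a measured characterisation.

M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])          # assumed LCD primaries per unit backlight
M_inv = np.linalg.inv(M)

def drive_values_for(target_xyz):
    """Return (backlight, rgb) reproducing target_xyz under the toy model."""
    rgb_at_full = M_inv @ target_xyz              # LCD drives if backlight = 1
    backlight = max(rgb_at_full.max(), 1e-6)      # smallest feasible backlight
    return backlight, rgb_at_full / backlight     # rgb now within 0..1

b, rgb = drive_values_for(np.array([50.0, 60.0, 40.0]))
print(b, rgb, b * M @ rgb)                        # reproduces the target XYZ
```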

    An aptamer-based sensing platform for luteinising hormone pulsatility measurement

    Normal fertility in humans involves highly orchestrated communication across the hypothalamic-pituitary-gonadal (HPG) axis. The pulsatile release of Luteinising Hormone (LH) is a critical element in the downstream regulation of sex steroid hormone synthesis and the production of mature eggs. Changes in the LH pulsatile pattern have been linked to hypothalamic dysfunction, resulting in multiple reproductive and growth disorders including Polycystic Ovary Syndrome (PCOS), Hypothalamic Amenorrhea (HA), and delayed/precocious puberty. Therefore, assessing the pulsatility of LH is important not only for academic investigation of infertility, but also for clinical decisions and monitoring of treatment. However, there is currently no clinically available tool for measuring human LH pulsatility. The immunoassay system is expensive and requires large volumes of patient blood, limiting its application to LH pulsatility monitoring. In this thesis, I propose a novel method using aptamer-enabled sensing technology to develop a device platform to measure LH pulsatility. I first generated a novel aptamer binding molecule against LH by nitrocellulose membrane-based in vitro selection, then characterised its high-affinity and specific binding properties by multiple biophysical/chemical methods. I then developed a sensitive electrochemical detection method using this aptamer: the principal mechanism is that structure switching of the aptamer upon binding changes the electron transfer rate of the methylene blue (MB) redox label. I then customised this assay to numerous device platforms under our rapid prototyping strategy, including a 96-well automated platform, a continuous sensing platform and a chip-based multiple-electrode platform. The best-performing device was found to be AELECAP (Automated ELEctroChemical Aptamer Platform), a 96-well-plate-based automatic micro-wire sensing platform capable of measuring a series of low-volume luteinising hormone samples within a short time. Clinical samples were evaluated using AELECAP, including LH pulsatility profiles of menopausal women (high LH amplitude), normal women and men (normal LH amplitude), and women with hypothalamic amenorrhea (no LH pulsatility). Twelve patients of each type were measured, with 50 blood samples per patient collected at 10-minute intervals over 8 hours. Results showed that the system can distinguish LH pulsatile patterns among the cohorts, and the pulsatility profiles were consistent with results measured by clinical assays. AELECAP shows high potential as a novel approach for clinical aptamer-based sensing; it competes with current automated immunometric assay systems with lower costs, lower reagent use, and a simpler setup. There is potential for this approach to be further developed as a tool for infertility research and to assist clinicians in personalised treatment with hormonal therapy.
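
    Turning a series of 10-minute LH measurements into a pulsatility estimate can be sketched with simple peak detection; the thresholds and synthetic profile below are illustrative assumptions, and clinical pulse analysis typically uses more sophisticated deconvolution methods:

```python
import numpy as np
from scipy.signal import find_peaks

# Hedged sketch: convert a sampled LH time series (one value every 10 min, as
# in the clinical protocol above) into a pulse count via peak detection.
# The prominence threshold and synthetic profile are illustrative assumptions.

def lh_pulses(lh_iu_per_l, sample_interval_min=10, min_prominence=2.0):
    """Return peak indices and pulse frequency (pulses per hour)."""
    peaks, _ = find_peaks(lh_iu_per_l, prominence=min_prominence)
    duration_h = len(lh_iu_per_l) * sample_interval_min / 60.0
    return peaks, len(peaks) / duration_h

# Synthetic 8-hour profile: baseline plus ~90-minute pulses plus noise.
t = np.arange(0, 8 * 60, 10)                              # minutes
profile = 4 + 3 * np.maximum(0, np.sin(2 * np.pi * t / 90)) \
            + np.random.default_rng(4).normal(0, 0.3, t.size)
peaks, freq = lh_pulses(profile)
print(peaks, round(freq, 2), "pulses/hour")
```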