183 research outputs found

    Full-Color Stereoscopic Imaging With a Single-Pixel Photodetector

    We present an optical system for stereoscopic color imaging using a single-pixel detector. The system works by illuminating the input scene with a sequence of microstructured light patterns generated by a color digital light projector (DLP). A single monochromatic photodiode, synchronized with the DLP, measures the light scattered by the object for each pattern. The image is recovered computationally by applying compressive sensing techniques. The RGB chromatic components of the image are discriminated by exploiting the time-multiplexed color codification of the DLP. The stereoscopic pair is obtained by splitting the light field generated by the DLP and projecting microstructured light patterns onto the sample from two different directions. The experimental setup is built from simple optical components, a commercial photodiode, and an off-the-shelf DLP projector. Color stereoscopic images of a 3D scene obtained with this system are shown. This work was supported in part by MINECO under Grant FIS2013-40666-P, Generalitat Valenciana under Grant PROMETEO2012-021 and Grant ISIC 2012/013, and Universitat Jaume I under Grant P1-1B2012-55.

    NICE: A Computational Solution to Close the Gap from Colour Perception to Colour Categorization

    The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relating these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work begins to fill this void by proposing a new, physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method, we applied it to exemplary images and a popular ground-truth chart, obtaining labelling results that are better than those of current state-of-the-art algorithms.
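The idea of assigning a colour to the category whose isoresponse ellipsoid it falls inside can be illustrated with a toy sketch. The category centres and axis lengths below are invented for illustration; they are not the fitted NICE parameters.

```python
import numpy as np

# Hypothetical category ellipsoids in a 3-D cone-contrast space:
# each category is (centre, semi-axis lengths). A point is assigned to
# the category with the smallest normalized ellipsoidal radius
# (radius < 1 means the point lies inside that category's boundary).
categories = {
    "red":   (np.array([0.6, 0.0, 0.0]),  np.array([0.3, 0.2, 0.2])),
    "green": (np.array([-0.6, 0.1, 0.0]), np.array([0.3, 0.2, 0.2])),
    "blue":  (np.array([0.0, -0.2, 0.7]), np.array([0.2, 0.2, 0.4])),
}

def label(point):
    # Normalized ellipsoidal radius of `point` for each category.
    radii = {
        name: np.sqrt(np.sum(((point - c) / ax) ** 2))
        for name, (c, ax) in categories.items()
    }
    return min(radii, key=radii.get)

print(label(np.array([0.55, 0.05, 0.0])))  # falls inside the "red" ellipsoid
```

Axis-aligned ellipsoids keep the sketch short; the cortical isoresponse surfaces the model fits are general ellipsoids with arbitrary orientation.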

    Bayesian Methods for Radiometric Calibration in Motion Picture Encoding Workflows

    A method for estimating the Camera Response Function (CRF) of an electronic motion picture camera is presented in this work. Accurate estimation of the CRF allows camera exposures to be properly encoded into motion picture post-production workflows, such as the Academy Color Encoding System (ACES); this is a necessary step to correctly combine images from different capture sources into one cohesive final production and to minimize non-creative manual adjustments. Although there are well-known standard CRFs implemented in typical video camera workflows, motion picture workflows and newer High Dynamic Range (HDR) imaging workflows have introduced new standard CRFs as well as custom and proprietary CRFs that need to be known for proper post-production encoding of the camera footage. Current methods to estimate this function rely on measurement charts, use multiple static images taken under different exposures or lighting conditions, or assume a simplistic model of the function's shape. All of these methods are difficult to fit into motion picture production and post-production workflows, where the use of test charts and varying camera or scene setups is impractical and where a method based solely on camera footage, comprising a single image or a series of images, would be advantageous. This work presents a methodology, initially based on the work of Lin, Gu, Yamazaki and Shum, that exploits edge color mixtures in an image or image sequence, which are affected by the non-linearity introduced by a CRF. In addition, a novel feature based on image noise is introduced to overcome some of the limitations of edge color mixtures. These features provide information that is included in the likelihood distribution of a Bayesian framework to estimate the CRF as the expected value of a posterior probability distribution, which is itself approximated by a Markov Chain Monte Carlo (MCMC) sampling algorithm.
    This allows for a more complete description of the CRF than point estimates such as Maximum Likelihood (ML) and Maximum A Posteriori (MAP). The CRF is modeled by Principal Component Analysis (PCA) of the Database of Response Functions (DoRF) compiled by Grossberg and Nayar, and the prior probability distribution is modeled by a Gaussian Mixture Model (GMM) of the PCA coefficients for the responses in the DoRF. CRF estimation results are presented for an ARRI electronic motion picture camera, showing the improved estimation accuracy and practicality of this method over previous methods for motion picture post-production workflows.
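The Bayesian strategy of estimating the CRF as the expected value of an MCMC-approximated posterior can be illustrated with a toy version. Here the CRF is a one-parameter gamma curve with a flat prior and synthetic observations; the actual work uses a PCA model of the DoRF, a GMM prior, and image-derived edge and noise features in the likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth CRF: a simple gamma curve (invented here).
true_gamma = 2.2
irradiance = np.linspace(0.05, 1.0, 200)
observed = irradiance ** (1 / true_gamma) + rng.normal(0, 0.01, irradiance.size)

def log_posterior(g):
    if g <= 0.5 or g >= 4.0:            # flat prior on a plausible range
        return -np.inf
    resid = observed - irradiance ** (1 / g)
    return -0.5 * np.sum((resid / 0.01) ** 2)

# Metropolis sampling; the estimate is the posterior mean, mirroring the
# use of an expected value rather than a single ML/MAP point.
samples, g = [], 1.0
lp = log_posterior(g)
for _ in range(5000):
    proposal = g + rng.normal(0, 0.05)
    lp_prop = log_posterior(proposal)
    if np.log(rng.random()) < lp_prop - lp:
        g, lp = proposal, lp_prop
    samples.append(g)

g_hat = np.mean(samples[1000:])          # posterior mean after burn-in
print(round(g_hat, 2))
```

The full posterior samples also give credible intervals, which is the "more complete description" that point estimates like ML and MAP lack.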

    leave a trace - A People Tracking System Meets Anomaly Detection

    Video surveillance has always had a negative connotation, among other reasons because of the loss of privacy and because it does not automatically increase public safety. This could change if it were able to detect atypical (i.e. dangerous) situations in real time, autonomously and anonymously. A prerequisite is reliable automatic detection of possibly dangerous situations from video data. This is done classically by object extraction and tracking; from the derived trajectories, we then want to identify dangerous situations by detecting atypical trajectories. However, for ethical reasons it is better to develop such a system on data in which no people are threatened or harmed, and in which they know that a tracking system is installed. Another important point is that such situations do not occur very often in real, public CCTV areas, and are captured properly even less often. In the artistic project leave a trace, the tracked objects, people in the atrium of an institutional building, become actors and thus part of the installation. Real-time visualisation allows interaction by these actors, which in turn creates many atypical interaction situations on which we can develop our situation detection. The data set has evolved over three years and is hence large. In this article we describe the tracking system and several approaches for the detection of atypical trajectories.
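One simple way to flag atypical trajectories, in the spirit of the detection approaches the article describes, is to summarize each trajectory by a small feature vector and look for outliers. The synthetic trajectories and the particular descriptor below are illustrative assumptions, not the installation's data or algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tracked trajectories: 50 "typical" walks straight across a
# 10x10 m atrium, plus one erratic walk; each has 20 (x, y) points.
typical = [
    np.column_stack([np.linspace(0, 10, 20),
                     np.full(20, y0) + rng.normal(0, 0.1, 20)])
    for y0 in rng.uniform(2, 8, 50)
]
erratic = np.column_stack([rng.uniform(0, 10, 20), rng.uniform(0, 10, 20)])
trajectories = typical + [erratic]

def features(t):
    # Per-trajectory descriptor: step-length statistics and mean turning.
    steps = np.diff(t, axis=0)
    lengths = np.linalg.norm(steps, axis=1)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    return np.array([lengths.mean(), lengths.std(),
                     np.abs(np.diff(headings)).mean()])

X = np.array([features(t) for t in trajectories])
# Distance to the median descriptor flags the atypical trajectory.
d = np.linalg.norm(X - np.median(X, axis=0), axis=1)
atypical = int(np.argmax(d))
print(atypical)  # index of the erratic trajectory
```

Real deployments would cluster descriptors or learn a density model rather than threshold a single distance, but the outlier-in-feature-space idea is the same.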

    Spectrally Based Material Color Equivalency: Modeling and Manipulation

    A spectrally based normalization methodology (Wpt normalization) for linearly transforming cone excitations or sensor values (sensor excitations) to a representation that preserves the perceptual concepts of lightness, chroma, and hue is proposed, resulting in a color space with axes labeled W, p, t. Wpt (pronounced "Waypoint") has been demonstrated to be an effective material color equivalency space that provides the basis for defining Material Adjustment Transforms, which predict the changes in sensor excitations of material spectral reflectance colors due to variations in observer or illuminant. This is contrasted with Chromatic Adaptation Transforms, which predict color appearance as defined by corresponding color experiments. Material color equivalency as provided by Wpt and Wpt normalization forms the underlying foundation of this doctoral research. A perceptually uniform material color equivalency space ("Waypoint Lab", or WLab) was developed that represents a non-linear transformation of Wpt coordinates, and Euclidean WLab distances were found not to be statistically different from ∆E*94 and ∆E00 color differences. Sets of Wpt coordinates for variations in reflectance, illumination, or observers were used to form the basis for defining Wpt shift manifolds. WLab distances of corresponding points within or between these manifolds were utilized to define metrics for color inconstancy, metamerism, observer rendering, illuminant rendering, and differences in observing conditions. Spectral estimation and manipulation strategies are presented that preserve various aspects of "Wpt shift potential" as represented by changes in Wpt shift manifolds.
    Two methods were explored for estimating Wpt normalization matrices based upon direct utilization of sensor excitations, and the use of a Wpt-based Material Adjustment Transform to convert Cone Fundamentals to "XYZ-like" Color Matching Functions was investigated and contrasted with other methods such as direct regression and prediction of common color matching primaries. Finally, linear relationships between Wpt and spectral reflectances were utilized to develop approaches for spectral estimation and spectral manipulation within a general spectral reflectance manipulation framework, thus providing the ability to define and achieve "spectrally preferred" color rendering objectives. The presented methods of spectral estimation, spectral manipulation, and material adjustment were utilized to: define spectral reflectances for Munsell colors that minimize Wpt shift potential; manipulate spectral reflectances of actual printed characterization data sets to achieve the colorimetry of reference printing conditions; and lastly demonstrate the estimation and manipulation of spectral reflectances using images and spectrally based profiles within an iccMAX color management workflow.
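The core operation throughout, a linear transform relating sensor excitations across observing conditions, can be sketched as follows. The 3x3 matrix and random excitations are illustrative stand-ins; the actual Wpt normalization matrices and Material Adjustment Transforms are derived in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: sensor excitations of the same reflectances under
# two observer/illuminant conditions are related by an unknown 3x3 linear
# transform, as in a Wpt-style material adjustment.
true_M = np.array([[1.02, 0.05, -0.01],
                   [0.03, 0.96,  0.02],
                   [-0.02, 0.01, 1.10]])
excitations_a = rng.random((100, 3))          # condition A (rows = samples)
excitations_b = excitations_a @ true_M.T      # condition B

# The transform is estimated by least squares over corresponding
# excitations, the "direct utilization of sensor excitations" route.
M_hat, *_ = np.linalg.lstsq(excitations_a, excitations_b, rcond=None)
print(np.allclose(M_hat.T, true_M, atol=1e-8))
```

With noisy or metameric data the fit is no longer exact, which is where the WLab distance metrics above become useful for quantifying the residual shifts.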

    High dynamic range video merging, tone mapping, and real-time implementation

    Although High Dynamic Range (HDR) imaging has been the subject of significant research over the past fifteen years, the goal of cinema-quality HDR video has not yet been achieved. This work builds on an optical method patented by Contrast Optical that captures sequences of Low Dynamic Range (LDR) images which can be merged into HDR images as the basis for HDR video. Because of the large differences in exposure spacing of the LDR images captured by this camera, present methods of merging LDR images are insufficient to produce cinema-quality HDR images and video without significant visible artifacts. Thus the focus of the research presented is twofold. The first contribution is a new method of combining LDR images with exposure differences of greater than 3 stops into an HDR image. The second contribution is a method of tone mapping HDR video which solves the potential problems of HDR video flicker and automates parameter control of the tone mapping operator. A prototype of this HDR video capture technique, along with the combining and tone mapping algorithms, has been implemented in a high-definition HDR-video system. Additionally, Field Programmable Gate Array (FPGA) hardware implementation details are given to support real-time HDR video. Still frames from the acquired HDR video, merged and tone mapped with these techniques, are presented.
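A minimal version of weighted LDR-to-HDR merging followed by a global tone map might look like this. The hat weighting, the Reinhard-style operator, and the synthetic radiance are generic textbook illustrations, not the patented optical pipeline or the paper's widely-spaced-exposure merging algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear scene radiance and three bracketed exposures spaced
# ~3 stops apart; the CRF and optical beam-splitting are ignored here.
radiance = rng.random((16, 16)) * 100.0
exposure_times = [1 / 800, 1 / 100, 1 / 12.5]
ldr = [np.clip(radiance * t, 0, 1) for t in exposure_times]

def merge(images, times):
    # Hat weighting de-emphasizes clipped and near-black pixels; each LDR
    # frame votes for radiance = pixel_value / exposure_time.
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * img - 1.0) ** 4
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-6)

hdr = merge(ldr, exposure_times)
tone_mapped = hdr / (1.0 + hdr)   # global Reinhard-style operator
print(tone_mapped.min() >= 0 and tone_mapped.max() < 1)
```

For video, the paper's tone mapper additionally stabilizes its parameters across frames to avoid flicker, a temporal constraint this per-frame sketch does not model.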

    Calibrating and Stabilizing Spectropolarimeters with Charge Shuffling and Daytime Sky Measurements

    Well-calibrated spectropolarimetry studies at resolutions of R > 10,000, with signal-to-noise ratios (SNRs) better than 0.01% across individual line profiles, are becoming common with larger-aperture telescopes. Spectropolarimetric studies require high-SNR observations and are often limited by instrument systematic errors. As an example, fiber-fed spectropolarimeters combined with advanced line-combination algorithms can reach statistical error limits of 0.001% in measurements of spectral line profiles referenced to the continuum. Calibration of such observations is often required both for cross-talk and for continuum polarization. This is not straightforward, since telescope cross-talk errors are rarely less than ~1%. In solar instruments like the Daniel K. Inouye Solar Telescope (DKIST), much more stringent calibration is required, and the telescope optical design contains substantial intrinsic polarization artifacts. This paper describes some generally useful techniques we have applied to the HiVIS spectropolarimeter at the 3.7 m AEOS telescope on Haleakala. HiVIS now yields accurate polarized spectral line profiles that are shot-noise limited to 0.01% SNR levels at our full spectral resolution of 10,000 at a spectral sampling of ~100,000. We show line profiles with absolute spectropolarimetric calibration for cross-talk and continuum polarization in a system with polarization cross-talk levels of essentially 100%. In these data the continuum polarization can be recovered to one-percent accuracy because of a synchronized charge-shuffling mode now working with our CCD detector. These techniques can be applied to other spectropolarimeters on other telescopes, for both night-time and daytime applications, such as DKIST, TMT and ELT, which have folded, non-axially-symmetric foci. Comment: Accepted to A&
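Once the instrument's cross-talk matrix has been calibrated (for example against daytime-sky measurements), recovering the intrinsic polarization reduces to a matrix inversion. The 3x3 cross-talk values and Stokes vector below are invented for illustration and are not HiVIS calibration data.

```python
import numpy as np

# Hypothetical 3x3 matrix mixing the fractional Q, U, V Stokes parameters;
# telescope cross-talk can be ~100%, i.e. strongly off-diagonal.
crosstalk = np.array([[0.2, 0.9, 0.1],
                      [0.9, 0.1, 0.2],
                      [0.1, 0.3, 0.95]])

true_stokes = np.array([0.01, -0.004, 0.002])   # intrinsic polarization
measured = crosstalk @ true_stokes              # what the detector sees

# Inverting the calibrated cross-talk matrix recovers the source Stokes
# parameters from the measured ones.
recovered = np.linalg.solve(crosstalk, measured)
print(np.allclose(recovered, true_stokes))
```

In practice the matrix is wavelength- and pointing-dependent and only known to finite accuracy, which is why the achievable calibration is quoted at the percent level rather than being exact as in this noiseless sketch.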

    The time-course of colour vision

    Four experiments are presented, each investigating temporal properties of colour vision processing in human observers. The first experiment replicates and extends an experiment by Stromeyer et al. (1991). We look for a phase difference between combined temporal modulations in orthogonal directions in colour space, which might null the often-claimed latency of signals originating from the short-wavelength-sensitive cones (S-cones). We provide another estimate of the magnitude of this latency, and give evidence to suggest that it originates early in the chromatic pathway, before signals from S-cones are combined with those that receive opposed L- and M-cone input. In the second experiment we adapt observers to two stimuli that are matched in the mean and amplitude of modulation they offer to the cone classes and to the cardinal opponent mechanisms, but that differ in chromatic appearance, and hence in their modulation of later colour mechanisms. Chromatic discrimination thresholds after adaptation to these two stimuli differ along intermediate directions in colour space, and we argue that these differences reveal the adaptation response of central colour mechanisms. In the third experiment we demonstrate similar adaptation using the same stimuli, measured with reaction times rather than thresholds. In the final experiment, we measure the degree to which colour constancy is achieved as a function of time in a simulated stimulus environment in which the illuminant changes periodically. We find that perfect constancy is not achieved instantaneously after an illuminant chromaticity shift, and that the constancy of colour appearance judgements increases over several seconds.
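The final experiment's time-course measurement can be illustrated with a Brunswik-ratio-style constancy index, where 1 means the observer's match has moved fully with the illuminant and 0 means no compensation. The chromaticities and the exponential time constant below are invented for illustration, not the experiment's data.

```python
import numpy as np

def constancy_index(match, pre_illum, post_illum):
    # 1 - (residual error / shift required): 1 = perfect constancy, 0 = none.
    shift_needed = np.linalg.norm(post_illum - pre_illum)
    error = np.linalg.norm(match - post_illum)
    return 1.0 - error / shift_needed

pre = np.array([0.31, 0.33])    # chromaticity before the illuminant change
post = np.array([0.36, 0.37])   # chromaticity after the change

# Simulated time course: matches drift toward the new illuminant point
# over several seconds, so the index climbs toward (but not to) 1.
for t in [0.5, 2.0, 8.0]:
    match = post + (pre - post) * np.exp(-t / 3.0)
    print(t, round(constancy_index(match, pre, post), 2))
```

Plotting this index against time after each periodic illuminant shift is one way to quantify the gradual build-up of constancy the abstract reports.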

    Vision in an abundant North American bird: The Red-winged Blackbird

    Avian vision is fundamentally different from human vision; moreover, even within birds there are substantial between-species differences in visual perception in terms of visual acuity, visual coverage, and color vision. However, few species have had all of these visual traits described, which constrains our ability to study the evolution of visual systems in birds. To start addressing this gap, we characterized multiple traits of the visual system (visual coverage, visual acuity, centers of acute vision, and color vision) of the Red-winged Blackbird (Agelaius phoeniceus), one of the most abundant and studied birds in North America. We found that Red-winged Blackbirds have: wide visual coverage; one center of acute vision per eye (fovea), projecting fronto-laterally, with a high density of single and double cones, making it the center of both chromatic and achromatic vision; a wide binocular field that does not receive input from the centers of acute vision; and an ultraviolet-sensitive visual system. With this information, we parameterized a Red-winged Blackbird-specific perceptual model considering different plumage patches. We found that the male red epaulet was chromatically conspicuous but carried minimal achromatic signal, whereas the male yellow patch had a lower chromatic but a higher achromatic signal, which may be explained by the pigment composition of the feathers. The female epaulet, however, was not visually conspicuous in either the chromatic or the achromatic dimension compared with other female feather patches. We discuss the implications of this visual-system configuration for the foraging, antipredator, mate-choice, and social behaviors of Red-winged Blackbirds. Our findings can be used for comparative studies as well as for making more species-specific predictions about different visual behaviors for future empirical testing.