2,620 research outputs found

    Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    Full text link
    The quality of modern astronomical data, the power of modern computers and the agility of current image-processing software enable the creation of high-quality images in a purely digital form. The combination of these technological advancements has created a new ability to make color astronomical images and, in many ways, has led to a new philosophy of how to create them. A practical guide is presented on how to generate astronomical images from research data with powerful image-processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored with an iterative approach. Several examples of image creation are presented. A philosophy is also presented on how to use color and composition to create images that simultaneously highlight scientific detail and are aesthetically appealing. This philosophy is necessary because most datasets do not correspond to the wavelength range of sensitivity of the human eye. The use of visual grammar, defined as the elements that affect the interpretation of an image, can maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage viewers and keep them interested for a longer period of time. The use of these techniques can result in a striking image that effectively conveys the science within the image to scientists and to the public. Comment: 104 pages, 38 figures, submitted to A
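
    As a rough illustration of the layering metaphor described above, the following Python sketch tints three hypothetical, co-registered grayscale datasets with chosen colors and screen-blends them into one composite; the stretch parameters, color assignments and blend mode are illustrative assumptions, not the paper's prescribed settings.

    # Minimal sketch (hypothetical data and scaling choices): combine three
    # co-registered grayscale datasets into an RGB composite by assigning each
    # layer a hue and screen-blending the results.
    import numpy as np

    def stretch(data, black, white):
        """Clip to [black, white] and apply an asinh stretch to the 0..1 range."""
        x = np.clip((data - black) / (white - black), 0.0, 1.0)
        return np.arcsinh(10.0 * x) / np.arcsinh(10.0)

    def colorize(layer, rgb):
        """Tint a 0..1 grayscale layer with an (r, g, b) color."""
        return layer[..., None] * np.asarray(rgb, dtype=float)

    def screen(a, b):
        """Screen blend, a common mode for stacking emission-like layers."""
        return 1.0 - (1.0 - a) * (1.0 - b)

    # Hypothetical datasets (e.g., narrowband exposures), already registered.
    rng = np.random.default_rng(0)
    h_alpha = rng.random((256, 256))
    oiii    = rng.random((256, 256))
    sii     = rng.random((256, 256))

    layers = [
        colorize(stretch(h_alpha, 0.05, 0.95), (1.0, 0.2, 0.2)),  # red-ish
        colorize(stretch(oiii,    0.05, 0.95), (0.2, 1.0, 0.4)),  # green-ish
        colorize(stretch(sii,     0.05, 0.95), (0.3, 0.4, 1.0)),  # blue-ish
    ]
    composite = layers[0]
    for layer in layers[1:]:
        composite = screen(composite, layer)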

    Individual Colorimetric Observers for Personalized Color Imaging

    Get PDF
    Colors are typically described by three values such as RGB, XYZ, or HSV. This is rooted in the fact that humans possess three types of photoreceptors under photopic conditions, and human color vision can be characterized by a set of three color matching functions (CMFs). CMFs integrate spectra to produce three colorimetric values that are related to visual responses. In reality, large variations in CMFs exist among color-normal populations. Thus, a pair of spectrally different stimuli might be a match for one person but a mismatch for another, a phenomenon known as observer metamerism. Observer metamerism is a serious issue in color-critical applications such as soft proofing in graphic arts and color grading in digital cinema, where colors are compared on different displays. Due to observer metamerism, calibrated displays might not appear correct, and one person might disagree with color adjustments made by another. The recent advent of wide-color-gamut display technologies (e.g., LEDs, OLEDs, lasers, and quantum dots) has made observer metamerism even more serious because of their spectrally narrow primaries. The variations among people with normal color vision, and observer metamerism, have been overlooked for many years. The current typical color imaging workflow uses a single standard observer, assuming all color-normal people possess the same CMFs. This dissertation provides a possible solution to observer metamerism in color-critical applications through personalized color imaging, introducing individual colorimetric observers. First, color matching data were collected to derive and validate CMFs for individual colorimetric observers; data from 151 color-normal observers were obtained at four different locations. Second, two types of individual colorimetric observer functions were derived and validated. One is an individual colorimetric observer model, an extension of the CIE 2006 physiological observer that incorporates eight physiological parameters to model individuals in addition to the age and field-size inputs. The other is a set of categorical observer functions, providing a more convenient approach to personalized color imaging. Third, two workflows were proposed to characterize human color vision: one using an anomaloscope and the other using proposed spectral pseudoisochromatic images. Finally, personalized color imaging was evaluated in a color image matching study on an LCD monitor and a laser projector, and in a perceived color difference study on a SHARP Quattron display. Personalized color imaging was implemented using a newly introduced ICC profile, iccMAX.
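
    The integration step mentioned above, in which a set of three CMFs reduces a spectral stimulus to three values, can be sketched in a few lines of Python; the spectrum and CMF arrays below are random stand-ins rather than CIE or measured data, so the numbers are only illustrative.

    # Minimal sketch of CMF integration: three color matching functions reduce
    # a sampled spectral power distribution to three tristimulus-style values.
    import numpy as np

    wavelengths = np.arange(380, 781, 5)                 # nm, 5 nm sampling
    spectrum = np.ones_like(wavelengths, dtype=float)    # hypothetical stimulus SPD
    cmfs = np.random.default_rng(1).random((3, wavelengths.size))  # stand-in CMFs

    # Integrate (sum) the spectrum weighted by each CMF; 5.0 is the step in nm.
    tristimulus = cmfs @ spectrum * 5.0
    X, Y, Z = tristimulus

    # Observer metamerism, in these terms: two different spectra can yield equal
    # tristimulus values for one observer's CMFs yet unequal values for another's.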

    Design and Construction of a Multispectral Camera for Spectral and Colorimetric Reproduction

    Get PDF
    Multi-spectral imaging and spectral reflectance reconstruction can be used by cultural-heritage institutions to digitize their collections for documentation purposes. They can be used to simulate artwork under any lighting condition and to analyze the colorants that were used. The basic idea of a multi-spectral imaging system is to sub-sample the spectral reflectance factor, producing results similar to those of a spectrophotometer. The sampled data are used to reconstruct reflectance over the visible spectrum. In this thesis, a wide-band multispectral camera was designed and constructed to achieve high spectral and color accuracy as well as high image quality. Noise propagation theory was introduced and tested. A seven-channel bandpass filter set was modeled using Gaussian functions and optimized to yield high spectral and colorimetric reproduction accuracy as well as low colorimetric noise. Single and sandwich filters were selected from off-the-shelf absorption filters using the Gaussian bandpass filter model. Experiments were conducted to test the spectral, color and noise performance of the novel sandwich filters, which were compared with interference filters. The novel sandwich filters led to increased colorimetric accuracy along with a reduction in colorimetric noise. This imaging system will be used as part of a recommended workflow for museum archiving and will be an important addition to the spectral imaging capabilities at MCSL.
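
    A minimal Python sketch of the two ideas above, under assumed parameters: seven camera channels modeled as Gaussian bandpass transmittances, and a simple linear least-squares estimator as one possible stand-in for the reflectance reconstruction step. The wavelength range, channel centers, bandwidths and training spectra are hypothetical.

    # Minimal sketch: Gaussian bandpass channel model plus linear reconstruction.
    import numpy as np

    wl = np.arange(400, 701, 10.0)                         # nm
    centers = np.linspace(420, 680, 7)                     # hypothetical channel centers
    fwhm = 50.0
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    filters = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / sigma) ** 2)  # 7 x N

    # Hypothetical training reflectances (rows) and their simulated camera signals.
    rng = np.random.default_rng(2)
    train_refl = rng.random((100, wl.size))
    train_sig = train_refl @ filters.T                     # 100 x 7

    # Linear reconstruction matrix via least squares: refl ~ signal @ M
    M, *_ = np.linalg.lstsq(train_sig, train_refl, rcond=None)

    test_refl = rng.random(wl.size)
    estimate = (test_refl @ filters.T) @ M                 # reconstructed spectrum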

    Reconfigurable photonic logic architecture

    Get PDF
    The amorphous silicon photo-sensor studied in this thesis is a double pin structure (p(a-SiC:H)-i’(a-SiC:H)-n(a-SiC:H)-p(a-SiC:H)-i(a-Si:H)-n(a-Si:H)) sandwiched between two transparent contacts and deposited on transparent glass, allowing illumination on both sides and responding to wavelengths from the ultraviolet through the visible to the near-infrared range. The front illumination surface (glass side) is used for light signal inputs. Both surfaces are used for optical bias, which changes the dynamic characteristics of the photo-sensor, resulting in different outputs for the same input. Experimental studies were made with the photo-sensor to evaluate its applicability to multiplexing and demultiplexing several data communication channels. The digital light signal was defined to implement simple logical operations such as NOT, AND and OR, and complex ones such as XOR, MAJ, the full-adder and a memory effect. A programmable pattern emission system was built, along with systems for the validation and recovery of the obtained signals. This photo-sensor has applications in optical communications using several wavelengths, as a wavelength detector, and for executing logical operations directly on digital light input signals.
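
    For reference, the logic functions the photo-sensor is reported to implement optically can be written out in ordinary Boolean form; the short Python sketch below only illustrates what the device computes (XOR giving the sum and MAJ the carry of a full-adder), not how it computes it optically.

    # Boolean reference for the reported operations (illustrative only).
    def xor(a, b):
        return a ^ b

    def maj(a, b, c):
        return (a & b) | (a & c) | (b & c)

    def full_adder(a, b, cin):
        """One-bit full adder: XOR gives the sum bit, MAJ gives the carry."""
        return xor(xor(a, b), cin), maj(a, b, cin)

    assert full_adder(1, 1, 0) == (0, 1)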

    Single-image RGB Photometric Stereo With Spatially-varying Albedo

    Full text link
    We present a single-shot system to recover the surface geometry of objects with spatially-varying albedos from images captured under a calibrated RGB photometric stereo setup, with three light directions multiplexed across different color channels in the observed RGB image. Since the problem is ill-posed point-wise, we assume that the albedo map can be modeled as piece-wise constant with a restricted number of distinct albedo values. We show that under ideal conditions, the shape of a non-degenerate local constant-albedo surface patch can theoretically be recovered exactly. Moreover, we present a practical and efficient algorithm that uses this model to robustly recover shape from real images. Our method first reasons about shape locally in a dense set of patches in the observed image, producing shape distributions for every patch. These local distributions are then combined to produce a single consistent surface normal map. We demonstrate the efficacy of the approach through experiments on both synthetic renderings and real captured images. Comment: 3DV 2016. Project page at http://www.ttic.edu/chakrabarti/rgbps
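
    A minimal sketch of the measurement model assumed in this setup: each color channel observes one calibrated light direction, so a pixel yields one equation per channel and, once the per-channel albedos of a region are fixed (which the piecewise-constant assumption supplies), the surface normal follows from a 3x3 solve. The light directions, albedos and normal below are hypothetical.

    # Minimal calibrated RGB photometric stereo sketch: I_c = albedo_c * (l_c . n).
    import numpy as np

    L = np.array([[0.0, 0.0, 1.0],          # light direction seen by the R channel
                  [0.7, 0.0, 0.714],        # ... by the G channel
                  [0.0, 0.7, 0.714]])       # ... by the B channel
    albedo = np.array([0.9, 0.6, 0.3])      # per-channel albedo for this region

    n_true = np.array([0.2, -0.1, 0.97])
    n_true /= np.linalg.norm(n_true)
    I = albedo * np.clip(L @ n_true, 0.0, None)   # simulated RGB observation

    # Invert the model: divide out the albedo, then solve L n = I / albedo.
    n_est = np.linalg.solve(L, I / albedo)
    n_est /= np.linalg.norm(n_est)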

    Unsupervised hyperspectral image segmentation of films: a hierarchical clustering-based approach

    Get PDF
    Hyperspectral imaging (HSI) has been applied extensively in recent years to cultural heritage (CH) analysis, conservation and digital restoration. However, efficient processing of the large datasets acquired remains challenging and is still under development. In this paper, we propose the hierarchical clustering algorithm (HCA) as an alternative machine learning approach to the most common practices, such as principal component analysis (PCA). HCA has shown its potential over the past decades for spectral data classification and segmentation in many other fields, maximizing the information extracted from a high-dimensional spectral dataset via the formation of an agglomerative hierarchical tree. To date, however, there has been very limited implementation of HCA in the field of cultural heritage. Data used in this experiment were acquired from real historic film samples with various degrees of degradation, using a custom-made push-broom VNIR hyperspectral camera (380–780 nm). With the proposed HCA workflow, multiple samples in the entire dataset were processed simultaneously, and degradation areas with distinctive characteristics were successfully segmented into clusters at various hierarchies. A range of algorithmic parameters was tested, including grid sizes, metrics and agglomeration methods, and the best combinations are proposed. This novel application of semi-automated, unsupervised HCA could provide a basis for future digital unfading and shows the potential to solve other CH problems such as pigment mapping.
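
    A minimal sketch of the HCA segmentation step, assuming SciPy is available: hyperspectral pixels are reshaped to a (pixels x bands) matrix, an agglomerative tree is built, and the tree is cut into a chosen number of clusters. The data cube, linkage method and cluster count are illustrative assumptions; the paper additionally explores grid sizes, metrics and agglomeration methods.

    # Minimal agglomerative segmentation of a hyperspectral cube with SciPy.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(3)
    cube = rng.random((40, 40, 81))              # stand-in 380-780 nm cube, 5 nm steps
    pixels = cube.reshape(-1, cube.shape[-1])    # (1600, 81)

    tree = linkage(pixels, method="ward", metric="euclidean")
    labels = fcluster(tree, t=5, criterion="maxclust")   # cut the tree into 5 clusters
    segmentation = labels.reshape(cube.shape[:2])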

    JERS-1 SAR and LANDSAT-5 TM image data fusion: An application approach for lithological mapping

    Get PDF
    Satellite image data fusion is a set of image-processing procedures used either to optimise imagery for visual photointerpretation or for automated thematic classification with a low error rate and high accuracy. Lithological mapping using remote sensing image data relies on the spectral and textural information of the rock units of the area to be mapped. These pieces of information can be derived from Landsat optical TM and JERS-1 SAR images respectively. Prior to extracting such information (spectral and textural) and fusing it, geometric image co-registration between the TM and the SAR, atmospheric correction of the TM, and SAR despeckling are required. In this thesis, an appropriate atmospheric model is developed and implemented, utilising the dark pixel subtraction method for atmospheric correction. For SAR despeckling, an efficient new method is also developed to test whether the SAR filter used removes the textural information. For image optimisation for visual photointerpretation, a new method of spectral coding of the six bands of the optical TM data is developed. The new spectral coding method is used to produce an efficient colour composite with high separability between the spectral classes, similar to that obtained when all six optical TM bands are used together. This spectrally coded colour composite is used as the spectral component, which is then fused with the textural component, represented by the despeckled JERS-1 SAR, using fusion tools including the colour transform and the PCT. The Grey Level Co-occurrence Matrix (GLCM) technique is used to build the textural dataset from the speckle-filtered JERS-1 SAR data, producing seven textural GLCM measures. For automated thematic mapping, using both the six TM spectral bands and the seven textural GLCM measures, a new classification method has been developed based on the Maximum Likelihood Classifier (MLC). The method, named sequential maximum likelihood classification, works efficiently by comparing the classified textural pixels, the classified spectral pixels, and the classified textural-spectral pixels, and provides a means of utilising both textural and spectral information for automated lithological mapping.
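
    As a sketch of the textural side of this fusion, and assuming scikit-image is available, the following computes a grey-level co-occurrence matrix on a stand-in (despeckled) SAR patch and derives a few standard GLCM measures; the measures shown are illustrative and not necessarily the seven used in the thesis.

    # Minimal GLCM texture-measure sketch with scikit-image on stand-in data.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(4)
    patch = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)   # quantised to 64 levels

    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=64, symmetric=True, normed=True)

    # Average each measure over the two offsets (distance/angle pairs).
    texture = {prop: graycoprops(glcm, prop).mean()
               for prop in ("contrast", "homogeneity", "energy", "correlation")}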

    Evaluation and optimal design of spectral sensitivities for digital color imaging

    Get PDF
    The quality of an image captured by a color imaging system primarily depends on three factors: the sensor spectral sensitivity, the illumination and the scene. While knowledge of the illumination is important, the sensitivity characteristics are critical to the success of imaging applications and must be optimally designed under practical constraints. The ultimate image quality is judged subjectively by the human visual system. This dissertation addresses the evaluation and optimal design of spectral sensitivity functions for digital color imaging devices. Color imaging fundamentals and device characterization are discussed first. For the evaluation of spectral sensitivity functions, this dissertation concentrates on imaging noise characteristics. Signal-independent and signal-dependent noise together form an imaging noise model, and the noise is propagated as the signal is processed. A new colorimetric quality metric, the unified measure of goodness (UMG), which addresses color accuracy and noise performance simultaneously, is introduced and compared with other available quality metrics. Through this comparison, UMG is designated as the primary evaluation metric. For the optimal design of spectral sensitivity functions, three generic approaches, optimization through enumerative evaluation, optimization of parameterized functions, and optimization of an additional channel, are analyzed for the case in which the filter fabrication process is unknown. Otherwise, a hierarchical design approach is introduced, which emphasizes the use of the primary metric while refining the initial optimization results through the application of multiple secondary metrics. Finally, the validity of UMG as a primary metric and of the hierarchical approach are experimentally tested and verified.
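
    A minimal sketch of the noise-propagation idea described above: per-channel sensor noise with a signal-independent (read) part and a signal-dependent (shot) part is propagated through a linear color transform as Sigma_out = M Sigma_in M^T. The transform matrix and noise parameters below are hypothetical, not values from the dissertation.

    # Propagate a simple sensor noise model through a linear color transform.
    import numpy as np

    M = np.array([[ 1.6, -0.4, -0.2],
                  [-0.3,  1.5, -0.2],
                  [ 0.0, -0.5,  1.5]])          # camera-to-colorimetric transform

    signal = np.array([0.40, 0.35, 0.25])       # mean channel responses (0..1)
    read_var = 1e-4                             # signal-independent variance
    shot_gain = 5e-4                            # signal-dependent variance = gain * signal

    sigma_in = np.diag(read_var + shot_gain * signal)   # per-channel, uncorrelated
    sigma_out = M @ sigma_in @ M.T                      # propagated covariance
    output_noise_std = np.sqrt(np.diag(sigma_out))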

    Efficient training procedures for multi-spectral demosaicing

    Get PDF
    The simultaneous acquisition of multi-spectral images on a single sensor can be performed efficiently by single-shot capture using a multi-spectral filter array. This paper focuses on the demosaicing of color and near-infrared bands and relies on a convolutional neural network (CNN). To train the deep learning model robustly and accurately, it is necessary to provide enough training data with sufficient variability. We focus on the design of an efficient training procedure by discovering an optimal training dataset. We propose two data selection strategies, motivated by slightly different concepts. The general term used for the proposed models trained with data selection is data selection-based multi-spectral demosaicing (DSMD). The first is clustering-based data selection (DSMD-C), with the goal of discovering a representative subset with high variance so as to train a robust model. The second is adaptive data selection (DSMD-A), a self-guided approach that selects new data based on the current model accuracy. We performed a controlled experimental evaluation of the proposed training strategies, and the results show that a careful selection of data benefits both the speed and the accuracy of training, while still achieving high reconstruction accuracy with a lightweight model.
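
    A minimal sketch in the spirit of the clustering-based selection (DSMD-C), using k-means as a stand-in clustering method since the paper's exact procedure is not detailed here: patch descriptors are clustered, and the patch nearest each centroid is kept as a representative training example. The feature vectors and cluster count are hypothetical.

    # Clustering-based training-data selection sketch (k-means as a stand-in).
    import numpy as np

    def kmeans_select(features, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = features[rng.choice(len(features), size=k, replace=False)]
        for _ in range(iters):
            # Assign each patch to its nearest center, then update the centers.
            assign = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                members = features[assign == j]
                if len(members):
                    centers[j] = members.mean(axis=0)
        # Keep the actual patch nearest each final centroid as a representative.
        dists = ((features[:, None] - centers[None]) ** 2).sum(-1)
        return np.unique(np.argmin(dists, axis=0))

    rng = np.random.default_rng(5)
    patch_features = rng.random((1000, 32))        # hypothetical per-patch descriptors
    selected = kmeans_select(patch_features, k=50) # indices of representative patches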