
    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. To establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
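
    The vector-space formulation the survey refers to can be illustrated with a toy computation: a tristimulus value is just an inner product between a reflectance vector and an illuminant-weighted color-matching vector. All curves below are illustrative stand-ins, not real CIE data.

```python
import numpy as np

# Toy sketch: a reflectance spectrum r sampled at n wavelengths is a
# vector in R^n, and a tristimulus value is an inner product with an
# illuminant-weighted color-matching vector. All curves here are
# illustrative stand-ins, not real CIE data.
n = 31                                    # 400-700 nm at 10 nm steps
wavelengths = np.linspace(400, 700, n)

illuminant = np.ones(n)                   # flat "equal-energy" illuminant
ybar = np.exp(-0.5 * ((wavelengths - 555) / 40) ** 2)   # Gaussian stand-in

r = np.full(n, 0.5)                       # 50% flat (gray) reflectance

a = illuminant * ybar                     # illuminant-weighted sensitivity
Y = (a @ r) / (a @ np.ones(n))            # normalized so ideal white gives 1
print(round(Y, 3))                        # 0.5 for any flat 50% reflectance
```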

    A Neural Network Architecture for Figure-ground Separation of Connected Scenic Figures

    A neural network model, called an FBF network, is proposed for automatic parallel separation of multiple image figures from each other and their backgrounds in noisy grayscale or multi-colored images. The figures can then be processed in parallel by an array of self-organizing Adaptive Resonance Theory (ART) neural networks for automatic target recognition. An FBF network can automatically separate the disconnected but interleaved spirals that Minsky and Papert introduced in their book Perceptrons. The network's design also clarifies why humans cannot rapidly separate interleaved spirals, yet can rapidly detect conjunctions of disparity and color, or of disparity and motion, that distinguish target figures from surrounding distractors. Figure-ground separation is accomplished by iterating operations of a Feature Contour System (FCS) and a Boundary Contour System (BCS), derived from an analysis of biological vision, in the order FCS-BCS-FCS, hence the term FBF. The FCS operations include the use of nonlinear shunting networks to compensate for variable illumination and nonlinear diffusion networks to control filling-in. A key new feature of an FBF network is the use of filling-in for figure-ground separation. The BCS operations include oriented filters joined to competitive and cooperative interactions designed to detect, regularize, and complete boundaries in up to 50 percent noise, while suppressing the noise. A modified CORT-X filter is described which uses both on-cells and off-cells to generate a boundary segmentation from a noisy image. Air Force Office of Scientific Research (90-0175); Army Research Office (DAAL-03-88-K0088); Defense Advanced Research Projects Agency (90-0083); Hughes Research Laboratories (S1-804481-D, S1-903136); American Society for Engineering Education
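
    The nonlinear shunting networks mentioned above have a simple feedforward steady state that normalizes activity divisively. A minimal sketch (generic parameters A and B, not the paper's values) shows how the response pattern becomes largely invariant to overall illumination level:

```python
import numpy as np

# Feedforward shunting (divisive) normalization: the steady state of the
# membrane equation dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{j!=i} I_j
# is x_i = B * I_i / (A + sum_j I_j). A and B are generic parameters,
# not values from the paper.
def shunting_steady_state(I, A=1.0, B=1.0):
    return B * I / (A + I.sum())

I = np.array([2.0, 4.0, 6.0])
x1 = shunting_steady_state(I)
x2 = shunting_steady_state(10 * I)   # same pattern, 10x the illumination

# Once the total input dominates A, responses reflect relative contrasts,
# compensating for the overall illumination level.
print(np.round(x1, 3), np.round(x2, 3))
```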

    Evaluation and optimal design of spectral sensitivities for digital color imaging

    The quality of an image captured by a color imaging system depends primarily on three factors: sensor spectral sensitivity, illumination, and scene. While knowledge of the illumination is important, the sensitivity characteristics are critical to the success of imaging applications and need to be optimally designed under practical constraints. The ultimate image quality is judged subjectively by the human visual system. This dissertation addresses the evaluation and optimal design of spectral sensitivity functions for digital color imaging devices. Color imaging fundamentals and device characterization are discussed first. For the evaluation of spectral sensitivity functions, this dissertation concentrates on imaging noise characteristics. Signal-independent and signal-dependent noise together form an imaging noise model, and the noise propagates as the signal is processed. A new colorimetric quality metric, the unified measure of goodness (UMG), which addresses color accuracy and noise performance simultaneously, is introduced and compared with other available quality metrics. Through this comparison, UMG is designated as the primary evaluation metric. For the optimal design of spectral sensitivity functions, three generic approaches, optimization through enumerative evaluation, optimization of parameterized functions, and optimization of an additional channel, are analyzed for the case where the filter fabrication process is unknown. Otherwise, a hierarchical design approach is introduced, which emphasizes the use of the primary metric while refining the initial optimization results through the application of multiple secondary metrics. Finally, the validity of UMG as a primary metric and the hierarchical approach are experimentally tested and verified.
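
    The noise-propagation idea can be sketched with a toy linear model: if the sensor noise covariance S combines a signal-independent floor and a signal-dependent shot term, a color correction matrix M maps it to M @ S @ M.T, so matrices with large off-diagonal entries amplify noise even while improving color accuracy. The matrix and noise parameters below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

# Toy propagation of imaging noise through a linear color correction:
# sensor noise covariance S combines a signal-independent (read) floor
# and a signal-dependent (shot) term; after correction by M the
# covariance is M @ S @ M.T. M and the noise parameters are illustrative.
signal = np.array([0.2, 0.5, 0.8])         # mean RGB exposure
sigma_read = 0.01                           # signal-independent noise
k_shot = 0.002                              # signal-dependent scale
S = np.diag(sigma_read ** 2 + k_shot * signal)

M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.5,  1.5]])          # illustrative correction matrix

S_out = M @ S @ M.T
# Large off-diagonal terms amplify noise even while improving color
# accuracy -- the trade-off a joint metric like UMG must capture.
print(np.trace(S_out) > np.trace(S))        # True
```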

    Hyperspectral Imaging: Calibration and Applications with Natural Scenes


    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1 --> MT and V1 --> V2 --> MT are made, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. 
Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions. Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)
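
    The global aperture problem the model addresses can be stated concretely: each local oriented edge constrains only the component of velocity along its normal, n_i . v = c_i, so measurements from several orientations are needed to pin down the true motion. A minimal least-squares "intersection of constraints" sketch (not the BCS's cooperative grouping mechanism itself) illustrates this:

```python
import numpy as np

# The aperture problem: a local oriented edge measures only the velocity
# component along its normal, n_i . v = c_i. Constraints from edges of
# different orientations jointly determine the global motion; here a
# least-squares "intersection of constraints" recovers it (the BCS itself
# uses cooperative grouping rather than explicit algebra).
true_v = np.array([3.0, 1.0])
angles = np.deg2rad([0, 45, 90, 135])
N = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # edge normals
c = N @ true_v                                           # normal speeds

v_hat, *_ = np.linalg.lstsq(N, c, rcond=None)
print(np.round(v_hat, 6))   # recovers [3. 1.]
```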

    Spectral Ray Tracing for Generation of Spatial Color Constancy Training Data

    Computational color constancy is a fundamental step in digital cameras that estimates the chromaticity of the illumination. Most automatic white balance (AWB) algorithms that perform computational color constancy assume that there is a single illuminant in the scene. This widely held assumption is frequently violated in the real world. It could be argued that the main reason for the single-illuminant assumption is the limited number of available mixed-illuminant datasets and the laborious annotation process. Annotating mixed-illuminant images is orders of magnitude more laborious than the single-illuminant case, due to the spatial complexity that requires pixel-wise ground-truth illumination chromaticity in various ratios of the existing illuminants. Spectral ray tracing is a 3D rendering method for creating physically realistic images and animations using spectral representations of materials and light sources rather than a trichromatic representation such as red-green-blue (RGB). In this thesis, this physically correct image signal generation method is used to create a spatially varying mixed-illuminant image dataset with pixel-wise ground-truth illumination chromaticity. In complex 3D scenes, materials are defined based on a database of real-world spectral reflectance measurements, and light sources are defined based on spectral power distribution definitions released by the International Commission on Illumination (CIE). Rendering is done using the Blender Cycles rendering engine in the visible spectrum from 395 nm to 705 nm in equal 5 nm bins, resulting in a 63-channel full-spectrum image. The resulting full-spectrum images can be turned into the raw response of any camera as long as the spectral sensitivity of the camera module is known. This is a big advantage of spectral ray tracing, since color constancy is largely camera-module-dependent.
    Pixel-wise white balance gain is calculated through the linear average of illuminant chromaticities, weighted by their contribution to the mixed-illuminant raw image. The raw image signal and the pixel-wise white balance gain are the fundamental components of a spatial color constancy dataset. This study implements an image generation pipeline that starts from the spectral definitions of illuminants and materials and ends with an sRGB image created from a 3D scene. Six different 3D Blender scenes are created, each with 7 virtual cameras located throughout the scene. 406 single-illuminant and 1015 spatially varying mixed-illuminant images are created, including their pixel-wise ground-truth illumination chromaticity. The created dataset can be used to improve mixed-illumination color constancy algorithms and paves the way for further research and testing in the field.
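
    The camera-agnostic step described above (turning a full-spectrum render into any camera's raw response) reduces to a matrix product between the camera's spectral sensitivities and the 63-band spectrum. The Gaussian sensitivities below are toy assumptions, not a real camera module:

```python
import numpy as np

# Turning a 63-band full-spectrum pixel (395-705 nm, 5 nm bins) into a
# camera's raw RGB response is a matrix product with that camera's
# spectral sensitivities. The Gaussian sensitivities here are toy
# assumptions, not a real camera module.
wl = np.arange(395, 710, 5)                # 63 band centers
assert wl.size == 63

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

S = np.stack([gaussian(600, 30),           # toy R sensitivity
              gaussian(540, 30),           # toy G sensitivity
              gaussian(460, 25)])          # toy B sensitivity

spectrum = np.full(63, 0.8)                # flat spectral radiance at a pixel
raw = S @ spectrum                         # camera raw RGB response

# White balance gain per channel, normalized to green as the reference.
wb_gain = raw[1] / raw
print(np.round(raw, 3), np.round(wb_gain, 3))
```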

    Comparative study of spectral reflectance estimation based on broad-band imaging systems

    We have been practicing spectral color estimation for museum artwork imaging. We have had success using both narrow-band imaging based on a liquid crystal tunable filter (LCTF) and various broad-band imaging approaches using the same monochrome digital camera system. Details of our spectral color imaging system, imaging procedures, and the performance of the spectral estimation methods used can be found in our previous technical reports.1,2 In previous reports we focused on methods of reconstruction from narrow-band images using the LCTF, while we reported only preliminary analyses of reconstruction from wide-band images using six glass-filtered images and a red-green-blue filter combination with and without a light-blue Wratten filter. There are practical advantages to using commercially available RGB cameras with this method if such a broad-band image acquisition system has sufficient estimation accuracy. We previously captured two sets of six broad-band images obtained with glass filters mounted in a filter wheel, with and without an extra absorption filter.1 In this report, we expand the analyses of spectral estimation using wide-band images by switching the red filter to a long-red filter in order to test the concept of using long-red, green, and blue channels of the camera combined with and without the light-blue absorption filter. The performance of this new configuration is compared to imaging using all six filters of the filter wheel, as well as the configuration using six channels derived from red-green-blue filters with and without the absorption filter.
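
    Broad-band spectral estimation of the kind compared here is commonly posed as learning a linear map from channel responses back to reflectance. A minimal least-squares sketch with hypothetical Gaussian channel sensitivities (not our actual filter set) is:

```python
import numpy as np

# Sketch of broad-band spectral estimation: given camera responses
# d = A @ r from broad-band channel sensitivities A, a linear operator
# mapping responses back to reflectance is learned from training spectra
# (plain least-squares regression here; related methods appear in the
# literature).
rng = np.random.default_rng(1)
n_wl, n_ch = 31, 6                       # 31 wavelengths, 6 broad-band channels

wl = np.linspace(400, 700, n_wl)
centers = np.linspace(420, 680, n_ch)
A = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 40) ** 2)  # 6 x 31

# Smooth, low-dimensional training reflectances (real surfaces are smooth)
basis = np.stack([np.ones(n_wl), wl / 700, (wl / 700) ** 2]) * 0.33
R_train = rng.random((100, 3)) @ basis                           # 100 x 31

D_train = R_train @ A.T                                          # responses
W = np.linalg.lstsq(D_train, R_train, rcond=None)[0]             # 6 x 31 map

r_true = np.array([0.6, 0.9, 0.3]) @ basis        # an unseen test spectrum
r_est = (A @ r_true) @ W
print(float(np.abs(r_est - r_true).mean()))       # near zero for in-model spectra
```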

    Multichannel analysis of object-color spectra

    An optimization program was written to determine a set of channel responses for measuring object-color spectra. The program incorporated the Complex method of optimization to search the feasible space. The optimum set was determined based on minimizing the number of channels, the average color difference (ΔE*ab) over a set of 116 colors and three illuminants, and the average reflectance-factor difference between the actual and estimated spectra. It was expected that it would be possible to identify a system falling between current spectrophotometers and the ideal but unrealizable system whose responses are the three CIE standard color-matching functions weighted by the three illuminants. It was found that even with as few as six channels, each a Gaussian with a specific mean and bandwidth, reasonable performance could be attained.
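
    The Complex method referenced above is Box's constrained, simplex-like direct search: keep k feasible points, reflect the worst through the centroid of the others by a factor alpha > 1, and retreat toward the centroid when the trial point is worse or infeasible. A minimal sketch on a toy two-parameter (mean, bandwidth) objective, with illustrative settings rather than the original program's, is:

```python
import numpy as np

# Minimal sketch of Box's Complex method: maintain k feasible points,
# reflect the worst through the centroid of the rest by alpha > 1, and
# halve the step back toward the centroid while the trial point is worse
# or out of bounds. Parameters (k, alpha, iteration counts) are
# illustrative choices, not the original program's settings.
def complex_method(f, lo, hi, k=6, alpha=1.3, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    P = lo + rng.random((k, lo.size)) * (hi - lo)    # feasible start points
    for _ in range(iters):
        vals = np.array([f(p) for p in P])
        w = int(np.argmax(vals))                     # index of worst point
        cen = (P.sum(axis=0) - P[w]) / (k - 1)       # centroid of the others
        x = np.clip(cen + alpha * (cen - P[w]), lo, hi)
        for _ in range(30):                          # retreat toward centroid
            if f(x) < vals[w]:
                break
            x = np.clip((x + cen) / 2, lo, hi)
        P[w] = x
    return min(P, key=f)

# Toy objective: recover a target (mean, bandwidth) for one Gaussian channel.
target = np.array([550.0, 40.0])
f = lambda p: float(np.sum((p - target) ** 2))
best = complex_method(f, lo=np.array([400.0, 10.0]), hi=np.array([700.0, 100.0]))
print(np.round(best, 1))
```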