238,463 research outputs found

    The synthesis and analysis of color images

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and the spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks; examples include the color balancing of color images and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
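    To make the linear-basis idea above concrete, here is a minimal numpy sketch of synthesizing a pixel color from low-dimensional reflectance and illuminant descriptors; the basis functions, weights and sensor sensitivities are invented placeholders, not the paper's data.

```python
# Minimal sketch of a linear-basis color image model (not the authors' code).
# Reflectance and illuminant are hypothetical low-dimensional expansions; the
# sensor sensitivities are placeholder Gaussians standing in for RGB channels.
import numpy as np

wl = np.linspace(400, 700, 31)                                # visible wavelengths, nm

# Hypothetical 3-term bases for surface reflectance and illuminant SPD.
refl_basis = np.stack([np.ones_like(wl),
                       (wl - 550) / 150,
                       ((wl - 550) / 150) ** 2], axis=1)      # shape (31, 3)
illum_basis = refl_basis.copy()                               # reused for brevity

refl_weights  = np.array([0.4, 0.1, -0.05])   # surface descriptor
illum_weights = np.array([1.0, 0.2,  0.0])    # ambient-light descriptor

reflectance  = refl_basis @ refl_weights       # s(lambda)
illuminant   = illum_basis @ illum_weights     # e(lambda)
color_signal = reflectance * illuminant        # light reaching the camera

# Placeholder sensor sensitivities: Gaussians centred on long/middle/short wavelengths.
centers = np.array([600.0, 550.0, 450.0])
sensors = np.exp(-0.5 * ((wl[:, None] - centers) / 40.0) ** 2)  # shape (31, 3)

rgb = sensors.T @ color_signal                 # synthesis: basis weights -> pixel value
print(rgb)
```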

    Hubble Space Telescope Pixel Analysis of the Interacting Face-on Spiral Galaxy NGC 5194 (M51A)

    A pixel analysis is carried out on the interacting face-on spiral galaxy NGC 5194 (M51A), using the HST/ACS images in the F435W, F555W and F814W (BVI) bands. After 4x4 binning of the HST/ACS images to secure a sufficient signal-to-noise ratio for each pixel, we derive several quantities describing the pixel color-magnitude diagram (pCMD) of NGC 5194: blue/red color cut, red pixel sequence parameters, blue pixel sequence parameters and blue-to-red pixel ratio. The red sequence pixels are mostly older than 1 Gyr, while the blue sequence pixels are mostly younger than 1 Gyr, in their luminosity-weighted mean stellar ages. The color variation in the red pixel sequence from V = 20 mag arcsec^(-2) to V = 17 mag arcsec^(-2) corresponds to a metallicity variation of \Delta[Fe/H] ~ 2 or an optical depth variation of \Delta\tau_V ~ 4 by dust, but the actual sequence is thought to originate from a combination of the two effects. At V < 20 mag arcsec^(-2), the color variation in the blue pixel sequence corresponds to an age variation from 5 Myr to 300 Myr under the assumption of solar metallicity and \tau_V = 1. To investigate the spatial distributions of stellar populations, we divide the pixel stellar populations using the pixel color-color diagram and population synthesis models. As a result, we find that the pixel population distributions across the spiral arms agree with a compression process driven by spiral density waves: dense dust \rightarrow newly formed stars. The tidal interaction between NGC 5194 and NGC 5195 appears to enhance the star formation at the tidal bridge connecting the two galaxies. Finally, the pixels corresponding to the central active galactic nucleus (AGN) area of NGC 5194 (R ~ 100 pc) form a tight sequence at the bright end of the pCMD, which may be a photometric indicator of AGN properties.
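    As a rough illustration of the binning and pCMD construction described in the abstract (not the authors' pipeline), the sketch below sums flux in 4x4 pixel blocks and turns two bands into one (color, surface brightness) point per binned pixel; the input frames, zero points and pixel scale are assumed placeholders.

```python
# Illustrative 4x4 pixel binning and pixel colour-magnitude diagram (pCMD) construction.
# Frames, zero points and pixel scale are placeholders, not values from the paper.
import numpy as np

def rebin4x4(img):
    """Sum flux in non-overlapping 4x4 pixel blocks."""
    h, w = img.shape
    return img[:h - h % 4, :w - w % 4].reshape(h // 4, 4, w // 4, 4).sum(axis=(1, 3))

rng = np.random.default_rng(0)
flux_B = rng.lognormal(mean=1.0, sigma=0.5, size=(512, 512))   # stand-in F435W frame
flux_V = rng.lognormal(mean=1.2, sigma=0.5, size=(512, 512))   # stand-in F555W frame

zp_B, zp_V = 25.77, 25.72           # placeholder photometric zero points
pix_area = (4 * 0.05) ** 2          # binned-pixel area in arcsec^2 (assumed 0.05"/pix)

mu_B = zp_B - 2.5 * np.log10(rebin4x4(flux_B) / pix_area)      # mag arcsec^-2
mu_V = zp_V - 2.5 * np.log10(rebin4x4(flux_V) / pix_area)

# Each binned pixel contributes one (colour, surface brightness) point to the pCMD.
pcmd_color, pcmd_mag = mu_B - mu_V, mu_V
```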

    Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components of automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for single-image facial caricature synthesis is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated efficiently from these labels. Second, an energy function for the test image is constructed from the estimated hair location priors and the hair color likelihood. This energy function is optimized with the graph-cuts technique to obtain an initial hair region. Finally, the K-means algorithm and image post-processing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system, and experiments show that with the proposed hair segmentation the synthesized facial caricatures are vivid and satisfying.
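    A schematic sketch of the unary energy terms and the K-means refinement step mentioned above; the prior maps, the color likelihood and the percentile threshold are placeholder assumptions, and the graph-cut optimization itself is omitted.

```python
# Rough sketch: location prior + colour likelihood -> unary energy -> initial mask,
# then a K-means refinement of the initial hair region (graph cuts omitted).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
h, w = 120, 100
image = rng.random((h, w, 3))                  # stand-in face image, RGB in [0, 1]

position_prior   = rng.random((h, w))          # P(hair | pixel location), from training labels
color_likelihood = rng.random((h, w))          # P(colour | hair), e.g. from a colour model

# Unary energy: low where both the location prior and the colour likelihood favour hair.
eps = 1e-6
energy_hair = -np.log(position_prior * color_likelihood + eps)

# Initial hair mask; in the paper this comes from graph cuts over unary + pairwise terms.
initial_mask = energy_hair < np.percentile(energy_hair, 30)

# Refine: cluster the colours inside the initial region and keep the dominant cluster.
hair_pixels = image[initial_mask]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(hair_pixels)
dominant = np.bincount(labels).argmax()

final_mask = np.zeros_like(initial_mask)
final_mask[initial_mask] = (labels == dominant)
```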

    A Rich Cluster of Galaxies Near the Quasar B2 1335+28 at z=1.1: Color Distribution and Star-Formation Properties

    We previously reported a significant clustering of red galaxies (R-K = 3.5--6) around the radio-loud quasar B2 1335+28 at z = 1.086. In this paper, we establish the existence of a rich cluster at the quasar redshift and study the properties of the cluster galaxies through further detailed analysis of the photometric data. The color distribution of the galaxies in the cluster is quite broad, and the fraction of blue galaxies (\sim 70%) is much larger than in intermediate-redshift clusters. Using evolutionary synthesis models, we show that this color distribution can be explained by galaxies with various amounts of star-formation activity mixed with old stellar populations. Notably, there are about a dozen galaxies which show very red optical-NIR colors but also a significant UV excess with respect to passive-evolution models. They can be interpreted as old early-type galaxies with a small amount of ongoing star formation. The fact that the UV-excess red galaxies are more abundant than the quiescent red ones suggests that a large fraction of old galaxies in this cluster are still forming stars to some extent. However, a sequence of quiescent red galaxies is clearly identified on the R-K versus K color-magnitude (C-M) diagram. The slope and zero point of their C-M relation appear to be consistent with those expected for the precursors of the C-M relation of present-day cluster ellipticals when observed at z = 1.1. We estimate the Abell richness class of the cluster to be R \sim 1. New X-ray data presented here place an upper limit of L_x < 2 \times 10^{44} erg s^{-1} on the cluster luminosity. Inspection of the wider optical images reveals some lumpy structure, suggesting that the whole system is still dynamically young.
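    For readers unfamiliar with the C-M relation measurement mentioned above, here is a toy example of fitting a red-sequence slope and zero point with a simple linear fit; the photometry is synthetic, not the B2 1335+28 data.

```python
# Toy fit of a red-sequence colour-magnitude (C-M) relation; synthetic photometry only.
import numpy as np

rng = np.random.default_rng(2)
K  = rng.uniform(17.0, 20.5, 60)                                # K magnitudes of red-sequence candidates
RK = 5.8 - 0.08 * (K - 18.0) + rng.normal(0, 0.1, K.size)       # assumed slope, zero point and scatter

slope, zero_point = np.polyfit(K - 18.0, RK, 1)                 # R-K = slope*(K-18) + zero_point
print(f"C-M slope ~ {slope:.3f}, zero point at K = 18 ~ {zero_point:.2f}")
```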

    Color segmentation and neural networks for automatic graphic relief of the state of conservation of artworks

    This paper proposes a semi-automated methodology based on a sequence of analysis processes performed on multispectral images of artworks, aimed at the extraction of vector maps describing their state of conservation. The graphic relief of the artwork is the main instrument for communicating and synthesizing the information and data acquired on cultural heritage during restoration. Despite the widespread use of informatics tools, these operations are currently still highly subjective and require long execution times and high costs. In some cases, manual execution is particularly complicated and almost impossible to carry out. The methodology proposed here allows supervised, partial automation of these procedures, avoids approximations, and drastically reduces the working time, since it produces a vector drawing by extracting the areas directly from the raster images. We propose a procedure for color segmentation based on principal/independent component analysis (PCA/ICA) and SOM neural networks and, as a case study, present the results obtained on a set of multispectral reproductions of a painting on canvas. (Annamaria Amura, Anna Tonazzini, Emanuele Salerno, Stefano Pagnotta, Vincenzo Palleschi)
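    A compact sketch of the color-segmentation stage described above: PCA decorrelates the multispectral bands, then pixels are clustered without supervision. Here scikit-learn's KMeans stands in for the SOM network used in the paper, and the image cube is a random placeholder.

```python
# Schematic colour segmentation of a multispectral cube: PCA for decorrelation,
# then unsupervised clustering (KMeans as a stand-in for the paper's SOM stage).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
h, w, bands = 200, 150, 8
cube = rng.random((h, w, bands))               # stand-in multispectral image cube

pixels = cube.reshape(-1, bands)               # one row per pixel
components = PCA(n_components=3).fit_transform(pixels)   # decorrelated per-pixel features

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(components)
segmentation = labels.reshape(h, w)            # class map; each class maps to one vector layer
```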

    Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field

    In this paper, we address the problem of simultaneous relighting and novel view synthesis of a complex scene from multi-view images with a limited number of light sources. We propose an analysis-synthesis approach called Relit-NeuLF. Following the recent neural 4D light field network (NeuLF), Relit-NeuLF first leverages a two-plane light field representation to parameterize each ray in a 4D coordinate system, enabling efficient learning and inference. Then, we recover the spatially-varying bidirectional reflectance distribution function (SVBRDF) of a 3D scene in a self-supervised manner. A DecomposeNet learns to map each ray to its SVBRDF components: albedo, normal, and roughness. Based on the decomposed BRDF components and conditioning light directions, a RenderNet learns to synthesize the color of the ray. To self-supervise the SVBRDF decomposition, we encourage the predicted ray color to be close to the physically-based rendering result using the microfacet model. Comprehensive experiments demonstrate that the proposed method is efficient and effective on both synthetic data and real-world human face data, and outperforms state-of-the-art results. Our code is publicly available at https://github.com/oppo-us-research/RelitNeuLF
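    The two-plane ray parameterization mentioned above can be written in a few lines; the sketch below intersects a ray with two parallel planes to obtain the 4D (u, v, s, t) coordinates that networks such as DecomposeNet and RenderNet would consume. Plane positions and the example ray are arbitrary assumptions, not values from the paper.

```python
# Minimal two-plane (u, v, s, t) ray parameterisation sketch.
import numpy as np

def two_plane_coords(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with the planes z = z_uv and z = z_st and return (u, v, s, t)."""
    direction = direction / np.linalg.norm(direction)
    t_uv = (z_uv - origin[2]) / direction[2]
    t_st = (z_st - origin[2]) / direction[2]
    u, v = (origin + t_uv * direction)[:2]
    s, t = (origin + t_st * direction)[:2]
    return np.array([u, v, s, t])              # 4D ray coordinate fed to the networks

ray_4d = two_plane_coords(origin=np.array([0.2, -0.1, -2.0]),
                          direction=np.array([0.05, 0.02, 1.0]))
print(ray_4d)
```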

    STEFANN: Scene Text Editor using Font Adaptive Neural Network

    Textual information in a captured scene plays an important role in scene interpretation and decision making. Though there exist methods that can successfully detect and interpret complex text regions present in a scene, to the best of our knowledge, there is no significant prior work that aims to modify the textual information in an image. The ability to edit text directly on images has several advantages, including error correction, text restoration and image reusability. In this paper, we propose a method to modify text in an image at the character level. We approach the problem in two stages. First, the unobserved (target) character is generated from the observed (source) character being modified. We propose two different neural network architectures: (a) FANnet, to achieve structural consistency with the source font, and (b) Colornet, to preserve the source color. Next, we replace the source character with the generated character, maintaining both geometric and visual consistency with neighboring characters. Our method works as a unified platform for modifying text in images. We present the effectiveness of our method on the COCO-Text and ICDAR datasets, both qualitatively and quantitatively.
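    A toy illustration of the two-stage editing idea: a structure step that produces the target glyph (a random placeholder standing in for FANnet) and a color step that transfers the source character's stroke color (standing in for Colornet). All arrays are synthetic stand-ins, not the paper's pipeline.

```python
# Toy two-stage character edit: structure (placeholder glyph) then colour transfer.
import numpy as np

rng = np.random.default_rng(4)
source_crop = rng.random((32, 32, 3))          # stand-in crop of the observed character
source_mask = rng.random((32, 32)) > 0.6       # stand-in binary mask of its strokes

def generate_target_glyph(shape=(32, 32)):
    """Placeholder for FANnet: would return the target character drawn in the source font."""
    return rng.random(shape) > 0.6

target_mask = generate_target_glyph()

# Colour step: paint the generated glyph with the mean stroke colour of the source character.
stroke_color = source_crop[source_mask].mean(axis=0)
edited_crop = np.ones_like(source_crop)        # white background
edited_crop[target_mask] = stroke_color        # aims at geometric and visual consistency
```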