
    Fully-automatic inverse tone mapping algorithm based on dynamic mid-level tone mapping

    High Dynamic Range (HDR) displays can show images with higher color contrast and peak luminance than common Low Dynamic Range (LDR) displays. However, most existing video content is recorded and/or graded in LDR format. To show LDR content on HDR displays, it must be up-scaled using a so-called inverse tone mapping algorithm. Several techniques for inverse tone mapping have been proposed in recent years, ranging from simple approaches based on global and local operators to more advanced algorithms such as neural networks. Drawbacks of existing inverse tone mapping techniques include the need for human intervention, the high computation time of the more advanced algorithms, limited peak brightness, and the failure to preserve artistic intent. In this paper, we propose a fully-automatic inverse tone mapping operator based on mid-level mapping that is capable of real-time video processing. Our proposed algorithm expands LDR images into HDR images with peak brightness over 1000 nits while preserving the artistic intent inherent to the HDR domain. We assessed our results using the full-reference objective quality metrics HDR-VDP-2.2 and DRIM, and by carrying out a subjective pair-wise comparison experiment. We compared our results with those obtained with the most recent methods in the literature. Experimental results demonstrate that our proposed method outperforms the current state of the art in simple inverse tone mapping methods, and its performance is similar to that of more complex and time-consuming advanced techniques.
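The expansion step described above can be illustrated with a deliberately simple global operator. The sketch below is a hypothetical baseline (a fixed power-law expansion to a 1000-nit peak), not the paper's mid-level mapping algorithm:

```python
import numpy as np

def expand_ldr(ldr, peak_nits=1000.0, gamma=2.4):
    """Naive global inverse tone mapping: map normalized LDR values in
    [0, 1] to absolute luminance with a power-law expansion.
    Illustrative baseline only, not the paper's mid-level operator."""
    ldr = np.clip(np.asarray(ldr, dtype=np.float64), 0.0, 1.0)
    return peak_nits * ldr ** gamma

# Example: three LDR code values expanded toward a 1000-nit peak.
frame = np.array([0.0, 0.5, 1.0])
hdr = expand_ldr(frame)
```

A global power law like this preserves ordering and black level but, unlike the paper's operator, makes no attempt to keep mid-tones at their graded intent.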

    The alternating least squares technique for nonuniform intensity color correction

    Color correction involves mapping device RGBs to display counterparts or to corresponding XYZs. A popular methodology is to take an image of a color chart and then solve for the best 3 × 3 matrix that maps the RGBs to the corresponding known XYZs. However, this approach fails at times when the intensity of the light varies across the chart. This variation needs to be removed before estimating the correction matrix. This is typically achieved by acquiring an image of a uniform gray chart in the same location, and then dividing the color checker image by the gray-chart image. Of course, taking images of two charts doubles the complexity of color correction. In this article, we present an alternative color correction algorithm that simultaneously estimates the intensity variation and the 3 × 3 transformation matrix from a single image of a color chart. We show that the color correction problem, that is, finding the 3 × 3 correction matrix, can be solved using a simple alternating least-squares procedure. Experiments validate our approach. © 2014 Wiley Periodicals, Inc. Col Res Appl, 40, 232–242, 201
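The alternating procedure can be sketched as follows. The function below is an illustrative implementation of the general ALS idea (per-patch intensity scalars d and a 3 × 3 matrix M, with d normalized to remove the joint scale ambiguity), not the authors' exact formulation:

```python
import numpy as np

def als_color_correct(rgb, xyz, iters=50):
    """Alternating least squares: jointly estimate per-patch intensity
    scalars d and a 3x3 matrix M so that diag(d) @ rgb @ M ~= xyz.
    rgb, xyz: (n, 3) arrays of measured RGBs and reference XYZs."""
    n = rgb.shape[0]
    d = np.ones(n)
    for _ in range(iters):
        # Step 1: fix d, solve for M in ordinary least squares.
        M, *_ = np.linalg.lstsq(d[:, None] * rgb, xyz, rcond=None)
        # Step 2: fix M, solve each intensity scalar in closed form.
        pred = rgb @ M
        d = np.einsum('ij,ij->i', pred, xyz) / np.einsum('ij,ij->i', pred, pred)
        d /= d.mean()  # remove the global scale ambiguity between d and M
    return d, M
```

Each step is a convex least-squares problem, so the residual decreases monotonically; only one chart image is needed, as in the article.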

    A Novel Gaussian Extrapolation Approach for 2D Gel Electrophoresis Saturated Protein Spots

    Analysis of images obtained from two-dimensional gel electrophoresis (2D-GE) is a topic of utmost importance in bioinformatics research, since currently available commercial and academic software has proven to be neither completely effective nor fully automatic, often requiring manual revision and refinement of computer-generated matches. In this work, we present an effective technique for the detection and reconstruction of over-saturated protein spots. First, the algorithm reveals overexposed areas, where spots may be truncated, and plateau regions caused by smeared and overlapping spots. Next, it reconstructs the correct distribution of pixel values in these overexposed areas and plateau regions, using a two-dimensional least-squares fit based on a generalized Gaussian distribution. Pixel correction in saturated and smeared spots allows more accurate quantification, providing more reliable image analysis results. The method is validated on highly exposed 2D-GE images, comparing reconstructed spots with the corresponding non-saturated image and demonstrating that the algorithm enables correct spot quantification.
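The reconstruction step can be sketched in one dimension: a Gaussian is a parabola in log space, so ordinary least squares on the unsaturated samples recovers the spot model, which is then extrapolated into the clipped region. This simplified sketch fixes the shape parameter to 2 (a plain Gaussian), whereas the paper fits a 2-D generalized Gaussian:

```python
import numpy as np

def reconstruct_saturated(profile, sat_level):
    """Reconstruct clipped values of a 1-D spot intensity profile by
    fitting a Gaussian to the unsaturated samples. In log space the
    Gaussian is a parabola, so a least-squares polyfit suffices."""
    x = np.arange(profile.size, dtype=np.float64)
    ok = profile < sat_level                        # unsaturated samples
    coef = np.polyfit(x[ok], np.log(profile[ok]), 2)  # log G(x) = ax^2 + bx + c
    fitted = np.exp(np.polyval(coef, x))
    out = profile.astype(np.float64)
    out[~ok] = fitted[~ok]  # extrapolate only inside the clipped region
    return out
```

The same log-space linearization extends to two dimensions with a bivariate quadratic, which is one way the paper's 2-D fit can be initialized.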

    Contrast Enhancement of Brightness-Distorted Images by Improved Adaptive Gamma Correction

    As an efficient image contrast enhancement (CE) tool, adaptive gamma correction (AGC) was previously proposed by relating the gamma parameter to the cumulative distribution function (CDF) of the pixel gray levels within an image. AGC deals well with most dimmed images, but fails for globally bright images and for dimmed images with local bright regions. These two categories of brightness-distorted images are common in real scenarios, arising, for example, from improper exposure and white object regions. To attenuate such deficiencies, we propose an improved AGC algorithm. The novel strategy of negative images is used to realize CE of the bright images, and gamma correction modulated by a truncated CDF is employed to enhance the dimmed ones. As such, local over-enhancement and structure distortion can be alleviated. Both qualitative and quantitative experimental results show that our proposed method yields consistently good CE results.
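A minimal sketch of the CDF-driven gamma correction with the negative-image strategy might look as follows; the 256-bin histogram, the brightness threshold of 0.5, and the omission of the truncated-CDF modulation are simplifications of the paper's method:

```python
import numpy as np

def improved_agc(gray):
    """Adaptive gamma correction (gamma = 1 - CDF) with the
    negative-image strategy for globally bright inputs.
    gray: uint8 grayscale image."""
    g = np.clip(gray.astype(np.float64) / 255.0, 0.0, 1.0)
    bright = g.mean() > 0.5          # crude global-brightness test
    if bright:
        g = 1.0 - g                  # enhance the negative, invert back later
    hist, _ = np.histogram(g, bins=256, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / g.size
    # Per-pixel gamma from the CDF of that pixel's gray level.
    gamma = 1.0 - cdf[np.minimum((g * 255).astype(int), 255)]
    out = np.power(g, gamma)
    if bright:
        out = 1.0 - out
    return (out * 255).astype(np.uint8)
```

Since gamma lies in [0, 1], dim pixels are always pushed upward; operating on the negative turns that same push into a darkening of over-bright images.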

    Real-time Model-based Image Color Correction for Underwater Robots

    Recently, a new underwater image formation model showed that the coefficients related to the direct and backscattered transmission signals depend on the type of water, camera specifications, water depth, and imaging range. This paper proposes an underwater color correction method that integrates this new model on an underwater robot, using information from a pressure depth sensor for water depth and a visual odometry system for estimating scene distance. Experiments were performed with and without a color chart over coral reefs and a shipwreck in the Caribbean. We demonstrate the performance of our proposed method by comparing it with other statistics-, physics-, and learning-based color correction methods. Applications for our proposed method include improved 3D reconstruction and more robust underwater robot navigation. Comment: Accepted at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
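Assuming the revised formation model I = J * exp(-beta_d * z) + B_inf * (1 - exp(-beta_b * z)) with known per-channel coefficients, the correction amounts to subtracting backscatter and then compensating the direct-signal attenuation. The sketch below is a hypothetical inversion under those assumptions, not the authors' full pipeline:

```python
import numpy as np

def restore_color(img, rng_dist, beta_d, beta_b, backscatter):
    """Invert I = J * exp(-beta_d * z) + B_inf * (1 - exp(-beta_b * z)).
    img:        (H, W, 3) observed image in [0, 1]
    rng_dist:   (H, W) per-pixel imaging range z (e.g. from visual odometry)
    beta_d, beta_b, backscatter: per-channel (3,) coefficients, assumed
    known from water type / camera calibration and depth."""
    z = np.asarray(rng_dist, dtype=np.float64)[..., None]
    # Remove the backscatter component, then undo direct attenuation.
    direct = img - backscatter * (1.0 - np.exp(-beta_b * z))
    return np.clip(direct * np.exp(beta_d * z), 0.0, 1.0)
```

On a robot, z comes from the range estimate and the coefficients vary with the depth reported by the pressure sensor, which is exactly why the new model's depth dependence matters.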

    Atmospheric Parameters and Metallicities for 2191 stars in the Globular Cluster M4

    We report new metallicities for stars of the Galactic globular cluster M4, using the largest number of stars ever observed at high spectral resolution in any cluster. We analyzed 7250 spectra of 2771 cluster stars gathered with the FLAMES+GIRAFFE spectrograph at the VLT. These medium-resolution spectra cover only a small wavelength range and often have very low signal-to-noise ratios. We attacked this dataset by reconsidering the whole method of abundance analysis of large stellar samples from beginning to end. We developed a new algorithm that automatically determines the atmospheric parameters of a star. Nearly all data preparation steps for the spectroscopic analysis are applied to the syntheses, not the observed spectra. For 322 red giant branch (RGB) stars with V ≤ 14.7 we obtain a nearly constant metallicity of -1.07 (σ = 0.02). No difference in metallicity at the 0.01 dex level is observed between the two RGB sequences identified by Monelli et al. (2013). For 1869 subgiant and main-sequence stars with V > 14.7 we obtain a metallicity of -1.16 (σ = 0.09) after fixing the microturbulent velocity. These values are consistent with previous studies that performed detailed analyses of brighter RGB stars at higher spectral resolution and with wider wavelength coverage. It is not clear whether the small mean metallicity difference between brighter and fainter M4 members is real or results from the low signal-to-noise characteristics of the fainter stars. The strength of our approach is shown by recovering a metallicity close to a single value for more than two thousand stars, using a dataset that is non-optimal for atmospheric analyses. This technique is particularly suitable for noisy data taken in difficult observing conditions. Comment: 17 pages, 20 figures, 3 tables. Accepted for publication in The Astronomical Journal.

    The Hubble Space Telescope Treasury Program on the Orion Nebula Cluster

    The Hubble Space Telescope (HST) Treasury Program on the Orion Nebula Cluster has used 104 orbits of HST time to image the Great Orion Nebula region with the Advanced Camera for Surveys (ACS), the Wide Field Planetary Camera 2 (WFPC2) and the Near Infrared Camera and Multi Object Spectrograph (NICMOS) instruments in 11 filters ranging from the U-band to the H-band equivalent of HST. The program was intended to perform the definitive study of the stellar component of the ONC at visible wavelengths, addressing key questions such as the cluster IMF, age spread, mass accretion, binarity and circumstellar disk evolution. The scanning pattern allowed us to cover a contiguous field of approximately 600 square arcminutes with both ACS and WFPC2, with a typical exposure time of approximately 11 minutes per ACS filter, corresponding to a point-source depth of AB(F435W) = 25.8 and AB(F775W) = 25.2 with 0.2 magnitudes of photometric error. We describe the observations, data reduction and data products, including images, source catalogs and tools for quick-look preview. In particular, we provide ACS photometry for 3399 stars, most of them detected at multiple epochs, WFPC2 photometry for 1643 stars, 1021 of them detected in the U-band, and NICMOS JH photometry for 2116 stars. We summarize the early science results that have been presented in a number of papers. The final set of images and the photometric catalogs are publicly available as High Level Science Products through the Multimission Archive hosted by the Space Telescope Science Institute. Comment: Accepted for publication in the Astrophysical Journal Supplement Series, March 27, 201

    Intensity preserving cast removal in color images using particle swarm optimization

    In this paper, we present an optimal image enhancement technique for color-cast images that preserves their intensity. Existing methods improve the appearance of images affected by different casts (red, green, blue, etc.), but only to some extent. In the proposed method, the color cast is corrected using a transformation function based on gamma values. The optimal gamma values are obtained through particle swarm optimization (PSO). The technique preserves the image intensity and maintains the originality of color by satisfying modified gray-world assumptions. For the performance analysis, an image distance metric in the CIELAB color space is used. The effectiveness of the proposed approach is illustrated by testing it on color-cast images. The distance between the reference image and the corrected image is found to be negligible, indicating that the enhanced results of the proposed algorithm are closer to the reference images than those of other existing methods.
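A basic PSO search for per-channel gamma values under the gray-world assumption could be sketched as follows; the swarm parameters and the variance-of-channel-means cost are illustrative choices, and the paper's intensity-preservation constraint and CIELAB evaluation are omitted:

```python
import numpy as np

def pso_gamma_correct(img, n_particles=20, iters=40, seed=0):
    """Search per-channel gamma values with a basic particle swarm so
    that the corrected image approaches the gray-world assumption
    (equal channel means). img: (H, W, 3) float image in [0, 1]."""
    rng = np.random.default_rng(seed)
    img = np.clip(img.astype(np.float64), 1e-6, 1.0)

    def cost(gamma):
        means = np.array([img[..., c] ** gamma[c] for c in range(3)]).mean(axis=(1, 2))
        return np.var(means)  # gray world: channel means should agree

    pos = rng.uniform(0.3, 3.0, (n_particles, 3))   # candidate gamma triples
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 3))
        # Inertia + cognitive + social terms of standard PSO.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.1, 5.0)
        c = np.array([cost(p) for p in pos])
        improved = c < pcost
        pbest[improved], pcost[improved] = pos[improved], c[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, np.stack([img[..., c] ** gbest[c] for c in range(3)], axis=-1)
```

The cost function is the only problem-specific piece; swapping in a CIELAB distance to a reference, as the paper does for evaluation, changes nothing in the swarm update.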