
    Real-time Model-based Image Color Correction for Underwater Robots

    Full text link
    Recently, a new underwater image formation model showed that the coefficients governing the direct and backscattered signals depend on the type of water, camera specifications, water depth, and imaging range. This paper proposes an underwater color correction method that integrates this new model on an underwater robot, using a pressure depth sensor for water depth and a visual odometry system for estimating scene distance. Experiments were performed with and without a color chart over coral reefs and a shipwreck in the Caribbean. We demonstrate the performance of our proposed method by comparing it with other statistics-, physics-, and learning-based color correction methods. Applications of our proposed method include improved 3D reconstruction and more robust underwater robot navigation.
    Comment: Accepted at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
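The model described here treats the direct-signal and backscatter attenuation coefficients as distinct quantities. A minimal numpy sketch of inverting such a two-coefficient model; the coefficient values below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def correct_color(I, B_inf, beta_D, beta_B, z):
    """Invert a simplified two-coefficient underwater image formation model:
    I = J * exp(-beta_D * z) + B_inf * (1 - exp(-beta_B * z)),
    where z is the imaging range (e.g. from visual odometry) and the two
    beta coefficients differ, as in the revised model."""
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))
    J = (I - backscatter) * np.exp(beta_D * z)   # remove veiling, then de-attenuate
    return np.clip(J, 0.0, 1.0)

# Round-trip check with a synthetic scene radiance J (per-channel R, G, B):
J_true = np.array([0.6, 0.4, 0.2])     # scene radiance (illustrative)
B_inf  = np.array([0.1, 0.2, 0.3])     # veiling light, blue-heavy (illustrative)
beta_D = np.array([0.8, 0.4, 0.2])     # direct-signal attenuation (illustrative)
beta_B = np.array([0.6, 0.3, 0.15])    # backscatter coefficient (illustrative)
z = 2.0                                # imaging range in meters
I = J_true * np.exp(-beta_D * z) + B_inf * (1 - np.exp(-beta_B * z))
J_rec = correct_color(I, B_inf, beta_D, beta_B, z)
```

On the robot, `z` would come from the visual odometry scene-distance estimate and the coefficients would be selected from water type and the pressure-sensor depth, as the abstract describes.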

    The Internal Ultraviolet-to-Optical Color Dispersion: Quantifying the Morphological K-Correction

    Full text link
    We present a quantitative measure of the internal color dispersion within galaxies, which quantifies differences in morphology as a function of wavelength. We apply this statistic to a local galaxy sample with archival images at 1500 and 2500 Angstroms from the Ultraviolet Imaging Telescope, and ground-based B-band observations, to investigate how the color dispersion relates to global galaxy properties. The internal color dispersion generally correlates with transformations in galaxy morphology as a function of wavelength, i.e., it quantifies the morphological K-correction. Mid-type spiral galaxies exhibit the highest dispersion in their internal colors, which stems from differences in the bulge, disk, and spiral-arm components. Irregulars and late-type spirals show moderate internal color dispersion, which implies that young stars generally dominate the colors. Ellipticals, lenticulars, and early-type spirals generally have low or negligible internal color dispersion, which indicates that the stars contributing to the UV-to-optical emission have a very homogeneous distribution. We discuss the application of the internal color dispersion to high-redshift galaxies in deep Hubble Space Telescope images. By simulating local galaxies at cosmological distances, we show that many of the galaxies have luminosities sufficiently bright at rest-frame optical wavelengths to be detected within the limits of the currently deepest near-infrared surveys, even with no evolution. Under the assumption that the luminosity and color evolution of the local galaxies conform with the measured values of high-redshift objects, we show that galaxies' intrinsic internal color dispersion remains measurable out to z ~ 3.
    Comment: Accepted for publication in the Astrophysical Journal. 41 pages, 13 figures (3 color). Full resolution version (~8 Mb) available at http://mips.as.arizona.edu/~papovich/papovich_astroph.p
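One plausible form of such an internal color dispersion statistic (a simplified, noise-free reconstruction for illustration, not the paper's exact estimator): fit one band to the other with a linear relation and measure the residual flux relative to the total flux about the fitted background. Identical morphologies give a value near zero; unrelated morphologies give a value near one.

```python
import numpy as np

def internal_color_dispersion(I1, I2):
    """Fit I2 ~ alpha * I1 + beta by least squares, then return the residual
    sum of squares divided by the total flux about the background beta.
    Zero means the two bands differ only by scale and offset (no internal
    color variation); values near one mean unrelated structure."""
    x, y = I1.ravel(), I2.ravel()
    A = np.vstack([x, np.ones_like(x)]).T
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = np.sum((y - alpha * x - beta) ** 2)
    total = np.sum((y - beta) ** 2)
    return resid / total

rng = np.random.default_rng(0)
I1 = rng.random((32, 32))                                  # synthetic band 1
xi_same = internal_color_dispersion(I1, 2.0 * I1 + 0.1)    # same morphology
xi_diff = internal_color_dispersion(I1, rng.random((32, 32)))  # unrelated
```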

    Does Dehazing Model Preserve Color Information?

    No full text
    Image dehazing aims at estimating the image information lost due to the presence of fog, haze, and smoke in the scene during acquisition. The degradation causes a loss of contrast and color information, so enhancement becomes an inevitable task in imaging applications and consumer photography. Color information has mostly been evaluated perceptually along with quality, but no work specifically addresses this aspect. We demonstrate how the dehazing model affects color information on simulated and real images. We use a convergence model from the perception of transparency to simulate haze on images. We evaluate color loss in terms of hue angle in IPT color space, saturation in CIE LUV color space, and perceived color difference in CIE LAB color space. Results indicate that saturation is critically changed, and that hue is changed for achromatic colors and blue/yellow colors, where the usual image processing spaces do not show constant-hue lines. We suggest that a correction model based on color transparency perception could help to retrieve color information as an additive layer on dehazing algorithms.
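The three quantities evaluated reduce to simple formulas once pixels are expressed in the relevant color space. A sketch with generic L*a*b*-style triples and illustrative values (the study itself uses IPT for hue and CIE LUV for saturation; the formulas for hue angle and chroma have the same shape in those spaces):

```python
import numpy as np

def delta_E_ab(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

def hue_angle_deg(a, b):
    """Hue angle of an (a, b)-style chromatic pair, in degrees in [0, 360)."""
    return float(np.degrees(np.arctan2(b, a)) % 360.0)

def chroma(a, b):
    """Chromatic magnitude; saturation is commonly chroma over lightness."""
    return float(np.hypot(a, b))

# Illustrative triples: a hazy (desaturated) color vs. the clear-scene color.
hazy  = (78.0,  2.0,  6.0)
clear = (55.0, 20.0, 45.0)
dE = delta_E_ab(hazy, clear)                       # perceived color difference
hue_shift = hue_angle_deg(*hazy[1:]) - hue_angle_deg(*clear[1:])
```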

    Deterministic Neural Illumination Mapping for Efficient Auto-White Balance Correction

    Full text link
    Auto-white balance (AWB) correction is a critical operation in image signal processors for accurate and consistent color correction across various illumination scenarios. This paper presents a novel and efficient AWB correction method that achieves at least 35 times faster processing than the current state-of-the-art methods, with equivalent or superior performance on high-resolution images. Inspired by deterministic color style transfer, our approach introduces deterministic illumination color mapping, leveraging learnable projection matrices for both the canonical illumination form and the AWB-corrected output. It feeds high-resolution images and their corresponding latent representations into a mapping module to derive a canonical form, followed by another mapping module that maps the pixel values to those of the corrected version. This strategy is resolution-agnostic and enables seamless integration of any pre-trained AWB network as the backbone. Experimental results confirm the effectiveness of our approach, revealing significant performance improvements and reduced time complexity compared to state-of-the-art methods. Our method provides an efficient deep-learning-based AWB correction solution, promising real-time, high-quality color correction for digital imaging applications. Source code is available at https://github.com/birdortyedi/DeNIM/
    Comment: 9 pages, 5 figures, ICCV 2023 Workshops (RCV 2023)
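The core idea of a deterministic, resolution-agnostic color mapping can be illustrated outside the learned setting: estimate a small projection matrix from a few pixel correspondences and then apply it per pixel at any resolution. A toy numpy sketch using plain least squares in place of the paper's learnable modules (all data here is synthetic):

```python
import numpy as np

def fit_color_matrix(src, dst):
    """Least-squares 3x3 matrix M with dst ~ src @ M, fitted on a small set
    of pixel correspondences (e.g. from a downsampled image). Applying M
    per pixel is resolution-agnostic: the fitting cost does not grow with
    the output image size."""
    M, *_ = np.linalg.lstsq(src.reshape(-1, 3), dst.reshape(-1, 3), rcond=None)
    return M

rng = np.random.default_rng(1)
# Synthetic ground-truth mapping between input and AWB-corrected colors.
M_true = np.array([[1.2, 0.0, 0.0],
                   [0.1, 0.9, 0.0],
                   [0.0, 0.0, 1.1]])
lowres      = rng.random((64, 3))      # stand-in for downsampled input pixels
lowres_corr = lowres @ M_true          # stand-in for corrected counterparts
M = fit_color_matrix(lowres, lowres_corr)

highres = rng.random((1000, 3))        # full-resolution pixels
corrected = highres @ M                # constant cost per pixel
```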

    Using color management to automate the color reproduction of 3-D images procured via a digital camera/3-D scanner

    Get PDF
    The use of digital photography is migrating from its major applications in photojournalism to professional studio photography. Traditional service bureaus such as professional photo labs and prepress trade shops are adding digital imaging services to their film-based services. Also, businesses such as advertising agencies and publishers, who traditionally outsource work to service bureaus, are bringing digital imaging services in-house. State-of-the-art imaging technology empowers users with new tools, but does not guarantee that the task of generating acceptable image reproductions will be easier.
    The basic problem in the desktop color prepress environment is that each component in this open system handles color differently. Miscommunication between devices results in user frustration with an unpredictable, inconsistent, and inaccurate color system. The solution to this problem is to assess one's workflow and adopt a color management system (CMS). The purpose of CMSs is to help users maintain color integrity throughout their desktop system and to automate the color separation process.
    This thesis project investigated the possibility of applying a comprehensive CMS to automate the color reproduction of 3-D images procured with a digital camera. Automatic exposure by Leaf System's Lumina digital camera and automatic adjustments for tone reproduction, gray balance, and color correction by Kodak's PCS100 CMS were employed. The experimental design began with the calibration of each component in the imaging chain. Next, a three-dimensional test scene of objects displaying tone and color variety was digitized by the Lumina camera under specific studio lighting conditions. Under the same studio conditions, a Kodak Q-60 test target was digitized; this image file was used to characterize a device profile for the Lumina digital camera. The digitized 3-D test scene file was then sent through a color-managed workflow for automatic color reproduction.
    The automated, color-managed reproduction process was as follows: 1) select monitor, input, effect, and output profiles in the PCS100 Color Manager; 2) acquire the image via Photoshop on a Macintosh; 3) convert the image colors with Kodak's PCS100 plug-ins by applying the custom input profile, output simulation profile, and 3M Matchprint output profile; 4) output film via an Agfa Selectset 5000; and 5) proof color with 3M Matchprint to SWOP (Specifications for Web Offset Printing).
    Subjective evaluation was based on the single-stimulus method. Visual assessments were performed by twenty color-tested judges with experience in printing or photography. A set of ten color proofs of the identical image was individually evaluated for acceptability. The criteria for acceptable color reproduction included tone reproduction, gray balance, and color correction. Proofs that received high average scores (>80%) were determined acceptable. Analysis of the results determined that, with proper calibration and CMS color conversion technology, one can deliver acceptable tone reproduction and pleasing color. Gray balance was determined unacceptable for all proofs, based solely on a perceived yellowish-green cast in the MacBeth ColorChecker's three-quarter-tone patch. Excluding the gray balance factor, four proofs were determined acceptable for tone and color reproduction.
    Objective evaluation was made to further assess the color accuracy from original to acceptable proof, and to correlate colorimetric differences with the visual assessments. Quantitative assessment was based on colorimetric CIE L*a*b* measurements and calculated color differences (ΔE, ΔL*, ΔC*, ΔH*ab) of MacBeth color patches and 3-D objects. Objects in the original scene and corresponding image areas in the proofs were measured in order to study variations in hue, lightness, and saturation.
    Analysis of the results demonstrated that, overall, the images in the proofs were lighter, less saturated, and had small hue shifts compared to the original. The proofed image would probably be a poor match to the original in terms of objective color accuracy. But for this thesis project, color proof acceptability was determined by the subjective, visual evaluations.
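The color-difference components used in the quantitative assessment follow the standard CIE76 definitions; a minimal sketch with illustrative patch values (not the thesis measurements). Note how a proof patch that is lighter and less saturated but unchanged in hue yields a positive ΔL*, a negative ΔC*, and zero ΔH*ab:

```python
import math

def color_differences(lab_orig, lab_proof):
    """CIE L*a*b* difference components: total dE, lightness dL*, chroma
    dC*, and hue dH*ab, per the CIE76 definitions (dH* is recovered from
    dE, dL*, and dC*)."""
    L1, a1, b1 = lab_orig
    L2, a2, b2 = lab_proof
    dL = L2 - L1
    da, db = a2 - a1, b2 - b1
    dE = math.sqrt(dL ** 2 + da ** 2 + db ** 2)
    dC = math.hypot(a2, b2) - math.hypot(a1, b1)
    dH2 = max(da ** 2 + db ** 2 - dC ** 2, 0.0)   # guard tiny negatives
    return dE, dL, dC, math.sqrt(dH2)

# Illustrative original vs. proof patch: lighter, less saturated, same hue.
dE, dL, dC, dH = color_differences((50.0, 30.0, 40.0), (55.0, 24.0, 32.0))
```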

    HST/WFPC2 morphologies and color maps of distant luminous infrared galaxies

    Full text link
    Using HST/WFPC2 imaging in the F606W (or F450W) and F814W filters, we obtained observed-frame color maps for 36 distant (0.4<z<1.2) luminous infrared galaxies (LIRGs) with average star formation rates of ~100 M_sun/yr. Stars and compact sources are taken as references to align the images after correction for geometric distortion. This leads to an alignment accuracy of 0.15 pixel, which is a prerequisite for studying the detailed color properties of galaxies with complex morphologies. A new method is developed to quantify the reliability of each pixel in the color map without any bias against very red or very blue color regions.
    Based on analyses of the two-dimensional structure and the spatially resolved color distribution, we carried out a morphological classification of the LIRGs. About 36% of the LIRGs were classified as disk galaxies and 22% as irregulars. Only 6 systems (17%) are obvious ongoing major mergers. An upper limit of 58% was found for the fraction of mergers in LIRGs when all possible merging/interacting systems are included. Strikingly, the fraction of compact sources is as high as 25%, similar to that found in optically selected samples. From their K-band luminosities, LIRGs are relatively massive systems, with an average stellar mass of about 1.1x10^11 solar masses. They are related to the formation of massive, large disks, both from their morphologies and from the fact that they represent a significant fraction of distant disks selected by their sizes. The compact LIRGs show blue cores, which could be associated with the formation of the central regions of these galaxies. We suggest that many massive disks are still forming a large fraction of their stellar mass since z=1. For most of them, the central parts (bulge?) were formed prior to the formation of their disks.
    Comment: 20 pages, 14 figures, accepted for publication in A&
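A per-pixel reliability criterion of the general kind described can be sketched as a two-band signal-to-noise cut (an illustrative stand-in, not the paper's method): a pixel enters the color map only if it is detected in both filters, so very red or very blue regions are kept rather than clipped by a one-band threshold.

```python
import numpy as np

def color_map(I_blue, I_red, sigma_blue, sigma_red, snr_min=3.0):
    """Observed-frame color map in magnitudes, -2.5*log10(I_blue/I_red),
    masked to pixels exceeding a signal-to-noise cut in BOTH bands."""
    reliable = (I_blue > snr_min * sigma_blue) & (I_red > snr_min * sigma_red)
    cmap = np.full(I_blue.shape, np.nan)
    cmap[reliable] = -2.5 * np.log10(I_blue[reliable] / I_red[reliable])
    return cmap, reliable

# Tiny synthetic example: two reliable pixels, two below the cut.
I_b = np.array([[10.0, 0.5], [40.0, 1.0]])
I_r = np.array([[10.0, 8.0], [10.0, 0.2]])
cmap, mask = color_map(I_b, I_r, sigma_blue=1.0, sigma_red=1.0)
```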

    ORGB: Offset Correction in RGB Color Space for Illumination-Robust Image Processing

    Full text link
    Single materials have colors that form straight lines in RGB space. However, in severe shadow cases, those lines do not intersect the origin, which is inconsistent with the description in most of the literature. This paper is concerned with the detection and correction of the offset between the intersection and the origin. First, we analyze the reason this offset forms via an optical imaging model. Second, we present a simple and effective way to detect and remove the offset. The resulting images, named ORGB, have almost the same appearance as the original RGB images while being more illumination-robust for color space conversion. Besides, image processing using ORGB instead of RGB is free from the interference of shadows. Finally, the proposed offset correction method is applied to a road detection task, improving performance in both quantitative and qualitative evaluations.
    Comment: Project website: https://baidut.github.io/ORGB
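The offset described here can be illustrated synthetically: fit a line to each material's pixels and take the least-squares intersection of those lines as the offset to subtract. This is a reconstruction of the stated idea on toy data, not the paper's detection algorithm:

```python
import numpy as np

def line_from_pixels(pixels):
    """Fit a 3D line (centroid + principal direction) to one material's
    RGB pixels, which ideally lie on a straight line in RGB space."""
    c = pixels.mean(axis=0)
    _, _, Vt = np.linalg.svd(pixels - c)
    return c, Vt[0]

def estimate_offset(lines):
    """Least-squares intersection of several material lines: minimize the
    sum of squared distances from a point to each line. Under the offset
    model, this common point is what should be subtracted to get ORGB."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for c, d in lines:
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Synthetic shadowed pixels: two materials, a shared offset, varying shading.
rng = np.random.default_rng(2)
offset = np.array([0.05, 0.08, 0.12])
m1, m2 = np.array([0.8, 0.5, 0.2]), np.array([0.2, 0.6, 0.9])
px1 = offset + np.outer(rng.uniform(0.2, 1.0, 50), m1)
px2 = offset + np.outer(rng.uniform(0.2, 1.0, 50), m2)
est = estimate_offset([line_from_pixels(px1), line_from_pixels(px2)])
orgb1 = px1 - est          # corrected pixels: the line now passes the origin
```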

    Comparison of AIS Versus TMS Data Collected over the Virginia Piedmont

    Get PDF
    The Airborne Imaging Spectrometer (AIS), NS001 Thematic Mapper Simulator (TMS), and a Zeiss camera collected remotely sensed data simultaneously on October 27, 1983, at an altitude of 6860 meters (22,500 feet). AIS data were collected in 32 channels covering 1200 to 1500 nm. A simple atmospheric correction was applied to the AIS data, after which spectra for four cover types were plotted. Spectra for these ground cover classes showed a telescoping effect at the wavelength endpoints. Principal components were extracted from the shortwave region of the AIS (1200 to 1280 nm), the full-spectrum AIS (1200 to 1500 nm), and the TMS (450 to 12,500 nm) to create three separate three-component color image composites. A comparison of TMS band 5 (1000 to 1300 nm) to the six principal components from the shortwave AIS region (1200 to 1280 nm) showed improved visual discrimination of ground cover types. A contrast of the color image composites created from principal components showed the AIS composites to exhibit a clearer demarcation between certain ground cover types, but subtle differences within other regions of the imagery were not as readily seen.
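A three-component color composite of the kind described can be sketched with generic principal component analysis on a synthetic band cube (illustrative data, not the original AIS/TMS imagery): project all pixels onto the first three principal components and stretch each to [0, 1] for display as R, G, and B.

```python
import numpy as np

def principal_component_composite(cube, n_components=3):
    """Project a (rows, cols, bands) image cube onto its first principal
    components and rescale each component to [0, 1], yielding a
    three-band array displayable as an RGB color composite."""
    rows, cols, nbands = cube.shape
    X = cube.reshape(-1, nbands)
    X = X - X.mean(axis=0)                       # center each band
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:n_components].T                # scores on top components
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    pcs = (pcs - lo) / (hi - lo)                 # contrast stretch per component
    return pcs.reshape(rows, cols, n_components)

rng = np.random.default_rng(3)
cube = rng.random((16, 16, 32))   # stand-in for the 32 AIS channels
rgb = principal_component_composite(cube)
```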

    COLOR RESOLVED CHERENKOV IMAGING ALLOWS FOR DIFFERENTIAL SIGNAL DETECTION IN BLOOD AND MELANIN CONTENT

    Get PDF
    Cherenkov imaging in radiation therapy allows a video display of the irradiation beam on the patient's tissue, for visualization of the treatment. High-energy radiation from a linear accelerator (Linac) produces spectrally continuous broadband light inside tissue due to the Cherenkov effect; this light is then attenuated by tissue features during transport and exits from the delivery site. Progress in the development of color Cherenkov imaging has opened the possibility of some level of spectroscopic imaging of the light-tissue interaction and interpretation of the specific nature of the tissue being irradiated. Generally, there is a linear relationship between Cherenkov emission and dose in a homogeneous medium; however, human tissue has multiple sources of scatter and absorption that distort this linear relationship. This project investigated how color Cherenkov imaging could be used in tissue with different levels of pigmentation present in the skin and/or different levels of hemoglobin present inside the tissue. A custom-developed, time-gated, three-channel intensified camera was used to image the red, green, and blue (RGB) Cherenkov emission from tissue phantoms that had synthetic epidermal layers and blood. The hypothesis was that RGB color Cherenkov imaging would allow the detection of signals that vary uniquely in these channels in response to changes in blood or melanin content, because of their different absorption spectra across the RGB channels. Oxy-hemoglobin in the blood is highly absorbing in the blue and green but much less so in the red, whereas melanin is highly absorbing across all three channels, falling slightly from blue through green to red. The results showed that these spectral absorption differences did indeed lead to different amounts of exiting light, predominantly in the red wavelength band, where melanin has a higher relative absorption than blood.
    This observation provides for future color distortion corrections and for the interpretation of more accurate Cherenkov imaging via color-based modeling or correction for dose quantification. Based on this work, it is possible to separate the effects of attenuation from skin color or blood volume based upon the colors seen in the Cherenkov images, as these emissions are specific to the patient.
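The channel-wise reasoning above can be sketched with a simple Beer-Lambert attenuation model. The absorption coefficients below are qualitative stand-ins chosen to match the stated behavior (blood strong in blue/green, weak in red; melanin broadly absorbing with a gentle fall toward red), not measured values:

```python
import numpy as np

# Illustrative per-channel absorption coefficients, order (R, G, B),
# arbitrary units: NOT measured tissue optical properties.
MU_HB      = np.array([0.2, 3.0, 4.0])   # oxy-hemoglobin: weak in red
MU_MELANIN = np.array([2.0, 2.5, 3.0])   # melanin: strong everywhere

def cherenkov_rgb(depth_hb, depth_mel, source=np.ones(3)):
    """Beer-Lambert attenuation of a broadband Cherenkov source by a blood
    layer and a melanin (epidermal) layer with given path lengths."""
    return source * np.exp(-MU_HB * depth_hb - MU_MELANIN * depth_mel)

more_blood   = cherenkov_rgb(depth_hb=0.5, depth_mel=0.1)
more_melanin = cherenkov_rgb(depth_hb=0.1, depth_mel=0.5)
# The red/blue ratio separates the two absorbers: blood barely touches the
# red channel, while melanin attenuates it substantially.
ratio_blood = more_blood[0] / more_blood[2]
ratio_mel   = more_melanin[0] / more_melanin[2]
```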

    Practical Camera Sensor Spectral Response and Uncertainty Estimation

    Get PDF
    Knowledge of the spectral response of a camera is important in many applications, such as illumination estimation, spectrum estimation in multi-spectral camera systems, and color consistency correction for computer vision. We present a practical method for estimating the camera sensor spectral response and its uncertainty, consisting of an imaging method and an algorithm. We use only 15 images (four diffraction images and 11 images of color patches of known spectra) to obtain high-resolution spectral response estimates, and obtain uncertainty estimates by training an ensemble of response estimation models. The algorithm does not assume any strict priors that would limit the possible spectral response estimates and is thus applicable to any camera sensor, at least in the visible range. The estimates have low errors when predicting color channel values from known spectra, and are consistent with previously reported spectral response estimates.
    Peer reviewed
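An ensemble-based uncertainty estimate of this general kind can be sketched with ordinary least squares and bootstrap resampling over known-spectrum patches (a generic reconstruction on synthetic data; the paper's imaging method and estimation model are not reproduced here):

```python
import numpy as np

def estimate_response(S, c, n_models=50, seed=0):
    """Estimate one channel's spectral response r from known patch spectra
    (rows of S) and measured channel values c, via least squares on
    bootstrap resamples of the patches. The ensemble mean is the response
    estimate; the ensemble spread is a per-wavelength uncertainty."""
    rng = np.random.default_rng(seed)
    n = len(c)
    fits = []
    for _ in range(n_models):
        idx = rng.integers(0, n, n)                      # resample patches
        r, *_ = np.linalg.lstsq(S[idx], c[idx], rcond=None)
        fits.append(r)
    fits = np.array(fits)
    return fits.mean(axis=0), fits.std(axis=0)

# Synthetic setup: 120 patches of known spectra, 40 wavelength bins,
# a Gaussian-shaped true response, and noiseless channel readings.
rng = np.random.default_rng(4)
S = rng.random((120, 40))
r_true = np.exp(-0.5 * ((np.arange(40) - 20) / 6.0) ** 2)
c = S @ r_true
r_est, r_std = estimate_response(S, c)
```

With noiseless synthetic readings every bootstrap fit recovers the same response, so the spread collapses; with real measurement noise the spread would reflect the estimation uncertainty.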