    Automated detection of effective scene illuminant chromaticity from specular highlights in digital images

    An advanced, automated method is presented for determining an effective scene illuminant chromaticity (scene illuminant plus imaging system variables) from specular highlights in digital images after image capture. The underlying theory is based on a two-component reflection model in which the scene illuminant's relative spectral power distribution is preserved in the specular component. Related methodologies for extracting scene illuminant information, as well as alternative methods for achieving color constancy, are presented along with the factors that inhibit successful implementation. The development of a more robust algorithm is then discussed. This algorithm locates the center of convergence of a radial line pattern in the two-dimensional chromaticity histogram, which theoretically identifies the effective scene illuminant chromaticity. It does so by using a radiality index to quantify the relative correlation between a radial mask and the histogram's radial line pattern at discrete chromaticity coordinates within a specified search region. The coordinates with the strongest radiality index are adopted as the effective scene illuminant chromaticity. For a set of controlled test images, the physics-based specular highlight algorithm determined effective scene illuminant chromaticities with nearly three times the accuracy of a benchmark statistically based gray-world algorithm. Its primary advantage was sustained performance under image conditions with dominant colors, weak specular reflections, and strong interreflections.
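    A minimal sketch of the histogram search described above, assuming one plausible form for the radiality index (the normalized correlation between a synthetic radial-spoke mask and the local neighborhood of the chromaticity histogram); the paper's exact mask construction and index definition may differ, and all function names here are illustrative only.

```python
# Illustrative sketch, not the paper's implementation: score candidate
# chromaticity bins by how strongly a radial-line pattern converges on them.
import numpy as np

def radial_mask(size=31, n_spokes=16):
    """Binary mask of spokes radiating from the window center."""
    m = np.zeros((size, size))
    c = size // 2
    for a in np.linspace(0.0, np.pi, n_spokes, endpoint=False):
        for r in range(1, c + 1):
            y, x = int(round(c + r * np.sin(a))), int(round(c + r * np.cos(a)))
            m[y, x] = 1.0
            m[2 * c - y, 2 * c - x] = 1.0  # mirror spoke in the opposite direction
    return m

def radiality_index(hist, cy, cx, mask):
    """Assumed index: normalized correlation of the mask with the histogram window."""
    h = mask.shape[0] // 2
    win = hist[cy - h:cy + h + 1, cx - h:cx + h + 1]
    if win.shape != mask.shape:          # candidate too close to the histogram edge
        return -np.inf
    w, m = win - win.mean(), mask - mask.mean()
    denom = np.linalg.norm(w) * np.linalg.norm(m)
    return float((w * m).sum() / denom) if denom > 0 else -np.inf

def estimate_illuminant(hist, search_region):
    """Return the (row, col) bin in the search region with the strongest index."""
    mask = radial_mask()
    scores = [(radiality_index(hist, cy, cx, mask), (cy, cx)) for cy, cx in search_region]
    return max(scores)[1]
```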

    Color Calibration via Natural Food Colors

    Color image calibration is usually done with the aid of a color chart, such as the Macbeth ColorChecker, containing a set of carefully produced color patches. However, in many consumer applications such as Internet shopping, for which the correct reproduction of color can be very important, most users will not have a color chart readily available and are probably not interested in purchasing one in any case. We propose using the colors of the fleshy interior parts of oranges, lemons, and limes, along with cooked egg white, as a means of creating a simple color ‘chart’. A sample of oranges, lemons, and limes from North America and Australia has shown their colors to be quite consistent, and therefore potentially suitable as a set of reference colors for color image calibration. Figure 1 shows one of the images used in measuring the colors of the foods. In the case of Internet sales, a seller photographing color-sensitive merchandise, such as clothing, could simply include one or two of these foods in each picture. This would give the purchaser an immediate point of reference as to whether or not the image colors are correct. Clearly, if the food colors do not look right, neither will the merchandise when it is delivered.
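    As a rough illustration of how such a food-based reference could be used, the sketch below fits a 3×3 linear correction matrix by least squares from colors sampled in an uncalibrated photo to known reference colors. This is a generic calibration technique, not the paper's procedure, and the RGB values are placeholders invented for the example.

```python
# Sketch under stated assumptions: the RGB values below are placeholders,
# not measured colors of real oranges, lemons, limes, or egg white.
import numpy as np

reference_rgb = np.array([      # "true" colors of the food references (placeholders)
    [0.95, 0.55, 0.15],         # orange flesh
    [0.98, 0.90, 0.45],         # lemon flesh
    [0.75, 0.85, 0.40],         # lime flesh
    [0.97, 0.96, 0.93],         # cooked egg white
])

measured_rgb = np.array([       # colors sampled from the same foods in the photo
    [0.80, 0.50, 0.20],
    [0.85, 0.82, 0.50],
    [0.60, 0.75, 0.42],
    [0.85, 0.88, 0.90],
])

def fit_correction_matrix(measured, reference):
    """Least-squares 3x3 matrix M such that measured @ M.T approximates reference."""
    m_t, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return m_t.T

def apply_correction(image, M):
    """Apply M to an HxWx3 float image and clip to the displayable range."""
    corrected = image.reshape(-1, 3) @ M.T
    return np.clip(corrected.reshape(image.shape), 0.0, 1.0)

M = fit_correction_matrix(measured_rgb, reference_rgb)
```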

    Initialization Requirements in Developing Mobile Learning 'Molearn' for Biology Students Using Inquiry-Based Learning

    Inquiry-based learning is a kind of learning activity that engages students' full capabilities in exploring and investigating particular objects or phenomena using critical thinking skills. Information technology now contributes tangibly to every aspect of education, including e-learning, a widely adopted learning model in 21st-century education. This study aims to identify the initial requirements for developing the mobile learning application ‘Molearn’ based on the inquiry-based method. Developed in cooperation with a community of senior high school Biology teachers, ‘Molearn’ provides an IT-based medium for the Biology learning process.

    Using the contrast ratio method and achromatic transmission densitometry as a substitute for Status A transmission densitometry with the Photographic Activity Test For Enclosure Materials

    The discontinuation of conventional photographic spot-reading transmission densitometers (including the widely adopted X-Rite model 310), due to the rapid decrease in demand for analog photographic laboratory work, has had a broadly felt effect in the conservation community. In the cultural heritage conservation field, instruments like the X-Rite 310 are widely used, specifically in performing the Photographic Activity Test (PAT) for the preservation of photographic materials. In the present research, five possible alternate metrics were investigated as substitutes for the increasingly unavailable spot-reading transmission densitometers in the Status-A readings mandated by the current PAT. The analyzed metrics were: (1) contrast ratio in reflection using normal illumination geometry and circumferential 45° viewing (0/45:c), (2) contrast ratio in reflection using diffuse illumination and 8° viewing geometry with the specular component included (d/8:i), (3) contrast ratio in reflection using diffuse illumination and 8° viewing geometry with the specular component excluded (d/8:e), (4) Ortho-transmission densitometry, and (5) UV-transmission densitometry. The contrast ratio metric can be obtained with commonly available reflection spectrophotometers, such as the X-Rite 939 and the X-Rite SP64, and its use could open up new possibilities for measuring changes in density and opacity in art reproduction and cultural heritage preservation. The proposed work analyzed readings obtained by three measurement instruments, (1) the X-Rite 361T, (2) the X-Rite 939, and (3) the X-Rite SP64, on a set of three achromatic transmission step-wedges (Stouffer Graphic Arts T1530CC 15-step transmission wedges) used as a surrogate for the colloidal-silver strip used in the PAT. The goal was to evaluate the performance of the five proposed metrics and geometries as possible alternatives to transmission densitometry measurements when recording data for the Photographic Activity Test. The results indicate a near-perfect linear relationship between readings from the X-Rite 361T Ortho-transmission densitometry channel and Status-A transmission density readings from the X-Rite 310 across the entire densitometric range represented by the Stouffer wedge. The UV-channel measurements also fit a near-perfect linear regression model against the Status-A readings. Both relationships were found to be statistically significant. On the other hand, the contrast ratio setups did not exhibit the same linear relationship when the entire measurement range is considered. However, for readings below 0.95 opacity, the contrast ratio measurements did exhibit a meaningful linear relationship with Status-A transmission readings at density values below 1.8, albeit with lower correlation than either of the X-Rite 361T channels.
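    A brief sketch of the comparison described above, using invented placeholder numbers rather than the study's measurements: compute contrast-ratio opacity from backed reflectance readings, and fit a linear model mapping ortho-channel densities to Status-A densities over a step wedge.

```python
# Sketch with placeholder data; the study's instrument readings are not
# reproduced here.
import numpy as np

def contrast_ratio_opacity(y_over_black, y_over_white):
    """One common definition of contrast-ratio opacity: luminous reflectance
    over a black backing divided by reflectance over a white backing."""
    return y_over_black / y_over_white

# Hypothetical paired densities for a 15-step transmission wedge.
rng = np.random.default_rng(0)
ortho_density = np.linspace(0.05, 3.05, 15)
status_a_density = 0.98 * ortho_density + 0.02 + rng.normal(0.0, 0.01, 15)

# Ordinary least-squares fit and coefficient of determination.
slope, intercept = np.polyfit(ortho_density, status_a_density, 1)
predicted = slope * ortho_density + intercept
ss_res = np.sum((status_a_density - predicted) ** 2)
ss_tot = np.sum((status_a_density - status_a_density.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"Status-A ≈ {slope:.3f} × ortho + {intercept:.3f}  (R² = {r_squared:.4f})")
```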

    Physical Characteristics, Sensors and Applications of 2D/3D Integrated CMOS Photodiodes

    Two-dimensional photodiodes are reverse-biased at a reasonable voltage, whereas 3D photodiodes are typically operated in Geiger mode. The design of integrated 2D and 3D photodiodes is investigated in terms of quantum efficiency, dark current, crosstalk, response time, and so on. Beyond the photodiodes themselves, a charge supply mechanism provides the proper charge for high-dynamic-range 2D sensing, and a feedback pull-down mechanism shortens the response time of 3D sensing for time-of-flight applications. In particular, rapid parallel readout in 3D mode is achieved through a bus-sharing mechanism. Using the TSMC 0.35 μm 2P4M technology, a 2D/3D-integrated image sensor comprising P-diffusion_N-well_P-substrate photodiodes, pixel circuits, correlated double sampling circuits, sense amplifiers, a multi-channel time-to-digital converter, column/row decoders, bus-sharing connections/decoders, readout circuits, and so on was implemented with a die size of 12 mm × 12 mm. The proposed 2D/3D-integrated image sensor captures a 352×288-pixel 2D image with a dynamic range of up to 100 dB and an 88×72-pixel 3D image with a depth resolution of around 4 cm. It can therefore capture gray-level and depth information of a scene at the same location without additional alignment or post-processing. Finally, the currently available 2D and 3D image sensors are discussed and compared.
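    For intuition about the quoted ~4 cm depth resolution, the sketch below applies the basic time-of-flight relation depth = c·t/2 to a time-to-digital-converter count. The TDC bin width used here is an assumed value for illustration, not a parameter reported for this sensor.

```python
# Back-of-the-envelope time-of-flight sketch; the bin width below is assumed,
# not taken from the paper.
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(tdc_counts, bin_width_s=250e-12):
    """Convert a time-to-digital-converter count to depth in meters."""
    round_trip_time = tdc_counts * bin_width_s
    return C * round_trip_time / 2.0

# A 250 ps bin corresponds to roughly 3.7 cm of depth per count, the same
# order of magnitude as the ~4 cm resolution quoted above.
print(f"one count ≈ {tof_depth_m(1) * 100:.1f} cm")
```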

    Efficient Unified Demosaicing for Bayer and Non-Bayer Patterned Image Sensors

    As the physical size of recent CMOS image sensors (CIS) gets smaller, the latest mobile cameras are adopting unique non-Bayer color filter array (CFA) patterns (e.g., Quad, Nona, QxQ), which consist of homogeneous color units of adjacent pixels. These non-Bayer sensors are superior to the conventional Bayer CFA thanks to their changeable pixel-bin sizes for different light conditions, but they may introduce visual artifacts during demosaicing due to their inherent pixel pattern structures and sensor hardware characteristics. Previous demosaicing methods have primarily focused on the Bayer CFA, necessitating distinct reconstruction methods for non-Bayer patterned CIS with various CFA modes under different lighting conditions. In this work, we propose an efficient unified demosaicing method that can be applied to both conventional Bayer RAW data and various non-Bayer CFAs' RAW data in different operation modes. Our Knowledge Learning-based demosaicing model for Adaptive Patterns, namely KLAP, utilizes CFA-adaptive filters for only 1% of the key filters in the network for each CFA, yet still manages to demosaic all the CFAs effectively, yielding performance comparable to large-scale models. Furthermore, by employing meta-learning during inference (KLAP-M), our model is able to eliminate unknown sensor-generic artifacts in real RAW data, effectively bridging the gap between synthetic images and real sensor RAW. Our KLAP and KLAP-M methods achieved state-of-the-art demosaicing performance on both synthetic and real RAW data of Bayer and non-Bayer CFAs.
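    To make the pattern difference concrete, here is a small sketch (unrelated to the KLAP network itself) that builds per-channel sampling masks for a standard RGGB Bayer CFA and for a Quad CFA, in which each color occupies a homogeneous 2×2 block of adjacent pixels.

```python
# Illustrative CFA masks only; this is not the proposed demosaicing model.
import numpy as np

def bayer_masks(h, w):
    """RGGB Bayer masks of shape (3, h, w): channel 0=R, 1=G, 2=B."""
    m = np.zeros((3, h, w), dtype=np.float32)
    m[0, 0::2, 0::2] = 1  # R on even rows, even cols
    m[1, 0::2, 1::2] = 1  # G on even rows, odd cols
    m[1, 1::2, 0::2] = 1  # G on odd rows, even cols
    m[2, 1::2, 1::2] = 1  # B on odd rows, odd cols
    return m

def quad_masks(h, w):
    """Quad masks: a half-resolution Bayer pattern upsampled so that each
    color covers a 2x2 block of adjacent pixels."""
    base = bayer_masks((h + 1) // 2, (w + 1) // 2)
    return base.repeat(2, axis=1).repeat(2, axis=2)[:, :h, :w]

# A mosaicked RAW frame is the full-color image summed under its masks:
#   raw = (quad_masks(h, w) * rgb_image.transpose(2, 0, 1)).sum(axis=0)
print(quad_masks(4, 4)[0])  # red-pixel locations in one 4x4 Quad tile
```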

    Running head: What color is it

    Color vision provides low-resolution spectrophotometric information about candidate materials for planetary surfaces that is comparable in precision to wideband photoelectric photometry and considerably superior to Voyager TV data. The basic concepts, terminology, and notation of color science are briefly explained, and it is shown how to convert a reflectance spectrum into a color specification. An Appendix lists a simple computer subroutine to convert spectral reflectance into CIE coordinates, and the text explains how to convert these into a surface color in a standard color atlas. Target and printed Solar System colors are compared to show how accurate the printed colors are.
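    The conversion mentioned above (reflectance spectrum to CIE coordinates) follows the standard colorimetric integration; the sketch below is a generic version of that computation, not the Appendix subroutine, and it assumes the caller supplies the illuminant and the CIE 1931 color-matching functions tabulated on a common wavelength grid.

```python
# Generic reflectance-to-CIE sketch; illuminant and color-matching-function
# tables (e.g., 380-780 nm at 5 nm steps) must be supplied by the caller.
import numpy as np

def reflectance_to_xyz(reflectance, illuminant, xbar, ybar, zbar):
    """Integrate reflectance against the illuminant and CMFs to get CIE XYZ,
    normalized so that a perfect reflecting diffuser has Y = 100."""
    k = 100.0 / np.sum(illuminant * ybar)
    X = k * np.sum(reflectance * illuminant * xbar)
    Y = k * np.sum(reflectance * illuminant * ybar)
    Z = k * np.sum(reflectance * illuminant * zbar)
    return X, Y, Z

def xyz_to_xy(X, Y, Z):
    """Chromaticity coordinates from tristimulus values."""
    s = X + Y + Z
    return (X / s, Y / s) if s > 0 else (0.0, 0.0)
```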