Fast Color Space Transformations Using Minimax Approximations
Color space transformations are frequently used in image processing,
graphics, and visualization applications. In many cases, these transformations
are complex nonlinear functions, which prohibits their use in time-critical
applications. In this paper, we present a new approach called Minimax
Approximations for Color-space Transformations (MACT). We demonstrate MACT on
three commonly used color space transformations. Extensive experiments on a
large and diverse image set and comparisons with well-known multidimensional
lookup table interpolation methods show that MACT achieves an excellent balance
among four criteria: ease of implementation, memory usage, accuracy, and
computational speed.
Adversarial Image Generation by Spatial Transformation in Perceptual Colorspaces
Deep neural networks are known to be vulnerable to adversarial perturbations.
The amount of these perturbations is generally quantified using distance
metrics. However, even when the measured
perturbations are small, they tend to be noticeable by human observers since
distance metrics are not representative of human perception. On the other
hand, humans are less sensitive to changes in colorspace. In addition, pixel
shifts in a constrained neighborhood are hard to notice. Motivated by these
observations, we propose a method that creates adversarial examples by applying
spatial transformations: pixel locations are shifted independently in the
chrominance channels of perceptual colorspaces, instead of making an additive perturbation
or manipulating pixel values directly. In a targeted white-box attack setting,
the proposed method is able to obtain competitive fooling rates with very high
confidence. The experimental evaluations show that the proposed method has
favorable results in terms of approximate perceptual distance between benign
and adversarially generated images. The source code is publicly available at
https://github.com/ayberkydn/stadv-torc
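A minimal sketch of the core idea, chrominance-only spatial displacement, follows; the fixed one-pixel np.roll stands in for the paper's adversarially optimized flow field, and the BT.601 YCbCr conversion is an assumption (the paper considers perceptual colorspaces generally), so this shows the mechanism only, not the attack:

```python
import numpy as np

# BT.601 full-range RGB -> YCbCr matrix (standard coefficients).
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])
M_inv = np.linalg.inv(M)

def rgb_to_ycbcr(rgb):      # rgb in [0, 1], shape (H, W, 3)
    return rgb @ M.T

def ycbcr_to_rgb(ycc):
    return ycc @ M_inv.T

def shift_chroma(rgb, dy=1, dx=1):
    """Toy stand-in for an optimized flow field: displace only the
    chrominance planes by a small spatial offset (gamut clipping and
    adversarial optimization of the flow are omitted)."""
    ycc = rgb_to_ycbcr(rgb)
    for c in (1, 2):        # Cb and Cr only; luma stays fixed
        ycc[..., c] = np.roll(ycc[..., c], (dy, dx), axis=(0, 1))
    return ycbcr_to_rgb(ycc)

img = np.random.default_rng(0).random((8, 8, 3))
adv = shift_chroma(img)
# Luma is untouched, so the perturbation is hard for a human to notice.
print(np.max(np.abs(rgb_to_ycbcr(adv)[..., 0] - rgb_to_ycbcr(img)[..., 0])))
```

Because the luma channel is preserved exactly, the printed difference is at floating-point noise level.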
Digital Color Imaging
This paper surveys current technology and research in the area of digital
color imaging. In order to establish the background and lay down terminology,
fundamental concepts of color perception and measurement are first presented
using vector-space notation and terminology. Present-day color recording and
reproduction systems are reviewed along with the common mathematical models
used for representing these devices. Algorithms for processing color images for
display and communication are surveyed, and a forecast of research trends is
attempted. An extensive bibliography is provided.
A New Automatic Watercolour Painting Algorithm Based on Dual Stream Image Segmentation Model with Colour Space Estimation
Image processing plays a crucial role in automatic watercolor painting by manipulating the digital image to achieve the desired watercolor effect. Segmentation in automatic watercolor painting algorithms is essential for region-based processing, color mixing and blending, capturing brushwork and texture, and providing artistic control over the final result. It allows for more realistic and expressive watercolor-like paintings by processing different image regions individually and applying appropriate effects to each segment. Hence, this paper proposes an effective Dual Stream Exception Maximization (DSEM) model for automatic image segmentation. DSEM combines both color and texture information to segment an image into meaningful regions. The approach begins by converting the image from the RGB color space to a perceptually based color space, such as CIELAB, to account for variations in lighting conditions and human perception of color. After the color space conversion, DSEM extracts relevant features from the image. Color features are computed from the values of the color channels in the chosen color space, capturing the nuances of color distribution within the image. Simultaneously, texture features are derived by computing statistical measures such as local variance or co-occurrence matrices, capturing the textural characteristics of the image. Finally, the extracted features are passed to a deep learning model for classification of the color space in the painting. Simulation analysis is performed and compared with conventional segmentation techniques such as CNN and RNN. The comparative analysis shows that the proposed DSEM exhibits superior performance over conventional techniques in terms of color space estimation, texture analysis, and region merging. The classification performance of DSEM is ~12% higher than that of the conventional techniques.
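The front end of the pipeline this abstract describes, perceptual color space conversion plus per-pixel color and texture features, can be sketched as follows; the compact sRGB-to-CIELAB conversion and the 3x3 local-variance texture measure are illustrative choices, not DSEM's exact formulation:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Compact sRGB (D65 white) -> CIELAB conversion in pure NumPy."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T / np.array([0.95047, 1.0, 1.08883])  # normalize by D65
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def local_variance(channel, k=3):
    """Texture feature: variance in a k x k neighborhood (edge-padded)."""
    p = k // 2
    padded = np.pad(channel, p, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.var(axis=(-2, -1))

img = np.random.default_rng(1).random((16, 16, 3))
lab = srgb_to_lab(img)
# Three color features (L, a, b) plus one texture feature per pixel.
features = np.dstack([lab, local_variance(lab[..., 0])])
print(features.shape)  # (16, 16, 4)
```

A segmentation stage would then cluster or classify this per-pixel feature stack into regions.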
Camera characterization for improving color archaeological documentation
[EN] Determining the correct color is essential for proper cultural heritage documentation
and cataloging. However, the methodology used in most cases limits the results since
it is based either on perceptual procedures or on the application of color profiles in
digital processing software. The objective of this study is to establish a rigorous procedure,
from the colorimetric point of view, for the characterization of cameras,
following different polynomial models. Once the camera is characterized, users
obtain output images in the sRGB space, which is independent of the camera sensor.
In this article we report on pyColorimetry software that was developed and
tested taking into account the recommendations of the Commission Internationale de
l’Eclairage (CIE). This software allows users to control the entire digital image processing
and the colorimetric data workflow, including the rigorous processing of raw
data. We applied the methodology on a picture targeting Levantine rock art motifs in
Remigia Cave (Spain) that is considered part of a UNESCO World Heritage Site.
Three polynomial models were tested for the transformation between color spaces.
The outcomes obtained were satisfactory and promising, especially with RAW files.
The best results were obtained with a second-order polynomial model, achieving
residuals below three CIELAB units. We highlight several factors that must be taken
into account, such as the geometry of the shot and the light conditions, which are
determining factors for the correct characterization of a digital camera.
The authors gratefully acknowledge the support from the Spanish Ministerio de Economía y Competitividad to the project HAR2014-59873-R. The authors would also like to acknowledge the comments from colleagues at the Photogrammetry & Laser Scanning Research Group (GIFLE) and the fruitful discussions with archaeologist Dr. Esther López-Montalvo.
Molada-Tebar, A.; Lerma García, J. L.; Marqués-Mateu, Á. (2017). Camera characterization for improving color archaeological documentation. Color Research and Application, 43(1), 47-57. https://doi.org/10.1002/col.22152
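A polynomial characterization of the kind tested in this work reduces to a linear least-squares fit over polynomial terms of the camera RGBs; the ten-term second-order expansion and the synthetic patch data below are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def poly2_terms(rgb):
    """Second-order polynomial expansion of camera RGB values:
    [1, R, G, B, R^2, G^2, B^2, RG, RB, GB] - one common choice of terms."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * r, g * g, b * b, r * g, r * b, g * b])

rng = np.random.default_rng(0)
cam_rgb = rng.random((24, 3))          # e.g. 24 color-chart patches
true_M = rng.random((10, 3))           # synthetic ground truth for the demo
xyz = poly2_terms(cam_rgb) @ true_M    # "measured" XYZs (simulated here)

# Least-squares fit of the characterization model RGB -> XYZ.
coef, *_ = np.linalg.lstsq(poly2_terms(cam_rgb), xyz, rcond=None)
pred = poly2_terms(cam_rgb) @ coef
print(np.max(np.abs(pred - xyz)))      # residuals near zero on this data
```

In practice the fitted XYZs would be converted to sRGB for the sensor-independent output images the abstract mentions, and residuals would be assessed in CIELAB units.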
The Impact of Color Space and Intensity Normalization to Face Detection Performance
Human face detection has been widely studied and remains an active research topic. This study examines the impact of color space on face detection, including multi-face detection, using YIQ, YCbCr, HSV, HSL, CIELAB, and CIELUV. An intensity normalization method applied to one channel of each color space was developed, and the faces were tested using an Android-based implementation. The multi-face image datasets came from social media, mobile phones, and digital cameras. For the YCbCr color space, detection rates of 67.15%, 75.00%, and 64.58% were reached on the initial images before processing; after the normalization process, these increased to 83.21%, 87.12%, and 80.21%. The study thus showed that the YCbCr color space reached an improved detection percentage.
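The kind of per-channel processing described, RGB-to-YCbCr conversion followed by normalization of the intensity channel, can be sketched as below; the BT.601 coefficients and the min-max stretch are illustrative stand-ins for the paper's normalization method:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range conversion; rgb values in [0, 1]."""
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 0.5 + (rgb[..., 2] - y) * 0.564
    cr = 0.5 + (rgb[..., 0] - y) * 0.713
    return np.stack([y, cb, cr], axis=-1)

def normalize_intensity(ycc):
    """Min-max stretch of the luma channel only; chroma is left untouched
    so skin-tone thresholds on Cb/Cr stay valid."""
    y = ycc[..., 0]
    span = y.max() - y.min()
    out = ycc.copy()
    if span > 0:
        out[..., 0] = (y - y.min()) / span
    return out

img = np.random.default_rng(2).random((4, 4, 3)) * 0.5 + 0.25  # low-contrast
ycc = normalize_intensity(rgb_to_ycbcr(img))
print(ycc[..., 0].min(), ycc[..., 0].max())  # 0.0 1.0
```

Normalizing only the intensity channel is what lets detection thresholds defined on the chrominance channels carry over to images with different exposure.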
Virtual Cleaning of Works of Art Using Deep Learning Based Approaches
Virtual cleaning of art is a key process that conservators apply to preview the likely appearance of a work of art before physically cleaning it. There have been many different approaches to virtually cleaning artworks, but their shortcomings, such as the need to physically clean the artwork at a few specific places of specific colors, the need for pure black and white paint on the painting, and their low accuracy, prompted us to propose deep learning based approaches in this research. First we report the work we have done in this field focusing on color estimation for virtual cleaning, and then we describe our methods for spectral reflectance estimation of artwork in virtual cleaning. In the color estimation part, a deep convolutional neural network (CNN) and a deep generative network (DGN) are suggested, which estimate the RGB image of the cleaned artwork from an RGB image of the uncleaned artwork. Applying the networks to images of well-known artworks (such as the Mona Lisa and The Virgin and Child with Saint Anne) and the Macbeth ColorChecker, and comparing the results to the only physics-based model (the first model to approach virtual cleaning from a physics point of view, hence our reference for comparison), shows that our methods outperform that model and have great potential to be applied in real situations where little information is available on the painting and all we have is an RGB image of the uncleaned artwork. Nonetheless, the methods proposed in the first part cannot provide the spectral reflectance information of the artwork; therefore, the second part of the dissertation is proposed. This part focuses on spectral estimation for virtual cleaning of artwork. Two deep learning-based approaches are proposed here as well; the first one is a deep generative network.
This method receives a cube of the hyperspectral image of the uncleaned artwork and outputs another cube, the virtually cleaned hyperspectral image of the artwork. The second approach is a 1D Convolutional Autoencoder (1DCA), which is based on a 1D convolutional neural network and finds the spectra of the virtually cleaned artwork using the spectra of physically cleaned artworks and their corresponding uncleaned spectra. The approaches are applied to hyperspectral images of the Macbeth ColorChecker (simulated in cleaned and uncleaned forms) and the 'Haymakers' (real hyperspectral images of both cleaned and uncleaned states). The results, in terms of Euclidean distance and spectral angle between the virtually cleaned artwork and the physically cleaned one, show that the proposed approaches outperform the physics-based model, with the DGN outperforming the 1DCA. The methods proposed herein do not rely on first finding a specific type of paint and color on the painting, take advantage of the high accuracy offered by deep learning-based approaches, and are also applicable to other paintings.
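The two evaluation measures named here, Euclidean distance and spectral angle between reflectance spectra, are simple to compute directly; the 31-band sampling and the synthetic spectra below are assumptions for illustration:

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle (radians) between two reflectance spectra."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def euclidean(s1, s2):
    return np.linalg.norm(s1 - s2)

wl = np.linspace(400, 700, 31)            # hypothetical 31-band sampling, nm
clean = 0.5 + 0.3 * np.sin(wl / 60.0)     # stand-in "physically cleaned" spectrum
virtual = clean + 0.02                     # stand-in "virtually cleaned" estimate

print(euclidean(clean, virtual))
print(spectral_angle(clean, clean))        # near zero for identical spectra
```

The spectral angle is insensitive to uniform intensity scaling, while the Euclidean distance is not, which is why the two are usually reported together.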
The alternating least squares technique for nonuniform intensity color correction
Color correction involves mapping device RGBs to display counterparts or to corresponding XYZs. A popular methodology is to take an image of a color chart and then solve for the best 3 × 3 matrix that maps the RGBs to the corresponding known XYZs. However, this approach fails at times when the intensity of the light varies across the chart. This variation needs to be removed before estimating the correction matrix. This is typically achieved by acquiring an image of a uniform gray chart in the same location, and then dividing the color checker image by the gray-chart image. Of course, taking images of two charts doubles the complexity of color correction. In this article, we present an alternative color correction algorithm that simultaneously estimates the intensity variation and the 3 × 3 transformation matrix from a single image of a color chart. We show that the color correction problem, that is, finding the 3 × 3 correction matrix, can be solved using a simple alternating least-squares procedure. Experiments validate our approach. © 2014 Wiley Periodicals, Inc. Col Res Appl, 40, 232–242, 201
A Gaussian Process Model for Color Camera Characterization: Assessment in Outdoor Levantine Rock Art Scenes
[EN] In this paper, we propose a novel approach to undertake the colorimetric camera characterization procedure based on a Gaussian process (GP). GPs are powerful and flexible nonparametric models for multivariate nonlinear functions. To validate the GP model, we compare the results achieved with a second-order polynomial model, which is the most widely used regression model for characterization purposes. We applied the methodology on a set of raw images of rock art scenes collected with two different Single Lens Reflex (SLR) cameras. A leave-one-out cross-validation (LOOCV) procedure was used to assess the predictive performance of the models in terms of CIE XYZ residuals and ΔE*ab color differences. Values of less than 3 CIELAB units were achieved for ΔE*ab. The output sRGB characterized images show that both regression models are suitable for practical applications in cultural heritage documentation. However, the results show that colorimetric characterization based on the Gaussian process provides significantly better results, with lower values for residuals and ΔE*ab. We also analyzed the noise induced into the output image after applying the camera characterization. As the noise depends on the specific camera, proper camera selection is essential for the photogrammetric work.
This research is partly funded by the Research and Development Aid Program PAID-01-16 of the Universitat Politecnica de Valencia, through FPI-UPV-2016 Sub 1 grant.
Molada-Tebar, A.; Riutort-Mayol, G.; Marqués-Mateu, Á.; Lerma, J. L. (2019). A Gaussian Process Model for Color Camera Characterization: Assessment in Outdoor Levantine Rock Art Scenes. Sensors, 19(21), 1-22. https://doi.org/10.3390/s19214610
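A minimal GP regression with leave-one-out cross-validation can be sketched in a few lines; the RBF kernel, its hyperparameters, and the synthetic RGB-to-XYZ data below are assumptions for illustration (the paper's GP model and rock-art data are not reproduced):

```python
import numpy as np

def rbf_kernel(A, B, ls=0.3, var=1.0):
    """Squared-exponential (RBF) kernel between two point sets."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_predict(X, Y, Xs, noise=1e-4):
    """GP posterior mean: K(Xs, X) (K(X, X) + noise*I)^-1 Y."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    return rbf_kernel(Xs, X) @ np.linalg.solve(K, Y)

rng = np.random.default_rng(0)
X = rng.random((40, 3))                    # camera RGBs for training patches
Y = np.sin(3 * X) @ rng.random((3, 3))     # synthetic nonlinear "XYZ" targets

# Leave-one-out cross-validation of the GP characterization model.
residuals = []
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    pred = gp_predict(X[mask], Y[mask], X[i:i + 1])
    residuals.append(np.linalg.norm(pred - Y[i]))
print(f"mean LOOCV residual: {np.mean(residuals):.4f}")
```

The same LOOCV loop applies unchanged to the polynomial model, which is what makes the head-to-head comparison in the paper straightforward.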