A study of the relationship between the migration of image silver and perceived yellowing of silver gelatine photographs
Silver gelatine photographs were the dominant photographic process from the 1880s until the 1960s. They are prone to yellowing, mirroring and fading, which is largely attributed to the effects of pollutants, relative humidity and residual processing chemicals. Experts in the conservation of photographs claim they can determine the causes of deterioration with the naked eye: the effects of humidity result in a more yellowed appearance, whilst the presence of residual chemicals results in a redder appearance. This work investigates whether the same deterioration processes can be diagnosed in photographic prints with a spectrophotometer, by addressing two questions: (1) In new and artificially aged silver gelatine photographs, is it possible to distinguish between discolouration caused by silver migration and that caused by the presence of residual sulfur? (2) What are the complexities of applying these findings to historic photographs? A set of test photographs, some well processed and some insufficiently washed, was developed and artificially aged. These were compared to a small collection of historical photographs of different ages, paper types and image colours. Samples were assessed using visual observation, residual silver and hypo spot tests, colour measurements including L*a*b* values and reflectance spectra, Fourier transform infrared (FTIR) spectroscopy and transmission electron microscopy (TEM). After artificial ageing, the well processed test photographs were more yellowed; TEM indicated that this was due to colloidal silver formation. The insufficiently washed test photographs were redder but also darker; TEM showed these samples to have more homogeneous silver filaments, thought to be due to silver sulfide formation. The results for the historical photographs were similar but more subtle, and a larger sample set is needed to investigate this more extensively. Further investigation of historical samples, with colour measurements and residual silver and fixer spot tests, will take place.
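As a rough illustration of how such colour shifts can be quantified from L*a*b* measurements, the sketch below computes the CIE76 colour difference and the shifts along the b* (yellow-blue) and a* (red-green) axes; the values are hypothetical placeholders, not measurements from this study.

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colours (CIE76 colour difference)."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)))

# Hypothetical measurements of one print before and after artificial ageing.
before = (92.0, 0.5, 3.0)   # L*, a*, b*
after = (89.5, 1.0, 9.5)

print("delta E*ab:", round(delta_e_cie76(before, after), 2))
print("yellowing (delta b*):", after[2] - before[2])   # positive shift toward yellow
print("reddening (delta a*):", after[1] - before[1])   # positive shift toward red
```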
Photometric Compensation to Dynamic Surfaces in a Projector-Camera System
In this paper, a novel approach that allows colour-compensated projection on an arbitrary surface is presented. Assuming that the geometry of the surface is known, this method can be used in dynamic environments, where the surface color is not static. A simple calibration process is performed offline, and only a single input image under reference illumination is sufficient for the estimation of the compensation. The system can recover the reflectance of the surface pixel-wise and provide an accurate photometric compensation that minimizes the visibility of the projection surface. The color matching between the desired appearance of the projected image and the projection on the surface is performed in the device-independent color space CIE 1931 XYZ. The results of the evaluation confirm that this method provides a robust and accurate compensation even for surfaces with saturated colors and high spatial frequency patterns. This promising method can be the cornerstone of a real-time projector-camera system for dynamic scenes.
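The following is a minimal, hypothetical sketch of the core idea, assuming a purely per-pixel multiplicative reflectance model in CIE XYZ and omitting the projector/camera colour-mixing and calibration steps the paper describes.

```python
import numpy as np

def estimate_reflectance(captured_xyz, reference_xyz, eps=1e-6):
    """Per-pixel surface reflectance under a simple multiplicative model:
    captured = reflectance * reference (per channel, in CIE XYZ)."""
    return captured_xyz / np.maximum(reference_xyz, eps)

def compensate(desired_xyz, reflectance, eps=1e-6):
    """Projector input that, once multiplied by the surface reflectance,
    approximates the desired appearance; clipped to the projector range [0, 1]."""
    return np.clip(desired_xyz / np.maximum(reflectance, eps), 0.0, 1.0)

# Illustrative example with random data standing in for real captures.
h, w = 4, 4
reference = np.full((h, w, 3), 0.9)                              # capture of a full-white projection
captured = reference * np.random.uniform(0.3, 1.0, (h, w, 3))    # coloured, textured surface
desired = np.random.uniform(0.0, 0.8, (h, w, 3))                 # target appearance in XYZ

r = estimate_reflectance(captured, reference)
projector_input = compensate(desired, r)
```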
Multispectral Constancy Based on Spectral Adaptation Transform
The spectral reflectance of an object surface provides valuable information about its characteristics. Reflectance reconstruction from multispectral images is based on certain assumptions, one of which is that the same illumination is used for system calibration and image acquisition. We propose the novel concept of multispectral constancy, achieved through a spectral adaptation transform, which maps the sensor data acquired under an unknown illumination to a generic illuminant-independent space. The proposed concept and methods are inspired by the field of computational color constancy. Spectral reflectance is then estimated by using a generic linear calibration. Results of reflectance reconstruction using the proposed concept show that it is efficient, but highly sensitive to the accuracy of illuminant estimation.
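A simplified sketch of what such a pipeline could look like, assuming a diagonal (von Kries-like) adaptation in sensor space and a precomputed linear calibration matrix; the paper's actual spectral adaptation transform may take a different form.

```python
import numpy as np

def spectral_adaptation_transform(sensor_data, illum_est, illum_canonical):
    """Diagonal (von Kries-like) adaptation: scale each of the K channels so the
    data behave as if acquired under the canonical illuminant."""
    scale = illum_canonical / np.maximum(illum_est, 1e-9)   # shape (K,)
    return sensor_data * scale                              # broadcasts over pixels

def reconstruct_reflectance(adapted_data, calibration_matrix):
    """Generic linear reflectance estimation: reflectance ~= W @ sensor_response,
    with W learned once under the canonical illuminant."""
    return adapted_data @ calibration_matrix.T

# Toy example: K = 8 channels, N = 31 spectral bands, P = 100 pixels.
K, N, P = 8, 31, 100
W = np.random.rand(N, K)              # stands in for a trained calibration matrix
sensor = np.random.rand(P, K)         # responses under an unknown illuminant
e_unknown = np.random.rand(K) + 0.5   # estimated channel responses to that illuminant
e_canonical = np.ones(K)              # canonical reference illuminant responses

adapted = spectral_adaptation_transform(sensor, e_unknown, e_canonical)
reflectance = reconstruct_reflectance(adapted, W)   # shape (P, N)
```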
Color Calibration on Human Skin Images
Many recent medical developments rely on image analysis; however, it is neither convenient nor cost-efficient to use professional image acquisition tools in every clinic or laboratory. Hence, a reliable color calibration is necessary; color calibration refers to adjusting the pixel colors to a standard color space.
During a real-life project on neonatal jaundice detection, we faced the problem of performing skin color calibration on already-captured images of newborn babies. These images were taken with a smartphone (a Samsung Galaxy S7, equipped with a 12-megapixel camera producing 4032 × 3024 images) in the presence of a specific calibration pattern. Because the analysis was purely post-processing, we could not calibrate the camera itself. There is currently no comprehensive study on color calibration methods applied to human skin images, particularly when using amateur cameras (e.g. smartphones). We carried out such a study and proposed a novel approach to color calibration based on Gaussian process regression (GPR), a machine learning model that adapts to environmental variables. The results show that GPR achieves results equal to state-of-the-art color calibration techniques while producing more general models.
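As an illustration of the general idea (not the authors' exact pipeline), the sketch below fits a Gaussian process regression from the patch colours measured in an image to their known reference values and then applies the learned mapping to every pixel; all data here are random placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical data: RGB values of the calibration-pattern patches as measured in
# the photograph, and their known reference values in the target colour space.
measured_patches = np.random.rand(24, 3)                            # stand-in for real measurements
reference_patches = np.clip(measured_patches * 1.1 - 0.05, 0, 1)    # stand-in for chart references

# Fit a GPR mapping from observed patch colours to reference colours.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(measured_patches, reference_patches)

# Calibrate an image by applying the learned mapping to every pixel.
image = np.random.rand(64, 64, 3)
calibrated = gpr.predict(image.reshape(-1, 3)).reshape(image.shape)
calibrated = np.clip(calibrated, 0.0, 1.0)
```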
Computer-aided modelling of three-dimensional maxillofacial tissues through multi-modal imaging
Recent developments in digital imaging techniques have allowed the widespread adoption of three-dimensional methodologies that capture anatomical tissues by different approaches, such as cone-beam computed tomography, three-dimensional photography and surface scanning. In oral rehabilitation, an objective method to predict surgical and orthodontic outcomes should be based on anatomical data belonging to the soft facial tissue, facial skeleton and dentition (the maxillofacial triad). However, none of the available imaging techniques can accurately capture the complete triad. This article presents a multi-modal framework that fuses images from different digital techniques to create a three-dimensional virtual maxillofacial model integrating the photorealistic face, facial skeleton and dentition. The methodology is based on combining structured light surface scanning and cone-beam computed tomography data processing. The fusion procedure provides multi-modal representations by aligning the different tissues on the basis of common anatomical constraints.
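One common building block of such multi-modal fusion is rigid alignment of corresponding anatomical landmarks; the sketch below shows a minimal Kabsch-style least-squares alignment and is illustrative only, not the paper's actual registration procedure.

```python
import numpy as np

def rigid_align(source_pts, target_pts):
    """Least-squares rigid transform (rotation R, translation t) that maps
    source landmarks onto target landmarks (Kabsch algorithm)."""
    src_c = source_pts.mean(axis=0)
    tgt_c = target_pts.mean(axis=0)
    H = (source_pts - src_c).T @ (target_pts - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy example: align landmarks picked on a surface scan (source) with the
# corresponding landmarks on the CBCT-derived model (target).
scan_landmarks = np.random.rand(6, 3)
true_t = np.array([1.0, -2.0, 0.5])
cbct_landmarks = scan_landmarks + true_t          # target differs by a known shift

R, t = rigid_align(scan_landmarks, cbct_landmarks)
aligned = scan_landmarks @ R.T + t                # should match cbct_landmarks closely
```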
