Evaluation and improvement of the workflow of digital imaging of fine art reproduction in museums
Fine arts refers to a broad spectrum of art formats, i.e., painting, calligraphy, photography, architecture, and so forth. Fine art reproduction creates surrogates of the original artwork that faithfully deliver the aesthetics and feeling of the original. Traditionally, reproductions of fine art are made in the form of catalogs, postcards, or books by museums, libraries, archives, and so on (hereafter called museums for simplicity). With the widespread adoption of digital archiving in museums, more and more artwork is reproduced to be viewed on a display. For example, artwork collections are made available through museum websites and the Google Art Project for art lovers to view on their own displays. In this thesis, we study the fine art reproduction of paintings in the form of soft copy viewed on displays by answering four questions: (1) what is the impact of the viewing condition and the original on image quality evaluation? (2) can image quality be improved by avoiding visual editing in current workflows of fine art reproduction? (3) can lightweight spectral imaging be used for fine art reproduction? and (4) how do spectral reproductions perform compared with reproductions from current workflows? We started by evaluating the perceived image quality of fine art reproductions created by representative museums in the United States under controlled and uncontrolled environments, with and without the presence of the original artwork. The experimental results suggest that image quality is highly correlated with the color accuracy of the reproduction only when the original is present and the reproduction is evaluated on a characterized display. We then examined the workflows used to create these reproductions and found that current workflows rely heavily on visual editing and retouching (global and local color adjustments on the digital reproduction) to improve the color accuracy of the reproduction.
Visual editing and retouching can be both time-consuming and subjective in nature (depending on experts' own experience and understanding of the artwork), lowering the efficiency of artwork digitization considerably. We therefore propose to improve the workflow of fine art reproduction by (1) automating the process of visual editing and retouching in current workflows based on RGB acquisition systems and by (2) recovering the spectral reflectance of the painting with off-the-shelf equipment under commonly available lighting conditions. Finally, we compared the perceived image quality of reproductions created by current three-channel (RGB) workflows with those produced by spectral imaging and those based on an exemplar-based method
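As background on how spectral reflectance recovery from a small number of camera channels is typically posed, the sketch below fits a linear least-squares reconstruction matrix mapping multi-channel responses to 31-band reflectances. All data here are synthetic stand-ins (random sensitivities and randomly generated smooth reflectances), not the equipment, training targets, or method of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_channels, n_train = 31, 6, 200   # 400-700 nm at 10 nm; 6-channel camera

# Synthetic stand-ins for measured data: smooth random reflectances and a
# random camera sensitivity matrix (sensitivities combined with illuminant).
train_refl = np.clip(np.cumsum(rng.normal(0, 0.05, (n_train, n_bands)), axis=1) + 0.5, 0, 1)
camera = rng.uniform(0, 1, (n_channels, n_bands))
train_resp = train_refl @ camera.T          # simulated camera responses

# Learn R so that reflectance ~= response @ R (least-squares regression).
R, *_ = np.linalg.lstsq(train_resp, train_refl, rcond=None)

# Recover the spectrum of an unseen patch from its camera response.
true_refl = np.clip(np.cumsum(rng.normal(0, 0.05, n_bands)) + 0.5, 0, 1)
est_refl = (camera @ true_refl) @ R
rmse = float(np.sqrt(np.mean((est_refl - true_refl) ** 2)))
print(f"reconstruction RMSE: {rmse:.3f}")
```

Real workflows would train on measured targets (e.g., color charts) under the actual capture illuminant and often regularize the estimator; the linear-regression structure, however, is the common core.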
Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects
The realism of a scene depends essentially on the quality of the geometry, the illumination, and the materials that are used. Whereas many sources for the creation of three-dimensional geometry exist and numerous algorithms for the approximation of global illumination have been presented, the acquisition and rendering of realistic materials remains a challenging problem.
Realistic materials are very important in computer graphics because they describe the reflectance properties of surfaces, which arise from the interaction of light and matter. In the real world, an enormous diversity of materials can be found, exhibiting very different properties. One important objective in computer graphics is to understand these processes, to formalize them, and finally to simulate them.
For this purpose various analytical models already exist, but their parameterization remains difficult, as the number of parameters is usually very high. They also fail for very complex materials that occur in the real world. Measured materials, on the other hand, suffer from long acquisition times and huge input data sizes. Although very efficient statistical compression algorithms have been presented, most of them do not allow for editability, such as altering the diffuse color or mesostructure. In this thesis, a material representation is introduced that makes it possible to edit these features. This makes it possible to reuse the acquisition results in order to easily and quickly create variations of the original material. These variations may be subtle, but also substantial, allowing for a wide spectrum of material appearances.
The approach presented in this thesis is not based on compression, but on a decomposition of the surface into several materials with different reflection properties. Based on a microfacet model, the light-matter interaction is represented by a function that can be stored in an ordinary two-dimensional texture. Additionally, depth information, local rotations, and the diffuse color are stored in these textures. As a result of the decomposition, some of the original information is inevitably lost; therefore, an algorithm for the efficient simulation of subsurface scattering is presented as well.
Another contribution of this work is a novel perception-based simplification metric that includes the material of an object. This metric comprises features of the human visual system, for example trichromatic color perception or reduced resolution. The proposed metric allows for a more aggressive simplification in regions where geometric metrics would not simplify
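For context, the kind of analytic light-matter function referred to above can be illustrated with a standard isotropic microfacet specular term. This is a generic Cook-Torrance/GGX sketch, not the thesis' exact representation or parameterization:

```python
import numpy as np

def ggx_specular(n, l, v, roughness, f0=0.04):
    """Cook-Torrance specular: GGX distribution, Smith-Schlick shadowing,
    Schlick Fresnel. All vectors unit length; returns a scalar BRDF value."""
    h = (l + v) / np.linalg.norm(l + v)                     # half vector
    nh = max(float(n @ h), 1e-6)
    nl = max(float(n @ l), 1e-6)
    nv = max(float(n @ v), 1e-6)
    vh = max(float(v @ h), 1e-6)
    a2 = roughness ** 4                                     # alpha = roughness^2, squared
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)    # GGX normal distribution
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k))  # geometry/shadowing
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                   # Schlick Fresnel
    return d * g * f / (4.0 * nl * nv)

n = np.array([0.0, 0.0, 1.0])                               # surface normal
l = np.array([0.0, 0.6, 0.8])                               # light direction (unit)
v = np.array([0.0, -0.6, 0.8])                              # view direction (unit)
print(ggx_specular(n, l, v, roughness=0.3))
```

Because such a term depends only on a few angles, its tabulated values fit naturally into an ordinary two-dimensional texture, which is the storage strategy the abstract describes.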
Fusing spatial and temporal components for real-time depth data enhancement of dynamic scenes
The depth images from consumer depth cameras (e.g., structured-light/ToF devices) exhibit a substantial amount of artifacts (e.g., holes, flickering, ghosting) that need to be removed for real-world applications. Existing methods cannot entirely remove them and are slow. This thesis proposes a new real-time spatio-temporal depth-image enhancement filter that completely removes flickering and ghosting and significantly reduces holes. This thesis also presents a novel depth-data capture setup and two data reduction methods to optimize the performance of the proposed enhancement method
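A minimal sketch of the general spatio-temporal idea, not the proposed filter itself: zero-valued holes are filled from valid spatial neighbours, and the result is blended with the previously filtered frame to suppress temporal flicker:

```python
import numpy as np

def enhance_depth(frame, prev_filtered=None, alpha=0.7):
    """Fill zero-valued holes from valid 4-neighbours, then blend with the
    previously filtered frame (exponential temporal smoothing)."""
    d = frame.astype(float)
    pad = np.pad(d, 1, mode="edge")
    neigh = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]])
    valid = neigh > 0                                       # 0 marks missing depth
    count = valid.sum(axis=0)
    mean = (neigh * valid).sum(axis=0) / np.maximum(count, 1)
    filled = np.where((d == 0) & (count > 0), mean, d)      # spatial hole fill
    if prev_filtered is None:
        return filled
    return alpha * filled + (1.0 - alpha) * prev_filtered   # temporal smoothing

frame1 = np.array([[5, 5, 5], [5, 0, 5], [5, 5, 5]])        # centre pixel is a hole
out1 = enhance_depth(frame1)
out2 = enhance_depth(frame1, prev_filtered=out1)
print(out1[1, 1], out2[1, 1])
```

A naive temporal blend like this smears moving objects (ghosting); handling dynamic scenes without such artifacts is precisely what makes the problem addressed by the thesis hard.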
Similarity reasoning for local surface analysis and recognition
This thesis addresses the similarity assessment of digital shapes, contributing to the analysis of surface characteristics that are independent of the global shape but are crucial for identifying a model as belonging to the same manufacture, the same origin/culture, or the same typology (color, common decorations, common feature elements, compatible style elements, etc.). To face this problem, the interpretation of local surface properties is essential.
We go beyond the retrieval of models or surface patches in a collection of models, facing the recognition of geometric patterns across digital models with different overall shapes. To address this challenging problem, both engineered and learning-based descriptions are investigated, providing one of the first contributions towards the localization and identification of geometric patterns on digital surfaces. Finally, the recognition of patterns adds a further perspective to the exploration of (large) 3D data collections, especially in the cultural heritage domain.
Our work contributes to the definition of methods able to locally characterize the geometric and colorimetric surface decorations. Moreover, we showcase our benchmarking activity carried out in recent years on the identification of geometric features and the retrieval of digital models completely characterized by geometric or colorimetric patterns
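As a toy illustration of engineered-descriptor matching (synthetic scalar values stand in for real per-vertex surface measurements; these are not the thesis' descriptors): each local patch is summarised as a histogram of a scalar field, and patches are compared with a chi-square histogram distance, so small distances suggest the same decoration or pattern:

```python
import numpy as np

def descriptor(values, bins=8, rng=(0.0, 1.0)):
    """Normalised histogram of a per-vertex scalar (e.g., a curvature proxy)."""
    h, _ = np.histogram(values, bins=bins, range=rng, density=True)
    return h / max(h.sum(), 1e-12)

def chi2(p, q):
    """Chi-square distance between two histogram descriptors."""
    return 0.5 * np.sum((p - q) ** 2 / np.maximum(p + q, 1e-12))

gen = np.random.default_rng(0)
patch_a = gen.beta(2, 5, 500)        # two patches with the same "decoration"
patch_b = gen.beta(2, 5, 500)        # statistics as patch_a...
patch_c = gen.beta(5, 2, 500)        # ...and one with a different pattern
d_same = chi2(descriptor(patch_a), descriptor(patch_b))
d_diff = chi2(descriptor(patch_a), descriptor(patch_c))
print(d_same < d_diff)
```

Real geometric-pattern recognition uses far richer local descriptors over actual mesh neighbourhoods, but histogram-plus-distance comparison is the basic engineered-description template.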
Expanding Dimensionality in Cinema Color: Impacting Observer Metamerism through Multiprimary Display
Television and cinema displays are both trending towards greater ranges and saturation of reproduced colors, made possible by near-monochromatic RGB illumination technologies. Through current broadcast and digital cinema standards work, system designs employing laser light sources, narrow-band LEDs, quantum dots, and others are being actively endorsed in promotion of Wide Color Gamut (WCG). Despite the artistic benefits brought to creative content producers, spectrally selective excitation of naturally different human color response functions exacerbates the variability of observer experience. Such exaggerated variation in color sensing is explicitly counter to the exhaustive controls and calibrations employed in modern motion picture pipelines. Further, singular standard-observer summaries of human color vision, such as the CIE's 1931 and 1964 color matching functions used extensively in motion picture color management, are deficient in recognizing expected human vision variability. Many researchers have confirmed the magnitude of observer metamerism in color matching for both uniform colors and imagery, but few have shown explicit color management aimed at minimizing variability in observer perception. This research shows not only that observer metamerism influences can be quantitatively predicted and confirmed psychophysically, but also that intentionally engineered multiprimary displays employing more than three primaries can offer an increased color gamut with drastically improved consistency of experience. To this end, a seven-channel prototype display has been constructed based on observer metamerism models and color difference indices derived from the latest color vision demographic research.
This display has further been proven, in forced-choice paired-comparison tests across a large population of color-normal observers, to deliver superior color matching to reference stimuli versus both contemporary standard RGB cinema projection and recently ratified standard laser projection
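The underlying effect can be sketched numerically. Using synthetic colour-matching functions (made-up Gaussians, not real CIE or demographic data), the example below builds a three-primary narrow-band metamer of a broadband stimulus for one observer and shows that a second observer with slightly shifted sensitivities no longer sees a match:

```python
import numpy as np

wl = np.arange(400, 701, 5, dtype=float)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cmf_a = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)])  # observer A
cmf_b = np.stack([gauss(605, 40), gauss(555, 40), gauss(452, 30)])  # observer B, shifted

broad = gauss(550, 80)                                  # broadband stimulus
primaries = np.stack([gauss(610, 10), gauss(545, 10), gauss(455, 10)])  # narrow RGB

# Solve for primary weights so the narrow-band mix matches `broad` for A.
weights = np.linalg.solve(cmf_a @ primaries.T, cmf_a @ broad)
narrow = weights @ primaries

match_a = float(np.abs(cmf_a @ (broad - narrow)).max())     # ~0 by construction
mismatch_b = float(np.abs(cmf_b @ (broad - narrow)).max())  # nonzero: metameric failure
print(match_a, mismatch_b)
```

The narrower the primaries, the larger the inter-observer disagreement tends to be, which is why WCG laser and QD sources aggravate the problem and why adding extra primaries gives degrees of freedom to reduce it.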
Appearance-based image splitting for HDR display systems
High dynamic range displays that incorporate two optically coupled image planes have recently been developed. This dual-image-plane design requires that a given HDR input image be split into two complementary standard-dynamic-range components that drive the coupled systems, giving rise to an image-splitting problem. In this research, two types of HDR display systems (hardcopy and softcopy) are constructed to facilitate the study of HDR image splitting algorithms for building HDR displays. A new HDR image splitting algorithm that incorporates the iCAM06 image appearance model is proposed, seeking to create displayed HDR images with better image quality. The new algorithm has the potential to improve the perception of image detail, colorfulness, and gamut utilization. Finally, the performance of the new iCAM06-based HDR image splitting algorithm is evaluated and compared with the widely used luminance square-root algorithm through psychophysical studies
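For reference, the baseline luminance square-root split can be sketched in a few lines: a dual-plane display reconstructs luminance as the optical product of its two planes, so assigning each plane the square root of the target luminance reconstructs the input while each plane covers only the square root of the dynamic range. (Display response curves, colour handling, and the iCAM06-based split are beyond this sketch.)

```python
import numpy as np

hdr = np.array([[0.01, 1.0], [100.0, 10000.0]])   # target luminance (6-decade range)
back = np.sqrt(hdr)                                # backlight/base image plane
front = hdr / np.maximum(back, 1e-9)               # front/modulation image plane
recon = back * front                               # optical product = displayed image
print(back.max() / back.min())                     # each plane spans only sqrt(range)
```

Here a 10^6:1 input range reduces to 10^3:1 per plane, which is why the square-root split is the standard baseline that appearance-model-based algorithms are compared against.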
Holoscopic 3D image depth estimation and segmentation techniques
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue, and headaches after prolonged viewing, as users are required to focus on the screen plane (accommodation) while converging their eyes to a point in space on a different plane (convergence). Holoscopy is a 3D technology that aims to overcome the above limitations of current 3D technology and was recently developed at Brunel University. This work is part W4.1 of the 3D VIVANT project, funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques that are capable of estimating precise 3D depth and are applicable to holoscopic 3D imaging systems. Particular emphasis is given to automatic techniques, i.e., favouring algorithms with broad generalisation abilities, as no constraints are placed on the setting, and algorithms that are invariant to most appearance-based variation of objects in the scene (e.g., viewpoint changes, deformable objects, presence of noise, and changes in lighting). Moreover, the techniques should be able to estimate depth from both types of holoscopic 3D images, i.e., unidirectional and omnidirectional, which give horizontal parallax and full parallax (vertical and horizontal), respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with great precision.
In particular, emphasis is placed on the automation of thresholding techniques and the identification of cues for the development of robust algorithms. A depth-through-disparity feature analysis method has been built: the correlation between pixels at one micro-lens pitch is exploited to extract the viewpoint images (VPIs), and the corresponding displacement among the VPIs is exploited to estimate the depth map by setting and extracting reliable sets of local features. Feature-based-point and feature-based-edge are two novel automatic thresholding techniques for detecting and extracting features that have been used in this approach. These techniques offer a solution to the problem of setting and extracting reliable features automatically, improving the performance of depth estimation in terms of generalisation, speed, and quality. Due to the resolution limitation of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, sub-pixel shift and integration, a novel interpolation technique, has been used in this approach to generate super-resolution VPIs. By shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide field of view (FOV), which means that the holoscopic 3D image can be converted into a multi-view 3D image pixel format. Both depth accuracy and fast execution times have been achieved, improving the 3D depth map. For a 3D object to be recognised, the related foreground regions and depth map need to be identified. Two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were developed.
Both techniques improve on existing methods through their simplicity of use and full automation, producing the interactive 3D depth map without human interaction. The final contribution is a performance evaluation, providing an equitable measure of the success of the proposed techniques for foreground object segmentation, interactive 3D depth map creation, and the generation of 2D super-resolution viewpoints. No-reference image quality assessment metrics and their correlation with human perception of quality are used, with the help of human participants, in a subjective evaluation
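The depth-through-disparity principle can be illustrated with a toy one-dimensional block-matching example on plain stereo rows (the thesis operates on VPIs extracted from the micro-lens array; this shows only the generic idea that displacement between views encodes depth):

```python
import numpy as np

def disparity(left_row, right_row, x, half=2, max_d=5):
    """Return the shift d minimising the sum-of-squared-differences between
    a patch around x in the left row and the same patch shifted left by d
    in the right row. Disparity is inversely proportional to scene depth."""
    patch = left_row[x - half: x + half + 1]
    costs = [np.sum((patch - right_row[x - d - half: x - d + half + 1]) ** 2)
             for d in range(max_d + 1)]
    return int(np.argmin(costs))

row = np.array([0, 0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0], dtype=float)
left = row
right = np.roll(row, -3)                 # the same feature, shifted 3 px
print(disparity(left, right, x=7))       # -> 3
```

Reliable results in practice depend on choosing patches around strong features (points, edges), which is what the automatic thresholding techniques above are designed to provide.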
Perception-Aware Optimisation Methodologies for Quantum Dot Based Displays and Lighting
Human colour vision acuity is limited. This presents opportunities to leverage these perceptual limits to achieve engineering optimisations for devices and systems that interact with the human visual system. This dissertation presents the results of a few investigations we carried out into quantifying these limits, and several optimisation methodologies that we devised. The first step in this process is to quantify the acuity of human colour vision. We obtained a large corpus of colour matching data from a mobile video game called Specimen. We examine what questions about human vision this dataset allows us to answer and explore global statistics about colour vision based on data from 41,000 players in 175 countries. We show that we can use the information in this dataset to infer potential candidate functions for the spectral sensitivities of each person in the dataset. The human eye acts like a many-to-one function: quantifiably different spectra can look like the same colour. This is referred to as metamerism. From a device perspective, different spectra consume different amounts of energy to generate. We show that we can use these two properties to elicit the same colour sensation using less energy. For the colour samples we evaluated, we show that we can achieve up to 10 times lower power consumption while still achieving a colour match. Given that one cannot change the emission spectrum of a display after fabrication, we propose the use of a multi-primary colour display to achieve this. We present two indices for quantifying the metameric capacity of such a display and its ability to save energy. The emission spectrum of a quantum dot (QD) based device is very narrow. Previous work in the literature suggested that narrow-bandwidth spectra can lead to observer metameric breakdown: different observers disagreeing on the perceived 'colour' of a spectrum.
We show that this might not be the case using modern colour science tools, and show how metameric breakdown in a display can be minimised by carefully choosing the primary emission wavelengths. The limited colour acuity of human vision implies that people cannot notice small differences in colour. This fact has been used to create approximate colour transformation algorithms that subtly change colours in images such that they consume less energy when displayed on an emissive-pixel display, without causing unacceptable visual artefacts. We conducted a user study to gather information about the effect of one such colour transform, called Crayon. We present a method for effectively picking the optimal transform parameters for Crayon based on the user study results. The method calculates these parameters from the properties of the image being transformed, such that the power saving is maximised while the loss of image quality is minimised. The user study results show that we can achieve up to 50% power saving, with a majority of the study participants reporting negligible degradation in the quality of the transformed images. We additionally investigate a previously stated hypothesis that images with many highly luminous pixels cause increased power consumption in OLED displays due to localised display heating. We show that this hypothesis is wrong. We also investigate whether sub-pixel rendering in PenTile displays can be used to reduce display power consumption by intentionally turning off random sub-pixels. However, we present a negative result, showing that even single-pixel artefacts are observable on the test platform, and thus this approach cannot be used to improve display power efficiency. The narrow-band optical emissions of QD-based devices, combined with their ability to be fabricated through solution processing, make it possible to mix multiple QDs together to build devices that generate arbitrary spectral shapes.
We show how to use this property in a numerical-optimisation-based design framework to create lighting devices with a high colour rendering index (CRI). We evaluate the effects of different cost functions and initialisation strategies, and show that we are able to design devices with a CRI > 96 using only four different QD primaries. We use a charge-transport-based simulator to assess the electrical properties of the designed devices. We also showcase initial work on a modular software interface and a material library we developed for this simulator.
EPSRC DTP studentship award RG84040:EP/N509620/
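The energy-saving-metamer idea described earlier can be sketched with synthetic data (made-up observer functions, primaries, and per-primary power costs; none of these are the dissertation's models): with four primaries and three tristimulus constraints there is a one-parameter family of metamers, and scanning that null space finds the non-negative drive vector that matches the target colour with the least total power:

```python
import numpy as np

wl = np.arange(400, 701, 5, dtype=float)
g = lambda mu, s: np.exp(-0.5 * ((wl - mu) / s) ** 2)

cmf = np.stack([g(600, 40), g(550, 40), g(450, 30)])        # synthetic observer
prim = np.stack([g(620, 12), g(560, 12), g(520, 12), g(460, 12)])  # 4 primaries
power = np.array([1.0, 0.8, 1.5, 1.2])                      # energy per unit drive

A = cmf @ prim.T                                            # 3x4: drive -> tristimulus
target = A @ np.array([0.5, 0.5, 0.5, 0.5])                 # an achievable colour
w0, *_ = np.linalg.lstsq(A, target, rcond=None)             # one metamer (min-norm)
null = np.linalg.svd(A)[2][-1]                              # 1-D null space of A

ts = np.linspace(-3, 3, 3001)                               # scan the metamer family
cands = w0[None, :] + ts[:, None] * null[None, :]
feasible = cands[(cands >= 0).all(axis=1)]                  # non-negative drives only
best = feasible[np.argmin(feasible @ power)]                # cheapest metamer
print(float(best @ power))
```

Every candidate produces the same tristimulus response for this observer, so the colour sensation is unchanged while the drive energy differs; real designs would add device constraints and use the measured observer models.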