Virtual Cleaning of Works of Art Using Deep Learning Based Approaches
Virtual cleaning of art is a key process that conservators apply to preview the likely appearance of a work of art before it is physically cleaned. Many approaches to virtually cleaning artworks have been proposed, but their shortcomings, such as requiring the artwork to be physically cleaned at a few spots of specific colors, requiring pure black and white paint on the painting, and low accuracy, prompted us to propose deep learning based approaches in this research. First we report our work on color estimation for the virtual cleaning of artwork, and then we describe our methods for spectral reflectance estimation of artwork in virtual cleaning. In the color estimation part, a deep convolutional neural network (CNN) and a deep generative network (DGN) are proposed, which estimate the RGB image of the cleaned artwork from an RGB image of the uncleaned artwork. Applying the networks to images of well-known artworks (such as the Mona Lisa and The Virgin and Child with Saint Anne) and the Macbeth ColorChecker, and comparing the results to the only physics-based model (the first model to approach virtual cleaning from a physics point of view, hence our reference for comparison), shows that our methods outperform that model and have great potential to be applied in real situations in which little information on the painting is available and all we have is an RGB image of the uncleaned artwork. Nonetheless, the methods proposed in the first part cannot provide the spectral reflectance information of the artwork; therefore, the second part of the dissertation focuses on spectral estimation for the virtual cleaning of artwork. Two deep learning-based approaches are proposed here as well; the first is a deep generative network.
This method receives a cube of the hyperspectral image of the uncleaned artwork and outputs another cube, the virtually cleaned hyperspectral image of the artwork. The second approach is a 1D Convolutional Autoencoder (1DCA), which is based on a 1D convolutional neural network and finds the spectra of the virtually cleaned artwork using the spectra of physically cleaned artworks and their corresponding uncleaned spectra. The approaches are applied to hyperspectral images of the Macbeth ColorChecker (simulated in cleaned and uncleaned forms) and the 'Haymakers' (real hyperspectral images of both the cleaned and uncleaned states). The results, in terms of Euclidean distance and spectral angle between the virtually cleaned artwork and the physically cleaned one, show that the proposed approaches outperform the physics-based model, with the DGN outperforming the 1DCA. The methods proposed herein do not rely on first finding a specific type of paint and color on the painting, take advantage of the high accuracy offered by deep learning-based approaches, and are also applicable to other paintings.
Craquelure as a Graph: Application of Image Processing and Graph Neural Networks to the Description of Fracture Patterns
Cracks on a painting are not a defect but an inimitable signature of an
artwork which can be used for origin examination, aging monitoring, damage
identification, and even forgery detection. This work presents the development
of a new methodology and corresponding toolbox for the extraction and
characterization of information from an image of a craquelure pattern.
The proposed approach processes the craquelure network as a graph. The graph
representation captures the network structure via mutual organization of
junctions and fractures. Furthermore, it is invariant to any geometrical
distortions. At the same time, our tool extracts the properties of each node
and edge individually, which allows the pattern to be characterized statistically.
We illustrate the benefits of the graph representation and the statistical
features individually, using a novel Graph Neural Network and hand-crafted
descriptors, respectively. However, we also show that the best performance is achieved
when both techniques are merged into one framework. We perform experiments on
the dataset for paintings' origin classification and demonstrate that our
approach outperforms existing techniques by a large margin.
Comment: Published in ICCV 2019 Workshop
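The abstract's core idea, a craquelure pattern as a graph of junctions (nodes) and cracks (edges) with per-node and per-edge features, can be sketched in plain Python. The toy pattern and feature choices below (junction degree, crack length) are illustrative assumptions, not the paper's actual descriptors:

```python
from collections import defaultdict
from math import dist

# Hypothetical toy craquelure pattern: junctions with 2-D coordinates,
# cracks as edges between them.
junctions = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (2.0, 0.0)}
cracks = [(0, 1), (1, 2), (1, 3)]

# Adjacency list: the mutual organization of junctions and fractures.
adjacency = defaultdict(set)
for a, b in cracks:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Per-node feature: junction degree (how many cracks meet there).
degrees = {n: len(adjacency[n]) for n in junctions}

# Per-edge feature: straight-line crack length, invariant to translation
# and rotation of the whole pattern.
lengths = [dist(junctions[a], junctions[b]) for a, b in cracks]

print(degrees)                      # {0: 1, 1: 3, 2: 1, 3: 1}
print(sum(lengths) / len(lengths))  # mean crack length: 1.0
```

Such a representation keeps the connectivity structure (usable by a Graph Neural Network) while the node and edge statistics supply the hand-crafted descriptors the abstract mentions.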
Experimental and Creative Approaches to Collecting and Distributing New Media Art within Regional Arts Organisations
This article is an overview of preliminary research undertaken for the creation of a framework for collecting and distributing new media art within regional art galleries in the U.K. From the 1960s, artists have experimented with computers and software as production tools to create artworks ranging from static, algorithmic drawings on paper to installations with complex, interactive and process-oriented behaviours. The art-form has evolved into multiple strands of production, presentation and distribution. But are we, as collectors, researchers, artists and enthusiasts, facing an uncertain future concerning the integration of new media art into institutional cultural organisations? Recently, concerns have been raised by curators regarding the importance of learning how to collect new media art if there is to be any hope of preserving the artworks as well as their histories. Traditional collections management approaches must evolve to take into account the variable characteristics of new media artworks. As I will discuss in this article, although the often unpredictable and unstable behaviours of new media artworks are regarded as a barrier to collecting them, artists and curators at individual institutions have recently taken steps, through collaboration and experimentation, to tackle the curatorial and collections management activities these behaviours demand. This method has proved successful at some mainstream, university and municipal galleries prior to acquiring or commissioning new artworks into their collections. This paper argues that through collaboration, experimentation and the sharing of knowledge and resources, these concerns may be overcome, so that new media art is preserved and made accessible for future generations to enjoy rather than lamented over its disappearance.
Controllable Multi-domain Semantic Artwork Synthesis
We present a novel framework for multi-domain synthesis of artwork from
semantic layouts. One of the main limitations of this challenging task is the
lack of publicly available segmentation datasets for art synthesis. To address
this problem, we propose a dataset, which we call ArtSem, that contains 40,000
images of artwork from 4 different domains with their corresponding semantic
label maps. We generate the dataset by first extracting semantic maps from
landscape photography and then propose a conditional Generative Adversarial
Network (GAN)-based approach to generate high-quality artwork from the semantic
maps without necessitating paired training data. Furthermore, we propose an
artwork synthesis model that uses domain-dependent variational encoders for
high-quality multi-domain synthesis. The model is improved and complemented
with a simple but effective normalization method, based on jointly normalizing
the semantic and style information, which we call Spatially STyle-Adaptive
Normalization (SSTAN). In contrast to previous methods that only take semantic
layout as input, our model is able to learn a joint representation of both
style and semantic information, which leads to better generation quality for
synthesizing artistic images. Results indicate that our model learns to
separate the domains in the latent space, and thus, by identifying the
hyperplanes that separate the different domains, we can also perform
fine-grained control of the synthesized artwork. By combining our proposed
dataset and approach, we are able to generate user-controllable artwork that is
of higher quality than existing approaches.
Comment: 15 pages, accepted by CVMJ, to appear
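The abstract does not spell out SSTAN's internals, but the general pattern behind spatially adaptive normalization layers (as in SPADE-style conditioning) is to normalize a feature map and then modulate it with spatially varying scale and shift maps. The sketch below is a generic NumPy illustration of that pattern under my own assumptions; in SSTAN the modulation maps would be predicted jointly from the semantic layout and the style code, whereas here they are random placeholders:

```python
import numpy as np

def spatially_adaptive_norm(feat, gamma, beta, eps=1e-5):
    """Normalize a (C, H, W) feature map per channel, then modulate it
    with spatially varying scale (gamma) and shift (beta) maps of the
    same shape -- the general pattern behind SPADE-style layers."""
    mean = feat.mean(axis=(1, 2), keepdims=True)
    std = feat.std(axis=(1, 2), keepdims=True)
    normalized = (feat - mean) / (std + eps)
    return gamma * normalized + beta

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))   # toy feature map: 8 channels, 4x4
# Placeholder modulation maps; a real layer would predict these from
# the semantic map and style code.
gamma = rng.normal(size=(8, 4, 4))
beta = rng.normal(size=(8, 4, 4))
out = spatially_adaptive_norm(feat, gamma, beta)
print(out.shape)  # (8, 4, 4)
```

Because gamma and beta vary per spatial location, the layer can apply different styles to different semantic regions, which is what enables the joint semantic-and-style conditioning the abstract describes.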
Learning scale-variant and scale-invariant features for deep image classification
Convolutional Neural Networks (CNNs) require large image corpora to be
trained on classification tasks. The variation in image resolutions, sizes of
objects and patterns depicted, and image scales, hampers CNN training and
performance, because the task-relevant information varies over spatial scales.
Previous work attempting to deal with such scale variations focused on
encouraging scale-invariant CNN representations. However, scale-invariant
representations are incomplete representations of images, because images
contain scale-variant information as well. This paper addresses the combined
development of scale-invariant and scale-variant representations. We propose a
multi-scale CNN method to encourage the recognition of both types of features
and evaluate it on a challenging image classification task involving
task-relevant characteristics at multiple scales. The results show that our
multi-scale CNN outperforms a single-scale CNN. This leads to the conclusion that
encouraging the combined development of a scale-invariant and scale-variant
representation in CNNs is beneficial to image recognition performance.
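A common way to expose a CNN to multiple spatial scales, which is the setting this abstract addresses, is to feed it an image pyramid of successively downsampled inputs. The sketch below builds such a pyramid with 2x2 average pooling in NumPy; it illustrates the multi-resolution input idea only, not the paper's specific multi-scale architecture:

```python
import numpy as np

def downsample2x(img):
    """Halve spatial resolution by 2x2 average pooling (H and W even)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def image_pyramid(img, levels):
    """Return the image at `levels` successively halved scales, the kind
    of multi-resolution input a multi-scale CNN branch would consume."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample2x(pyramid[-1]))
    return pyramid

img = np.arange(64, dtype=float).reshape(8, 8)
scales = image_pyramid(img, levels=3)
print([s.shape for s in scales])  # [(8, 8), (4, 4), (2, 2)]
```

Each pyramid level preserves coarse (scale-invariant) structure while dropping fine (scale-variant) detail, so branches operating on different levels can specialize in the two kinds of features the abstract distinguishes.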