8 research outputs found

    Image Processing for Art Investigation

    Get PDF
    Recent advances in digital image acquisition methods and the wide range of imaging modalities currently available have prompted museums to digitize their painting collections. Not only is this crucial for archival and dissemination purposes, it also enables the digital analysis of a painting through its digital image counterpart. It has also set in motion a cross-disciplinary collaboration between image analysis specialists, mathematicians, statisticians and art historians who share the common goal of developing algorithms and building a digital toolbox in support of art scholarship. Computer processing of digital images of paintings has become a fast-growing and challenging field of research in recent years. Our contribution to this research domain consists of a set of tools based on dimensionality reduction methods, sparse representations and dictionary learning techniques. These tools are used to assist in art-related matters such as restoration, conservation, art history, material and structure characterization, authentication, dating and even style analysis. Since paintings are complex structures, analysing all pictorial layers and the support requires a multimodal set of high-resolution image acquisitions. The presented research can broadly be subdivided into three main fields. The first is the digital enhancement of painting acquisitions to assist the art specialist in the professional assessment of a painting. The second is the automated detection of cracks within the Ghent Altarpiece, which is meant to help in the delicate matter of conserving this exceptional masterpiece and to serve as guidance during its current restoration campaign. The last field consists of a set of methods that can be deployed in art forensics: the characterization of canvas, the analysis of multispectral imagery of a painting and even the objective quantification of the style of a particular artist.
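
    As a rough illustration of the sparse-representation side of such a toolbox (a minimal sketch, not the author's code), the following Python snippet learns a patch dictionary from a painting scan with scikit-learn; the file name and all parameter values are illustrative assumptions.

    import numpy as np
    from skimage import io
    from sklearn.feature_extraction.image import extract_patches_2d
    from sklearn.decomposition import MiniBatchDictionaryLearning

    # Hypothetical high-resolution scan of (a detail of) a painting.
    img = io.imread("painting_scan.png", as_gray=True)
    patches = extract_patches_2d(img, (8, 8), max_patches=20000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)          # remove local brightness per patch

    # Learn an overcomplete dictionary; every patch is then approximated by a
    # sparse combination of the learned atoms.
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                       batch_size=256, random_state=0)
    codes = dico.fit_transform(X)               # sparse code of each patch
    atoms = dico.components_                    # learned dictionary atoms
    print(atoms.shape, (codes != 0).mean())     # 128 atoms, average code sparsity

    Atoms and codes learned in this way can then serve as generic features for downstream tasks of the kind listed in the abstract, such as crack detection or canvas characterization.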

    Craquelure as a Graph: Application of Image Processing and Graph Neural Networks to the Description of Fracture Patterns

    Full text link
    Cracks on a painting are not a defect but an inimitable signature of the artwork, which can be used for origin examination, aging monitoring, damage identification, and even forgery detection. This work presents a new methodology and corresponding toolbox for extracting and characterizing information from an image of a craquelure pattern. The proposed approach processes the craquelure network as a graph. The graph representation captures the network structure via the mutual organization of junctions and fractures, and it is invariant to geometrical distortions. At the same time, our tool extracts the properties of each node and edge individually, which allows the pattern to be characterized statistically. We illustrate the benefits of the graph representation and of the statistical features separately, using a novel Graph Neural Network and hand-crafted descriptors respectively, but we also show that the best performance is achieved when both techniques are merged into one framework. We perform experiments on a dataset for paintings' origin classification and demonstrate that our approach outperforms existing techniques by a large margin. (Published in an ICCV 2019 Workshop.)
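
    A minimal sketch of this graph representation (not the paper's toolbox): a binary crack map is skeletonized, junctions and endpoints become nodes, and the crack segments connecting them become edges. The mask file and the neighbour-counting heuristics are illustrative assumptions.

    import numpy as np
    import networkx as nx
    from scipy import ndimage as ndi
    from skimage.morphology import skeletonize

    mask = np.load("crack_mask.npy") > 0          # hypothetical binary crack map
    skel = skeletonize(mask)

    # Count the 8-neighbours of every skeleton pixel.
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    nbrs = ndi.convolve(skel.astype(int), kernel, mode="constant")
    nodes_mask = skel & (nbrs != 2)               # junctions (>=3) and endpoints (==1)

    # Crack segments are the skeleton with the node pixels removed.
    segments, n_seg = ndi.label(skel & ~nodes_mask, structure=np.ones((3, 3)))
    seg_len = ndi.sum(skel, segments, index=range(1, n_seg + 1))

    G = nx.Graph()
    node_ids = {tuple(p): i for i, p in enumerate(np.argwhere(nodes_mask))}
    G.add_nodes_from((i, {"pos": p}) for p, i in node_ids.items())

    # Connect two nodes whenever they touch the same crack segment.
    touching = {}                                  # segment label -> node ids
    for (r, c), i in node_ids.items():
        for lab in np.unique(segments[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]):
            if lab:
                touching.setdefault(lab, set()).add(i)
    for lab, ids in touching.items():
        for a in ids:
            for b in ids:
                if a < b:
                    G.add_edge(a, b, length=float(seg_len[lab - 1]))
    print(G.number_of_nodes(), G.number_of_edges())

    Node and edge attributes extracted this way can feed both hand-crafted statistical descriptors and a graph neural network, as described in the abstract.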

    Image Processing for Art Investigation

    Get PDF
    Advisors: Ann Dooms, Ingrid Daubechies. Date and location of PhD thesis defense: 13 October 2014, Vrije Universiteit Brussel. The abstract is identical to that of the first entry above.

    Crack detection in paintings using convolutional neural networks

    Get PDF
    The accurate detection of cracks in paintings, which generally portray rich and varying content, is a challenging task. Traditional crack detection methods often fall short on recent acquisitions of paintings, as they are poorly adapted to high resolutions and do not exploit the other imaging modalities that are often at hand. Furthermore, many paintings portray a complex or cluttered composition, which significantly complicates a precise detection of cracks when only photographic material is used. In this paper, we propose a fast crack detection algorithm based on deep convolutional neural networks (CNNs) that is capable of combining several imaging modalities, such as regular photographs, infrared photography and X-ray images. Moreover, we propose an efficient solution to improve the CNN-based localization of the actual crack boundaries, and we extend the CNN architecture so that areas where it makes little sense to run expensive learning models are ignored. This allows us to process large-resolution scans of paintings more efficiently. The proposed on-line method is capable of continuously learning from newly acquired visual data, further improving classification results as more data becomes available. A case study on multimodal acquisitions of the Ghent Altarpiece, taken during the ongoing conservation-restoration treatment, shows improvements over state-of-the-art crack detection methods and demonstrates the potential of the proposed method in assisting art conservators.
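
    As a hedged sketch of this kind of multimodal patch classifier (not the authors' exact network), the following PyTorch snippet stacks co-registered modalities as input channels and predicts crack versus non-crack for the centre of each patch; the channel count, patch size and layer widths are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class MultimodalCrackCNN(nn.Module):
        def __init__(self, in_channels=5):        # e.g. RGB (3) + infrared (1) + X-ray (1)
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, 2)     # crack / no crack

        def forward(self, x):                      # x: (batch, channels, 32, 32) patches
            return self.classifier(self.features(x).flatten(1))

    model = MultimodalCrackCNN()
    patches = torch.randn(8, 5, 32, 32)            # dummy multimodal patches
    logits = model(patches)
    print(logits.shape)                            # torch.Size([8, 2])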

    Learning visual representations of style

    Get PDF
    Learning Visual Representations of Style. By Nanne van Noord. An artist's style is visible in his or her work: independent of the form or subject of an artwork, art experts can recognize that style. Whether it concerns a landscape or a portrait, the connoisseurship of art experts enables them to recognize the artist's style. Translating this connoisseurship to a computer, so that the computer is able to recognize the style of an artist and to (re)produce artworks in that style, is the central aim of this research. For the visual analysis of artworks, computers rely on image processing techniques. Traditionally, these techniques consist of algorithms designed by computer scientists to detect predefined visual features. Because these features were developed for analysing the content of photographs, they are of limited use for analysing the style of visual art. Moreover, there is no definitive answer as to which visual features are indicative of style. To overcome these limitations, this research uses Deep Learning, a methodology that has revolutionized the field of image processing in recent years. The power of Deep Learning stems from its capacity for self-learning: instead of depending on predefined features, the computer can learn the appropriate features itself. In this research we developed algorithms that enable the computer to 1) learn by itself to recognize the style of an artist, and 2) generate new images in the style of an artist. Based on the work presented in this thesis, we can conclude that the computer is indeed able to learn to recognize an artist's style, even in a challenging setting with thousands of artworks and several hundred artists. We can also conclude that it is possible, starting from existing artworks, to generate new artworks in the style of the artist: a colourless image of an artwork can be colourized in the style of the artist, and when parts of an artwork are missing it is possible to fill in (retouch) those missing pieces. Although we are not yet able to generate entirely new artworks, this research is a large step in that direction. Moreover, the techniques and methods developed in this research are promising as digital tools to support art experts and restorers.
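
    A minimal sketch of the first goal, learning to recognize an artist's style directly from images (not the code from the thesis): a pretrained convolutional network is fine-tuned for artist attribution with PyTorch/torchvision. The dataset layout, class count and hyperparameters are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    n_artists = 200                                 # hypothetical number of artists
    tfm = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    ])
    data = datasets.ImageFolder("artworks/", transform=tfm)   # one folder per artist
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, n_artists)     # replace classifier head
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:                              # one illustrative epoch
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()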

    Nonparametric Bayes for Big Data

    Get PDF
    Classical asymptotic theory deals with models in which the sample size n goes to infinity while the number of parameters p is fixed. However, rapid advancement of technology has empowered today's scientists to collect a huge number of explanatory variables to predict a response. Many modern applications in science and engineering belong to the "big data" regime in which both p and n may be very large; a variety of genomic applications even have p substantially greater than n. With the advent of MCMC, Bayesian approaches exploded in popularity, and Bayesian inference often allows easier interpretability than frequentist inference. It therefore becomes important to understand and evaluate Bayesian procedures for "big data" from a frequentist perspective.

    In this dissertation, we address a number of questions related to solving large-scale statistical problems via Bayesian nonparametric methods.

    It is well known that classical estimators can be inconsistent in the high-dimensional regime without any constraints on the model, so imposing additional low-dimensional structure on the high-dimensional ambient space becomes inevitable. In the first two chapters of the thesis, we study the prediction performance of high-dimensional nonparametric regression from a minimax point of view. We consider two different low-dimensional constraints: 1. the response depends only on a small subset of the covariates; 2. the covariates lie on a low-dimensional manifold in the original high-dimensional ambient space. We also provide Bayesian nonparametric methods based on Gaussian process priors that are shown to be adaptive to unknown smoothness or low-dimensional manifold structure by attaining minimax convergence rates up to log factors. In chapter 3, we consider high-dimensional classification problems where all data are of a categorical nature, and we build a parsimonious model based on Bayesian tensor factorization for classification while doing inference on the important predictors.

    It is generally believed that ensemble approaches, which combine multiple algorithms or models, can outperform any single algorithm at machine learning tasks such as prediction. In chapter 5, we propose Bayesian convex and linear aggregation approaches motivated by regression applications. We show that the proposed approach is minimax optimal when the true data-generating model is a convex or linear combination of models in the list. Moreover, the method can adapt to sparsity structure in which certain models should receive zero weights, and it is tuning-parameter free, unlike competitors. More generally, under an M-open view in which the truth falls outside the space of all convex/linear combinations, our theory suggests that the posterior measure tends to concentrate on the best approximation of the truth at the minimax rate.

    Chapter 6 is devoted to sequential Markov chain Monte Carlo algorithms for Bayesian on-line learning of big data. The last chapter attempts to justify the use of the posterior distribution to conduct statistical inference for semiparametric estimation problems (the semiparametric Bernstein-von Mises theorem) from a frequentist perspective.
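
    For orientation (an illustrative gloss, not a formula quoted from the dissertation): the adaptivity claims in the first two chapters refer to rates of the standard form in which an s-smooth regression function that effectively depends only on a d-dimensional structure (a small subset of d covariates, or a d-dimensional manifold) inside the p-dimensional ambient space can be estimated at a squared-L2 risk governed by the intrinsic dimension d rather than by p,

        \[
          \inf_{\hat f}\ \sup_{f}\ \mathbb{E}\,\lVert \hat f - f \rVert_2^2 \;\asymp\; n^{-2s/(2s+d)},
        \]

    up to logarithmic factors; suitably rescaled Gaussian process priors are known to attain such rates adaptively, i.e. without knowledge of s or d.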