
    An oil painters recognition method based on cluster multiple kernel learning algorithm

    Most image processing research focuses on natural images, for tasks such as classification and clustering; work on the recognition of artworks such as oil paintings, from feature extraction to classifier design, is comparatively scarce. This paper focuses on oil painter recognition and explores a mobile application for identifying the painter of a work. It proposes a cluster multiple kernel learning algorithm that extracts oil painting features from three aspects (color, texture, and spatial layout) and generates multiple candidate kernels with different kernel functions. By clustering the numerous candidate kernels, we select the sub-kernels with better classification performance and then use the traditional multiple kernel learning algorithm to carry out multi-feature fusion classification. The algorithm achieves better results on the Painting91 dataset than applying traditional multiple kernel learning directly.
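
    The abstract describes the pipeline only at a high level, so the following is a minimal, hypothetical sketch of the candidate-kernel clustering idea, not the authors' implementation: many candidate kernels are generated, clustered by their mutual alignment, the best-scoring kernel per cluster is kept, and the survivors are fused with uniform weights as a simple stand-in for the final multiple kernel learning step. The synthetic data, kernel choices, and scoring routine are all illustrative assumptions.

        # Candidate-kernel clustering sketch (hypothetical, synthetic data).
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.metrics.pairwise import pairwise_kernels
        from sklearn.cluster import AgglomerativeClustering
        from sklearn.model_selection import KFold
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=40, random_state=0)

        # Candidate kernels: RBF at several bandwidths plus linear and polynomial.
        candidates = [pairwise_kernels(X, metric="rbf", gamma=g) for g in (0.001, 0.01, 0.1, 1.0)]
        candidates += [pairwise_kernels(X, metric="linear"),
                       pairwise_kernels(X, metric="poly", degree=2)]

        def alignment(K1, K2):
            """Similarity between two kernel matrices (cosine of their flattened entries)."""
            return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

        def kernel_cv_score(K, y, n_splits=3):
            """Cross-validated SVM accuracy for a precomputed kernel matrix."""
            scores = []
            for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(K):
                clf = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
                scores.append(clf.score(K[np.ix_(te, tr)], y[te]))
            return float(np.mean(scores))

        # Cluster the candidate kernels by their mutual (dis)similarity.
        n = len(candidates)
        dist = np.array([[1.0 - alignment(candidates[i], candidates[j]) for j in range(n)]
                         for i in range(n)])
        labels = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                         linkage="average").fit_predict(dist)

        # Keep the best-scoring kernel from each cluster, then fuse with uniform weights.
        selected = [max((i for i in range(n) if labels[i] == c),
                        key=lambda i: kernel_cv_score(candidates[i], y))
                    for c in np.unique(labels)]
        K_fused = sum(candidates[i] for i in selected) / len(selected)
        print("fused-kernel CV accuracy: %.3f" % kernel_cv_score(K_fused, y))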

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. Reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.

    Advanced Probabilistic Models for Clustering and Projection

    Probabilistic modeling for data mining and machine learning problems is a fundamental research area. The general approach is to assume a generative model underlying the observed data and to estimate the model parameters via likelihood maximization. It is grounded in probability theory and draws on a wide range of methods from statistical learning, sampling theory, and Bayesian statistics. In this thesis we study several advanced probabilistic models for data clustering and feature projection, two important unsupervised learning problems.

    The goal of clustering is to group similar data points together to uncover the data clusters. While numerous methods exist for various clustering tasks, one important question remains: how to automatically determine the number of clusters. The first part of the thesis answers this question from a mixture modeling perspective. A finite mixture model is first introduced for clustering, in which each mixture component is assumed to be an exponential family distribution for generality. The model is then extended to an infinite mixture model, and its strong connection to the Dirichlet process (DP), a non-parametric Bayesian framework, is uncovered. A variational Bayesian algorithm called VBDMA is derived from this new insight to learn the number of clusters automatically, and empirical studies on several 2D data sets and an image data set verify the effectiveness of this algorithm.

    In feature projection, we are interested in dimensionality reduction and aim to find a low-dimensional feature representation for the data. We first review the well-known principal component analysis (PCA) and its probabilistic interpretation (PPCA), and then generalize PPCA to a novel probabilistic model that handles the non-linear projection known as kernel PCA. An expectation-maximization (EM) algorithm is derived for kernel PCA such that it is fast and applicable to large data sets. We then propose a novel supervised projection method called MORP, which takes the output information into account in a supervised learning context. Empirical studies on various data sets show much better results compared to unsupervised projection and other supervised projection methods. Finally, we generalize MORP probabilistically to propose SPPCA for supervised projection and naturally extend the model to S2PPCA, a semi-supervised projection method, which allows us to incorporate both the label information and the unlabeled data into the projection process.

    In the third part of the thesis, we introduce a unified probabilistic model which handles data clustering and feature projection jointly. The model can be viewed as a clustering model with projected features and as a projection model with structured documents. A variational Bayesian learning algorithm is derived, and it turns out to iterate the clustering operations and projection operations until convergence. Superior performance is obtained for both clustering and projection.
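
    The thesis's VBDMA algorithm is not spelled out in this abstract, but the idea of letting a variational Bayesian mixture infer the number of clusters can be illustrated with an off-the-shelf Dirichlet-process mixture from scikit-learn. The sketch below is in that spirit only and uses synthetic data; it is not the method from the thesis.

        # Variational Dirichlet-process mixture: over-specify the number of components
        # and let the posterior weights switch off the ones that are not needed.
        import numpy as np
        from sklearn.datasets import make_blobs
        from sklearn.mixture import BayesianGaussianMixture

        X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

        dpgmm = BayesianGaussianMixture(
            n_components=15,                        # deliberate over-estimate of the cluster count
            weight_concentration_prior_type="dirichlet_process",
            weight_concentration_prior=0.1,         # small prior favours fewer active components
            max_iter=500,
            random_state=0,
        ).fit(X)

        # Components with non-negligible posterior weight are the "discovered" clusters.
        active = int(np.sum(dpgmm.weights_ > 1e-2))
        print("effective number of clusters:", active)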

    Large-scale interactive retrieval in art collections using multi-style feature aggregation

    Finding objects and motifs across artworks is of great importance for art history, as it helps to understand individual works and analyze relations between them. The advent of digitization has produced extensive digital art collections with many research opportunities. However, manual approaches are inadequate to handle this amount of data, and appropriate computer-based methods are required to analyze it. This article presents a visual search algorithm and user interface to support art historians in finding objects and motifs in extensive datasets. Artistic image collections are subject to significant domain shifts induced by large variations in styles, artistic media, and materials. This poses new challenges to most computer vision models, which are trained on photographs. To alleviate this problem, we introduce a multi-style feature aggregation that projects images into the same distribution, leading to more accurate and style-invariant search results. Our retrieval system is based on a voting procedure combined with fast nearest-neighbor search and enables finding and localizing motifs within an extensive image collection in seconds. The presented approach significantly improves the state of the art in terms of accuracy and search time on various datasets and applies to large and inhomogeneous collections. In addition to the search algorithm, we introduce a user interface that allows art historians to apply our algorithm in practice. The interface enables users to search for single regions or for multiple regions with different connection types, and it includes an interactive feedback system to further improve retrieval results. With our methodological contribution and easy-to-use interface, this work marks further progress towards computer-based analysis of visual art.
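
    As a rough illustration of the "voting procedure combined with fast nearest-neighbor search" mentioned above, the sketch below indexes region descriptors and lets each query region vote for the images that own its nearest neighbours. The random vectors stand in for the multi-style aggregated descriptors; the paper's actual feature extraction is not reproduced here, and all names and sizes are illustrative assumptions.

        # Region-descriptor index with per-image voting (random vectors stand in for
        # real aggregated features).
        import numpy as np
        from collections import Counter
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(0)
        n_images, regions_per_image, dim = 200, 30, 128

        # Flat index of region descriptors, remembering which image each one came from.
        descriptors = rng.normal(size=(n_images * regions_per_image, dim)).astype(np.float32)
        image_ids = np.repeat(np.arange(n_images), regions_per_image)
        index = NearestNeighbors(n_neighbors=10).fit(descriptors)

        def search(query_regions, top_k=5):
            """Each query region votes for the images owning its nearest neighbours."""
            _, nn_idx = index.kneighbors(query_regions)
            votes = Counter(int(image_ids[i]) for i in nn_idx.ravel())
            return votes.most_common(top_k)

        query = rng.normal(size=(5, dim)).astype(np.float32)   # descriptors of a query motif
        print(search(query))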

    EDSUCh: A robust ensemble data summarization method for effective medical diagnosis

    Identifying rare patterns for medical diagnosis is a challenging task due to the heterogeneity and volume of the data. Data summarization can create a concise version of the original data that can be used for effective diagnosis. In this paper, we propose an ensemble summarization method that combines clustering and sampling to create a summary of the original data that ensures the inclusion of rare patterns. To the best of our knowledge, no existing technique both augments the performance of anomaly detection techniques and increases the efficiency of medical diagnosis. The performance of popular anomaly detection algorithms improves significantly, in terms of accuracy and computational complexity, when the summaries are used. Medical diagnosis therefore becomes more effective, and our experimental results show that the proposed summarization scheme combined with each of the underlying algorithms used in this paper outperforms the most popular anomaly detection techniques.
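
    The following is not the EDSUCh method itself, only a hedged illustration of the general recipe the abstract describes: cluster the data, sample each cluster while keeping small clusters intact so rare patterns survive, and run an anomaly detector on the much smaller summary. The cluster count, sampling rates, and choice of detector are illustrative assumptions.

        # Cluster-then-sample summarization followed by anomaly detection on the summary.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, size=(5000, 8)),   # bulk of "normal" records
                       rng.normal(6, 1, size=(50, 8))])    # a rare pattern

        labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(X)

        summary_idx = []
        for c in np.unique(labels):
            members = np.where(labels == c)[0]
            if len(members) <= 100:
                # Small cluster: keep everything, since it may hold a rare pattern.
                summary_idx.extend(members)
            else:
                # Large cluster: keep a random sample (at least 100 points or 5%).
                keep = max(100, len(members) // 20)
                summary_idx.extend(rng.choice(members, size=keep, replace=False))

        summary = X[np.array(summary_idx)]
        print("summary size:", len(summary), "of", len(X))

        # Anomaly detection now runs on the summary instead of the full data set.
        scores = IsolationForest(random_state=0).fit(summary).decision_function(summary)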

    Authentication of Amadeo de Souza-Cardoso Paintings and Drawings With Deep Learning

    Art forgery has a long-standing history that can be traced back to the Roman period and has become more rampant as the art market continues to prosper. Reports have disclosed that countless artworks circulating on the art market could be fake, and even some principal art museums and galleries could be exhibiting a considerable percentage of fake artworks. It is therefore vitally important to conserve cultural heritage and to safeguard the interests of the art market and the artists, as well as the integrity of artists' legacies. As a result, art authentication has been one of the most researched and well-documented fields, driven by the ever-growing commercial art market of the past decades. Over the past years, the use of computer science in the art world has flourished and continues to stimulate interest in both the art world and the artificial intelligence arena. In particular, the implementation of artificial intelligence, namely deep learning algorithms and neural networks, has proved significant for specialised image analysis. This research encompassed multidisciplinary studies in chemistry, physics, art, and computer science. More specifically, the work presents a solution to the problem of authenticating heritage artworks by Amadeo de Souza-Cardoso, namely paintings, through the use of artificial intelligence algorithms. First, an authenticity estimate is obtained by processing images through a deep learning model that analyses the brushstroke features of a painting; iterative, multi-scale analysis of the images covers the entire painting and produces an overall indication of authenticity. Second, a mixed-input deep learning model is proposed to analyse pigments in a painting, solving the image colour segmentation and pigment classification problem using hyperspectral imagery; the result provides an indication of authenticity based on pigment classification and correlation with chemical data obtained via XRF analysis. Further algorithms developed include a deep learning model that tackles the pigment unmixing problem based on hyperspectral data, and another that estimates hyperspectral images from sRGB images. Based on the established algorithms and the results obtained, two applications were developed. The first is an Augmented Reality mobile application for the visualisation of pigments in the artworks by Amadeo, targeting the general public, i.e., art enthusiasts, museum visitors, art lovers, and art experts. The second is a desktop application with multiple purposes, such as the visualisation of pigments and hyperspectral data, designed for art specialists, i.e., conservators and restorers. Due to the special circumstances of the pandemic, trials of these applications were only performed within the Department of Conservation and Restoration at NOVA University Lisbon, where both applications received positive feedback.
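
    A minimal sketch of the iterative, multi-scale analysis described above: the painting is tiled with patches at several scales, each patch is scored by a classifier, and the scores are averaged into one overall indication of authenticity. The patch scorer below is a placeholder function, not the trained brushstroke network from the thesis, and the scales and strides are illustrative assumptions.

        # Multi-scale, sliding-window scoring of a painting image.
        import numpy as np

        def patch_authenticity(patch: np.ndarray) -> float:
            """Placeholder for a trained brushstroke classifier (returns a probability)."""
            return float(np.clip(patch.mean() / 255.0, 0.0, 1.0))   # dummy score, not a real model

        def multiscale_score(image: np.ndarray, scales=(64, 128, 256), stride_ratio=0.5) -> float:
            """Average the per-patch scores over all scales and window positions."""
            scores = []
            h, w = image.shape[:2]
            for size in scales:
                stride = max(1, int(size * stride_ratio))
                for top in range(0, max(1, h - size + 1), stride):
                    for left in range(0, max(1, w - size + 1), stride):
                        scores.append(patch_authenticity(image[top:top + size, left:left + size]))
            return float(np.mean(scores)) if scores else 0.0

        painting = np.random.default_rng(0).integers(0, 256, size=(512, 768, 3), dtype=np.uint8)
        print("overall authenticity score:", round(multiscale_score(painting), 3))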

    Digital analysis of paintings
