69 research outputs found

    Leveraging Expert Models for Training Deep Neural Networks in Scarce Data Domains: Application to Offline Handwritten Signature Verification

    This paper introduces a novel approach to leverage the knowledge of existing expert models for training new Convolutional Neural Networks in domains where task-specific data are limited or unavailable. The presented scheme is applied to offline handwritten signature verification (OffSV), which, akin to other biometric applications, suffers from inherent data limitations due to regulatory restrictions. The proposed Student-Teacher (S-T) configuration uses feature-based knowledge distillation (FKD), combining graph-based similarity for local activations with global similarity measures to supervise the student's training, using only handwritten text data. Remarkably, the models trained with this technique exhibit comparable, if not superior, performance to the teacher model across three popular signature datasets. More importantly, these results are attained without employing any signatures during the feature-extraction training process. This study demonstrates the efficacy of leveraging existing expert models to overcome data scarcity challenges in OffSV and potentially other related domains.
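
    As a rough illustration of the kind of feature-based distillation described above, the sketch below combines a graph-based similarity term over local activations with a global embedding term, in PyTorch. The function names, the equal weighting, and the assumption that both networks return a local feature map plus a pooled embedding are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def graph_similarity_loss(f_s, f_t):
    """Match pairwise-similarity graphs built over spatial positions.

    f_s, f_t: local feature maps (B, C, H, W) from the student and the teacher.
    """
    def adjacency(f):
        nodes = F.normalize(f.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
        return nodes @ nodes.transpose(1, 2)                       # (B, HW, HW)
    return F.mse_loss(adjacency(f_s), adjacency(f_t))

def global_similarity_loss(z_s, z_t):
    """Align the global (pooled) embeddings of student and teacher."""
    return 1.0 - F.cosine_similarity(z_s, z_t, dim=1).mean()

def distillation_loss(student, teacher, text_images, alpha=0.5):
    """Supervise the student with handwritten-text images only (no signatures)."""
    with torch.no_grad():
        f_t, z_t = teacher(text_images)   # frozen teacher: feature map + embedding
    f_s, z_s = student(text_images)
    return alpha * graph_similarity_loss(f_s, f_t) + \
           (1 - alpha) * global_similarity_loss(z_s, z_t)
```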

    Conditional Residual Coding: A Remedy for Bottleneck Problems in Conditional Inter Frame Coding

    Conditional coding is a new video coding paradigm enabled by neural-network-based compression. It can be shown that conditional coding is in theory better than traditional residual coding, which is widely used in video compression standards such as HEVC or VVC. However, on closer inspection it becomes clear that conditional coders can suffer from information bottlenecks in the prediction path, i.e., due to the data processing inequality, not all information from the prediction signal can be passed to the reconstructed signal, thereby impairing coder performance. In this paper, we propose the conditional residual coding concept, which we derive from information-theoretic properties of the conditional coder. This coder significantly reduces the influence of bottlenecks while maintaining the theoretical performance of the conditional coder. We provide a theoretical analysis of the coding paradigm and demonstrate the performance of the conditional residual coder in a practical example. We show that conditional residual coders alleviate the disadvantages of conditional coders while maintaining their advantages over residual coders. In the spectrum of residual and conditional coding, we can therefore consider them "the best of both worlds". Comment: 12 pages, 8 figures.
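
    The entropy argument behind this design can be sketched as follows, under an idealized lossless view with prediction signal \hat{X} (a compact restatement of the intuition, not the paper's full derivation):

```latex
R_{\text{res}} \approx H(X - \hat{X}), \qquad
R_{\text{cond}} \approx H(X \mid \hat{X}) \;\le\; H(X - \hat{X}), \qquad
R_{\text{cond-res}} \approx H(X - \hat{X} \mid \hat{X}) \;=\; H(X \mid \hat{X}).
```

    The last equality holds because, given \hat{X}, the signal X and the residual X - \hat{X} determine each other, so coding the residual conditionally costs no more than conditional coding of the signal itself; at the same time, a bottleneck on the prediction path only weakens the conditioning and no longer blocks information needed to form the reconstruction.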

    Prioritizing Content of Interest in Multimedia Data Compression

    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible given the systems' limited storage and bandwidth. Many generic image and video compression techniques, such as JPEG and H.264/AVC, have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In these application-specific systems, where the content of interest in the multimedia data is known and well defined, we should rethink the design of the data compression pipeline. We hypothesize that by identifying and prioritizing the multimedia data's content of interest, new compression methods can be invented that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems. I show that the key to designing efficient compression techniques in these three cases is to prioritize the content of interest in the data, whose definition depends on the application. First, I show that for microscopy videos the content of interest is defined as the spatial regions of the video frame whose pixels contain more than just noise; keeping the data in those regions at high quality and discarding the other information yields a novel microscopy video compression technique. Second, I show that for a Bluetooth low energy beacon based system, practical multimedia data storage and transmission is possible by prioritizing the content of interest: I designed custom image compression techniques that preserve the edges of a binary image, or the foreground regions of a color image of indoor or outdoor objects. Last, I present a new indoor Bluetooth low energy beacon based augmented reality system that integrates a 3D moving-object compression method that prioritizes the content of interest.
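
    A minimal sketch of the region-prioritization idea for the microscopy case is given below: blocks whose activity stays near an estimated noise floor are treated as background, while the remaining blocks are marked as content of interest and would be coded at high quality. The block size, percentile, and threshold factor are illustrative assumptions, not the dissertation's exact parameters.

```python
import numpy as np

def noise_floor(frame, patch=16):
    """Estimate the background-noise level from the least-active patches of a 2D frame."""
    h, w = frame.shape
    stds = [frame[i:i + patch, j:j + patch].std()
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]
    return np.percentile(stds, 10)

def content_mask(frame, patch=16, factor=3.0):
    """Mark blocks whose variation exceeds the noise floor as content of interest."""
    thresh = factor * noise_floor(frame, patch)
    mask = np.zeros(frame.shape, dtype=bool)
    h, w = frame.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            if frame[i:i + patch, j:j + patch].std() > thresh:
                mask[i:i + patch, j:j + patch] = True
    return mask

# Blocks inside the mask would then be coded at high quality, and the rest
# coarsely quantized or dropped, before the usual transform and entropy coding.
```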

    Learning disentangled representations of satellite image time series in a weakly supervised manner

    This work focuses on learning representations of satellite image time series via an unsupervised learning approach. The main goal is to obtain a representation that captures the relevant information from the time series in order to perform other satellite imagery applications. However, extracting information from satellite data involves many challenges, since models need to deal with the massive volumes of images provided by Earth observation satellites, and it is impossible for human operators to label such a volume of images manually for each individual task (e.g. classification, segmentation, change detection, etc.). Therefore, the supervised learning framework, which achieves state-of-the-art results in many tasks, cannot be applied directly. To address this problem, unsupervised learning algorithms have been proposed to learn the structure of the data instead of performing a specific task. Unsupervised learning is a powerful approach, since no labels are required during training and the knowledge acquired can be transferred to other tasks, enabling faster learning with few labels. In this work, we investigate the problem of learning disentangled representations of satellite image time series, where a shared representation captures the spatial information common to the images of the time series and an exclusive representation captures the temporal information specific to each image. We present the benefits of disentangling the spatio-temporal information of time series: for example, the spatial information is useful to perform time-invariant image classification or segmentation, while the temporal information is useful for change detection. To accomplish this, we analyze some of the most prevalent unsupervised learning models, such as the variational autoencoder (VAE) and generative adversarial networks (GANs), as well as extensions of these models for representation disentanglement. Encouraged by the successful results achieved by generative and reconstructive models, we propose a novel framework to learn spatio-temporal representations of satellite data. We show that the learned disentangled representations can be used to perform several computer vision tasks, such as classification, segmentation, information retrieval and change detection, outperforming other state-of-the-art models. Nevertheless, our experiments suggest that generative and reconstructive models present drawbacks related to the dimensionality of the representation, the complexity of the architecture and the lack of disentanglement guarantees. To overcome these limitations, we explore a recent method based on mutual information estimation and maximization for representation learning, without relying on image reconstruction or image generation. We propose a new model that extends the mutual information maximization principle to disentangle the representation domain into two parts. In addition to the experiments performed on satellite data, we show that our model is able to deal with different kinds of datasets, outperforming state-of-the-art methods based on GANs and VAEs. Furthermore, we show that our mutual-information-based model is less computationally demanding yet more effective. Finally, we show that our model is useful to create a representation that captures only the class information shared between two images belonging to the same category. Disentangling the class or category of an image from the other factors of variation provides a powerful tool to compute the similarity between pixels and perform image segmentation in a weakly supervised manner.
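
    The shared/exclusive split described above can be sketched roughly as follows in PyTorch: one backbone encodes each image of the time series, two heads split the result into a shared (spatial) code and an exclusive (temporal) code, and the shared codes of two acquisition dates of the same scene are pulled together. The layer sizes and the simple consistency loss are illustrative assumptions; in the mutual-information variant, that term would be replaced by an MI estimator between codes.

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Encode an image into a shared (spatial) code and an exclusive (temporal) code."""
    def __init__(self, backbone_dim=256, shared_dim=64, exclusive_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, backbone_dim))
        self.shared_head = nn.Linear(backbone_dim, shared_dim)
        self.exclusive_head = nn.Linear(backbone_dim, exclusive_dim)

    def forward(self, x):                       # x: (B, 3, H, W)
        h = self.backbone(x)
        return self.shared_head(h), self.exclusive_head(h)

def shared_consistency_loss(z_shared_t1, z_shared_t2):
    """Pull together the shared codes of two dates of the same scene."""
    return (z_shared_t1 - z_shared_t2).pow(2).mean()
```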

    Machine Learning for handwriting text recognition in historical documents

    In this thesis, we focus on the handwriting text recognition task over historical documents, which are difficult to read for anyone who is not an expert in ancient languages and writing styles. We aim to take advantage of, and improve, the neural network architectures and techniques that other authors have proposed for handwriting text recognition in modern handwritten documents. These models perform the task very precisely when a large amount of data is available. However, low availability of labeled data is a widespread problem with historical documents: the type of writing is singular, and it is quite expensive to hire an expert to transcribe a large number of pages. After investigating and analyzing the state of the art, we propose the efficient application of methods such as transfer learning and data augmentation. We also contribute an algorithm for purging mislabeled samples that affect the learning of the models. Finally, we develop a variational autoencoder method for generating synthetic samples of handwritten text images for data augmentation. Experiments are performed on various historical handwritten text databases to validate the performance of the proposed algorithms. The included analyses focus on the evolution of the character and word error rates (CER and WER) as the training dataset grows. One of the most important results is our participation in a contest for the transcription of historical handwritten text: the organizers provided a dataset of documents to train the model, then just a few labeled pages from 5 new documents were supplied to further adjust the solution, and finally the transcription of unlabeled images was requested to evaluate the algorithm. Our method ranked second in this contest.
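
    Since the analyses revolve around character and word error rates, here is a small self-contained sketch of how CER and WER are typically computed from the Levenshtein distance (standard definitions, not code from the thesis):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (characters or words)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def cer(reference, hypothesis):
    """Character error rate: edits per reference character."""
    return edit_distance(list(reference), list(hypothesis)) / max(len(reference), 1)

def wer(reference, hypothesis):
    """Word error rate: edits per reference word."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)
```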

    Load Forecasting and Synthetic Data Generation for Smart Home Energy Management System

    A number of recent trends, such as the increased power consumption in developed and developing countries, the dangers associated with greenhouse gases, the potential shortage of fossil fuels, and the increasing availability of solar and wind energy, motivate the development of more intelligent and efficient systems on both the power-provider and the consumer side. One of the most important prerequisites for making efficient energy management decisions is the ability to predict energy production and consumption patterns. While long-term forecasting of average consumption has long been used to direct investments in the energy grid, short-term prediction of energy consumption has become practical only recently. Most of the existing work in this domain operates at the level of individual households. However, the availability of historical power consumption data can be an issue due to concerns such as privacy, data size or data quality. Researchers have therefore been provided with synthetic smart home energy management systems that mimic the statistical and functional properties of the actual smart grid, improving access to public system models. Developing time series that represent different operating conditions of these synthetic systems further enhances the potential of artificial smart home energy management system applications. The work described in this dissertation extends the ability to predict and control power consumption to the level of individual devices in the home. This work is made possible by several recent developments: Internet of Things technologies that connect individual devices to the internet allow the remote tracking of energy consumption as well as the remote control and scheduling of the devices, while progress in artificial intelligence and machine learning improves the accuracy of predictions. These components often form the basis of smart home energy management systems (HEMS). One insight that facilitates the prediction of the energy consumption of individual devices is that the consumption history contains important information about future consumption. Thus, we propose to use a long short-term memory (LSTM) recurrent neural network for prediction. In a second contribution, we extend this model into a sequence-to-sequence model which uses several interconnected LSTM cells on both the input and the output sides. We show that these approaches produce better predictions than memoryless machine learning techniques. The prediction of energy consumption delivers maximum value when it is integrated with the active component of a HEMS. We design a reinforcement-learning-based technique in which a Q-learning model is trained offline on the prediction results; this system is then validated using only real data from PV power generation and load consumption. Considering the scarcity of data among smart grid users, in our third contribution we propose the Variational Autoencoder Generative Adversarial Network (VAE-GAN) as a smart-grid data generative model capable of learning various types of data distributions, such as electrical load consumption, PV power production and electric vehicle charging load, and of generating plausible sample data from the same distribution without first performing any pre-training analysis on the data. Our extensive experiments show the accuracy of our approach in synthesizing smart home datasets: there is a high degree of resemblance between the distribution of the VAE-GAN synthetic data and that of the real data. The next step will be to incorporate Q-learning for offline optimization of the HEMS using synthetic data and to test its performance with real test data.
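
    A rough sketch of the first contribution's idea, a per-device LSTM forecaster trained on consumption history, is shown below in PyTorch. The window length, hidden size, forecast horizon, and the random placeholder tensors are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class DeviceLoadForecaster(nn.Module):
    """LSTM that maps a window of past per-device consumption to the next steps."""
    def __init__(self, n_features=1, hidden=64, horizon=24):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, history):                  # history: (B, T, n_features)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])             # (B, horizon) forecast

# Training sketch: minimize MSE between the forecast and the actual next-day profile.
model = DeviceLoadForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
history = torch.randn(32, 168, 1)                # one week of hourly readings (placeholder)
target = torch.randn(32, 24)                     # next 24 hours (placeholder data)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(history), target)
loss.backward()
optimizer.step()
```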