30 research outputs found

    Towards Emotion Recognition: A Persistent Entropy Application

    Get PDF
    Emotion recognition and classification is a very active area of research. In this paper, we present a first approach to emotion classification using persistent entropy and support vector machines. A topology-based model is applied to obtain a single real number from each raw signal. These numbers are used as input to a support vector machine that classifies the signals into 8 different emotions (calm, happy, sad, angry, fearful, disgust, and surprised).
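The "single real number from each raw signal" mentioned above is the persistent entropy of a barcode (persistence diagram). A minimal sketch of that computation, assuming the barcode has already been extracted from the signal by a topological model (the function name and the example barcode are illustrative, not taken from the paper):

```python
import math

def persistent_entropy(barcode):
    """Persistent entropy of a barcode: the Shannon entropy of the
    normalized bar lengths, giving one real number per diagram."""
    lengths = [death - birth for birth, death in barcode]
    total = sum(lengths)
    return -sum((l / total) * math.log(l / total) for l in lengths if l > 0)

# A barcode with n equally long bars has maximal entropy log(n).
print(persistent_entropy([(0, 1), (0, 1), (0, 1), (0, 1)]))  # ≈ log(4) ≈ 1.386
```

In the pipeline described by the abstract, one such number per signal would then be fed to a support vector machine as a feature.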

    Lógica de primer orden en Haskell (First-Order Logic in Haskell)

    Get PDF
    This final degree project consists of an implementation of first-order logic theory and algorithms in Haskell, a functional programming language. Furthermore, a relation between mathematics and programming based on the Curry-Howard correspondence is established, illustrated with an intuitive set of examples. It also aims to give an introduction to Haskell and to other tools such as Git and doctest.
    Universidad de Sevilla. Grado en Matemática
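The project itself is written in Haskell; purely as an illustration of the kind of first-order machinery it implements, here is a minimal sketch (in Python, with hypothetical names) of a formula representation and a free-variable computation, one of the basic algorithms of first-order logic:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Pred:            # atomic formula P(t1, ..., tn)
    name: str
    args: tuple

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Forall:
    var: Var
    body: object

def free_vars(phi):
    """Set of variables occurring free in a formula:
    quantifiers bind their variable inside their body."""
    if isinstance(phi, Pred):
        return {t for t in phi.args if isinstance(t, Var)}
    if isinstance(phi, And):
        return free_vars(phi.left) | free_vars(phi.right)
    if isinstance(phi, Forall):
        return free_vars(phi.body) - {phi.var}
    return set()

x, y = Var("x"), Var("y")
phi = Forall(x, And(Pred("P", (x,)), Pred("Q", (x, y))))
print(free_vars(phi))  # only y is free: x is bound by the quantifier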

    Topology-based Representative Datasets to Reduce Neural Network Training Resources

    Full text link
    One of the main drawbacks of the practical use of neural networks is the long time required in the training process. Such a training process consists of an iterative change of parameters trying to minimize a loss function. These changes are driven by a dataset, which can be seen as a set of labelled points in an n-dimensional space. In this paper, we explore the concept of a representative dataset, which is a dataset smaller than the original one that satisfies a nearness condition independent of isometric transformations. Representativeness is measured using persistence diagrams (a computational topology tool) due to their computational efficiency. We prove that the accuracy of the learning process of a neural network on a representative dataset is "similar" to the accuracy on the original dataset when the neural network architecture is a perceptron and the loss function is the mean squared error. These theoretical results, accompanied by experimentation, open a door to reducing the size of the dataset in order to save time in the training process of any neural network.
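One simple way to read the nearness condition above is: every labelled point of the original dataset should have a same-label point of the smaller dataset within distance ε (the paper's precise condition is stated via persistence diagrams; this check, with hypothetical names and toy data, is only a sketch of the underlying idea, and is isometry-independent because it only uses pairwise distances):

```python
import math

def is_epsilon_representative(original, subset, eps):
    """Check that every labelled point of `original` lies within `eps`
    of some point of `subset` carrying the same label (Euclidean
    distance, hence unchanged if both sets undergo the same isometry)."""
    for point, label in original:
        if not any(lab == label and math.dist(point, q) <= eps
                   for q, lab in subset):
            return False
    return True

original = [((0.0, 0.0), 0), ((0.1, 0.0), 0), ((1.0, 1.0), 1), ((1.1, 1.0), 1)]
subset   = [((0.0, 0.0), 0), ((1.0, 1.0), 1)]
print(is_epsilon_representative(original, subset, eps=0.2))  # True
```

Under such a condition, training on `subset` instead of `original` is what yields the "similar" accuracy guarantee the abstract describes for the perceptron case.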

    A Topological Approach to Measuring Training Data Quality

    Full text link
    Data quality is crucial for the successful training, generalization and performance of artificial intelligence models. Furthermore, it is known that the leading approaches in artificial intelligence are notoriously data-hungry. In this paper, we propose the use of small training datasets towards faster training. Specifically, we provide a novel topological method based on morphisms between persistence modules to measure the training data quality with respect to the complete dataset. This way, we can provide an explanation of why the chosen training dataset will lead to poor performance.
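Comparisons between persistence modules, as used above, start from persistence diagrams of the datasets involved. In dimension 0 these diagrams can be computed exactly by single linkage: each connected component of a Vietoris-Rips filtration dies at the weight of a minimum-spanning-tree edge. A self-contained sketch (function name and data are illustrative; the paper's morphism-based quality measure is built on top of such diagrams, not shown here):

```python
import math

def zero_dim_barcode(points):
    """Finite 0-dimensional persistence bars of a Vietoris-Rips
    filtration: components merge at the minimum-spanning-tree edge
    weights (Prim's algorithm over pairwise Euclidean distances)."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n
    best[0] = 0.0
    deaths = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if best[u] > 0:
            deaths.append(best[u])          # a component dies at this scale
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return sorted((0.0, d) for d in deaths)

# Two tight pairs of points far from each other: two short bars, one long one.
pts = [(0.0, 0.0), (0.0, 1.0), (5.0, 0.0), (5.0, 1.0)]
print(zero_dim_barcode(pts))  # [(0.0, 1.0), (0.0, 1.0), (0.0, 5.0)]
```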

    Emotion recognition in talking-face videos using persistent entropy and neural networks

    Get PDF
    The automatic recognition of a person’s emotional state has become a very active research field that involves scientists specialized in different areas such as artificial intelligence, computer vision, or psychology, among others. Our main objective in this work is to develop a novel approach, using persistent entropy and neural networks as main tools, to recognise and classify emotions from talking-face videos. Specifically, we combine audio-signal and image-sequence information to compute a topological signature (a 9-dimensional vector) for each video. We prove that small changes in the video produce small changes in the signature, ensuring the stability of the method. These topological signatures are used to feed a neural network to distinguish between the following emotions: calm, happy, sad, angry, fearful, disgust, and surprised. The results reached are promising and competitive, beating the performances achieved in other state-of-the-art works found in the literature.
    Agencia Estatal de Investigación PID2019-107339GB-100
    Agencia Andaluza del Conocimiento P20-0114
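The stability claim above can be illustrated numerically at the barcode level: persistent entropy varies continuously with the bar endpoints, so a small perturbation of the input produces only a small change in each signature coordinate. A toy sketch, assuming the barcode has already been extracted from the audio or image data (the barcode values and the tolerance are illustrative):

```python
import math, random

def persistent_entropy(barcode):
    """Shannon entropy of the normalized bar lengths of a barcode."""
    lengths = [d - b for b, d in barcode]
    total = sum(lengths)
    return -sum((l / total) * math.log(l / total) for l in lengths if l > 0)

random.seed(0)
barcode = [(0.0, 2.0), (0.5, 1.5), (1.0, 3.0)]
# Perturb every endpoint by at most 0.01, mimicking a small change in the video.
noisy = [(b + random.uniform(-0.01, 0.01), d + random.uniform(-0.01, 0.01))
         for b, d in barcode]
print(abs(persistent_entropy(barcode) - persistent_entropy(noisy)) < 0.05)  # True
```

In the paper's pipeline, several such entropy values computed from the audio signal and the image sequence are concatenated into the 9-dimensional vector that feeds the neural network.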

    Towards a Philological Metric through a Topological Data Analysis Approach

    Get PDF
    The canon of baroque Spanish literature has been thoroughly studied with philological techniques. The major representatives of the poetry of this epoch are Francisco de Quevedo and Luis de Góngora y Argote. Literary experts commonly classify them into two different streams: Quevedo belongs to the Conceptismo and Góngora to the Culteranismo. Besides, traditionally, even though Quevedo is considered the most representative figure of the Conceptismo, Lope de Vega is also considered to be at least closely related to this literary trend. In this paper, we use Topological Data Analysis techniques to provide a first approach to a metric distance between the literary styles of these poets. As a consequence, we reach results consistent with the literary experts’ criteria, locating the literary style of Lope de Vega closer to that of Quevedo than to that of Góngora.
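A common starting point for any such stylistic metric is to represent each text as a numeric profile and compare profiles by a distance. The sketch below uses plain word-frequency vectors and Euclidean distance as a crude, non-topological baseline (the paper's actual metric is built from persistence diagrams; all texts and names here are toy examples, not data from the paper):

```python
import math
from collections import Counter

def style_profile(text):
    """Normalized word-frequency vector for a text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def profile_distance(p, q):
    """Euclidean distance between two frequency profiles."""
    keys = set(p) | set(q)
    return math.sqrt(sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys))

# Toy texts: a and b share most vocabulary, c shares none with a.
a = style_profile("conceit and wit sharpen the verse")
b = style_profile("wit and conceit drive the sharp verse")
c = style_profile("ornate latinate syntax adorns luminous imagery")
print(profile_distance(a, b) < profile_distance(a, c))  # True
```

Replacing the frequency vectors and Euclidean distance with point clouds and a persistence-based distance is, roughly, the step the paper contributes.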

    Representative datasets for neural networks

    Get PDF
    Neural networks enjoy great popularity and success in many fields. Reducing their long training time is an important problem nowadays. In this paper, a new approach to overcome this issue, based on reducing the dataset size, is proposed. Two algorithms covering two different notions of shape are presented, and experimental results are given.
    Ministerio de Economía y Competitividad MTM2015-67072-

    Representative Datasets: The Perceptron Case

    Get PDF
    One of the main drawbacks of the practical use of neural networks is the long time needed in the training process. Such a training process consists of an iterative change of parameters trying to minimize a loss function. These changes are driven by a dataset, which can be seen as a set of labeled points in an n-dimensional space. In this paper, we explore the concept of a representative dataset, which is smaller than the original dataset and satisfies a nearness condition independent of isometric transformations. Representativeness is measured using persistence diagrams due to their computational efficiency. We also prove that the accuracy of the learning process of a neural network on a representative dataset is comparable with the accuracy on the original dataset when the neural network architecture is a perceptron and the loss function is the mean squared error. These theoretical results, accompanied by experimentation, open a door to reducing the size of the dataset in order to save time in the training process of any neural network.