    Representative datasets for neural networks

    Neural networks enjoy broad popularity and success in many fields, but the long training process remains an important practical problem. In this paper, a new approach to this issue, based on reducing the size of the dataset, is proposed. Two algorithms covering two different notions of shape are presented, together with experimental results.
    Ministerio de Economía y Competitividad MTM2015-67072-
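As a minimal sketch of the core idea (not the paper's two algorithms, which are shape-based): shrink a dataset while guaranteeing that every original point stays within some distance epsilon of a kept point. A standard greedy epsilon-cover, used here purely as an illustrative stand-in, looks like this:

```python
import math

def epsilon_cover(points, eps):
    """Greedily keep a subset so every original point is within eps of it."""
    kept = []
    for p in points:
        # Keep p only if no already-kept point covers it.
        if all(math.dist(p, q) > eps for q in kept):
            kept.append(p)
    return kept

data = [(i * 0.1, 0.0) for i in range(100)]  # 100 points along a line
reduced = epsilon_cover(data, eps=0.5)
print(len(data), "->", len(reduced))  # far fewer points, same coarse shape
```

Training on `reduced` instead of `data` is the kind of size/time trade-off the paper studies, with the choice of epsilon controlling how faithful the reduced dataset is.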

    Optimizing the Simplicial-Map Neural Network Architecture

    Simplicial-map neural networks are a recent neural network architecture induced by simplicial maps defined between simplicial complexes. It has been proved that simplicial-map neural networks are universal approximators and that they can be refined to be robust against adversarial attacks. In this paper, the refinement toward robustness is optimized by reducing the number of simplices (i.e., nodes) needed. We show experimentally that such a refined neural network is equivalent to the original network as a classification tool but requires much less storage.
    Agencia Estatal de Investigación PID2019-107339GB-10
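An illustrative sketch of the mechanism underlying simplicial maps (not the paper's implementation): a point is expressed in barycentric coordinates with respect to the simplex containing it, and its class is the barycentric mixture of the vertex labels. The triangle vertices and one-hot labels below are made up for the example:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of 2-D point p w.r.t. triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    l1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    l2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return (l1, l2, 1.0 - l1 - l2)

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
labels = ((1.0, 0.0), (1.0, 0.0), (0.0, 1.0))  # one-hot class of each vertex

def classify(p):
    """Mix the vertex labels by barycentric weight, then take the argmax."""
    coords = barycentric(p, *tri)
    scores = [sum(w * lab[k] for w, lab in zip(coords, labels)) for k in range(2)]
    return scores.index(max(scores))

print(classify((0.25, 0.25)))  # nearest the two class-0 vertices -> class 0
```

Fewer simplices means fewer vertices, and hence fewer nodes in the induced network, which is the storage saving the paper optimizes.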

    Topology-based representative datasets to reduce neural network training resources

    One of the main drawbacks of the practical use of neural networks is the long time required by the training process, which iteratively adjusts parameters to minimize a loss function. These adjustments are driven by a dataset, which can be seen as a set of labeled points in an n-dimensional space. In this paper, we explore the concept of a representative dataset: a dataset smaller than the original one that satisfies a nearness condition independent of isometric transformations. Representativeness is measured using persistence diagrams (a computational topology tool) owing to their computational efficiency. We theoretically prove that when the neural network architecture is a perceptron, the loss function is the mean squared error, and certain conditions on the representativeness of the dataset are imposed, the accuracy of the perceptron evaluated on the original dataset coincides with its accuracy evaluated on the representative dataset. These theoretical results, accompanied by experimentation, open a door to reducing the size of the dataset to save time in the training process of any neural network.
    Agencia Estatal de Investigación PID2019-107339GB-100
    Agencia Andaluza del Conocimiento P20-0114
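A toy illustration of the claimed setup (it does not reproduce the paper's persistence-diagram machinery or the exact hypotheses of the theorem): a perceptron trained only on a small "representative" subset can still classify the full dataset correctly. The 1-D data and subset below are invented for the example:

```python
def train_perceptron(samples, epochs=100):
    """Classic perceptron rule on 1-D inputs with a bias term."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in samples:
            if y * (w * x + b) <= 0:   # misclassified (or on the boundary)
                w, b = w + y * x, b + y
                mistakes += 1
        if mistakes == 0:              # converged: all samples separated
            break
    return w, b

full = [(x, 1 if x > 0 else -1) for x in (-3, -2, -1, 1, 2, 3)]
representative = [(-1, -1), (1, 1)]    # hypothetical reduced dataset

w, b = train_perceptron(representative)
accuracy = sum((1 if w * x + b > 0 else -1) == y for x, y in full) / len(full)
print(accuracy)  # training on the subset suffices on this toy data
```

Here the two retained points carry enough information to separate the whole set, which is the spirit of the nearness condition: nothing in the discarded points changes the decision boundary the perceptron needs.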

    Neural network generator for image similarity measurement

    This thesis deals with the design and implementation of an automatic software generator of feedforward neural networks for image classification. The theoretical part clarifies concepts such as the neural network and the formal neuron, and presents a classification of neural networks by architecture type and learning style. The thesis focuses on one particular type, convolutional neural networks, and reviews selected research from this field. The practical part describes the implementation of the generator: the chosen programming language and application framework, a brief description of the implementation itself, an overview of the implemented neural network layers, the chosen photo database, and the network testing procedure. Generated neural networks are tested on the Google-Landmarks dataset, and the results are presented and commented upon.