
    Comparing Fixed and Adaptive Computation Time for Recurrent Neural Networks

    Adaptive Computation Time for Recurrent Neural Networks (ACT) is one of the most promising architectures for variable computation. ACT adapts to the input sequence by being able to look at each sample more than once, and learns how many times it should do so. In this paper, we compare ACT to Repeat-RNN, a novel architecture based on repeating each sample a fixed number of times. We found surprising results: Repeat-RNN performs as well as ACT in the selected tasks. Source code in TensorFlow and PyTorch is publicly available at https://imatge-upc.github.io/danifojo-2018-repeatrnn/. Comment: Accepted as a workshop paper at ICLR 2018.
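The fixed-repetition idea can be sketched in a few lines. This is an illustrative toy, not the authors' released code: the scalar cell, its weights, and the `repeat_rnn` helper are all hypothetical, and the real models in the repository are full TensorFlow/PyTorch RNNs.

```python
import math

def rnn_cell(h, x, w_h=0.5, w_x=1.0):
    # Minimal toy recurrent cell: new state from old state and current input.
    return math.tanh(w_h * h + w_x * x)

def repeat_rnn(inputs, repeats, h0=0.0):
    """Process a sequence, feeding each sample `repeats` times to the cell.

    Unlike ACT, where the network learns how many times to look at each
    sample, here the number of passes is a fixed hyperparameter.
    """
    h = h0
    for x in inputs:
        for _ in range(repeats):  # fixed number of "ponder" steps per sample
            h = rnn_cell(h, x)
    return h
```

With `repeats=1` this reduces to a standard RNN pass; larger values give every sample extra computation without any learned halting mechanism.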

    Deep learning that scales: leveraging compute and data

    Deep learning has revolutionized the field of artificial intelligence in the past decade. Although the development of these techniques spans several years, the recent advent of deep learning is explained by an increased availability of data and compute that has unlocked the potential of deep neural networks. They have become ubiquitous in domains such as natural language processing, computer vision, speech processing, and control, where enough training data is available. Recent years have seen continuous progress driven by ever-growing neural networks that benefited from large amounts of data and computing power. This thesis is motivated by the observation that scale is one of the key factors driving progress in deep learning research, and aims at devising deep learning methods that scale gracefully with the available data and compute. We narrow down this scope into two main research directions. The first is concerned with designing hardware-aware methods which can make the most of the computing resources in current high performance computing facilities. We then study bottlenecks preventing existing methods from scaling up as more data becomes available, providing solutions that contribute towards enabling the training of more complex models. This dissertation studies the aforementioned research questions for two different learning paradigms, each with its own algorithmic and computational characteristics. The first part of this thesis studies the paradigm where the model needs to learn from a collection of examples, extracting as much information as possible from the given data. The second part is concerned with training agents that learn by interacting with a simulated environment, which introduces unique challenges such as efficient exploration and simulation.

    Documentation of the oldest orla (graduation class portrait) of the Faculty of Medicine of Valladolid

    The oldest orla held by the Faculty of Medicine of Valladolid dates from 1873 and is on display in the office of the Dean of the faculty. This work investigates who the graduates who appear in it were and what they did. Through documentary research we have gathered both academic and professional information, drawing on the Archivo Universitario e Histórico Provincial de Valladolid as well as on digital sources. Over the course of this research we have been able to establish what medical education was like at the time, how the graduates spread across the peninsula and beyond in their professional work, and how this information was preserved over time. Grado en Medicina

    Historical development of the European Structural and Investment Funds

    Journal article. In December 2020 the European Council approved the regulation establishing the European Union (EU) Multiannual Financial Framework for 2021-2027 and the Next Generation EU recovery facility. Together, these mechanisms will provide financing worth €1.8 trillion in the coming years to sustain the EU's post-pandemic recovery and its long-term priorities. To put the scale of these funds, and the challenge of managing them, in context, this article first describes the European Structural and Investment Funds. It then offers a detailed analysis of the amount and composition of the resources received to date under these funds, along with their distribution by type of expenditure in the biggest EU countries, with particular emphasis on Spain. Lastly, given the regional focus of the allocation criteria, the final section dissects the course, composition and distribution by type of expenditure of these funds among Spain's different regions.

    Learning to skip state updates in recurrent neural networks

    Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges such as slow inference, vanishing gradients, and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph that results from unfolding the RNN in time. We introduce the Skip RNN model, which extends existing RNN models by learning to skip state updates, shortening the effective size of the computational graph. The network can be encouraged to perform fewer state updates through a novel loss term. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline models.
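A minimal sketch of the state-skipping mechanism described above, assuming a precomputed binary update gate. In the actual Skip RNN the gate is produced from the hidden state and trained end-to-end together with the loss term; the scalar cell and function names here are hypothetical.

```python
import math

def rnn_cell(h, x):
    # Toy scalar recurrent cell standing in for an LSTM/GRU update.
    return math.tanh(0.5 * h + x)

def skip_rnn(inputs, gates):
    """Run a toy Skip RNN where gates[t] in {0, 1} decides whether to update.

    When the gate is 0, the state update is skipped and the previous state
    is copied through (h_t = h_{t-1}), removing that step from the
    effective computational graph.
    """
    h, updates = 0.0, 0
    for x, u in zip(inputs, gates):
        if u:                  # gate open: compute a state update
            h = rnn_cell(h, x)
            updates += 1
        # gate closed: no cell computation for this timestep
    return h, updates
```

Counting `updates` against the sequence length shows the computation saved; the paper's loss term would penalize this count to encourage skipping.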

    Evaluation of earnings management in Mexican public companies in the industrial sector

    The objective of this document is to examine the value of discretionary accruals in order to evaluate earnings management as a measure of the accounting quality of public companies in the industrial sector in Mexico during the period 1991-2017. For this, the Jones model adjusted for ROA is used. The first results confirm that, after the adoption of the International Financial Reporting Standards, discretionary accruals were different compared with the period when only local accounting standards were applied; however, it cannot be affirmed that they decreased.
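The ROA-adjusted Jones approach amounts to regressing scaled total accruals on a set of non-discretionary determinants and treating the residuals as discretionary accruals. A dependency-free sketch of that procedure, with hypothetical variable names and synthetic inputs rather than the paper's data:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.

    Tiny, dependency-free solver for illustration only.
    """
    k = len(X[0])
    # Build X'X and X'y.
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        p = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def discretionary_accruals(rows):
    """rows: tuples (total_accruals, inv_assets, d_rev, ppe, roa), each
    already scaled by lagged assets. Fits the ROA-adjusted Jones model and
    returns the residuals, interpreted as discretionary accruals."""
    X = [[r[1], r[2], r[3], r[4]] for r in rows]
    y = [r[0] for r in rows]
    beta = ols(X, y)
    return [yi - sum(b * xi for b, xi in zip(beta, x)) for x, yi in zip(X, y)]
```

In practice this estimation would be run per industry-year panel with a statistics package; the sketch only shows the residual-as-discretionary-accrual logic.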