86 research outputs found

    Methodologies for time series prediction and missing value imputation

    The amount of data collected in the world is increasing all the time. More sophisticated measuring instruments and growing computer processing power produce ever more data, which demands greater capacity for collection, transmission and storage. Even though computers are faster, large databases still need good and accurate methodologies to be useful in practice. Some techniques are not feasible for very large databases, or cannot provide the necessary accuracy. As the title proclaims, this thesis focuses on two problems encountered with databases: time series prediction and missing value imputation. The first is a function approximation and regression problem, but can in some cases also be formulated as a classification task. Accurate prediction of future values depends heavily not only on a good, well-trained and validated model, but also on preprocessing, input variable selection or projection, and the choice of output approximation strategy. The importance of all these choices grows as the prediction horizon is extended further into the future. The second focus area deals with missing values in a database. Missing values are not only a nuisance: they can prohibit the use of certain methodologies and degrade the performance of others. Hence, missing value imputation is a necessary part of the preprocessing of a database. This imputation has to be done carefully in order to retain the integrity of the database and not to introduce unwanted artifacts that complicate the final data analysis. Furthermore, even though accuracy is always the main requisite of a good methodology, computational time has to be considered alongside precision. This thesis presents a large variety of strategies for output approximation and variable processing for time series prediction, together with a detailed presentation of new methodologies and tools for solving the problem of missing values. The strategies and methodologies are compared against state-of-the-art alternatives and shown to be accurate and useful in practice.

    More and more data is produced in the world all the time. More advanced measuring instruments, faster computers, and increased transmission and storage capacities make it possible to collect, transfer and store large masses of data. Even though the computing power of computers grows continuously, good and accurate methods are still needed for processing large datasets. Not all methods are suitable for handling enormous datasets, or they do not produce sufficiently accurate results. This work concentrates on two important areas of database processing: time series prediction and the imputation of missing values. The first of these is a regression problem in which the future of a time series is estimated on the basis of preceding samples. In some cases the regression problem can also be formulated as a classification problem. Accurate time series prediction depends on a good and reliable prediction model. The model must be trained correctly, and its validity and accuracy must be verified. In addition, the preprocessing of the time series, the way input variables are selected or projected, and the prediction strategy must be chosen with care, and their suitability to the model must be verified thoroughly. The importance of these choices grows the further into the future one predicts. The second area of this work deals with the problem of missing values. Missing values in a database can weaken the results produced by a data analysis method, or even prevent the use of some methods, so estimating and imputing missing values as part of preprocessing is advisable. Imputation must, however, be done with consideration, since deficient imputation very likely leads to inaccuracies in the final application and to unwanted structures within the database. Since this is preprocessing rather than the actual productive use of the data, the computation time spent on imputing missing values should be minimized while preserving accuracy. This dissertation presents different ways of predicting far into the future and means for input variable selection. In addition, new methods for imputing missing values have been developed and compared against existing methods.
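As a concrete (and deliberately simple) illustration of the second theme, the sketch below fills missing entries with a nearest-neighbour average. This is a generic baseline imputation scheme, not one of the thesis's own methods; the function name and data layout are assumptions made for the example.

```python
import math

def knn_impute(rows, k=3):
    """Fill missing entries (None) in each row with the mean of that
    feature over the k complete rows nearest in the observed features.
    Generic baseline sketch, not the thesis's specific method."""
    complete = [r for r in rows if None not in r]
    filled = []
    for r in rows:
        if None not in r:
            filled.append(list(r))
            continue
        observed = [j for j, v in enumerate(r) if v is not None]

        # Distance is measured only over the coordinates present in r.
        def dist(c):
            return math.sqrt(sum((r[j] - c[j]) ** 2 for j in observed))

        neighbours = sorted(complete, key=dist)[:k]
        filled.append([v if v is not None
                       else sum(n[j] for n in neighbours) / len(neighbours)
                       for j, v in enumerate(r)])
    return filled
```

Complete rows pass through untouched; the scheme degrades gracefully as long as at least one complete row exists, which matches the abstract's point that imputation must not disturb the intact parts of the database.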

    Learning for Optimization with Virtual Savant

    Optimization problems arising in multiple fields of study demand efficient algorithms that can exploit modern parallel computing platforms. The remarkable development of machine learning offers an opportunity to incorporate learning into optimization algorithms to efficiently solve large and complex problems. This thesis explores Virtual Savant, a paradigm that combines machine learning and parallel computing to solve optimization problems. Virtual Savant is inspired by the Savant Syndrome, a mental condition in which patients excel at a specific ability far above the average. In analogy to the Savant Syndrome, Virtual Savant extracts patterns from previously solved instances to learn how to solve a given optimization problem in a massively parallel fashion. In this thesis, Virtual Savant is applied to three optimization problems related to software engineering, task scheduling, and public transportation. The efficacy of Virtual Savant is evaluated on different computing platforms, and the experimental results are compared against exact and approximate solutions for both synthetic and realistic instances of the studied problems. Results show that Virtual Savant can find accurate solutions, effectively scale in the problem dimension, and take advantage of the availability of multiple computing resources.

    Funding: Fundación Carolina; Agencia Nacional de Investigación e Innovación (ANII, Uruguay); Universidad de Cádiz; Universidad de la República.
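To make the paradigm concrete, here is a toy sketch of the general idea under stated assumptions: per-decision patterns are learned from solved instances of a hypothetical task-to-machine scheduling problem via 1-nearest-neighbour lookup, and each decision is then predicted independently, which is what makes the prediction phase massively parallel. The data layout, feature choice, and classifier are all assumptions of this example, not the published Virtual Savant implementation.

```python
def train(instances):
    """Pool (task features -> chosen machine) pairs from solved instances.
    The scheduling framing and feature vectors are illustrative only."""
    return [(feat, machine) for inst in instances for feat, machine in inst]

def predict_task(model, feat):
    # 1-nearest-neighbour vote: copy the decision made for the most
    # similar task seen in the training instances.
    return min(model,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], feat)))[1]

def solve(model, tasks):
    # Each task's machine is predicted independently of the others, so
    # this map can be distributed across any number of workers.
    return [predict_task(model, t) for t in tasks]
```

The independence of the per-task predictions is the point of the sketch: a real system would run `solve` in parallel and then repair or refine the combined assignment.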

    Environmental risk assessment in the Mediterranean region using artificial neural networks

    Self-organizing maps have proven to be an appropriate tool for the classification and visualization of complex data sets. Neural networks, such as self-organizing maps (SOM) and fuzzy ARTMAP networks (FAM), are used in this study to assess cumulative environmental impact in different media (groundwater, air, and human health). SOMs are also used to generate maps of pollutant concentrations in groundwater, emulating geostatistical interpolation techniques such as kriging and cokriging. To assess the reliability of the methodologies developed in this thesis, reference procedures are used as points of comparison: the DRASTIC methodology for the groundwater vulnerability study, and the spatio-temporal interpolation method known as Bayesian Maximum Entropy (BME) for the air quality analysis. This thesis helps demonstrate the capabilities of neural networks in the development of new methodologies and models that explicitly allow the temporal and spatial dimensions of cumulative risks to be assessed.
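The SOM-based interpolation itself is not reproduced here, but the underlying map training can be sketched in a few lines. The grid size, decay schedules, and Gaussian neighbourhood below are illustrative assumptions, not the configuration used in the thesis.

```python
import math
import random

def train_som(data, rows=4, cols=4, epochs=200, lr=0.5, seed=0):
    """Minimal 2-D self-organizing map (illustrative sketch).
    Each grid node holds a weight vector pulled toward the samples it
    wins, with a neighbourhood radius that shrinks over time."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = {(i, j): [rng.random() for _ in range(dim)]
         for i in range(rows) for j in range(cols)}
    for t in range(epochs):
        frac = t / epochs
        radius = (max(rows, cols) / 2) * (1 - frac) + 0.5  # shrinking
        alpha = lr * (1 - frac)                            # decaying rate
        x = rng.choice(data)
        # Best-matching unit: the node whose weights are closest to x.
        bmu = min(w, key=lambda n: sum((a - b) ** 2 for a, b in zip(w[n], x)))
        for n, wn in w.items():
            d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2 * radius ** 2))  # Gaussian neighbourhood
            w[n] = [wi + alpha * h * (xi - wi) for wi, xi in zip(wn, x)]
    return w
```

After training, reading out the weight vectors over the grid gives the smooth concentration surfaces that the abstract compares to kriging and cokriging output.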

    Research in Metabolomics via Nuclear Magnetic Resonance Spectroscopy: Data Mining, Biochemistry and Clinical Chemistry

    Metabolomics entails the comprehensive characterization of the ensemble of endogenous and exogenous metabolites present in a biological specimen. Metabolites represent, at the same time, the downstream output of the genome and the upstream input from various external factors, such as the environment, lifestyle, and diet. Therefore, in the last few years, metabolomic phenotyping has provided unique insights into the fundamental and molecular causes of several physiological and pathophysiological conditions. In parallel, metabolomics has demonstrated an emerging role in monitoring the influence of different manufacturing procedures on food quality and food safety. In light of the above, this collection includes the latest research from various fields of NMR-based metabolomics applications, ranging from biomedicine to data mining and food chemistry.

    Self-Organization of Spiking Neural Networks for Visual Object Recognition

    On the one hand, the visual system has the ability to differentiate between very similar objects. On the other hand, we can also recognize the same object in images that vary drastically due to a different viewing angle, distance, or illumination. The ability to recognize the same object under different viewing conditions is called invariant object recognition. Such object recognition capabilities are not immediately available after birth, but are acquired through learning by experience in the visual world. In many viewing situations, different views of the same object are seen in a temporal sequence, e.g. when we are moving an object in our hands while watching it. This creates temporal correlations between successive retinal projections that can be used to associate different views of the same object. Theorists have therefore proposed a synaptic plasticity rule with a built-in memory trace (trace rule). In this dissertation I present spiking neural network models that offer possible explanations for the learning of invariant object representations. These models are based on the following hypotheses:
    1. Instead of a synaptic trace rule, persistent firing of recurrently connected groups of neurons can serve as a memory trace for invariance learning.
    2. Short-range excitatory lateral connections enable learning of self-organizing topographic maps that represent temporal as well as spatial correlations.
    3. When trained with sequences of object views, such a network can learn representations that enable invariant object recognition by clustering different views of the same object within a local neighborhood.
    4. Learning of representations for very similar stimuli can be enabled by adaptive inhibitory feedback connections.
    The study presented in chapter 3.1 details an implementation of a spiking neural network to test the first three hypotheses. This network was tested with stimulus sets that were designed in two feature dimensions to separate the impact of temporal and spatial correlations on the learned topographic maps. The emerging topographic maps showed patterns that depended on the temporal order of object views during training. Our results show that pooling over local neighborhoods of the topographic map enables invariant recognition. Chapter 3.2 focuses on the fourth hypothesis. There we examine how adaptive feedback inhibition (AFI) can improve the ability of a network to discriminate between very similar patterns. The results show that with AFI learning is faster, and the network learns selective representations for stimuli with higher levels of overlap than without AFI. The results of chapter 3.1 suggest a functional role for the topographic object representations that are known to exist in the inferotemporal cortex, and suggest a mechanism for the development of such representations. The AFI model implements one aspect of predictive coding: the subtraction of a prediction from the actual input of a system. Its successful implementation in a biologically plausible network of spiking neurons shows that predictive coding can play a role in cortical circuits.
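The trace rule mentioned above can be written down compactly. The following is an assumption-level sketch with rate-coded (non-spiking) units and made-up constants, meant only to show how the memory trace lets temporally adjacent inputs strengthen the same weights; it is not the dissertation's spiking implementation.

```python
def trace_learning(inputs, w, eta=0.1, delta=0.8):
    """Trace rule sketch: the postsynaptic trace y_trace mixes the
    current response with its own past, so object views that follow
    each other in time reinforce the same set of weights."""
    y_trace = 0.0
    for x in inputs:                                   # presynaptic vector
        y = sum(wi * xi for wi, xi in zip(w, x))       # postsynaptic response
        y_trace = (1 - delta) * y + delta * y_trace    # leaky memory trace
        w = [wi + eta * y_trace * xi for wi, xi in zip(w, x)]  # Hebb + trace
    return w
```

With delta = 0 this collapses to a plain Hebbian update; with delta > 0, a second view presented right after the first is credited partly to the first view's response, which is exactly the temporal binding the abstract describes.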

    Jan Karel Lenstra : the traveling science man : liber amicorum

    No abstract

    Application of nuclear magnetic resonance spectroscopy in the study of complex matrices

    The aim of this PhD work was to apply the NMR-based metabolomic approach to the study of complex matrices, namely several food plants (pepper, celery, tomatoes, hemp, baobab, teas, blueberries and olive oils). A comprehensive description of the chemical composition in terms of primary and secondary metabolites, obtained by means of 1D and 2D experiments, is reported, and information regarding specific aspects (variety, type of production, etc.) was obtained. A study of stool samples from patients with liver cirrhosis was also carried out, confirming the important contribution of the NMR approach to the investigation of the disease.

    50 jaar informatiesystemen 1978-2028 : liber amicorum voor Theo Bemelmans

    No abstract
