7 research outputs found

    Time-Series Forecasting: Unleashing Long-Term Dependencies with Fractionally Differenced Data

    Full text link
    This study introduces a novel forecasting strategy that leverages the power of fractional differencing (FD) to capture both short- and long-term dependencies in time series data. Unlike traditional integer differencing methods, FD preserves memory in the series while stabilizing it for modeling purposes. By applying FD to financial data from the SPY index and incorporating sentiment analysis from news reports, this empirical analysis explores the effectiveness of FD in conjunction with binary classification of target variables. Supervised classification algorithms were employed to validate the performance of the FD series. The results demonstrate the superiority of FD over integer differencing, as confirmed by Receiver Operating Characteristic/Area Under the Curve (ROC-AUC) and Matthews Correlation Coefficient (MCC) evaluations.
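
    For readers unfamiliar with fractional differencing, the sketch below shows the standard fixed-width-window variant: the weights follow the binomial expansion of (1 - B)^d, with w_0 = 1 and w_k = -w_{k-1}(d - k + 1)/k, truncated once they fall below a tolerance. The function names, the truncation threshold, the example order d = 0.4, and the SPY column name are illustrative assumptions, not details taken from the study.

```python
import numpy as np
import pandas as pd

def frac_diff_weights(d: float, threshold: float = 1e-4, max_len: int = 1000) -> np.ndarray:
    """Weights w_k for fractional differencing of order d.
    w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k, truncated when |w_k| < threshold."""
    w = [1.0]
    for k in range(1, max_len):
        w_k = -w[-1] * (d - k + 1) / k
        if abs(w_k) < threshold:
            break
        w.append(w_k)
    return np.array(w)

def frac_diff(series: pd.Series, d: float, threshold: float = 1e-4) -> pd.Series:
    """Fixed-width-window fractional differencing of a price series."""
    w = frac_diff_weights(d, threshold)
    width = len(w)
    values = series.to_numpy(dtype=float)
    out = np.full(len(values), np.nan)
    for t in range(width - 1, len(values)):
        window = values[t - width + 1 : t + 1][::-1]   # most recent observation first
        out[t] = np.dot(w, window)
    return pd.Series(out, index=series.index)

# Hypothetical usage on a close-price series, e.g. a fractional order d = 0.4
# instead of the integer order d = 1:
# fd_close = frac_diff(spy["Close"], d=0.4)
```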

    Research on operation optimization of building energy systems based on machine learning

    Get PDF
    Doctoral thesis (Doctor of Engineering), The University of Kitakyushu. In this study, we focus on applying machine learning to optimize the operation of building energy systems, with a primary emphasis on reducing the operational costs of these systems and enhancing the self-sufficiency of renewable energy. This series of research outcomes brings new insights to the field and contributes to improving the economic efficiency of building energy systems.

    Biological Bayesian Optimizer

    Full text link
    Advances in the field of Artificial Intelligence over recent years have been enormous: new types of networks have appeared, and new applications in which these algorithms achieve very good results are found daily. Bayesian optimization is a technique focused on optimizing functions whose analytical expression is unknown, such as the one formed by the prediction errors of an algorithm of the kind mentioned above. In this work, Bayesian optimization and optimization with bio-inspired metaheuristics have been combined. A tool is presented that, given a stated problem, whether a function or an algorithm such as a deep neural network or any other kind, generates a recommendation of what could be a promising metaheuristic for a future exhaustive, in-depth optimization. To do so, it combines both techniques: a first search level in which the bio-inspired metaheuristics are applied, and a second level that optimizes the best results of the first. In this way, we obtain a metaheuristic that is known with certainty to be the best one found by the Bayesian optimization. The metaheuristics part was implemented using pygmo2, and the Bayesian optimization with SMAC3. In addition to building the program, a series of experiments was carried out to test its functionality and to compare whether the results obtained are better than a random search over the same search space. The experiments consist of 30 runs on a multilayer perceptron and on the XGBoost algorithm, for each of which the tool finally yields an optimized metaheuristic. These experiments were also carried out for the Branin mathematical function, among others not shown in this document. The results were not as expected: no relevant difference was observed between the doubly optimized results and the random searches of the experiments. One of the main reasons is the simplicity of the dataset used, due to computational limitations. It is expected that with more complicated problems the results will show a large improvement of the tool over a random search; this remains as future work.
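
    As an illustration of the first search level only: the sketch below runs a few of pygmo2's bio-inspired metaheuristics on the Branin function (one of the test functions mentioned above) and ranks them by best fitness. The particular algorithms, population size, generation count, and seed are arbitrary choices for the example; the second level, in which SMAC3 tunes the winning metaheuristic, is only indicated in a comment.

```python
import math
import pygmo as pg

class Branin:
    """Branin test function as a pygmo user-defined problem (UDP)."""
    def fitness(self, x):
        x1, x2 = x
        a, b, c = 1.0, 5.1 / (4 * math.pi ** 2), 5.0 / math.pi
        r, s, t = 6.0, 10.0, 1.0 / (8 * math.pi)
        return [a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1 - t) * math.cos(x1) + s]

    def get_bounds(self):
        # x1 in [-5, 10], x2 in [0, 15]
        return ([-5.0, 0.0], [10.0, 15.0])

prob = pg.problem(Branin())

# First level: run a few bio-inspired metaheuristics and rank them by best fitness found.
candidates = {
    "pso": pg.pso(gen=50),
    "de": pg.de(gen=50),
    "abc": pg.bee_colony(gen=50),
}
results = {}
for name, uda in candidates.items():
    algo = pg.algorithm(uda)
    pop = pg.population(prob, size=20, seed=42)
    pop = algo.evolve(pop)
    results[name] = pop.champion_f[0]

best = min(results, key=results.get)
print(results, "-> recommended metaheuristic:", best)
# Second level (omitted here): the thesis then tunes the parameters of the
# recommended metaheuristic with Bayesian optimization via SMAC3.
```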

    Hare: an investment service based on autonomous agents for trading on B3

    Get PDF
    Undergraduate thesis (graduação), Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2020. Stock markets play an essential role in the economy: they allow companies to grow and offer investors opportunities to generate income. In the machine learning literature for financial markets, many tools and techniques have been proposed and applied to analyze market behavior. However, understanding the stock exchange's intrinsic rules and, thus, seizing opportunities are far from trivial tasks. Addressing this challenge, this work proposes Hare: a new hybrid, autonomous, agent-oriented investment service created to trade equities in the stock market. It offers a reliable service based on technical and fundamental analysis, with high precision and stability in the decision-making process. Hare's intelligence core is modeled as a rational agent capable of perceiving the market and acting upon its perception autonomously. Two main modules were implemented to provide the agent's rationality: (i) a Movement Predictor Module (MPM), responsible for forecasting an asset's movement; and (ii) a Resource Allocation Module (MAR), which, given the predictions made by the agent, distributes its resources among the available assets, trying to generate the maximum profit at the lowest risk. As proof of concept, Hare was designed to manage a portfolio on the B3 stock exchange. The predictor module results showed that the proposed service can predict the rise or fall of a stock's price relative to the mean of the analyzed time window, with an accuracy of 82% in the worst case and 94% in the best case. Furthermore, the Resource Allocation Module achieved an 11.74% return managing a portfolio of 3 assets over the evaluated period, outperforming fixed-income investments and portfolios built with the Markowitz Mean-Variance model.
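
    As a rough illustration of the kind of target the Movement Predictor Module classifies (a rise or fall relative to the mean of a time window), the snippet below builds such a binary label from a series of closing prices. The function name, the 20-day window, and the exact definition of the target are assumptions made for the example and are not taken from the thesis.

```python
import pandas as pd

def label_rise_fall(close: pd.Series, window: int = 20) -> pd.Series:
    """Binary target: 1 if the next close is above the rolling mean of the
    current window, 0 otherwise (an illustrative reading of the MPM target)."""
    rolling_mean = close.rolling(window).mean()
    target = (close.shift(-1) > rolling_mean).astype(int)
    # Drop rows without a full window or without a next-day price.
    return target.iloc[window - 1 : -1]

# Hypothetical usage with a B3 ticker's daily closes loaded into `close`:
# y = label_rise_fall(close, window=20)
```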

    A long-term prediction approach based on long short-term memory neural networks with automatic parameter optimization by Tree-structured Parzen Estimator and applied to time-series data of NPP steam generators

    Get PDF
    Developing an accurate and reliable multi-step ahead prediction model is a key problem in many Prognostics and Health Management (PHM) applications. Inevitably, the further one attempts to predict into the future, the harder it is to achieve an accurate and stable prediction due to increasing uncertainty and error accumulation. In this paper, we address this problem by proposing a prediction model based on Long Short-Term Memory (LSTM), a deep neural network developed for dealing with the long-term dependencies in time-series data. Our proposed prediction model also tackles two additional issues. Firstly, the hyperparameters of the proposed model are automatically tuned by a Bayesian optimization algorithm, called Tree-structured Parzen Estimator (TPE). Secondly, the proposed model allows assessing the uncertainty of the prediction. To validate the performance of the proposed model, a case study considering steam generator data acquired from different French nuclear power plants (NPPs) is carried out. Alternative prediction models are also considered for comparison purposes.
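
    As a hedged sketch of the general recipe (TPE-driven hyperparameter tuning of an LSTM for multi-step prediction), the example below uses hyperopt's TPE implementation and a small Keras LSTM on a synthetic noisy sine wave rather than steam-generator data. The search space, window and horizon lengths, and training budget are placeholders; the paper's uncertainty assessment is not reproduced here.

```python
import numpy as np
import tensorflow as tf
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK

# Synthetic stand-in for a steam-generator signal: a noisy sine wave.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.standard_normal(2000)

def make_windows(series, window=50, horizon=10):
    """Slice a 1-D series into (input window, multi-step target) pairs."""
    X, y = [], []
    for i in range(len(series) - window - horizon):
        X.append(series[i : i + window])
        y.append(series[i + window : i + window + horizon])
    return np.array(X)[..., None], np.array(y)   # X shape: (samples, window, 1)

X, y = make_windows(signal)
split = int(0.8 * len(X))
X_tr, y_tr, X_val, y_val = X[:split], y[:split], X[split:], y[split:]

def objective(params):
    """Train a small LSTM with the sampled hyperparameters, return validation MSE."""
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(int(params["units"]), input_shape=X.shape[1:]),
        tf.keras.layers.Dense(y.shape[1]),      # one output per prediction step
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(params["lr"]), loss="mse")
    model.fit(X_tr, y_tr, epochs=5, batch_size=64, verbose=0)
    return {"loss": model.evaluate(X_val, y_val, verbose=0), "status": STATUS_OK}

space = {
    "units": hp.quniform("units", 16, 128, 16),
    "lr": hp.loguniform("lr", np.log(1e-4), np.log(1e-2)),
}
best = fmin(objective, space, algo=tpe.suggest, max_evals=20, trials=Trials())
print("TPE-selected hyperparameters:", best)
```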

    Applications of Physically Accurate Deep Learning for Processing Digital Rock Images

    Full text link
    Digital rock analysis aims to improve our understanding of the fluid flow properties of reservoir rocks, which are important for enhanced oil recovery, hydrogen storage, carbon dioxide storage, and groundwater management. X-ray microcomputed tomography (micro-CT) is the primary approach to capturing the structure of porous rock samples for digital rock analysis. Initially, the obtained micro-CT images are processed using image-based techniques, such as registration, denoising, and segmentation, depending on various requirements. Numerical simulations are then conducted on the digital models for petrophysical prediction. The accuracy of the numerical simulation depends strongly on the quality of the micro-CT images; image processing is therefore a critical step in digital rock analysis. Recent advances in deep learning have surpassed conventional methods for image processing. Herein, the utility of convolutional neural networks (CNN) and generative adversarial networks (GAN) is assessed for various applications in digital rock image processing, such as segmentation, super-resolution, and denoising. To obtain training data, different sandstone and carbonate samples were scanned using various micro-CT facilities. Validation images previously unseen by the trained neural networks were then used to evaluate the performance and robustness of the proposed deep learning techniques. Various threshold scenarios were applied to segment the reconstructed digital rock images for sensitivity analyses. Quantitative petrophysical analyses, such as porosity, absolute/relative permeability, and pore size distribution, were then implemented to estimate the physical accuracy of the digital rock data against the corresponding ground truth data. The results show that both CNN and GAN deep learning methods can provide physically accurate digital rock images with less user bias than traditional approaches. These results unlock new pathways for various applications related to the characterisation of porous reservoir rocks.
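
    As a minimal sketch of one of the listed applications (CNN-based segmentation of micro-CT images), the snippet below defines a small encoder-decoder network that maps a grayscale patch to a per-pixel pore probability. It is a simplified stand-in for the U-Net-style architectures commonly used for this task; the patch size, layer widths, and training call are assumptions for illustration only.

```python
import tensorflow as tf

def build_segmenter(patch_size=128):
    """Small encoder-decoder CNN for binary pore/grain segmentation of 2-D micro-CT patches."""
    inputs = tf.keras.Input(shape=(patch_size, patch_size, 1))
    # Encoder: two downsampling stages.
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # Decoder: two upsampling stages back to the input resolution.
    x = tf.keras.layers.UpSampling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.UpSampling2D()(x)
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel pore probability
    return tf.keras.Model(inputs, outputs)

model = build_segmenter()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Hypothetical usage with normalised grayscale patches `X` and binary masks `Y`:
# model.fit(X, Y, epochs=10, batch_size=8)
```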

    A long-term prediction approach based on long short-term memory neural networks with automatic parameter optimization by Tree-structured Parzen Estimator and applied to time-series data of NPP steam generators

    No full text
    Developing an accurate and reliable multi-step ahead prediction model is a key problem in many Prognostics and Health Management (PHM) applications. Inevitably, the further one attempts to predict into the future, the harder it is to achieve an accurate and stable prediction due to increasing uncertainty and error accumulation. In this paper, we address this problem by proposing a prediction model based on Long Short-Term Memory (LSTM), a deep neural network developed for dealing with the long-term dependencies in time-series data. Our proposed prediction model also tackles two additional issues. Firstly, the hyperparameters of the proposed model are automatically tuned by a Bayesian optimization algorithm, called Tree-structured Parzen Estimator (TPE). Secondly, the proposed model allows assessing the uncertainty of the prediction. To validate the performance of the proposed model, a case study considering steam generator data acquired from different French nuclear power plants (NPPs) is carried out. Alternative prediction models are also considered for comparison purposes.