1,072 research outputs found

    Forecasting currency exchange rate time series with fireworks-algorithm-based higher order neural network with special attention to training data enrichment

    Exchange rates are highly volatile by nature and thus difficult to forecast. Artificial neural networks (ANN) have proved superior to statistical methods, but because ANN-based forecasts are data driven, inadequate training data may lead the model to a suboptimal solution and poor accuracy. To enhance forecasting accuracy, we suggest enriching the training dataset by exploring and incorporating virtual data points (VDPs) using an evolutionary method, the fireworks-algorithm-trained functional link artificial neural network (FWA-FLN). The model maintains the correlation between current and past data, especially at oscillation points in the time series. The exploration of a VDP and the forecast of the succeeding term are carried out consecutively by the FWA-FLN. Real exchange rate time series are used to train and validate the proposed model. The efficiency of the proposed technique is compared with that of other models trained on the same data, and it produces far better prediction accuracy.
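The two ingredients of this abstract, training-data enrichment with virtual data points and a functional-link expansion, might be sketched roughly as follows. The midpoint interpolation and the trigonometric basis are illustrative assumptions only; the paper explores VDPs with the fireworks algorithm rather than simple interpolation:

```python
import numpy as np

def functional_link_expand(x):
    """Trigonometric functional expansion in the style of FLANN models
    (a common choice; the paper's exact basis is an assumption)."""
    return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x)])

def enrich_with_vdps(series):
    """Insert a virtual data point between each pair of observations.
    Midpoint interpolation stands in for the FWA-explored VDPs."""
    out = []
    for a, b in zip(series[:-1], series[1:]):
        out.extend([a, (a + b) / 2.0])
    out.append(series[-1])
    return out

series = [1.10, 1.12, 1.08, 1.15]   # toy exchange-rate values
enriched = enrich_with_vdps(series)
print(len(enriched))  # 7: the original 4 points plus 3 virtual midpoints
```

The enriched series roughly doubles the training data while preserving the local correlation structure between neighbouring observations.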

    Investigating the Predictability of a Chaotic Time-Series Data using Reservoir Computing, Deep-Learning and Machine-Learning on the Short-, Medium- and Long-Term Pricing of Bitcoin and Ethereum

    This study investigates the predictability of chaotic time-series data using reservoir computing (Echo State Network), deep learning (LSTM), and machine learning (Linear, Bayesian, ElasticNetCV, Random Forest, and XGBoost regression, plus a machine learning neural network) on the short-term (1-day-out), medium-term (5-day-out), and long-term (30-day-out) pricing of Bitcoin and Ethereum. A range of machine learning tools is used to perform feature selection by permutation importance over technical indicators for each cryptocurrency, ensuring the dataset is best suited to prediction per cryptocurrency while reducing noise within the models. The predictability of the two chaotic time series is then compared to find the best-fitting model. The models are fine-tuned: hyperparameters, the network design within the LSTM, and the reservoir size within the Echo State Network are adjusted to improve accuracy and speed. This research highlights the effect of trends within each cryptocurrency on the predictive models; the models are optimised with hyperparameter tuning and evaluated across the two currencies. It is found that the datasets for each cryptocurrency differ, owing to different permutation importances, but this does not affect the overall predictability of the models: the short- and medium-term predictions share the same top-performing models. This research confirms that although chaotic data can yield positive results for short- and medium-term prediction, for long-term prediction technical-analysis-based prediction is not sufficient.
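The Echo State Network at the core of the reservoir-computing approach can be sketched in a few lines: a fixed random reservoir is driven by the input, and only a linear readout is trained. This is a generic minimal ESN on a synthetic sine series standing in for price data; the reservoir size, spectral radius, and ridge penalty are illustrative choices, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n, spectral_radius=0.9):
    """Random recurrent weights rescaled to a target spectral radius,
    the standard echo-state stability heuristic."""
    W = rng.standard_normal((n, n))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W

def esn_states(u, W, W_in):
    """Drive the reservoir with input sequence u; collect its states."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy 1-step-ahead task on a sine wave (a stand-in for price data).
t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t)
n_res = 50
W = make_reservoir(n_res)
W_in = rng.standard_normal(n_res)

X = esn_states(series[:-1], W, W_in)
y = series[1:]
# Ridge-regression readout: the only trained part of an ESN.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Because only the readout is fitted, changing the reservoir size (as the study does during tuning) is cheap compared with retraining an LSTM.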

    A State-of-the-Art Review of Time Series Forecasting Using Deep Learning Approaches

    Time series forecasting has recently emerged as a crucial study area with a wide spectrum of real-world applications. The complexity of data processing stems from the sheer amount of data processed in the digital world. Despite a long history of successful time-series research using classic statistical methodologies, these methods have limitations in dealing with enormous amounts of data and with non-linearity. Deep learning techniques effectively handle the complicated nature of time series data. Deep learning approaches such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and autoencoders, together with techniques such as attention mechanisms, transfer learning, and dimensionality reduction, are discussed with their merits and limitations. The performance evaluation metrics used to validate model accuracy are also discussed. This paper reviews various time series applications using deep learning approaches with their benefits, challenges, and opportunities.
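The point-forecast evaluation metrics such a review typically covers (RMSE, MAE, MAPE) take only a few lines to compute. This is a generic sketch, not code from the paper:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Common point-forecast error metrics: RMSE, MAE, and MAPE
    (the percentage error assumes no zero actual values)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(np.mean(np.abs(err / y_true)) * 100),
    }

m = forecast_metrics([100, 110, 120], [102, 108, 123])
print(m)
```

Scale-dependent metrics (RMSE, MAE) suit comparisons on one series, while percentage metrics (MAPE) allow comparison across series of different scales.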

    Improving Demand Forecasting: The Challenge of Forecasting Studies Comparability and a Novel Approach to Hierarchical Time Series Forecasting

    Demand forecasts are essential in business. Based on expected customer demand, companies decide, for example, which products to develop, how many factories to build, how much staff to hire, or how much raw material to order. Errors in demand forecasts can have serious consequences, lead to poor decisions, and in the worst case drive a company into bankruptcy. In many cases, however, anticipating actual future demand is complex. The influencing factors can be manifold, for example macroeconomic developments, competitor behaviour, or technological change. Even when all influencing factors are known, their relationships and interactions are often hard to quantify. This dissertation contributes to improving the accuracy of demand forecasts. In the first part of the thesis, within a comprehensive overview of the whole spectrum of application areas of demand forecasting, a novel approach is introduced for systematically comparing demand forecasting studies, and it is applied to 116 recent studies. Improving the comparability of studies is a substantial contribution to current research: unlike in medical research, for example, there are no major comparative quantitative meta-studies for demand forecasting. The reason is that empirical demand forecasting studies do not use a unified description of their data, methods, and results. If, in contrast, studies can be compared directly through a systematic description, other researchers can better analyse how variations in approach affect forecast accuracy, without the laborious need to re-run empirical experiments already described in published studies.
    This thesis introduces such a descriptive framework for the first time. The remainder of the thesis deals with forecasting methods for intermittent time series, i.e. time series with a substantial share of zero demands. This type of time series does not satisfy the continuity assumptions of most forecasting methods, which is why common methods often achieve insufficient forecast accuracy. Nevertheless, intermittent time series are highly relevant; spare parts in particular typically exhibit this demand pattern. First, this thesis shows in three studies that even the tested state-of-the-art machine learning approaches bring no general improvement on several well-known data sets. As a key contribution to research, the thesis then presents a novel method: the Similarity-based Time Series Forecasting (STSF) approach uses an aggregation-disaggregation scheme based on a self-generated hierarchy of statistical properties of the time series. Any available forecasting algorithm can be used within the STSF approach, since aggregation satisfies the continuity condition. In experiments on seven publicly available data sets and one proprietary data set, the thesis shows that forecast accuracy (measured by the Root Mean Square Error, RMSE) improves statistically significantly by 1-5% on average compared with the same method applied without STSF. The method thus delivers a substantial improvement in forecast accuracy. In summary, this dissertation makes a substantial contribution to the current state of research through the aforementioned methods. The proposed framework for standardising empirical studies accelerates research progress by enabling comparative studies.
    And with the STSF method, an approach is available that reliably improves forecast accuracy and can be used flexibly with different types of forecasting algorithms. To the best of the knowledge gained from the comprehensive literature review, no comparable approaches have been described to date.
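The aggregation-disaggregation idea behind STSF can be illustrated on a toy intermittent series: temporal aggregation removes the zeros that violate the continuity assumption, any forecaster is applied to the aggregate, and the forecast is split back using historical within-block demand shares. The share-based split below is a simple stand-in for the thesis's hierarchy of statistical properties:

```python
import numpy as np

def aggregate(series, k):
    """Sum non-overlapping blocks of length k; aggregation removes the
    zeros that break the continuity assumption of standard forecasters."""
    n = len(series) - len(series) % k
    return np.asarray(series[:n]).reshape(-1, k).sum(axis=1)

def disaggregate(agg_forecast, series, k):
    """Split an aggregate forecast back into k periods using historical
    within-block demand shares (an illustrative stand-in for STSF's
    similarity-based hierarchy)."""
    n = len(series) - len(series) % k
    blocks = np.asarray(series[:n]).reshape(-1, k)
    shares = blocks.sum(axis=0) / blocks.sum()
    return agg_forecast * shares

demand = [0, 3, 0, 0, 2, 0, 0, 4, 0]   # intermittent spare-part demand
agg = aggregate(demand, 3)             # -> [3, 2, 4], no zeros left
naive_forecast = agg.mean()            # any forecasting algorithm fits here
print(disaggregate(naive_forecast, demand, 3))
```

Because the forecaster only ever sees the zero-free aggregate series, the scheme is agnostic to which forecasting algorithm is plugged in, which is the flexibility the abstract emphasises.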

    Targeted aspect-based emotion analysis to detect opportunities and precaution in financial Twitter messages

    Microblogging platforms, of which Twitter is a representative example, are valuable information sources for market screening and financial models. In them, users voluntarily provide relevant information, including educated knowledge on investments, reacting to the state of the stock markets in real time and, often, influencing this state. We are interested in user forecasts in financial social media messages expressing opportunities and precautions about assets. We propose a novel Targeted Aspect-Based Emotion Analysis (TABEA) system that can individually discern the financial emotions (positive and negative forecasts) on the different stock market assets in the same tweet, instead of making an overall guess about the whole tweet. It is based on Natural Language Processing (NLP) techniques and Machine Learning streaming algorithms. The system comprises a constituency parsing module for parsing the tweets and splitting them into simpler declarative clauses; an offline data processing module to engineer textual, numerical, and categorical features and analyse and select them based on their relevance; and a stream classification module to continuously process tweets on the fly. Experimental results on a labelled data set endorse our solution: it achieves over 90% precision for the target emotions, financial opportunity and precaution, on Twitter. To the best of our knowledge, no prior work in the literature has addressed this problem despite its practical interest in decision-making, and we are not aware of any previous NLP or online Machine Learning approaches to TABEA. Xunta de Galicia | Ref. ED481B-2021-118. Xunta de Galicia | Ref. ED481B-2022-093. Funded for open access publication: Universidade de Vigo/CISU
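The per-target idea, splitting a tweet into clauses and scoring each asset within its own clause, can be illustrated with a toy lexicon-based sketch. The splitting rule, the lexicon, and the ticker format are illustrative assumptions; the actual system uses constituency parsing and trained stream classifiers:

```python
import re

# Illustrative lexicons; the real system learns from labelled features.
OPPORTUNITY = {"buy", "moon", "breakout", "undervalued"}
PRECAUTION = {"sell", "crash", "overbought", "dump"}

def targeted_emotions(tweet, tickers):
    """Assign an emotion per asset by scoring only the clause in which
    each ticker appears, rather than the tweet as a whole."""
    results = {}
    for clause in re.split(r"[,;.]| but ", tweet.lower()):
        words = set(clause.split())
        for t in tickers:
            if t.lower() in words:
                if words & OPPORTUNITY:
                    results[t] = "opportunity"
                elif words & PRECAUTION:
                    results[t] = "precaution"
    return results

tweet = "Time to buy $AAPL, but $TSLA looks overbought and ready to dump."
print(targeted_emotions(tweet, ["$AAPL", "$TSLA"]))
```

A whole-tweet classifier would have to pick one label for this example; clause-level targeting recovers the opposite forecasts on the two assets, which is the point of the targeted approach.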

    Deep learning: parameter optimization using proposed novel hybrid bees Bayesian convolutional neural network

    Deep Learning (DL) is a type of machine learning used to model big data and extract complex relationships, with the advantage of automatic feature extraction. This paper presents a review of DL, covering its network topologies along with their advantages, limitations, and applications. The most popular Deep Neural Network (DNN) is the Convolutional Neural Network (CNN); the review found that the most important open issue is designing better CNN topologies, which must be addressed to improve CNN performance further. This paper addresses the problem by proposing a novel nature-inspired hybrid algorithm that combines the Bees Algorithm (BA), which mimics the foraging behaviour of honey bees, with Bayesian Optimization (BO) to increase the overall performance of the CNN; the result is referred to as BA-BO-CNN. Applying the hybrid algorithm to the Cifar10DataDir benchmark image data increased the validation accuracy from 80.72% to 82.22%; applying it to the digits dataset gave the same accuracy as the existing original CNN and BO-CNN but reduced the computational time by 3 min and 12 s; and applying it to concrete crack images produced results almost identical to those of existing algorithms.
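The Bees Algorithm half of the hybrid can be sketched on a toy one-dimensional objective standing in for CNN validation accuracy. Everything here is an illustrative assumption: the real BA-BO-CNN evaluates trained networks and couples the search with Bayesian Optimization rather than the plain global scouts shown:

```python
import random

random.seed(1)

def objective(x):
    """Toy stand-in for CNN validation accuracy as a function of one
    hyperparameter; maximised at x = 3."""
    return -(x - 3.0) ** 2

def bees_search(n_scouts=10, n_best=3, n_recruits=5, radius=0.5, iters=30):
    """Minimal Bees Algorithm: scouts sample globally, recruited bees
    refine the best sites in shrinking neighbourhoods."""
    sites = [random.uniform(0, 10) for _ in range(n_scouts)]
    for _ in range(iters):
        sites.sort(key=objective, reverse=True)
        new_sites = []
        for s in sites[:n_best]:  # local neighbourhood search
            recruits = [s + random.uniform(-radius, radius)
                        for _ in range(n_recruits)]
            new_sites.append(max(recruits + [s], key=objective))
        # remaining scouts keep exploring the space globally
        new_sites += [random.uniform(0, 10)
                      for _ in range(n_scouts - n_best)]
        sites = new_sites
        radius *= 0.9  # shrink neighbourhoods over time
    return max(sites, key=objective)

best = bees_search()
print(round(best, 2))  # close to the optimum at 3.0
```

In the hybrid, each `objective` call would be an expensive CNN training run, which is why pairing BA's exploration with a BO surrogate model to cut evaluations is attractive.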