
    Leveraging Deep Learning and Online Source Sentiment for Financial Portfolio Management

    Financial portfolio management is the task of distributing funds and conducting trading operations on a set of financial assets, such as stocks, index funds, foreign exchange or cryptocurrencies, with the aim of maximizing profit while minimizing the losses incurred by those operations. Deep Learning (DL) methods have consistently excelled at a wide range of tasks, and automated financial trading is one of the most complex among them. This paper provides insight into various DL methods for financial trading, under both the supervised and reinforcement learning schemes, while also taking into consideration sentiment information regarding the traded assets; we discuss and demonstrate the usefulness of these methods through corresponding research studies. Finally, we discuss problems commonly encountered when training such financial agents and equip the reader with the knowledge necessary to avoid these problems and apply the discussed methods in practice.

    Residual acceleration data on IML-1: Development of a data reduction and dissemination plan

    The research performed consisted of three stages: (1) identification of sensitive IML-1 experiments and their sensitivity ranges through order-of-magnitude estimates, numerical modeling, and investigator input; (2) research and development towards the reduction, supplementation, and dissemination of residual acceleration data; and (3) implementation of the plan on existing acceleration databases.

    Data Augmentation for Time-Series Classification: An Extensive Empirical Study and Comprehensive Survey

    Data Augmentation (DA) has emerged as an indispensable strategy in Time Series Classification (TSC), primarily due to its capacity to amplify training samples, thereby bolstering model robustness, diversifying datasets, and curtailing overfitting. However, the current landscape of DA in TSC is plagued with fragmented literature reviews, nebulous methodological taxonomies, inadequate evaluative measures, and a dearth of accessible, user-oriented tools. In light of these challenges, this study embarks on an exhaustive dissection of DA methodologies within the TSC realm. Our initial approach involved an extensive literature review spanning a decade, revealing that contemporary surveys scarcely capture the breadth of advancements in DA for TSC, prompting us to meticulously analyze over 100 scholarly articles to distill more than 60 unique DA techniques. This rigorous analysis precipitated the formulation of a novel taxonomy, purpose-built for the intricacies of DA in TSC, categorizing techniques into five principal echelons: Transformation-Based, Pattern-Based, Generative, Decomposition-Based, and Automated Data Augmentation. Our taxonomy promises to serve as a robust navigational aid for scholars, offering clarity and direction in method selection. Addressing the conspicuous absence of holistic evaluations for prevalent DA techniques, we executed an all-encompassing empirical assessment, wherein upwards of 15 DA strategies were subjected to scrutiny across 8 UCR time-series datasets, employing ResNet and a multi-faceted evaluation paradigm encompassing Accuracy, Method Ranking, and Residual Analysis, yielding a benchmark accuracy of 88.94 ± 11.83%. Our investigation underscored the inconsistent efficacies of DA techniques, with…
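Of the five echelons in the survey's taxonomy, transformation-based techniques are the simplest to illustrate. The sketch below shows two widely used examples, jittering and magnitude scaling; these are generic illustrations under assumed noise parameters, not the specific implementations benchmarked in the study:

```python
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """Add small Gaussian noise to every time step (transformation-based DA)."""
    rng = rng or np.random.default_rng(0)
    return x + rng.normal(0.0, sigma, size=x.shape)

def magnitude_scale(x, sigma=0.1, rng=None):
    """Multiply the whole series by a single random factor drawn near 1."""
    rng = rng or np.random.default_rng(1)
    return x * rng.normal(1.0, sigma)

series = np.sin(np.linspace(0, 2 * np.pi, 128))  # toy univariate series
augmented = jitter(magnitude_scale(series))      # same shape, perturbed values
```

Both operations preserve the series length and label, which is why they are popular first choices before the more involved pattern-based or generative methods.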

    An intelligent recommender system based on short-term disease risk prediction for patients with chronic diseases in a telehealth environment

    Clinical decisions are usually made based on practitioners' experience, with limited support from data-centric analytic processes drawing on medical databases. This often leads to undesirable biases, human error and high medical costs, affecting the quality of service provided to patients. Recently, the use of intelligent technologies in clinical decision making in the telehealth environment has begun to play a vital role in improving the quality of patients' lives and reducing the costs and workload involved in their daily healthcare. In the telehealth environment, patients suffering from chronic diseases such as heart disease or diabetes have to take various medical tests, such as measurements of blood pressure, blood sugar and blood oxygen. This practice adversely affects the overall convenience and quality of their everyday lives. In this PhD thesis, an effective recommender system is proposed that utilizes a set of innovative disease risk prediction algorithms and models for short-term disease risk prediction, providing chronic disease patients with appropriate recommendations regarding the need to take a medical test on the coming day. The time series medical data obtained for each chronic disease patient are partitioned into consecutive sliding windows, which are analysed in both the time domain and the frequency domain. The time-domain data can be used for analysis directly, without any further conversion; for analysis in the frequency domain, the Fast Fourier Transform (FFT) and the Dual-Tree Complex Wavelet Transform (DTCWT) are applied to convert the data and extract frequency information.
    In the time domain, four innovative predictive algorithms, a Basic Heuristic Algorithm (BHA), a Regression-Based Algorithm (RBA) and a Hybrid Algorithm (HA), as well as a structural graph-based method (SG), are proposed to study the time series data and produce recommendations. In the frequency domain, three predictive classifiers, an Artificial Neural Network, a Least Squares Support Vector Machine, and Naïve Bayes, are used to produce recommendations. An ensemble machine learning model then combines all the predictive models and algorithms from both the time and frequency domains to produce the final recommendation. Two real-life telehealth datasets collected from chronic disease patients (i.e., heart disease and diabetes patients) are used for a comprehensive experimental evaluation in this study. The results show that the proposed system is effective in analysing time series medical data and providing accurate and reliable (very low risk) recommendations to patients suffering from chronic diseases such as heart disease and diabetes. This research will help provide high-quality, evidence-based intelligent decision support to chronic disease patients, significantly reducing the workload of medical checkups that would otherwise have to be conducted every day in a telehealth environment.
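The sliding-window partitioning and FFT-based frequency extraction described above can be sketched briefly; the window width, step, and number of retained bins below are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np

def sliding_windows(x, width, step=1):
    """Partition a 1-D series into consecutive (possibly overlapping) windows."""
    n = (len(x) - width) // step + 1
    return np.stack([x[i * step : i * step + width] for i in range(n)])

def fft_features(windows, k=2):
    """Magnitudes of the k lowest non-DC FFT bins of each window."""
    spectrum = np.abs(np.fft.rfft(windows, axis=1))
    return spectrum[:, 1 : k + 1]

# hypothetical daily blood-pressure readings for one patient
readings = np.asarray([120, 122, 125, 121, 119, 130, 128, 126, 124, 123], float)
windows = sliding_windows(readings, width=4, step=2)  # shape (4, 4)
features = fft_features(windows, k=2)                 # shape (4, 2)
```

The time-domain algorithms would consume `windows` directly, while the frequency-domain classifiers would be trained on features like `features`.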

    Development of machine learning models for short-term water level forecasting

    The impact of precise river flood forecasting and warnings, in preventing potential casualties and in promoting awareness and easing evacuation, is realized in reduced flood damage and the avoidance of loss of life. Machine learning models have been widely used in flood forecasting via discharge. However, using discharge can be inconvenient for issuing warnings, since discharge is not the direct measure for the early warning system. This paper focuses on water level prediction for the Storå River, Denmark, using several machine learning models. The study revealed that transforming the features to follow a Gaussian-like distribution did not further improve prediction accuracy. Adding data through different feature sets increased the prediction performance of the machine learning models, and using a hybrid method for feature selection improved performance as well. The Feed-Forward Neural Network gave the lowest mean absolute error and the highest coefficient of determination. The results indicated that the difference in mean absolute error between the Feed-Forward Neural Network and the Multiple Linear Regression model was 0.003 cm, so it was concluded that the Multiple Linear Regression model would be a good alternative when time, resources, or expert knowledge are limited.
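The Gaussian-like feature transformation evaluated above can take several forms, and the abstract does not specify which was used. As an assumption, the sketch below shows one common rank-based Gaussianization, applied to hypothetical water-level values:

```python
import numpy as np
from statistics import NormalDist

def rank_gauss(x):
    """Map a feature to a Gaussian-like distribution via its ranks."""
    order = np.argsort(np.argsort(x))       # 0..n-1 ranks (assumes no ties)
    quantiles = (order + 0.5) / len(x)      # strictly inside (0, 1)
    return np.array([NormalDist().inv_cdf(q) for q in quantiles])

levels = np.array([1.2, 5.0, 2.3, 9.9, 4.4, 3.1])  # hypothetical levels, cm
gaussed = rank_gauss(levels)  # same ordering, bell-shaped spread around 0
```

Because the transform is monotone, it reshapes each feature's distribution without changing the ordering of observations, which is why its benefit depends on the downstream model.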

    Pushing the Limits: Cognitive, Affective, and Neural Plasticity Revealed by an Intensive Multifaceted Intervention.

    Scientific understanding of how much the adult brain can be shaped by experience requires examination of how multiple influences combine to elicit cognitive, affective, and neural plasticity. Using an intensive multifaceted intervention, we discovered that substantial and enduring improvements can occur in parallel across multiple cognitive and neuroimaging measures in healthy young adults. The intervention elicited substantial improvements in physical health, working memory, standardized test performance, mood, self-esteem, self-efficacy, mindfulness, and life satisfaction. Improvements in mindfulness were associated with increased degree centrality of the insula, greater functional connectivity between the insula and somatosensory cortex, and reduced functional connectivity between the posterior cingulate cortex (PCC) and somatosensory cortex. Improvements in working memory and reading comprehension were associated with increased degree centrality of a region within the middle temporal gyrus (MTG) that was extensively and predominantly integrated with the executive control network. The scope and magnitude of the observed improvements represent the most extensive demonstration to date of the considerable human capacity for change. These findings point to higher limits for rapid and concurrent cognitive, affective, and neural plasticity than is widely assumed.

    An investigation of the predictive accuracy of salinity forecast using the source IMS for the Murray-Darling river

    The Murray-Darling Basin (MDB) is Australia's largest and most important river system. Today, the Murray-Darling Basin Authority (MDBA) manages and operates the river system through the oversight of key components such as water storage, quality, markets, trade, sharing and salinity. In order to provide defensible operational decisions and enable effective planning, the MDBA has developed a model of the Lower Murray-Darling River using the Source Integrated Modelling System (IMS). A key functionality of the model is its ability to forecast salinity, which enables justification of key water sharing and management decisions in relation to their effects on future salinity levels. The current prediction method is driven by three key inputs: salinity concentration (mg/L), flow (ML) and inflow salt load (tonnes). Currently, salinity and flow are forecast using trend or average functions, while inflow salt load is forecast by extrapolating the average of the most recent month forward. This research project determined the current accuracy of salinity predictions within a new Source model and investigated methods used to estimate and forecast additional salt loads between the reaches. It sought to improve the model's predictions by investigating a variety of data smoothing methods, in order to determine whether monthly averaging is the best representation of the salt inflow loads within the current model. The project then refined the existing forecast method using two approaches: trend extrapolation, and the application of an Artificial Neural Network (ANN). The results of the data smoothing analysis indicate that monthly averaging is the best representation of additional salt inflow used within the model.
    The results of the forecast analysis indicate that, rather than using the average of the most recent month for forecasting, trend methods may provide a more effective option. The research also found that the developed neural network was unable to recognize the patterns present in the salt inflow data well enough to produce an effective forecast. However, it highlighted that artificial neural networks are well suited to the prediction of water resource variables such as salinity and would make an excellent option for future research.
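The trend-extrapolation approach favoured by the forecast analysis can be sketched as a least-squares line fit projected forward. The salt-load values below are hypothetical, and this is an illustration of the general technique, not the MDBA model's actual forecast functions:

```python
import numpy as np

def trend_forecast(history, horizon):
    """Fit a linear trend to recent observations and extrapolate it forward."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)  # degree-1 least squares
    future = np.arange(len(history), len(history) + horizon)
    return slope * future + intercept

salt_load = np.array([10.0, 11.0, 12.1, 13.0, 13.9])  # tonnes, hypothetical
forecast = trend_forecast(salt_load, horizon=2)        # next two periods
```

Unlike extrapolating last month's average, the fitted slope lets the forecast continue a rising or falling salt-load trajectory rather than holding it flat.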

    A Transformer-based Framework For Multi-variate Time Series: A Remaining Useful Life Prediction Use Case

    In recent times, Large Language Models (LLMs) have captured a global spotlight and revolutionized the field of Natural Language Processing. One factor behind the effectiveness of LLMs is the model architecture used for training: the transformer. Transformer models excel at capturing contextual features in sequential data; since time series data are sequential, transformer models can be leveraged for more efficient time series prediction. The field of prognostics is vital to system health management and proper maintenance planning. A reliable estimate of the remaining useful life (RUL) of machines holds the potential for substantial cost savings, including avoiding abrupt machine failures, maximizing equipment usage, and serving as a decision support system (DSS). This work proposes an encoder-transformer framework for multivariate time series prediction in a prognostics use case. We validated the effectiveness of the proposed framework on all four subsets of the C-MAPSS benchmark dataset for the remaining useful life prediction task. To effectively transfer the knowledge and application of transformers from the natural language domain to time series, three model-specific experiments were conducted. In addition, to make the model aware of the initial stages of a machine's life and its degradation path, a novel expanding window method is proposed for the first time in this work; compared with the sliding window method, it led to a large improvement in the performance of the encoder-transformer model. Finally, the performance of the proposed encoder-transformer model was evaluated on the test dataset and compared with the results of 13 other state-of-the-art (SOTA) models from the literature; it outperformed them all, with an average performance increase of 137.65% over the next-best model across all the datasets.
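The contrast between the sliding window and the proposed expanding window can be sketched on a single sensor trace. This is only a schematic of the windowing idea under assumed lengths, not the paper's actual input pipeline:

```python
import numpy as np

def sliding(x, width):
    """Fixed-width windows: each sample sees only the latest `width` cycles."""
    return [x[i : i + width] for i in range(len(x) - width + 1)]

def expanding(x, min_width):
    """Growing windows: each sample sees the full history from cycle 0."""
    return [x[:i] for i in range(min_width, len(x) + 1)]

sensor = np.arange(6)                 # six cycles of one engine's sensor trace
s = sliding(sensor, width=3)          # 4 windows, each of length 3
e = expanding(sensor, min_width=3)    # 4 windows, of lengths 3, 4, 5, 6
```

Because every expanding window starts at cycle 0, each training sample retains the machine's early healthy phase and full degradation path, which is the awareness the authors argue the fixed-width sliding window discards.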