15 research outputs found

    Predicting extreme events in the stock market using generative adversarial networks

    Get PDF
    Accurately predicting extreme stock market fluctuations at the right time allows traders and investors to make better-informed investment decisions and practice more efficient financial risk management. However, extreme stock market events are particularly hard to model because of their scarce and erratic nature. Moreover, robust trading strategies, market stress tests, and portfolio optimization rely heavily on sound data. While the application of generative adversarial networks (GANs) to stock forecasting has been an active area of research, there is still a gap in the literature on using GANs for extreme market movement prediction and simulation. In this study, we propose a GAN-based framework to efficiently model extreme movements in stock prices. By generating realistic synthetic data, the framework simulates multiple possible market-evolution scenarios, which can be used to improve forecasts of future market variations. The fidelity and predictive power of the generated data were tested with quantitative and qualitative metrics. Our experimental results on S&P 500 and five emerging market stock datasets show that the proposed framework is capable of producing realistic time series that recover important properties of the real data. The results presented in this work suggest that the underlying dynamics of extreme stock market variations can be captured efficiently by some state-of-the-art GAN architectures. This conclusion has great practical implications for investors, traders, and corporations willing to anticipate future trends in their financial assets. The proposed framework can be used as a simulation tool to mimic stock market behavior.
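    The abstract notes that the fidelity of the generated scenarios was tested with quantitative metrics, i.e. that synthetic returns should recover important properties of the real data. The paper's GAN itself is not reproduced here; the sketch below only illustrates, with entirely synthetic stand-in data, how two candidate scenario sets might be scored on tail properties (excess kurtosis and 1% value-at-risk), where a tail-blind Gaussian generator fails while a heavy-tailed one matches the "real" series.

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x):
    """Sample excess kurtosis; heavy-tailed return series score well above 0."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return np.mean(z**4) / np.mean(z**2) ** 2 - 3.0

def var_99(x):
    """1% value-at-risk: the loss threshold exceeded on the worst 1% of days."""
    return -np.percentile(x, 1)

n = 100_000
# Stand-in for real daily returns: Student-t innovations (heavy tails).
real = 0.01 * rng.standard_t(df=5, size=n)
# Two candidate "synthetic" scenario sets a generator might produce.
synthetic_heavy = 0.01 * rng.standard_t(df=5, size=n)   # tail-aware
synthetic_gauss = 0.01 * rng.standard_normal(size=n)    # tail-blind

for name, sample in [("heavy", synthetic_heavy), ("gauss", synthetic_gauss)]:
    print(f"{name}: excess kurtosis={excess_kurtosis(sample):.2f}, "
          f"VaR(99%)={var_99(sample):.4f} vs real VaR(99%)={var_99(real):.4f}")
```

    A generator that only matches the mean and variance would pass a moment check yet badly understate the 1% loss threshold, which is exactly the regime extreme-event modelling cares about.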

    Prediction of moisture saturation levels for vinylester composite laminates: a data-driven approach for predicting the behavior of composite materials

    Get PDF
    Presented at the 34th International Conference of the Polymer Processing Society, May 24, 2018. This paper introduces a comprehensive, data-driven method to predict the properties of composite materials, such as thermo-mechanical properties, moisture saturation level, durability, or other such important behavior. The approach is based on applying data mining techniques to the collective knowledge in the materials field. In this article, first, a comprehensive database is compiled from published research articles. Second, the Random Forests algorithm is used to build a predictive model that explains the investigated material response based on a wide variety of material and process variables (of different data types). This advanced statistical learning approach has the potential to drastically enhance the design of composite materials by selecting appropriate constituents and process parameters in order to optimize the response for a specific application. The method is demonstrated by predicting the moisture saturation level for vinylester-based composite laminates. Using 90% of the available published data as the training dataset, the Random Forests algorithm is used to develop a regression model for the moisture saturation level. Variables considered by the model include the manufacturing process, the fiber type and architecture, the fiber and void contents, the matrix filler type and content, as well as the conditioning environment and temperature. On this training data, the model proved to be a good fit, with a prediction accuracy of R^2(training)=94.96%. When used to predict the moisture saturation level for the remaining unseen 10% of the compiled data, the model exhibited a prediction accuracy of R^2(test)=85.28%. Furthermore, the Random Forests model allows the assessment of the impact of the different variables on the moisture saturation level.
    The fiber type is found to be the most important determinant of the moisture saturation level in vinylester composite laminates. Peer reviewed for the proceedings of the 34th International Conference of the Polymer Processing Society.
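    The workflow described above (90/10 split, Random Forests regression, R^2 scores, variable-importance ranking) can be sketched end to end. The data below is synthetic and the feature set is a hypothetical simplification of the variables named in the abstract, constructed so that fiber type dominates, mirroring the paper's finding; it is an illustration of the method, not the paper's dataset or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400

# Hypothetical predictors loosely mirroring the variables in the abstract.
fiber_type   = rng.integers(0, 3, n)       # 0=glass, 1=carbon, 2=flax (integer-coded)
fiber_frac   = rng.uniform(0.3, 0.7, n)    # fiber volume fraction
void_content = rng.uniform(0.0, 0.05, n)   # void fraction
temperature  = rng.uniform(20.0, 80.0, n)  # conditioning temperature (deg C)

# Synthetic "moisture saturation" target (wt%): fiber type dominates by design.
moisture = (np.array([0.3, 1.0, 2.2])[fiber_type]
            + 5.0 * void_content
            + 0.005 * temperature
            - 0.5 * fiber_frac
            + rng.normal(0.0, 0.05, n))

X = np.column_stack([fiber_type, fiber_frac, void_content, temperature])
X_tr, X_te, y_tr, y_te = train_test_split(X, moisture, test_size=0.1, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2_train = r2_score(y_tr, model.predict(X_tr))
r2_test  = r2_score(y_te, model.predict(X_te))
importances = model.feature_importances_  # impurity-based variable importance
print(f"R2(train)={r2_train:.3f}  R2(test)={r2_test:.3f}")
print(f"importances={importances.round(3)}")
```

    Integer-coding the categorical fiber type is acceptable for tree ensembles here; for a linear model, one-hot encoding would be needed instead.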

    A literature review on multi-echelon inventory management: the case of pharmaceutical supply chain

    No full text
    Inventory management remains a key challenge in supply chain management, and many companies recognize the benefits of a good inventory management system. Effective inventory management helps achieve a high customer service level while coping with demand variability. In a complex supply chain network where inventories are held across the entire system as raw materials or finished products, the need for an integrated approach to managing inventory has become crucial. Modelling the system as a multi-echelon inventory system makes it possible to consider all the factors related to inventory optimization. Moreover, the high criticality of pharmaceutical products makes sophisticated supply chain inventory management essential. Implementing multi-echelon inventory management in such supply chains helps keep pharmaceutical products in stock at the different installations. This paper provides insight into the multi-echelon inventory management problem, especially in the pharmaceutical supply chain. A classification of several multi-echelon inventory systems according to a set of criteria is provided, and a synthesis of multiple multi-echelon pharmaceutical supply chain problems is elaborated.
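    The core idea behind multi-echelon inventory management, as opposed to managing each installation in isolation, is that an upstream node's inventory position counts everything downstream of it. A minimal sketch of that bookkeeping for a hypothetical two-echelon chain (warehouse supplying a retailer) under an echelon base-stock policy; all quantities are invented for illustration:

```python
# Echelon inventory position in a two-echelon chain (warehouse -> retailer).
# Hypothetical numbers for illustration only.

def echelon_position(on_hand, in_transit_downstream, downstream_positions, backorders):
    """Echelon inventory position = local on-hand stock, plus everything
    downstream (in transit plus downstream positions), minus backorders."""
    return on_hand + in_transit_downstream + sum(downstream_positions) - backorders

def base_stock_order(echelon_pos, order_up_to):
    """Under an echelon base-stock policy, order the shortfall to the target S."""
    return max(0, order_up_to - echelon_pos)

retailer_pos = echelon_position(on_hand=40, in_transit_downstream=0,
                                downstream_positions=[], backorders=5)
warehouse_pos = echelon_position(on_hand=100, in_transit_downstream=20,
                                 downstream_positions=[retailer_pos], backorders=0)

print(retailer_pos, base_stock_order(retailer_pos, order_up_to=60))    # 35, order 25
print(warehouse_pos, base_stock_order(warehouse_pos, order_up_to=180)) # 155, order 25
```

    Because the warehouse's position already includes the retailer's, an echelon policy reacts to end-customer demand anywhere in the chain, which is what enables the system-wide optimization the review discusses.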

    Machine learning for survival analysis in cancer research: A comparative study

    No full text
    Overview: Survival analysis is at the basis of every study in the field of cancer research, as every endeavor in this field ultimately aims to improve patients’ survival time or reduce the potential for recurrence. This article presents a summary of some cancer survival analysis techniques and an up-to-date overview of different implementations of Machine Learning in this area of research. This paper also presents an empirical comparison of selected statistical and Machine Learning approaches on different types of cancer medical datasets. Methods: In this paper, we explore a selection of recent articles that review the use of Machine Learning in cancer research and/or benchmark the different Machine Learning techniques used in cancer survival analysis. This search resulted in 12 papers that were selected following certain criteria. Our aim is to assess the importance of Machine Learning for survival analysis in cancer research, compared to statistical methods, and how different Machine Learning techniques may perform in different settings in the context of cancer survival analysis. The techniques were selected based on their popularity. Cox Proportional Hazards with Ridge penalty, Random Survival Forests, Gradient Boosting for Survival Analysis with a Cox PH loss function, and linear and kernel Support Vector Machines were applied to 10 different cancer survival datasets. The mean Concordance Index and standard deviation were used to compare the performances of these techniques, and the results of these implementations were summarized and analyzed for noticeable patterns or trends. Kaplan-Meier plots were used for the non-parametric survival analysis of the different datasets. Results: Cox Proportional Hazards delivers results comparable to those of Machine Learning techniques, thanks to the Ridge penalty and the different methods for dealing with tied events, but fails to produce results on higher-dimensional datasets.
    All techniques benchmarked in the study had comparable performances. Using prognostic tools when there is a mismatch between the patients and the populations used to train the models may not be advisable, since each dataset produces a differently shaped survival curve even for a similar cancer type.
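    The Concordance Index used to compare all the techniques above is simple enough to compute directly. A self-contained sketch of Harrell's C on toy, invented data (not any of the study's datasets): it counts, among comparable pairs, how often a higher risk score corresponds to an earlier observed event, with censored subjects only usable as the later member of a pair.

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's C: fraction of comparable pairs the risk score orders correctly.
    A pair (i, j) is comparable when i's event time is earlier AND i's event
    was observed (events[i] == 1); higher risk should mean earlier event.
    Risk ties count as half-concordant."""
    times, events, risk = map(np.asarray, (times, events, risk))
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:       # censored subjects can't anchor a pair
            continue
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy data: 1 = event observed, 0 = censored.
times  = [2, 4, 6, 8, 10]
events = [1, 1, 0, 1, 0]
risk_good = [5, 4, 3, 2, 1]   # perfectly anti-ordered with time -> C = 1.0
risk_flat = [1, 1, 1, 1, 1]   # uninformative score -> C = 0.5
print(concordance_index(times, events, risk_good))  # 1.0
print(concordance_index(times, events, risk_flat))  # 0.5
```

    C = 0.5 corresponds to random ranking and C = 1.0 to perfect discrimination, which is why the study reports mean C-index (with its standard deviation) as the common yardstick across Cox, forest, boosting, and SVM models.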

    Deep Learning-Based Ship Speed Prediction for Intelligent Maritime Traffic Management

    No full text
    Improving maritime operations planning and scheduling can play an important role in enhancing the sector’s performance and competitiveness. In this context, accurate ship speed estimation is crucial to ensure efficient maritime traffic management. This study addresses the problem of ship speed prediction from a Vessel Traffic Services (VTS) perspective in an area of the Saint Lawrence Seaway. The challenge is to build a real-time predictive model that accommodates different routes and vessel types. This study proposes a data-driven solution based on deep learning sequence methods and historical ship trip data to predict ship speeds at different steps of a voyage. It compares three different sequence models and shows that they outperform the baseline ship speed rates used by the VTS. The findings suggest that deep learning models combined with maritime data can address the challenge of estimating ship speed. The proposed solution could provide accurate, real-time estimations of ship speed to improve shipping operational efficiency, navigation safety and security, and ship emissions estimation and monitoring.
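    The paper's deep sequence models are not reproduced here, but the one-step-ahead framing they share (predict the next speed from a window of recent speeds, and beat a fixed-rate baseline) can be sketched with a simple linear autoregressive stand-in on a synthetic speed profile. All data and window sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "speed over ground" profile (knots): slow cyclic variation plus noise.
t = np.arange(600)
speed = 12.0 + 3.0 * np.sin(0.1 * t) + rng.normal(0.0, 0.01, t.size)

def make_windows(series, k):
    """Frame the series for one-step-ahead prediction from the last k values."""
    X = np.column_stack([series[i:len(series) - k + i] for i in range(k)])
    return X, series[k:]

k = 4
X, y = make_windows(speed, k)
split = 500  # train on the first 500 windows, evaluate on the rest

# Linear autoregressive fit via least squares (with an intercept column).
A = np.column_stack([X[:split], np.ones(split)])
coef, *_ = np.linalg.lstsq(A, y[:split], rcond=None)
pred = np.column_stack([X[split:], np.ones(len(X) - split)]) @ coef

ar_mae = np.mean(np.abs(pred - y[split:]))
persistence_mae = np.mean(np.abs(X[split:, -1] - y[split:]))  # "speed stays the same"
print(f"AR({k}) MAE={ar_mae:.4f}  persistence MAE={persistence_mae:.4f}")
```

    A recurrent or attention-based sequence model replaces the linear map with a learned nonlinear one and can condition on route and vessel type, but the windowed framing and the comparison against a naive baseline are the same.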