
    Machine Learning for Stock Prediction Based on Fundamental Analysis

    Application of machine learning to stock prediction has attracted a lot of attention in recent years. A large body of research has been conducted in this area, and multiple existing results have shown that machine learning methods can be used successfully for stock prediction based on historical data. Most of these existing approaches have focused on short-term prediction using stocks’ historical prices and technical indicators. In this thesis, we prepared 22 years’ worth of quarterly stock financial data and investigated three machine learning algorithms for stock prediction based on fundamental analysis: Feed-forward Neural Network (FNN), Random Forest (RF), and Adaptive Neural Fuzzy Inference System (ANFIS). In addition, we applied RF-based feature selection and bootstrap aggregation in order to improve model performance and aggregate predictions from different models. Our results show that the RF model achieves the best prediction results, and that feature selection is able to improve the test performance of FNN and ANFIS. Moreover, the aggregated model outperforms all baseline models as well as the benchmark DJIA index by an acceptable margin for the test period. Our findings demonstrate that machine learning models could be used to aid fundamental analysts with decision-making regarding stock investment
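    The RF-based feature selection and bootstrap aggregation described above can be sketched roughly as follows (a minimal Python/scikit-learn illustration on synthetic data, not the thesis' actual pipeline, features, or hyperparameters):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # stand-ins for 10 fundamental ratios
y = X[:, 0] * 0.5 + X[:, 3] * 0.3 + rng.normal(scale=0.1, size=200)

# Step 1: rank features by RF importance and keep the top k.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
top_k = np.argsort(rf.feature_importances_)[::-1][:5]
X_sel = X[:, top_k]

# Step 2: bootstrap-aggregate base learners on the selected features.
bag = BaggingRegressor(n_estimators=20, random_state=0).fit(X_sel, y)
preds = bag.predict(X_sel)
```

    The same two-step pattern (importance-based selection, then bagging) works with any base regressor.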

    A fast reduced order method for linear parabolic inverse source problems

    In this paper, we propose a novel, computationally efficient reduced order method to solve linear parabolic inverse source problems. Our approach provides accurate numerical solutions without relying on specific training data. The forward solution is constructed using a Krylov sequence, while the source term is recovered via the conjugate gradient (CG) method. Under a weak regularity assumption on the solution of the parabolic partial differential equations (PDEs), we establish convergence of the forward solution and provide a rigorous error estimate for our method. Numerical results demonstrate that our approach offers substantial computational savings compared to the traditional finite element method (FEM) while retaining equivalent accuracy
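    As a rough illustration of the CG building block mentioned above (not the authors' solver), conjugate gradient applied to a symmetric positive-definite system, here a discretized 1D Laplacian standing in for the parabolic operator:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 50
# Tridiagonal 1D Laplacian: symmetric positive-definite, so CG applies.
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)               # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b)
```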

    Effects of tendon viscoelasticity on the distribution of forces across sutures in a model tendon-to-bone repair

    Tears to the rotator cuff often require surgical repair. These repairs often fail by re-tearing when sutures break through the tendon in the weeks following repair. Numerous studies have been performed to identify suturing strategies that reduce this risk by balancing forces across sutures. However, the structural engineering basis for these approaches is still emerging, and the effects of tendon mechanics on load balancing are still unclear. Specifically, the effects of the viscoelastic nature of tendon on load sharing have not been established. With the aim of providing insight into this problem, this thesis studies how tendon viscoelasticity, tendon stiffness, and structural features such as the spacing of suture anchors affect the balance of forces across sutures. A discrete shear lag approach was developed, and the equations were solved numerically. Results from a model three-row sutured re-attachment demonstrated that optimized distributions of suture stiffnesses and of the spacing of suture anchors can balance the forces across sutures to within a few percent, even when accounting for tendon viscoelasticity. Non-optimized distributions resulted in concentrated forces, typically in the outermost sutures. The mathematical framework provides a foundation for optimizing suturing strategies, and the results underscore the importance of accounting for viscoelastic effects in the design of tendon-to-bone repairs
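    A heavily simplified, purely elastic spring analogue of the discrete shear lag idea (invented stiffness values, viscoelasticity ignored) shows how a non-optimized, uniform suture stiffness distribution concentrates load in the suture nearest the applied force:

```python
import numpy as np

k = np.array([1.0, 1.0, 1.0])    # uniform suture stiffnesses (arbitrary units)
kt = 5.0                          # tendon segment stiffness between anchors
F = 1.0                           # load applied at the free end (node 0)

# Static force balance K u = f for the three anchor-node displacements.
K = np.array([
    [k[0] + kt, -kt,          0.0      ],
    [-kt,       k[1] + 2*kt,  -kt      ],
    [0.0,       -kt,          k[2] + kt],
])
f = np.array([F, 0.0, 0.0])
u = np.linalg.solve(K, f)
suture_forces = k * u             # force carried by each suture; sums to F
# With uniform k, the suture nearest the load carries the largest share.
```

    Optimizing the stiffnesses k[i] (or the anchor spacing, which sets kt) is what evens out `suture_forces` in the full model.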

    Neural Network Models for Stock Selection Based on Fundamental Analysis

    Application of neural network architectures to financial prediction has been actively studied in recent years. This paper presents a comparative study of the feed-forward neural network (FNN) and the adaptive neural fuzzy inference system (ANFIS) for stock prediction using fundamental financial ratios. The study is designed to evaluate the performance of each architecture based on the relative return of the selected portfolios with respect to the benchmark stock index. The results show that both architectures possess the ability to separate winners from losers in a sample universe of stocks, and that the selected portfolios outperform the benchmark. Our study further indicates that FNN shows superior performance over ANFIS
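    A minimal FNN sketch in the spirit of the comparison above (scikit-learn's MLPClassifier on synthetic features and labels, not the paper's actual data or architecture):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))               # stand-ins for 8 financial ratios
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # 1 = "winner", 0 = "loser"

# One hidden layer of 16 units, trained to separate winners from losers.
fnn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
train_acc = fnn.score(X, y)                 # in-sample separation ability
```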

    Every Cloud Has a Silver Lining: Analysis of Negative Book Value Firms

    Negative book value firms have become more prevalent in recent years, rising from 0.41% of all Compustat firms in 1961 to 12.47% in 2016, with the highest representation in the healthcare, telecommunication, and computer electronics industries. To examine why these negative book value firms are not liquidated, we investigate firms’ 1) current accounting practices, 2) future investment opportunities, and 3) narrative investment disclosures. We first document that negative book value firms adopting a more conservative accounting practice in the current period are less likely to be liquidated. Next, we find that the liquidation likelihood is lower for negative book value firms with higher levels of future intangible investments. Furthermore, we employ a machine-learning-based latent Dirichlet allocation (LDA) approach to measure investment-oriented firm disclosures and find a lower liquidation likelihood for negative book value firms disclosing more future investment narratives. Our evidence is robust to including reorganization firms and extending the length of the liquidation window. Overall, our study sheds light on why negative book value firms are not liquidated by providing evidence on firms’ current accounting practices and future investment opportunities
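    The LDA step might look like the following sketch (invented toy disclosures, and scikit-learn rather than whatever implementation the authors used):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "we plan significant investment in research and new product development",
    "research spending and capital investment will expand next year",
    "quarterly revenue declined due to currency and pricing pressure",
    "operating revenue and pricing trends weakened during the quarter",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
topic_weights = lda.transform(counts)   # per-document topic mixture, rows sum to 1
# A document's weight on the "investment" topic can then proxy for how
# investment-oriented its narrative disclosure is.
```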

    Coreset selection can accelerate quantum machine learning models with provable generalization

    Quantum neural networks (QNNs) and quantum kernels stand as prominent figures in the realm of quantum machine learning, poised to leverage the nascent capabilities of near-term quantum computers to surmount classical machine learning challenges. Nonetheless, the training efficiency challenge poses a limitation on both QNNs and quantum kernels, curbing their efficacy when applied to extensive datasets. To confront this concern, we present a unified approach: coreset selection, aimed at expediting the training of QNNs and quantum kernels by distilling a judicious subset from the original training dataset. Furthermore, we analyze the generalization error bounds of QNNs and quantum kernels when trained on such coresets, revealing performance comparable to that of training on the complete original dataset. Through systematic numerical simulations, we illuminate the potential of coreset selection in expediting tasks encompassing synthetic data classification, identification of quantum correlations, and quantum compiling. Our work offers a useful way to improve diverse quantum machine learning models with a theoretical guarantee while reducing the training cost
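    One common coreset heuristic, greedy k-center selection, can be sketched as follows (the paper's exact selection rule may differ; this is a generic illustration):

```python
import numpy as np

def k_center_coreset(X, k, seed=0):
    """Greedily pick k indices so every point lies near a chosen center."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]
    dists = np.linalg.norm(X - X[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(dists.argmax())      # farthest point from the current set
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

X = np.random.default_rng(0).normal(size=(500, 4))
idx = k_center_coreset(X, k=50)        # train the model on X[idx] only
```

    Training on the 50-point coreset instead of all 500 points is where the claimed cost reduction comes from.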