
    Learning dynamics in feedforward neural networks

    Thesis (M.S.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (leaves 108-115). By Jagesh V. Shah.

    Using generative adversarial network as a value-at-risk estimator

    Value-at-risk (VaR) estimation is a critical task for modern financial institutions. Most methods for estimating VaR rely on classical statistics; they produce reliable estimates, but there is demand for ever more accurate ones. Recently there have been major breakthroughs for machine learning models in other fields, which has led to increasing interest in applying machine learning to financial applications. This thesis applies a data-driven machine learning method, the generative adversarial network (GAN), to VaR estimation. The GAN was originally proposed for image generation and has since found applications in multiple domains, including finance. Estimating the true underlying distribution of a financial time series is a notoriously difficult task; a GAN does not estimate the underlying distribution explicitly but instead learns to generate new samples from it. This thesis applies a basic GAN model to simulate stock market returns and then estimates VaR from the simulated samples. The experiments are conducted on the S&P 500 index, and the GAN model is compared to a simple historical simulation baseline. In the experiments it becomes evident that the GAN model lacks robustness and responds poorly to changes in the market. The GAN is unable to fully capture the statistical properties of stock market returns: it replicates only a little of the excess kurtosis present in stock market returns and some of the volatility clustering. The results show that the GAN model tends to estimate VaR within a fairly narrow range, in contrast to historical simulation, which can respond to changes in the stock market. Machine learning models, especially neural networks such as GANs, present challenges to financial practitioners: although they sometimes provide more accurate estimates than traditional methods, they lack transparency. GANs have shown promise in the literature but are unstable to train, and it is difficult to tell whether a trained GAN will work as intended.
    Regardless of these shortcomings, it is worthwhile to study GANs and other neural networks in finance, as they have performed exceptionally in other fields. Researchers must try to open up the black-box nature of these models; interpretability will allow their use in the financial industry. This thesis shows that more research is needed to provide robust estimates that can be relied on.
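    The historical simulation baseline mentioned in the abstract can be sketched in a few lines: VaR at level alpha is the empirical alpha-quantile of observed returns, reported as a positive loss. This is a minimal illustration of the baseline method, not the thesis's actual code; the parameter values are assumptions.

    ```python
    import numpy as np

    def historical_var(returns, alpha=0.01):
        """One-day VaR via historical simulation: the alpha-quantile
        of observed returns, sign-flipped so a loss is positive."""
        return -np.quantile(returns, alpha)

    # Toy example with simulated daily returns (assumed parameters)
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.01, size=1000)
    var_99 = historical_var(returns, alpha=0.01)  # 99% one-day VaR
    ```

    The GAN-based estimator in the thesis replaces the observed return history with samples drawn from the trained generator, then applies the same quantile computation.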

    Incorporating a priori knowledge into initialized weights for neural classifier

    Artificial neural networks (ANNs), especially multilayer perceptrons (MLPs), have been widely used in pattern recognition and classification. Nevertheless, how to incorporate a priori knowledge into the design of ANNs is still an open problem. This paper tries to give some insight on the topic, emphasizing weight initialization from three perspectives. Theoretical analyses and simulations are offered for validation.
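    The abstract does not state the specific initialization scheme. One common way to encode prior class knowledge into initial weights, shown here purely as a hypothetical illustration and not as the paper's method, is to set each first-layer weight vector to a normalized class prototype (class mean), so each hidden unit starts as a matched filter for one class.

    ```python
    import numpy as np

    def prototype_init(X, y, n_classes):
        """Initialize first-layer weights from class means
        (hypothetical knowledge-based initialization sketch)."""
        d = X.shape[1]
        W = np.zeros((n_classes, d))
        for c in range(n_classes):
            mu = X[y == c].mean(axis=0)          # class prototype
            W[c] = mu / (np.linalg.norm(mu) + 1e-12)  # unit-norm row
        return W

    # Toy data: two well-separated Gaussian classes (assumed)
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1, 0.1, (20, 2)),
                   rng.normal(+1, 0.1, (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    W0 = prototype_init(X, y, n_classes=2)
    ```

    Compared with random initialization, such weights start the network near a sensible decision boundary, which is the general motivation the abstract gives for knowledge-based initialization.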

    Techniques of replica symmetry breaking and the storage problem of the McCulloch-Pitts neuron

    In this article the framework for Parisi's spontaneous replica symmetry breaking is reviewed and subsequently applied to the example of the statistical mechanical description of the storage properties of a McCulloch-Pitts neuron. The technical details are reviewed extensively, with regard to the wide range of systems where the method may be applied. Parisi's partial differential equation and related differential equations are discussed, and a Green function technique is introduced for the calculation of replica averages, the key to determining the averages of physical quantities. The ensuing graph rules involve only tree graphs, as appropriate for a mean-field-like model. The lowest-order Ward-Takahashi identity is recovered analytically and is shown to lead to the Goldstone modes in continuous replica symmetry breaking phases. The need for a replica symmetry breaking theory in the storage problem of the neuron has arisen due to the thermodynamic instability of previously given solutions. Variational forms for the neuron's free energy are derived in terms of the order parameter function x(q), for different prior distributions of synapses. Analytically in the high-temperature limit, and numerically in generic cases, various phases are identified, among them one similar to the Parisi phase in the Sherrington-Kirkpatrick model. Extensive quantities like the error per pattern change slightly with respect to the known unstable solutions, but there is a significant difference in the distribution of non-extensive quantities like the synaptic overlaps and the pattern storage stability parameter. A simulation result is also reviewed and compared to the prediction of the theory. (103 LaTeX pages with REVTeX 3.0, including 15 figures; accepted for Physics Reports.)
