
    Testing the Martingale Difference Hypothesis Using Neural Network Approximations

    The martingale difference restriction is an outcome of many theoretical analyses in economics and finance. A large body of econometric literature deals with tests of that restriction. We provide new tests based on radial basis function neural networks. Our work is based on the test design of Blake and Kapetanios (2000, 2003a,b). However, unlike that work, we can provide a formal theoretical justification for the validity of these tests using approximation results from Kapetanios and Blake (2007). These results take advantage of the link between the algorithms of Blake and Kapetanios (2000, 2003a,b) and boosting. We carry out a Monte Carlo study of the properties of the new tests and find that they have superior power performance to all existing tests of the martingale difference hypothesis we consider. An empirical application to the S&P500 constituents illustrates the usefulness of our new test.
    Keywords: martingale difference hypothesis, neural networks, boosting
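
    To make the idea concrete, here is a minimal sketch of an RBF-network-style predictability check: under the martingale difference hypothesis, past values carry no predictive content for y_t, so a significant fit from radial basis functions of lagged values is evidence against the null. The helper names, the random choice of centers, the fixed bandwidth, and the T·R² comparison with a chi-squared critical value are illustrative assumptions, not the exact Blake-Kapetanios construction.

    ```python
    import numpy as np

    def rbf_features(X, centers, width):
        """Gaussian radial basis expansion of the lag matrix X."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    def mdh_rbf_stat(y, n_lags=2, n_centers=5, seed=0):
        """T * R^2 from regressing y_t on RBF functions of its own lags.

        Under the martingale difference null E[y_t | past] = 0 the fit
        should be negligible; compare the statistic with a chi-squared
        critical value with n_centers degrees of freedom (a rough
        asymptotic heuristic, not the paper's exact test).
        """
        rng = np.random.default_rng(seed)
        T = len(y) - n_lags
        X = np.column_stack([y[i:i + T] for i in range(n_lags)])  # lagged values
        target = y[n_lags:]
        centers = X[rng.choice(T, size=n_centers, replace=False)]  # random centers
        Phi = np.column_stack([np.ones(T), rbf_features(X, centers, np.std(y))])
        beta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
        resid = target - Phi @ beta
        return T * (1.0 - resid.var() / target.var())

    rng = np.random.default_rng(1)
    noise = rng.standard_normal(500)    # martingale difference sequence
    ar = np.zeros(500)                  # predictable AR(1) alternative
    for t in range(1, 500):
        ar[t] = 0.5 * ar[t - 1] + noise[t]
    print(mdh_rbf_stat(noise), mdh_rbf_stat(ar))  # small vs. large statistic
    ```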

    Efficient Private ERM for Smooth Objectives

    In this paper, we consider efficient differentially private empirical risk minimization from the viewpoint of optimization algorithms. For strongly convex and smooth objectives, we prove that gradient descent with output perturbation not only achieves nearly optimal utility, but also significantly improves the running time of previous state-of-the-art private optimization algorithms, for both ε-DP and (ε, δ)-DP. For non-convex but smooth objectives, we propose an RRPSGD (Random Round Private Stochastic Gradient Descent) algorithm, which provably converges to a stationary point with a privacy guarantee. Besides the expected utility bounds, we also provide guarantees in high-probability form. Experiments demonstrate that our algorithm consistently outperforms existing methods in both utility and running time.
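
    As a concrete illustration of output perturbation, the sketch below runs plain gradient descent on a strongly convex (L2-regularized) logistic objective and then adds Gaussian noise calibrated to the sensitivity of the regularized minimizer. The function name, the sensitivity bound 2L/(nλ), and the Gaussian-mechanism noise scale follow standard DP-ERM analyses and are assumptions here, not the paper's exact constants or its RRPSGD variant.

    ```python
    import numpy as np

    def dp_erm_output_perturbation(X, y, lam=0.1, lr=0.1, iters=500,
                                   eps=1.0, delta=1e-5, lip=1.0, seed=0):
        """Gradient descent on L2-regularized logistic loss, followed by
        Gaussian output perturbation sized to the minimizer's sensitivity.

        Assumes labels y in {0, 1} and rows of X with norm <= 1 so that
        per-example loss gradients are bounded by `lip`; the sensitivity
        2 * lip / (n * lam) and the Gaussian-mechanism scale are standard
        DP-ERM choices, not necessarily the paper's exact constants.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(iters):                      # plain, noise-free GD
            p = 1.0 / (1.0 + np.exp(-(X @ w)))      # logistic predictions
            grad = X.T @ (p - y) / n + lam * w
            w -= lr * grad
        sensitivity = 2.0 * lip / (n * lam)         # swap-one-example bound
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        return w + rng.normal(0.0, sigma, size=d)   # (eps, delta)-DP release
    ```

    Because only the final iterate is released, the individual gradient steps need no clipping or added noise; privacy comes from the single Gaussian draw at the end, which is plausibly where the claimed running-time advantage over per-iteration noise methods originates.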