    Second-Order Stochastic Optimization for Machine Learning in Linear Time

    First-order stochastic methods are the state of the art in large-scale machine-learning optimization owing to their low per-iteration cost. Second-order methods, while able to provide faster convergence, have been much less explored because of the high cost of computing second-order information. In this paper we develop second-order stochastic methods for optimization problems in machine learning that match the per-iteration cost of gradient-based methods and, in certain settings, improve upon the overall running time of popular first-order methods. Furthermore, our algorithm has the desirable property of being implementable in time linear in the sparsity of the input data.
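    The abstract does not spell out the algorithm, but one standard way to get second-order updates at first-order cost is to estimate the Newton direction H⁻¹g with stochastic Hessian-vector products, each as cheap as a gradient on sparse inputs. The sketch below is illustrative only: the logistic-regression setting, the Neumann-series estimator, and all names are assumptions, not necessarily the paper's method.

```python
import numpy as np

def hvp(w, x, v):
    # Hessian-vector product for one logistic-regression example:
    # H_i v = sigma(x.w) * (1 - sigma(x.w)) * (x.v) * x, costing O(nnz(x)).
    s = 1.0 / (1.0 + np.exp(-x @ w))
    return s * (1.0 - s) * (x @ v) * x

def newton_direction(w, g, X, num_terms=30, scale=0.1, rng=None):
    # Truncated Neumann series u <- g + (I - scale * H_i) u with one random
    # example per term; in expectation u -> (scale * H)^{-1} g when the
    # scaled Hessian's spectrum lies in (0, 1). A rough sketch: practical
    # versions average several independent estimators for stability.
    rng = rng or np.random.default_rng(0)
    u = g.copy()
    for _ in range(num_terms):
        i = rng.integers(len(X))
        u = g + u - scale * hvp(w, X[i], u)
    return scale * u  # undo the rescaling to approximate H^{-1} g

def logistic_grad(w, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

# Toy run on synthetic data: each update uses only gradients and
# stochastic Hessian-vector products, so its per-iteration cost is of
# the same order as plain SGD.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)
w = np.zeros(10)
for _ in range(100):
    w -= newton_direction(w, logistic_grad(w, X, y), X, rng=rng)
```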

    Stochastic Dominance Efficiency Tests under Diversification

    This paper focuses on Stochastic Dominance (SD) efficiency in finite empirical panel data. We analytically characterize the sets of unsorted time series that dominate a given evaluated distribution by First-, Second-, and Third-order SD. Using these insights, we develop simple Linear Programming and 0-1 Mixed Integer Linear Programming tests of SD efficiency. The advantage over earlier efficiency tests is that the proposed approach explicitly accounts for diversification. Allowing for diversification can both improve the power of empirical SD tests and enable SD-based portfolio optimization. A simple numerical example illustrates the SD efficiency tests. A discussion of the application potential and future research directions concludes.
    Keywords: Stochastic Dominance, Portfolio Choice, Efficiency, Diversification, Mathematical Programming
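    The abstract names LP and 0-1 MILP tests without giving formulations. Below is a minimal sketch of the kind of LP such an SSD test with diversification can build on, using the standard result that, for equal-probability empirical distributions, a portfolio Xλ SSD-dominates y iff Xλ ≥ Wy elementwise for some doubly stochastic matrix W. The function name, the mean-maximizing objective (a necessary-condition check only), and the no-short-sales constraint are illustrative assumptions rather than the paper's exact test; the FSD analogue restricts W to permutation matrices, which is what yields a 0-1 MILP.

```python
import numpy as np
from scipy.optimize import linprog

def ssd_efficiency_lp(X, y, tol=1e-8):
    # X: (T, n) unsorted asset return series; y: (T,) returns of the
    # evaluated portfolio, assumed formed from the same assets so the
    # problem is feasible (lam replicating y with W = I).
    # Variables: lam (n weights), then vec(W) (T*T entries, row-major).
    T, n = X.shape
    m = n + T * T

    c = np.zeros(m)
    c[:n] = -X.mean(axis=0)            # maximize the dominating mean

    A_ub = np.zeros((T, m))            # W @ y - X @ lam <= 0 elementwise
    A_ub[:, :n] = -X
    for t in range(T):
        A_ub[t, n + t * T: n + (t + 1) * T] = y
    b_ub = np.zeros(T)

    A_eq = np.zeros((1 + 2 * T, m))    # budget + doubly stochastic W
    A_eq[0, :n] = 1.0                  # sum(lam) = 1
    for t in range(T):
        A_eq[1 + t, n + t * T: n + (t + 1) * T] = 1.0  # row sums = 1
        A_eq[1 + T + t, n + t::T] = 1.0                # column sums = 1
    b_eq = np.ones(1 + 2 * T)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  method="highs")      # default bounds give lam, W >= 0
    theta = -res.fun - y.mean()        # mean gain of best dominating mix
    # theta > 0 proves y is SSD-dominated (inefficient); theta <= 0 does
    # not by itself prove efficiency, since dominance with equal means
    # is possible -- hence "necessary-condition check".
    return theta > tol, theta

# Toy example: test an equally weighted portfolio of four synthetic assets.
rng = np.random.default_rng(0)
X = rng.normal(0.01, 0.05, size=(30, 4))
y = X @ np.full(4, 0.25)
print(ssd_efficiency_lp(X, y))
```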