
    Variants of SGD for Lipschitz Continuous Loss Functions in Low-Precision Environments

    Motivated by neural network training in low-bit floating- and fixed-point environments, this work studies the convergence of variants of SGD with computational error. Considering a general stochastic Lipschitz continuous loss function, a novel convergence result to a Clarke stationary point is presented, assuming that only an approximation of the stochastic gradient can be computed and that the SGD step itself is computed with error. Different variants of SGD are then tested empirically in a variety of low-precision arithmetic environments, achieving improved test set accuracy compared to SGD on two image recognition tasks.
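
    A minimal sketch of the kind of error model studied here, assuming a simple fixed-point rounding rule; the quantization grid, the toy non-smooth loss, and all hyperparameters below are illustrative choices, not taken from the paper.

```python
import numpy as np

def quantize(x, scale=2**-8):
    """Round to a fixed-point grid to mimic low-precision arithmetic."""
    return np.round(x / scale) * scale

def sgd_with_error(subgrad, x0, lr=0.05, steps=2000, rng=None):
    """SGD in which both the stochastic (sub)gradient and the update
    itself are perturbed by rounding, echoing the two error sources
    described above: approximate gradients and error in the step."""
    rng = np.random.default_rng(rng)
    x = x0.copy()
    for _ in range(steps):
        g = quantize(subgrad(x, rng))   # approximate stochastic gradient
        x = quantize(x - lr * g)        # error in computing the step itself
    return x

# Toy non-smooth problem: f(x) = E[|x - Z|] with Z ~ N(0, 1), whose
# stochastic subgradient is sign(x - Z); the minimizer is x = 0.
x = sgd_with_error(lambda x, rng: np.sign(x - rng.normal(size=x.shape)),
                   x0=np.ones(3), rng=0)
print(x)  # close to 0, a Clarke stationary point of the toy loss
```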

    Chance Constrained Optimization for Targeted Internet Advertising

    We introduce a chance constrained optimization model for the fulfillment of guaranteed display Internet advertising campaigns. The proposed formulation for the allocation of display inventory accounts for the uncertainty in the supply of Internet viewers. We discuss theoretical and computational features of the model via Monte Carlo sampling and convex approximations. Theoretical upper and lower bounds are presented along with numerical substantiation.
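
    As a toy illustration of the Monte Carlo sampling idea, the sketch below checks a sampled chance constraint requiring that promised impressions be coverable by the random viewer supply in at least a 1 − ε fraction of scenarios; the Gaussian supply model and all numbers are invented for the example, not taken from the paper.

```python
import numpy as np

def chance_feasible(alloc, supply_samples, eps=0.05):
    """Sample approximation of the chance constraint
    P(total allocation <= viewer supply) >= 1 - eps."""
    return np.mean(alloc.sum() <= supply_samples) >= 1 - eps

rng = np.random.default_rng(0)
supply = rng.normal(loc=1000.0, scale=100.0, size=10_000)  # supply scenarios
alloc = np.array([300.0, 250.0, 200.0])                    # per-campaign impressions
print(chance_feasible(alloc, supply))  # True: 750 sits below the 5% supply quantile
```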

    Risk management under Omega measure

    We prove that the Omega measure, which considers all moments of the return distribution when assessing portfolio performance, is equivalent to the widely used Sharpe ratio under jointly elliptically distributed returns. Portfolio optimization of the Sharpe ratio is then explored, with an active-set algorithm presented for markets prohibiting short sales. When asymmetric returns are considered, we show that the Omega measure and the Sharpe ratio lead to different optimal portfolios.
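
    For reference, the two performance measures at play, stated in common textbook form (the paper's exact notation may differ): for a return R with distribution function F, threshold θ, and risk-free rate r_f,

```latex
\Omega(\theta)
  = \frac{\int_{\theta}^{\infty} \bigl(1 - F(r)\bigr)\,dr}
         {\int_{-\infty}^{\theta} F(r)\,dr}
  = \frac{\mathbb{E}\bigl[(R-\theta)_{+}\bigr]}{\mathbb{E}\bigl[(\theta-R)_{+}\bigr]},
\qquad
S = \frac{\mathbb{E}[R] - r_f}{\sigma(R)}.
```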

    Perturbed Iterate SGD for Lipschitz Continuous Loss Functions

    This paper presents an extension of stochastic gradient descent for the minimization of Lipschitz continuous loss functions. Using the Clarke ε-subdifferential, we prove non-asymptotic convergence bounds to an approximate stationary point in expectation. Our results hold under the assumption that the stochastic loss function is a Carathéodory function that is almost everywhere Lipschitz continuous in the decision variables. To the best of our knowledge, this is the first non-asymptotic convergence analysis under these minimal assumptions. Our motivation is non-convex, non-smooth stochastic optimization problems, which arise frequently in applications such as machine learning. We present numerical results from training a feedforward neural network, comparing our algorithm to stochastic gradient descent.
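
    A rough sketch of the perturbed-iterate idea, under the assumption that the perturbation is a small uniform draw around the current iterate; the paper's exact perturbation law, step sizes, and toy problem below are illustrative guesses, not its stated method.

```python
import numpy as np

def perturbed_iterate_sgd(subgrad, x0, lr=0.01, radius=0.01, steps=2000, rng=None):
    """The stochastic (sub)gradient is evaluated at a randomly perturbed
    copy of the iterate, which connects the analysis to the Clarke
    epsilon-subdifferential (subgradients at nearby points)."""
    rng = np.random.default_rng(rng)
    x = x0.copy()
    for _ in range(steps):
        u = rng.uniform(-radius, radius, size=x.shape)  # perturb the iterate
        x = x - lr * subgrad(x + u, rng)                # update the true iterate
    return x

# Toy problem: f(x) = ||x||_1 observed through a noisy subgradient sign(x) + 0.1 Z.
x = perturbed_iterate_sgd(lambda y, rng: np.sign(y) + 0.1 * rng.normal(size=y.shape),
                          x0=np.full(3, 2.0), rng=1)
print(x)  # oscillates near 0, an approximate stationary point
```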

    Applications of Chance Constrained Optimization in Operations Management

    In this thesis we explore three applications of chance constrained optimization in operations management. We first investigate the effect of consumer demand estimation error on new product production planning. An inventory model is proposed in which demand is influenced by price and advertising. The effect of misspecifying the demand model's parameters on profit and service level feasibility is examined empirically, and conservative approaches to estimating their effect on consumer demand are determined. We next consider optimization in Internet advertising, introducing a chance constrained model for the fulfillment of guaranteed display Internet advertising campaigns. Lower and upper bounds using Monte Carlo sampling and convex approximations are presented, as well as a branching heuristic for sample approximation lower bounds and an iterative algorithm for improved convex approximation upper bounds. The final application is risk management for parimutuel horse racing wagering. We develop a methodology that limits potential losing streaks, with high probability, to a gambler's given time horizon. A proof of concept was conducted using one season of historical race data, in which losing streaks were effectively contained within different time periods for superfecta betting.
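
    A minimal Monte Carlo sketch of the losing-streak question behind the final application, treating races as independent bets with a fixed win probability; the win probability, horizon, and streak threshold are made-up inputs, and the thesis works with real superfecta data rather than i.i.d. coin flips.

```python
import numpy as np

def prob_long_losing_streak(p_win, horizon, streak_len, n_sims=100_000, rng=None):
    """Estimate the chance that a bettor with per-race win probability
    p_win suffers a losing streak of at least streak_len races within
    a horizon of independent races."""
    rng = np.random.default_rng(rng)
    losses = rng.random((n_sims, horizon)) >= p_win   # True = lost that race
    hit = np.zeros(n_sims, dtype=bool)
    run = np.zeros(n_sims, dtype=int)
    for t in range(horizon):
        run = np.where(losses[:, t], run + 1, 0)      # current losing-run length
        hit |= run >= streak_len
    return hit.mean()

# E.g. the chance of a 20-race losing streak over a 200-race season
# when each bet wins 15% of the time:
print(prob_long_losing_streak(p_win=0.15, horizon=200, streak_len=20, rng=0))
```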