
    Efficient Transductive Online Learning via Randomized Rounding

    Most traditional online learning algorithms are based on variants of mirror descent or follow-the-leader. In this paper, we present an online algorithm based on a completely different approach, tailored for transductive settings, which combines "random playout" and randomized rounding of loss subgradients. As an application of our approach, we present the first computationally efficient online algorithm for collaborative filtering with trace-norm constrained matrices. As a second application, we solve an open question linking batch learning and transductive online learning. Comment: To appear in a Festschrift in honor of V.N. Vapnik. Preliminary version presented at NIPS 2011.
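
    The "randomized rounding" ingredient can be illustrated with a generic sketch (not the paper's algorithm): a fractional prediction in [0, 1] is rounded to a binary label with probability equal to its value, so the expected absolute loss of the rounded prediction matches the loss of the fractional one. The names and numbers below are assumptions made purely for the illustration.

```python
import numpy as np

def randomized_round(p, rng):
    """Round a fractional prediction p in [0, 1] to a binary label.

    The label is 1 with probability p, so under the absolute loss the
    expected loss of the rounded prediction equals the loss of p itself.
    """
    return int(rng.random() < p)

# Empirical check of the unbiasedness claim for one fractional prediction.
rng = np.random.default_rng(0)
p, y_true = 0.3, 1
rounded_losses = [abs(randomized_round(p, rng) - y_true) for _ in range(100_000)]
print(np.mean(rounded_losses), abs(p - y_true))  # both close to 0.7
```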

    Numerical Analysis

    Acknowledgements: This article will appear in the forthcoming Princeton Companion to Mathematics, edited by Timothy Gowers with June Barrow-Green, to be published by Princeton University Press. In preparing this essay I have benefitted from the advice of many colleagues who corrected a number of errors of fact and emphasis. I have not always followed their advice, however, preferring, as one friend put it, to "put my head above the parapet". So I must take full responsibility for errors and omissions here. With thanks to: Aurelio Arranz, Alexander Barnett, Carl de Boor, David Bindel, Jean-Marc Blanc, Mike Bochev, Folkmar Bornemann, Richard Brent, Martin Campbell-Kelly, Sam Clark, Tim Davis, Iain Duff, Stan Eisenstat, Don Estep, Janice Giudice, Gene Golub, Nick Gould, Tim Gowers, Anne Greenbaum, Leslie Greengard, Martin Gutknecht, Raphael Hauser, Des Higham, Nick Higham, Ilse Ipsen, Arieh Iserles, David Kincaid, Louis Komzsik, David Knezevic, Dirk Laurie, Randy LeVeque, Bill Morton, John C Nash, Michael Overton, Yoshio Oyanagi, Beresford Parlett, Linda Petzold, Bill Phillips, Mike Powell, Alex Prideaux, Siegfried Rump, Thomas Schmelzer, Thomas Sonar, Hans Stetter, Gil Strang, Endre Süli, Defeng Sun, Mike Sussman, Daniel Szyld, Garry Tee, Dmitry Vasilyev, Andy Wathen, Margaret Wright and Steve Wright.

    A survey of the state of the art and focused research in range systems, task 2

    Contract-generated publications are compiled which describe the research activities for the reporting period. Study topics include: equivalent configurations of systolic arrays; least squares estimation algorithms with systolic array architectures; modeling and equalization of nonlinear bandlimited satellite channels; and least squares estimation and Kalman filtering by systolic arrays.

    Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping

    In this paper, we provide for the first time a systematic comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short-blocklength probabilistic amplitude shaping. For asymptotically large blocklengths, constant composition distribution matching (CCDM) is known to generate the target capacity-achieving distribution. As the blocklength decreases, however, the resulting rate loss diminishes the efficiency of CCDM. We claim that for such short blocklengths and over the additive white Gaussian noise (AWGN) channel, the objective of shaping should be reformulated as obtaining the most energy-efficient signal space for a given rate (rather than matching distributions). In light of this interpretation, multiset-partition DM (MPDM), enumerative sphere shaping (ESS) and shell mapping (SM) are reviewed as energy-efficient shaping techniques. Numerical results show that MPDM and SpSh have smaller rate losses than CCDM. SpSh, whose sole objective is to maximize the energy efficiency, is shown to have the minimum rate loss amongst all. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a discussion on the complexity of these algorithms from the perspective of latency, storage and computations. Comment: 18 pages, 10 figures.
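
    To make the rate-loss discussion above concrete, the following sketch computes the rate of a constant-composition code (the number of sequences with a fixed composition, in bits per symbol) and its gap to the entropy of the empirical distribution; the composition and blocklength in the example are assumptions, not values taken from the paper.

```python
import math

def ccdm_rate_and_loss(composition):
    """Rate (bits/symbol) of a constant-composition code and its loss
    relative to the entropy of the empirical distribution.

    composition: dict mapping each amplitude level to its count n_a in a
    length-n output sequence (n = sum of the counts).
    """
    n = sum(composition.values())
    # Number of distinct sequences with this composition: n! / prod(n_a!).
    log2_count = math.lgamma(n + 1) / math.log(2)
    for n_a in composition.values():
        log2_count -= math.lgamma(n_a + 1) / math.log(2)
    rate = log2_count / n                              # achievable bits per symbol
    p = [n_a / n for n_a in composition.values()]
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return rate, entropy - rate                        # (rate, rate loss)

# Example (assumed numbers): amplitude levels {1, 3, 5, 7} with a roughly
# Maxwell-Boltzmann-shaped composition at blocklength n = 200; the rate loss
# shrinks as n grows.
rate, loss = ccdm_rate_and_loss({1: 86, 3: 62, 5: 36, 7: 16})
print(f"rate = {rate:.3f} bits/symbol, rate loss = {loss:.4f} bits/symbol")
```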

    Relax and Localize: From Value to Algorithms

    We show a principled way of deriving online learning algorithms from a minimax analysis. Various upper bounds on the minimax value, previously thought to be non-constructive, are shown to yield algorithms. This allows us to seamlessly recover known methods and to derive new ones. Our framework also captures such "unorthodox" methods as Follow the Perturbed Leader and the R^2 forecaster. We emphasize that understanding the inherent complexity of the learning problem leads to the development of algorithms. We define local sequential Rademacher complexities and associated algorithms that allow us to obtain faster rates in online learning, similarly to statistical learning theory. Based on these localized complexities, we build a general adaptive method that can take advantage of the suboptimality of the observed sequence. We present a number of new algorithms, including a family of randomized methods that use the idea of a "random playout". Several new versions of the Follow-the-Perturbed-Leader algorithm are presented, as well as methods based on Littlestone's dimension, efficient methods for matrix completion with the trace norm, and algorithms for the problems of transductive learning and prediction with static experts.
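
    The Follow the Perturbed Leader method mentioned above admits a short generic sketch (this is the classical expert-advice version, not the paper's relaxation-based construction): at each round the learner adds fresh random noise to the cumulative losses and plays the expert that looks best after the perturbation. The number of experts, perturbation scale, and toy losses below are assumptions for illustration.

```python
import numpy as np

def follow_the_perturbed_leader(loss_matrix, scale=10.0, seed=0):
    """Follow the Perturbed Leader for prediction with expert advice.

    loss_matrix: (T, K) array; loss_matrix[t, k] is the loss of expert k at round t.
    scale: magnitude of the exponential perturbation added to the cumulative losses.
    Returns the chosen experts and the learner's cumulative loss.
    """
    rng = np.random.default_rng(seed)
    T, K = loss_matrix.shape
    cumulative = np.zeros(K)
    choices, total_loss = [], 0.0
    for t in range(T):
        # Perturb the cumulative losses with fresh noise, then follow the leader.
        perturbation = rng.exponential(scale=scale, size=K)
        k = int(np.argmin(cumulative - perturbation))
        choices.append(k)
        total_loss += float(loss_matrix[t, k])
        cumulative += loss_matrix[t]
    return choices, total_loss

# Toy usage (assumed data): i.i.d. {0, 1} losses for 5 experts over 1000 rounds.
losses = (np.random.default_rng(1).random((1000, 5)) < 0.5).astype(float)
_, learner_loss = follow_the_perturbed_leader(losses)
print(learner_loss, losses.sum(axis=0).min())  # learner vs. best expert in hindsight
```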

    Generalization Bounds in the Predict-then-Optimize Framework

    The predict-then-optimize framework is fundamental in many practical settings: predict the unknown parameters of an optimization problem, and then solve the problem using the predicted values of the parameters. A natural loss function in this environment is to consider the cost of the decisions induced by the predicted parameters, in contrast to the prediction error of the parameters. This loss function was recently introduced in Elmachtoub and Grigas (2017) and referred to as the Smart Predict-then-Optimize (SPO) loss. In this work, we seek to provide bounds on how well the performance of a prediction model fit on training data generalizes out-of-sample, in the context of the SPO loss. Since the SPO loss is non-convex and non-Lipschitz, standard results for deriving generalization bounds do not apply. We first derive bounds based on the Natarajan dimension that, in the case of a polyhedral feasible region, scale at most logarithmically in the number of extreme points, but, in the case of a general convex feasible region, have linear dependence on the decision dimension. By exploiting the structure of the SPO loss function and a key property of the feasible region, which we denote as the strength property, we can dramatically improve the dependence on the decision and feature dimensions. Our approach and analysis rely on placing a margin around problematic predictions that do not yield unique optimal solutions, and then providing generalization bounds in the context of a modified margin SPO loss function that is Lipschitz continuous. Finally, we characterize the strength property and show that the modified SPO loss can be computed efficiently for both strongly convex bodies and polytopes with an explicit extreme point representation. Comment: Preliminary version in NeurIPS 2019.
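
    The SPO loss described above has a direct definition that is easy to sketch for a polyhedral feasible region: take the decision that is optimal for the predicted cost vector, evaluate it under the true costs, and subtract the true optimal cost. The sketch below uses a generic LP solver as the decision oracle; the toy feasible region and cost vectors are assumptions for the example, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def spo_loss(c_pred, c_true, A_ub, b_ub):
    """SPO loss: excess true cost of the decision induced by the prediction.

    Decision oracle: w*(c) = argmin_w c^T w  subject to  A_ub @ w <= b_ub, w >= 0.
    SPO loss = c_true^T w*(c_pred) - c_true^T w*(c_true) >= 0.
    """
    w_pred = linprog(c_pred, A_ub=A_ub, b_ub=b_ub).x   # decision from predicted costs
    w_true = linprog(c_true, A_ub=A_ub, b_ub=b_ub).x   # best decision in hindsight
    return float(c_true @ w_pred - c_true @ w_true)

# Toy example (assumed data): feasible region {w : 0 <= w <= 1} in two dimensions,
# expressed as I @ w <= 1 together with linprog's default bound w >= 0.
A_ub = np.eye(2)
b_ub = np.ones(2)
print(spo_loss(c_pred=np.array([1.0, -1.0]),
               c_true=np.array([-1.0, 1.0]),
               A_ub=A_ub, b_ub=b_ub))  # sign-flipped prediction -> SPO loss of 2.0
```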