
    Parallel sparse interpolation using small primes

    To interpolate a supersparse polynomial with integer coefficients, two alternative approaches are the Prony-based "big prime" technique, which works over a single large finite field, and the more recently proposed "small primes" technique, which reduces the unknown sparse polynomial to many low-degree dense polynomials. While the latter technique has not yet matched the theoretical efficiency of Prony-based methods, it has obvious potential for parallelization. We present a heuristic "small primes" interpolation algorithm and report on a low-level C implementation using FLINT and MPI.
    Comment: Accepted to PASCO 201
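    A minimal Python sketch of the reduction described here may help. Reducing f modulo x^p - 1 for a small prime p folds every exponent to its residue mod p, giving a dense image of degree below p; images for different primes are independent (hence the parallelism), and exponents are recovered by the Chinese Remainder Theorem. The sketch below is my simplification, not the paper's algorithm: it cheats by reducing a known term list instead of probing a black box, and it assumes distinct coefficients and no exponent collisions, which the real heuristics must handle.

```python
# Hedged sketch of the "small primes" reduction (a simplification,
# not the paper's algorithm): reducing f modulo x^p - 1 folds every
# exponent to its residue mod p, giving a dense image of degree < p.

def image_mod_xp_minus_1(terms, p):
    """Dense image of f modulo x^p - 1: exponents are reduced mod p."""
    img = {}
    for coeff, exp in terms:
        img[exp % p] = img.get(exp % p, 0) + coeff
    return img

def crt_pair(r1, m1, r2, m2):
    """Combine residues r1 (mod m1) and r2 (mod m2), m1 and m2 coprime."""
    t = ((r2 - r1) * pow(m1, -1, m2)) % m2
    return r1 + m1 * t, m1 * m2

def small_primes_interpolate(terms, primes):
    # Each image is independent: in the paper's setting, one MPI rank
    # per prime. Here we just loop.
    images = {p: image_mod_xp_minus_1(terms, p) for p in primes}
    # Match terms across images by their (assumed distinct) coefficients.
    residues = {}
    for p, img in images.items():
        for e_mod_p, c in img.items():
            residues.setdefault(c, []).append((e_mod_p, p))
    # Recover each exponent by the Chinese Remainder Theorem; this is
    # exact as long as the product of the primes exceeds the degree.
    recovered = []
    for c, pairs in residues.items():
        e, m = 0, 1
        for r, p in pairs:
            e, m = crt_pair(e, m, r, p)
        recovered.append((c, e))
    return sorted(recovered, key=lambda t: t[1])

f = [(5, 12), (-2, 777777), (3, 1000000)]  # 5x^12 - 2x^777777 + 3x^1000000
print(small_primes_interpolate(f, [101, 103, 107]))
# -> [(5, 12), (-2, 777777), (3, 1000000)]  (101*103*107 > 10^6)
```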

    Subsquares Approach - Simple Scheme for Solving Overdetermined Interval Linear Systems

    In this work we present a new, simple but efficient scheme, the Subsquares approach, for developing algorithms that enclose the solution set of overdetermined interval linear systems. We present two algorithms based on this scheme and discuss their features. We start with a simple algorithm as motivation, then continue with a sequential algorithm. Both algorithms can be easily parallelized, and both are discussed and tested numerically.
    Comment: submitted to PPAM 201
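    To make the scheme concrete, here is a rough hand-rolled reconstruction (not the paper's two algorithms): enclose the solution set of each square subsystem with a preconditioned Krawczyk iteration, then intersect the resulting boxes. The interval arithmetic below ignores outward rounding, and the initial box is assumed to contain each subsystem's solution set, so treat it as illustrative only.

```python
# Rough reconstruction of the subsquares scheme: enclose each square
# subsystem, then intersect the boxes. Illustrative only (no outward
# rounding, initial box assumed large enough).
import itertools
import numpy as np

def imatvec(Mlo, Mhi, xlo, xhi):
    """Interval matrix times interval vector."""
    p = np.stack([Mlo * xlo, Mlo * xhi, Mhi * xlo, Mhi * xhi])
    return p.min(axis=0).sum(axis=1), p.max(axis=0).sum(axis=1)

def pmat_ivec(C, lo, hi):
    """Point matrix times interval vector (or interval matrix)."""
    Cp, Cn = np.maximum(C, 0), np.minimum(C, 0)
    return Cp @ lo + Cn @ hi, Cp @ hi + Cn @ lo

def krawczyk_enclosure(Alo, Ahi, blo, bhi, radius=1e3, iters=50):
    """Enclose the solution set of a square interval system A x = b,
    assuming it lies inside the initial box [-radius, radius]^n."""
    n = Alo.shape[0]
    C = np.linalg.inv((Alo + Ahi) / 2)       # midpoint preconditioner
    CAlo, CAhi = pmat_ivec(C, Alo, Ahi)      # interval product C*A
    Mlo, Mhi = np.eye(n) - CAhi, np.eye(n) - CAlo
    clo, chi = pmat_ivec(C, blo, bhi)        # interval product C*b
    xlo, xhi = np.full(n, -radius), np.full(n, radius)
    for _ in range(iters):                   # x <- intersect(C b + M x, x)
        ylo, yhi = imatvec(Mlo, Mhi, xlo, xhi)
        xlo, xhi = np.maximum(xlo, clo + ylo), np.minimum(xhi, chi + yhi)
    return xlo, xhi

def subsquares(Alo, Ahi, blo, bhi):
    """Intersect enclosures over all square subsystems. Each subsystem
    is independent, hence the obvious parallelism."""
    m, n = Alo.shape
    xlo, xhi = np.full(n, -np.inf), np.full(n, np.inf)
    for rows in map(list, itertools.combinations(range(m), n)):
        slo, shi = krawczyk_enclosure(Alo[rows], Ahi[rows], blo[rows], bhi[rows])
        xlo, xhi = np.maximum(xlo, slo), np.minimum(xhi, shi)
    return xlo, xhi

A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])  # 3 equations, 2 unknowns
b = np.array([3.0, 4.0, 0.0])                        # point solution: (1, 1)
r = 0.05                                             # interval radius
print(subsquares(A - r, A + r, b - r, b + r))
```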

    Pricing path-dependent Bermudan options using Wiener chaos expansion: an embarrassingly parallel approach

    In this work, we propose a new policy iteration algorithm for pricing Bermudan options when the payoff process cannot be written as a function of a lifted Markov process. Our approach is based on a modification of the well-known Longstaff-Schwartz algorithm, in which we replace the standard least-squares regression by a Wiener chaos expansion. Not only does this allow us to deal with a non-Markovian setting, but it also breaks the bottleneck induced by the least-squares regression, as the coefficients of the chaos expansion are given by scalar products on the L^2 space and can therefore be approximated by independent Monte Carlo computations. This key feature enables us to provide an embarrassingly parallel algorithm.
    Comment: The Journal of Computational Finance, Incisive Media, in press
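    The parallel structure is easy to illustrate. In the normalized probabilists' Hermite basis, each chaos coefficient is the scalar product c_k = E[f(G) h_k(G)], so every coefficient is a plain Monte Carlo average over its own independent sample. Below is a minimal sketch with a toy one-dimensional payoff, not the paper's policy iteration.

```python
# Minimal sketch of the embarrassingly parallel coefficient
# computation (toy 1-d payoff, not the paper's policy iteration).
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt

def hermite_normalized(k, x):
    """Probabilists' Hermite polynomial He_k, normalized in L^2(Gauss)."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return hermeval(x, c) / sqrt(factorial(k))

def payoff(x):
    return np.maximum(x - 0.5, 0.0)            # toy call-style payoff

def chaos_coefficient(f, k, n_samples, seed):
    rng = np.random.default_rng(seed)          # independent stream per k
    g = rng.standard_normal(n_samples)
    return float(np.mean(f(g) * hermite_normalized(k, g)))

if __name__ == "__main__":
    from concurrent.futures import ProcessPoolExecutor
    K = 8                                      # truncation order
    with ProcessPoolExecutor() as pool:        # one worker per coefficient
        coeffs = list(pool.map(chaos_coefficient, [payoff] * K,
                               range(K), [200_000] * K, range(K)))
    x = np.linspace(-2.0, 2.0, 5)
    approx = sum(c * hermite_normalized(k, x) for k, c in enumerate(coeffs))
    print(np.round(approx, 3))                 # rough: 8 terms, kinked payoff
    print(np.round(payoff(x), 3))
```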

    A randomised primal-dual algorithm for distributed radio-interferometric imaging

    Next generation radio telescopes, like the Square Kilometre Array, will acquire an unprecedented amount of data for radio astronomy. The development of fast, parallelisable or distributed algorithms for handling such large-scale data sets is of prime importance. Motivated by this, we investigate herein a convex optimisation algorithmic structure, based on primal-dual forward-backward iterations, for solving the radio-interferometric imaging problem. It can encompass any convex prior of interest. It allows for distributed processing of the measured data and introduces further flexibility by employing a probabilistic approach for the selection of the data blocks used at a given iteration. We study the reconstruction performance with respect to the data distribution and propose the use of nonuniform probabilities for the randomised updates. Our simulations show the feasibility of the randomisation given a limited computing infrastructure, as well as important computational advantages when compared to state-of-the-art algorithmic structures.
    Comment: 5 pages, 3 figures, Proceedings of the European Signal Processing Conference (EUSIPCO) 2016. Related journal publication available at https://arxiv.org/abs/1601.0402
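    A toy sketch of the algorithmic structure (my reconstruction, not the paper's imaging pipeline) may clarify the randomised block selection: the dual variable of data block i is updated only with probability prob[i] per iteration, so some blocks can be visited less often than others. Step-size and convergence conditions for the randomised case are glossed over here.

```python
# Toy randomised primal-dual forward-backward sketch:
#   min_x  sum_i 0.5 * ||A_i x - b_i||^2  +  lam * ||x||_1,
# with probabilistic selection of the dual data blocks per iteration.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def randomised_pdfb(A_blocks, b_blocks, lam, prob,
                    n_iter=3000, tau=0.005, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n = A_blocks[0].shape[1]
    x = np.zeros(n)
    x_bar = x.copy()
    v = [np.zeros(A.shape[0]) for A in A_blocks]    # one dual per block
    for _ in range(n_iter):
        # Probabilistic selection of the data blocks for this iteration.
        for i in range(len(A_blocks)):
            if rng.random() < prob[i]:
                # prox of the conjugate of 0.5 * ||. - b_i||^2
                u = v[i] + sigma * (A_blocks[i] @ x_bar)
                v[i] = (u - sigma * b_blocks[i]) / (1.0 + sigma)
        grad = sum(A.T @ vi for A, vi in zip(A_blocks, v))
        x_new = soft_threshold(x - tau * grad, tau * lam)
        x_bar = 2.0 * x_new - x                     # primal extrapolation
        x = x_new
    return x

rng = np.random.default_rng(1)
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.0, -2.0, 1.5]              # sparse ground truth
A_blocks = [rng.standard_normal((30, 50)) for _ in range(4)]
b_blocks = [A @ x_true for A in A_blocks]
x_hat = randomised_pdfb(A_blocks, b_blocks, lam=0.1,
                        prob=[1.0, 0.5, 0.5, 0.25])  # nonuniform updates
print(np.round(x_hat[[3, 17, 40]], 2))
```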

    Local-Aggregate Modeling for Big-Data via Distributed Optimization: Applications to Neuroimaging

    Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods currently exist to fit supervised models directly to this tensor data. We propose a novel modeling and algorithmic strategy to apply generalized linear models (GLMs) to this massive tensor data in which one set of variables is associated with locations. Our method begins by fitting GLMs to each location separately, and then builds an ensemble by blending information across locations through regularization with what we term an aggregating penalty. Our so-called Local-Aggregate Model can be fit in a completely distributed manner over the locations using an Alternating Direction Method of Multipliers (ADMM) strategy, and thus greatly reduces the computational burden. Furthermore, we propose to select the appropriate model through a novel sequence of faster algorithmic solutions that is similar to regularization paths. We demonstrate both the computational and predictive modeling advantages of our methods via simulations and an EEG classification problem.
    Comment: 41 pages, 5 figures and 3 tables
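    As a hedged illustration of the distributed fitting, the toy ADMM below uses one linear model per location and a simple ridge-type "shrink toward the location mean" penalty as a stand-in for the paper's aggregating penalty; the b-updates are independent across locations, which is what makes the method distributable.

```python
# Hedged toy version of the distributed ADMM fit (not the paper's
# exact model): one linear model per location l,
#   min  sum_l 0.5*||y - X_l b_l||^2 + lam * sum_l ||b_l - mean(b)||^2,
# where the second term stands in for the aggregating penalty.
import numpy as np

def local_aggregate_admm(X_blocks, y, lam=1.0, rho=1.0, n_iter=200):
    L, n = len(X_blocks), X_blocks[0].shape[1]
    B = np.zeros((L, n)); Z = np.zeros((L, n)); U = np.zeros((L, n))
    # Per-location quantities; each could live on a separate worker.
    inv_mats = [np.linalg.inv(X.T @ X + rho * np.eye(n)) for X in X_blocks]
    Xty = [X.T @ y for X in X_blocks]
    for _ in range(n_iter):
        # b-step: independent ridge-type solve at each location.
        for l in range(L):
            B[l] = inv_mats[l] @ (Xty[l] + rho * (Z[l] - U[l]))
        # z-step: closed-form prox of the aggregating penalty, which
        # shrinks each location's coefficients toward their mean.
        V = B + U
        Vbar = V.mean(axis=0)
        Z = Vbar + (rho / (rho + 2.0 * lam)) * (V - Vbar)
        U += B - Z
    return B

rng = np.random.default_rng(0)
base = rng.standard_normal((40, 3))
X_blocks = [base + 0.1 * rng.standard_normal((40, 3)) for _ in range(5)]
y = base @ np.array([1.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(40)
print(np.round(local_aggregate_admm(X_blocks, y, lam=5.0), 2))
# Each row (location) is pulled toward a shared coefficient vector.
```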