
    Focal-plane wavefront sensing with high-order adaptive optics systems

    We investigate methods to calibrate the non-common path aberrations in an adaptive optics system whose wavefront-correcting device works at an extremely high resolution (larger than 150x150). We use focal-plane images collected successively, the corresponding phase-diversity information and numerically efficient algorithms to calculate the required wavefront updates. The wavefront correction is applied iteratively until the algorithms converge. Different approaches are studied. In addition to the standard Gerchberg-Saxton algorithm, we test an extension of the Fast & Furious algorithm that uses three images and creates an estimate of the pupil amplitudes. We also test recently proposed phase-retrieval methods based on convex optimisation. The results indicate that in the framework we consider, the calibration task is easiest with algorithms similar to Fast & Furious. Comment: 11 pages, 7 figures, published in SPIE proceedings.
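    The Gerchberg-Saxton baseline mentioned in the abstract alternates between the pupil and focal planes, enforcing the known amplitude in each plane while keeping the current phase estimate. Below is a minimal sketch of that classical iteration only, not the paper's high-resolution calibration pipeline or its Fast & Furious extension; the array names pupil_amp and focal_amp (known pupil amplitude, measured focal-plane amplitude) are illustrative.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=200, seed=0):
    """Classical Gerchberg-Saxton loop: estimate the pupil-plane phase from
    a known pupil amplitude and a measured focal-plane amplitude."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)   # random initial phase
    for _ in range(n_iter):
        pupil_field = pupil_amp * np.exp(1j * phase)
        focal_field = np.fft.fftshift(np.fft.fft2(pupil_field))
        # Impose the measured focal-plane amplitude, keep the current phase
        focal_field = focal_amp * np.exp(1j * np.angle(focal_field))
        pupil_field = np.fft.ifft2(np.fft.ifftshift(focal_field))
        # Impose the known pupil amplitude by keeping only the phase estimate
        phase = np.angle(pupil_field)
    return phase
```

    Plain Gerchberg-Saxton iterations are known to stagnate; the phase-diversity information and the Fast & Furious-style extensions studied in the paper are aimed at exactly that weakness.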

    An Online Parallel and Distributed Algorithm for Recursive Estimation of Sparse Signals

    In this paper, we consider a recursive estimation problem for linear regression where the signal to be estimated admits a sparse representation and measurement samples are only sequentially available. We propose a convergent parallel estimation scheme that consists of solving a sequence of ℓ1-regularized least-square problems approximately. The proposed scheme is novel in three aspects: i) all elements of the unknown vector variable are updated in parallel at each time instance, and the convergence speed is much faster than state-of-the-art schemes that update the elements sequentially; ii) both the update direction and stepsize of each element have simple closed-form expressions, so the algorithm is suitable for online (real-time) implementation; and iii) the stepsize is designed to accelerate the convergence, yet it does not suffer from the common trouble of parameter tuning in the literature. Both centralized and distributed implementation schemes are discussed. The attractive features of the proposed algorithm are also confirmed numerically. Comment: Part of this work has been presented at The Asilomar Conference on Signals, Systems, and Computers, Nov. 201
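    The closed-form ingredient behind such schemes is the soft-thresholding operator, the proximal map of the ℓ1 penalty. The sketch below keeps the sufficient statistics of the running least-square cost and applies one fully parallel soft-thresholding step per new sample; it illustrates the "all elements updated in parallel with closed-form direction and stepsize" idea using a standard ISTA-style step, not the paper's specific stepsize rule or its distributed variant, and the names (samples, lam, gamma) are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Closed-form proximal operator of tau * ||.||_1, applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_sparse_estimator(samples, lam=0.1, gamma=0.9):
    """Online sparse estimation sketch: after each new sample (x, y), update
    the running sufficient statistics and take one parallel proximal-gradient
    step on the averaged l1-regularized least-square cost."""
    x0, _ = samples[0]
    n = x0.size
    R = np.zeros((n, n))   # running sum of x x^T
    r = np.zeros(n)        # running sum of y x
    w = np.zeros(n)        # current estimate
    for t, (x, y) in enumerate(samples, start=1):
        R += np.outer(x, x)
        r += y * x
        L = np.trace(R) / t + 1e-12     # cheap upper bound on the Lipschitz constant
        grad = (R @ w - r) / t          # gradient of the averaged quadratic loss
        w = soft_threshold(w - (gamma / L) * grad, gamma * lam / L)
        yield w

# Usage sketch: samples is any list of (x, y) pairs with x a 1-D numpy array;
# estimates = list(online_sparse_estimator(samples, lam=0.1)) gives w_1, w_2, ...
```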

    Outlier Detection Using Nonconvex Penalized Regression

    This paper studies the outlier detection problem from the point of view of penalized regressions. Our regression model adds one mean shift parameter for each of the n data points. We then apply a regularization favoring a sparse vector of mean shift parameters. The usual L1 penalty yields a convex criterion, but we find that it fails to deliver a robust estimator. The L1 penalty corresponds to soft thresholding. We introduce a thresholding (denoted by Θ) based iterative procedure for outlier detection (Θ-IPOD). A version based on hard thresholding correctly identifies outliers on some hard test problems. We find that Θ-IPOD is much faster than iteratively reweighted least squares for large data because each iteration costs at most O(np) (and sometimes much less), avoiding an O(np²) least squares estimate. We describe the connection between Θ-IPOD and M-estimators. Our proposed method has one tuning parameter with which to both identify outliers and estimate regression coefficients. A data-dependent choice can be made based on BIC. The tuned Θ-IPOD shows outstanding performance in identifying outliers in various situations in comparison to other existing approaches. This methodology extends to high-dimensional modeling with p ≫ n, if both the coefficient vector and the outlier pattern are sparse.
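    In the mean-shift model y = Xβ + γ + ε with a sparse outlier vector γ, one natural thresholding-based iteration alternates a least-square fit for β with a thresholding step for γ, which collapses to the fixed-point update γ ← Θ((I − H)y + Hγ) with H the hat matrix. The sketch below is an illustration of that idea with hard thresholding and a fixed threshold lam, not necessarily the paper's exact procedure (which tunes the threshold via BIC and keeps the per-iteration cost at O(np) rather than forming H explicitly).

```python
import numpy as np

def hard_threshold(z, lam):
    """Hard-thresholding operator: keep entries with |z_i| > lam, zero the rest."""
    return np.where(np.abs(z) > lam, z, 0.0)

def theta_ipod(X, y, lam, theta=hard_threshold, n_iter=100, tol=1e-8):
    """Naive thresholding-based outlier-detection sketch for the mean-shift
    model y = X beta + gamma + noise, with gamma sparse (nonzeros flag outliers)."""
    n = len(y)
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix (fine for modest n and p)
    gamma = np.zeros(n)
    for _ in range(n_iter):
        gamma_new = theta((np.eye(n) - H) @ y + H @ gamma, lam)
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = gamma_new
    beta = np.linalg.solve(X.T @ X, X.T @ (y - gamma))   # refit on y - gamma
    outliers = np.flatnonzero(gamma)                     # indices flagged as outliers
    return beta, gamma, outliers
```

    Swapping hard_threshold for soft thresholding recovers the L1-penalized variant that the abstract notes is not robust; other thresholding rules Θ slot in the same way.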

    Network Flow Algorithms for Structured Sparsity

    We consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of ℓ∞-norms over groups of variables. Whereas a lot of effort has been put into developing fast optimization methods when the groups are disjoint or embedded in a specific hierarchical structure, we address here the case of general overlapping groups. To this end, we show that the corresponding optimization problem is related to network flow optimization. More precisely, the proximal problem associated with the norm we consider is dual to a quadratic min-cost flow problem. We propose an efficient procedure which computes its solution exactly in polynomial time. Our algorithm scales up to millions of variables, and opens up a whole new range of applications for structured sparse models. We present several experiments on image and video data, demonstrating the applicability and scalability of our approach for various problems. Comment: accepted for publication in Adv. Neural Information Processing Systems, 201
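    The elementary building block of the proximal problem is the prox of a single ℓ∞-norm, which by Moreau decomposition is the residual of a Euclidean projection onto an ℓ1-ball: prox_{λ‖·‖∞}(v) = v − Π_{λB1}(v). The sketch below applies that identity groupwise and therefore covers disjoint groups only; handling general overlapping groups is precisely where the paper's quadratic min-cost flow formulation comes in, and that machinery is not reproduced here.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1-ball of the given radius."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                     # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_group_linf(v, groups, lam):
    """prox of lam * sum_g ||v_g||_inf for *disjoint* index groups, via the
    Moreau decomposition prox_{lam ||.||_inf}(v) = v - P_{lam B_1}(v)."""
    out = v.copy()
    for g in groups:                                  # g: index array of one group
        out[g] = v[g] - project_l1_ball(v[g], lam)
    return out
```

    For overlapping groups this groupwise shortcut no longer applies, which is why the dual min-cost flow view and an exact polynomial-time flow solver are needed.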