
    Estimation of the Number of Spikes, Possibly Equal, in the High-Dimensional Case

    Estimating the number of spikes in a spiked model is an important problem in many areas such as signal processing. Most of the classical approaches assume a large sample size $n$ whereas the dimension $p$ of the observations is kept small. In this paper, we consider the case of high dimension, where $p$ is large compared to $n$. The approach is based on recent results of random matrix theory. We extend our previous results to a more difficult situation where some spikes are equal, and compare our algorithm to an existing benchmark method.
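
    A minimal sketch of the simplest spike-counting rule in this high-dimensional regime (a common baseline, not the paper's estimator): with the noise variance assumed known, declare a spike for every sample-covariance eigenvalue above the upper edge of the Marchenko-Pastur bulk. All names and parameter values below are illustrative.

```python
import numpy as np

def count_spikes_mp_edge(X, sigma2=1.0):
    """Count sample-covariance eigenvalues above the Marchenko-Pastur bulk edge.

    X      : (n, p) data matrix, n observations of dimension p (p large vs. n).
    sigma2 : assumed known noise variance.

    Baseline rule: any eigenvalue exceeding sigma2 * (1 + sqrt(p/n))**2 is a spike.
    """
    n, p = X.shape
    S = X.T @ X / n                                  # p x p sample covariance
    eigvals = np.linalg.eigvalsh(S)                  # ascending order
    edge = sigma2 * (1.0 + np.sqrt(p / n)) ** 2      # upper bulk edge
    return int(np.sum(eigvals > edge))

# Toy check: p = 200 > n = 80, two planted spikes of strength 20 and 10.
rng = np.random.default_rng(0)
n, p, k = 80, 200, 2
U = np.linalg.qr(rng.standard_normal((p, k)))[0]
cov = np.eye(p) + U @ np.diag([20.0, 10.0]) @ U.T
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
print(count_spikes_mp_edge(X))                       # typically prints 2
```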

    Super-Resolution of Positive Sources: the Discrete Setup

    In single-molecule microscopy it is necessary to locate with high precision point sources from noisy observations of the spectrum of the signal at frequencies capped by fcf_c, which is just about the frequency of natural light. This paper rigorously establishes that this super-resolution problem can be solved via linear programming in a stable manner. We prove that the quality of the reconstruction crucially depends on the Rayleigh regularity of the support of the signal; that is, on the maximum number of sources that can occur within a square of side length about 1/fc1/f_c. The theoretical performance guarantee is complemented with a converse result showing that our simple convex program convex is nearly optimal. Finally, numerical experiments illustrate our methods.Comment: 31 page, 7 figure
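
    As a hedged sketch of what such a discrete convex program can look like (an assumed formulation, not necessarily the paper's exact program): with grid size $N$, nonnegative amplitudes $x$, noisy Fourier samples $y$ at frequencies $|k| \le f_c$, and noise level $\delta$,

```latex
% Assumed formulation for illustration; notation (N, \mathcal{F}_c, \delta) is ours.
\min_{x \in \mathbb{R}^N,\; x \ge 0} \; \mathbf{1}^{\top} x
\quad \text{subject to} \quad
\left\| \mathcal{F}_c\, x - y \right\|_1 \le \delta
```

    Because $x \ge 0$, the objective equals $\|x\|_1$, and the $\ell_1$ data-fit constraint can be rewritten with slack variables, so the whole program is a linear program, consistent with the claim that the reconstruction is computed by linear programming.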

    Convexity in source separation: Models, geometry, and algorithms

    Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys efficient algorithms for solving these problems. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work.
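
    To make one such demixing program concrete, the sketch below solves principal component pursuit, the low-rank-plus-sparse split behind background subtraction, with a plain ADMM loop. It is one instance under assumed defaults, not the article's general framework.

```python
import numpy as np

def soft(X, t):
    """Elementwise soft-thresholding: prox of t * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    """Singular-value thresholding: prox of t * ||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def demix_low_rank_plus_sparse(Z, lam=None, mu=1.0, iters=500):
    """ADMM for  min ||L||_* + lam*||S||_1  s.t.  L + S = Z."""
    m, n = Z.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))             # common default scaling
    L = np.zeros_like(Z); S = np.zeros_like(Z); Y = np.zeros_like(Z)
    for _ in range(iters):
        L = svt(Z - S + Y / mu, 1.0 / mu)          # low-rank update
        S = soft(Z - L + Y / mu, lam / mu)         # sparse update
        Y = Y + mu * (Z - L - S)                   # dual ascent on L + S = Z
    return L, S

# Toy demixing: rank-2 "background" plus 5%-sparse "foreground" spikes.
rng = np.random.default_rng(1)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
S0 = np.zeros((60, 60)); mask = rng.random((60, 60)) < 0.05
S0[mask] = 5.0 * rng.standard_normal(mask.sum())
L_hat, S_hat = demix_low_rank_plus_sparse(L0 + S0)
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))   # small if the split succeeds
```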

    Understanding the fine structure of electricity prices

    This paper analyzes the special features of electricity spot prices derived from the physics of this commodity and from the economics of supply and demand in a market pool. Besides mean reversion, a property they share with other commodities, power prices exhibit the unique feature of spikes in trajectories. We introduce a class of discontinuous processes exhibiting a "jump-reversion" component to properly represent these sharp upward moves shortly followed by drops of similar magnitude. Our approach allows us to capture, for the first time to our knowledge, both the trajectorial and the statistical properties of electricity pool prices. The quality of the fit is illustrated on a database of major U.S. power markets.
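
    One simple way to reproduce such spiky trajectories is a two-factor simulation: a slowly mean-reverting base level plus a fast-reverting component driven by rare upward jumps, so each jump is quickly pulled back down. The sketch below is illustrative only; the parameter names and values are assumptions, not the authors' calibrated model.

```python
import numpy as np

def simulate_spot(T=365, dt=1.0, seed=0):
    """Toy two-factor spot-price path: slow mean-reverting base component
    plus a fast-reverting jump component that produces short-lived spikes.
    (Illustrative only; parameters are assumptions, not the paper's model.)"""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    base = np.empty(n); spike = np.empty(n)
    base[0], spike[0] = 30.0, 0.0            # base level in $/MWh, no spike at t = 0
    a_b, mu_b, sig_b = 0.05, 30.0, 1.0       # slow reversion of the base level
    a_s = 0.8                                # fast reversion kills spikes quickly
    jump_prob, jump_mean = 0.02, 40.0        # rare, large upward jumps
    for t in range(1, n):
        base[t] = (base[t-1] + a_b * (mu_b - base[t-1]) * dt
                   + sig_b * np.sqrt(dt) * rng.standard_normal())
        jump = rng.exponential(jump_mean) if rng.random() < jump_prob * dt else 0.0
        spike[t] = spike[t-1] - a_s * spike[t-1] * dt + jump
    return base + spike

prices = simulate_spot()
print(prices.max(), prices.mean())   # occasional spikes far above the mean-reverting level
```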

    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated {\em structured} sparsity models, which describe the interdependency between the nonzero components of a signal; this added structure increases the interpretability of the results and leads to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures
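
    As one concrete instance of the convex side, the sketch below uses the non-overlapping group-lasso penalty (the standard convex relaxation of group sparsity) with its block soft-thresholding proximal operator inside a plain proximal-gradient loop. The group layout, penalty weight, and toy data are assumptions for illustration, not the chapter's specific models.

```python
import numpy as np

def prox_group_l2(v, groups, t):
    """Prox of t * sum_g ||v_g||_2 for non-overlapping groups:
    each group is shrunk as a block and zeroed when its norm is <= t."""
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * v[g]
    return out

def group_lasso(A, y, groups, lam=1.0, iters=500):
    """Proximal gradient for  min_x 0.5*||Ax - y||^2 + lam * sum_g ||x_g||_2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = prox_group_l2(x - step * grad, groups, lam * step)
    return x

# Toy problem: 20 groups of size 5, only groups 3 and 11 truly active.
rng = np.random.default_rng(2)
groups = [list(range(5 * i, 5 * i + 5)) for i in range(20)]
x0 = np.zeros(100); x0[groups[3]] = 1.0; x0[groups[11]] = -2.0
A = rng.standard_normal((200, 100))
y = A @ x0 + 0.01 * rng.standard_normal(200)
x_hat = group_lasso(A, y, groups, lam=1.0)
print([i for i, g in enumerate(groups) if np.linalg.norm(x_hat[g]) > 1e-3])  # expect [3, 11]
```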