
    Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal

    The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most k of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an O(4^k · kmn) time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed k. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most O(4^k), a so-called kernelization. Since then, the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in k, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, in which we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in k. For OCT, the matroid is built to let us simulate the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for only one round) to an approximate odd cycle transversal that it aims to shrink to size k. The process is randomized with one-sided error exponentially small in k: the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an O(√(log n))-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size O(k^4.5), implying a randomized polynomial kernelization.

    Comment: Minor changes to agree with the SODA 2012 version of the paper.
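
    To fix notation, here is a minimal Python sketch of the OCT decision problem itself: it tries every vertex subset of size at most k and checks bipartiteness by 2-coloring. This is a brute-force baseline for illustration only, not the matroid-based kernelization or the O(4^k · kmn) iterative compression algorithm described above; all names are illustrative.

    from itertools import combinations

    def is_bipartite(adj, removed):
        # 2-color the graph induced on the surviving vertices (iterative DFS).
        color = {}
        for start in adj:
            if start in removed or start in color:
                continue
            color[start] = 0
            stack = [start]
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v in removed:
                        continue
                    if v not in color:
                        color[v] = 1 - color[u]
                        stack.append(v)
                    elif color[v] == color[u]:
                        return False  # an odd cycle survives the deletions
        return True

    def has_odd_cycle_transversal(adj, k):
        # Does deleting some set of at most k vertices make the graph bipartite?
        # Exponential in n, unlike the fixed-parameter algorithm cited above.
        vertices = list(adj)
        for r in range(k + 1):
            for subset in combinations(vertices, r):
                if is_bipartite(adj, set(subset)):
                    return True
        return False

    # A 5-cycle is an odd cycle: one deletion suffices, zero do not.
    cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
    assert not has_odd_cycle_transversal(cycle5, 0)
    assert has_odd_cycle_transversal(cycle5, 1)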

    Sampling and Recovery of Pulse Streams

    Compressive Sensing (CS) is a new technique for the efficient acquisition of signals, images, and other data that have a sparse representation in some basis, frame, or dictionary. By sparse we mean that the N-dimensional basis representation has just K ≪ N significant coefficients; in this case, CS theory maintains that just M = K log N random linear signal measurements will both preserve all of the signal information and enable robust signal reconstruction in polynomial time. In this paper, we extend CS theory to pulse stream data, which correspond to S-sparse signals/images convolved with an unknown F-sparse pulse shape. Ignoring its convolutional structure, a pulse stream signal is K = SF sparse. Such signals figure prominently in a number of applications, from neuroscience to astronomy. Our specific contributions are threefold. First, we propose a pulse stream signal model and show that it is equivalent to an infinite union of subspaces. Second, we derive a lower bound on the number of measurements M required to preserve the essential information in pulse streams. The bound is linear in the total number of degrees of freedom S + F, which is significantly smaller than the naive bound based on the total signal sparsity K = SF. Third, we develop an efficient signal recovery algorithm that infers both the shape of the impulse response and the locations and amplitudes of the pulses. The algorithm alternately estimates the pulse locations and the pulse shape, in a manner reminiscent of classical deconvolution algorithms. Numerical experiments on synthetic and real data demonstrate the advantages of our approach over standard CS.
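
    The following NumPy sketch makes the pulse stream signal model concrete: an S-sparse spike train convolved with an F-sparse pulse, acquired through M random Gaussian measurements with M proportional to S + F rather than SF log N. The dimensions and the constant in the measurement budget are illustrative assumptions; this shows the measurement model only, not the paper's recovery algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    N, S, F = 1024, 10, 8       # signal length, spike sparsity, pulse sparsity

    # S-sparse spike train: locations and amplitudes of the pulses.
    x = np.zeros(N)
    x[rng.choice(N, size=S, replace=False)] = rng.standard_normal(S)

    # F-sparse pulse shape (unknown to the decoder in the paper's setting).
    h = np.zeros(N)
    h[:F] = rng.standard_normal(F)

    # Observed pulse stream: circular convolution of spikes and pulse.
    # Ignoring this structure, the signal is K = S*F sparse.
    z = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

    M = 4 * (S + F)             # budget linear in the degrees of freedom S + F
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian matrix
    y = Phi @ z                 # compressive measurements, y.shape == (M,)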

    Compressive Source Separation: Theory and Methods for Hyperspectral Imaging

    With the development of numerous high-resolution data acquisition systems and the global push to lower energy consumption, efficient sensing techniques have become critical. Recently, Compressed Sampling (CS) techniques, which exploit the sparsity of signals, have made it possible to reconstruct signals and images from fewer measurements than the traditional Nyquist sensing approach requires. However, multichannel signals such as hyperspectral images (HSI) have additional structure, such as inter-channel correlations, that is not taken into account in the classical CS scheme. In this paper we exploit the linear mixture of sources model, i.e., the assumption that the multichannel signal is composed of a linear combination of sources, each with its own spectral signature, and propose new sampling schemes that exploit this model to considerably decrease the number of measurements needed for acquisition and source separation. Moreover, we give theoretical lower bounds on the number of measurements required to reconstruct both the multichannel signal and its sources. We also propose optimization algorithms and report extensive experiments on our target application, HSI, showing that our approach recovers HSI with far fewer measurements and less computational effort than traditional CS approaches.

    Comment: 32 pages.
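
    As a sketch of the linear mixture of sources model (the dimensions and the per-channel sampling operator are illustrative assumptions, not the paper's exact schemes): each of L spectral channels is a fixed linear combination of R source images, and each channel is acquired with its own random projection using far fewer measurements than pixels.

    import numpy as np

    rng = np.random.default_rng(1)
    P, R, L = 4096, 3, 32   # pixels per channel, number of sources, channels

    # Linear mixture model: R source images mixed by L spectral signatures.
    S = rng.standard_normal((R, P))           # source images, one per row
    A = np.abs(rng.standard_normal((L, R)))   # spectral signatures (mixing)
    X = A @ S                                 # hyperspectral cube, channel rows

    # Compressive acquisition: m random projections per channel, m << P.
    m = P // 16
    Y = np.stack([rng.standard_normal((m, P)) @ X[l] for l in range(L)])
    # Y.shape == (L, m); joint reconstruction would recover X and the rows of S.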