
    Relevance Subject Machine: A Novel Person Re-identification Framework

    We propose a novel method called the Relevance Subject Machine (RSM) to solve the person re-identification (re-id) problem. RSM falls under the category of Bayesian sparse recovery algorithms and uses the sparse representation of the input video under a pre-defined dictionary to identify the subject in the video. Our approach focuses on the multi-shot re-id problem, which is the prevalent problem in many video analytics applications. RSM captures the essence of the multi-shot re-id problem by constraining the support of the sparse codes for each input video frame to be the same. Our proposed approach is also robust enough to deal with time-varying outliers and occlusions by introducing a sparse, non-stationary noise term in the model error. We provide a novel Variational Bayesian inference procedure along with an intuitive interpretation of the proposed update rules. We evaluate our approach over several commonly used re-id datasets and show superior performance over current state-of-the-art algorithms. Specifically, for iLIDS-VID, a recent large-scale re-id dataset, RSM shows significant improvement over all published approaches, achieving an 11.5% (absolute) improvement in rank-1 accuracy over the closest competing algorithm considered. Comment: Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence
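
    The shared-support constraint above is an instance of the multiple-measurement-vector (MMV) model. The sketch below is not the paper's variational Bayesian procedure; it uses a simple simultaneous OMP to illustrate how forcing one common support across all video frames works in practice. The dictionary, sizes, and random data are purely illustrative.

        import numpy as np

        def somp(D, Y, k):
            """Simultaneous OMP: recover codes for all frames (columns of Y)
            under dictionary D while forcing one common support of size k."""
            n_atoms = D.shape[1]
            residual, support = Y.copy(), []
            for _ in range(k):
                # score each atom by its aggregate correlation with all residual frames
                scores = np.linalg.norm(D.T @ residual, axis=1)
                scores[support] = -np.inf                 # never reselect an atom
                support.append(int(np.argmax(scores)))
                # joint least-squares fit of all frames on the current common support
                X_s, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
                residual = Y - D[:, support] @ X_s
            X = np.zeros((n_atoms, Y.shape[1]))
            X[support, :] = X_s
            return X, support

        # toy example: 3 "frames" sharing the same 2 active dictionary atoms
        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 128)); D /= np.linalg.norm(D, axis=0)
        X_true = np.zeros((128, 3)); X_true[[5, 40], :] = rng.standard_normal((2, 3))
        Y = D @ X_true + 0.01 * rng.standard_normal((64, 3))
        X_hat, supp = somp(D, Y, k=2)
        print(sorted(supp))   # expected to contain atoms 5 and 40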

    Sampling of graph signals with successive local aggregations

    A new scheme to sample signals defined on the nodes of a graph is proposed. The underlying assumption is that such signals admit a sparse representation in a frequency domain related to the structure of the graph, which is captured by the so-called graph-shift operator. Most of the works that have looked at this problem have focused on using the values of the signal observed at a subset of nodes to recover the signal on the entire graph. In contrast, the sampling scheme proposed here uses as input observations taken at a single node. The observations correspond to sequential applications of the graph-shift operator, which are linear combinations of the information gathered by the neighbors of the node. When the graph corresponds to a directed cycle (which is the support of time-varying signals), our method is equivalent to classical sampling in the time domain. When the graph is more general, we show that the Vandermonde structure of the sampling matrix, which is critical to guarantee recovery when sampling time-varying signals, is preserved. Sampling and interpolation are analyzed first in the absence of noise, and then noise is considered. We then study the recovery of the sampled signal when the specific set of active frequencies is not known. Moreover, we present a more general sampling scheme under which both our aggregation approach and the alternative approach of sampling a graph signal by observing its values at a subset of nodes can be viewed as particular cases. The last part of the paper presents numerical experiments that illustrate the results, using both synthetic graph signals and a real-world graph of the economy of the United States. Comment: Submitted to IEEE Transactions on Signal Processing
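
    As a rough illustration of aggregation sampling, the sketch below observes a single node of a directed cycle after successive applications of the graph-shift operator and recovers a bandlimited graph signal from the resulting Vandermonde-structured system. The cycle graph, bandwidth, and node index are illustrative choices, not taken from the paper.

        import numpy as np

        # aggregation sampling on a directed cycle (where it reduces to classical sampling)
        N, K, node = 8, 3, 0                        # graph size, bandwidth, observed node
        S = np.roll(np.eye(N), 1, axis=0)           # graph-shift operator of the cycle
        lam, V = np.linalg.eig(S)                   # graph frequencies / Fourier basis

        rng = np.random.default_rng(1)
        x_freq = np.zeros(N, dtype=complex)
        x_freq[:K] = rng.standard_normal(K)         # K-bandlimited graph signal
        x = V @ x_freq

        # observations at a SINGLE node after 0, ..., K-1 applications of the shift,
        # i.e. successive local aggregations of the neighbors' information
        y = np.array([(np.linalg.matrix_power(S, l) @ x)[node] for l in range(K)])

        # y[l] = sum_k V[node, k] * lam[k]**l * x_freq[k]: a Vandermonde-structured system
        A = np.vander(lam[:K], K, increasing=True).T * V[node, :K]
        x_hat = V[:, :K] @ np.linalg.solve(A, y)
        print(np.allclose(x_hat, x))                # True: perfect noiseless recovery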

    Modified Hard Thresholding Pursuit with Regularization Assisted Support Identification

    Hard thresholding pursuit (HTP) is a recently proposed iterative sparse recovery algorithm that combines a support selection step from iterative hard thresholding (IHT) with an estimation step from orthogonal matching pursuit (OMP). HTP has been shown to enjoy an improved recovery guarantee along with enhanced speed of convergence. Much of the success of HTP can be attributed to its improved support selection capability, which it owes to the support selection step from IHT. In this paper, we propose a generalized HTP algorithm, called regularized HTP (RHTP), where the support selection step of HTP is replaced by an IHT-type support selection using a regularized cost function, while the estimation step continues to use the least-squares objective. With a decomposable regularizer satisfying certain regularity conditions, the RHTP algorithm is shown to produce a sequence dynamically equivalent to one evolving according to a HTP-like iteration, where the identification stage has a gradient premultiplied by a time-varying diagonal matrix. RHTP is also shown, both theoretically and numerically, to enjoy faster convergence than HTP with both noiseless and noisy measurement vectors. Comment: 10 pages, 5 figures
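
    The sketch below implements the plain HTP skeleton that the abstract builds on (gradient-based support identification followed by least squares on the selected support); RHTP's regularized selection step is not reproduced. Matrix sizes, sparsity level, and step size are assumptions for the toy example.

        import numpy as np

        def htp(A, y, k, n_iter=50, step=1.0):
            """Plain HTP: IHT-style support identification followed by a
            least-squares estimate restricted to the selected support."""
            n = A.shape[1]
            x = np.zeros(n)
            for _ in range(n_iter):
                # 1) support identification: gradient step, keep the k largest entries
                g = x + step * A.T @ (y - A @ x)
                support = np.argpartition(np.abs(g), -k)[-k:]
                # 2) estimation: least squares on the selected support
                x = np.zeros(n)
                x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100)) / np.sqrt(40)
        x_true = np.zeros(100)
        x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
        y = A @ x_true
        print(np.allclose(htp(A, y, k=5), x_true, atol=1e-6))  # True when RIP-type conditions hold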

    Robust Bayesian Method for Simultaneous Block Sparse Signal Recovery with Applications to Face Recognition

    In this paper, we present a novel Bayesian approach to simultaneously recover block sparse signals in the presence of outliers. The key advantage of our proposed method is the ability to handle non-stationary outliers, i.e., outliers whose support varies with time. We validate our approach with empirical results showing the superiority of the proposed method over competing approaches in synthetic data experiments as well as on the multiple-measurement face recognition problem. Comment: To appear in ICIP 201
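
    The abstract does not spell out the inference details, so the sketch below only illustrates one standard way to absorb sparse outliers: augment the dictionary with an identity block and solve an l1-regularized least-squares problem (here with a minimal ISTA loop). The block-structure prior and the Bayesian machinery of the paper are not reproduced; all sizes and thresholds are illustrative.

        import numpy as np

        def ista(B, y, lam, n_iter=500):
            """Minimal ISTA solver for min_z 0.5*||B z - y||^2 + lam*||z||_1."""
            L = np.linalg.norm(B, 2) ** 2               # Lipschitz constant of the gradient
            z = np.zeros(B.shape[1])
            for _ in range(n_iter):
                z = z - (B.T @ (B @ z - y)) / L         # gradient step
                z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return z

        rng = np.random.default_rng(2)
        m, n = 50, 120
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n); x_true[:4] = 1.0          # one active block of coefficients
        outliers = np.zeros(m); outliers[rng.choice(m, 3, replace=False)] = 5.0
        y = A @ x_true + outliers + 0.01 * rng.standard_normal(m)

        # absorb the sparse (possibly time-varying) outliers by augmenting with an identity
        B = np.hstack([A, np.eye(m)])
        z = ista(B, y, lam=0.05)
        x_hat, e_hat = z[:n], z[n:]
        print(np.flatnonzero(np.abs(e_hat) > 1.0))      # should locate the outlier entries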

    Dynamic Filtering of Time-Varying Sparse Signals via l1 Minimization

    Despite the importance of sparse signal models and the increasing prevalence of high-dimensional streaming data, there are relatively few algorithms for dynamic filtering of time-varying sparse signals. Of the existing algorithms, fewer still provide strong performance guarantees. This paper examines two algorithms for dynamic filtering of sparse signals that are based on efficient l1 optimization methods. We first present an analysis of one simple algorithm (BPDN-DF) that works well when the system dynamics are known exactly. We then introduce a novel second algorithm (RWL1-DF) that is more computationally complex than BPDN-DF but performs better in practice, especially when the system dynamics model is inaccurate. Robustness to model inaccuracy is achieved by using a hierarchical probabilistic data model and propagating higher-order statistics from the previous estimate (akin to Kalman filtering) in the sparse inference process. We demonstrate the properties of these algorithms on both simulated data and natural video sequences. Taken together, the algorithms presented in this paper represent the first strong performance analysis of dynamic filtering algorithms for time-varying sparse signals as well as state-of-the-art performance in this emerging application. Comment: 26 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:1208.032
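
    As a rough sketch of the BPDN-DF idea, the quadratic dynamics-consistency term can be folded into the data-fit term by stacking, which reduces each time step to a standard l1-regularized least-squares problem. The code below assumes scikit-learn's Lasso as a stand-in BPDN solver and uses the previous estimate as the (identity-dynamics) prediction; RWL1-DF and its hierarchical model are not reproduced, and all sizes and weights are illustrative.

        import numpy as np
        from sklearn.linear_model import Lasso          # assumed available; any BPDN solver works

        def bpdn_df_step(A, y_t, x_pred, lam, gamma):
            """One BPDN-DF-style step: min_x 0.5*||y_t - A x||^2
            + 0.5*gamma*||x - x_pred||^2 + lam*||x||_1, solved by stacking the
            dynamics term into the data-fit term and calling a standard l1 solver."""
            n = A.shape[1]
            A_aug = np.vstack([A, np.sqrt(gamma) * np.eye(n)])
            y_aug = np.concatenate([y_t, np.sqrt(gamma) * x_pred])
            # sklearn's Lasso minimizes (1/(2*n_samples))*||y - A x||^2 + alpha*||x||_1
            solver = Lasso(alpha=lam / A_aug.shape[0], fit_intercept=False, max_iter=5000)
            return solver.fit(A_aug, y_aug).coef_

        rng = np.random.default_rng(3)
        m, n, T = 40, 100, 5
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n); x[:5] = 1.0                    # slowly varying sparse state
        x_hat = np.zeros(n)
        for t in range(T):
            x += 0.05 * rng.standard_normal(n) * (x != 0)   # mild drift on the active set
            y_t = A @ x + 0.01 * rng.standard_normal(m)
            x_hat = bpdn_df_step(A, y_t, x_pred=x_hat, lam=0.02, gamma=0.1)
        print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # small relative error expected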

    Recursive Recovery of Sparse Signal Sequences from Compressive Measurements: A Review

    In this article, we review the literature on design and analysis of recursive algorithms for reconstructing a time sequence of sparse signals from compressive measurements. The signals are assumed to be sparse in some transform domain or in some dictionary. Their sparsity patterns can change with time, although, in many practical applications, the changes are gradual. An important class of applications where this problem occurs is dynamic projection imaging, e.g., dynamic magnetic resonance imaging (MRI) for real-time medical applications such as interventional radiology, or dynamic computed tomography. Comment: To appear in IEEE Trans. Signal Processing

    Joint Channel Training and Feedback for FDD Massive MIMO Systems

    Massive multiple-input multiple-output (MIMO) is widely recognized as a promising technology for future 5G wireless communication systems. To achieve the theoretical performance gains in massive MIMO systems, accurate channel state information at the transmitter (CSIT) is crucial. Due to the overwhelming pilot signaling and channel feedback overhead, however, conventional downlink channel estimation and uplink channel feedback schemes might not be suitable for frequency-division duplexing (FDD) massive MIMO systems. In addition, these two topics are usually considered separately in the literature. In this paper, we propose a joint channel training and feedback scheme for FDD massive MIMO systems. Specifically, we first exploit the temporal correlation of time-varying channels to propose a differential channel training and feedback scheme, which simultaneously reduces the overhead for downlink training and uplink feedback. We then propose a structured compressive sampling matching pursuit (S-CoSaMP) algorithm to acquire reliable CSIT by exploiting the structured sparsity of wireless MIMO channels. Simulation results demonstrate that the proposed scheme can achieve a substantial reduction in the training and feedback overhead.
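
    The sketch below illustrates the differential training idea at a toy scale: because the channel changes little between coherence blocks, only the weak, sparse difference from the previous estimate is sensed and recovered. Plain OMP stands in for the paper's S-CoSaMP, which additionally exploits structured sparsity; antenna counts, pilot lengths, and the correlation model are assumptions.

        import numpy as np

        def omp(Phi, y, k):
            """Plain OMP, standing in for S-CoSaMP (which additionally exploits
            the structured sparsity of the MIMO channel)."""
            support, residual = [], y.copy()
            for _ in range(k):
                scores = np.abs(Phi.conj().T @ residual)
                scores[support] = 0
                support.append(int(np.argmax(scores)))
                coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                residual = y - Phi[:, support] @ coef
            h = np.zeros(Phi.shape[1], dtype=complex)
            h[support] = coef
            return h

        rng = np.random.default_rng(4)
        n_tx, k, m = 128, 6, 32                         # BS antennas, channel sparsity, pilots
        idx = rng.choice(n_tx, k, replace=False)
        h_prev = np.zeros(n_tx, dtype=complex)
        h_prev[idx] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
        h_curr = h_prev.copy()                          # temporal correlation: small change only
        h_curr[idx] += 0.1 * (rng.standard_normal(k) + 1j * rng.standard_normal(k))

        # differential training: sense and recover only the sparse difference h_curr - h_prev
        Phi = (rng.standard_normal((m, n_tx)) + 1j * rng.standard_normal((m, n_tx))) / np.sqrt(2 * m)
        y = Phi @ (h_curr - h_prev)                     # receiver subtracts the predicted part; noise omitted
        h_hat = h_prev + omp(Phi, y, k)
        print(np.linalg.norm(h_hat - h_curr) / np.linalg.norm(h_curr))  # near-zero error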

    From Theory to Practice: Sub-Nyquist Sampling of Sparse Wideband Analog Signals

    Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then lowpass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture that allows either reconstruction of the analog input or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support, and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth-limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters. Comment: 17 pages, 12 figures, to appear in IEEE Journal of Selected Topics in Signal Processing, the special issue on Compressed Sensing
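
    The discrete-time check below illustrates the core mixing principle of the modulated wideband converter: after multiplication by a periodic waveform, each baseband frequency bin of the product is a fixed linear combination of the spectral slices of the input, which is what the lowpass filter and low-rate sampler retain. It is a numerical sanity check of that relation only, not a model of the hardware or the recovery stage; the period and grid sizes are arbitrary.

        import numpy as np

        # check: after mixing with a periodic waveform, every baseband DFT bin of the
        # product is a fixed linear combination of the spectral slices of the input
        rng = np.random.default_rng(5)
        M, q = 16, 64                     # mixing period (= decimation factor), slice width
        N = M * q                         # length of the "Nyquist-rate" grid

        x = rng.standard_normal(N)                    # arbitrary wideband test signal
        p_period = rng.choice([-1.0, 1.0], size=M)    # one period of the +/-1 mixing waveform
        p = np.tile(p_period, q)                      # periodic mixing sequence on the full grid

        c = np.fft.fft(p_period) / M                  # Fourier coefficients of the waveform
        X, V = np.fft.fft(x), np.fft.fft(p * x)

        k = np.arange(q)                              # the q baseband bins kept after lowpass + decimation
        slices = sum(c[l] * X[(k - l * q) % N] for l in range(M))
        print(np.allclose(V[:q], slices))             # True: baseband = weighted sum of slices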

    IDENT: Identifying Differential Equations with Numerical Time evolution

    Identifying unknown differential equations from a given set of discrete, time-dependent data is a challenging problem. A small amount of noise can make the recovery unstable, and nonlinearity and differential equations with varying coefficients add complexity to the problem. We assume that the governing partial differential equation (PDE) is a linear combination of a subset of a prescribed dictionary containing different differential terms, and the objective of this paper is to find the correct coefficients. We propose a new direction based on the fundamental idea of convergence analysis of numerical PDE schemes. We utilize Lasso for efficiency, and a performance guarantee is established based on an incoherence property. The main contribution is to validate and correct the results by Time Evolution Error (TEE). The new algorithm, called Identifying Differential Equations with Numerical Time evolution (IDENT), is explored for data with non-periodic boundary conditions, noisy data, and PDEs with varying coefficients. From the recovery analysis of Lasso, we propose a new definition of noise-to-signal ratio, which better represents the level of noise in the case of PDE identification. We systematically analyze the effects of data generation and downsampling, and propose an order-preserving denoising method called Least-Squares Moving Average (LSMA) to preprocess the given data. For the identification of PDEs with varying coefficients, we propose to add Base Element Expansion (BEE) to aid the computation. Various numerical experiments, from basic tests to noisy data, downsampling effects, and varying coefficients, are presented.
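
    The sketch below shows only the first ingredient the abstract mentions: building a dictionary of candidate differential terms by numerical differentiation and selecting coefficients with Lasso, here on an exact solution of the heat equation u_t = nu*u_xx. The TEE validation, LSMA denoising, and BEE steps of IDENT are not reproduced, and scikit-learn's Lasso is an assumed stand-in for a generic l1 solver.

        import numpy as np
        from sklearn.linear_model import Lasso        # assumed available; any l1 solver would do

        # data: exact solution of the heat equation u_t = nu * u_xx on a periodic grid
        nu = 0.5
        x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
        t = np.linspace(0, 1.0, 200)
        U = np.exp(-nu * np.outer(t, np.ones_like(x))) * np.sin(x)   # u(t, x) = e^(-nu t) sin(x)

        dx, dt = x[1] - x[0], t[1] - t[0]
        u_t = np.gradient(U, dt, axis=0)              # numerical time derivative
        u_x = np.gradient(U, dx, axis=1)              # candidate spatial differential terms
        u_xx = np.gradient(u_x, dx, axis=1)

        # prescribed dictionary of candidate terms; find which combination reproduces u_t
        terms = {"u": U, "u_x": u_x, "u_xx": u_xx, "u*u_x": U * u_x}
        Phi = np.column_stack([v.ravel() for v in terms.values()])
        coef = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000).fit(Phi, u_t.ravel()).coef_
        for name, ci in zip(terms, coef):
            print(f"{name:6s} {ci:+.3f}")             # u_xx should receive a weight close to nu = 0.5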

    LAMP: A Locally Adapting Matching Pursuit Framework for Group Sparse Signatures in Ultra-Wide Band Radar Imaging

    It has been found that radar returns of extended targets are not only sparse but also exhibit a tendency to cluster into randomly located, variable-sized groups. However, the standard compressive sensing techniques applied in radar imaging hardly take this clustering tendency into account when reconstructing the image from the compressed measurements. If the group sparsity is taken into account, one might intuitively expect better results, in terms of both accuracy and time complexity, compared to conventional recovery techniques like Orthogonal Matching Pursuit (OMP). To remedy this, techniques like Block OMP have been used in the existing literature. An alternative approach reconstructs the signal by transforming it into the Hough transform domain, where it becomes point-wise sparse. However, these techniques essentially assume a specific size and structure of the groups and are not always effective if the exact characteristics of the groups are not known prior to reconstruction. In this manuscript, a novel framework that we call locally adapting matching pursuit (LAMP) has been proposed for efficient reconstruction of group sparse signals from compressed measurements without assuming any specific size, location, or structure of the groups. The recovery guarantee of LAMP and its superiority over existing algorithms have been established with respect to accuracy, time complexity, and flexibility in group size. LAMP has also been successfully used on a real-world experimental data set. Comment: 14 pages, 22 figures. Draft to be submitted to a journal
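
    Since the abstract does not describe LAMP's update rules, the sketch below instead implements the Block OMP baseline it mentions, which requires the block size and grid to be fixed in advance; this is exactly the restriction LAMP is said to remove. Sizes, block alignment, and the random test signal are illustrative.

        import numpy as np

        def block_omp(A, y, block_size, n_blocks):
            """Block OMP with a fixed, pre-specified block grid; LAMP is meant to
            remove exactly this need to fix the group size and location upfront."""
            n = A.shape[1]
            blocks = [np.arange(b, min(b + block_size, n)) for b in range(0, n, block_size)]
            chosen, residual = [], y.copy()
            for _ in range(n_blocks):
                # pick the block whose atoms are jointly most correlated with the residual
                scores = [np.linalg.norm(A[:, blk].T @ residual) if i not in chosen else -1.0
                          for i, blk in enumerate(blocks)]
                chosen.append(int(np.argmax(scores)))
                support = np.concatenate([blocks[i] for i in chosen])
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(n)
            x[support] = coef
            return x

        rng = np.random.default_rng(6)
        m, n = 60, 200
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[40:45] = rng.standard_normal(5)        # two clustered groups of scatterers
        x_true[120:125] = rng.standard_normal(5)
        x_hat = block_omp(A, A @ x_true, block_size=5, n_blocks=2)
        print(np.flatnonzero(np.abs(x_hat) > 1e-8))   # expected: indices 40-44 and 120-124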