
    Direction-of-Arrival Estimation Based on Sparse Recovery with Second-Order Statistics

    Traditional direction-of-arrival (DOA) estimation techniques perform Nyquist-rate sampling of the received signals and, as a result, require high storage. To reduce the sampling rate, we introduce level-crossing (LC) sampling, which captures samples whenever the signal crosses predetermined reference levels; the LC-based analog-to-digital converter (LC ADC) has been shown to sample certain classes of signals efficiently. In this paper, we focus on the DOA estimation problem using second-order statistics based on the LC samples recorded at one sensor, along with the synchronous samples of the other sensors. A sparse angle-space spectrum can then be recovered by solving an $\ell_1$ minimization problem, giving the number of sources and their DOAs. The experimental results show that our proposed method, when compared with some existing norm-based constrained-optimization compressive sensing (CS) algorithms as well as a subspace method, improves DOA estimation performance, while using fewer samples than Nyquist-rate sampling and reducing sensor activity, especially for signals with long periods of silence.
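    To make the sparse-recovery step concrete, the sketch below solves the gridded $\ell_1$ minimization with a plain ISTA loop. This is a generic illustration under assumed conditions (uniform linear array, half-wavelength spacing, made-up source angles and noise), not the paper's LC-sampling/second-order-statistics pipeline; `steering_matrix` and `ista_l1` are hypothetical helper names.

```python
import numpy as np

def steering_matrix(angles_deg, n_sensors, spacing=0.5):
    """Far-field steering vectors of a uniform linear array.

    spacing is in wavelengths; one column per candidate angle.
    """
    angles = np.deg2rad(np.atleast_1d(angles_deg))
    k = np.arange(n_sensors)[:, None]
    return np.exp(-2j * np.pi * spacing * k * np.sin(angles)[None, :])

def ista_l1(A, y, lam, n_iter=500):
    """Minimize 0.5*||A x - y||_2^2 + lam*||x||_1 by ISTA with
    complex soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - step * (A.conj().T @ (A @ x - y))
        mag = np.abs(g)
        x = g * np.maximum(mag - step * lam, 0.0) / np.maximum(mag, 1e-12)
    return x

# Peaks of |x_hat| over the angle grid indicate the source DOAs.
grid = np.arange(-90.0, 90.0, 1.0)
A = steering_matrix(grid, n_sensors=8)
rng = np.random.default_rng(0)
y = steering_matrix([-20.0, 35.0], 8) @ np.array([1.0, 0.8 + 0j])
y += 0.01 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
x_hat = ista_l1(A, y, lam=0.05)
print(grid[np.argsort(np.abs(x_hat))[-2:]])  # two strongest angles
```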

    Feedback Acquisition and Reconstruction of Spectrum-Sparse Signals by Predictive Level Comparisons

    In this letter, we propose a sparsity-promoting feedback acquisition and reconstruction scheme for sensing, encoding, and subsequent reconstruction of spectrally sparse signals. In the proposed scheme, the spectral components are estimated utilizing a sparsity-promoting, sliding-window algorithm in a feedback loop. Utilizing the estimated spectral components, a level signal is predicted and sign measurements of the prediction error are acquired. The sparsity-promoting algorithm can then estimate the spectral components iteratively from the sign measurements. Unlike many batch-based Compressive Sensing (CS) algorithms, our proposed algorithm gradually estimates and follows slow changes in the sparse components utilizing a sliding-window technique. We also consider the scenario in which possible flipping errors in the sign bits propagate across iterations (due to the feedback loop) during reconstruction. We propose an iterative error correction algorithm to cope with this error propagation phenomenon, assuming a binary-sparse occurrence model on the error sequence. Simulation results show the effective performance of the proposed scheme in comparison with existing methods in the literature.
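    The core of recovering a sparse vector from one-bit sign measurements can be illustrated with binary iterative hard thresholding (BIHT), a standard algorithm for this measurement model; the letter's feedback prediction, sliding window, and error-correction layers are not reproduced here, and all problem sizes below are arbitrary.

```python
import numpy as np

def biht(A, y_sign, k, n_iter=200):
    """Binary iterative hard thresholding: estimate a k-sparse x
    (up to scale) from one-bit measurements y_sign = sign(A x)."""
    m, n = A.shape
    tau = 1.0 / m                       # step size
    x = np.zeros(n)
    for _ in range(n_iter):
        # Gradient-like step on the sign-consistency residual...
        x = x + tau * A.T @ (y_sign - np.sign(A @ x))
        # ...then keep only the k largest-magnitude entries.
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    return x / (np.linalg.norm(x) + 1e-12)  # amplitude is lost in the signs

rng = np.random.default_rng(0)
n, m, k = 256, 512, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)
x_hat = biht(A, np.sign(A @ x_true), k)
print(np.linalg.norm(x_hat - x_true))   # small reconstruction error
```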

    High-resolution distributed sampling of bandlimited fields with low-precision sensors

    The problem of sampling a discrete-time sequence of spatially bandlimited fields with a bounded dynamic range, in a distributed, communication-constrained, processing environment is addressed. A central unit, having access to the data gathered by a dense network of fixed-precision sensors, operating under stringent inter-node communication constraints, is required to reconstruct the field snapshots to maximum accuracy. Both deterministic and stochastic field models are considered. For stochastic fields, results are established in the almost-sure sense. The feasibility of having a flexible tradeoff between the oversampling rate (sensor density) and the analog-to-digital converter (ADC) precision, while achieving an exponential accuracy in the number of bits per Nyquist-interval per snapshot, is demonstrated. This exposes an underlying "conservation of bits" principle: the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed along the amplitude axis (sensor precision) and space (sensor density) in an almost arbitrary discrete-valued manner, while retaining the same (exponential) distortion-rate characteristics. Achievable information scaling laws for field reconstruction over a bounded region are also derived: with $N$ one-bit sensors per Nyquist-interval, $\Theta(\log N)$ Nyquist-intervals, and total network bitrate $R_{net} = \Theta((\log N)^2)$ (per-sensor bitrate $\Theta((\log N)/N)$), the maximum pointwise distortion goes to zero as $D = O((\log N)^2/N)$ or $D = O(R_{net} 2^{-\beta \sqrt{R_{net}}})$. This is shown to be possible with only nearest-neighbor communication, distributed coding, and appropriate interpolation algorithms. For a fixed, nonzero target distortion, the number of fixed-precision sensors and the network rate needed is always finite.
    Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal Processing and re-submitted to the IEEE Transactions on Information Theory.
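    A toy experiment makes the amplitude/space tradeoff tangible: the same budget of 8 bits per snapshot can be spent on one 8-bit ADC or on $2^8$ one-bit comparators with independent dither. This sketch is only suggestive: the naive dithered average below decays as $O(1/\sqrt{N})$, whereas the paper's exponential distortion-rate law relies on the distributed coding and interpolation machinery not shown here; all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
v = 0.37  # field amplitude at one point, bounded in [-1, 1]

def quantize(x, bits):
    """One high-precision sensor: a bits-bit midrise quantizer on [-1, 1]."""
    levels = 2 ** bits
    return ((np.floor((x + 1) / 2 * levels) + 0.5) / levels) * 2 - 1

def one_bit_average(x, n_sensors, rng):
    """Many one-bit sensors: compare x against independent uniform
    dither; E[bit] = (x + 1) / 2, so the average recovers x."""
    dither = rng.uniform(-1, 1, n_sensors)
    bits = (x > dither).astype(float)   # one comparator bit per sensor
    return 2 * bits.mean() - 1

print(abs(quantize(v, bits=8) - v))              # precision via amplitude axis
print(abs(one_bit_average(v, 2 ** 8, rng) - v))  # precision via sensor density
```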

    A Simplified Crossing Fiber Model in Diffusion Weighted Imaging

    Diffusion MRI (dMRI) is a vital source of imaging data for identifying anatomical connections in the living human brain that form the substrate for information transfer between brain regions. dMRI can thus play a central role in our understanding of brain function. The quantitative modeling and analysis of dMRI data infers features of neural fibers at the voxel level, such as direction and density. The modeling methods that have been developed range from deterministic to probabilistic approaches. Currently, the Ball-and-Stick model serves as a widely implemented probabilistic approach in the tractography toolboxes of the popular FSL and FreeSurfer/TRACULA software packages. However, estimating the features of neural fibers is complex in the scenario of two crossing neural fibers, which occurs in a sizeable proportion of voxels within the brain. A Bayesian non-linear regression is adopted, composed of a mixture of multiple non-linear components; such models can pose a computationally difficult statistical estimation problem. To make the Ball-and-Stick approach more feasible and accurate, we propose a simplified version of the model that reduces the dimensionality of the parameter space. The simplified model is vastly more efficient in terms of the computation time required to estimate the parameters of two crossing neural fibers through Bayesian simulation approaches. Moreover, the performance of the new model is comparable to or better than that of existing models in terms of bias and estimation variance.
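    For context, the Ball-and-Stick model (Behrens et al.) writes the diffusion-weighted signal as a mixture of one isotropic "ball" compartment and one anisotropic "stick" per fiber. The sketch below implements that standard forward model for two crossing fibers; the paper's simplified parameterization is not given in the abstract, so only the baseline model is shown, with illustrative parameter values.

```python
import numpy as np

def ball_and_stick_signal(S0, d, f, fiber_dirs, bvals, bvecs):
    """Predicted dMRI signal under the Ball-and-Stick model:
    S = S0 * [(1 - sum f_j) exp(-b d) + sum_j f_j exp(-b d (g . v_j)^2)].

    f: volume fractions (one per fiber, summing to <= 1);
    fiber_dirs: unit fiber directions, shape (n_fibers, 3);
    bvals: (n_meas,); bvecs: unit gradient directions, (n_meas, 3).
    """
    f = np.asarray(f, dtype=float)
    ball = (1.0 - f.sum()) * np.exp(-bvals * d)            # isotropic part
    dots = bvecs @ np.asarray(fiber_dirs).T               # (n_meas, n_fibers)
    sticks = np.exp(-bvals[:, None] * d * dots ** 2) @ f  # anisotropic sticks
    return S0 * (ball + sticks)

# Two fibers crossing at 90 degrees, 64 gradient directions at b = 1000 s/mm^2.
rng = np.random.default_rng(2)
bvecs = rng.standard_normal((64, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
S = ball_and_stick_signal(S0=1.0, d=1e-3, f=[0.35, 0.35],
                          fiber_dirs=[[1.0, 0, 0], [0, 1.0, 0]],
                          bvals=np.full(64, 1000.0), bvecs=bvecs)
```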

    Pushing towards the Limit of Sampling Rate: Adaptive Chasing Sampling

    Measurement samples are often taken in various monitoring applications. To reduce the sensing cost, it is desirable to achieve better sensing quality while using fewer samples. The Compressive Sensing (CS) technique finds its role when the signal to be sampled meets certain sparsity requirements. In this paper, we investigate the possibility of, and basic techniques for, further reducing the number of samples required by conventional CS theory by exploiting learning-based non-uniform adaptive sampling. Based on a typical signal sensing application, we illustrate and evaluate the performance of two of our algorithms, Individual Chasing and Centroid Chasing, on signals with different distribution features. Our proposed learning-based adaptive sampling schemes complement existing efforts in the CS field and do not depend on any specific signal reconstruction technique. Compared to conventional sparse sampling methods, the simulation results demonstrate that our algorithms require 46% fewer samples for accurate signal reconstruction and achieve up to 57% smaller signal reconstruction error under the same noise conditions.
    Comment: 9 pages, IEEE MASS 201
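    The abstract does not spell out Individual Chasing or Centroid Chasing, so the sketch below only conveys the general flavor of learning-based non-uniform adaptive sampling: spend each new sample where the signal observed so far changes the most. The function name and test signal are assumptions for illustration.

```python
import numpy as np

def adaptive_sample(f, x0, x1, budget, n_init=8):
    """Start from a coarse uniform grid, then repeatedly place the next
    sample at the midpoint of the interval with the largest observed
    change -- a crude stand-in for learning-based adaptive sampling."""
    xs = list(np.linspace(x0, x1, n_init))
    ys = [f(x) for x in xs]
    for _ in range(budget - n_init):
        i = int(np.argmax(np.abs(np.diff(ys))))  # steepest interval so far
        xm = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, xm)
        ys.insert(i + 1, f(xm))
    return np.array(xs), np.array(ys)

# Sharp features attract most of the sampling budget; flat regions stay coarse.
f = lambda x: np.tanh(50 * (x - 0.3)) + 0.3 * np.tanh(80 * (x - 0.7))
xs, ys = adaptive_sample(f, 0.0, 1.0, budget=40)
```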