
    Tracking Target Signal Strengths on a Grid using Sparsity

    Multi-target tracking is mainly challenged by the nonlinearity present in the measurement equation and the difficulty of fast and accurate data association. To overcome these challenges, the present paper introduces a grid-based model in which the state captures target signal strengths on a known spatial grid (TSSG). This model leads to \emph{linear} state and measurement equations, which bypass data association and can afford state estimation via sparsity-aware Kalman filtering (KF). Leveraging the grid-induced sparsity of the novel model, two types of sparsity-cognizant TSSG-KF trackers are developed: one effects sparsity through $\ell_1$-norm regularization, and the other invokes sparsity as an extra measurement. Iterative extended KF and Gauss-Newton algorithms are developed for reduced-complexity tracking, along with accurate error covariance updates for assessing performance of the resultant sparsity-aware state estimators. Based on TSSG state estimates, more informative target position and track estimates can be obtained in a follow-up step, ensuring that track association and position estimation errors do not propagate back into TSSG state estimates. The novel TSSG trackers do not require knowing the number of targets or their signal strengths, and exhibit considerably lower complexity than the benchmark hidden Markov model filter, especially for a large number of targets. Numerical simulations demonstrate that sparsity-cognizant trackers enjoy improved root mean-square error performance at reduced complexity when compared to their sparsity-agnostic counterparts.
    Comment: Submitted to IEEE Trans. on Signal Processing
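    The core computational idea, a Kalman-style correction augmented with an $\ell_1$ penalty on the gridded state, can be sketched as below. The grid size, measurement matrix, and plain ISTA solver are hypothetical stand-ins chosen for illustration; the paper's actual TSSG-KF trackers additionally involve iterative EKF/Gauss-Newton steps and covariance updates not reproduced here.

```python
import numpy as np

def sparsity_aware_kf_update(x_pred, P_pred, H, R, y, lam=1.0, n_iter=300):
    """One l1-regularized measurement update, solved with plain ISTA.

    Minimizes 0.5*(y - H x)^T R^{-1} (y - H x)
            + 0.5*(x - x_pred)^T P_pred^{-1} (x - x_pred) + lam*||x||_1,
    i.e. a standard KF correction with an added l1 penalty on the state.
    Illustrative stand-in only, not the paper's exact TSSG-KF recursion.
    """
    R_inv = np.linalg.inv(R)
    P_inv = np.linalg.inv(P_pred)
    A = H.T @ R_inv @ H + P_inv
    step = 1.0 / np.linalg.eigvalsh(A).max()        # 1 / Lipschitz constant
    x = x_pred.copy()
    for _ in range(n_iter):
        grad = H.T @ R_inv @ (H @ x - y) + P_inv @ (x - x_pred)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

# Toy grid with 100 cells, 20 linear measurements, 3 occupied cells.
rng = np.random.default_rng(0)
H = rng.standard_normal((20, 100))
x_true = np.zeros(100)
x_true[[7, 42, 90]] = [2.0, 1.5, 3.0]
y = H @ x_true + 0.05 * rng.standard_normal(20)
x_hat = sparsity_aware_kf_update(np.zeros(100), 10.0 * np.eye(100), H, np.eye(20), y)
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 0.5))
```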

    Complexity Analysis and Efficient Measurement Selection Primitives for High-Rate Graph SLAM

    Sparsity has been widely recognized as crucial for efficient optimization in graph-based SLAM. Because the sparsity and structure of the SLAM graph reflect the set of incorporated measurements, many methods for sparsification have been proposed in hopes of reducing computation. These methods often focus narrowly on reducing edge count without regard for structure at a global level. Such structurally-naive techniques can fail to produce significant computational savings, even after aggressive pruning. In contrast, simple heuristics such as measurement decimation and keyframing are known empirically to produce significant computation reductions. To demonstrate why, we propose a quantitative metric called elimination complexity (EC) that bridges the existing analytic gap between graph structure and computation. EC quantifies the complexity of the primary computational bottleneck: the factorization step of a Gauss-Newton iteration. Using this metric, we show rigorously that decimation and keyframing impose favorable global structures and therefore achieve computation reductions on the order of $r^2/9$ and $r^3$, respectively, where $r$ is the pruning rate. We additionally present numerical results showing EC provides a good approximation of computation in both batch and incremental (iSAM2) optimization and demonstrate that pruning methods promoting globally-efficient structure outperform those that do not.
    Comment: Pre-print accepted to ICRA 201
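    As a rough illustration of why factorization cost depends on global graph structure rather than edge count alone, the sketch below charges each eliminated variable a cost quadratic in its remaining neighbors and propagates fill edges, loosely mimicking sparse Cholesky. The cost model, toy pose-chain generator, and random loop-closure pattern are assumptions made for illustration; they are not the paper's EC definition or its experiments.

```python
import random

def elimination_cost(adj, order=None):
    """Toy proxy for the cost of factorizing a graph-SLAM information matrix.

    `adj` maps each variable to the set of variables it shares a factor with.
    Eliminating a variable with d remaining neighbors is charged d**2 and its
    neighbors become mutually connected (fill-in), mimicking sparse Cholesky.
    """
    adj = {v: set(ns) for v, ns in adj.items()}
    cost = 0
    for v in (list(adj) if order is None else list(order)):
        nbrs = adj.pop(v)
        cost += len(nbrs) ** 2
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}        # fill edges among remaining neighbors
    return cost

def chain_with_closures(n, closures):
    """Hypothetical pose chain (odometry edges) plus given loop-closure edges."""
    adj = {i: set() for i in range(n)}
    for a, b in [(i, i + 1) for i in range(n - 1)] + list(closures):
        adj[a].add(b)
        adj[b].add(a)
    return adj

random.seed(0)
closures = [tuple(random.sample(range(200), 2)) for _ in range(30)]
print(elimination_cost(chain_with_closures(200, [])))        # odometry-only chain
print(elimination_cost(chain_with_closures(200, closures)))  # long-range closures add fill
```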

    Multiple and single snapshot compressive beamforming

    For a sound field observed on a sensor array, compressive sensing (CS) reconstructs the direction-of-arrival (DOA) of multiple sources using a sparsity constraint. The DOA estimation is posed as an underdetermined problem by expressing the acoustic pressure at each sensor as a phase-lagged superposition of source amplitudes at all hypothetical DOAs. Regularizing with an $\ell_1$-norm constraint renders the problem solvable with convex optimization, and promoting sparsity gives high-resolution DOA maps. Here, the sparse source distribution is derived using maximum a posteriori (MAP) estimates for both single and multiple snapshots. CS does not require inversion of the data covariance matrix and thus works well even for a single snapshot, where it gives higher resolution than conventional beamforming. For multiple snapshots, CS outperforms conventional high-resolution methods, even with coherent arrivals and at low signal-to-noise ratio. The superior resolution of CS is demonstrated with vertical array data from the SWellEx96 experiment for coherent multi-paths.
    Comment: In press, Journal of the Acoustical Society of America
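    A minimal single-snapshot sketch of the compressive-beamforming idea: express one array snapshot as a superposition of steering vectors over a DOA grid and solve an $\ell_1$-regularized least-squares problem. The array geometry, grid, source parameters, and plain ISTA solver below are illustrative assumptions, not the SWellEx96 data or the paper's MAP estimator.

```python
import numpy as np

# Hypothetical setup: 20-element half-wavelength ULA, 1-degree DOA grid,
# two sources, one snapshot; complex-valued ISTA as the l1 solver.
rng = np.random.default_rng(1)
n_sensors = 20
angles = np.arange(-90, 91)                                   # candidate DOAs (deg)
A = np.exp(-1j * np.pi * np.outer(np.arange(n_sensors), np.sin(np.deg2rad(angles))))

x_true = np.zeros(angles.size, dtype=complex)
x_true[angles == -20] = 1.0
x_true[angles == 35] = 0.7 * np.exp(1j * 0.3)
noise = 0.05 * (rng.standard_normal(n_sensors) + 1j * rng.standard_normal(n_sensors))
y = A @ x_true + noise                                        # single snapshot

lam = 1.0                                                     # sparsity weight
step = 1.0 / np.linalg.norm(A, 2) ** 2                        # 1 / Lipschitz constant
x = np.zeros_like(x_true)
for _ in range(500):
    z = x - step * (A.conj().T @ (A @ x - y))                 # gradient step
    shrink = np.maximum(1.0 - step * lam / np.maximum(np.abs(z), 1e-12), 0.0)
    x = shrink * z                                            # complex soft threshold

top = np.argsort(np.abs(x))[-5:]                              # strongest grid DOAs
print("strongest grid DOAs (deg):", sorted(angles[top].tolist()))
```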