493 research outputs found

    Spatial correlation analysis using canonical correlation decomposition for sparse sonar array processing

    This paper uses the canonical correlation decomposition (CCD) framework to investigate the spatial correlation of sources captured using two spatially separated sensor arrays. The relationship between the canonical correlations of the observed signals and the spatial correlation coefficients of the source signals is first derived, including an analysis of how this relationship changes under certain noise-level and array-geometry assumptions. Additionally, simulation results are presented that demonstrate the effects of different noise levels and array geometries on the canonical correlations for the case of two uniform linear sparse arrays.
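    For orientation, the standard CCA quantities behind this kind of analysis (background definitions, not the paper's specific derivation) can be written as follows, where x and y are the zero-mean outputs of the two arrays:

```latex
% Covariances of the two array outputs and their cross-covariance
R_{xx} = \mathbb{E}[\mathbf{x}\mathbf{x}^{H}], \qquad
R_{yy} = \mathbb{E}[\mathbf{y}\mathbf{y}^{H}], \qquad
R_{xy} = \mathbb{E}[\mathbf{x}\mathbf{y}^{H}]

% Canonical correlations k_1 >= k_2 >= ... are the singular values of the
% whitened cross-covariance (coherence) matrix
C = R_{xx}^{-1/2}\, R_{xy}\, R_{yy}^{-1/2}
```

    Under a narrowband model x = A s + n_x, y = B s' + n_y, with noise uncorrelated with the sources and across the two arrays, R_xy = A R_{ss'} B^H, so the canonical correlations are driven by the source cross-correlation matrix R_{ss'} (which contains the spatial correlation coefficients), while noise level and array geometry enter through R_xx, R_yy and the steering matrices A and B; this is the general mechanism the abstract refers to.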

    Canonical correlation analysis of high-dimensional data with very small sample support

    This paper is concerned with the analysis of correlation between two high-dimensional data sets when there are only a few correlated signal components but the number of samples is very small, possibly much smaller than the dimensions of the data. In such a scenario, a principal component analysis (PCA) rank-reduction preprocessing step is commonly performed before applying canonical correlation analysis (CCA). We present simple, yet very effective, approaches to the joint model-order selection of the number of dimensions that should be retained through the PCA step and the number of correlated signals. These approaches are based on reduced-rank versions of the Bartlett-Lawley hypothesis test and the minimum description length information-theoretic criterion. Simulation results show that the techniques perform well for very small sample sizes, even in colored noise.
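    A minimal sketch of the PCA-then-CCA pipeline described above (Python, with illustrative names). The threshold-based order estimate at the end is only a crude placeholder for the reduced-rank Bartlett-Lawley and MDL criteria developed in the paper:

```python
import numpy as np

def pca_cca(X, Y, r_pca, corr_threshold=0.5):
    """PCA rank reduction followed by CCA for two data sets X (p x M), Y (q x M).

    r_pca is a hypothetical choice of retained PCA dimensions; the paper selects
    it jointly with the number of correlated signals via reduced-rank
    Bartlett-Lawley / MDL criteria, which are not reproduced here.
    """
    M = X.shape[1]

    # PCA rank reduction: project each data set onto its r_pca dominant components.
    Ux, _, _ = np.linalg.svd(X @ X.conj().T / M)
    Uy, _, _ = np.linalg.svd(Y @ Y.conj().T / M)
    Xr = Ux[:, :r_pca].conj().T @ X
    Yr = Uy[:, :r_pca].conj().T @ Y

    # Sample covariances in the reduced space.
    Rxx = Xr @ Xr.conj().T / M
    Ryy = Yr @ Yr.conj().T / M
    Rxy = Xr @ Yr.conj().T / M

    # Canonical correlations = singular values of the whitened cross-covariance.
    Lx = np.linalg.cholesky(Rxx)
    Ly = np.linalg.cholesky(Ryy)
    k = np.linalg.svd(np.linalg.inv(Lx) @ Rxy @ np.linalg.inv(Ly).conj().T,
                      compute_uv=False)

    # Crude stand-in for joint model-order selection: count large correlations.
    d_hat = int(np.sum(k > corr_threshold))
    return k, d_hat
```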

    The University Defence Research Collaboration In Signal Processing

    This chapter describes the development of algorithms for automatic detection of anomalies from multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both the network structure (new nodes appearing and existing nodes disappearing) and the inter-node relations. The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. The context for the activity of each node in a complex network offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.

    Simultaneous Source Localization and Polarization Estimation via Non-Orthogonal Joint Diagonalization with Vector-Sensors

    Joint estimation of direction-of-arrival (DOA) and polarization with electromagnetic vector-sensors (EMVS) is considered in the framework of complex-valued non-orthogonal joint diagonalization (CNJD). Two new CNJD algorithms are presented, which tackle the high-dimensional optimization problem in CNJD through a sequence of simple sub-optimization problems, using LU or LQ decompositions of the target matrices together with a Jacobi-type scheme. Furthermore, based on these CNJD algorithms, we present a novel strategy to exploit the multi-dimensional structure present in the second-order statistics of EMVS outputs for simultaneous DOA and polarization estimation. Simulations are provided to compare the proposed strategy with existing tensorial or joint-diagonalization-based methods.
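    For context, the quantity that sweep-based joint-diagonalization schemes of this kind iteratively reduce is the residual off-diagonal energy of the transformed target matrices. The sketch below (Python) only evaluates that generic objective, written here in a two-sided form; it is not the paper's algorithm, and the names are illustrative:

```python
import numpy as np

def cnjd_off_diagonal_cost(W, V, target_matrices):
    """Sum of squared off-diagonal entries of W @ C_k @ V^H over all targets C_k.

    A CNJD algorithm searches for non-singular W (and V) that drive this cost
    toward zero, e.g. via LU/LQ-parametrized Jacobi-type sweeps.
    """
    cost = 0.0
    for C in target_matrices:
        D = W @ C @ V.conj().T
        cost += np.linalg.norm(D - np.diag(np.diag(D))) ** 2
    return cost
```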

    Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications where perturbations appear both in the data vector and in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least-absolute shrinkage and selection operator (Lasso), and endow TLS approaches with the ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing and robust direction-of-arrival estimation using antenna arrays.
    Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing
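    As a sketch of the kind of optimization involved (a generic block-coordinate scheme in the spirit of the S-TLS formulation above, not the paper's exact solver; the names and the ISTA inner loop are illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_tls(y, A, lam, n_outer=50, n_ista=200):
    """Alternating minimization for a sparsity-regularized TLS-type problem:

        min_{x, E}  ||y - (A + E) x||_2^2 + ||E||_F^2 + lam * ||x||_1

    where E models the perturbation of the regression matrix A.
    """
    m, n = A.shape
    x = np.zeros(n)
    E = np.zeros((m, n))
    for _ in range(n_outer):
        # x-step: Lasso with the perturbed matrix (A + E), solved by plain ISTA.
        B = A + E
        L = np.linalg.norm(B, 2) ** 2  # half the Lipschitz constant of the gradient
        for _ in range(n_ista):
            grad = B.T @ (B @ x - y)  # half the gradient of the squared error
            x = soft_threshold(x - grad / L, lam / (2.0 * L))
        # E-step: closed-form minimizer of ||y - (A + E)x||^2 + ||E||_F^2 for fixed x.
        r = y - A @ x
        E = np.outer(r, x) / (1.0 + x @ x)
    return x, E
```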

    Blind Multilinear Identification

    We discuss a technique that allows blind recovery of signals or blind identification of mixtures in instances where such recovery or identification was previously thought to be impossible: (i) closely located or highly correlated sources in antenna array processing, (ii) highly correlated spreading codes in CDMA radio communication, (iii) nearly dependent spectra in fluorescent spectroscopy. This has important implications: in the case of antenna array processing, it allows for joint localization and extraction of multiple sources from the measurement of a noisy mixture recorded on multiple sensors in an entirely deterministic manner. In the case of CDMA, it allows the number of users to exceed the spreading gain. In the case of fluorescent spectroscopy, it allows for detection of nearly identical chemical constituents. The proposed technique involves the solution of a bounded-coherence low-rank multilinear approximation problem. We show that bounded coherence allows us to establish existence and uniqueness of the recovered solution. We provide some statistical motivation for the approximation problem and discuss greedy approximation bounds. To provide the theoretical underpinnings for this technique, we develop a corresponding theory of sparse separable decompositions of functions, including notions of rank and nuclear norm that specialize to the usual ones for matrices and operators but also apply to hypermatrices and tensors.
    Comment: 20 pages, to appear in IEEE Transactions on Information Theory
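    For orientation (standard background, not the paper's specific results), the rank-r separable decomposition of a third-order hypermatrix and the coherence of a factor matrix with unit-norm columns can be written as:

```latex
T \;\approx\; \sum_{p=1}^{r} \lambda_p \, \mathbf{a}_p \otimes \mathbf{b}_p \otimes \mathbf{c}_p,
\qquad
\mu(A) \;=\; \max_{p \neq q} \big| \langle \mathbf{a}_p, \mathbf{a}_q \rangle \big|
```

    Bounding the coherences of the factor matrices is the condition under which the paper establishes existence and uniqueness of the best low-rank approximation; the precise bound and its proof are given in the paper.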

    Wave Propagation and Source Localization in Random and Refracting Media

    This thesis focuses on understanding how acoustic and electromagnetic waves propagate through an inhomogeneous or turbulent environment, and analyzes the effect that this uncertainty has on signal processing algorithms. These methods are applied to determining the effectiveness of matched-field-style source localization algorithms in uncertain ocean environments, and to analyzing the effect that random media composed of electrically large scatterers have on propagating waves.

    The first half of this dissertation introduces the frequency-difference autoproduct, a surrogate field quantity, and applies this quantity to passive acoustic remote sensing in waveguiding ocean environments. The frequency-difference autoproduct, a quadratic product of frequency-domain complex measured field values, is demonstrated to retain phase stability in the face of significant environmental uncertainty even when the related pressure field's phase is as unstable as noise. In particular, a measured autoproduct (at difference frequencies below 5 Hz) derived from a pressure field measured at hundreds of Hz, after propagation over hundreds of kilometers in a deep-ocean sound channel, can be consistently cross-correlated with a calculated autoproduct. The resulting cross-correlation coefficient is more than 10 dB greater than the equivalent coefficient obtained from the measured pressure field, demonstrating that the autoproduct is a stable alternative to the pressure field for array signal processing algorithms. The next major result demonstrates that the frequency-difference autoproduct can be used to passively localize remote unknown sound sources that broadcast sound at frequencies of hundreds of Hz over hundreds of kilometers to a measuring device. Because of the high frequency content of the measured pressure field, an equivalent conventional localization result is not possible using frequency-domain methods. These two primary results, recovery of frequency-domain phase stability and robust source localization, are distinct contributions to existing signal processing techniques.

    The second half of this thesis focuses on understanding electromagnetic wave propagation in a random medium composed of metallic scatterers placed within a background medium. It develops new methods to compute the extinction and phase matrices, quantities related to radiative transfer theory, of a random medium composed of electrically large, interacting scatterers. A new method is proposed, based on using Monte Carlo simulation and full-wave computational electromagnetics methods simultaneously, to calculate the extinction coefficient and phase function of such a random medium. Another major result of this thesis demonstrates that the coherent portion of the field scattered by a configuration of the random medium is equivalent to the field scattered by a homogeneous dielectric that occupies the same volume as the configuration. The thesis also demonstrates that the incoherent portion of the field scattered by a configuration of the random medium, related to the phase function of the medium, can be calculated using buffer-zone averaging. These methods are applied to model field propagation in a random medium, and an extension of single-scattering theory is proposed that can be used to understand mean-field propagation in relatively dense (tens of particles per cubic wavelength) random media composed of electrically large (up to three wavelengths long) conductors, and incoherent field propagation in relatively dense (up to five particles per cubic wavelength) media composed of electrically large (up to two wavelengths long) conductors. These results represent an important contribution to the field of incoherent, polarimetric remote sensing of the environment.

    PhD, Applied Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169886/1/geroskdj_1.pd
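    A minimal sketch (Python) of the frequency-difference autoproduct and a normalized Bartlett-style correlation of the kind used in matched-field processing. The function and variable names are illustrative, and the normalization is a generic choice rather than the thesis' exact processor:

```python
import numpy as np

def frequency_difference_autoproduct(P, f, f_lo, f_hi):
    """Per-sensor autoproduct at difference frequency df = f_hi - f_lo.

    P: (n_sensors, n_freqs) complex pressure spectra measured on the array.
    f: frequency axis corresponding to the columns of P.
    Returns P(f_hi) * conj(P(f_lo)) for each sensor; the phase of this quadratic
    product behaves like that of a field at the much lower difference frequency.
    """
    i_lo = int(np.argmin(np.abs(f - f_lo)))
    i_hi = int(np.argmin(np.abs(f - f_hi)))
    return P[:, i_hi] * np.conj(P[:, i_lo])

def bartlett_correlation(measured, replica):
    """Normalized cross-correlation between measured samples (e.g. autoproducts
    across the array) and a modeled replica vector for one candidate source
    location; returns a value between 0 and 1."""
    num = np.abs(np.vdot(replica, measured)) ** 2
    den = np.vdot(replica, replica).real * np.vdot(measured, measured).real
    return num / den
```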

    Sparse Bayesian information filters for localization and mapping

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2008.

    This thesis formulates an estimation framework for Simultaneous Localization and Mapping (SLAM) that addresses the problem of scalability in large environments. We describe an estimation-theoretic algorithm that achieves significant gains in computational efficiency while maintaining consistent estimates for the vehicle pose and the map of the environment. We specifically address the feature-based SLAM problem, in which the robot represents the environment as a collection of landmarks. The thesis takes a Bayesian approach whereby we maintain a joint posterior over the vehicle pose and feature states, conditioned upon measurement data. We model the distribution as Gaussian and parametrize the posterior in the canonical form, in terms of the information (inverse covariance) matrix. When sparse, this representation is amenable to computationally efficient Bayesian SLAM filtering. However, while a large majority of the elements within the normalized information matrix are very small in magnitude, it is fully populated nonetheless. Recent feature-based SLAM filters achieve the scalability benefits of a sparse parametrization by explicitly pruning these weak links in an effort to enforce sparsity. We analyze one such algorithm, the Sparse Extended Information Filter (SEIF), which has laid much of the groundwork concerning the computational benefits of the sparse canonical form. The thesis performs a detailed analysis of the process by which the SEIF approximates the sparsity of the information matrix and reveals key insights into the consequences of different sparsification strategies. We demonstrate that the SEIF yields a sparse approximation to the posterior that is inconsistent, suffering from exaggerated confidence estimates. This overconfidence has detrimental effects on important aspects of the SLAM process and affects the higher-level goal of producing accurate maps for subsequent localization and path planning.

    This thesis proposes an alternative scalable filter that maintains sparsity while preserving the consistency of the distribution. We leverage insights into the natural structure of the feature-based canonical parametrization and derive a method that actively maintains an exactly sparse posterior. Our algorithm exploits the structure of the parametrization to achieve gains in efficiency, with a computational cost that scales linearly with the size of the map. Unlike similar techniques that sacrifice consistency for improved scalability, our algorithm performs inference over a posterior that is conservative relative to the nominal Gaussian distribution. Consequently, we preserve the consistency of the pose and map estimates and avoid the effects of an overconfident posterior. We demonstrate our filter alongside the SEIF and the standard EKF, both in simulation and on two real-world datasets. The results show convincingly that, while maintaining the computational advantages of an exactly sparse representation, our method yields conservative estimates for the robot pose and map that are nearly identical to those of the original Gaussian distribution as produced by the EKF, but at much lower computational expense.

    The thesis concludes with an extension of our SLAM filter to a complex underwater environment. We describe a systems-level framework for localization and mapping relative to a ship hull with an Autonomous Underwater Vehicle (AUV) equipped with a forward-looking sonar. The approach utilizes our filter to fuse measurements of vehicle attitude and motion from onboard sensors with data from sonar images of the hull. We employ the system to perform three-dimensional, six-degree-of-freedom (6-DOF) SLAM on a ship hull.
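    For background on why the canonical (information) form is attractive here, the sketch below (Python, the textbook extended-information-filter update, not the thesis' specific algorithm) shows that a linearized measurement update is purely additive and only touches the entries of the information matrix associated with the states appearing in the measurement, which is what a sparse parametrization exploits:

```python
import numpy as np

def information_form_measurement_update(Lambda, eta, H, R, z, z_pred, x_lin):
    """Canonical-form Gaussian update: Lambda = Sigma^{-1}, eta = Lambda @ mu.

    For a measurement z ~ N(h(x), R) linearized about x_lin with Jacobian H,
    the update adds H^T R^{-1} H to the information matrix and a matching term
    to the information vector; entries of Lambda for states not involved in H
    are left untouched.
    """
    Rinv = np.linalg.inv(R)
    Lambda_new = Lambda + H.T @ Rinv @ H
    eta_new = eta + H.T @ Rinv @ (z - z_pred + H @ x_lin)
    return Lambda_new, eta_new
```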