
    Detection of an unknown rank-one component in white noise

    We consider the detection of an unknown and arbitrary rank-one signal in a spatial sector scanned by a small number of beams. We address the problem of finding the maximal invariant for the problem at hand and show that it consists of the ratio of the eigenvalues of a Wishart matrix to its trace. Next, we derive the generalized-likelihood ratio test (GLRT) along with expressions for its probability density function (pdf) under both hypotheses. Special attention is paid to the case m = 2, where the GLRT is shown to be uniformly most powerful invariant (UMPI). Numerical simulations attest to the validity of the theoretical analysis and illustrate the detection performance of the GLRT.
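    A minimal numerical sketch of the eigenvalue-to-trace statistic described above, assuming a two-beam array in white complex Gaussian noise; the snapshot count, SNR, and variable names are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 2, 64              # number of beams and snapshots (illustrative)
snr = 2.0                 # per-snapshot signal power (illustrative)

# Data under H1: unknown rank-one signal plus complex white noise
s = rng.standard_normal(m) + 1j * rng.standard_normal(m)
s /= np.linalg.norm(s)                                      # arbitrary spatial signature
a = np.sqrt(snr / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
noise = (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))) / np.sqrt(2)
X = np.outer(s, a) + noise                                  # m x N snapshot matrix

# Sample covariance: a (scaled) complex Wishart matrix under H0
S = X @ X.conj().T / N

# Invariant statistic: largest eigenvalue divided by the trace
eigvals = np.linalg.eigvalsh(S)
stat = eigvals[-1] / eigvals.sum()
print(f"largest-eigenvalue-to-trace statistic: {stat:.3f}")
```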

    Interference estimation with applications to blind multiple-access communication over fading channels

    We consider the detection of nonorthogonal multipulse signals on multiple-access fading channels. The generalized maximum-likelihood rule is employed to decode users whose complex fading gains are unknown. We develop geometrical interpretations for the resulting detectors and their corresponding asymptotic efficiencies. The generalized maximum-likelihood detection rule is then applied to find a matched subspace detector for the frequency-selective fading channel, under the assumption of a short coherence time (or a long coherence time without the computational power to track the fading parameters). We propose blind implementations of these detectors for nonorthogonal multipulse signaling on both frequency-nonselective and frequency-selective multiple-access fading channels. These blind detectors extend the results of Wang and Poor to multipulse modulation and fast frequency-selective fading. For comparison, the minimum mean-squared error decision rules for these channels are derived and blind implementations of their corresponding detectors are developed. This work was supported by the National Science Foundation under Contract ECS 9979400 and by the Office of Naval Research under Contracts N00014-89-J-1070 and N0014-00-1-0033.
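    As a rough illustration of the matched subspace idea mentioned above, the sketch below forms an energy-ratio statistic from projections onto an assumed signal subspace; the basis H, its dimension, and the noise level are made-up placeholders rather than the paper's detector:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 32, 4              # samples per symbol and subspace dimension (illustrative)

# Columns of H span the user's multipulse signal subspace (illustrative basis)
H = rng.standard_normal((n, k))
Q, _ = np.linalg.qr(H)
P = Q @ Q.T               # orthogonal projection onto the signal subspace

# Received vector: subspace signal with unknown gains plus white noise
theta = rng.standard_normal(k)
y = H @ theta + 0.5 * rng.standard_normal(n)

# Energy-ratio statistic: signal-subspace energy over orthogonal-complement energy
t = (y @ P @ y) / (y @ (np.eye(n) - P) @ y)
print(f"matched subspace statistic: {t:.3f}")
```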

    Sliding windows and lattice algorithms for computing QR factors in the least squares theory of linear prediction

    In this correspondence we pose a sequence of linear prediction problems that differ slightly from those previously posed. The solutions to these problems introduce a family of "sliding" window techniques into the least squares theory of linear prediction. Using these techniques we are able to QR factor the Toeplitz data matrices that arise in linear prediction. The matrix Q is an orthogonal version of the data matrix and the matrix R is a Cholesky factor of the experimental correlation matrix. Our QR and Cholesky algorithms generate generalized reflection coefficients that may be used in the usual ways for analysis, synthesis, or classification. This work was supported by the Office of Naval Research, Arlington, VA, under Contract N00014-85-K-0256.
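    The claim that R from a QR factorization of the windowed data matrix is a Cholesky factor of the experimental correlation matrix can be checked numerically. The sketch below assumes a single fixed window; the prediction order, window length, and data are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(200)     # data record
p, W = 4, 64                     # prediction order and window length (illustrative)

# Toeplitz data matrix for one window: row i is [x[p+i], x[p+i-1], ..., x[i]]
X = np.column_stack([x[p - j : p - j + W] for j in range(p + 1)])

# QR factorization: Q is an orthogonal version of the data matrix,
# and R is a Cholesky factor of the experimental correlation matrix X'X
Q, R = np.linalg.qr(X)
print(np.allclose(R.T @ R, X.T @ X))   # True: R'R reproduces X'X
```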

    A note on recursive maximum likelihood for autoregressive modeling

    In this paper, we rederive recursive maximum likelihood (RML) for an autoregressive (AR) time series using the Levinson decomposition. This decomposition produces a recursive update of the likelihood function for the AR parameters in terms of the reflection coefficients, prediction error variances, and forward and backward prediction errors. A fast algorithm for this recursive update is presented and compared with the recursive updates of the Burg algorithm. The comparison clarifies the connection between Burg's algorithm and RML. This work was supported by Bonneville Power Administration under Contract No. DEBI7990BPO7346 and by the Office of Naval Research, Statistics and Probability Branch, under Contract No. N00014-89-J-1070.
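    For context on the quantities the recursive update is expressed in, here is a standard Burg recursion producing reflection coefficients, prediction error variances, and forward/backward prediction errors. This is the textbook algorithm the paper compares against, not the RML update itself, and the simulated AR(2) model is illustrative:

```python
import numpy as np

def burg(x, order):
    """Burg recursion: AR coefficients, reflection coefficients, error variances."""
    f = np.asarray(x, dtype=float)    # forward prediction errors
    b = f.copy()                      # backward prediction errors
    a = np.array([1.0])               # AR polynomial [1, a1, ..., ap]
    k = np.zeros(order)               # reflection coefficients
    e = np.zeros(order + 1)
    e[0] = np.mean(f ** 2)            # zeroth-order prediction error variance
    for p in range(order):
        fp, bp = f[1:], b[:-1]
        k[p] = -2.0 * np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
        f, b = fp + k[p] * bp, bp + k[p] * fp
        a = np.concatenate([a, [0.0]]) + k[p] * np.concatenate([[0.0], a[::-1]])
        e[p + 1] = (1.0 - k[p] ** 2) * e[p]
    return a, k, e

rng = np.random.default_rng(3)
# Simulate an AR(2) series and recover its parameters
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 1.5 * x[t - 1] - 0.75 * x[t - 2] + rng.standard_normal()
print(burg(x, 2)[0])   # close to [1, -1.5, 0.75]
```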

    Nonlinear maximum likelihood estimation of autoregressive time series

    In this paper, we describe an algorithm for finding the exact, nonlinear, maximum likelihood (ML) estimators for the parameters of an autoregressive (AR) time series. We demonstrate that the ML normal equations can be written as an interdependent set of cubic and quadratic equations in the AR polynomial coefficients. We present an algorithm that algebraically solves this set of nonlinear equations for low-order problems. For high-order problems, we describe iterative algorithms for obtaining an ML solution. This work was supported by Bonneville Power Administration under Contract #DEBI7990BPO7346 and by the Office of Naval Research, Statistics and Probability Branch, under Contract N00014-89-J-1070.
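    As a hedged illustration of what exact (unconditional) ML means for an autoregressive model, the sketch below numerically maximizes the exact Gaussian likelihood of an AR(1) series with the noise variance concentrated out; it does not reproduce the paper's algebraic solution of the cubic/quadratic normal equations:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
a_true, sigma, N = 0.8, 1.0, 500

# Simulate a stationary Gaussian AR(1) series
x = np.zeros(N)
x[0] = rng.standard_normal() * sigma / np.sqrt(1 - a_true**2)
for t in range(1, N):
    x[t] = a_true * x[t - 1] + sigma * rng.standard_normal()

def neg_exact_loglik(a):
    """Exact (unconditional) AR(1) negative log-likelihood, sigma^2 concentrated out."""
    rss = (1 - a**2) * x[0]**2 + np.sum((x[1:] - a * x[:-1])**2)
    s2 = rss / N
    return 0.5 * (N * np.log(2 * np.pi * s2) - np.log(1 - a**2) + N)

res = minimize_scalar(neg_exact_loglik, bounds=(-0.999, 0.999), method="bounded")
print(f"exact ML estimate of a: {res.x:.3f}")   # close to 0.8
```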

    Analysis of Fisher Information and the Cramér-Rao Bound for Nonlinear Parameter Estimation after Compressed Sensing

    In this paper, we analyze the impact of compressed sensing with complex random matrices on Fisher information and the Cramér-Rao Bound (CRB) for estimating unknown parameters in the mean value function of a complex multivariate normal distribution. We consider the class of random compression matrices whose distribution is right-orthogonally invariant. The compression matrix whose elements are i.i.d. standard normal random variables is one such matrix. We show that for all such compression matrices, the Fisher information matrix has a complex matrix beta distribution. We also derive the distribution of the CRB. These distributions can be used to quantify the loss in the CRB as a function of the Fisher information of the non-compressed data. In our numerical examples, we consider a direction-of-arrival estimation problem and discuss the use of these distributions as guidelines for choosing compression ratios based on the resulting loss in the CRB.
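    A Monte Carlo sketch of the CRB loss under Gaussian compression for a simple direction-of-arrival model is given below. It assumes a single unknown angle, a uniform linear array, and compression matrices with orthonormalized rows (so the noise stays white); this simplifies the paper's setting rather than reproducing its matrix beta results:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, sigma2, theta = 64, 16, 1.0, np.deg2rad(20)   # illustrative ULA DOA setup

# Derivative of the ULA steering vector with respect to the arrival angle
k = np.arange(n)
a = np.exp(1j * np.pi * k * np.sin(theta))
d = 1j * np.pi * k * np.cos(theta) * a              # d a / d theta

# Fisher information for theta from the full (uncompressed) data
J_full = 2.0 / sigma2 * np.linalg.norm(d) ** 2

# Monte Carlo over Gaussian compression matrices with orthonormalized rows
losses = []
for _ in range(2000):
    G = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    Phi = np.linalg.qr(G.conj().T)[0].conj().T      # orthonormal rows keep the noise white
    J_comp = 2.0 / sigma2 * np.linalg.norm(Phi @ d) ** 2
    losses.append(J_full / J_comp)                  # CRB inflation factor
print(f"mean CRB loss factor at m/n = {m/n:.2f}: {np.mean(losses):.2f}")
```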

    Data adaptive rank-shaping methods for solving least squares problems

    There are two types of problems in the theory of least squares signal processing: parameter estimation and signal extraction. Parameter estimation is called "inversion" and signal extraction is called "filtering." In this paper, we present a unified theory of rank shaping for solving overdetermined and underdetermined versions of these problems. We develop several data-dependent rank-shaping methods and evaluate their performance. Our key result is a data-adaptive Wiener filter that automatically adjusts its gains to accommodate realizations that are a priori unlikely. The adaptive filter dramatically outperforms the Wiener filter on atypical realizations and only slightly underperforms it on typical realizations. This is the most one can hope for in a data-adaptive filter. This work was supported by the Office of Naval Research, Mathematics Division, under Contract No. N00014-89-J-1070 and by Bonneville Power Administration under Contract #DEBI7990BPO7346.
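    The sketch below shows the basic mechanism of rank shaping in an overdetermined least squares problem: per-mode gains applied to the singular-value inverses, with ordinary least squares and a Wiener-like shaping as two possible gain choices. It is not the paper's data-adaptive filter, and the problem sizes, conditioning, and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, sigma = 50, 10, 0.5

# An ill-conditioned overdetermined problem  y = H x + noise  (illustrative)
H = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, -3, n))
x_true = rng.standard_normal(n)
y = H @ x_true + sigma * rng.standard_normal(m)

U, s, Vt = np.linalg.svd(H, full_matrices=False)

def shaped_solution(gains):
    """Least squares solution with per-mode gains shaping the singular-value inverses."""
    return Vt.T @ (gains * (U.T @ y) / s)

ls_gains = np.ones(n)                    # ordinary least squares: every mode at full gain
wiener_gains = s**2 / (s**2 + sigma**2)  # Wiener-like shaping: weak modes attenuated

for name, g in [("least squares", ls_gains), ("rank-shaped", wiener_gains)]:
    print(f"{name:>13s} error: {np.linalg.norm(shaped_solution(g) - x_true):.3f}")
```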