
    Approximate Random Matrix Models for Generalized Fading MIMO Channels

    Approximate random matrix models for κ-μ and η-μ faded multiple input multiple output (MIMO) communication channels are derived in terms of a complex Wishart matrix. The proposed approximation has the least Kullback-Leibler (KL) divergence from the original matrix distribution. The utility of the results is demonstrated in a) computing average capacity/rate expressions for κ-μ/η-μ MIMO systems, b) computing outage probability (OP) expressions for maximum ratio combining (MRC) over κ-μ/η-μ faded MIMO channels, and c) deriving ergodic rate expressions for the zero-forcing (ZF) receiver in an uplink single-cell massive MIMO scenario with low-resolution analog-to-digital converters (ADCs) at the antennas. These approximate expressions are compared with Monte-Carlo simulations and a close match is observed.
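    A minimal sketch of the Monte-Carlo ergodic capacity baseline that such analytical expressions are typically checked against. The function name `ergodic_capacity_mc` and the sampler interface are illustrative assumptions, and the i.i.d. Rayleigh placeholder channel would be replaced by a κ-μ or η-μ channel generator to match the paper's setting.

```python
import numpy as np

def ergodic_capacity_mc(sample_H, snr_db, n_trials=10_000, rng=None):
    """Monte-Carlo estimate of E[log2 det(I + (SNR/Nt) H H^H)]."""
    rng = np.random.default_rng(rng)
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(n_trials):
        H = sample_H(rng)                      # Nr x Nt channel realization
        nr, nt = H.shape
        G = H @ H.conj().T
        caps.append(np.log2(np.real(np.linalg.det(np.eye(nr) + (snr / nt) * G))))
    return float(np.mean(caps))

# Placeholder channel: i.i.d. Rayleigh entries; a kappa-mu / eta-mu generator
# would be substituted here to reproduce the fading models of the paper.
rayleigh = lambda rng: (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
print(ergodic_capacity_mc(rayleigh, snr_db=10))
```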

    Generalized Residual Ratio Thresholding

    Simultaneous orthogonal matching pursuit (SOMP) and block OMP (BOMP) are two widely used techniques for sparse support recovery in multiple measurement vector (MMV) and block sparse (BS) models, respectively. For optimal performance, both SOMP and BOMP require a priori knowledge of the signal sparsity or the noise variance. However, sparsity and noise variance are unavailable in most practical applications. This letter presents a novel technique called generalized residual ratio thresholding (GRRT) for operating SOMP and BOMP without a priori knowledge of signal sparsity and noise variance, and derives finite sample and finite signal to noise ratio (SNR) guarantees for exact support recovery. Numerical simulations indicate that GRRT performs similarly to BOMP and SOMP supplied with a priori knowledge of the signal and noise statistics. Comment: 13 pages, 8 figures.
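    A minimal sketch of standard SOMP run for a fixed number of iterations k, which is exactly the kind of a priori sparsity knowledge GRRT is designed to remove; GRRT's residual ratio stopping rule is not reproduced here, and the function name `somp` is illustrative.

```python
import numpy as np

def somp(A, Y, k):
    """Standard SOMP: recover a common support of size k shared by the columns
    of Y in the MMV model Y = A X + noise.  The fixed iteration count k stands
    in for the sparsity/noise knowledge that GRRT removes."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        # Pick the column of A most correlated with the residual across all measurements.
        scores = np.linalg.norm(A.conj().T @ residual, axis=1)
        scores[support] = 0.0
        support.append(int(np.argmax(scores)))
        # Re-project Y onto the span of the selected columns.
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X_s
    return sorted(support)
```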

    Tuning Free Orthogonal Matching Pursuit

    Orthogonal matching pursuit (OMP) is a widely used compressive sensing (CS) algorithm for recovering sparse signals in noisy linear regression models. The performance of OMP depends on its stopping criterion (SC). The SCs for OMP discussed in the literature typically assume knowledge of either the sparsity k₀ of the signal to be estimated or the noise variance σ², both of which are unavailable in many practical applications. In this article we develop a modified version of OMP, called tuning free OMP (TF-OMP), which does not require an SC. TF-OMP is proved to accomplish successful sparse recovery under the usual assumptions on the restricted isometry constants (RIC) and mutual coherence of the design matrix. TF-OMP is numerically shown to deliver a highly competitive performance in comparison with OMP having a priori knowledge of k₀ or σ². Greedy algorithm for robust de-noising (GARD) is an OMP-like algorithm proposed for efficient estimation in classical overdetermined linear regression models corrupted by sparse outliers. However, GARD requires knowledge of the inlier noise variance, which is difficult to estimate. We also produce a tuning free algorithm (TF-GARD) for efficient estimation in the presence of sparse outliers by extending the operating principle of TF-OMP to GARD. TF-GARD is numerically shown to achieve a performance comparable to that of the existing implementation of GARD. Comment: 13 pages, 9 figures.
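    A minimal sketch of plain OMP with the two conventional stopping criteria the abstract refers to, a known sparsity k₀ or a known noise variance σ²; the tuning free mechanism of TF-OMP itself is not reproduced, and the function signature is an illustrative assumption.

```python
import numpy as np

def omp(A, y, k0=None, sigma2=None, max_iter=None):
    """Plain OMP with the two conventional stopping criteria: stop after k0
    iterations, or stop once the residual energy drops to the noise floor
    sigma2 * m.  TF-OMP removes the need to supply either quantity."""
    m, n = A.shape
    max_iter = max_iter or min(m, n)
    support, residual = [], y.copy()
    for _ in range(max_iter):
        if k0 is not None and len(support) >= k0:
            break
        if sigma2 is not None and np.dot(residual, residual) <= sigma2 * m:
            break
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x_hat = np.zeros(n)
    x_hat[support] = x_s
    return x_hat, sorted(support)
```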

    Analysis of Outage Probability of MRC with η-μ Co-channel Interference

    Approximate outage probability expressions are derived for systems employing maximum ratio combining when both the desired signal and the interfering signals are subjected to η-μ fading, with the interferers having unequal powers. The approximations are in terms of the Appell function and the Gauss hypergeometric function. A close match is observed between the outage probability obtained from the derived analytical expression and that obtained through Monte-Carlo simulations.
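    A minimal Monte-Carlo sketch of the MRC outage probability such an expression would be validated against, using the standard post-MRC SINR with matched-filter weights. The i.i.d. Rayleigh channels are placeholders; η-μ fading generators would be substituted to reproduce the paper's scenario, and the function name is an assumption.

```python
import numpy as np

def mrc_outage_mc(n_rx, p_s, p_int, sinr_thresh_db, noise_var=1.0,
                  n_trials=20_000, rng=None):
    """Monte-Carlo outage probability of MRC with co-channel interferers.
    With matched-filter weights w = h, the post-combining SINR is
    P_s ||h||^4 / (sum_i P_i |h^H g_i|^2 + noise_var ||h||^2)."""
    rng = np.random.default_rng(rng)
    thresh = 10 ** (sinr_thresh_db / 10)
    cn = lambda size: (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)
    outages = 0
    for _ in range(n_trials):
        h = cn(n_rx)                              # desired channel (placeholder fading)
        G = cn((n_rx, len(p_int)))                # interferer channels (placeholder fading)
        sig = p_s * np.linalg.norm(h) ** 4
        interf = np.sum(np.asarray(p_int) * np.abs(h.conj() @ G) ** 2)
        noise = noise_var * np.linalg.norm(h) ** 2
        outages += sig / (interf + noise) < thresh
    return outages / n_trials

# Example: 4 antennas, three unequal-power interferers.
print(mrc_outage_mc(4, p_s=1.0, p_int=[0.2, 0.1, 0.05], sinr_thresh_db=0))
```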

    Analysis of Optimal Combining in Rician Fading with Co-channel Interference

    Approximate symbol error rate (SER), outage probability and rate expressions are derived for a receive diversity system employing optimum combining when both the desired and the interfering signals are subjected to Rician fading, for the cases of a) equal-power uncorrelated interferers, b) unequal-power interferers, and c) correlated interferers. The derived expressions are applicable for an arbitrary number of receive antennas and interferers and for any quadrature amplitude modulation (QAM) constellation. Furthermore, we derive a simple closed-form expression for the SER in the interference-limited regime for the special case of Rayleigh faded interferers. A close match is observed between the SER, outage probability and rate results obtained through the derived analytical expressions and the ones obtained from Monte-Carlo simulations.
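    A minimal sketch of the post-combining SINR under optimum combining, w = R⁻¹h with R the interference-plus-noise covariance, which is the quantity the SER/outage/rate expressions are built on. The Rician channel construction in the usage lines is a generic one and the function name is an assumption.

```python
import numpy as np

def oc_sinr(h, G, p_s, p_int, noise_var=1.0):
    """Post-combining SINR of optimum combining: P_s * h^H R^{-1} h, where
    R = sum_i P_i g_i g_i^H + noise_var * I is the interference-plus-noise
    covariance for one channel realization."""
    n_rx = h.shape[0]
    R = (G * np.asarray(p_int)) @ G.conj().T + noise_var * np.eye(n_rx)
    return float(np.real(p_s * h.conj() @ np.linalg.solve(R, h)))

# Example: 2-antenna Rician desired channel (K = 5) and two Rayleigh interferers.
rng = np.random.default_rng(0)
K = 5.0
los = np.exp(1j * 2 * np.pi * rng.random(2))
nlos = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
h = np.sqrt(K / (K + 1)) * los + np.sqrt(1 / (K + 1)) * nlos
G = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
print(oc_sinr(h, G, p_s=1.0, p_int=[0.3, 0.1]))
```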

    High SNR Consistent Compressive Sensing Without Signal and Noise Statistics

    Recovering the support of sparse vectors in underdetermined linear regression models, aka compressive sensing, is important in many signal processing applications. High SNR consistency (HSC), i.e., the ability of a support recovery technique to correctly identify the support with increasing signal to noise ratio (SNR), is an increasingly popular criterion for qualifying the high SNR optimality of support recovery techniques. The HSC results available in the literature for support recovery techniques applicable to underdetermined linear regression models, like the least absolute shrinkage and selection operator (LASSO), orthogonal matching pursuit (OMP), etc., assume a priori knowledge of the noise variance or the signal sparsity. However, both these parameters are unavailable in most practical applications. Further, it is extremely difficult to estimate the noise variance or the signal sparsity in underdetermined regression models. This limits the utility of existing HSC results. In this article, we propose two techniques, viz., residual ratio minimization (RRM) and residual ratio thresholding with adaptation (RRTA), to operate the OMP algorithm without a priori knowledge of the noise variance and signal sparsity, and establish their HSC analytically and numerically. To the best of our knowledge, these are the first and only noise-statistics-oblivious algorithms to report HSC in underdetermined regression models. Comment: 13 pages, 6 figures.
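    A minimal sketch of the residual ratio statistic computed along OMP iterations, which is the quantity RRM and RRTA operate on. The selection rule shown in the commented usage (pick the iteration with the smallest ratio) is only an illustration of the residual-ratio idea; the exact RRM/RRTA rules are those given in the paper.

```python
import numpy as np

def residual_ratios(A, y, k_max):
    """Run OMP for k_max iterations and return the residual ratio statistic
    RR(k) = ||r_k|| / ||r_{k-1}|| together with the visited supports."""
    support, residual = [], y.copy()
    norms = [np.linalg.norm(y)]
    supports = []
    for _ in range(k_max):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
        norms.append(np.linalg.norm(residual))
        supports.append(list(support))
    rr = np.array(norms[1:]) / np.array(norms[:-1])
    return rr, supports

# Illustrative (not the paper's) selection: iteration with the smallest ratio.
# rr, supports = residual_ratios(A, y, k_max=10)
# estimated_support = supports[int(np.argmin(rr))]
```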

    High SNR Consistent Compressive Sensing

    High signal to noise ratio (SNR) consistency of model selection criteria in linear regression models has attracted a lot of attention recently. However, most of the existing literature on high SNR consistency deals with model order selection. Further, the limited literature available on the high SNR consistency of subset selection procedures (SSPs) is applicable only to linear regression with full rank measurement matrices. Hence, the performance of SSPs used in underdetermined linear models (a.k.a. compressive sensing (CS) algorithms) at high SNR is largely unknown. This paper fills this gap by deriving necessary and sufficient conditions for the high SNR consistency of popular CS algorithms like l₀-minimization, basis pursuit de-noising or LASSO, orthogonal matching pursuit and the Dantzig selector. The necessary conditions analytically establish the high SNR inconsistency of CS algorithms when used with the tuning parameters discussed in the literature. Novel tuning parameters with SNR adaptations are developed using the sufficient conditions, and the choice of SNR adaptations is discussed analytically using convergence rate analysis. CS algorithms with the proposed tuning parameters are numerically shown to be high SNR consistent and to outperform existing tuning parameters in the moderate to high SNR regime. Comment: 13 pages, 4 figures.
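    A small experiment sketch contrasting a fixed LASSO regularization parameter with a noise-scaled one as SNR grows, to illustrate the sensitivity to tuning that the paper analyzes. The noise-scaled choice sigma*sqrt(2 log p / n) is a commonly cited rule, not the SNR-adapted tuning proposed in the paper, and scikit-learn's Lasso uses a 1/(2n) data-fit scaling, so the mapping is only up to convention.

```python
import numpy as np
from sklearn.linear_model import Lasso

def support_recovery_rate(snr_db, scale_with_noise, n=50, p=100, k=5,
                          n_trials=200, rng=None):
    """Fraction of trials in which LASSO recovers the true support exactly,
    with alpha either fixed or scaled with the noise level sigma."""
    rng = np.random.default_rng(rng)
    hits = 0
    for _ in range(n_trials):
        A = rng.standard_normal((n, p)) / np.sqrt(n)
        x = np.zeros(p)
        supp = rng.choice(p, k, replace=False)
        x[supp] = rng.choice([-1.0, 1.0], k)
        sigma = np.linalg.norm(A @ x) / np.sqrt(n) / (10 ** (snr_db / 20))
        y = A @ x + sigma * rng.standard_normal(n)
        alpha = sigma * np.sqrt(2 * np.log(p) / n) if scale_with_noise else 0.05
        x_hat = Lasso(alpha=alpha, max_iter=10_000).fit(A, y).coef_
        hits += set(np.flatnonzero(x_hat)) == set(supp)
    return hits / n_trials

for snr in (10, 20, 40):
    print(snr, support_recovery_rate(snr, True), support_recovery_rate(snr, False))
```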

    Outage Probability and Rate for κ-μ Shadowed Fading in Interference Limited Scenario

    The κ-μ shadowed fading model is a very general fading model, as it includes both κ-μ and η-μ fading as special cases. In this work, we derive an expression for the outage probability when the signal-of-interest (SoI) and the interferers both experience κ-μ shadowed fading in an interference limited scenario. The derived expression is valid for arbitrary SoI parameters, arbitrary κ and μ parameters for all interferers, and any value of the parameter m for the interferers except the limiting value m → ∞. The expression can be written in terms of the Pochhammer integral, whose integrand contains only elementary functions. The outage probability expression is then simplified for various special cases, in particular when the SoI experiences η-μ or κ-μ fading. Further, the rate expression is derived when the SoI experiences κ-μ shadowed fading with integer values of μ and the interferers experience κ-μ shadowed fading with arbitrary parameters. The rate expression can be written in terms of a sum of Lauricella functions of the fourth kind. The utility of our results is demonstrated by using the derived expressions to study and compare FFR and SFR in the presence of κ-μ shadowed fading. Extensive simulation results are provided, and these further validate our theoretical results.
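    A generic Monte-Carlo harness for the interference-limited outage probability P(SIR < threshold) and ergodic rate E[log2(1 + SIR)] that closed-form expressions like these are usually compared against. The fading power samplers are abstract callables; a κ-μ shadowed power generator would be supplied by the caller to match the paper, and the exponential placeholders below are only for illustration.

```python
import numpy as np

def sir_outage_and_rate(sample_soi_power, sample_int_powers, thresh_db,
                        n_trials=20_000, rng=None):
    """Monte-Carlo outage probability and ergodic rate for SIR = S / sum(I_k)
    in an interference-limited scenario with caller-supplied fading powers."""
    rng = np.random.default_rng(rng)
    thresh = 10 ** (thresh_db / 10)
    sir = np.array([sample_soi_power(rng) / np.sum(sample_int_powers(rng))
                    for _ in range(n_trials)])
    return float(np.mean(sir < thresh)), float(np.mean(np.log2(1 + sir)))

# Placeholder samplers (exponential powers, i.e. Rayleigh fading) for illustration.
soi = lambda rng: rng.exponential(1.0)
interferers = lambda rng: rng.exponential(0.1, size=3)
print(sir_outage_and_rate(soi, interferers, thresh_db=0))
```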

    Concavifiability and convergence: necessary and sufficient conditions for gradient descent analysis

    Convergence of the gradient descent algorithm has been attracting renewed interest due to its utility in deep learning applications. Even as multiple variants of gradient descent were proposed, the assumption that the gradient of the objective is Lipschitz continuous remained an integral part of the analysis until recently. In this work, we look at convergence analysis by focusing on a property that we term concavifiability, instead of Lipschitz continuity of gradients. We show that concavifiability is a necessary and sufficient condition for the upper quadratic approximation, which is key in proving that the objective function decreases after every gradient descent update. We also show that any gradient Lipschitz function is concavifiable. A constant known as the concavifier, analogous to the gradient Lipschitz constant, is derived and is indicative of the optimal step size. As an application, we demonstrate the utility of finding the concavifier in the convergence of gradient descent through an example inspired by neural networks. We derive bounds on the concavifier to obtain a fixed step size for a single hidden layer ReLU network.
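    A minimal sketch of the step size prescription behind the upper quadratic approximation f(y) ≤ f(x) + ⟨∇f(x), y−x⟩ + (L/2)‖y−x‖², namely fixed-step gradient descent with step 1/L. The least squares example below uses the gradient Lipschitz constant for L; per the abstract, the concavifier plays the same role, but the ReLU-network example of the paper is not reproduced here.

```python
import numpy as np

def gradient_descent(grad, x0, L, n_steps=500):
    """Fixed-step gradient descent with step size 1/L, the choice suggested by
    the upper quadratic approximation (descent lemma)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x -= grad(x) / L
    return x

# Least squares example: f(x) = 0.5 * ||A x - b||^2, grad f(x) = A^T (A x - b),
# and the gradient Lipschitz constant is the largest eigenvalue of A^T A.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 5)), rng.standard_normal(30)
L = np.linalg.eigvalsh(A.T @ A).max()
x_star = gradient_descent(lambda x: A.T @ (A @ x - b), np.zeros(5), L)
print(np.allclose(x_star, np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-6))
```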

    Residual Ratio Thresholding for Model Order Selection

    Model order selection (MOS) in linear regression models is a widely studied problem in signal processing. Techniques based on information theoretic criteria (ITC) are the algorithms of choice in MOS problems. This article proposes a novel technique called residual ratio thresholding (RRT) for MOS in linear regression models, which is fundamentally different from the ITC based MOS criteria widely discussed in the literature. This article also provides a rigorous mathematical analysis of the high signal to noise ratio (SNR) and large sample size behaviour of RRT. RRT is numerically shown to deliver a highly competitive performance when compared to popular model order selection criteria like the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and penalised adaptive likelihood (PAL), especially when the sample size is small. Comment: 13 pages, 23 figures.
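    A short sketch of the ITC baseline family the abstract compares against, minimizing n·log(RSS/n) plus an AIC or BIC penalty over nested models. The nested-column model structure and function name are assumptions for illustration, and the RRT selection rule itself is not reproduced here.

```python
import numpy as np

def itc_model_order(A, y, max_order, criterion="BIC"):
    """Classical ITC-based model order selection over nested models formed by
    the first k columns of A: minimize n*log(RSS/n) + penalty(k), with
    penalty 2k for AIC and k*log(n) for BIC."""
    n = len(y)
    costs = []
    for k in range(1, max_order + 1):
        x_k, *_ = np.linalg.lstsq(A[:, :k], y, rcond=None)
        rss = np.sum((y - A[:, :k] @ x_k) ** 2)
        penalty = 2 * k if criterion == "AIC" else k * np.log(n)
        costs.append(n * np.log(rss / n) + penalty)
    return int(np.argmin(costs)) + 1

# Example: true order 3 in a 10-column nested model.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 10))
y = A[:, :3] @ np.array([2.0, -1.0, 1.5]) + 0.5 * rng.standard_normal(100)
print(itc_model_order(A, y, max_order=10))
```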