
    Minimum-Variance Importance-Sampling Bernoulli Estimator for Fast Simulation of Linear Block Codes over Binary Symmetric Channels

    In this paper, the choice of the Bernoulli distribution as the biased distribution for importance sampling (IS) Monte Carlo (MC) simulation of linear block codes over binary symmetric channels (BSCs) is studied. Based on the analytical derivation of the optimal IS Bernoulli distribution, with explicit calculation of the variance of the corresponding IS estimator, two novel algorithms for fast simulation of linear block codes are proposed. For sufficiently high signal-to-noise ratios (SNRs), one of the proposed algorithms is SNR-invariant, i.e., the IS estimator does not depend on the cross-over probability of the channel. The proposed algorithms are also shown to be suitable for estimating the error-correcting capability of the code and the decoder. Finally, the effectiveness of the algorithms is confirmed through simulation results in comparison with the standard Monte Carlo method.
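    The biased-Bernoulli idea above can be illustrated with a toy example: estimating the word-error probability of a 3-bit repetition code over a BSC by sampling bit flips from a biased Bernoulli distribution and reweighting each outcome by its likelihood ratio. This is a minimal sketch, not the paper's minimum-variance algorithm; the code choice, majority-vote decoder, and bias value q are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.01       # BSC crossover probability (true model)
q = 0.5        # biased Bernoulli flip probability (IS proposal, assumed)
n, trials = 3, 100_000   # 3-bit repetition code, number of IS samples

flips = rng.random((trials, n)) < q                       # flips drawn from the biased distribution
w = (p / q) ** flips * ((1 - p) / (1 - q)) ** (~flips)    # per-bit likelihood ratios f(x)/g(x)
weights = w.prod(axis=1)                                  # per-word likelihood ratio
errors = flips.sum(axis=1) >= 2                           # majority-vote decoder fails on >= 2 flips
est = np.mean(errors * weights)                           # unbiased IS estimate of word-error probability

exact = 3 * p**2 * (1 - p) + p**3                         # closed form for the repetition code
```

    Under plain MC at p = 0.01, word errors occur roughly 3 times per 10,000 trials; the biased distribution makes them occur in most trials, and the weights restore unbiasedness.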

    Unequal Error Protection Querying Policies for the Noisy 20 Questions Problem

    In this paper, we propose an open-loop unequal-error-protection querying policy based on superposition coding for the noisy 20 questions problem. In this problem, a player wishes to successively refine an estimate of the value of a continuous random variable by posing binary queries and receiving noisy responses. When the queries are designed non-adaptively as a single block and the noisy responses are modeled as the output of a binary symmetric channel, the 20 questions problem can be mapped to an equivalent problem of channel coding with unequal error protection (UEP). A new non-adaptive querying strategy based on UEP superposition coding is introduced whose estimation error decreases with an exponential rate of convergence that is significantly better than that of the UEP repetition coding introduced by Variani et al. (2015). With the proposed querying strategy, the rate of exponential decrease in the number of queries matches the rate of a closed-loop adaptive scheme where queries are sequentially designed with the benefit of feedback. Furthermore, the achievable error exponent is significantly better than that of random block codes employing equal error protection. Comment: To appear in IEEE Transactions on Information Theory.
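    The channel-coding view of the problem can be made concrete with the simplest non-adaptive baseline: ask each bit of the target's binary expansion several times through a BSC and decode by majority vote (repetition coding with equal protection, which the proposed UEP scheme improves upon). The target value theta, precision k, and repetition count r below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.1        # BSC crossover probability of the noisy answers
k, r = 6, 15   # k bits of precision, r repetitions per bit (equal protection)

theta = 0.6180339887                                       # hypothetical target in [0, 1)
bits = [(int(theta * 2**k) >> (k - 1 - i)) & 1 for i in range(k)]

decoded = []
for b in bits:
    answers = (np.full(r, b) + (rng.random(r) < p)) % 2    # r noisy answers to the same query
    decoded.append(int(answers.sum() > r / 2))             # majority-vote decoding

est = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(decoded))
```

    Equal protection wastes queries: an error in the most significant bit costs far more than one in the least significant bit, which is exactly the asymmetry UEP coding exploits.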

    Efficient Importance Sampling Simulations for Digital Communication Systems

    Importance sampling is a modified Monte Carlo simulation technique which can dramatically reduce the computational cost of the Monte Carlo method. A complete development is presented for its use in the estimation of bit error rates P_b for digital communication systems with small Gaussian noise inputs. Emphasis is on the optimal mean-translation Gaussian simulation density function design and the event simulation method as applied to systems which employ quasi-regular trellis codes. These codes include the convolutional codes and many TCM (Ungerboeck) codes. Euclidean distance information of a code is utilized to facilitate the simulation. Also, the conditional importance sampling technique is presented, which can handle many non-Gaussian system inputs. Theories as well as numerical examples are given. In particular, we study the simulations of uncoded MSK and trellis-coded 8-PSK transmissions over a general bandlimited nonlinear satellite channel model. Our algorithms are shown to be very efficient at low P_b compared to the ordinary Monte Carlo method. Many of the techniques we have developed are applicable to other system simulations as building blocks for their particular system configurations and channels.
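    The mean-translation idea can be sketched on the simplest error event: the probability that zero-mean Gaussian noise exceeds a decision distance d, i.e. Q(d/sigma). Sampling from a Gaussian translated to the boundary makes the rare event typical, and the likelihood ratio corrects the estimate. The values of sigma, d, and the trial count are illustrative assumptions, not the thesis's system model.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
sigma, d = 1.0, 4.0        # noise standard deviation; distance to the decision boundary
trials = 50_000

x = rng.normal(d, sigma, trials)                  # proposal: mean translated from 0 to d
log_w = (-x**2 + (x - d) ** 2) / (2 * sigma**2)   # log f(x) - log g(x) for the two Gaussians
est = np.mean((x > d) * np.exp(log_w))            # weighted indicator of the error event

exact = 0.5 * erfc(d / (sigma * sqrt(2)))         # Q(d / sigma), about 3.17e-5 here
```

    An ordinary MC run of the same size would see only one or two error events at this P_b; the translated density puts roughly half the samples past the boundary.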

    Sparsity-Based Algorithms for Line Spectral Estimation


    Importance Sampling Simulation of the Stack Algorithm with Application to Sequential Decoding

    Importance sampling is a Monte Carlo variance reduction technique which in many applications has resulted in a significant reduction in the computational cost required to obtain accurate Monte Carlo estimates. The basic idea is to generate the random inputs using a biased simulation distribution, that is, one that differs from the true underlying probability model. Simulation data is then weighted by an appropriate likelihood ratio in order to obtain an unbiased estimate of the desired parameter. This thesis presents new importance sampling techniques for the simulation of systems that employ the stack algorithm. The stack algorithm is primarily used in digital communications to decode convolutional codes, but there are also other applications. For example, sequential edge linking is a method of finding edges in images that employs the stack algorithm. In brief, the stack algorithm is an algorithm that attempts to find the maximum metric path through a large decision tree. There are two quantities that characterize its performance. First, there is the probability of a branching error. The second quantity is the distribution of computation. It turns out that the number of tree nodes examined in order to make a specific branching decision is a random variable. The distribution of computation is the distribution of this random variable. The estimation of the distribution of computation, and parameters derived from this distribution, is the main goal of this work. We present two new importance sampling schemes (including some variations) for estimating the distribution of computation of the stack algorithm. The first general method is called the reference path method. This method biases noise inputs using the weight distribution of the associated convolutional code. The second method is the partitioning method. This method uses a stationary biasing of noise inputs that alters the drift of the node metric process in an ensemble average sense.
The biasing is applied only up to a certain point in time: the point where the correct path node metric minimum occurs. This method is inspired by both information theory and large deviations theory. This thesis also presents two additional importance sampling techniques. The first is called the error events simulation method. This scheme will be used to estimate the error probabilities of stack algorithm decoders. The second method that we shall present is a new importance sampling technique for simulating the sequential edge linking algorithm. The main goal of this presentation will be the development of the basic theory that is relevant to this simulation problem and a discussion of some of the key issues that are related to the sequential edge linking simulation.
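    The stack algorithm itself is compact: keep an ordered stack of partial paths, always extend the one with the best metric, and stop when a full-depth path reaches the top. The number of nodes popped is the "computation" random variable whose distribution the thesis studies. The sketch below uses a hypothetical per-branch metric (correct branches drift up, wrong ones down) rather than a real convolutional-code metric.

```python
import heapq
import random

DEPTH = 8   # depth of the decision tree (assumed, for illustration)

def branch_metric(path, bit):
    # Hypothetical branch metric, deterministic per branch: the all-zero path
    # drifts upward, wrong branches drift downward (a stand-in for a Fano metric).
    random.seed(hash((path, bit)) & 0xFFFFFFFF)
    return random.gauss(0.5 if bit == 0 else -0.5, 1.0)

def stack_decode(depth):
    # Stack algorithm: always extend the partial path with the largest metric.
    stack = [(0.0, ())]            # (negated metric, path) so heapq acts as a max-stack
    nodes_examined = 0             # the "computation" random variable
    while True:
        neg_m, path = heapq.heappop(stack)
        nodes_examined += 1
        if len(path) == depth:     # a full-depth path reached the top: done
            return path, nodes_examined
        for bit in (0, 1):         # extend both branches and push them back
            m = -neg_m + branch_metric(path, bit)
            heapq.heappush(stack, (-m, path + (bit,)))

path, work = stack_decode(DEPTH)
```

    When noise makes wrong branches look good, the search wanders and `work` grows; rare large values of `work` dominate the tail of the distribution of computation, which is why importance sampling is needed to estimate it.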

    Compressive Sensing for Multi-channel and Large-scale MIMO Networks

    Compressive sensing (CS) is a revolutionary theory that has important applications in many engineering areas. Using CS, sparse or compressible signals can be recovered from incoherent measurements with far fewer samples than the conventional Nyquist rate requires. In wireless communication problems where the sparsity structure of the signals and the channels can be exploited, CS helps to significantly reduce the number of transmissions required for efficient and reliable data communication. The objective of this thesis is to study new methods of CS, from both theoretical and application perspectives, in various complex, multi-channel and large-scale wireless networks. Specifically, we explore new sparse signal and channel structures, and develop low-complexity CS-based algorithms to transmit and recover data over these networks more efficiently. Starting from the theory of sparse vector approximation based on CS, a compressive multiple-channel estimation (CMCE) method is developed to estimate multiple sparse channels simultaneously. CMCE provides a reduction in the required overhead for the estimation of multiple channels, and can be applied to estimate the composite channels of two-way relay channels (TWRCs) with sparse intersymbol interference (ISI). To improve the end-to-end error performance of the networks, various iterative estimation and decoding schemes based on CS for ISI-TWRC are proposed, for both modes of cooperative relaying: Amplify-and-Forward (AF) and Decode-and-Forward (DF). Theoretical results including the Restricted Isometry Property (RIP) and low-coherence condition of the discrete pilot signaling matrix, the performance guarantees, and the convergence of the schemes are presented in this thesis. Numerical results suggest that the error performance of the system is significantly improved by the proposed CS-based methods, thanks to their exploitation of the sparsity of the channels.
Low-rank matrix approximation, an extension of CS-based sparse vector recovery theory, is then studied in this research to address the channel estimation problem of large-scale (or massive) multiuser (MU) multiple-input multiple-output (MIMO) systems. A low-rank channel matrix estimation method based on nuclear-norm regularization is formulated and solved via a dual quadratic semi-definite programming (SDP) problem. An explicit choice of the regularization parameter and useful upper bounds on the error are presented to show the efficacy of the CS method in this case. After that, both the uplink channel estimation and the downlink data precoding of massive MIMO in interference-limited multicell scenarios are considered, where a CS-based rank-q channel approximation and multicell precoding method are proposed. The results in this work suggest that the proposed method can mitigate the effects of pilot contamination and intercell interference, hence improving the achievable rates of the users in multicell massive MIMO systems. Finally, various low-complexity greedy techniques are presented to confirm the efficacy and feasibility of the proposed approaches in practical applications.
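    The core CS recovery step referred to above, reconstructing a sparse vector from far fewer incoherent measurements than its dimension, can be sketched with one of the standard low-complexity greedy techniques, orthogonal matching pursuit (OMP). The dimensions, Gaussian sensing matrix, and noiseless setup are illustrative assumptions, not the thesis's channel model.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 30, 50, 3            # measurements, ambient dimension, sparsity (assumed)

A = rng.normal(size=(m, n)) / np.sqrt(m)     # random incoherent sensing matrix
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(size=k)              # k-sparse signal (e.g. a sparse channel)
y = A @ x                                    # m < n noiseless compressive measurements

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the column most correlated
    # with the residual, then re-fit the coefficients by least squares.
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
```

    Here 30 measurements recover a 50-dimensional signal exactly because only 3 entries are nonzero; this sampling saving is what reduces the pilot overhead in the channel-estimation schemes above.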