
    Fast performance estimation of block codes

    Importance sampling is used in this paper to address the classical yet important problem of performance estimation of block codes. Simulation distributions that comprise discrete- and continuous-mixture probability densities are motivated and used for this application. These mixtures are employed in concert with the so-called g-method, a conditional importance sampling technique that more effectively exploits knowledge of the underlying input distributions. For performance estimation, the emphasis is on bit-by-bit maximum a posteriori probability decoding, but message-passing algorithms for certain codes have also been investigated. Considered here are single parity check codes, multidimensional product codes, and, briefly, low-density parity-check codes. Several error rate results are presented for these various codes, together with the performance of the simulation techniques.
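    As a concrete illustration of the general idea (though not of the paper's g-method or its discrete/continuous mixture biasing densities), the following sketch estimates a bit error probability for BPSK over an AWGN channel by plain mean-translation importance sampling; all names and parameters are illustrative choices of ours.

```python
import numpy as np

def bpsk_ber_importance_sampling(snr_db, n_samples=100_000, seed=0):
    """Estimate the BPSK bit error rate over AWGN with mean-shifted importance sampling."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    sigma = 1.0 / np.sqrt(2.0 * snr)   # noise standard deviation for unit-energy symbols
    shift = -1.0                       # bias the noise mean toward the decision boundary
    noise = rng.normal(loc=shift, scale=sigma, size=n_samples)
    # Likelihood ratio f(n)/g(n) of the true (zero-mean) vs. biased Gaussian noise density.
    weights = np.exp(((noise - shift) ** 2 - noise ** 2) / (2.0 * sigma ** 2))
    errors = (1.0 + noise) < 0.0       # symbol +1 is misdetected when noise < -1
    return float(np.mean(errors * weights))

# At 8 dB this returns roughly Q(sqrt(2 * SNR)) ~ 2e-4; the biased density concentrates
# draws near the decision boundary, so far fewer samples are needed than with naive Monte Carlo.
print(bpsk_ber_importance_sampling(8.0))
```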

    Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions

    Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited as an algorithm for generating points from quite arbitrary distributions, including all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. (Author's abstract.) Series: Preprint Series / Department of Applied Statistics and Data Processing
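    A minimal sketch of the two ingredients named in the abstract, assuming a convex Ratio-of-Uniforms acceptance region (which holds for log-concave densities): hit-and-run moves inside that region, driven only by a membership oracle built from the density. The chord search, step counts and the Gaussian example are illustrative choices, not the authors' implementation.

```python
import numpy as np

def chord_end(inside, x, d, eps=1e-9):
    """Distance from x to the set boundary along direction d (doubling, then bisection)."""
    hi = 1.0
    while inside(x + hi * d):
        hi *= 2.0
    lo = 0.0
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if inside(x + mid * d):
            lo = mid
        else:
            hi = mid
    return lo

def hit_and_run(inside, x, n_steps, rng):
    """Hit-and-run chain targeting the uniform distribution on a convex set."""
    x = np.array(x, dtype=float)
    for _ in range(n_steps):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        t = rng.uniform(-chord_end(inside, x, -d), chord_end(inside, x, d))
        x = x + t * d
    return x

# Target density: an unnormalised standard bivariate normal (log-concave).
DIM = 2
def f(x):
    return np.exp(-0.5 * np.dot(x, x))

def in_rou_region(p):
    """Ratio-of-Uniforms set A = {(u, v) : 0 < u <= f(v/u)^(1/(DIM+1))}; then x = v/u ~ f."""
    u, v = p[0], p[1:]
    return u > 0.0 and u <= f(v / u) ** (1.0 / (DIM + 1))

rng = np.random.default_rng(0)
p = np.array([0.5, 0.0, 0.0])                 # interior starting point built from the mode
draws = []
for _ in range(2000):
    p = hit_and_run(in_rou_region, p, 5, rng)
    draws.append(p[1:] / p[0])                # transform back: x = v/u
draws = np.array(draws)
print(draws.mean(axis=0), draws.std(axis=0))  # roughly (0, 0) and (1, 1)
```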

    Convex set of quantum states with positive partial transpose analysed by hit and run algorithm

    The convex set of quantum states of a composite $K \times K$ system with positive partial transpose is analysed. A version of the hit-and-run algorithm is used to generate a sequence of random points covering this set uniformly, and an estimate of the convergence speed of the algorithm is derived. For $K \ge 3$ this algorithm works faster than sampling over the entire set of states and verifying whether the partial transpose is positive. The level density of the PPT states is shown to differ from the Marchenko-Pastur distribution, which is supported in [0,4] and corresponds asymptotically to the entire set of quantum states. Based on the shifted semicircle law, which describes the asymptotic level density of partially transposed states, and on the level density for the Gaussian unitary ensemble with constraints on the spectrum, we find an explicit form of the probability distribution supported in [0,3] which describes well the level density obtained numerically for PPT states. Comment: 11 pages, 4 figures.
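    For contrast with the hit-and-run approach, here is a small sketch of the naive baseline the abstract mentions: drawing Hilbert-Schmidt-random density matrices and keeping those whose partial transpose has no negative eigenvalue. Function names and parameters are ours; this is not the paper's algorithm.

```python
import numpy as np

def random_density_matrix(dim, rng):
    """Random state from the Hilbert-Schmidt ensemble via a square Ginibre matrix."""
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def partial_transpose(rho, k):
    """Partial transpose on the second factor of a state on C^k (x) C^k."""
    r = rho.reshape(k, k, k, k)               # axes: (i, j, i', j')
    return r.transpose(0, 3, 2, 1).reshape(k * k, k * k)

def sample_ppt_states(k, n_target, rng):
    """Rejection sampling: keep states whose partial transpose has no negative eigenvalue."""
    states = []
    while len(states) < n_target:
        rho = random_density_matrix(k * k, rng)
        if np.linalg.eigvalsh(partial_transpose(rho, k)).min() >= 0.0:
            states.append(rho)
    return states

rng = np.random.default_rng(1)
ppt = sample_ppt_states(2, 50, rng)           # the acceptance rate is workable for K = 2,
print(len(ppt))                               # but collapses for larger K, hence hit-and-run
```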

    Generalized decomposition and cross entropy methods for many-objective optimization

    Get PDF
    Decomposition-based algorithms for multi-objective optimization problems have increased in popularity in the past decade. Although their convergence to the Pareto optimal front (PF) is in several instances superior to that of Pareto-based algorithms, the problem of selecting a way to distribute or guide these solutions in a high-dimensional space has not been explored. In this work, we introduce a novel concept which we call generalized decomposition. Generalized decomposition provides a framework with which the decision maker (DM) can guide the underlying evolutionary algorithm toward specific regions of interest or toward the entire Pareto front with the desired distribution of Pareto optimal solutions. Additionally, it is shown that generalized decomposition simplifies many-objective problems by unifying the three performance objectives of multi-objective evolutionary algorithms – convergence to the PF, evenly distributed Pareto optimal solutions and coverage of the entire front – into only one, that of convergence. A framework established on generalized decomposition, together with an estimation of distribution algorithm (EDA) based on low-order statistics, namely the cross-entropy method (CE), is created to illustrate the benefits of the proposed concept for many-objective problems. This choice of EDA also enables a test of the hypothesis that EDAs based on low-order statistics can have performance comparable to that of more elaborate EDAs.
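    As a rough illustration of the low-order-statistics EDA in question, the sketch below runs the cross-entropy method on a single Chebyshev-scalarised subproblem of a toy bi-objective function; the weight vector, test problem and hyperparameters are our own illustrative choices rather than the paper's generalized-decomposition setup.

```python
import numpy as np

def chebyshev(fvals, weights, ideal):
    """Weighted Chebyshev scalarisation of an objective vector."""
    return np.max(weights * np.abs(fvals - ideal))

def objectives(x):
    """Toy bi-objective problem on [0, 1]^n with a convex Pareto front."""
    f1 = x[0]
    f2 = 1.0 - np.sqrt(x[0]) + np.sum(x[1:] ** 2)
    return np.array([f1, f2])

def cross_entropy(weights, dim=5, pop=100, elite=10, iters=60, seed=0):
    """Cross-entropy method: refit the Gaussian mean/std to the elite samples each iteration."""
    rng = np.random.default_rng(seed)
    mean, std = np.full(dim, 0.5), np.full(dim, 0.3)
    ideal = np.zeros(2)
    for _ in range(iters):
        x = np.clip(rng.normal(mean, std, size=(pop, dim)), 0.0, 1.0)
        scores = np.array([chebyshev(objectives(xi), weights, ideal) for xi in x])
        elites = x[np.argsort(scores)[:elite]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6   # low-order statistics only
    return mean, chebyshev(objectives(mean), weights, ideal)

# One subproblem of the decomposition: equal weights steer the search toward the middle of
# the front; sweeping the weight vector over many subproblems would cover the whole front.
solution, score = cross_entropy(weights=np.array([0.5, 0.5]))
print(solution, score)
```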