
    Naive mean field approximation for image restoration

    We attempt image restoration in the framework of Bayesian inference. Recently, it has been shown that, under a certain criterion, the MAP (Maximum A Posteriori) estimate, which corresponds to the minimization of energy, can be outperformed by the MPM (Maximizer of the Posterior Marginals) estimate, which is equivalent to a finite-temperature decoding method. Since the MPM estimate requires considerable computation time to calculate the thermal averages, the mean field method, a deterministic algorithm, is often used to avoid this difficulty. We present a statistical-mechanical analysis of the naive mean field approximation in the framework of image restoration. We compare our theoretical results with those of computer simulations and investigate the potential of the naive mean field approximation.
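
    As a concrete illustration, a minimal sketch of the scheme described above: the noisy ±1 image defines an external field in an Ising-type posterior, the naive mean field self-consistency equations replace the costly thermal averages, and the MPM estimate is the sign of the resulting marginal averages. The coupling J, field strength h, inverse temperature beta, and periodic boundaries are illustrative assumptions, not the paper's settings.

        import numpy as np

        def naive_mean_field_mpm(tau, J=1.0, h=1.0, beta=1.0, n_iter=50):
            """Restore a 2D array tau of +/-1 noisy pixels.

            Posterior modelled as an Ising system: ferromagnetic smoothness
            coupling J between nearest neighbours plus an external field
            h*tau from the observed pixels.  Naive mean field iterates
            m_i = tanh(beta * (J * sum_of_neighbour_m + h * tau_i)).
            """
            m = tau.astype(float)
            for _ in range(n_iter):
                # four nearest neighbours (periodic boundaries for brevity)
                nb = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
                      np.roll(m, 1, 1) + np.roll(m, -1, 1))
                m = np.tanh(beta * (J * nb + h * tau))
            return np.sign(m)  # MPM estimate: sign of each marginal average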

    Finite connectivity systems as error-correcting codes

    We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes in which each parity check comprises the product of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is kept finite. We then examine the finite-temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite-K case, and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
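
    The Sourlas construction is compact enough to sketch: each transmitted value is the parity (product) of K message bits, and decoding corresponds to finding low-energy states of a K-spin glass with those values as couplings. In the toy version below, the K-subsets are drawn at random, so C checks per bit holds only on average rather than exactly as in the paper's ensemble.

        import numpy as np

        rng = np.random.default_rng(0)

        def sourlas_encode(xi, K, n_checks):
            """Each coupling J_mu is the product (parity) of K randomly
            chosen message bits xi in {-1, +1}."""
            idx = np.array([rng.choice(len(xi), size=K, replace=False)
                            for _ in range(n_checks)])
            return idx, np.prod(xi[idx], axis=1)

        def energy(s, idx, J):
            """Spin-glass cost whose ground state is the decoded message:
            H(s) = -sum_mu J_mu * prod_{i in check mu} s_i."""
            return -np.sum(J * np.prod(s[idx], axis=1))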

    Thouless-Anderson-Palmer Approach for Lossy Compression

    We study an ill-posed linear inverse problem in which a binary sequence is reproduced using a sparse matrix. According to a previous study, this model can theoretically provide an optimal compression scheme for an arbitrary distortion level, though the encoding procedure remains an NP-complete problem. In this paper, we focus on the consistency condition for a Markov-type dynamical model to derive an iterative algorithm, following the approach of Thouless, Anderson, and Palmer (TAP). Numerical results show that the algorithm can empirically saturate the theoretical limit for the sparse construction of our codes, which is also very close to the rate-distortion function.
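
    For readers unfamiliar with TAP-style updates, the sketch below shows the generic structure of such a deterministic iteration for an Ising system: the naive mean field update plus the Onsager reaction term, solved by damped fixed-point iteration. This is the classic SK-model form, shown only to illustrate the structure; the paper derives the analogous consistency conditions for its sparse compression codes.

        import numpy as np

        def tap_iterate(J, h, beta, n_iter=100, damping=0.5):
            """Damped TAP iteration for local magnetisations m_i:
            m_i = tanh(beta * (h_i + sum_j J_ij m_j
                               - beta * m_i * sum_j J_ij^2 (1 - m_j^2)))
            J is a symmetric coupling matrix with zero diagonal."""
            m = np.zeros(len(h))
            for _ in range(n_iter):
                onsager = beta * m * ((J ** 2) @ (1.0 - m ** 2))
                m_new = np.tanh(beta * (h + J @ m - onsager))
                m = damping * m + (1.0 - damping) * m_new  # damped update
            return m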

    Analysis of common attacks in LDPCC-based public-key cryptosystems

    We analyze the security and reliability of a recently proposed class of public-key cryptosystems against attacks by unauthorized parties who have acquired partial knowledge of one or more of the private key components and/or of the plaintext. Phase diagrams are presented, showing the critical levels of partial knowledge required for unauthorized decryption.

    Correcting the Bias of Subtractive Interference Cancellation in CDMA: Advanced Mean Field Theory

    In this paper we introduce an advanced mean field method to correct the inherent bias of conventional subtractive interference cancellation in Code Division Multiple Access (CDMA). In simulations, we obtain performance quite close to that of the individually optimal detector, which has exponential complexity, and significant improvements over current state-of-the-art subtractive interference cancellation in all setups tested, in one case doubling the number of users supported at a given bit error rate. To obtain such good performance for finite-size systems, where performance is normally degraded by the presence of suboptimal fixed-point solutions, it is crucial to use the method in conjunction with mean field annealing, i.e., solving the fixed-point equations at decreasing temperatures (noise levels). In the limit of infinitely large system size, the new subtractive interference cancellation scheme is expected to be identical to the individually optimal detector. The computational complexity is cubic in the number of users, whereas conventional (naive mean field) subtractive interference cancellation is quadratic. We also present a quadratic-complexity approximation to our new method that also gives performance improvements but additionally requires knowledge of the spreading code statistics. The proposed methodology is quite general and is expected to be applicable to other digital communication problems.
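
    A minimal sketch of the baseline this paper improves on, combined with the annealing schedule it advocates: naive mean field (soft) subtractive interference cancellation, with the fixed-point equations re-solved while the effective noise level is lowered toward its true value. The paper's advanced bias-correction terms are not reproduced here, and the spreading matrix S, noise variance sigma2, and temperature schedule are illustrative.

        import numpy as np

        def mf_annealing_detect(S, r, sigma2, temps=(4.0, 2.0, 1.0), n_iter=50):
            """S: (chips x users) spreading matrix, r: received chip vector.
            Returns hard bit decisions for all users."""
            y = S.T @ r               # matched-filter outputs
            R = S.T @ S               # user-user correlations
            np.fill_diagonal(R, 0.0)  # a user does not cancel itself
            m = np.zeros(S.shape[1])  # soft bit estimates in (-1, 1)
            for T in temps:           # anneal: high noise level -> true level
                for _ in range(n_iter):
                    m = np.tanh((y - R @ m) / (T * sigma2))
            return np.sign(m)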

    Typical behavior of relays in communication channels

    The typical behavior of the relay-without-delay channel under low-density parity-check coding, and of its multiple-unit generalization, termed the relay array, is studied using methods of statistical mechanics. A demodulate-and-forward strategy is analytically solved using the replica symmetric ansatz, which is exact in the system studied at Nishimori's temperature. In particular, the typical level of improvement in communication performance obtained by relaying messages is shown for the cases of a small and a large number of relay units. © 2007 The American Physical Society

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data,” run in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I, which consists of a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; and Part V presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
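
    To make the paradigm concrete, a toy example of a sublinear algorithm: estimating the fraction of 1s in a huge 0/1 sequence by random sampling, where the number of probes depends only on the desired accuracy (via the Hoeffding bound), not on the data size. This illustration is ours, not an algorithm taken from the book.

        import math
        import random

        def estimate_fraction(data, eps=0.05, delta=0.01):
            """Estimate the fraction of 1s in data to within +/- eps,
            with failure probability at most delta, using
            s = ceil(ln(2/delta) / (2 * eps^2)) samples -- a constant
            independent of len(data)."""
            s = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
            hits = sum(data[random.randrange(len(data))] for _ in range(s))
            return hits / s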