    Fast randomized iteration: diffusion Monte Carlo through the lens of numerical linear algebra

    We review the basic outline of the highly successful diffusion Monte Carlo (DMC) technique, commonly used in contexts ranging from electronic structure calculations to rare event simulation and data assimilation, and propose a new class of randomized iterative algorithms based on similar principles to address a variety of common tasks in numerical linear algebra. From the point of view of numerical linear algebra, the main novelty of the Fast Randomized Iteration schemes described in this article is that they work at either linear or constant cost per iteration (and in total, under appropriate conditions) and are rather versatile: we will show how they apply to the solution of linear systems, eigenvalue problems, and matrix exponentiation, in dimensions far beyond the present limits of numerical linear algebra. While traditional iterative methods in numerical linear algebra were created in part to deal with instances where a matrix (of size $\mathcal{O}(n^2)$) is too big to store, the algorithms that we propose are effective even in instances where the solution vector itself (of size $\mathcal{O}(n)$) may be too big to store or manipulate. In fact, our work is motivated by recent DMC-based quantum Monte Carlo schemes that have been applied to matrices as large as $10^{108} \times 10^{108}$. We provide basic convergence results, discuss the dependence of these results on the dimension of the system, and demonstrate dramatic cost savings on a range of test problems. Comment: 44 pages, 7 figures.
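
    As a concrete illustration of the core idea (a sketch under stated assumptions, not the authors' algorithm), the following Python fragment runs power iteration while stochastically compressing the iterate to roughly m surviving entries per step. The compression is unbiased, so the iteration still tracks the dominant eigenpair in expectation; the function names, the l1 normalization, and the Rayleigh-quotient estimate are illustrative choices.

        import numpy as np

        def sparsify(x, m, rng):
            """Unbiased stochastic compression: keep roughly m entries of x.

            Entry i survives with probability p_i proportional to |x_i|
            (capped at 1) and is reweighted to x_i / p_i, so the output
            equals x in expectation while the next matrix-vector product
            only touches the surviving entries.
            """
            a = np.abs(x)
            total = a.sum()
            if total == 0:
                return x
            p = np.minimum(1.0, m * a / total)
            keep = rng.random(x.shape) < p
            out = np.zeros_like(x)
            out[keep] = x[keep] / p[keep]
            return out

        def fri_power_iteration(A, m, iters=200, seed=0):
            """Estimate the dominant eigenvalue of A using sparsified iterates."""
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            x = np.ones(n) / n
            for _ in range(iters):
                # with a sparse A, this product touches O(m) columns, not O(n)
                x = A @ sparsify(x, m, rng)
                x /= np.abs(x).sum()          # l1 normalization controls variance
            y = A @ x
            return x @ y / (x @ x)            # Rayleigh-quotient estimate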

    A Universal Scheme for Wyner–Ziv Coding of Discrete Sources

    We consider the Wyner–Ziv (WZ) problem of lossy compression where the decompressor observes a noisy version of the source, whose statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm that takes advantage of side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images and English text, using a low-complexity algorithm motivated by our class of universally optimal WZ codes.
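
    A minimal sketch of the decompressor side, assuming binary data observed through a BSC with known crossover probability delta < 1/2: one simple way to fold side information into a DUDE-style denoiser is to include it in the counting context. The sliding-window LZ compression stage and the paper's actual modification are omitted; the function name and context shape below are illustrative.

        import numpy as np
        from collections import defaultdict

        def dude_bsc_side_info(y, z, delta, k=1):
            """Context-counting denoiser for a BSC(delta), delta < 1/2.

            y, z: equal-length sequences of 0/1 ints (noisy data, side info).
            First pass counts center symbols within each (left context,
            right context, side-information symbol) pattern; second pass
            inverts the channel on the counts and picks the symbol that
            minimizes expected Hamming loss, as in the DUDE rule.
            """
            n = len(y)
            ctx = lambda i: (tuple(y[i - k:i]), tuple(y[i + 1:i + 1 + k]), z[i])
            counts = defaultdict(lambda: np.zeros(2))
            for i in range(k, n - k):
                counts[ctx(i)][y[i]] += 1
            pi = np.array([[1 - delta, delta], [delta, 1 - delta]])
            pi_inv_T = np.linalg.inv(pi).T
            x_hat = list(y)
            for i in range(k, n - k):
                q = pi_inv_T @ counts[ctx(i)]   # channel-inverted context statistics
                scores = q * pi[:, y[i]]        # score for clean symbol x = 0, 1
                x_hat[i] = int(np.argmax(scores))
            return x_hat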

    Lossy Compression with Near-uniform Encoder Outputs

    It is well known that lossless compression of a discrete memoryless source with near-uniform encoder output is possible at a rate above its entropy if and only if the encoder is randomized. This work focuses on deriving conditions for near-uniform encoder output(s) in the Wyner-Ziv and the distributed lossy compression problems. We show that in the Wyner-Ziv problem, near-uniform encoder output and operation close to the WZ rate limit are simultaneously possible, whereas in the distributed lossy compression problem, jointly near-uniform outputs are achievable in the interior of the distributed lossy compression rate region if the sources share non-trivial Gács-Körner common information. Comment: Submitted to the 2016 IEEE International Symposium on Information Theory (11 pages, 3 figures).
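
    The Gács-Körner common information in the last condition is computable directly from the joint pmf: it equals the entropy of the connected-component label in the bipartite graph that links x and y whenever p(x, y) > 0, and it is non-trivial exactly when that graph is disconnected. A small sketch (the function name and tolerance are illustrative):

        import numpy as np

        def gacs_korner(p_xy, tol=1e-12):
            """Gacs-Korner common information of a joint pmf matrix p_xy (in bits)."""
            nx, ny = p_xy.shape
            comp_x, comp_y = [-1] * nx, [-1] * ny
            label = 0
            for start in range(nx):
                if comp_x[start] != -1:
                    continue
                stack = [("x", start)]
                while stack:                    # flood-fill one component
                    side, i = stack.pop()
                    if side == "x" and comp_x[i] == -1:
                        comp_x[i] = label
                        stack += [("y", j) for j in range(ny) if p_xy[i, j] > tol]
                    elif side == "y" and comp_y[i] == -1:
                        comp_y[i] = label
                        stack += [("x", j) for j in range(nx) if p_xy[j, i] > tol]
                label += 1
            w = np.zeros(label)
            for i in range(nx):
                w[comp_x[i]] += p_xy[i].sum()   # probability mass of each component
            w = w[w > 0]
            return float(-(w * np.log2(w)).sum())

    For a block-diagonal joint pmf the value is positive; for a joint pmf with full support it is zero.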

    Iterative Slepian-Wolf Decoding and FEC Decoding for Compress-and-Forward Systems

    While many studies have concentrated on providing theoretical analysis for relay-assisted compress-and-forward (CF) systems, little effort has yet been made toward the construction and evaluation of a practical system. In this paper, a practical CF system incorporating an error-resilient multilevel Slepian-Wolf decoder is introduced, and a novel iterative processing structure is proposed that allows information exchange between the Slepian-Wolf decoder and the forward error correction decoder of the main source message. In addition, a new quantization scheme is incorporated to avoid the complexity of reconstructing the relay signal at the final decoder of the destination. The results demonstrate that the iterative structure not only reduces the decoding loss of the Slepian-Wolf decoder, but also improves the decoding performance of the main message from the source.

    Joint source-channel coding with feedback

    This paper quantifies the fundamental limits of variable-length transmission of a general (possibly analog) source over a memoryless channel with noiseless feedback, under a distortion constraint. We consider excess distortion, average distortion, and guaranteed distortion ($d$-semifaithful codes). In contrast to the asymptotic fundamental limit, a general conclusion is that allowing variable-length codes and feedback leads to a sizable improvement in the fundamental delay-distortion tradeoff. In addition, we investigate the minimum energy required to reproduce $k$ source samples with a given fidelity after transmission over a memoryless Gaussian channel, and we show that the required minimum energy is reduced with feedback and an average (rather than maximal) power constraint. Comment: To appear in IEEE Transactions on Information Theory.

    Quantum Monte Carlo with very large multideterminant wavefunctions

    An algorithm to compute efficiently the first two derivatives of (very) large multideterminant wavefunctions for quantum Monte Carlo calculations is presented. The calculation of determinants and their derivatives is performed using the Sherman-Morrison formula for updating the inverse Slater matrix. An improved implementation based on the reduction of the number of column substitutions and on a very efficient implementation of the calculation of the scalar products involved is presented. It is emphasized that multideterminant expansions contain in general a large number of identical spin-specific determinants: for typical configuration interaction-type wavefunctions the number of unique spin-specific determinants $N_{\rm det}^\sigma$ ($\sigma=\uparrow,\downarrow$) with a non-negligible weight in the expansion is of order $\mathcal{O}(\sqrt{N_{\rm det}})$. We show that a careful implementation of the calculation of the $N_{\rm det}$-dependent contributions can make the cost of this step small enough that in practice the algorithm scales as the total number of unique spin-specific determinants, $N_{\rm det}^\uparrow + N_{\rm det}^\downarrow$, over a wide range of total numbers of determinants (here, $N_{\rm det}$ up to about one million), thus greatly reducing the total computational cost. Finally, a new truncation scheme for the multideterminant expansion is proposed so that larger expansions can be considered without increasing the computational time. The algorithm is illustrated with all-electron Fixed-Node Diffusion Monte Carlo calculations of the total energy of the chlorine atom. Calculations using a trial wavefunction including about 750,000 determinants, with a computational increase of $\sim 400$ compared to a single-determinant calculation, are shown to be feasible. Comment: 9 pages, 3 figures.
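
    The determinant-ratio and inverse updates at the heart of such schemes are compact enough to sketch. Assuming B = inv(A) and det(A) are already available, substituting column k of the Slater matrix A by a vector u multiplies the determinant by (B u)_k, and the inverse follows from one Sherman-Morrison rank-one correction. This is the generic textbook update, not the paper's optimized implementation:

        import numpy as np

        def replace_column(B, u, k, det):
            """Update B = inv(A) and det(A) after substituting column k of A by u.

            The determinant is multiplied by (B @ u)[k], and the inverse is
            corrected with one Sherman-Morrison rank-one update, for an
            O(N^2) cost instead of the O(N^3) cost of refactorizing A.
            """
            Bu = B @ u
            R = Bu[k]                  # ratio det(A_new) / det(A)
            w = Bu.copy()
            w[k] -= 1.0                # w = B @ (u - A e_k)
            return B - np.outer(w, B[k]) / R, det * R

        # self-check against direct refactorization
        rng = np.random.default_rng(1)
        A = rng.standard_normal((6, 6))
        u, k = rng.standard_normal(6), 2
        B_new, det_new = replace_column(np.linalg.inv(A), u, k, np.linalg.det(A))
        A[:, k] = u
        assert np.allclose(B_new, np.linalg.inv(A))
        assert np.isclose(det_new, np.linalg.det(A))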

    First-principles modeling of quantum nuclear effects and atomic interactions in solid He-4 at high pressure

    We present a first-principles computational study of solid He-4 at T = 0 K and pressures up to $\sim 160$ GPa. Our computational strategy consists of using van der Waals density functional theory (DFT-vdW) to describe the electronic degrees of freedom in this material, and the diffusion Monte Carlo (DMC) method to solve the Schrödinger equation describing the behavior of the quantum nuclei. For this, we construct an analytical interaction function based on the pairwise Aziz potential that closely matches the volume variation of the cohesive energy calculated with DFT-vdW in dense helium. Interestingly, we find that the kinetic energy of solid He-4 does not increase appreciably with compression for $P \geq 85$ GPa. Also, we show that the Lindemann ratio in dense solid He-4 amounts to 0.10 almost independently of pressure. The reliability of customary quasiharmonic DFT (QH DFT) approaches in describing quantum nuclear effects in solids is also studied. We find that QH DFT simulations, although they provide a reasonable equation of state in agreement with experiments, are not able to reproduce correctly these critical effects in compressed He-4. In particular, we find large discrepancies, of at least $\sim 50$%, between the He-4 kinetic energies calculated with the QH DFT and with the present DFT-DMC methods.
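
    For reference, the Lindemann ratio quoted above is the root-mean-square displacement of the nuclei about their lattice sites divided by the nearest-neighbor distance. A minimal sketch, with assumed array shapes, of how it would be estimated from sampled nuclear configurations:

        import numpy as np

        def lindemann_ratio(positions, lattice_sites, nn_distance):
            """Root-mean-square nuclear displacement over the nearest-neighbor distance.

            positions: array (n_samples, n_atoms, 3) of sampled coordinates
            lattice_sites: array (n_atoms, 3) of perfect-crystal sites
            nn_distance: nearest-neighbor distance of the lattice
            """
            u2 = np.mean(np.sum((positions - lattice_sites) ** 2, axis=-1))
            return np.sqrt(u2) / nn_distance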

    Discrete denoising of heterogeneous two-dimensional data

    We consider discrete denoising of two-dimensional data whose characteristics may vary abruptly between regions. Using a quadtree decomposition technique and space-filling curves, we extend the recently developed S-DUDE (Shifting Discrete Universal DEnoiser), which was tailored to one-dimensional data, to the two-dimensional case. Our scheme competes with a genie that has access not only to the noisy data but also to the underlying noiseless data, and that can employ $m$ different two-dimensional sliding-window denoisers along $m$ distinct regions obtained by a quadtree decomposition with $m$ leaves, in a way that minimizes the overall loss. We show that, regardless of what the underlying noiseless data may be, the two-dimensional S-DUDE performs essentially as well as this genie, provided that the number of distinct regions satisfies $m = o(n)$, where $n$ is the total size of the data. The resulting algorithm complexity is still linear in both $n$ and $m$, as in the one-dimensional case. Our experimental results show that the two-dimensional S-DUDE can be effective when the characteristics of the underlying clean image vary across different regions in the data. Comment: 16 pages, submitted to IEEE Transactions on Information Theory.
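
    A greedy, variance-driven quadtree split conveys the flavor of the region decomposition; the paper's scheme instead selects the decomposition that minimizes the overall loss, so the splitting criterion below is only an illustrative stand-in.

        import numpy as np

        def quadtree_leaves(img, max_leaves, var_thresh):
            """Greedy quadtree decomposition of img into at most max_leaves regions.

            Repeatedly splits the region with the highest empirical variance
            into four quadrants; each leaf region would then be handled by
            its own sliding-window denoiser.
            """
            leaves = [(0, 0, img.shape[0], img.shape[1])]
            while len(leaves) + 3 <= max_leaves:
                splittable = [(img[r:r + h, c:c + w].var(), (r, c, h, w))
                              for r, c, h, w in leaves if h >= 2 and w >= 2]
                if not splittable:
                    break
                var, (r, c, h, w) = max(splittable)
                if var < var_thresh:
                    break               # remaining regions look homogeneous
                leaves.remove((r, c, h, w))
                h2, w2 = h // 2, w // 2
                leaves += [(r, c, h2, w2), (r, c + w2, h2, w - w2),
                           (r + h2, c, h - h2, w2), (r + h2, c + w2, h - h2, w - w2)]
            return leaves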

    How to Achieve the Capacity of Asymmetric Channels

    We survey coding techniques that enable reliable transmission at rates approaching the capacity of an arbitrary discrete memoryless channel. In particular, we take the point of view of modern coding theory and discuss how recent advances in coding for symmetric channels help provide more efficient solutions for the asymmetric case. We consider, in more detail, three basic coding paradigms. The first is Gallager's scheme, which concatenates a linear code with a non-linear mapping so that the input distribution can be appropriately shaped. We explicitly show that both polar codes and spatially coupled codes can be employed in this scenario. Furthermore, we derive a scaling law between the gap to capacity, the cardinality of the input and output alphabets, and the required size of the mapper. The second is an integrated scheme in which the code is used both for source coding, in order to create codewords distributed according to the capacity-achieving input distribution, and for channel coding, in order to provide error protection. Such a technique has been recently introduced by Honda and Yamamoto in the context of polar codes, and we show how to apply it also to the design of sparse graph codes. The third paradigm is based on an idea of Böcherer and Mathar, and separates the two tasks of source coding and channel coding by a chaining construction that binds together several codewords. We present conditions for the source code and the channel code, and we describe how to combine any source code with any channel code that fulfills those conditions, in order to provide capacity-achieving schemes for asymmetric channels. In particular, we show that polar codes, spatially coupled codes, and homophonic codes are suitable as basic building blocks of the proposed coding strategy. Comment: 32 pages, 4 figures, presented in part at Allerton'14 and published in IEEE Transactions on Information Theory.
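
    The non-linear mapper in the first paradigm can be sketched in a few lines: approximate the capacity-achieving input distribution by multiples of 2^-q and build a lookup table so that q uniform bits induce roughly the right symbol frequencies. The sketch below is a hedged illustration of this dyadic-quantization idea; the function name and rounding rule are assumptions, not the paper's construction.

        import numpy as np

        def gallager_mapper(target_pmf, q):
            """Lookup table sending q uniform bits to channel-input symbols.

            Each target probability is approximated by a multiple of 2**-q,
            so uniformly distributed q-bit indices induce approximately the
            desired (possibly non-uniform) input distribution.
            """
            n_cells = 2 ** q
            exact = np.asarray(target_pmf) * n_cells
            counts = np.floor(exact).astype(int)
            # hand leftover cells to the symbols with the largest rounding errors
            for i in np.argsort(exact - counts)[::-1][:n_cells - counts.sum()]:
                counts[i] += 1
            return np.repeat(np.arange(len(target_pmf)), counts)

        # example: shape a binary input with P(X = 1) near 0.11 from 4 uniform bits
        table = gallager_mapper([0.89, 0.11], q=4)   # 16-entry table with two 1s
        symbol = table[13]                           # 13 = any uniform 4-bit index

    The scaling law mentioned above relates how fast q must grow as the gap between the induced distribution and the capacity-achieving one is driven to zero.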