
    A statistical approach to topological entanglement: Boltzmann machine representation of higher-order irreducible correlation

    Higher-order correlation is an interesting phenomenon in many fields of physics and statistics. A quantum analogue of higher-order correlation is the topological entanglement in topologically ordered states of matter at zero temperature, usually quantified by the topological entanglement entropy (TEE). In this work we propose a statistical interpretation which unifies the two under the same information-theoretic framework. We demonstrate that the existence of a non-zero TEE can be understood in the statistical view as the emergent n-th order mutual information I_n (for arbitrary integer n >= 3) reflected in projectively measured samples, which also makes explicit the equivalence between the two existing methods for its extraction -- the Kitaev-Preskill and the Levin-Wen constructions. To exploit the statistical nature of I_n, we construct a restricted Boltzmann machine (RBM) which captures the higher-order correlation and/or topological entanglement encoded in the distribution of projected samples by representing the entanglement Hamiltonian of a local region in the proper basis. Furthermore, we derive a closed form which presents a method to interrogate the trained RBM, making explicit the analytical form of correlations of arbitrary order relevant for I_n in terms of the entanglement Hamiltonian. We remark that the interrogation method for extracting higher-order correlation can also be applied in the construction of auxiliary fields which disentangle many-body interactions relevant for diverse interacting models.
    Comment: 16 pages, 4 figures
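    The "interrogation" idea -- marginalizing out the hidden units of an RBM to read off effective multi-body couplings among the visible units -- can be illustrated with a toy sketch. The code below is an illustration under simplifying assumptions, not the paper's construction: it computes the RBM free energy F(v) for a 3-site binary visible layer with random weights, then extracts the emergent 3-body coupling by Mobius inversion over subsets of occupied sites.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n_vis, n_hid = 3, 2
    W = rng.normal(size=(n_vis, n_hid))   # visible-hidden couplings
    a = rng.normal(size=n_vis)            # visible biases
    b = rng.normal(size=n_hid)            # hidden biases

    def free_energy(v):
        """F(v) = -log sum_h exp(-E(v, h)) for binary hidden units h."""
        v = np.asarray(v, dtype=float)
        return -a @ v - np.sum(np.logaddexp(0.0, b + v @ W))

    def indicator(sites):
        v = np.zeros(n_vis)
        v[list(sites)] = 1.0
        return v

    def effective_coupling(S):
        """Mobius inversion: J_S = sum over T subset of S of
        (-1)^(|S|-|T|) * (-F(v_T))."""
        J = 0.0
        for k in range(len(S) + 1):
            for T in combinations(S, k):
                J += (-1) ** (len(S) - len(T)) * (-free_energy(indicator(T)))
        return J

    # The softplus in F(v) is nonlinear, so marginalizing the hidden units
    # generically induces a nonzero 3-body term among the visible units.
    J3 = effective_coupling((0, 1, 2))
    ```

    Although the RBM energy itself contains only pairwise visible-hidden terms, J3 is generically nonzero -- the mechanism by which an RBM can represent higher-order correlation.
    
    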

    Molecule Design by Latent Space Energy-Based Modeling and Gradual Distribution Shifting

    Generation of molecules with desired chemical and biological properties, such as high drug-likeness or high binding affinity to target proteins, is critical for drug discovery. In this paper, we propose a probabilistic generative model to capture the joint distribution of molecules and their properties. Our model assumes an energy-based model (EBM) in the latent space. Conditional on the latent vector, the molecule and its properties are modeled by a molecule generation model and a property regression model, respectively. To search for molecules with desired properties, we propose a sampling with gradual distribution shifting (SGDS) algorithm: after learning the model initially on the training data of existing molecules and their properties, the algorithm gradually shifts the model distribution towards the region supported by molecules with the desired property values. Our experiments show that our method achieves very strong performance on various molecule design tasks.
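    The gradual-shifting idea can be sketched with a one-dimensional toy (a schematic, not the paper's model: the Gaussian latent prior, linear "property regressor" `prop`, and Langevin sampler below are all hypothetical stand-ins). Each round raises the property target slightly and resamples the latent vector from the correspondingly shifted posterior, rather than jumping straight to an out-of-distribution target.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def prop(z):
        """Hypothetical property regressor y(z); a real model would be learned."""
        return float(np.sum(z))

    def grad_log_post(z, y_target, sigma=0.5):
        # gradient of log[ N(z; 0, I) * N(y_target; prop(z), sigma^2) ]
        return -z + (y_target - prop(z)) / sigma**2 * np.ones_like(z)

    def langevin(z, y_target, steps=200, eps=0.01):
        """Unadjusted Langevin sampling of the property-conditioned posterior."""
        for _ in range(steps):
            z = (z + 0.5 * eps * grad_log_post(z, y_target)
                   + np.sqrt(eps) * rng.normal(size=z.shape))
        return z

    z = rng.normal(size=2)
    y0 = prop(z)
    target = y0
    for _ in range(5):    # gradual distribution shifting: small steps in the target
        target += 1.0
        z = langevin(z, target)
    # prop(z) has drifted towards the shifted target in small increments
    ```

    In the actual method the shifted samples would be used to re-estimate the model between rounds; the toy keeps the model fixed to isolate the shifting loop.
    
    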

    Diverse and Faithful Knowledge-Grounded Dialogue Generation via Sequential Posterior Inference

    The capability to generate responses with diversity and faithfulness using factual knowledge is paramount for creating a human-like, trustworthy dialogue system. Common strategies either adopt a two-step paradigm, which optimizes knowledge selection and response generation separately and may overlook the inherent correlation between these two tasks, or leverage a conditional variational method to jointly optimize knowledge selection and response generation by employing an inference network. In this paper, we present an end-to-end learning framework, termed Sequential Posterior Inference (SPI), capable of selecting knowledge and generating dialogues by approximately sampling from the posterior distribution. Unlike other methods, SPI requires neither an inference network nor an assumption of simple geometry for the posterior distribution. This straightforward and intuitive inference procedure directly queries the response generation model, allowing for accurate knowledge selection and generation of faithful responses. Beyond the modeling contributions, our experimental results on two common dialogue datasets (Wizard of Wikipedia and Holl-E) demonstrate that SPI outperforms previous strong baselines according to both automatic and human evaluation metrics.
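    The core trick -- selecting knowledge by querying the generation model itself rather than a separate inference network -- amounts to scoring each candidate by its prior weight times the generator's likelihood. A minimal schematic (the numbers below are made up for illustration; a real system would use the dialogue model's actual log-likelihoods over retrieved knowledge snippets):

    ```python
    import numpy as np

    def posterior_select(log_prior, log_lik):
        """p(k | context, response) proportional to
        p(k | context) * p(response | context, k)."""
        logits = log_prior + log_lik
        p = np.exp(logits - logits.max())   # numerically stable softmax
        return p / p.sum()

    # hypothetical scores for three candidate knowledge snippets
    log_prior = np.log(np.array([0.5, 0.3, 0.2]))  # p(k | context)
    log_lik = np.array([-5.0, -1.0, -4.0])         # generator log p(response | context, k)

    p = posterior_select(log_prior, log_lik)
    k = int(np.argmax(p))   # the snippet the generator finds most useful
    ```

    Note how the generator's likelihood overrides the prior here: snippet 1 has a lower prior than snippet 0 but wins because the response is far more probable given it.
    
    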