
    Chromatic roots are dense in the whole complex plane

    I show that the zeros of the chromatic polynomials P_G(q) for the generalized theta graphs \Theta^{(s,p)} are, taken together, dense in the whole complex plane with the possible exception of the disc |q-1| < 1. The same holds for their dichromatic polynomials (alias Tutte polynomials, alias Potts-model partition functions) Z_G(q,v) outside the disc |q+v| < |v|. An immediate corollary is that the chromatic zeros of not-necessarily-planar graphs are dense in the whole complex plane. The main technical tool in the proof of these results is the Beraha-Kahane-Weiss theorem on the limit sets of zeros for certain sequences of analytic functions, for which I give a new and simpler proof. (Comment: LaTeX2e, 53 pages. Version 2 includes a new Appendix B. Version 3 adds a new Theorem 1.4 and a new Section 5, and makes several small improvements. To appear in Combinatorics, Probability & Computing.)
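
    As a concrete illustration (my own sketch, not code from the paper): for \Theta^{(s,p)}, two terminals joined by p internally disjoint paths of s edges each, the chromatic polynomial follows from counting the colorings of a single path conditioned on whether its endpoint colors agree. The counts N_eq and N_neq below are standard series-parallel bookkeeping, and the function name is mine.

    ```python
    import numpy as np
    import sympy as sp

    q = sp.symbols('q')

    def theta_chromatic(s, p):
        """Chromatic polynomial of Theta^{(s,p)}: two terminals joined by
        p internally disjoint paths of s edges each. n_eq / n_neq count
        proper colorings of one s-edge path whose fixed endpoint colors
        are equal / different (standard transfer-matrix counts)."""
        n_eq = ((q - 1)**s + (q - 1) * (-1)**s) / q
        n_neq = ((q - 1)**s - (-1)**s) / q
        return sp.cancel(q * (n_eq**p + (q - 1) * n_neq**p))

    P = theta_chromatic(s=4, p=5)
    zeros = np.roots([complex(c) for c in sp.Poly(P, q).all_coeffs()])
    # per the abstract, density of zeros may fail inside |q - 1| < 1
    print(min(abs(z - 1) for z in zeros))
    ```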

    Exponential families on resource-constrained systems

    This work is about the estimation of exponential family models on resource-constrained systems. Our main goal is learning probabilistic models on devices with highly restricted storage, arithmetic, and computational capabilities, so-called ultra-low-power devices. Enhancing the learning capabilities of such devices opens up opportunities for intelligent ubiquitous systems in all areas of life, from medicine and robotics to home automation, to mention just a few. We investigate the inherent resource consumption of exponential families, review existing techniques, and devise new methods to reduce the resource consumption. The resource consumption, however, must not be reduced at all cost. Exponential families possess several desirable properties that must be preserved: any probabilistic model encodes a conditional independence structure, and our methods keep this structure intact. Exponential family models are theoretically well founded. Instead of merely finding new algorithms based on intuition, our models are formalized within the framework of exponential families and derived from first principles. We introduce no assumptions that are incompatible with the formal derivation of the base model, and our methods do not rely on properties of particular high-level applications.

    To reduce memory consumption, we combine and adapt reparametrization and regularization in an innovative way that facilitates a sparse parametrization of high-dimensional non-stationary time series. The procedure allows us to load models on memory-constrained systems that they would otherwise not fit. We provide new theoretical insights and prove that the uniform distance between the data-generating process and our reparametrized solution is bounded.

    To reduce the arithmetic complexity of the learning problem, we derive the integer exponential family, based on the very definition of sufficient statistics and maximum entropy estimation. New integer-valued inference and learning algorithms are proposed, based on variational inference, proximal optimization, and regularization. The weaker the underlying system, the larger the benefit of this technique: probabilistic inference on a state-of-the-art ultra-low-power microcontroller, for example, can be accelerated by a factor of 250. While our integer inference is fast, the underlying message passing relies on the variational principle, which is inexact and has unbounded error on general graphs. Since exact inference and other existing methods with bounded error exhibit exponential computational complexity, we employ near-minimax-optimal polynomial approximations to obtain new stochastic algorithms for approximating the partition function and the marginal probabilities. Changing the polynomial degree allows us to control the complexity and the error of our new stochastic method. We provide an error bound that is parametrized by the number of samples, the polynomial degree, and the norm of the model’s parameter vector. Moreover, important intermediate quantities can be precomputed and shared with the weak computational device to reduce the resource requirements of our method even further.

    All new techniques are empirically evaluated on synthetic and real-world data, and the results confirm the properties predicted by our theoretical derivation. Our novel techniques allow a broader range of models to be learned on resource-constrained systems and open up several new research possibilities.
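
    To give a flavor of integer-arithmetic inference (a minimal sketch under my own assumptions, not the thesis's exact construction): if the natural parameters are nonnegative integers and the exponential is taken base 2, every unnormalized probability becomes a bit shift, so no floating-point unit is needed until the final normalization.

    ```python
    import numpy as np

    def suff_stats(x):
        """Sufficient statistics phi(x) for two binary variables:
        two singleton indicators plus one pairwise interaction."""
        return np.array([x[0], x[1], x[0] * x[1]], dtype=np.int64)

    def unnormalized(theta_int, x):
        # 2**(theta . phi(x)) as an integer left shift; assumes the
        # integer parameters are nonnegative so the exponent is >= 0
        return 1 << int(theta_int @ suff_stats(x))

    theta = np.array([1, 2, 1], dtype=np.int64)  # integer natural parameters
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    Z = sum(unnormalized(theta, x) for x in states)   # partition function
    print({x: unnormalized(theta, x) / Z for x in states})
    ```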

    Feynman integrals and hyperlogarithms

    We study Feynman integrals in the representation with Schwinger parameters and derive recursive integral formulas for massless 3- and 4-point functions. Properties of analytic (including dimensional) regularization are summarized, and we prove that in the Euclidean region, each Feynman integral can be written as a linear combination of convergent Feynman integrals. This means that one can choose a basis of convergent master integrals and need not evaluate any divergent Feynman graph directly. Secondly, we give a self-contained account of hyperlogarithms and explain in detail the algorithms needed for their application to the evaluation of multivariate integrals. We define a new method to track singularities of such integrals and present a computer program that implements the integration method. As our main result, we prove the existence of infinite families of massless 3- and 4-point graphs (including the ladder box graphs with arbitrary loop number and their minors) whose Feynman integrals can be expressed in terms of multiple polylogarithms, to all orders in the epsilon-expansion. These integrals can be computed effectively with the presented program. We include interesting examples of explicit results for Feynman integrals with up to 6 loops. In particular, we present the first exactly computed counterterm in massless phi^4 theory which is not a multiple zeta value, but a linear combination of multiple polylogarithms at primitive sixth roots of unity (and divided by 3\sqrt{3}). To this end we derive a parity result on the reducibility of the real and imaginary parts of such numbers into products and terms of lower depth. (Comment: PhD thesis, 220 pages, many figures.)
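
    For orientation, the Schwinger-parametric representation referred to here is the standard one, stated for a graph \Gamma with L loops, edge weights a_e, in D dimensions, up to a convention-dependent power of \pi:

    ```latex
    % psi, varphi: first and second Symanzik polynomials of the graph
    \[
      I_\Gamma
        = \frac{\Gamma(\omega)}{\prod_e \Gamma(a_e)}
          \int_{\alpha_e \ge 0}
          \delta\Bigl(1 - \sum_e \alpha_e\Bigr)
          \prod_e \alpha_e^{a_e - 1}\,\mathrm{d}\alpha_e\,
          \frac{\psi^{\,\omega - D/2}}{\varphi^{\,\omega}},
      \qquad
      \omega = \sum_e a_e - \frac{L D}{2}.
    \]
    ```

    Analytic and dimensional regularization act by shifting the a_e and D away from integer values, which is the setting in which the convergent-basis statement above is phrased.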

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data,” run in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need for novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems arise in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I, a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV, which review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; and Part V, which presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
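
    As a toy illustration of the paradigm (mine, not an example from the book): a random sample whose size depends only on the desired accuracy, not on the data size, suffices to estimate a global statistic, so the running time is constant in the input length.

    ```python
    import math
    import random

    def approx_mean(data, eps=0.05, delta=0.01):
        """Estimate the mean of values in [0, 1] to within +/- eps with
        probability >= 1 - delta. By Hoeffding's inequality,
        n = log(2/delta) / (2 eps^2) samples suffice, independent of
        len(data)."""
        n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
        return sum(data[random.randrange(len(data))] for _ in range(n)) / n

    big = [i % 2 for i in range(10 ** 7)]  # true mean 0.5; never fully scanned
    print(approx_mean(big))
    ```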

    Approximate sampling and counting for spin models in graphs

    We approach the problems of approximate sampling and counting in spin models on graphs, surveying the most significant results in the area and introducing the necessary background from statistical physics. We pay particular attention to the general algorithm-design frameworks developed by Weitz and Barvinok, as well as to the newer results on counting and sampling independent sets of a given size. In addition, we discuss how the arguments behind these results could be adapted to count and sample colorings with fixed color-class sizes, explaining in detail the current line of research we are undertaking.
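
    For a concrete baseline (a minimal sketch of my own, not Weitz's correlation-decay or Barvinok's interpolation method): Glauber dynamics for the hard-core model resamples one vertex at a time and converges to the distribution that weights each independent set S by lambda^|S|.

    ```python
    import random

    def glauber_hardcore(adj, lam=1.0, steps=100_000):
        """adj: dict vertex -> set of neighbours. Runs single-site Glauber
        dynamics for the hard-core model and returns one (approximate)
        sample, an independent set of the graph."""
        occupied = set()
        vertices = list(adj)
        for _ in range(steps):
            v = random.choice(vertices)
            occupied.discard(v)  # resample v given the rest of the state
            # occupy v with probability lam/(1+lam) if no neighbour blocks it
            if not (adj[v] & occupied) and random.random() < lam / (1 + lam):
                occupied.add(v)
        return occupied

    cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    print(glauber_hardcore(cycle, lam=2.0))
    ```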

    Quantum Approaches to Data Science and Data Analytics

    This thesis explores several research directions related both to the use of classical data analysis techniques for the study of quantum systems and to the employment of quantum computing to speed up hard machine learning tasks.

    Quantum Apices: Identifying Limits of Entanglement, Nonlocality, & Contextuality

    This work develops analytic methods to quantitatively demarcate quantum reality from its subset of classical phenomena, as well as from the superset of general probabilistic theories. Regarding quantum nonlocality, we discuss how to determine the quantum limit of Bell-type linear inequalities. In contrast to semidefinite programming approaches, our method allows for the consideration of inequalities with abstract weights, by means of leveraging the Hermiticity of quantum states. Recognizing that classical correlations correspond to measurements made on separable states, we also introduce a practical method for obtaining sufficient separability criteria. We specifically vet the candidacy of driven and undriven superradiance as schemes for entanglement generation. We conclude by reviewing current approaches to quantum contextuality, emphasizing the operational distinction between nonlocal and contextual quantum statistics. We utilize our abstractly weighted linear quantum bounds to explicitly demonstrate a set of conditional probability distributions which are simultaneously compatible with quantum contextuality while being incompatible with quantum nonlocality. It is noted that this novel statistical regime implies an experimentally testable target for the Consistent Histories theory of quantum gravity. (Comment: Doctoral thesis for the University of Connecticut.)
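
    As a small self-contained illustration of a quantum limit (the standard textbook computation, not the thesis's abstract-weight method): for a fixed choice of qubit observables, the quantum value of the CHSH expression is the largest eigenvalue of the associated Bell operator, recovering Tsirelson's bound 2*sqrt(2).

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
    Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

    A0, A1 = Z, X                  # Alice's observables
    B0 = (Z + X) / np.sqrt(2)      # Bob's observables, rotated by 45 degrees
    B1 = (Z - X) / np.sqrt(2)

    # CHSH Bell operator: <A0 B0> + <A0 B1> + <A1 B0> - <A1 B1>
    bell = (np.kron(A0, B0) + np.kron(A0, B1)
            + np.kron(A1, B0) - np.kron(A1, B1))
    print(max(np.linalg.eigvalsh(bell)))   # ~2.828, i.e. 2*sqrt(2)
    ```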