
    An Introduction to Natural Computation

    Coherence Spaces were defined by J. Y. Girard; they form a special subcategory of Scott domains [4] having a strictly finitary structure. The objects are constructed over a set of tokens (basic elements) on which a coherence relation (reflexive and symmetric) is defined; the order of information is the set inclusion relation. In this work, we introduce Probabilistic Coherence Spaces by associating probabilistic values with the objects of coherence spaces. As a result, we obtain a notion of partial probability associated with the partial objects of probabilistic coherence spaces. By adopting a vector notation, we introduce Vector Coherence Spaces, so that Probabilistic Coherence Spaces can be used to represent the state spaces of probabilistic processes. Since such states represent partial probabilities, computation with them produces probabilistic approximation processes whose limits are the conventional probabilistic processes. We also study linear functions on probabilistic coherence spaces as representations of these probabilistic approximation processes and their conventional probabilistic limits. The aim is to recast the fundamental notions of probabilistic and quantum computing in terms of the special structure of Vector Coherence Spaces. One immediate application of the work is the construction of a domain of Markov models [1] with partial probabilities.
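The basic ingredients of the abstract can be made concrete in a few lines: a set of tokens with a reflexive, symmetric coherence relation, whose objects are the pairwise-coherent subsets (cliques) ordered by inclusion, and on top of which a "partial probability" is a weighting of a clique with total mass at most 1. This is an illustrative sketch with made-up tokens and values, not code from the paper:

```python
from itertools import combinations

# Toy web of tokens with a reflexive, symmetric coherence relation
# (token names and weights are illustrative, not from the paper).
tokens = {"a", "b", "c"}
coherent_pairs = {("a", "b")}  # a/c and b/c are incoherent


def coherent(x, y):
    """Reflexive, symmetric coherence relation on tokens."""
    return x == y or (x, y) in coherent_pairs or (y, x) in coherent_pairs


def cliques(tokens):
    """Objects of the coherence space: sets of pairwise-coherent tokens."""
    result = []
    for r in range(len(tokens) + 1):
        for subset in combinations(sorted(tokens), r):
            if all(coherent(x, y) for x, y in combinations(subset, 2)):
                result.append(frozenset(subset))
    return result


objs = cliques(tokens)

# A probabilistic object attaches weights to a clique; the total mass
# may be < 1, giving a *partial* probability in the abstract's sense.
partial = {"a": 0.4, "b": 0.3}            # mass 0.7: a partial state
assert frozenset(partial) in objs          # its support is a clique
assert sum(partial.values()) <= 1.0
# The information order is inclusion: {"a"} is below {"a", "b"}.
assert frozenset({"a"}) <= frozenset(partial)
```

Here the three assertions encode, respectively, that a probabilistic object lives over a clique, that its mass is sub-normalized, and that less-defined objects sit below more-defined ones in the information order.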

    Lotfi A. Zadeh: On the man and his work

    Zadeh is one of the most impressive thinkers of our time, an engineer by formation whose range of scientific interests is very broad. This paper refers only to his work towards computation that mimics ordinary reasoning expressed in natural language, namely the introduction of fuzzy sets, fuzzy logic, and soft computing, as well as, more recently, computing with words and perceptions.

    Optimization of supply diversity for the self-assembly of simple objects in two and three dimensions

    The field of algorithmic self-assembly is concerned with the design and analysis of self-assembly systems from a computational perspective, that is, from the perspective of mathematical problems whose study may give insight into the natural processes through which elementary objects self-assemble into more complex ones. One of the main problems of algorithmic self-assembly is the minimum tile set problem (MTSP), which asks for a collection of types of elementary objects (called tiles) to be found for the self-assembly of an object having a pre-established shape. Such a collection is to be as concise as possible, thus minimizing supply diversity, while satisfying a set of stringent constraints having to do with the termination and other properties of the self-assembly process from its tile types. We present a study of what we believe is the first practical approach to MTSP. Our study starts with the introduction of an evolutionary heuristic to tackle MTSP and includes results from extensive experimentation with the heuristic on the self-assembly of simple objects in two and three dimensions. The heuristic we introduce combines classic elements from the field of evolutionary computation with a problem-specific variant of Pareto dominance into a multi-objective approach to MTSP.
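The multi-objective ingredient of such a heuristic can be sketched generically: a Pareto-dominance test and a non-dominated filter over candidate tile sets, scored here by two hypothetical objectives (set size as a proxy for supply diversity, plus a stand-in constraint penalty). This is a minimal sketch of the general technique, not the paper's specific variant:

```python
def dominates(a, b):
    """a Pareto-dominates b: no worse on every objective and strictly
    better on at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def non_dominated(population, objectives):
    """Keep individuals whose objective vectors no other member dominates."""
    scored = [(ind, objectives(ind)) for ind in population]
    return [ind for ind, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]


def constraint_penalty(tile_set):
    # Hypothetical stand-in penalty: duplicate tile types count as conflicts.
    return len(tile_set) - len(set(tile_set))


def objectives(tile_set):
    # (supply diversity, constraint violations) -- both to be minimized.
    return (len(tile_set), constraint_penalty(tile_set))


population = [["t1", "t2"], ["t1", "t1", "t2"], ["t1", "t2", "t3"]]
front = non_dominated(population, objectives)  # surviving candidates
```

In an evolutionary loop, the non-dominated front would seed the next generation, with mutation and crossover producing new candidate tile sets.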

    Bayesian Learning for Neural Networks: an algorithmic survey

    The last decade witnessed a growing interest in Bayesian learning. Yet the technicality of the topic and the multitude of ingredients involved therein, besides the complexity of turning theory into practical implementations, limit the use of the Bayesian learning paradigm, preventing its widespread adoption across different fields and applications. This self-contained survey engages and introduces readers to the principles and algorithms of Bayesian learning for neural networks. It provides an introduction to the topic from an accessible, practical-algorithmic perspective. Upon providing a general introduction to Bayesian neural networks, we discuss and present both standard and recent approaches for Bayesian inference, with an emphasis on solutions relying on variational inference and the use of natural gradients. We also discuss the use of manifold optimization as a state-of-the-art approach to Bayesian learning. We examine the characteristic properties of all the discussed methods, and provide pseudocode for their implementation, paying attention to practical aspects such as the computation of gradients.
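The flavor of variational inference mentioned above can be illustrated in miniature: a mean-field Gaussian posterior over a single regression weight, optimized with the reparameterization trick and a closed-form KL term against an N(0, 1) prior. All data, hyperparameters, and parameterizations here are our own toy choices, not the survey's:

```python
import math
import random

# Toy 1-D Bayesian linear regression: y ~ N(w*x, 1), prior w ~ N(0, 1).
random.seed(0)
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, true w near 2

mu, rho = 0.0, -2.0   # variational parameters; sigma = softplus(rho) > 0
lr = 0.05


def softplus(r):
    return math.log1p(math.exp(r))


for step in range(3000):
    sigma = softplus(rho)
    eps = random.gauss(0.0, 1.0)
    w = mu + sigma * eps                              # reparameterization trick
    grad_w = sum((w * x - y) * x for x, y in data)    # d(-log likelihood)/dw
    dsig = 1.0 / (1.0 + math.exp(-rho))               # d softplus / d rho
    # Closed-form KL(q || N(0,1)) contributes mu and (sigma - 1/sigma) terms.
    grad_mu = grad_w + mu
    grad_rho = grad_w * eps * dsig + (sigma - 1.0 / sigma) * dsig
    mu -= lr * grad_mu
    rho -= lr * grad_rho
```

After training, `mu` approximates the exact posterior mean (about 1.9 for this data) and `softplus(rho)` its standard deviation; the same recipe scales to full networks by keeping one `(mu, rho)` pair per weight.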

    Towards a formalization of a two traders market with information exchange

    This paper shows that Hamiltonians and operators can be put to good use even in contexts that are not purely physics-based. Consider the world of finance. The work presented here models a two-trader system with information exchange with the help of four fundamental operators: cash and share operators, a portfolio operator, and an operator reflecting the loss of information. An information Hamiltonian is considered, and an additional Hamiltonian is presented which reflects the dynamics of selling/buying shares between traders. An important result of the paper is that when the information Hamiltonian is zero, the portfolio operators commute with the Hamiltonian, which suggests that the dynamics are really due to the information. Under the assumption that the interaction and information terms in the Hamiltonian have similar strength, a perturbation scheme is considered in the interaction parameter. Contrary to intuition, the paper shows that, up to second order in the interaction parameter, a key factor in the computation of the traders' portfolios is the initial value of the loss of information (rather than the initial conditions on cash and shares). Finally, the paper shows that the inequality between the variation of the portfolio of trader one and that of trader two naturally calls for the introduction of `good' and `bad' information. It is shown that `good' information is related to the reservoirs (modelled with an infinite set of bosonic operators) which represent rumors/news and external facts, whilst `bad' information is associated with a set of two-mode bosonic operators. In press in Physica Scripta.
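The commutation argument at the heart of the abstract — an observable is conserved exactly when it commutes with the Hamiltonian — can be checked numerically on a finite truncation of a bosonic mode. The operators below are a generic toy stand-in (a truncated annihilation operator and a number operator playing the role of a share count), not the paper's full model:

```python
import numpy as np

# Finite truncation of one bosonic mode (illustrative, not the paper's model).
n = 4                                          # truncation dimension
a = np.diag(np.sqrt(np.arange(1, n)), k=1)     # truncated annihilation operator
num = a.T @ a                                  # number operator ~ share count


def commutator(A, B):
    return A @ B - B @ A


H_free = 2.0 * num                 # diagonal in the number basis
H_exchange = 0.5 * (a + a.T)       # exchange-like term: moves "shares" around

# A Hamiltonian diagonal in the number basis conserves the number-like
# observable (zero commutator), so no dynamics for it...
assert np.allclose(commutator(H_free, num), 0)
# ...while the exchange term does not commute: this term drives the dynamics.
assert not np.allclose(commutator(H_exchange, num), 0)
```

This mirrors the paper's observation in miniature: with the non-commuting (information/exchange) part switched off, the portfolio-like observable is a constant of motion.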

    On the Quantum Resolution of Cosmological Singularities using AdS/CFT

    The AdS/CFT correspondence allows us to map a dynamical cosmology to a dual quantum field theory living on the boundary of spacetime. Specifically, we study a five-dimensional model cosmology in type IIB supergravity, where the dual theory is an unstable deformation of N=4 supersymmetric SU(N) gauge theory on R x S^3. A one-loop computation shows that the coupling governing the instability is asymptotically free, so quantum corrections cannot turn the potential around. The big crunch singularity in the bulk occurs when a boundary scalar field runs to infinity in finite time. Consistent quantum evolution requires that we impose boundary conditions at infinite scalar field, i.e., a self-adjoint extension of the system. We find that quantum spreading of the homogeneous mode of the boundary scalar leads to a natural UV cutoff in particle production as the wavefunction for the homogeneous mode bounces back from infinity. However, a perturbative calculation indicates that, despite this, the logarithmic running of the boundary coupling governing the instability generally leads to significant particle production across the bounce. This prevents the wave packet of the homogeneous boundary scalar from returning close to its initial form. Translating back to the bulk theory, we conclude that a quantum transition from a big crunch to a big bang is an improbable outcome of cosmological evolution in this class of five-dimensional models.

    Constructive Logics Part I: A Tutorial on Proof Systems and Typed Lambda-Calculi

    The purpose of this paper is to give an exposition of material dealing with constructive logic, typed λ-calculi, and linear logic. The emergence in the past ten years of a coherent field of research often named logic and computation has had two major (and related) effects: firstly, it has vigorously shaken the world of mathematical logic; secondly, it has created a new computer science discipline, which spans from what is traditionally called the theory of computation to programming language design. Remarkably, this new body of work relies heavily on some old concepts found in mathematical logic, like natural deduction, sequent calculus, and λ-calculus (but often viewed in a different light), and also on some newer concepts. Thus, it may be quite a challenge to become initiated into this new body of work (but the situation is improving; there are now some excellent texts on this subject matter). This paper attempts to provide a coherent and hopefully gentle initiation to this new body of work. We have attempted to cover the basic material on natural deduction, sequent calculus, and typed λ-calculus, but also to provide an introduction to Girard's linear logic, one of the most exciting developments in logic these past five years. The first part of these notes gives an exposition of background material (with the exception of the Girard-translation of classical logic into intuitionistic logic, which is new). The second part is devoted to linear logic and proof nets.
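The connection between proof systems and typed λ-calculi that the tutorial covers can be demonstrated with a tiny type checker for the simply typed λ-calculus, where the typing rules for abstraction and application mirror the introduction and elimination rules for implication in natural deduction. The encoding below (tuples for terms and types) is our own illustrative sketch, not the paper's notation:

```python
# Minimal simply-typed lambda-calculus type checker (illustrative sketch).
# Types: base-type strings, or ("->", t1, t2) for function types.
# Terms: ("var", x) | ("lam", x, type_of_x, body) | ("app", fun, arg)


def typecheck(term, env=None):
    env = env or {}
    tag = term[0]
    if tag == "var":
        return env[term[1]]                 # look the variable up in context
    if tag == "lam":                        # introduction rule for ->
        _, x, t_x, body = term
        t_body = typecheck(body, {**env, x: t_x})
        return ("->", t_x, t_body)
    if tag == "app":                        # elimination rule (modus ponens)
        t_fun = typecheck(term[1], env)
        t_arg = typecheck(term[2], env)
        if t_fun[0] == "->" and t_fun[1] == t_arg:
            return t_fun[2]
        raise TypeError("ill-typed application")
    raise ValueError(f"unknown term tag: {tag}")


# Identity at base type A:  (lambda x:A. x) has type A -> A,
# corresponding to the tautology A => A under Curry-Howard.
ident = ("lam", "x", "A", ("var", "x"))
assert typecheck(ident) == ("->", "A", "A")
# Applying it to a variable of type A yields A.
assert typecheck(("app", ident, ("var", "y")), {"y": "A"}) == "A"
```

Under the Curry-Howard reading, `typecheck` is a proof checker: a well-typed term is a natural-deduction proof of its type, which is the bridge between the logical and computational halves of the tutorial.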