7,092 research outputs found

    Physics as Information Processing

    Full text link
    I review some recent advances in foundational research at the Pavia QUIT group. The general idea is that there is only Quantum Theory without quantization rules, and the whole of Physics---including space-time and relativity---is emergent from quantum-information processing. And since Quantum Theory itself is axiomatized solely on informational principles, the whole of Physics must be reformulated in information-theoretical terms: this is the "It from Bit" of J. A. Wheeler. The review is divided into four parts: a) the informational axiomatization of Quantum Theory; b) how space-time and relativistic covariance emerge from quantum computation; c) what is the information-theoretical meaning of inertial mass and of \hbar, and how the quantum field emerges; d) an observational consequence of the new quantum field theory: a mass-dependent refraction index of vacuum. I will conclude with the research lines that will follow in the immediate future. Comment: Work presented at the conference "Advances in Quantum Theory" held on 14-17 June 2010 at the Linnaeus University, Vaxjo, Sweden.

    Approximate Inference in Probabilistic Answer Set Programming for Statistical Probabilities

    Get PDF
    Type 1 statements were introduced by Halpern in 1990 with the goal of representing statistical information about a domain of interest. These are of the form "x% of the elements share the same property". The recently proposed language PASTA (Probabilistic Answer set programming for STAtistical probabilities) extends Probabilistic Logic Programs under the Distribution Semantics and allows the definition of this type of statement. To perform exact inference, PASTA programs are converted into probabilistic answer set programs under the Credal Semantics. However, this algorithm is infeasible for scenarios in which more than a few random variables are involved. Here, we propose several algorithms to perform both conditional and unconditional approximate inference in PASTA programs and test them on different benchmarks. The results show that the approximate algorithms scale to hundreds of variables and can thus manage real-world domains.
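
    As a purely illustrative aside (not the PASTA algorithm, which works under the Credal Semantics and returns lower and upper probability bounds), the sketch below shows the basic idea behind sampling-based approximate conditional inference over independent probabilistic facts: sample worlds, keep those satisfying the evidence, and estimate the query probability from the surviving samples. All fact names, the query, and the evidence are hypothetical.

```python
# Illustrative sketch of sampling-based approximate conditional inference
# over independent probabilistic facts (not the actual PASTA algorithm).
import random

# Hypothetical probabilistic facts: name -> probability of being true.
prob_facts = {"bird(1)": 0.9, "bird(2)": 0.9, "fly(1)": 0.6, "fly(2)": 0.6}

def query_holds(world):
    # Hypothetical query: at least one individual is a bird that flies.
    return any(world.get(f"bird({i})") and world.get(f"fly({i})") for i in (1, 2))

def evidence_holds(world):
    # Hypothetical evidence: individual 1 is a bird.
    return world.get("bird(1)", False)

def approximate_conditional(n_samples=10_000, seed=42):
    rng = random.Random(seed)
    num, den = 0, 0
    for _ in range(n_samples):
        # Sample a world by flipping each probabilistic fact independently.
        world = {f: rng.random() < p for f, p in prob_facts.items()}
        if evidence_holds(world):
            den += 1
            if query_holds(world):
                num += 1
    return num / den if den else None  # estimate of P(query | evidence)

print(approximate_conditional())
```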

    MAP Inference in Probabilistic Answer Set Programs

    Get PDF
    Reasoning with uncertain data is a central task in artificial intelligence. In some cases, the goal is to find the most likely assignment to a subset of random variables, named query variables, while some other variables are observed. This task is called Maximum a Posteriori (MAP). When the set of query variables is the complement of the observed variables, the task goes under the name of Most Probable Explanation (MPE). In this paper, we introduce the definitions of cautious and brave MAP and MPE tasks in the context of Probabilistic Answer Set Programming under the credal semantics and provide an algorithm to solve them. Empirical results show that the brave version of both tasks is usually faster to compute. On the brave MPE task, the adoption of a state-of-the-art ASP solver makes the computation much faster than a naive approach based on the enumeration of all the worlds.
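
    For intuition, here is a minimal sketch of the naive enumeration baseline the abstract contrasts against: enumerate every truth assignment (world) of the probabilistic facts, discard those inconsistent with the evidence, and keep the most probable assignment to the query variables. The facts and values are hypothetical, and the paper's actual algorithms operate on answer set programs under the credal semantics rather than on plain independent facts.

```python
# Naive MPE/MAP by world enumeration (illustrative baseline only).
from itertools import product

prob_facts = {"a": 0.7, "b": 0.4, "c": 0.2}   # hypothetical probabilistic facts
query_vars = ["a", "b"]                        # query variables
evidence = {"c": True}                         # observed variables

def world_probability(world):
    p = 1.0
    for fact, prob in prob_facts.items():
        p *= prob if world[fact] else 1.0 - prob
    return p

best_assignment, best_prob = None, -1.0
for values in product([True, False], repeat=len(prob_facts)):
    world = dict(zip(prob_facts, values))
    if any(world[v] != t for v, t in evidence.items()):
        continue                               # inconsistent with the evidence
    # Here query + evidence covers all facts, so this is the MPE special case;
    # a MAP query over a strict subset would also require summing out the rest.
    p = world_probability(world)
    if p > best_prob:
        best_prob = p
        best_assignment = {v: world[v] for v in query_vars}

print(best_assignment, best_prob)
```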

    Credimus

    Full text link
    We believe that economic design and computational complexity---while already important to each other---should become even more important to each other with each passing year. But for that to happen, experts in, on the one hand, such areas as social choice, economics, and political science and, on the other hand, computational complexity will have to better understand each other's worldviews. This article, written by two complexity theorists who also work in computational social choice theory, focuses on one direction of that process by presenting a brief overview of how most computational complexity theorists view the world. Although our immediate motivation is to make the lens through which complexity theorists see the world better understood by those in the social sciences, we also feel that even within computer science it is very important for nontheoreticians to understand how theoreticians think, just as it is equally important within computer science for theoreticians to understand how nontheoreticians think.

    Alternation in Quantum Programming: From Superposition of Data to Superposition of Programs

    Full text link
    We extract a novel quantum programming paradigm---superposition of programs---from the design idea of a popular class of quantum algorithms, namely quantum walk-based algorithms. The generality of this paradigm is guaranteed by the universality of quantum walks as a computational model. A new quantum programming language QGCL is then proposed to support the paradigm of superposition of programs. This language can be seen as a quantum extension of Dijkstra's GCL (Guarded Command Language). Surprisingly, alternation in GCL splits into two different notions in the quantum setting: classical alternation (of quantum programs) and quantum alternation, with the latter being introduced in QGCL for the first time. Quantum alternation is the key program construct for realizing the paradigm of superposition of programs. The denotational semantics of QGCL are defined by introducing a new mathematical tool called the guarded composition of operator-valued functions. The weakest precondition semantics of QGCL can then be straightforwardly derived. Another very useful program construct for realizing the quantum programming paradigm of superposition of programs, called quantum choice, can be easily defined in terms of quantum alternation. The relation between quantum choices and probabilistic choices is clarified through defining the notion of local variables. We derive a family of algebraic laws for QGCL programs that can be used in program verification, transformation, and compilation. The expressive power of QGCL is illustrated by several examples where various variants and generalizations of quantum walks are conveniently expressed using quantum alternation and quantum choice. We believe that quantum programming with quantum alternation and choice will play an important role in further exploiting the power of quantum computing. Comment: arXiv admin note: substantial text overlap with arXiv:1209.437
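
    To make the central construct concrete: quantum alternation applies program U_i when the guard ("coin") register is in basis state |i>, so the guarded composition of unitaries acts as the block-diagonal operator sum_i |i><i| tensor U_i, and quantum choice is coin preparation followed by quantum alternation. The numpy sketch below only illustrates this idea; it is not QGCL syntax, and the particular gates are arbitrary choices.

```python
# Quantum alternation sketch: guarded composition of unitaries as a
# block-diagonal (controlled) operator sum_i |i><i| (x) U_i.
import numpy as np

# Two single-qubit programs guarded by a one-qubit "coin".
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                 # NOT

def guarded_composition(unitaries):
    """Return sum_i |i><i| tensor U_i for guard basis states |i>."""
    d = len(unitaries)
    blocks = []
    for i, U in enumerate(unitaries):
        proj = np.zeros((d, d))
        proj[i, i] = 1.0
        blocks.append(np.kron(proj, U))
    return sum(blocks)

QIF = guarded_composition([H, X])              # "apply H if coin=|0>, X if coin=|1>"

# Quantum choice: first put the coin in superposition, then alternate.
coin_prep = np.kron(H, np.eye(2))
quantum_choice = QIF @ coin_prep

# Sanity check: the guarded composition of unitaries is itself unitary.
assert np.allclose(QIF.conj().T @ QIF, np.eye(4))
print(quantum_choice.round(3))
```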

    A semantics for probabilistic answer set programs with incomplete stochastic knowledge

    Get PDF
    Some semantics for probabilistic answer set programs (PASP) assign probabilities to sets of answer sets and implicitly assume these answer sets to be equiprobable. While this is a common choice in probability theory, it leads to unnatural behaviours with PASPs. We argue that the user should have a level of control over which assumption is used to obtain a probability distribution when the stochastic knowledge is incomplete. To this end, we introduce the Incomplete Knowledge Semantics (IKS) for probabilistic answer set programs. We take inspiration from the field of decision making under ignorance. Given a cost function, represented by a user-defined ordering over answer sets through weak constraints, we use the notion of Ordered Weighted Averaging (OWA) operators to distribute the probability over a set of answer sets according to the user's level of optimism. The more optimistic (or pessimistic) a user is, the more (or less) probability is assigned to the more optimal answer sets. We present an implementation and showcase the behaviour of this semantics on simple examples. We also highlight the impact that different OWA operators have on weight learning, showing that the equiprobability assumption is not always the best option.
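
    As a rough illustration of the OWA idea, the sketch below redistributes the probability mass of a world over its cost-ordered answer sets using one simple, hypothetical weight family parameterised by the user's level of optimism; the actual IKS semantics and the OWA operators it admits are defined in the paper and need not match this particular choice.

```python
# Illustrative OWA-style redistribution of a probability mass over
# answer sets ordered from most to least preferred (lowest cost first).
def owa_weights(n, optimism):
    """One simple weight family (an assumption, not the paper's definition):
    optimism=1 puts all mass on the best answer set, optimism=0 on the worst,
    and optimism=0.5 yields uniform (equiprobable) weights."""
    if optimism <= 0.0:
        return [1.0] if n == 1 else [0.0] * (n - 1) + [1.0]
    r = (1.0 - optimism) / optimism            # decay ratio (illustrative)
    raw = [r ** i for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

def distribute(mass, ordered_answer_sets, optimism):
    weights = owa_weights(len(ordered_answer_sets), optimism)
    return {a: mass * w for a, w in zip(ordered_answer_sets, weights)}

# A world with total probability 0.3 and three answer sets, best first.
print(distribute(0.3, ["as1", "as2", "as3"], optimism=0.9))   # near-optimistic
print(distribute(0.3, ["as1", "as2", "as3"], optimism=0.5))   # equiprobable
```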

    Exploring Public Opinions Toward the Use of Generative Artificial Intelligence Chatbot in Higher Education: An Insight from Topic Modelling and Sentiment Analysis

    Get PDF
    Generative Artificial Intelligence chatbots (GAI chatbots) have emerged as promising tools in various domains, including higher education. This study investigates the role of Bard, a newly developed GAI chatbot, in higher education. English tweets were collected from Twitter's free streaming Application Programming Interface (API). The Latent Dirichlet Allocation (LDA) algorithm was applied to extract latent topics from the tweets. User sentiments were extracted using the NRC Affect Intensity Lexicon and SentiStrength tools. This study explored the benefits, challenges, and future implications of integrating GAI chatbots in higher education. The findings shed light on the potential power of such tools, exemplified by Bard, in enhancing the learning process and providing support to students throughout their educational journey.
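
    A minimal sketch of the kind of pipeline described: bag-of-words LDA for topic extraction plus a lexicon-based sentiment score. It uses scikit-learn's CountVectorizer and LatentDirichletAllocation; the tiny tweet corpus and the toy lexicon (standing in for the NRC Affect Intensity Lexicon and SentiStrength) are illustrative assumptions, not the study's data or tools.

```python
# Sketch of an LDA + lexicon-sentiment pipeline over a toy tweet corpus.
# Requires scikit-learn; the corpus and lexicon are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "Bard helps me draft essays for my university course",
    "Worried that chatbots encourage plagiarism in higher education",
    "Using a GAI chatbot as a study assistant saves so much time",
    "Lecturers fear students rely on AI answers without learning",
]

# Topic modelling: bag-of-words representation + LDA with a few topics.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")

# Toy lexicon-based sentiment (stand-in for NRC / SentiStrength scoring).
lexicon = {"helps": 1, "saves": 1, "worried": -1, "fear": -1, "plagiarism": -1}
for t in tweets:
    score = sum(lexicon.get(w.strip(".,").lower(), 0) for w in t.split())
    print(score, t)
```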