
    Reasoning of non- and pre-linguistic creatures: How much do the experiments tell us?

    If creatures without a language capacity were shown to exhibit some form of logical capability, this would shed new light on the relationship between logic, language, and thought. Recent experiments testing whether some animals, as well as pre-linguistic human infants, are capable of exclusionary reasoning are taken to support exactly that conclusion. The paper discusses the analyses and conclusions of two such studies: Call's (2004) two-cups task and Mody and Carey's (2016) four-cups task. It exposes hidden assumptions within these analyses that allow the authors to settle on the explanation assigning logical capabilities to the participants, as opposed to the explanations that do not. The paper then demonstrates that the competing explanations of the experimental results are theoretically underdeveloped, leaving their predictions about the behavior of cognitive subjects unclear and thus difficult to distinguish experimentally. It is also questioned whether the explanations are rivals at all, i.e. whether they compete to explain cognitive processes at the same level. The contribution of the paper is conceptual: it aims to clarify the concepts involved in these analyses, so as to avoid oversimplified or premature conclusions about the cognitive abilities of pre- and non-linguistic creatures, and to show that the theoretical space surrounding these issues may be far more diverse and uncharted than many of these studies imply.

    Integrating Learning and Reasoning with Deep Logic Models

    Deep learning is very effective at jointly learning feature representations and classification models, especially when dealing with high-dimensional input patterns. Probabilistic logic reasoning, on the other hand, is capable of taking consistent and robust decisions in complex environments. The integration of deep learning and logic reasoning is still an open research problem, and it is considered key to the development of truly intelligent agents. This paper presents Deep Logic Models: deep graphical models that integrate deep learning and logic reasoning for both learning and inference. Deep Logic Models form an end-to-end differentiable architecture in which deep learners are embedded into a network implementing a continuous relaxation of the logic knowledge. The learning process jointly learns the weights of the deep learners and the meta-parameters controlling the high-level reasoning. The experimental results show that the proposed methodology overcomes the limitations of other approaches that have been proposed to bridge deep learning and reasoning.
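The core idea — a continuous relaxation of logic knowledge acting as a differentiable constraint on neural outputs — can be sketched as follows. This is a minimal illustration using the Łukasiewicz relaxation of implication; the function names, the rule, and the truth degrees are illustrative assumptions, not the paper's actual architecture.

```python
# Hedged sketch: a continuous (Lukasiewicz) relaxation of a logic rule,
# turning rule violation into a penalty that can be added to a training loss.

def luk_implies(a: float, b: float) -> float:
    """Lukasiewicz relaxation of the implication a -> b on [0, 1]."""
    return min(1.0, 1.0 - a + b)

def rule_loss(premise: float, conclusion: float, weight: float = 1.0) -> float:
    """Penalty for violating the rule premise -> conclusion.

    Zero when the relaxed implication is fully satisfied; grows as the
    outputs contradict the rule. Differentiable almost everywhere, so it
    can be combined with a standard supervised loss.
    """
    return weight * (1.0 - luk_implies(premise, conclusion))

# Example: two (hypothetical) network heads output truth degrees for
# smoker(x) and cancer(x); the rule smoker(x) -> cancer(x) acts as a
# soft constraint on their joint output.
penalty = rule_loss(0.9, 0.2)      # rule strongly violated
consistent = rule_loss(0.9, 0.95)  # rule satisfied, no penalty
```

In this reading, the rule weight plays the role of a meta-parameter governing how strongly the high-level knowledge constrains the learners.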

    Query DAGs: A Practical Paradigm for Implementing Belief-Network Inference

    We describe a new paradigm for implementing inference in belief networks, consisting of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks (we show how they can be generated using clustering and conditioning algorithms). The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The intended value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms, owing to the simplicity of the Q-DAG evaluation algorithm. Interestingly, Q-DAGs were found to serve other purposes as well: simple techniques for reducing Q-DAGs tend to subsume relatively complex optimization techniques for belief-network inference, such as network pruning and computation caching.
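The two-step scheme can be sketched as a tiny arithmetic DAG with constant, evidence-indicator, sum, and product nodes, plus a cached evaluator whose cost is linear in the DAG size. The node encoding and the toy query below are illustrative assumptions, not the paper's actual representation.

```python
# Hedged sketch of Q-DAG evaluation: compile-once, evaluate-per-query.

def const(v): return ("const", v)
def ev(var, val): return ("ev", var, val)  # evidence indicator
def add(*kids): return ("+",) + kids
def mul(*kids): return ("*",) + kids

def evaluate(node, evidence, cache=None):
    """Evaluate an arithmetic-DAG node under an evidence dict.

    Caching on node identity visits each node once, so the cost is
    linear in the DAG size.
    """
    if cache is None:
        cache = {}
    key = id(node)
    if key in cache:
        return cache[key]
    tag = node[0]
    if tag == "const":
        val = node[1]
    elif tag == "ev":
        # Indicator is 1 when the evidence is compatible with (var, val).
        var, v = node[1], node[2]
        val = 1.0 if evidence.get(var, v) == v else 0.0
    elif tag == "+":
        val = sum(evaluate(k, evidence, cache) for k in node[1:])
    else:  # "*"
        val = 1.0
        for k in node[1:]:
            val *= evaluate(k, evidence, cache)
    cache[key] = val
    return val

# Toy single-variable network with P(A=t) = 0.3, compiled so that
# evidence indicators select the terms consistent with the observation.
qdag = add(mul(ev("A", "t"), const(0.3)),
           mul(ev("A", "f"), const(0.0)))
no_evidence = evaluate(qdag, {})           # P(A=t) with nothing observed
a_false = evaluate(qdag, {"A": "f"})       # drops to 0 once A=f is observed
```

Answering a new query amounts to resetting the indicators and re-running the evaluator — no inference machinery is needed on-line, which is the point of the paradigm.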

    From Observations to Hypotheses: Probabilistic Reasoning Versus Falsificationism and its Statistical Variations

    Testing hypotheses is an issue of primary importance in scientific research, as well as in many other human activities. Much can be clarified if the process of learning from data is framed in a stochastic model of causes and effects. In Poincaré's words, the "essential problem of the experimental method" then becomes solving a "problem in the probability of causes", i.e. ranking in credibility the several hypotheses that might be responsible for the observations. This probabilistic approach (nowadays known as the Bayesian approach) differs from the standard (i.e. frequentist) statistical methods of hypothesis testing. The latter methods might be seen as practical attempts to implement the ideal of falsificationism, which can itself be viewed as an extension of the classical proof by contradiction to the experimental method. Some criticisms concerning conceptual as well as practical aspects of naïve falsificationism and conventional frequentist hypothesis tests are presented, and the alternative, probabilistic approach is outlined. (17 pages, 4 figures. Invited talk at the 2004 Vulcano Workshop on Frontier Objects in Astrophysics and Particle Physics, Vulcano, Italy, May 24-29, 2004. This paper and related work are also available at http://www.roma1.infn.it/~dagos/prob+stat.htm)
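The "probability of causes" amounts to ranking hypotheses by their posterior probability, P(H|D) ∝ P(D|H) P(H). A minimal sketch follows; the two hypotheses and all numbers are made up for illustration.

```python
# Hedged sketch of Bayesian hypothesis ranking: normalize prior times
# likelihood over the candidate causes of an observation.

def posteriors(priors, likelihoods):
    """Return P(H|D) for each hypothesis H, given P(H) and P(D|H)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())  # total probability of the data
    return {h: u / z for h, u in unnorm.items()}

# Two hypothetical causes of an observed effect, equally credible a
# priori, but H1 makes the observation four times more probable.
post = posteriors(priors={"H1": 0.5, "H2": 0.5},
                  likelihoods={"H1": 0.8, "H2": 0.2})
# post ranks H1 above H2 in credibility
```

Unlike a frequentist test, which accepts or rejects a single null hypothesis, this returns a full credibility ranking over all the candidate causes at once.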