
    An Evolutionary Argument for a Self-Explanatory, Benevolent Metaphysics

    In this paper, a metaphysics is proposed that includes everything that can be represented by a well-founded multiset. It is shown that this metaphysics, apart from being self-explanatory, is also benevolent. Paradoxically, it turns out that the probability that we were born in another life than our own is zero. More insights are gained by inducing properties from a metaphysics that is not self-explanatory. In particular, digital metaphysics, which claims that only computable things exist, is analyzed. First, it is shown that digital metaphysics contradicts itself by leading to the conclusion that the shortest computer program that computes the world is infinitely long; this means that the Church-Turing conjecture must be false. Second, the applicability of Occam’s razor is explained by evolution: in an evolving physics, it can appear at each moment as if the world is caused by only finitely many things. Third, and most importantly, this metaphysics is benevolent in the sense that it organizes itself to fulfill the deepest wishes of its observers. Fourth, universal computers with an infinite memory capacity cannot be built in the world. Finally, all the properties of the world, both good and bad, can be explained by evolutionary conservation.
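
    The well-founded multisets at the core of the proposal are concrete enough to sketch in code. The following Python fragment is our illustration, not anything from the paper: a hereditarily finite multiset type whose well-foundedness (no infinite descending membership chains) is exactly what guarantees that the rank computation terminates.

```python
# Illustrative sketch (not from the paper): hereditarily finite multisets.
# A well-founded multiset contains only other well-founded multisets, so
# every value built this way has a finite rank.

from collections import Counter


class Multiset:
    """An immutable multiset whose elements are themselves Multisets."""

    def __init__(self, elements=()):
        # Record the multiplicity of each (hashable) element multiset.
        self._counts = frozenset(Counter(elements).items())

    def __eq__(self, other):
        return isinstance(other, Multiset) and self._counts == other._counts

    def __hash__(self):
        return hash(self._counts)

    def rank(self):
        """Well-foundedness guarantees this recursion terminates."""
        if not self._counts:
            return 0
        return 1 + max(elem.rank() for elem, _ in self._counts)


empty = Multiset()                   # rank 0, the bottom of the hierarchy
pair = Multiset([empty, empty])      # {empty, empty} with multiplicity 2
nested = Multiset([pair, empty])
print(nested.rank())                 # -> 2
```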

    Real-time and Probabilistic Temporal Logics: An Overview

    Over the last two decades, there has been extensive study of logical formalisms for specifying and verifying real-time systems. Temporal logics have been an important research subject within this direction. Although numerous logics have been introduced for the formal specification of real-time and complex systems, an up-to-date, comprehensive analysis of these logics does not exist in the literature. In this paper, we analyse the real-time and probabilistic temporal logics that have been widely used in this field. For each logic analysed, we examine notions such as decidability, axiomatizability, expressiveness, and model checking. We also provide a comparison of the features of the temporal logics discussed.
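
    As a minimal illustration of what "model checking" means for such logics (our sketch, not part of the survey), the following Python fragment evaluates a reachability property EF p of a branching-time temporal logic over a hand-built finite Kripke structure; the three-state structure and labelling are invented for the example.

```python
# Minimal sketch (illustrative): checking EF p ("a p-state is reachable")
# on a finite Kripke structure via a backward least-fixpoint computation.

# States, transition relation, and labelling with atomic propositions.
states = {"s0", "s1", "s2"}
transitions = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2"}}
labels = {"s0": set(), "s1": set(), "s2": {"p"}}


def check_ef(prop):
    """Return the set of states satisfying EF prop (least fixpoint)."""
    satisfying = {s for s in states if prop in labels[s]}
    while True:
        # Add every state with a successor already known to satisfy EF prop.
        new = {s for s in states if transitions[s] & satisfying} | satisfying
        if new == satisfying:
            return satisfying
        satisfying = new


print(check_ef("p"))   # -> {'s0', 's1', 's2'}: p is reachable everywhere
```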

    Portfolio selection models: A review and new directions

    Modern Portfolio Theory (MPT) is based upon the classical Markowitz model, which uses variance as a risk measure. A generalization of this approach leads to mean-risk models, in which a return distribution is characterized by the expected value of return (desired to be large) and a risk value (desired to be kept small). Portfolio choice is made by solving an optimization problem in which the portfolio risk is minimized and a desired level of expected return is specified as a constraint. The need to penalize different undesirable aspects of the return distribution led to the proposal of alternative risk measures, notably those penalizing only the downside part (adverse) and not the upside (potential). These downside risk considerations constitute the basis of Post-Modern Portfolio Theory (PMPT). Examples of such risk measures are lower partial moments, Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). We revisit these risk measures and the resulting mean-risk models, discuss alternative models for portfolio selection and their choice criteria, and trace the evolution of MPT to PMPT, which incorporates utility maximization and stochastic dominance.
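
    The downside risk measures named above are easy to make concrete. The following Python sketch (our illustration, not from the review) estimates historical VaR and CVaR from a simulated sample of daily returns; the sample distribution and confidence level are arbitrary assumptions.

```python
# Illustrative sketch: historical Value-at-Risk and Conditional
# Value-at-Risk of a return sample, expressed as positive loss figures.

import numpy as np


def var_cvar(returns, alpha=0.95):
    """VaR_alpha: loss threshold exceeded with probability 1 - alpha;
    CVaR_alpha: expected loss given that this threshold is exceeded."""
    losses = np.sort(-np.asarray(returns))          # losses, ascending
    cutoff = int(np.ceil(alpha * len(losses))) - 1  # alpha-quantile index
    var = losses[cutoff]
    cvar = losses[cutoff:].mean()                   # mean of the worst tail
    return var, cvar


rng = np.random.default_rng(0)
sample = rng.normal(loc=0.001, scale=0.02, size=10_000)  # daily returns
var, cvar = var_cvar(sample, alpha=0.95)
print(f"95% VaR:  {var:.4f}")   # loss not exceeded on 95% of days
print(f"95% CVaR: {cvar:.4f}")  # mean loss on the worst 5% of days
```

    By construction CVaR is never smaller than VaR at the same confidence level, which is one reason it is preferred as a coherent measure of tail risk.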

    On the Foundations of the Brussels Operational-Realistic Approach to Cognition

    The scientific community is becoming more and more interested in research that applies the mathematical formalism of quantum theory to model human decision-making. In this paper, we provide the theoretical foundations of the quantum approach to cognition that we developed in Brussels. These foundations rest on the results of two decades of studies on the axiomatic and operational-realistic approaches to the foundations of quantum physics. The deep analogies between the foundations of physics and of cognition lead us to investigate the validity of quantum theory as a general and unitary framework for cognitive processes, and the empirical success of the Hilbert space models derived from this investigation provides strong theoretical confirmation of this validity. However, two situations in the cognitive realm, 'question order effects' and 'response replicability', indicate that even the Hilbert space framework could be insufficient to reproduce the collected data. This does not mean that the operational-realistic approach is incorrect, but simply that a larger class of measurements is at work in human cognition, so that an extended quantum formalism may be needed to deal with all of them. As we explain, the recently derived 'extended Bloch representation' of quantum theory (and the associated 'general tension-reduction' model) provides precisely such an extended formalism, while remaining within the same unitary interpretative framework.
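
    A minimal sketch of how a Hilbert space model produces question order effects (our illustration, not the Brussels model itself): two non-commuting yes/no questions, modelled as rank-1 projectors, give different sequential 'yes, yes' probabilities depending on the order in which they are asked. The state and angles below are arbitrary assumptions.

```python
# Illustrative sketch: question order effects from non-commuting
# projection measurements, i.e. P(yes to A, then yes to B) differs
# from P(yes to B, then yes to A).

import numpy as np


def projector(theta):
    """Rank-1 projector onto the unit vector (cos theta, sin theta)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)


state = np.array([1.0, 0.0])     # initial belief state |psi>
P_a = projector(0.0)             # "yes" answer to question A
P_b = projector(np.pi / 5)       # "yes" answer to question B

# Sequential "yes, yes" probability: ||P_b P_a |psi>||^2 (Lüders rule).
p_ab = np.linalg.norm(P_b @ P_a @ state) ** 2   # A asked first
p_ba = np.linalg.norm(P_a @ P_b @ state) ** 2   # B asked first
print(p_ab, p_ba)   # differ because P_a and P_b do not commute
```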

    ToyArchitecture: Unsupervised Learning of Interpretable Models of the World

    Research in Artificial Intelligence (AI) has focused mostly on two extremes: either on small improvements in narrow AI domains, or on universal theoretical frameworks which are usually uncomputable, incompatible with theories of biological intelligence, or lacking practical implementations. The goal of this work is to combine the main advantages of the two: to follow a big-picture view while providing a particular theory and its implementation. In contrast with purely theoretical approaches, the resulting architecture should be usable in realistic settings, but it should also form the core of a framework containing all the basic mechanisms, into which it should be easy to integrate additional required functionality. In this paper, we present a novel, purposely simple, and interpretable hierarchical architecture which combines multiple different mechanisms into one system: unsupervised learning of a model of the world, learning the influence of one's own actions on the world, model-based reinforcement learning, hierarchical planning and plan execution, and symbolic/sub-symbolic integration in general. The learned model is stored in the form of hierarchical representations with the following properties: 1) they are increasingly more abstract, but can retain details when needed, and 2) they are easy to manipulate in their local, symbolic-like form, which also allows one to observe the learning process at each level of abstraction. On all levels of the system, the representation of the data can be interpreted in both a symbolic and a sub-symbolic manner. This enables the architecture to learn efficiently using sub-symbolic methods and to employ symbolic inference.
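
    As a toy illustration of increasingly abstract hierarchical representations (our sketch using scikit-learn, not the ToyArchitecture implementation), each level below compresses its input into discrete cluster ids that can be read symbolically, while the underlying cluster centres remain available sub-symbolically. All sizes and parameters are arbitrary assumptions.

```python
# Toy sketch: a two-level hierarchy in which each level compresses its
# input into a discrete cluster id. The low level keeps detail (raw
# vectors and fine clusters); the high level holds a coarser, more
# abstract, symbol-like code that is easy to inspect.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
observations = rng.normal(size=(500, 8))   # raw "sensory" vectors

# Level 1: unsupervised model of the raw input.
level1 = KMeans(n_clusters=16, n_init=10, random_state=0).fit(observations)
codes1 = level1.labels_                    # 16 low-level "symbols"

# Level 2: a coarser model over the level-1 cluster centres.
level2 = KMeans(n_clusters=4, n_init=10, random_state=0).fit(
    level1.cluster_centers_)
codes2 = level2.labels_[codes1]            # 4 high-level "symbols"

# Both levels can be read symbolically (ids) or sub-symbolically (centres).
print(codes1[:10])   # detailed representation of the first observations
print(codes2[:10])   # abstract representation of the same observations
```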