
    Preference-Dependent Unawareness

    Morris (1996, 1997) introduced preference-based definitions of knowledge and belief in standard state-space structures. This paper extends this preference-based approach to unawareness structures (Heifetz, Meier, and Schipper, 2006, 2008). By defining unawareness and knowledge in terms of preferences over acts in unawareness structures and showing their equivalence to the epistemic notions of unawareness and knowledge, we aim to build a bridge between decision theory and epistemic logic. Unawareness of an event is behaviorally characterized as both the event and its negation being null.
    Keywords: unawareness, awareness, knowledge, preferences, subjective expected utility theory, decision theory, null event
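    As an illustration (a sketch of my own, not from the paper): in a standard state space with a subjective probability, an event is "null" when the agent assigns it probability zero. The behavioral signature of unawareness described above, an event and its negation both being null, can never occur under a probability measure, since the probabilities of an event and its complement sum to one. This is exactly why the characterization requires unawareness structures rather than a standard state space.

```python
# Toy check (all names and numbers invented for illustration): under any
# probability on a standard state space, an event E and its complement
# cannot both be null, because p(E) + p(not E) = 1.

states = ["s1", "s2", "s3"]

def is_null(event, prob):
    """An event is null if the agent assigns it probability zero."""
    return sum(prob[s] for s in event) == 0.0

def exhibits_unawareness(event, prob):
    """Behavioral signature of unawareness: the event AND its negation are null."""
    complement = [s for s in states if s not in event]
    return is_null(event, prob) and is_null(complement, prob)

prob = {"s1": 0.5, "s2": 0.5, "s3": 0.0}
print(is_null(["s3"], prob))               # True: s3 has probability zero
print(exhibits_unawareness(["s3"], prob))  # False: the complement has probability 1
```

No assignment of probabilities over the full state space can make the second test succeed, which motivates moving to unawareness structures where the event may be absent from the agent's subjective state space altogether.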

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed \emph{by humans} because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to \emph{think} as \emph{humans} do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information-Based Complexity.
    Comment: 37 pages
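    A toy example of the Wald-style decision problem the title alludes to (my own sketch with invented numbers, not the paper's framework): when the unknown quantity is only known to lie in a finite set of admissible scenarios, an "optimal" estimator can be selected by minimizing worst-case risk over that set.

```python
# Minimax (Wald) point estimation over a finite scenario set, squared-error loss.
import numpy as np

scenarios = np.array([0.2, 0.5, 0.9])    # admissible values of the unknown quantity
candidates = np.linspace(0.0, 1.0, 101)  # candidate point estimates

# risk[i, j] = loss of candidate i if scenario j turns out to be the truth
risk = (candidates[:, None] - scenarios[None, :]) ** 2
worst_case = risk.max(axis=1)            # max over scenarios: Wald's minimax criterion
best = candidates[worst_case.argmin()]
print(best)  # the midpoint of the extreme scenarios, ~0.55
```

Under squared-error loss the minimax estimate sits at the midpoint of the two extreme scenarios; the paper's point is that formulating and computing such optima rigorously, over infinite-dimensional scenario and information spaces, is the hard part.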

    EMERGING THE EMERGENCE SOCIOLOGY: The Philosophical Framework of Agent-Based Social Studies

    Structuration theory, originally proposed by Anthony Giddens, and its subsequent refinements have tried to resolve a dilemma in the epistemology of the social sciences and humanities: social scientists apparently have to choose between being too sociological or too psychological. This tension was, however, already articulated long ago in the work of the classical sociologist Emile Durkheim. With the growth of computational technology, the use of models to construct bottom-up theories has spread widely; this approach is known as agent-based modeling. This paper offers a philosophical perspective on agent-based social science as a sociology able to cope with the emergent factors that arise in sociological analysis. The framework uses an artificial neural network model to show how emergent phenomena arise from a complex system. Since society has self-organizing (autopoietic) properties, the paper uses Kohonen's self-organizing map. The simulation examples make it clear that emergent phenomena in social systems are seen by the sociologist apart from the qualitative framework of atomistic sociology. The paper concludes that an emergence sociology is needed to sharpen sociological analysis.
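    For readers unfamiliar with the tool the abstract names, here is a minimal Kohonen self-organizing map (an illustrative sketch of the standard algorithm, not the paper's model): a one-dimensional grid of units gradually organizes itself to mirror the structure of the data, which is the sense of "self-organization" invoked above.

```python
# Minimal 1-D Kohonen SOM: units on a grid compete for each observation,
# and the winner pulls its grid neighbors toward the data point.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 2))      # 200 two-dimensional observations (toy data)
weights = rng.random((10, 2))    # 10 map units arranged on a 1-D grid

T = 1000
for t in range(T):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / T)                             # decaying learning rate
    sigma = 3.0 * (1 - t / T) + 0.5                    # decaying neighborhood radius
    grid_dist = np.abs(np.arange(10) - bmu)            # distance to the BMU on the grid
    h = np.exp(-grid_dist**2 / (2 * sigma**2))         # neighborhood function
    weights += lr * h[:, None] * (x - weights)         # pull neighbors toward x
```

After training, neighboring grid units carry similar weight vectors, so the map preserves the topology of the data, the emergent order that the paper reads sociologically.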

    Computational complexity of the landscape I

    We study the computational complexity of the physical problem of finding vacua of string theory which agree with data, such as the cosmological constant, and show that such problems are typically NP-hard. In particular, we prove that in the Bousso-Polchinski model the problem is NP-complete. We discuss the issues this raises and the possibility that, even if we were to find compelling evidence that some vacuum of string theory describes our universe, we might never be able to find that vacuum explicitly. In a companion paper, we apply this point of view to the question of how early cosmology might select a vacuum.
    Comment: JHEP3 LaTeX, 53 pp, 2 .eps figures
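    The flavor of the Bousso-Polchinski problem can be conveyed with a toy brute-force search (all charges and tolerances below are invented for illustration): one looks for integer fluxes making a quadratic combination land in a narrow window around zero, much like a subset-sum problem, and the search space grows exponentially in the number of fluxes, which is the intuition behind the hardness result.

```python
# Toy Bousso-Polchinski-style search: find integer fluxes n_i with
# |Lambda0 + (1/2) * sum_i n_i^2 q_i^2| < eps.  Exhaustive search over
# (2N+1)^k flux assignments, exponential in the number of fluxes k.
import itertools

Lambda0 = -10.0
q = [1.1, 1.7, 2.3, 2.9]   # flux charges (invented)
eps = 0.5                  # width of the allowed window for the vacuum energy
N = 3                      # each flux ranges over -N..N

solutions = []
for n in itertools.product(range(-N, N + 1), repeat=len(q)):
    Lam = Lambda0 + 0.5 * sum((ni * qi) ** 2 for ni, qi in zip(n, q))
    if abs(Lam) < eps:
        solutions.append((n, Lam))

print(len(solutions) > 0)  # True: e.g. fluxes (0, 2, 0, 1) work here
```

Shrinking `eps` toward the observed cosmological constant while growing the number of fluxes is what makes realistic instances intractable.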

    Lambek vs. Lambek: Functorial Vector Space Semantics and String Diagrams for Lambek Calculus

    The Distributional Compositional Categorical (DisCoCat) model is a mathematical framework that provides compositional semantics for meanings of natural language sentences. It consists of a computational procedure for constructing meanings of sentences, given their grammatical structure in terms of compositional type-logic, and given the empirically derived meanings of their words. For the particular case that the meaning of words is modelled within a distributional vector space model, its experimental predictions, derived from real large-scale data, have outperformed other empirically validated methods that could build vectors for a full sentence. This success can be attributed to a conceptually motivated mathematical underpinning, integrating qualitative compositional type-logic and quantitative modelling of meaning within a category-theoretic mathematical framework. The type-logic used in the DisCoCat model is Lambek's pregroup grammar. Pregroup types form a posetal compact closed category, which can be passed, in a functorial manner, on to the compact closed structure of vector spaces, linear maps and tensor product. The diagrammatic versions of the equational reasoning in compact closed categories can be interpreted as the flow of word meanings within sentences. Pregroups simplify Lambek's previous type-logic, the Lambek calculus, which has been extensively used to formalise and reason about various linguistic phenomena. The apparent reliance of the DisCoCat on pregroups has been seen as a shortcoming. This paper addresses this concern by pointing out that one may just as well realise a functorial passage from the original type-logic of Lambek, a monoidal bi-closed category, to vector spaces, or to any other model of meaning organised within a monoidal bi-closed category. The corresponding string diagram calculus, due to Baez and Stay, now depicts the flow of word meanings.
    Comment: 29 pages, pending publication in Annals of Pure and Applied Logic
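    The "flow of word meanings" via the functorial passage to vector spaces can be made concrete with a toy computation (my own sketch with random vectors, not from the paper): a transitive verb lives in the tensor space noun ⊗ sentence ⊗ noun, and the sentence meaning is obtained by contracting it with the subject and object vectors, exactly the wiring the string diagrams depict.

```python
# DisCoCat-style composition for "subject verb object" with toy vectors.
import numpy as np

d = 4                          # dimension of the noun (and sentence) space
rng = np.random.default_rng(1)
subject = rng.random(d)        # vector for e.g. "dogs"
obj = rng.random(d)            # vector for e.g. "cats"
verb = rng.random((d, d, d))   # tensor for e.g. "chase" in noun ⊗ sentence ⊗ noun

# Contract the subject and object wires into the verb, leaving the sentence wire.
sentence = np.einsum("i,ijk,k->j", subject, verb, obj)
print(sentence.shape)  # (4,): a vector in the sentence space
```

The contraction pattern in the `einsum` string is the algebraic counterpart of the cups in the compact closed (or bi-closed) diagrammatic calculus.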

    Coalgebraic Behavioral Metrics

    We study different behavioral metrics, such as those arising from both branching and linear-time semantics, in a coalgebraic setting. Given a coalgebra $\alpha \colon X \to HX$ for a functor $H \colon \mathrm{Set} \to \mathrm{Set}$, we define a framework for deriving pseudometrics on $X$ which measure the behavioral distance of states. A crucial step is the lifting of the functor $H$ on $\mathrm{Set}$ to a functor $\overline{H}$ on the category $\mathrm{PMet}$ of pseudometric spaces. We present two different approaches which can be viewed as generalizations of the Kantorovich and Wasserstein pseudometrics for probability measures. We show that the pseudometrics provided by the two approaches coincide on several natural examples, but in general they differ. If $H$ has a final coalgebra, every lifting $\overline{H}$ yields in a canonical way a behavioral distance which is usually branching-time, i.e., it generalizes bisimilarity. In order to model linear-time metrics (generalizing trace equivalences), we show sufficient conditions for lifting distributive laws and monads. These results enable us to employ the generalized powerset construction.
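    The classical special case that both lifting approaches generalize is the Kantorovich/Wasserstein distance between probability measures. As an illustration (a helper of my own, not the paper's coalgebraic construction), for distributions on sorted real points the 1-Wasserstein distance is the integral of the absolute difference of the CDFs:

```python
# 1-D Kantorovich/Wasserstein-1 distance between finite distributions.
import numpy as np

def wasserstein_1d(p, q, points):
    """W1 between distributions p, q on sorted real points:
    the integral of |CDF_p - CDF_q| over the real line."""
    p, q, points = map(np.asarray, (p, q, points))
    cdf_diff = np.abs(np.cumsum(p - q))[:-1]  # |CDF difference| on each gap
    gaps = np.diff(points)                    # lengths of the gaps between points
    return float((cdf_diff * gaps).sum())

# Half the mass sits one step to the right in the second distribution.
d = wasserstein_1d([0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0, 1, 2])
print(d)  # 1.0: mass 0.5 moves distance 1, twice
```

The paper's contribution is to lift this kind of distance from probability distributions to arbitrary functors $H$, so that behavioral distances between coalgebra states come out of the same recipe.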