171 research outputs found

    Credence for Epistemic Discourse

    Many recent theories of epistemic discourse exploit an informational notion of consequence, i.e. a notion that defines entailment as preservation of support by an information state. This paper investigates how informational consequence fits with probabilistic reasoning. I raise two problems. First, all informational inferences that are not also classical inferences are, intuitively, probabilistically invalid. Second, all these inferences can be exploited, in a systematic way, to generate triviality results. The informational theorist is left with two options, both of them radical: they can either deny that epistemic modal claims have probability at all, or they can move to a nonstandard probability theory.
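
    For concreteness, here is a minimal formal rendering of the two consequence relations the abstract contrasts; the notation (worlds w, information states s, support relation written with \Vdash) is an assumption of this sketch, not the paper's own:

        % Classical consequence: preservation of truth at worlds.
        \Gamma \models_{\mathrm{cl}} \varphi \iff \forall w :\ \big(\forall \gamma \in \Gamma,\ w \models \gamma\big) \Rightarrow w \models \varphi

        % Informational consequence: preservation of support by information states.
        \Gamma \models_{\mathrm{info}} \varphi \iff \forall s :\ \big(\forall \gamma \in \Gamma,\ s \Vdash \gamma\big) \Rightarrow s \Vdash \varphi

    A standard example from this literature (assumed here, not taken from the abstract) of an inference that is informationally but not classically valid is the move from p to "must p": any state that supports p supports "must p", yet one's credence in "must p" can intuitively fall short of one's credence in p, which is the kind of mismatch the two problems above target.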

    Portfolio Optimization via Credal Probabilistic Circuits

    Portfolio optimization is a crucial part of many investment approaches and is arguably employed by almost all traders in one way or another. We introduce novel approaches for determining optimal weights for portfolios using a class of robust probabilistic generative models. Specifically, we utilize credal probabilistic circuits, a class of generative models known for their ability to perform efficient exact probabilistic inference and to handle uncertainty in a sound manner. To account for model or epistemic uncertainty, these models use the theory of imprecise probability. Sets of parameter values represent perturbations of the probabilistic circuit model and can be interpreted as an uncertainty-aware correction of the parameters of an underlying portfolio. We call the result a credal portfolio. We propose a method for determining the amount of perturbation that adequately captures the uncertainty of the problem, and we employ it to analyse investments with real-world daily stock market data, showing promising results when compared to standard approaches.
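
    The paper's credal probabilistic circuits are not reconstructed here, but the underlying move of turning a set of perturbed probability parameters into a worst-case portfolio criterion can be illustrated with the simplest imprecise-probability device, an epsilon-contamination class over scenario probabilities; the function names, the candidate portfolios, and the synthetic data below are illustrative assumptions, not the paper's method:

        import numpy as np

        def lower_expected_return(weights, scenario_returns, p_hat, eps):
            """Worst-case expected portfolio return over the credal set
            {(1 - eps) * p_hat + eps * q : q any probability vector}."""
            x = scenario_returns @ weights              # portfolio return in each scenario
            # For an epsilon-contamination class, the lower expectation has a closed form:
            # (1 - eps) * E_{p_hat}[x] + eps * min_k x_k.
            return (1.0 - eps) * float(p_hat @ x) + eps * float(np.min(x))

        def pick_robust_portfolio(candidates, scenario_returns, p_hat, eps):
            """Select the candidate weight vector with the best lower expected return."""
            scores = [lower_expected_return(w, scenario_returns, p_hat, eps) for w in candidates]
            return candidates[int(np.argmax(scores))]

        # Toy usage with synthetic daily returns for 4 assets over 500 scenarios.
        rng = np.random.default_rng(0)
        R = rng.normal(0.0005, 0.02, size=(500, 4))
        p_hat = np.full(500, 1.0 / 500)                 # empirical scenario probabilities
        candidates = [np.eye(4)[i] for i in range(4)] + [np.full(4, 0.25)]
        print(pick_robust_portfolio(candidates, R, p_hat, eps=0.1))

    In this sketch a larger eps widens the credal set and typically tilts the selection toward more diversified portfolios, which mirrors the abstract's reading of the perturbation as an uncertainty-aware correction of the portfolio parameters.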

    Tractable probabilistic models for causal learning and reasoning

    This thesis examines the application of tractable probabilistic modelling principles to causal learning and reasoning. Tractable probabilistic modelling is a promising paradigm that has emerged in recent years, which focuses on probabilistic models that enable exact and efficient probabilistic reasoning. In particular, the framework of probabilistic circuits provides a systematic language for characterizing the tractability of models for various inference queries based on their structural properties, with recent proposals pushing the boundaries of expressiveness and tractability. However, not all information about a system can be captured through a probability distribution over observed variables; for example, the causal direction between two variables can be indistinguishable from data alone. Formalizing this, Pearl’s Causal Hierarchy (also known as the ladder of causation) delineates three levels of causal queries, namely associational, interventional, and counterfactual, that require increasingly greater knowledge of the underlying causal system, represented by a structural causal model and its associated causal diagram. Motivated by this, we investigate the possibility of tractable causal modelling; that is, exact and efficient reasoning with respect to classes of causal queries. In particular, we identify three scenarios, distinguished by the amount of knowledge available to the modeller: when the full causal diagram/model is available, when only the observational distribution and an identifiable causal estimand are available, and when there is additionally uncertainty over the causal diagram. In each scenario, we propose probabilistic circuit representations, structural properties, and algorithms that enable efficient and exact causal reasoning. These models are distinguished from tractable probabilistic models in that they can answer not only different probabilistic inference queries, but also causal queries involving different interventions and even different causal diagrams. However, we also identify key limitations that cast doubt on the existence of a fully general tractable causal model. Our contributions also extend the theory of probabilistic circuits by proposing new properties and circuit architectures, which enable the analysis of advanced inference queries including, but not limited to, causal inference estimands.
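
    The thesis's circuit architectures for causal queries are not reproduced here; the following is only a minimal sketch of the standard sum/product-node formalism it builds on, showing why a smooth and decomposable circuit answers joint and marginal queries in a single feed-forward pass (the two-variable mixture and its parameters are made-up values):

        class Leaf:
            """Bernoulli leaf over one variable; returns 1.0 when the variable is marginalized out."""
            def __init__(self, var, theta):
                self.var, self.theta = var, theta
            def value(self, evidence):
                if self.var not in evidence:
                    return 1.0                      # variable summed out exactly
                return self.theta if evidence[self.var] else 1.0 - self.theta

        class Product:
            def __init__(self, children):
                self.children = children            # children over disjoint variables (decomposability)
            def value(self, evidence):
                out = 1.0
                for child in self.children:
                    out *= child.value(evidence)
                return out

        class Sum:
            def __init__(self, weights, children):
                self.weights, self.children = weights, children   # children over the same variables (smoothness)
            def value(self, evidence):
                return sum(w * c.value(evidence) for w, c in zip(self.weights, self.children))

        # P(X1, X2) as a two-component mixture of independent Bernoullis.
        circuit = Sum([0.6, 0.4], [
            Product([Leaf("X1", 0.9), Leaf("X2", 0.2)]),
            Product([Leaf("X1", 0.3), Leaf("X2", 0.7)]),
        ])

        joint = circuit.value({"X1": True, "X2": False})   # P(X1=1, X2=0) = 0.468
        marginal = circuit.value({"X1": True})             # P(X1=1) = 0.66, X2 marginalized exactly

    The causal representations discussed in the thesis add further structure on top of this evaluation scheme so that interventional and counterfactual queries, not just marginals, reduce to circuit evaluations.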

    Modes of Truth

    The aim of this volume is to open up new perspectives and to raise new research questions about a unified approach to truth, modalities, and propositional attitudes. The volume’s essays are grouped thematically around different research questions. The first theme concerns the tension between the theoretical role of the truth predicate in semantics and its expressive function in language. The second theme of the volume concerns the interaction of truth with modal and doxastic notions. The third theme covers higher-order solutions to the semantic and modal paradoxes, providing an alternative to first-order solutions embraced in the first two themes. This book will be of interest to researchers working in epistemology, logic, philosophy of logic, philosophy of language, philosophy of mathematics, and semantics.

    Zero-shot Task Preference Addressing Enabled by Imprecise Bayesian Continual Learning

    Like generic multi-task learning, continual learning is inherently a multi-objective optimization problem, and therefore faces a trade-off between the performance of different tasks. That is, to optimize for the current task distribution, it may need to compromise performance on some tasks to improve on others. This means there exist multiple models that are each optimal at different times, each addressing a distinct task-performance trade-off. Researchers have discussed how to train particular models to address specific preferences on these trade-offs. However, existing algorithms require additional sample overheads, a large burden when there are multiple, possibly infinitely many, preferences. In response, we propose Imprecise Bayesian Continual Learning (IBCL). Upon a new task, IBCL (1) updates a knowledge base in the form of a convex hull of model parameter distributions and (2) obtains particular preference-addressing models in a zero-shot manner. That is, IBCL does not require any additional training overhead to construct preference-addressing models from its knowledge base. We show that models obtained by IBCL have guarantees in identifying the preferred parameters. Moreover, experiments show that IBCL is able to locate the Pareto set of parameters given a preference, maintain performance similar to or better than baseline methods, and significantly reduce training overhead via zero-shot preference addressing.
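
    IBCL's actual knowledge-base construction and its guarantees are not reproduced here; the sketch below only illustrates the zero-shot step under the simplifying assumption that each task's knowledge is a diagonal-Gaussian posterior over a shared parameter vector, so that a preference vector selects a convex combination of cached distributions with no further training (function and variable names are illustrative):

        import numpy as np

        def preference_model(means, variances, preference):
            """Moments of the preference-weighted convex combination (mixture) of
            per-task parameter posteriors; no gradient steps are taken."""
            w = np.asarray(preference, dtype=float)
            w = w / w.sum()
            means = np.stack(means)                 # shape: (num_tasks, num_params)
            variances = np.stack(variances)
            mix_mean = w @ means                    # mixture mean
            mix_var = w @ (variances + means ** 2) - mix_mean ** 2   # mixture variance
            return mix_mean, mix_var

        # Two cached task posteriors over 3 parameters; a 70/30 preference yields a model zero-shot.
        means = [np.array([0.1, 0.5, -0.2]), np.array([0.3, 0.1, 0.4])]
        variances = [np.full(3, 0.01), np.full(3, 0.02)]
        theta, theta_var = preference_model(means, variances, preference=[0.7, 0.3])

    Because only cached moments are combined, addressing a new preference costs a handful of vector operations, which is the sense in which preference addressing is zero-shot in the abstract above.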