
    Foundational Extensible Corecursion

    This paper presents a formalized framework for defining corecursive functions safely in a total setting, based on corecursion up-to and relational parametricity. The end product is a general corecursor that allows corecursive (and even recursive) calls under well-behaved operations, including constructors. Corecursive functions that are well behaved can be registered as such, thereby increasing the corecursor's expressiveness. The metatheory is formalized in the Isabelle proof assistant and forms the core of a prototype tool. The corecursor is derived from first principles, without requiring new axioms or extensions of the logic.
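
    The notion of corecursive calls guarded by constructors and well-behaved ("friendly") operations can be illustrated with lazy streams. The sketch below is a minimal Python-generator analogy, not the Isabelle corecursor itself; the Fibonacci stream and the pointwise addition operation are illustrative choices rather than examples taken from the paper.

```python
from itertools import islice

def pointwise_add(xs, ys):
    # A "well-behaved" (friendly) operation: it produces one output element
    # per input element consumed, so it preserves productivity.
    for x, y in zip(xs, ys):
        yield x + y

def tail(xs):
    # Drop the head of a stream.
    next(xs)
    yield from xs

def fib():
    # Corecursive definition: the recursive calls to fib() occur only under
    # the stream "constructor" (yield) and under pointwise_add, mirroring
    # corecursion up-to well-behaved operations.
    yield 0
    yield 1
    yield from pointwise_add(fib(), tail(fib()))

print(list(islice(fib(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```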

    Learning Probabilistic Logic Programs in Continuous Domains

    The field of statistical relational learning aims at unifying logic and probability to reason and learn from data. Perhaps the most successful paradigm in the field is probabilistic logic programming: the enabling of stochastic primitives in logic programming, which is now increasingly seen to provide a declarative background to complex machine learning applications. While many systems offer inference capabilities, the more significant challenge is that of learning meaningful and interpretable symbolic representations from data. In that regard, inductive logic programming and related techniques have paved much of the way for the last few decades. Unfortunately, a major limitation of this exciting landscape is that much of the work is limited to finite-domain discrete probability distributions. Recently, a handful of systems have been extended to represent and perform inference with continuous distributions. The problem, of course, is that classical solutions for inference are either restricted to well-known parametric families (e.g., Gaussians) or resort to sampling strategies that provide correct answers only in the limit. When it comes to learning, moreover, inducing representations remains entirely open, other than "data-fitting" solutions that force-fit points to the aforementioned parametric families. In this paper, we take the first steps towards inducing probabilistic logic programs for continuous and mixed discrete-continuous data, without being pigeon-holed to a fixed set of distribution families. Our key insight is to leverage techniques from piecewise polynomial function approximation theory, yielding a principled way to learn and compositionally construct density functions. We test the framework and discuss the learned representations. (Comment: accepted at the 2018 KR Workshop on Hybrid Reasoning and Learning.)
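
    As a rough illustration of the target representation, the sketch below fits a piecewise polynomial density to one-dimensional samples via a histogram. It is a toy under stated assumptions (equal-width pieces, fixed low degree, plain numpy) and is not the relational learning procedure proposed in the paper.

```python
import numpy as np

def fit_piecewise_poly_density(samples, n_pieces=4, degree=2):
    """Crude piecewise-polynomial density estimate: split the support into
    equal-width pieces, fit a low-degree polynomial to a histogram density
    estimate on each piece, then renormalise so the total mass is one."""
    lo, hi = samples.min(), samples.max()
    edges = np.linspace(lo, hi, n_pieces + 1)
    hist, bin_edges = np.histogram(samples, bins=50, range=(lo, hi), density=True)
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    pieces = []
    for a, b in zip(edges[:-1], edges[1:]):
        mask = (centers >= a) & (centers <= b)
        poly = np.poly1d(np.polyfit(centers[mask], hist[mask], degree))
        pieces.append((a, b, poly))
    # Renormalise: divide every piece by the total integral over all pieces.
    total = sum(np.polyint(p)(b) - np.polyint(p)(a) for a, b, p in pieces)
    return [(a, b, p / total) for a, b, p in pieces]

# Example: approximate a standard normal density from samples.
rng = np.random.default_rng(0)
pieces = fit_piecewise_poly_density(rng.normal(size=5000))
for a, b, p in pieces:
    print(f"[{a:+.2f}, {b:+.2f}] -> degree-{p.order} polynomial")
```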

    Generalized belief change with imprecise probabilities and graphical models

    We provide a theoretical investigation of probabilistic belief revision in complex frameworks, under extended conditions of uncertainty, inconsistency and imprecision. We motivate our kinematical approach by specializing our discussion to probabilistic reasoning with graphical models, whose modular representation allows for efficient inference. Most results in this direction are derived from the relevant work of Chan and Darwiche (2005), which first proved the inter-reducibility of virtual and probabilistic evidence. Such forms of information, deeply distinct in their meaning, are extended to the conditional and imprecise frameworks, allowing further generalizations, e.g. to experts' qualitative assessments. Belief aggregation and iterated revision of a rational agent's belief are also explored.
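
    The inter-reducibility of virtual and probabilistic (Jeffrey) evidence can be seen on a toy joint distribution: revising towards a specified new marginal via Jeffrey's rule coincides with Pearl's virtual evidence applied with likelihood ratios proportional to the ratio of new to old marginals. The numbers below are illustrative only; a minimal numpy sketch:

```python
import numpy as np

# Toy joint distribution P(A, B) over two binary variables;
# rows index A, columns index B (numbers are illustrative).
P = np.array([[0.3, 0.2],
              [0.1, 0.4]])

# Jeffrey's rule: revise so the new marginal of A is q = (0.7, 0.3),
# keeping the conditionals P(B | A) fixed.
q = np.array([0.7, 0.3])
p_A = P.sum(axis=1)
P_jeffrey = (q / p_A)[:, None] * P

# Virtual (Pearl) evidence: the same revision expressed as likelihood
# ratios lambda(a) proportional to q(a) / P(a) on a dummy observation.
lam = q / p_A
P_virtual = lam[:, None] * P
P_virtual /= P_virtual.sum()

assert np.allclose(P_jeffrey, P_virtual)
print(P_jeffrey)
```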

    On Universal Prediction and Bayesian Confirmation

    The Bayesian framework is a well-studied and successful framework for inductive reasoning, which includes hypothesis testing and confirmation, parameter estimation, sequence prediction, classification, and regression. But standard statistical guidelines for choosing the model class and prior are not always available, or fail, particularly in complex situations. Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. We discuss in breadth how, and in which sense, universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. We show that Solomonoff's model possesses many desirable properties: it gives strong total and weak instantaneous bounds; in contrast to most classical continuous prior densities, it has no zero p(oste)rior problem, i.e. it can confirm universal hypotheses; it is reparametrization and regrouping invariant; and it avoids the old-evidence and updating problem. It even performs well (actually better) in non-computable environments. (Comment: 24 pages.)
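
    For reference, Solomonoff's universal prior and the "strong total bound" mentioned above can be written as follows; the notation (universal monotone machine U, program length ℓ, prefix complexity K) follows standard presentations such as Hutter's and is not spelled out in the abstract.

```latex
% Universal prior over finite strings x: sum over all programs p whose
% output on the universal monotone machine U starts with x.
M(x) \;=\; \sum_{p \,:\, U(p)\ \text{starts with}\ x} 2^{-\ell(p)}
% Prediction is by conditioning:
M(x_t \mid x_{<t}) \;=\; \frac{M(x_{<t} x_t)}{M(x_{<t})}
% Strong total bound: for any computable environment \mu,
\sum_{t=1}^{\infty} \mathbf{E}\Big[ D_{\mathrm{KL}}\big(\mu(\cdot \mid x_{<t}) \,\big\|\, M(\cdot \mid x_{<t})\big) \Big] \;\le\; K(\mu)\,\ln 2
```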

    A General Algorithm for Deciding Transportability of Experimental Results

    Generalizing empirical findings to new environments, settings, or populations is essential in most scientific explorations. This article treats a particular problem of generalizability, called "transportability", defined as a license to transfer information learned in experimental studies to a different population, on which only observational studies can be conducted. Given a set of assumptions concerning commonalities and differences between the two populations, Pearl and Bareinboim (2011) derived sufficient conditions that permit such transfer to take place. This article summarizes their findings and supplements them with an effective procedure for deciding when and how transportability is feasible. It establishes a necessary and sufficient condition for deciding when causal effects in the target population are estimable from both the statistical information available and the causal information transferred from the experiments. The article further provides a complete algorithm for computing the transport formula, that is, a way of combining observational and experimental information to synthesize a bias-free estimate of the desired causal relation. Finally, the article examines the differences between transportability and other variants of generalizability.
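
    In the prototypical case treated by Pearl and Bareinboim, the transport formula takes the form below, where P is the source population (experiments available), P* is the target population (observational data only), and Z is an S-admissible set of covariates in the selection diagram; the general algorithm in the article handles cases beyond this one.

```latex
% Transport formula for an S-admissible covariate set Z:
P^{*}(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P\big(y \mid \mathrm{do}(x), z\big)\, P^{*}(z)
% The experimentally identified effect P(y | do(x), z) from the source
% population is reweighted by the covariate distribution P*(z) observed
% in the target population.
```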

    Commutative strategies for reliability analysis in software product lines

    Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2016. Software product line engineering is a means to systematically manage variability and commonality in software systems, enabling the automated synthesis of related programs (products) from a set of reusable assets. However, the number of products in a software product line may grow exponentially with the number of features, so it is practically infeasible to quality-check each of these products in isolation. There are a number of variability-aware approaches to product-line analysis that adapt single-product analysis techniques to cope with variability in an efficient way. Such approaches can be classified along three analysis dimensions (product-based, family-based, and feature-based), but, particularly in the context of reliability analysis, there is no theory comprising both (a) a formal specification of the three dimensions and resulting analysis strategies and (b) proof that such analyses are equivalent to one another. The lack of such a theory prevents formal reasoning about the relationship between the analysis dimensions and derived analysis techniques, thereby limiting the confidence in the corresponding results. To fill this gap, we present a product line that implements five approaches to reliability analysis of product lines. We have found empirical evidence that all five approaches are equivalent, in the sense that they yield equal reliabilities when analyzing a given product line. We also formalize three of the implemented strategies and prove that they are sound with respect to the probabilistic approach to reliability analysis of a single product. Furthermore, we present a commuting diagram of intermediate analysis steps, which relates different strategies and enables the reuse of soundness proofs between them.
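
    To make the commutativity claim concrete in miniature, the sketch below compares a product-based and a family-based reliability evaluation of a toy product line (illustrative reliabilities, serial composition of independent components). It is not the dissertation's formal model, which works on probabilistic behavioral models, but it shows the sense in which different strategies yield equal reliabilities for every configuration.

```python
from itertools import product

# Toy product line: a mandatory Base component plus two optional features,
# each contributing a component with a known reliability (illustrative values).
RELIABILITY = {"Base": 0.99, "Encryption": 0.95, "Compression": 0.97}
OPTIONAL = ["Encryption", "Compression"]

def product_based(config):
    """Product-based strategy: analyse one concrete product by multiplying
    the reliabilities of the components it actually contains."""
    r = RELIABILITY["Base"]
    for feature in config:
        r *= RELIABILITY[feature]
    return r

def family_based(selection):
    """Family-based strategy: one variability-aware expression in which each
    optional feature contributes its reliability only when selected."""
    r = RELIABILITY["Base"]
    for feature in OPTIONAL:
        r *= RELIABILITY[feature] if selection[feature] else 1.0
    return r

# The strategies commute: equal reliabilities for every valid configuration.
for bits in product([False, True], repeat=len(OPTIONAL)):
    selection = dict(zip(OPTIONAL, bits))
    config = [f for f in OPTIONAL if selection[f]]
    assert abs(product_based(config) - family_based(selection)) < 1e-12
print("all configurations agree")
```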

    Computed tomography image analysis for the detection of obstructive lung diseases

    Damage to the small airways resulting from direct lung injury or associated with many systemic disorders is not easy to identify. Non-invasive techniques such as chest radiography or conventional tests of lung function often cannot reveal the pathology. On Computed Tomography (CT) images, the signs suggesting the presence of obstructive airways disease are subtle, and inter- and intra-observer variability can be considerable. The goal of this research was to implement a system for the automated analysis of CT data of the lungs. Its function is to help clinicians establish a confident assessment of specific obstructive airways diseases and increase the precision of investigation of structure/function relationships. To help resolve the ambiguities of the CT scans, the main objectives of our system were to provide a functional description of the raster images, extract semi-quantitative measurements of the extent of obstructive airways disease, and propose a clinical diagnosis aid using a priori knowledge of CT image features of the diseased lungs. The diagnostic process presented in this thesis involves the extraction and analysis of multiple findings. Several novel low-level computer vision feature extractors and image processing algorithms were developed for extracting the extent of the hypo-attenuated areas, textural characterisation of the lung parenchyma, and morphological description of the bronchi. The fusion of the results of these extractors was achieved with a probabilistic network combining a priori knowledge of lung pathology. Creating a CT lung phantom allowed for the initial validation of the proposed methods. Performance of the techniques was then assessed with clinical trials involving other diagnostic tests and expert chest radiologists. The results of the proposed system for diagnostic decision-support demonstrated the feasibility and importance of information fusion in medical image interpretation.
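
    One of the semi-quantitative measurements mentioned above, the extent of hypo-attenuated lung, is commonly summarised as the fraction of lung voxels whose attenuation falls below a fixed threshold. The sketch below is a generic illustration of that idea; the -950 HU cut-off and the synthetic volume are assumptions for the example, not values taken from the thesis.

```python
import numpy as np

def low_attenuation_ratio(ct_hu, lung_mask, threshold_hu=-950):
    """Fraction of voxels inside the lung mask below a fixed attenuation
    threshold (-950 HU is a commonly used emphysema cut-off; assumed here,
    not taken from the thesis)."""
    lung_voxels = ct_hu[lung_mask]
    return float(np.mean(lung_voxels < threshold_hu))

# Illustrative use on a synthetic volume of fake HU values.
rng = np.random.default_rng(0)
volume = rng.normal(loc=-850, scale=60, size=(64, 64, 64))
mask = np.ones_like(volume, dtype=bool)
print(f"hypo-attenuated fraction: {low_attenuation_ratio(volume, mask):.2%}")
```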

    Short-term wind power forecasting: probabilistic and space-time aspects
