
    Parameter Learning of Logic Programs for Symbolic-Statistical Modeling

    We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e., definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, a possible-world semantics with a probability distribution that is unconditionally applicable to arbitrary logic programs, including those for HMMs, PCFGs, and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes in which each decision is exclusive and independent. It runs on a new data structure called support graphs, which describe the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probabilities generalized for logic programs. The complexity analysis shows that, when combined with OLDT search for all explanations of observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e., the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the algorithm for singly connected Bayesian networks, which were developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm.
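
    The EM procedure over explanation sets can be sketched in a few lines once all explanations have been enumerated. Below is a minimal, hedged Python sketch of the E- and M-steps; the toy observations, switch names, and uniform initialisation are assumptions for illustration, and the support-graph sharing that gives the graphical EM algorithm its efficiency is deliberately replaced by naive enumeration.

```python
from collections import defaultdict
from math import prod

# Toy data (hypothetical): each observation has one or more explanations;
# an explanation is a list of (switch, outcome) probabilistic-fact choices,
# assumed exclusive and independent as the paper requires.
observations = [
    [[("init", "s0"), ("emit", "a")], [("init", "s1"), ("emit", "a")]],
    [[("init", "s0"), ("emit", "b")]],
]

# Parameters: P(outcome | switch), initialised uniformly.
theta = {("init", "s0"): 0.5, ("init", "s1"): 0.5,
         ("emit", "a"): 0.5, ("emit", "b"): 0.5}

for _ in range(100):                       # EM iterations
    expected = defaultdict(float)          # E-step: expected choice counts
    for expls in observations:
        weights = [prod(theta[c] for c in e) for e in expls]
        z = sum(weights)                   # probability of the observation
        for e, w in zip(expls, weights):
            for choice in e:
                expected[choice] += w / z  # posterior-weighted count
    totals = defaultdict(float)            # M-step: renormalise per switch
    for (sw, out), c in expected.items():
        totals[sw] += c
    theta = {(sw, out): c / totals[sw] for (sw, out), c in expected.items()}
```

    Replacing the enumeration over explanations with dynamic programming on the shared support graph is what reduces the cost to that of Baum-Welch or Inside-Outside for the corresponding model classes.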

    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research through engineering development to fielded NASA applications, particularly those applications enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.

    Task adapted reconstruction for inverse problems

    The paper considers the problem of performing a task defined on a model parameter that is only observed indirectly through noisy data in an ill-posed inverse problem. A key aspect is to formalize the steps of reconstruction and task evaluation as appropriate estimators (non-randomized decision rules) in statistical estimation problems. The implementation uses (deep) neural networks to provide a differentiable parametrization of the family of estimators for both steps. These networks are combined and jointly trained against suitable supervised training data to minimize a joint differentiable loss function, resulting in an end-to-end task-adapted reconstruction method. The suggested framework is generic yet adaptable, with a plug-and-play structure for adjusting both the inverse problem and the task at hand. More precisely, the data model (forward operator and statistical model of the noise) associated with the inverse problem is exchangeable, e.g., by using a neural network architecture given by a learned iterative method. Furthermore, any task that is encodable as a trainable neural network can be used. The approach is demonstrated on joint tomographic image reconstruction and classification, and on joint tomographic image reconstruction and segmentation.
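
    The end-to-end coupling of the two estimators can be illustrated with a short, hedged PyTorch sketch. The network shapes, the MSE/cross-entropy losses, and the weighting parameter alpha below are illustrative assumptions, not the paper's architecture; the point is only that the reconstruction network and the task network share one differentiable loss and one optimizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: `recon` maps measured data y (dim 128) to an
# image estimate (16x16 = 256 pixels); `task` maps the image to 10 logits.
recon = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
task = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 10))

opt = torch.optim.Adam(list(recon.parameters()) + list(task.parameters()), lr=1e-3)
alpha = 0.5  # assumed weighting between reconstruction and task losses

def training_step(y, x_true, label):
    x_hat = recon(y)            # step 1: reconstruction estimator
    logits = task(x_hat)        # step 2: task estimator (here: classification)
    loss = (alpha * F.mse_loss(x_hat, x_true)
            + (1 - alpha) * F.cross_entropy(logits, label))
    opt.zero_grad()
    loss.backward()             # gradients flow through both networks jointly
    opt.step()
    return loss.item()

# Usage with dummy tensors:
y = torch.randn(8, 128); x = torch.randn(8, 256); lbl = torch.randint(0, 10, (8,))
print(training_step(y, x, lbl))
```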

    Detecting fraud: Utilizing new technology to advance the audit profession


    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning-engine practice and of the integration strategies for Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions, and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project, and although it was written in close contact with the requirements of the project, it provides an overview wide enough to serve as a state of the art in integration strategies between CBR and MBR technologies.
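
    To make the CBR side concrete, here is a minimal, hedged Python sketch of the retrieve step of the classic CBR cycle applied to an emergency-management case base. The case schema, feature names, and similarity measure are invented for illustration and are not drawn from the RIMSAT deliverable.

```python
import math

# Hypothetical case base: (feature vector, recorded solution) pairs.
case_base = [
    ({"severity": 3, "casualties": 0, "hazmat": 1}, "evacuate_zone"),
    ({"severity": 5, "casualties": 4, "hazmat": 0}, "mass_casualty_plan"),
    ({"severity": 2, "casualties": 0, "hazmat": 0}, "monitor_and_contain"),
]

def similarity(a, b):
    # Inverse Euclidean distance over the shared numeric features.
    d = math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return 1.0 / (1.0 + d)

def retrieve(query):
    # Retrieve step of the CBR cycle: return the most similar past case.
    # The reuse/revise steps (where MBR can validate the proposal against
    # a domain model) would follow in an integrated CBR/MBR system.
    return max(case_base, key=lambda case: similarity(query, case[0]))

_, solution = retrieve({"severity": 4, "casualties": 1, "hazmat": 1})
print(solution)  # candidate solution to adapt to the new incident
```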

    Use of a Bayesian belief network to predict the impacts of commercializing non-timber forest products on livelihoods

    Commercialization of non-timber forest products (NTFPs) has been widely promoted as a means of sustainably developing tropical forest resources, in a way that promotes forest conservation while supporting rural livelihoods. However, in practice, NTFP commercialization has often failed to deliver the expected benefits. Progress in analyzing the causes of such failure has been hindered by the lack of a suitable framework for the analysis of NTFP case studies, and by the lack of predictive theory. We address these needs by developing a probabilistic model based on a livelihood framework, enabling the impact of NTFP commercialization on livelihoods to be predicted. The framework considers five types of capital asset needed to support livelihoods: natural, human, social, physical, and financial. Commercialization of NTFPs is represented in the model as the conversion of one form of capital asset into another, which is influenced by a variety of socio-economic, environmental, and political factors. Impacts on livelihoods are determined by the availability of the five types of assets following commercialization. The model, implemented as a Bayesian Belief Network, was tested using data from participatory research into 19 NTFP case studies undertaken in Mexico and Bolivia. The model provides a novel tool for diagnosing the causes of success and failure in NTFP commercialization, and can be used to explore the potential impacts of policy options and other interventions on livelihoods. The potential value of this approach for the development of NTFP theory is discussed.
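
    A Bayesian belief network of this kind can be queried by straightforward enumeration when it is small. The following hedged Python sketch hand-rolls a two-layer miniature of such a model: a market-access factor and a resource factor driving financial and natural capital, which in turn drive a livelihood-benefit outcome. All node names and probabilities are illustrative assumptions, not the calibrated values from the 19 case studies.

```python
from itertools import product

# Illustrative CPTs over binary nodes; all numbers are made up.
p_market = 0.6                               # P(market_access)
p_resource = 0.7                             # P(resource_abundant)
p_financial = {True: 0.8, False: 0.3}        # P(fin_capital | market_access)
p_natural = {True: 0.9, False: 0.2}          # P(nat_capital | resource_abundant)
p_benefit = {(True, True): 0.9, (True, False): 0.5,
             (False, True): 0.4, (False, False): 0.1}  # P(benefit | fin, nat)

def bern(p, v):
    """P(var = v) given P(var = True) = p."""
    return p if v else 1.0 - p

def joint(m, r, f, n, b):
    return (bern(p_market, m) * bern(p_resource, r)
            * bern(p_financial[m], f) * bern(p_natural[r], n)
            * bern(p_benefit[(f, n)], b))

# Query by enumeration: P(benefit | market_access = False), i.e. the
# predicted livelihood impact when an intervention removes market access.
num = sum(joint(False, r, f, n, True) for r, f, n in product([True, False], repeat=3))
den = sum(joint(False, r, f, n, b) for r, f, n, b in product([True, False], repeat=4))
print(num / den)
```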

    Measuring and improving community resilience: a Fuzzy Logic approach

    Due to the increasing frequency of natural and man-made disasters worldwide, the scientific community has paid considerable attention to the concept of resilience engineering in recent years. Authorities and decision-makers, for their part, have been focusing their efforts on developing strategies that can help increase community resilience to different types of extreme events. Since it is often impossible to prevent every risk, the focus is on adapting to and managing risks in ways that minimize impacts on communities (e.g., humans and other systems). Several resilience strategies have been proposed in the literature to reduce disaster risk and improve community resilience. In general, resilience assessment is challenging due to the uncertainty and unavailability of the data necessary for the estimation process. This paper proposes a Fuzzy Logic method for quantifying community resilience. The methodology is based on the PEOPLES framework, an indicator-based hierarchical framework that covers all aspects of a community. A fuzzy-based approach is implemented to quantify the PEOPLES indicators using descriptive knowledge instead of hard data, accounting also for the uncertainties involved in the analysis. To demonstrate the applicability of the methodology, data regarding the functionality of the city of San Francisco before and after the Loma Prieta earthquake are used to obtain a resilience index for the Physical Infrastructure dimension of the PEOPLES framework. The results show that the methodology can provide good estimates of community resilience despite the uncertainty of the indicators. Hence, it can serve as a decision-support tool to help decision-makers and stakeholders assess and improve the resilience of their communities.
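
    The core fuzzy machinery (fuzzification of an indicator score into linguistic terms, weighted aggregation, and defuzzification to a crisp index) can be sketched briefly. In this hedged Python sketch, the triangular membership functions, the three linguistic terms, the indicator names, and the weights are illustrative assumptions rather than the paper's calibrated PEOPLES setup.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms over a normalised 0..1 indicator scale (assumed shapes).
TERMS = {"low": (0.0, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}
CENTROIDS = {"low": 0.17, "medium": 0.50, "high": 0.83}  # representative values

def fuzzify(score):
    return {term: tri(score, *abc) for term, abc in TERMS.items()}

# Hypothetical Physical Infrastructure indicators: (descriptive score, weight).
indicators = {"water_supply": (0.8, 0.40), "transport": (0.5, 0.35), "power": (0.3, 0.25)}

aggregate = {term: 0.0 for term in TERMS}
for score, weight in indicators.values():
    for term, mu in fuzzify(score).items():
        aggregate[term] += weight * mu  # weighted fuzzy aggregation

# Centroid defuzzification yields a crisp resilience index in [0, 1].
index = sum(aggregate[t] * CENTROIDS[t] for t in TERMS) / sum(aggregate.values())
print(round(index, 3))
```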