Parameter Learning of Logic Programs for Symbolic-Statistical Modeling
We propose a logical/mathematical framework for statistical parameter
learning of parameterized logic programs, i.e. definite clause programs
containing probabilistic facts with a parameterized distribution. It extends
the traditional least Herbrand model semantics in logic programming to
distribution semantics, a possible-world semantics with a probability
distribution which is unconditionally applicable to arbitrary logic programs
including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM
algorithm, the graphical EM algorithm, that runs for a class of parameterized
logic programs representing sequential decision processes where each decision
is exclusive and independent. It runs on a new data structure called support
graphs describing the logical relationship between observations and their
explanations, and learns parameters by computing inside and outside probabilities
generalized for logic programs. The complexity analysis shows that when
combined with OLDT search for all explanations for observations, the graphical
EM algorithm, despite its generality, has the same time complexity as existing
EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside
algorithm for PCFGs, and the one for singly connected Bayesian networks that
have been developed independently in each research field. Learning experiments
with PCFGs using two corpora of moderate size indicate that the graphical EM
algorithm can significantly outperform the Inside-Outside algorithm.
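As a rough illustration of the EM update behind this approach, the following Python sketch (all switch and observation names are hypothetical) enumerates each observation's alternative explanations explicitly rather than sharing substructure through support graphs, so it shows only the expectation and maximization steps, not the inside-outside dynamic programming that gives the graphical EM algorithm its efficiency.

```python
from collections import defaultdict

# Toy PRISM-style setup (all names hypothetical): each observation has a set of
# alternative explanations, and each explanation is a list of switch outcomes
# (switch, value). Explanations are assumed exclusive and switch trials independent.

observations = {
    "obs1": [[("s1", "a"), ("s2", "x")], [("s1", "b")]],
    "obs2": [[("s1", "a"), ("s2", "y")], [("s1", "b"), ("s2", "x")]],
}

# Initial parameters: one distribution P(switch = value) per switch.
theta = {
    "s1": {"a": 0.5, "b": 0.5},
    "s2": {"x": 0.5, "y": 0.5},
}

def expl_prob(expl):
    """Probability of one explanation under the current parameters."""
    p = 1.0
    for sw, val in expl:
        p *= theta[sw][val]
    return p

for _ in range(100):                                    # EM iterations
    counts = defaultdict(lambda: defaultdict(float))    # E-step: expected switch counts
    for expls in observations.values():
        inside = sum(expl_prob(e) for e in expls)       # probability of the observation
        for e in expls:
            w = expl_prob(e) / inside                   # posterior weight of this explanation
            for sw, val in e:
                counts[sw][val] += w
    for sw in theta:                                    # M-step: renormalize per switch
        norm = sum(counts[sw].values())
        if norm > 0:
            for val in theta[sw]:
                theta[sw][val] = counts[sw][val] / norm

print(theta)
```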
The 1990 progress report and future plans
This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development and to fielded NASA applications, particularly those applications that are enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.
Task adapted reconstruction for inverse problems
The paper considers the problem of performing a task defined on a model
parameter that is only observed indirectly through noisy data in an ill-posed
inverse problem. A key aspect is to formalize the steps of reconstruction and
task as appropriate estimators (non-randomized decision rules) in statistical
estimation problems. The implementation makes use of (deep) neural networks to
provide a differentiable parametrization of the family of estimators for both
steps. These networks are combined and jointly trained against suitable
supervised training data in order to minimize a joint differentiable loss
function, resulting in an end-to-end task adapted reconstruction method. The
suggested framework is generic, yet adaptable, with a plug-and-play structure
for adjusting both the inverse problem and the task at hand. More precisely,
the data model (forward operator and statistical model of the noise) associated
with the inverse problem is exchangeable, e.g., by using neural network
architecture given by a learned iterative method. Furthermore, any task that is
encodable as a trainable neural network can be used. The approach is
demonstrated on joint tomographic image reconstruction and classification, and
on joint tomographic image reconstruction and segmentation.
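A minimal sketch of the end-to-end idea is given below (the architectures, loss weighting, and dummy data are illustrative assumptions, not the networks used in the paper): a reconstruction network maps indirect noisy data to a parameter estimate, a task network maps that estimate to the task output, and both are trained jointly against a weighted combination of reconstruction and task losses so that gradients flow through both steps.

```python
import torch
import torch.nn as nn

class ReconNet(nn.Module):                 # placeholder reconstruction estimator
    def __init__(self, m, n):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, n))
    def forward(self, y):
        return self.net(y)

class TaskNet(nn.Module):                  # placeholder task estimator (here: classification)
    def __init__(self, n, num_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, num_classes))
    def forward(self, x):
        return self.net(x)

recon, task = ReconNet(m=32, n=64), TaskNet(n=64, num_classes=3)
opt = torch.optim.Adam(list(recon.parameters()) + list(task.parameters()), lr=1e-3)
recon_loss, task_loss = nn.MSELoss(), nn.CrossEntropyLoss()
alpha = 0.5                                # trade-off between reconstruction and task

# Dummy supervised data: y indirect noisy data, x ground-truth parameter, t task label.
y = torch.randn(16, 32)
x = torch.randn(16, 64)
t = torch.randint(0, 3, (16,))

for _ in range(100):
    opt.zero_grad()
    x_hat = recon(y)                       # reconstruction step
    t_hat = task(x_hat)                    # task step applied to the reconstruction
    loss = alpha * recon_loss(x_hat, x) + (1 - alpha) * task_loss(t_hat, t)
    loss.backward()                        # gradients flow through both networks
    opt.step()
```

Setting alpha to 1 recovers a purely reconstruction-driven training, while alpha close to 0 trains the reconstruction only insofar as it helps the downstream task, which is the trade-off the plug-and-play structure is meant to expose.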
CBR and MBR techniques: review for an application in the emergencies domain
The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and the integration strategies of Case Based Reasoning and Model Based Reasoning that will be used in the design and development of the RIMSAT system.
RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to:
a. Provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions.
b. Enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location.
In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations.
This document is part of a deliverable for the RIMSAT project, and although it was written in close contact with the project's requirements, it provides an overview broad enough to serve as a state of the art on integration strategies between CBR and MBR technologies.
Use of a Bayesian belief network to predict the impacts of commercializing non-timber forest products on livelihoods
Commercialization of non-timber forest products (NTFPs) has been widely promoted as a means of sustainably developing tropical forest resources, in a way that promotes forest conservation while supporting rural livelihoods. However, in practice, NTFP commercialization has often failed to deliver the expected benefits. Progress in analyzing the causes of such failure has been hindered by the lack of a
suitable framework for the analysis of NTFP case studies, and by the lack of predictive theory. We address
these needs by developing a probabilistic model based on a livelihood framework, enabling the impact of
NTFP commercialization on livelihoods to be predicted. The framework considers five types of capital
asset needed to support livelihoods: natural, human, social, physical, and financial. Commercialization of
NTFPs is represented in the model as the conversion of one form of capital asset into another, which is
influenced by a variety of socio-economic, environmental, and political factors. Impacts on livelihoods are
determined by the availability of the five types of assets following commercialization. The model,
implemented as a Bayesian Belief Network, was tested using data from participatory research into 19 NTFP
case studies undertaken in Mexico and Bolivia. The model provides a novel tool for diagnosing the causes
of success and failure in NTFP commercialization, and can be used to explore the potential impacts of
policy options and other interventions on livelihoods. The potential value of this approach for the
development of NTFP theory is discussed.
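The following is a minimal sketch of the kind of inference such a model supports (the structure, variables, and probabilities are illustrative assumptions, not the published network): commercialization influences two capital assets, which jointly determine a livelihood outcome, and the predicted impact is read off as a conditional probability.

```python
from itertools import product

# P(Commercialization succeeds)
p_c = {True: 0.6, False: 0.4}
# P(Financial capital improves | Commercialization)
p_f = {True: {True: 0.8, False: 0.2}, False: {True: 0.3, False: 0.7}}
# P(Natural capital maintained | Commercialization)
p_n = {True: {True: 0.5, False: 0.5}, False: {True: 0.9, False: 0.1}}
# P(Livelihood improves | Financial, Natural)
p_l = {
    (True, True): {True: 0.9, False: 0.1},
    (True, False): {True: 0.6, False: 0.4},
    (False, True): {True: 0.4, False: 0.6},
    (False, False): {True: 0.1, False: 0.9},
}

def joint(c, f, n, l):
    """Joint probability of one full assignment under the network factorization."""
    return p_c[c] * p_f[c][f] * p_n[c][n] * p_l[(f, n)][l]

def query_livelihood(evidence):
    """P(Livelihood improves | evidence) by brute-force enumeration."""
    num = den = 0.0
    for c, f, n, l in product([True, False], repeat=4):
        assignment = {"C": c, "F": f, "N": n, "L": l}
        if any(assignment[k] != v for k, v in evidence.items()):
            continue
        p = joint(c, f, n, l)
        den += p
        if l:
            num += p
    return num / den

print(query_livelihood({"C": True}))    # predicted impact if commercialization succeeds
print(query_livelihood({"C": False}))   # predicted impact if it does not
```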
State-of-the-art on research and applications of machine learning in the building life cycle
Fueled by big data, powerful and affordable computing resources, and advanced algorithms, machine learning has been explored and applied to buildings research over the past decades and has demonstrated its potential to enhance building performance. This study systematically surveyed how machine learning has been applied at different stages of the building life cycle. By conducting a literature search on the Web of Knowledge platform, we found 9579 papers in this field and selected 153 papers for an in-depth review. The number of published papers is increasing year by year, with a focus on building design, operation, and control. However, no study was found using machine learning in building commissioning. There are successful pilot studies on fault detection and diagnosis of HVAC equipment and systems, load prediction, energy baseline estimation, load shape clustering, occupancy prediction, and learning occupant behaviors and energy use patterns. None of the existing studies has been broadly adopted by the building industry, due to common challenges including (1) the lack of large-scale labeled data to train and validate models, (2) the lack of model transferability, which prevents a model trained on one data-rich building from being used in another building with limited data, (3) the lack of strong justification of the costs and benefits of deploying machine learning, and (4) performance that may not be reliable and robust for the stated goals, as a method might work for some buildings but not generalize to others. Findings from the study can inform future machine learning research to improve occupant comfort, energy efficiency, demand flexibility, and resilience of buildings, as well as inspire young researchers in the field to explore multidisciplinary approaches that integrate building science, computing science, data science, and social science.
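As a small illustration of one application named above, load shape clustering, the sketch below groups synthetic daily load profiles into typical shapes with k-means (the data and the algorithm choice are assumptions for illustration; the review does not prescribe a particular method).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = np.arange(24)
office = 10 + 40 * np.exp(-((hours - 13) ** 2) / 18)   # daytime-peaking profile
evening = 15 + 30 * np.exp(-((hours - 20) ** 2) / 8)    # evening-peaking profile
profiles = np.vstack(
    [office + rng.normal(0, 2, 24) for _ in range(50)]
    + [evening + rng.normal(0, 2, 24) for _ in range(50)]
)

# Normalize each daily profile so clustering captures shape rather than magnitude.
shapes = profiles / profiles.sum(axis=1, keepdims=True)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shapes)
print(np.bincount(kmeans.labels_))        # size of each load-shape cluster
print(kmeans.cluster_centers_.shape)      # (2, 24) typical daily shapes
```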
Measuring and improving community resilience: a Fuzzy Logic approach
Due to the increasing frequency of natural and man-made disasters worldwide,
the scientific community has paid considerable attention to the concept of
resilience engineering in recent years. Authorities and decision-makers, on the
other hand, have been focusing their efforts on developing strategies that can
help increase community resilience to different types of extreme events. Since
it is often impossible to prevent every risk, the focus is on adapting and
managing risks in ways that minimize impacts to communities (e.g., humans and
other systems). Several resilience strategies have been proposed in the
literature to reduce disaster risk and improve community resilience. Generally,
resilience assessment is challenging due to uncertainty and unavailability of
data necessary for the estimation process. This paper proposes a Fuzzy Logic
method for quantifying community resilience. The methodology is based on the
PEOPLES framework, an indicator-based hierarchical framework that defines all
aspects of the community. A fuzzy-based approach is implemented to quantify the
PEOPLES indicators using descriptive knowledge instead of hard data, accounting
also for the uncertainties involved in the analysis. To demonstrate the
applicability of the methodology, data regarding the functionality of the city
of San Francisco before and after the Loma Prieta earthquake are used to obtain a
resilience index of the Physical Infrastructure dimension of the PEOPLES
framework. The results show that the methodology can provide good estimates of
community resilience despite the uncertainty of the indicators. Hence, it
serves as a decision-support tool to help decision-makers and stakeholders
assess and improve the resilience of their communities.
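The sketch below illustrates the general fuzzy approach on two hypothetical indicators (the indicators, linguistic terms, and rules are assumptions for illustration, not the PEOPLES indicator set): descriptive ratings are fuzzified, combined through simple min/max rules, and defuzzified into a single index on a 0-100 scale.

```python
import numpy as np

scale = np.linspace(0, 100, 501)
# Membership functions for the linguistic terms "low", "medium", "high".
low = np.clip((50 - scale) / 50, 0, 1)                                  # shoulder at 0
medium = np.clip(np.minimum((scale - 25) / 25, (75 - scale) / 25), 0, 1)
high = np.clip((scale - 50) / 50, 0, 1)                                 # shoulder at 100

def fuzzify(value):
    """Degree of membership of a crisp indicator rating in each linguistic term."""
    i = int(np.argmin(np.abs(scale - value)))
    return {"low": low[i], "medium": medium[i], "high": high[i]}

# Descriptive (expert-style) ratings of two hypothetical indicators.
infrastructure = fuzzify(35)   # e.g. post-event functionality of lifelines
preparedness = fuzzify(70)     # e.g. quality of emergency planning

# Toy rule base: resilience is high only if both indicators are high,
# medium if either is medium, low if either is low.
r_high = min(infrastructure["high"], preparedness["high"])
r_medium = max(infrastructure["medium"], preparedness["medium"])
r_low = max(infrastructure["low"], preparedness["low"])

# Mamdani-style aggregation followed by centroid defuzzification.
aggregated = np.maximum.reduce([
    np.minimum(r_low, low),
    np.minimum(r_medium, medium),
    np.minimum(r_high, high),
])
resilience_index = float(np.sum(aggregated * scale) / np.sum(aggregated))
print(round(resilience_index, 1))
```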