Every normal logic program has a 2-valued semantics: theory, extensions, applications, implementations
Thesis presented within the Doctoral Programme in Informatics, in partial fulfilment of the requirements for the degree of Doctor in Informatics.
After a very brief introduction to the general subject of Knowledge Representation and Reasoning with Logic Programs, we analyse the syntactic structure of a logic program and how it can influence the semantics. We outline the important properties of a 2-valued semantics for Normal Logic Programs, proceed to define the new Minimal Hypotheses semantics with those properties, and explore how it can be used to benefit some knowledge representation and reasoning mechanisms.
The main original contributions of this work, whose connections will be detailed in
the sequel, are:
• The Layering for generic graphs which we then apply to NLPs yielding the Rule
Layering and Atom Layering — a generalization of the stratification notion;
• The Full shifting transformation of Disjunctive Logic Programs into (highly non-stratified) NLPs;
• The Layer Support — a generalization of the classical notion of support;
• The Brave Relevance and Brave Cautious Monotony properties of a 2-valued semantics;
• The notions of Relevant Partial Knowledge Answer to a Query and Locally Consistent
Relevant Partial Knowledge Answer to a Query;
• The Layer-Decomposable Semantics family — the family of semantics that reflect
the above mentioned Layerings;
• The Approved Models argumentation approach to semantics;
• The Minimal Hypotheses 2-valued semantics for NLP — a member of the Layer-Decomposable Semantics family, rooted in a minimization-of-positive-hypotheses approach;
• The definition and implementation of the Answer Completion mechanism in XSB Prolog — an essential component for ensuring that XSB's WAM fully complies with the Well-Founded Semantics;
• The definition of the Inspection Points mechanism for Abductive Logic Programs;
• An implementation of the Inspection Points workings within the Abdual system [21].
We recommend reading the chapters in this thesis in the sequence they appear. However, if the reader is not interested in all the subjects, or is keener on some topics than others, we provide alternative reading paths as shown below.
1-2-3-4-5-6-7-8-9-12 Definition of the Layer-Decomposable Semantics family and the Minimal Hypotheses semantics (1 and 2 are optional)
3-6-7-8-10-11-12 All main contributions – assumes the reader is familiar with logic programming topics
3-4-5-10-11-12 Focus on abductive reasoning and applications.
Funded by FCT-MCTES (Fundação para a Ciência e Tecnologia do Ministério da Ciência, Tecnologia e Ensino Superior), grant no. SFRH/BD/28761/2006.
Logic Programming with Default, Weak and Strict Negations
This paper treats logic programming with three kinds of negation: default,
weak and strict negations. A 3-valued logic model theory is discussed for logic programs with these three kinds of negation. A proof procedure is constructed for the negations so that its soundness is guaranteed in terms of the 3-valued logic model theory.
Comment: 14 pages, to appear in Theory and Practice of Logic Programming (TPLP).
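Three-valued model theories of this kind are typically built over Kleene's strong three-valued connectives, where the third truth value stands for "undefined". The sketch below shows those basic connectives only; it is an illustration of the underlying logic, not the paper's semantics for default, weak, and strict negation.

```python
# Kleene's strong 3-valued logic: values True, False, and U ("undefined").
# Illustrative only; the paper's treatment of the three negations is richer.
U = "undefined"

def k_not(a):
    # Negation flips True/False and leaves 'undefined' fixed.
    if a is True:
        return False
    if a is False:
        return True
    return U

def k_and(a, b):
    # False dominates; otherwise 'undefined' propagates.
    if a is False or b is False:
        return False
    if a is U or b is U:
        return U
    return True

def k_or(a, b):
    # True dominates; otherwise 'undefined' propagates.
    if a is True or b is True:
        return True
    if a is U or b is U:
        return U
    return False
```

Note how `k_and(False, U)` is decided (False) even though one argument is undefined; soundness results for such procedures rely on exactly this monotone behaviour.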
Boolean analysis identifies CD38 as a biomarker of aggressive localized prostate cancer.
The introduction of serum Prostate Specific Antigen (PSA) testing nearly 30 years ago has been associated with a significant shift towards localized disease and decreased deaths due to prostate cancer. Recognition that PSA testing has caused overdiagnosis and overtreatment of prostate cancer has generated considerable controversy over its value, and has spurred efforts to identify prognostic biomarkers to distinguish patients who need treatment from those who can be observed. Recent studies show that cancer is heterogeneous and forms a hierarchy of tumor cell populations. We developed a method of identifying prostate cancer differentiation states related to androgen signaling using Boolean logic. Using gene expression data, we identified two markers, CD38 and ARG2, that group prostate cancer into three differentiation states. Cancers with CD38-/ARG2- expression patterns, corresponding to an undifferentiated state, had significantly lower 10-year recurrence-free survival compared to the most differentiated group (CD38+/ARG2+). We carried out immunohistochemical (IHC) staining for these two markers in a single-institution (Stanford; n = 234) cohort and a multi-institution (Canary; n = 1326) cohort. IHC staining for CD38 and ARG2 in the Stanford cohort demonstrated that combined expression of CD38 and ARG2 was prognostic. In the Canary cohort, low CD38 protein expression by IHC was significantly associated with recurrence-free survival (RFS), seminal vesicle invasion (SVI), and extra-capsular extension (ECE) in univariable analysis. In multivariable analysis, ARG2 and CD38 IHC staining results were not independently associated with RFS, overall survival, or disease-specific survival after adjusting for other factors including SVI, ECE, Gleason score, pre-operative PSA, and surgical margins.
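The two-marker grouping described above amounts to a simple Boolean partition of tumours by CD38 and ARG2 status. The sketch below illustrates that partition only; the marker names come from the abstract, while the state labels and the encoding of IHC status as booleans are assumptions for illustration.

```python
# Hypothetical sketch of the CD38/ARG2 Boolean grouping into three
# differentiation states. Marker names are from the abstract; the
# boolean encoding of IHC positivity and the labels are illustrative.
def differentiation_state(cd38_positive: bool, arg2_positive: bool) -> str:
    if cd38_positive and arg2_positive:
        return "differentiated"    # CD38+/ARG2+: best 10-year RFS group
    if not cd38_positive and not arg2_positive:
        return "undifferentiated"  # CD38-/ARG2-: worst 10-year RFS group
    return "intermediate"          # mixed marker status
```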
Reusable rocket engine turbopump health monitoring system, part 3
Analysis of degradation mechanisms and sensor identification/selection resulted in a list of degradation modes and a list of sensors utilized in the diagnosis of these degradation modes. The sensor list is divided into primary and secondary indicators of the corresponding degradation modes. The signal conditioning requirements are discussed, describing the methods of producing the Space Shuttle Main Engine (SSME) post-hot-fire test data to be utilized by the Health Monitoring System. Development of the diagnostic logic and algorithms is also presented. The knowledge engineering approach, as utilized, includes the knowledge acquisition effort, characterization of the expert's problem-solving strategy, conceptually defining the form of the applicable knowledge base and rule base, and identifying an appropriate inferencing mechanism for the problem domain. The resulting logic flow graphs detail the diagnosis/prognosis procedure as followed by the experts. The nature and content of required support data and databases is also presented. The distinction between deep and shallow types of knowledge is identified. Computer coding of the Health Monitoring System is shown to follow the logical inferencing of the logic flow graphs/algorithms.
The Value of Evidence-Based Computer Simulation of Oral Health Outcomes for Management Analysis of the Alaska Dental Health Aide Program
Objectives: To create an evidence‐based research tool to inform and guide policy and program
managers as they develop and deploy new service delivery models for oral disease prevention and
intervention.
Methods: A village-level discrete event simulation was developed to project outcomes associated with different service delivery patterns. Evidence-based outcomes were associated with dental health aide activities, and projected indicators (DMFT, F+ST, T-health, SiC, CPI, ECC) served as proxies for oral health outcomes. Model runs representing the planned program implementation, a more intensive staffing scenario, and a more robust prevention scenario generated 20-year projections of clinical indicators; graphs and tallies were analyzed for trends and differences.
Results: Outcomes associated with alternative patterns of service delivery indicate there is
potential for substantial improvement in clinical outcomes with modest program changes. Not all
segments of the population derive equal benefit when program variables are altered. Children benefit
more from increased prevention, while adults benefit more from intensive staffing.
Conclusions: Evidence-based simulation is a useful tool to analyze the impact of changing program variables on program outcome measures. This simulation informs dental managers of the clinical outcomes associated with policy and service delivery variables. Simulation tools can assist public health managers in analyzing and understanding the relationship between their policy decisions and long-term clinical outcomes.
Funded by The Ford Foundation.
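The village-level model itself is not reproduced here, but discrete event simulations of this kind share a common skeleton: a time-ordered event queue popped until a horizon, with handlers that may schedule further events. The sketch below shows only that generic pattern; the event names, the recurrence interval, and the horizon are invented for illustration and are not the authors' model.

```python
import heapq

def run_des(initial_events, horizon):
    """Minimal discrete-event loop: process (time, name) events in time
    order until the horizon. Purely illustrative of the DES technique;
    event names and scheduling rules here are hypothetical."""
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue and queue[0][0] <= horizon:
        time, name = heapq.heappop(queue)
        log.append((time, name))
        if name == "prevention_visit":  # hypothetical recurring activity
            heapq.heappush(queue, (time + 1.0, "prevention_visit"))
    return log

# One recurring prevention visit plus a single exam, over a 3-year horizon.
trace = run_des([(0.0, "prevention_visit"), (0.5, "exam")], horizon=3.0)
```

In a real model, each handler would also update state (e.g. per-person clinical indicators), which is what the 20-year projections in the study tally up.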
On the use of testability measures for dependability assessment
Program “testability” is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure-free test executions (a common need for software subject to “ultra high reliability” requirements), measures of testability can, in theory, be used to draw inferences on program correctness. We rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results. We give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than that given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results: the probability of program correctness and the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that this is more useful than deriving a classical “confidence level”. We also show that a high testability is not an unconditionally desirable property for a program. In particular, for programs complex enough that they are unlikely to be completely fault free, increasing testability may produce a program which will be less trustworthy, even after successful testing.
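The Bayesian step alluded to above can be sketched with standard Bayes' rule: given a prior probability that the program is correct and a testability value (the probability that a faulty program fails on a random test), a run of failure-free tests updates the probability of correctness. The function and parameter names below are generic illustrations, not the paper's notation or exact model.

```python
def posterior_correctness(prior: float, testability: float, n_tests: int) -> float:
    """P(program correct | n failure-free tests), by Bayes' rule.
    A correct program passes every test; a faulty one passes each
    independent test with probability (1 - testability)."""
    pass_if_faulty = (1.0 - testability) ** n_tests
    return prior / (prior + (1.0 - prior) * pass_if_faulty)

# Higher testability makes a failure-free test series more convincing:
low  = posterior_correctness(prior=0.5, testability=0.01, n_tests=100)
high = posterior_correctness(prior=0.5, testability=0.10, n_tests=100)
```

This also makes the abstract's caveat concrete: the inference gain comes entirely from the `(1 - testability)**n` term, so it says nothing about how trustworthy a still-faulty high-testability program is in operation.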
Evolution of shuttle avionics redundancy management/fault tolerance
The challenge of providing redundancy management (RM) and fault tolerance to meet the Shuttle Program requirements of fail operational/fail safe for the avionics systems was complicated by the critical program constraints of weight, cost, and schedule. The basic and sometimes false effectiveness of less-than-pure RM designs is addressed. Evolution of the multiple input selection filter (the heart of the RM function) is discussed with emphasis on the subtle interactions of the flight control system that were found to be potentially catastrophic. Several other general RM development problems are discussed, with particular emphasis on the inertial measurement unit RM, indicative of the complexity of managing that three-string system and its critical interfaces with the guidance and control systems.
Logic-Based Specification Languages for Intelligent Software Agents
The research field of Agent-Oriented Software Engineering (AOSE) aims to find
abstractions, languages, methodologies and toolkits for modeling, verifying,
validating and prototyping complex applications conceptualized as Multiagent
Systems (MASs). A very lively research sub-field studies how formal methods can
be used for AOSE. This paper presents a detailed survey of six logic-based
executable agent specification languages that have been chosen for their
potential to be integrated in our ARPEGGIO project, an open framework for
specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the
IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each
executable language, the logic foundations are described and an example of use
is shown. A comparison of the six languages and a survey of similar approaches
complete the paper, together with considerations of the advantages of using
logic-based languages in MAS modeling and prototyping.
Comment: 67 pages, 1 table, 1 figure. Accepted for publication by the journal "Theory and Practice of Logic Programming", volume 4, Maurice Bruynooghe, Editor-in-Chief.