Minimal Undefinedness for Fuzzy Answer Sets
Fuzzy Answer Set Programming (FASP) combines the non-monotonic reasoning typical of Answer Set Programming with the capability of Fuzzy Logic to deal with imprecise information and paraconsistent reasoning. In the context of paraconsistent reasoning, the fundamental principle of minimal undefinedness states that truth degrees close to 0 and 1 should be preferred to those close to 0.5, to minimize the ambiguity of the scenario. The aim of this paper is to enforce such a principle in FASP through the minimization of a measure of undefinedness. Algorithms that minimize the undefinedness of fuzzy answer sets are presented and implemented.
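As a rough Python sketch of the idea, and not the paper's actual algorithm or measure: assume the undefinedness of a truth degree x in [0, 1] is 1 - |2x - 1|, which vanishes at the classical values 0 and 1 and peaks at 0.5; the minimal-undefinedness principle then prefers, among candidate fuzzy interpretations, one with the smallest total undefinedness. The candidate interpretations below are made up for illustration.

    # Hypothetical undefinedness measure (an assumption for illustration, not
    # necessarily the paper's): 0 at the classical degrees 0 and 1, maximal at 0.5.
    def undefinedness(degree):
        return 1.0 - abs(2.0 * degree - 1.0)

    def total_undefinedness(interpretation):
        # Sum of per-atom undefinedness over a fuzzy interpretation.
        return sum(undefinedness(v) for v in interpretation.values())

    # Illustrative candidate interpretations (not derived from a real FASP program):
    candidates = [
        {"a": 0.5, "b": 0.5},   # maximally ambiguous
        {"a": 0.9, "b": 0.1},   # close to classical truth values
    ]
    print(min(candidates, key=total_undefinedness))  # -> {'a': 0.9, 'b': 0.1}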
Externally Supported Models for Efficient Computation of Paracoherent Answer Sets
Answer Set Programming (ASP) is a well-established formalism for nonmonotonic reasoning. While incoherence, the non-existence of answer sets for some programs, is an important feature of ASP, it has frequently been criticised and indeed has some disadvantages, especially for query answering. Paracoherent semantics have been suggested as a remedy, which extend the classical notion of answer sets to draw meaningful conclusions also from incoherent programs. In this paper we present an alternative characterization of the two major paracoherent semantics in terms of (extended) externally supported models. This definition uses a transformation of ASP programs that is more parsimonious than the classic epistemic transformation used in recent implementations. A performance comparison carried out on benchmarks from ASP competitions shows that the usage of the new transformation brings about performance improvements that are independent of the underlying algorithms.
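The externally supported transformation itself is not reproduced here, but the problem it addresses is easy to demonstrate: a program with an odd negative loop has no answer set at all. The Python sketch below brute-forces the answer sets of a tiny ground program via the Gelfond-Lifschitz reduct and finds none; this is exactly the incoherence that paracoherent semantics are designed to handle.

    from itertools import chain, combinations

    # A rule is (head, positive_body, negative_body); the single rule below is
    # "p :- not p.", a textbook incoherent program.
    rules = [("p", [], ["p"])]
    atoms = {"p"}

    def powerset(s):
        s = list(s)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    def least_model_of_reduct(candidate):
        # Gelfond-Lifschitz reduct w.r.t. the candidate, then the least model of
        # the resulting positive program by fixpoint iteration.
        reduct = [(h, pb) for (h, pb, nb) in rules if not (set(nb) & candidate)]
        model, changed = set(), True
        while changed:
            changed = False
            for h, pb in reduct:
                if set(pb) <= model and h not in model:
                    model.add(h)
                    changed = True
        return model

    answer_sets = [set(c) for c in powerset(atoms)
                   if least_model_of_reduct(set(c)) == set(c)]
    print(answer_sets)  # -> []  (the program is incoherent)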
Unifying Theories of Logics with Undefinedness
A relational approach to the question of how different logics relate formally is described. We consider three three-valued logics, as well as classical and semi-classical logic. A fundamental representation of three-valued predicates is developed in the Unifying Theories of Programming (UTP) framework of Hoare and He. On this foundation, the five logics are encoded semantically as UTP theories. Several fundamental relationships are revealed using theory linking mechanisms, which corroborate results found in the literature, and which have direct applicability to the sound mixing of logics in order to prove facts. The initial development of the fundamental three-valued predicate model, on which the theories are based, is then applied to the novel systems-of-systems specification language CML, in order to reveal proof obligations which bridge a gap that exists between the semantics of CML and the existing semantics of one of its sub-languages, VDM. Finally, a detailed account is given of an envisioned model theory for our proposed structuring, which aims to lift the sentences of the five logics encoded to the second order, allowing them to range over elements of existing UTP theories of computation, such as designs and CSP processes. We explain how this would form a complete treatment of logic interplay that is expressed entirely inside UTP.
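The paper's UTP-theoretic encoding is not reproduced here, but the flavour of the three-valued logics involved can be sketched concretely. The Python snippet below implements the strong Kleene connectives, one common choice of three-valued logic and assumed here purely for illustration, with None standing for the undefined truth value.

    # Strong Kleene three-valued connectives; None stands for "undefined".
    def k_not(a):
        return None if a is None else (not a)

    def k_and(a, b):
        if a is False or b is False:
            return False      # a definite False dominates an undefined operand
        if a is None or b is None:
            return None
        return True

    def k_or(a, b):
        if a is True or b is True:
            return True       # a definite True dominates an undefined operand
        if a is None or b is None:
            return None
        return False

    # (undefined AND False) is False, while (undefined OR False) stays undefined.
    print(k_and(None, False), k_or(None, False))  # -> False None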
Every normal logic program has a 2-valued semantics: theory, extensions, applications, implementations
Work presented within the scope of the Doctoral Programme in Informatics, as a partial requirement for obtaining the degree of Doctor in Informatics. After a very brief introduction to the general subject of Knowledge Representation and Reasoning with Logic Programs, we analyse the syntactic structure of a logic program and how it can influence the semantics. We outline the important properties of a 2-valued semantics for Normal Logic Programs, proceed to define the new Minimal Hypotheses semantics with those properties, and explore how it can be used to benefit some knowledge representation and reasoning mechanisms.
The main original contributions of this work, whose connections will be detailed in
the sequel, are:
• The Layering for generic graphs, which we then apply to NLPs, yielding the Rule
Layering and Atom Layering — a generalization of the stratification notion;
• The Full shifting transformation of Disjunctive Logic Programs into (highly non-stratified) NLPs (see the sketch after this list);
• The Layer Support — a generalization of the classical notion of support;
• The Brave Relevance and Brave Cautious Monotony properties of a 2-valued semantics;
• The notions of Relevant Partial Knowledge Answer to a Query and Locally Consistent
Relevant Partial Knowledge Answer to a Query;
• The Layer-Decomposable Semantics family — the family of semantics that reflect
the above mentioned Layerings;
• The Approved Models argumentation approach to semantics;
• The Minimal Hypotheses 2-valued semantics for NLP — a member of the Layer-Decomposable Semantics family, rooted in the minimization of assumed positive hypotheses;
• The definition and implementation of the Answer Completion mechanism in XSB
Prolog — an essential component to ensure full compliance of XSB's WAM with the
Well-Founded Semantics;
• The definition of the Inspection Points mechanism for Abductive Logic Programs;
• An implementation of the Inspection Points workings within the Abdual system [21].
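As a hedged illustration of the shifting step behind the Full shifting transformation listed above (the thesis applies it to every rule of a disjunctive program; its other details are not reproduced here): each disjunctive rule a1 | ... | an :- body is replaced by n normal rules, one per head atom, that negate the remaining head atoms in the body.

    # Each disjunctive rule  a1 | ... | an :- body  is replaced by the n normal
    # rules  ai :- body, not a1, ..., not a(i-1), not a(i+1), ..., not an.
    def shift_rule(head_atoms, body_literals):
        normal_rules = []
        for a in head_atoms:
            extra = ["not " + b for b in head_atoms if b != a]
            normal_rules.append((a, body_literals + extra))
        return normal_rules

    # Example:  a | b :- c.   becomes   a :- c, not b.   and   b :- c, not a.
    for head, body in shift_rule(["a", "b"], ["c"]):
        print(head + " :- " + ", ".join(body) + ".")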
We recommend reading the chapters in this thesis in the sequence they appear. However,
if the reader is not interested in all the subjects, or is keener on some topics
than others, we provide alternative reading paths as shown below.
1-2-3-4-5-6-7-8-9-12 Definition of the Layer-Decomposable Semantics family and the Minimal Hypotheses semantics (1 and 2 are optional)
3-6-7-8-10-11-12 All main contributions – assumes the reader is familiar with logic programming topics
3-4-5-10-11-12 Focus on abductive reasoning and applications.
FCT-MCTES (Fundação para a Ciência e Tecnologia do Ministério da Ciência, Tecnologia e Ensino Superior) grant no. SFRH/BD/28761/2006.
FCAIR 2012 Formal Concept Analysis Meets Information Retrieval: Workshop co-located with the 35th European Conference on Information Retrieval (ECIR 2013), March 24, 2013, Moscow, Russia
Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification. The area came into being in the early 1980s and has since then spawned over 10,000 scientific publications and a variety of practically deployed tools. FCA allows one to build, from a data table with objects in rows and attributes in columns, a taxonomic data structure called a concept lattice, which can be used for many purposes, especially for Knowledge Discovery and Information Retrieval. The Formal Concept Analysis Meets Information Retrieval (FCAIR) workshop, co-located with the 35th European Conference on Information Retrieval (ECIR 2013), was intended, on the one hand, to attract researchers from the FCA community to a broad discussion of FCA-based research on information retrieval, and, on the other hand, to promote the ideas, models, and methods of FCA in the Information Retrieval community.
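As a small, self-contained illustration of the central construction (not the algorithms used by deployed FCA tools): given a binary context with objects in rows and attributes in columns, the formal concepts, the nodes of the concept lattice, are exactly the pairs (extent, intent) that are closed under the two derivation operators. The brute-force Python sketch below uses a made-up toy context.

    from itertools import chain, combinations

    # A made-up toy context: objects (rows) and their attribute sets (columns).
    context = {
        "duck":  {"flies", "swims"},
        "eagle": {"flies", "hunts"},
        "shark": {"swims", "hunts"},
    }
    attributes = set().union(*context.values())

    def common_attributes(objs):
        return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

    def common_objects(attrs):
        return {o for o, a in context.items() if attrs <= a}

    def powerset(s):
        s = list(s)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    # A formal concept is a pair (extent, intent) closed under both derivations.
    for objs in powerset(context):
        extent = set(objs)
        intent = common_attributes(extent)
        if common_objects(intent) == extent:
            print(sorted(extent), sorted(intent))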
A cognitive exploration of the “non-visual” nature of geometric proofs
Why are Geometric Proofs (Usually) “Non-Visual”? We asked this question as
a way to explore the similarities and differences between diagrams and text (visual
thinking versus language thinking). Traditional text-based proofs are considered
(by many) to be more rigorous than diagrams alone. In this paper we focus on
human perceptual-cognitive characteristics that may encourage textual modes for
proofs because of the ergonomic affordances of text relative to diagrams. We suggest
that visual-spatial perception of physical objects, where an object is perceived
with greater acuity through foveal vision than through peripheral vision, is similar
to attention navigating a conceptual visual-spatial structure. We suggest that attention
has foveal-like and peripheral-like characteristics and that textual modes
appeal to what we refer to here as foveal-focal attention, an extension of prior
work in focused attention.