Neurons and symbols: a manifesto
We discuss the purpose of neural-symbolic integration, including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.
A QBF-based Formalization of Abstract Argumentation Semantics
Supported by the National Research Fund, Luxembourg (LAAMI project) and by the Engineering and Physical Sciences Research Council (EPSRC, UK), grant ref. EP/J012084/1 (SAsSY project).
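The abstract for this item is not reproduced above, so as minimal background on the objects being formalized: in Dung-style abstract argumentation, a stable extension is a conflict-free set of arguments that attacks every argument outside it. Below is a brute-force sketch of that definition only, not the paper's QBF encoding; the example framework is hypothetical.

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Enumerate the stable extensions of the framework (args, attacks).

    attacks is a set of (attacker, target) pairs over the argument set.
    """
    exts = []
    for r in range(len(args) + 1):
        for s in map(set, combinations(sorted(args), r)):
            # conflict-free: no attack between two members of s
            if any((a, b) in attacks for a in s for b in s):
                continue
            # stable: every argument outside s is attacked from within s
            if all(any((a, b) in attacks for a in s) for b in args - s):
                exts.append(s)
    return exts

# hypothetical framework: a and b attack each other, b attacks c
framework = ({"a", "b", "c"}, {("a", "b"), ("b", "a"), ("b", "c")})
print(stable_extensions(*framework))  # two stable extensions: {b} and {a, c}
```

A QBF encoding replaces this exponential enumeration with quantification over subsets, which is what makes semantics beyond the stable one (involving quantifier alternation) expressible.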
A generalized notion of weak interpretability and the corresponding modal logic
Abstract. Dzhaparidze, G., A generalized notion of weak interpretability and the corresponding modal logic, Annals of Pure and Applied Logic 61 (1993) 113-160. A tree Tr(T1,...,Tn) of theories T1,...,Tn is called tolerant if there are consistent extensions T1+,...,Tn+ of T1,...,Tn, where each Ti+ interprets its successors in the tree Tr(T1+,...,Tn+). We consider a propositional language with the following modal formation rule: if Tr is a (finite) tree of formulas, then ◊Tr is a formula, and axiomatically define in this language the decidable logics TLR and TLRω. It is proved that TLR (resp. TLRω) yields exactly the schemata of PA-provable (resp. true) sentences, if ◊Tr(A1,...,An) is understood as (a formalization of) "Tr(PA + A1,...,PA + An) is tolerant". In fact, TLR axiomatizes a considerable fragment of provability logic with quantifiers over Σ1-sentences, and many relations that have been studied in the literature can be expressed in terms of tolerance. We introduce and study two more relations between theories, cointerpretability and cotolerance, which are, in a sense, dual to interpretability and tolerance. Cointerpretability is a characterization of Σ1-conservativity for essentially reflexive theories in terms of translations.
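The central definition from the abstract can be set out in notation (this is only a restatement of the tolerance condition as described above, with Ti+ ranging over consistent extensions of Ti):

```latex
% Tolerance of a tree of theories:
\[
\mathrm{Tr}(T_1,\dots,T_n)\ \text{is tolerant}
\iff
\exists\, \text{consistent } T_1^+ \supseteq T_1,\ \dots,\ T_n^+ \supseteq T_n
\ \text{such that each } T_i^+ \text{ interprets its successors in } \mathrm{Tr}(T_1^+,\dots,T_n^+).
\]
% Arithmetical reading of the modality:
\[
\Diamond\,\mathrm{Tr}(A_1,\dots,A_n)
\quad\text{is read as}\quad
\text{``}\mathrm{Tr}(\mathrm{PA}+A_1,\dots,\mathrm{PA}+A_n)\ \text{is tolerant''}.
\]
```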
An alternative proof method for possibilistic logic and its application to terminological logics
Possibilistic logic, an extension of first-order logic, deals with uncertainty that can be estimated in terms of possibility and necessity measures. Syntactically, this means that a first-order formula is equipped with a possibility degree or a necessity degree that expresses to what extent the formula is possibly or necessarily true. Possibilistic resolution, an extension of the well-known resolution principle, yields a calculus for possibilistic logic which respects the semantics developed for possibilistic logic. A drawback, which possibilistic resolution inherits from classical resolution, is that it may not terminate when applied to formulas belonging to decidable fragments of first-order logic. We therefore propose an alternative proof method for possibilistic logic. The main feature of this method is that it completely abstracts from a concrete calculus and instead uses, as its basic operation, a test for classical entailment. If this test is decidable for some fragment of first-order logic, then possibilistic reasoning is also decidable for this fragment. We then instantiate possibilistic logic with a terminological logic, a decidable subclass of first-order logic that is nevertheless much more expressive than propositional logic. This yields an extension of terminological logics towards the representation of uncertain knowledge that is satisfactory from a semantic as well as an algorithmic point of view.
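The entailment-based idea can be illustrated at the propositional level, where a standard way to realize it is via degree cuts: the necessity degree of φ is the largest α such that the formulas stated with degree at least α classically entail φ. A minimal sketch under that assumption (formulas are modeled as predicates over truth assignments, and the knowledge base is hypothetical; a terminological logic would substitute its own decidable entailment test for the truth-table one):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical propositional entailment, decided by truth-table enumeration."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

def necessity_degree(kb, phi, atoms):
    """Largest alpha such that the alpha-cut of kb entails phi (0.0 if none).

    kb is a list of (formula, necessity degree) pairs; cuts grow as alpha
    decreases, so the first success while descending is the maximum.
    """
    for alpha in sorted({a for _, a in kb}, reverse=True):
        cut = [f for f, a in kb if a >= alpha]
        if entails(cut, phi, atoms):
            return alpha
    return 0.0

# hypothetical knowledge base: N(p) >= 0.8 and N(p -> q) >= 0.6
kb = [(lambda e: e["p"], 0.8),
      (lambda e: (not e["p"]) or e["q"], 0.6)]
print(necessity_degree(kb, lambda e: e["q"], ["p", "q"]))  # 0.6
```

Note that the calculus never appears: only `entails` is consulted, which is exactly why decidability of the underlying fragment transfers to the possibilistic setting.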
On the Extensibility of Formal Methods Tools
Modern software systems often have long lifespans over which they must continually evolve to meet new, and sometimes unforeseen, requirements. One way to deal with this effectively is to develop the system as a series of extensions. As requirements change, the system evolves through the addition of new extensions and, potentially, the removal of existing ones. In order for this kind of development process to thrive, the system must have a high level of extensibility. Extensibility is the capability of a system to support the gradual addition of new, unplanned functionalities. This dissertation investigates the extensibility of software systems and focuses on a particular class of software: formal methods tools. The approach is broad in scope. Extensibility of systems is addressed in terms of design, analysis and improvement, which are carried out in terms of source code and software architecture. For additional perspective, extensibility is also considered in the context of formal modelling. The work carried out in this dissertation led to the development of various extensions to the Overture tool supporting the Vienna Development Method, including a new proof obligation generator and integration with theorem provers. Additionally, the extensibility of Overture itself was improved, and it now better supports the development and integration of various kinds of extensions. Finally, extensibility techniques have been applied to formal modelling, leading to an extensible architectural style for formal models.