85 research outputs found
A Bird's Eye View of the Stable Model Languages
Logic programming presents us with a wonderful paradigm within which to develop reasoning systems. This paradigm is very expressive, has well-understood mathematical properties, and is an area of intense international research. However, the research has not yet been adopted by the practitioner community. Current implementations center around rule-based systems and object-oriented systems. Thus, the practitioner community is missing out on very powerful reasoning tools. The semantics of logic programs is a very difficult, technical arena. The purpose of this paper is to disseminate this information in a very understandable format, in the hope that technology transfer w.r.t. logic programming will begin to take place. Both the synthesis and the simplicity of this presentation of the stable model languages are absent from the literature.
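The stable model semantics surveyed here can be made concrete with a small sketch. The following is a minimal brute-force Gelfond-Lifschitz checker for a tiny propositional program, not any implementation from the paper; the two-rule program is a standard textbook example with exactly two stable models.

```python
from itertools import chain, combinations

# Each rule is (head, positive_body, negative_body); atoms are strings.
# Toy program:  p :- not q.   q :- not p.
rules = [
    ("p", set(), {"q"}),
    ("q", set(), {"p"}),
]
atoms = {"p", "q"}

def least_model(positive_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate, rules):
    """Gelfond-Lifschitz test: candidate equals the least model of its reduct."""
    # Reduct: drop rules whose negative body intersects the candidate,
    # then erase the negative literals from the surviving rules.
    reduct = [(h, pos, set()) for h, pos, neg in rules if not (neg & candidate)]
    return least_model(reduct) == candidate

def stable_models(rules, atoms):
    subsets = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_stable(set(s), rules)]

print(stable_models(rules, atoms))  # [{'p'}, {'q'}]
```

Enumerating all subsets is exponential, of course; real answer set solvers ground and search far more cleverly, but the test performed per candidate is exactly this one.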
Semantics for Probabilistic Inference
A number of writers (Joseph Halpern and Fahiem Bacchus among them) have offered semantics for formal languages in which inferences concerning probabilities can be made. Our concern is different. This paper provides a formalization of nonmonotonic inferences in which the conclusion is supported only to a certain degree. Such inferences are clearly 'invalid', since they must allow the falsity of a conclusion even when the premises are true. Nevertheless, such inferences can be characterized both syntactically and semantically. The 'premises' of probabilistic arguments are sets of statements (as in a database or knowledge base); the conclusions are categorical statements in the language. We provide standards both for this form of inference, for which high probability is required, and for an inference in which the conclusion is qualified by an intermediate interval of support.
Comment: Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI 1992).
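The idea of a conclusion qualified by an interval of support can be illustrated with a toy direct-inference routine. This is a hedged sketch, not the paper's calculus: the reference classes, the subclass relation, and every frequency interval below are hypothetical, and "most specific reference class wins" is only one simple selection policy.

```python
# Known frequency intervals: fraction of members of a class with a property.
# All numbers and class names are illustrative, not from the paper.
stats = {
    ("bird", "flies"): (0.90, 0.99),
    ("penguin", "flies"): (0.00, 0.05),
}
# penguin is a subclass of bird, so its statistics are more specific.
subclass_of = {"penguin": "bird"}

def ancestors(c):
    out = []
    while c in subclass_of:
        c = subclass_of[c]
        out.append(c)
    return out

def support(classes, prop):
    """Interval of support for `prop` of an individual known to belong to
    `classes`: the frequency interval of the most specific reference class
    carrying statistics, or (0.0, 1.0) under total ignorance."""
    candidates = [c for c in classes if (c, prop) in stats]
    if not candidates:
        return (0.0, 1.0)
    # The most specific candidate has the most candidates among its ancestors.
    best = max(candidates,
               key=lambda c: sum(a in candidates for a in ancestors(c)))
    return stats[(best, prop)]

print(support({"penguin", "bird"}, "flies"))  # (0.0, 0.05)
print(support({"bird"}, "flies"))             # (0.9, 0.99)
```

The nonmonotonicity the abstract describes is visible here: learning that the individual is a penguin overrides the conclusion supported by membership in the broader class.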
Irrelevance in Problem Solving
The notion of irrelevance underlies many different works in AI, such as detecting redundant facts, creating abstraction hierarchies, reformulation, and modeling physical devices. However, in order to design problem solvers that exploit the notion of irrelevance, either by automatically detecting irrelevance or by being given knowledge about irrelevance, a formal treatment of the notion is required. In this paper we present a general framework for analyzing irrelevance. We discuss several properties of irrelevance and show how they vary in a space of definitions outlined by the framework. We show how irrelevance claims can be used to justify the creation of abstractions, thereby suggesting a new view of the work on abstraction.
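One concrete way to detect a simple kind of irrelevance automatically, sketched below, is backward reachability over a predicate dependency graph: any predicate not reachable from the goal cannot contribute to deriving it. This is an illustrative toy, not the paper's framework, and the rules are hypothetical.

```python
from collections import deque

# Hypothetical Horn rules at the predicate level: head <- body predicates.
rules = {
    "grandparent": [["parent", "parent"]],
    "parent": [["mother"], ["father"]],
    "rich": [["owns_gold"]],  # unrelated to the goal below
}

def relevant_predicates(goal):
    """Predicates backward-reachable from the goal; everything else is
    derivationally irrelevant to answering it and can be pruned."""
    seen, queue = {goal}, deque([goal])
    while queue:
        p = queue.popleft()
        for body in rules.get(p, []):
            for q in body:
                if q not in seen:
                    seen.add(q)
                    queue.append(q)
    return seen

print(relevant_predicates("grandparent"))
# "rich" and "owns_gold" never appear, so facts about them are redundant
# for this query and the problem solver need not consider them.
```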
Embodied Cognitive Science of Music. Modeling Experience and Behavior in Musical Contexts
Recently, the role of corporeal interaction has gained wide recognition within cognitive musicology. This thesis reviews evidence from different directions in music research supporting the importance of body-based processes for the understanding of music-related experience and behavior. Stressing the synthetic focus of cognitive science, cognitive science of music is discussed as a modeling approach that takes these processes into account and may theoretically be embedded within the theory of dynamic systems. In particular, arguments are presented for the use of robotic devices as tools for the investigation of processes underlying human music-related capabilities (musical robotics).
A new logical framework for deductive planning
In this paper we present a logical framework for defining consistent axiomatizations of planning domains. A language to define basic actions and structured plans is embedded in a logic. This allows general properties of a whole planning scenario to be proved as well as plans to be formed deductively. In particular, frame assertions and domain constraints as invariants of the basic actions can be formulated and proved. Even for complex plans most frame assertions are obtained by purely syntactic analysis. In such cases the formal proof can be generated in a uniform way. The formalism we introduce is especially useful when treating recursive plans.
A tactical theorem prover, the Karlsruhe Interactive Verifier (KIV), is used to implement this logical framework.
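The claim that domain constraints can be proved as invariants of the basic actions can be illustrated outside the paper's logic, in a plain STRIPS-style setting: for each basic action, check that applying it to an invariant-satisfying state in which its precondition holds yields another invariant-satisfying state. The action names and facts below are hypothetical, and this is a semantic check rather than the deductive proof the paper constructs.

```python
# An action: (precondition facts, facts added, facts deleted).
actions = {
    "pickup_a":  ({"clear_a", "handempty"}, {"holding_a"}, {"clear_a", "handempty"}),
    "putdown_a": ({"holding_a"}, {"clear_a", "handempty"}, {"holding_a"}),
}

def invariant(state):
    # Domain constraint: the hand holds block a iff the hand is not empty.
    return ("holding_a" in state) != ("handempty" in state)

def apply_action(state, name):
    pre, add, delete = actions[name]
    assert pre <= state, f"precondition of {name} not met"
    return (state - delete) | add

def preserves_invariant(state, name):
    """Inductive step: if the invariant holds and the action is applicable,
    the invariant still holds afterwards (vacuously true otherwise)."""
    pre, _, _ = actions[name]
    if not (pre <= state and invariant(state)):
        return True
    return invariant(apply_action(state, name))

s0 = {"clear_a", "handempty"}
print(all(preserves_invariant(s0, a) for a in actions))  # True
```

A deductive framework like the one described would discharge this obligation once, by a formal proof over the action definitions, instead of checking states one at a time.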
Measure Instrumental in Russian
We will argue that some seemingly adverbial free DPs in the instrumental in Russian, which are traditionally termed measure instrumental, are best understood as secondary predicates. We present the relevant syntactic assumptions and propose a semantics for this use of DPs in the instrumental. This proposal bears on the distinction between adjunct modification and secondary predication.
Improving Data Quality by Leveraging Statistical Relational Learning
Digitally collected data suffers from many data quality issues, such as duplicate, incorrect, or incomplete data. A common approach for counteracting these issues is to formulate a set of data cleaning rules to identify and repair incorrect, duplicate, and missing data. Data cleaning systems must be able to treat data quality rules holistically, to incorporate heterogeneous constraints within a single routine, and to automate data curation. We propose an approach to data cleaning based on statistical relational learning (SRL). We argue that a formalism, Markov logic, is a natural fit for modeling data quality rules. Our approach allows for the usage of probabilistic joint inference over interleaved data cleaning rules to improve data quality. Furthermore, it eliminates the need to specify the order of rule execution. We describe how data quality rules expressed as formulas in first-order logic directly translate into the predictive model in our SRL framework.
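The translation from weighted rules to joint probabilistic inference can be sketched at propositional scale. The toy below is not the paper's system: the atoms, the rule weights, and the "similar names/emails" evidence are all hypothetical, and inference is exhaustive enumeration over all worlds, which only works for a handful of atoms (a real Markov logic engine grounds the formulas and runs approximate inference).

```python
from itertools import product
from math import exp

# Query atoms: are record pairs (1,2), (2,3), (1,3) duplicates?
atoms = ["dup12", "dup23", "dup13"]

# Weighted rules over a truth assignment w (dict: atom -> bool). The last
# rule interleaves the other two: duplication should be transitive.
rules = [
    (2.0, lambda w: w["dup12"]),                                       # similar names
    (2.0, lambda w: w["dup23"]),                                       # similar emails
    (4.0, lambda w: (not (w["dup12"] and w["dup23"])) or w["dup13"]),  # transitivity
]

def weight(world):
    """Unnormalized weight: exp of the summed weights of satisfied rules."""
    return exp(sum(wt for wt, f in rules if f(world)))

worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=3)]
z = sum(weight(w) for w in worlds)  # partition function

def marginal(atom):
    return sum(weight(w) for w in worlds if w[atom]) / z

# dup13 has no direct evidence, yet joint inference over the interleaved
# rules raises its probability well above 1/2 via transitivity.
print(round(marginal("dup13"), 3))
```

This also shows why no rule-execution order needs to be specified: all rules constrain the same joint distribution simultaneously, instead of firing one after another.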