Commonsense knowledge representation and reasoning with fuzzy neural networks
This paper discusses common-sense knowledge in terms of representation and reasoning. A connectionist model is proposed for common-sense knowledge representation and reasoning. A generic fuzzy neuron is employed as the basic element of the connectionist model. The representation and reasoning abilities of the model are described through examples.
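The abstract does not specify the generic fuzzy neuron's form; a minimal sketch, assuming the common min/max (t-norm/s-norm) logic-based fuzzy neuron, might look like this (all names are illustrative, not the paper's model):

```python
# Hedged sketch: one common "generic fuzzy neuron" aggregates membership
# degrees with a t-norm (min) and s-norm (max) instead of weighted sums.

def fuzzy_and_neuron(inputs, weights):
    """AND-type neuron: s-norm (max) of each input with its weight,
    then t-norm (min) across the results."""
    return min(max(x, w) for x, w in zip(inputs, weights))

def fuzzy_or_neuron(inputs, weights):
    """OR-type neuron: t-norm (min) of each input with its weight,
    then s-norm (max) across the results."""
    return max(min(x, w) for x, w in zip(inputs, weights))

# Example: membership degrees for "is a bird" and "has wings" combine
# into a graded truth value for a conjunctive common-sense rule.
truth = fuzzy_and_neuron([0.9, 0.8], [0.1, 0.2])
```

Low weights here act as "don't care" biases: a weight near 0 lets the corresponding input pass through the AND-neuron almost unchanged.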
Rerepresenting and Restructuring Domain Theories: A Constructive Induction Approach
Theory revision integrates inductive learning and background knowledge by
combining training examples with a coarse domain theory to produce a more
accurate theory. There are two challenges that theory revision and other
theory-guided systems face. First, a representation language appropriate for
the initial theory may be inappropriate for an improved theory. While the
original representation may concisely express the initial theory, a more
accurate theory forced to use that same representation may be bulky,
cumbersome, and difficult to reach. Second, a theory structure suitable for a
coarse domain theory may be insufficient for a fine-tuned theory. Systems that
produce only small, local changes to a theory have limited value for
accomplishing complex structural alterations that may be required.
Consequently, advanced theory-guided learning systems require flexible
representation and flexible structure. An analysis of various theory revision
systems and theory-guided learning systems reveals specific strengths and
weaknesses in terms of these two desired properties. A new system, designed to
capture the underlying qualities of each of these systems, uses theory-guided
constructive induction. Experiments in three domains show improvement over
previous theory-guided systems. This leads to a study of the behavior,
limitations, and potential of theory-guided constructive induction.
Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article.
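As a rough illustration of what constructive induction means here, a hypothetical feature constructor can invent a derived attribute from existing ones, so a revised theory can refer to it concisely instead of repeating a bulky expression (the names and the simple conjunctive operator below are made up; the paper's actual operators are not given in the abstract):

```python
# Hypothetical sketch of constructive induction: derive a new attribute
# from existing ones. The constructor here (a conjunction) is illustrative.

def add_conjunctive_feature(examples, a, b, name):
    """Return copies of the example dicts with a new boolean feature
    `name` = example[a] AND example[b]."""
    return [dict(e, **{name: e[a] and e[b]}) for e in examples]

data = [{"has_wings": True, "lays_eggs": True},
        {"has_wings": False, "lays_eggs": True}]
enriched = add_conjunctive_feature(data, "has_wings", "lays_eggs", "bird_like")
```

A theory rewritten over such derived features can stay compact even when the induced concept is complex in the original vocabulary.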
A model of learning task-specific knowledge for a new task
In this paper I will present a detailed ACT-R model of how the task-specific knowledge for a new, complex task is learned. The model is capable of acquiring its knowledge through experience, using a declarative representation that is gradually compiled into a procedural representation. The model exhibits several characteristics that concur with Fitts's theory of skill learning, and can be used to show that individual differences in working memory capacity initially have a large impact on performance, but that this impact diminishes after sufficient experience. Some preliminary experimental data support these findings.
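A toy illustration (not the ACT-R implementation) of the compilation idea in the abstract: task knowledge starts as declarative facts retrieved at a time cost, and after enough rehearsals is compiled into a fast procedural rule. The threshold and cost values are made-up parameters.

```python
# Toy model of declarative-to-procedural compilation. Early performances pay
# a slow declarative-retrieval cost; once the knowledge is "compiled", a
# direct procedural rule fires at a much lower cost.

class Skill:
    def __init__(self, compile_threshold=3, retrieval_cost=1.0, rule_cost=0.05):
        self.uses = 0
        self.compile_threshold = compile_threshold
        self.retrieval_cost = retrieval_cost
        self.rule_cost = rule_cost

    def perform(self):
        """Return the time cost of one performance of the task step."""
        self.uses += 1
        if self.uses > self.compile_threshold:
            return self.rule_cost       # compiled: procedural rule fires
        return self.retrieval_cost      # still declarative: slow retrieval

skill = Skill()
costs = [skill.perform() for _ in range(5)]
```

The declining cost curve mirrors the abstract's point: any factor that affects declarative retrieval (such as working memory capacity) matters early on, but its influence fades once the procedural rule takes over.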
Lazy Model Expansion: Interleaving Grounding with Search
Finding satisfying assignments for the variables involved in a set of
constraints can be cast as a (bounded) model generation problem: search for
(bounded) models of a theory in some logic. The state-of-the-art approach for
bounded model generation for rich knowledge representation languages, like ASP,
FO(.) and Zinc, is ground-and-solve: reduce the theory to a ground or
propositional one and apply a search algorithm to the resulting theory.
An important bottleneck is the blowup of the size of the theory caused by the
reduction phase. Lazily grounding the theory during search is a way to overcome
this bottleneck. We present a theoretical framework and an implementation in
the context of the FO(.) knowledge representation language. Instead of
grounding all parts of a theory, justifications are derived for some parts of
it. Given a partial assignment for the grounded part of the theory and valid
justifications for the formulas of the non-grounded part, the justifications
provide a recipe to construct a complete assignment that satisfies the
non-grounded part. When a justification for a particular formula becomes
invalid during search, a new one is derived; if that fails, the formula is
split in a part to be grounded and a part that can be justified.
The theoretical framework captures existing approaches for tackling the
grounding bottleneck such as lazy clause generation and grounding-on-the-fly,
and presents a generalization of the 2-watched literal scheme. We present an
algorithm for lazy model expansion and integrate it in a model generator for
FO(ID), a language extending first-order logic with inductive definitions. The
algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base
System IDP. Experimental results illustrate the power and generality of the
approach.
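The justification mechanism described above can be sketched in a few lines. In this hedged illustration, each non-grounded formula carries a justification (a literal whose truth guarantees the formula); a formula is grounded only once search invalidates that justification. All names and data structures are illustrative, not the IDP system's API:

```python
# Hedged sketch of lazy grounding: defer grounding a formula until the
# current partial assignment contradicts its justifying literal.

def expand_lazily(grounded, lazy, assignment):
    """grounded: list of ground clauses (lists of (var, value) literals).
    lazy: dict name -> ((var, value) justifying literal, thunk that
    produces the formula's ground clauses on demand).
    assignment: partial assignment, var -> bool.
    Returns the updated (grounded, lazy) pair."""
    for name, (just, ground_fn) in list(lazy.items()):
        var, val = just
        if var in assignment and assignment[var] != val:
            # Justification invalidated: ground this formula now.
            grounded = grounded + ground_fn()
            del lazy[name]
    return grounded, lazy

# Formula f is justified by p being true; once search assigns p = False,
# f can no longer be ignored and is grounded.
grounded, lazy = expand_lazily(
    [], {"f": (("p", True), lambda: [[("q", True), ("r", True)]])},
    {"p": False})
```

In the real system a failed re-derivation additionally splits the formula into a part to be grounded and a part that can still be justified; this sketch only shows the invalidate-then-ground step.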
The LQG-String: Loop Quantum Gravity Quantization of String Theory I. Flat Target Space
We combine I. background independent Loop Quantum Gravity (LQG) quantization
techniques, II. the mathematically rigorous framework of Algebraic Quantum
Field Theory (AQFT) and III. the theory of integrable systems resulting in the
invariant Pohlmeyer Charges in order to set up the general representation
theory (superselection theory) for the closed bosonic quantum string on flat
target space. While we do not solve the expectedly rich representation theory
completely, we present a, to the best of our knowledge, new non-trivial
solution to the representation problem. This solution exists 1. for any target
space dimension, 2. for Minkowski signature of the target space, 3. without
tachyons, 4. manifestly ghost-free (no negative norm states), 5. without
fixing a worldsheet or target space gauge, 6. without (Virasoro) anomalies
(zero central charge), 7. while preserving manifest target space Poincaré
invariance, and 8. without picking up UV divergences. The existence of this
stable solution is exciting because it raises the hope that among all the
solutions to the representation problem (including fermionic degrees of
freedom) we find stable, phenomenologically acceptable ones in lower
dimensional target spaces, possibly without supersymmetry, that are much
simpler than the solutions that arise via compactification of the standard Fock
representation of the string. Moreover, these new representations could solve
some of the major puzzles of string theory such as the cosmological constant
problem. The solution presented in this paper exploits the flatness of the
target space in several important ways. In a companion paper we treat the more
complicated case of curved target spaces.
Comment: 46 p., LaTeX2e, no figures.
Interpolation in local theory extensions
In this paper we study interpolation in local extensions of a base theory. We
identify situations in which it is possible to obtain interpolants in a
hierarchical manner, by using a prover and a procedure for generating
interpolants in the base theory as black-boxes. We present several examples of
theory extensions in which interpolants can be computed this way, and discuss
applications in verification, knowledge representation, and modular reasoning
in combinations of local theories.
Comment: 31 pages, 1 figure.
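For reference, the standard interpolation property that such hierarchical procedures compute: given formulas $A$ and $B$ whose conjunction is unsatisfiable modulo a theory $\mathcal{T}$, an interpolant is a formula $I$ over the symbols shared by $A$ and $B$ (plus the symbols of $\mathcal{T}$) such that

```latex
% Craig interpolation modulo a theory T (standard definition):
% A /\ B is T-unsatisfiable, and the interpolant I satisfies
\[
  A \models_{\mathcal{T}} I
  \qquad\text{and}\qquad
  I \wedge B \models_{\mathcal{T}} \bot .
\]
% I may use only symbols common to A and B, together with symbols of T.
```

The hierarchical approach in the paper obtains such an $I$ for the extended theory by calling a prover and an interpolation procedure for the base theory as black boxes.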