Graph-Based Shape Analysis Beyond Context-Freeness
We develop a shape analysis for reasoning about relational properties of data
structures. Both the concrete and the abstract domain are represented by
hypergraphs. The analysis is parameterized by user-supplied indexed graph
grammars to guide concretization and abstraction. This novel extension of
context-free graph grammars is powerful enough to model complex data structures
such as balanced binary trees with parent pointers, while preserving most
desirable properties of context-free graph grammars. One strength of our
analysis is that no artifacts apart from grammars are required from the user;
it thus offers a high degree of automation. We implemented our analysis and
successfully applied it to various programs manipulating AVL trees,
(doubly-linked) lists, and combinations of both.
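Grammar-guided abstraction can be illustrated on the simplest case, a singly-linked list segment, which is already expressible by a context-free graph grammar (the paper's indexed grammars go beyond this). The sketch below is illustrative only: the edge encoding, the nonterminal `L`, and the `protected` set (modeling nodes held by program variables) are all hypothetical.

```python
def fold_once(edges, protected):
    """Apply one reverse rule application: L(x, z), L(z, y) => L(x, y),
    provided the merged node z is not protected (e.g. aliased by a variable)."""
    for (l1, x, z) in edges:
        for (l2, z2, y) in edges:
            if (l1 == l2 == "L" and z == z2
                    and (l1, x, z) != (l2, z2, y)
                    and z not in protected):
                return (edges - {(l1, x, z), (l2, z2, y)}) | {("L", x, y)}
    return None  # no rule applicable: the heap is fully abstracted

def abstract(edges, protected=frozenset()):
    """Abstract a concrete heap using a list grammar with rules
    L -> next  and  L -> next L, applied in reverse."""
    # Base rule in reverse: every concrete next-edge becomes a nonterminal L-edge.
    edges = {("L", x, y) if lbl == "next" else (lbl, x, y)
             for (lbl, x, y) in edges}
    # Recursive rule in reverse, applied to a fixpoint.
    while True:
        folded = fold_once(edges, protected)
        if folded is None:
            return edges
        edges = folded

heap = {("next", "a", "b"), ("next", "b", "c"), ("next", "c", "nil")}
# abstract(heap) folds the whole chain into a single hyperedge L(a, nil).
```

Concretization is the same rewriting read forwards: expanding an `L`-edge by a grammar rule materializes one concrete `next`-edge at a time.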
Data-Oriented Language Processing. An Overview
During the last few years, a new approach to language processing has started
to emerge, which has become known under various labels such as "data-oriented
parsing", "corpus-based interpretation", and "tree-bank grammar" (cf. van den
Berg et al. 1994; Bod 1992-96; Bod et al. 1996a/b; Bonnema 1996; Charniak
1996a/b; Goodman 1996; Kaplan 1996; Rajman 1995a/b; Scha 1990-92; Sekine &
Grishman 1995; Sima'an et al. 1994; Sima'an 1995-96; Tugwell 1995). This
approach, which we will call "data-oriented processing" or "DOP", embodies the
assumption that human language perception and production work with
representations of concrete past language experiences, rather than with
abstract linguistic rules. The models that instantiate this approach therefore
maintain large corpora of linguistic representations of previously occurring
utterances. When processing a new input utterance, analyses of this utterance
are constructed by combining fragments from the corpus; the
occurrence-frequencies of the fragments are used to estimate which analysis is
the most probable one.
In this paper we give an in-depth discussion of a data-oriented processing
model which employs a corpus of labelled phrase-structure trees. Then we review
some other models that instantiate the DOP approach. Many of these models also
employ labelled phrase-structure trees, but use different criteria for
extracting fragments from the corpus or employ different disambiguation
strategies (Bod 1996b; Charniak 1996a/b; Goodman 1996; Rajman 1995a/b; Sekine &
Grishman 1995; Sima'an 1995-96); other models use richer formalisms for their
corpus annotations (van den Berg et al. 1994; Bod et al. 1996a/b; Bonnema
1996; Kaplan 1996; Tugwell 1995).
Comment: 34 pages, PostScript
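The core DOP computation described above — relative fragment frequencies combined multiplicatively over a derivation — can be sketched as follows. The corpus, the fragment encoding, and all counts are invented for illustration:

```python
from collections import Counter

# Hypothetical corpus fragments, keyed by (root label, fragment), with
# occurrence counts extracted from a tree bank.
fragment_counts = Counter({
    ("S", "S -> NP VP"): 3,
    ("S", "S -> NP VP[V NP]"): 1,
    ("NP", "NP -> det n"): 4,
    ("VP", "VP -> V NP"): 2,
})

def fragment_prob(root, frag):
    """A fragment's probability is its count divided by the total count
    of all fragments sharing the same root label."""
    total = sum(c for (r, _), c in fragment_counts.items() if r == root)
    return fragment_counts[(root, frag)] / total

def derivation_prob(fragments):
    """A derivation combines fragments by substitution; its probability
    is the product of the probabilities of the fragments used."""
    p = 1.0
    for root, frag in fragments:
        p *= fragment_prob(root, frag)
    return p

p = derivation_prob([("S", "S -> NP VP"),
                     ("NP", "NP -> det n"),
                     ("VP", "VP -> V NP")])
```

Note that in full DOP a parse tree may have many distinct derivations, and disambiguation sums derivation probabilities per tree before picking the most probable analysis; the sketch shows only a single derivation.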
The Lambek calculus with iteration: two variants
Formulae of the Lambek calculus are constructed using three binary
connectives, multiplication and two divisions. We extend it using a unary
connective, positive Kleene iteration. For this new operation, following its
natural interpretation, we present two lines of calculi. The first one is a
fragment of infinitary action logic and includes an omega-rule for introducing
iteration to the antecedent. We also consider a version with infinite (but
finitely branching) derivations and prove equivalence of these two versions. In
Kleene algebras, this line of calculi corresponds to the *-continuous case. For
the second line, we restrict our infinite derivations to cyclic (regular) ones.
We show that this system is equivalent to a variant of action logic that
corresponds to general residuated Kleene algebras, not necessarily
*-continuous. Finally, we show that, in contrast with the case without division
operations (considered by Kozen), the first system is strictly stronger than
the second one. To prove this, we use a complexity argument. Namely, we show,
using methods of Buszkowski and Palka, that the first system is Pi^0_1-hard,
and therefore is not recursively enumerable and cannot be described by a
calculus with finite derivations.
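In the *-continuous case, the natural interpretation of positive iteration is a^+ = sup of a^n over n >= 1. Over binary relations this is just transitive closure, computable as a least fixpoint — a toy semantic illustration, not part of the paper's proof theory:

```python
def compose(r, s):
    """Relational composition: {(x, z) | (x, y) in r and (y, z) in s}."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

def positive_iteration(a):
    """Least solution of X = a ∪ (a ; X), i.e. the union of a^n for n >= 1.
    Terminates because the relation lives over a finite set of points."""
    result = set(a)
    while True:
        step = result | compose(a, result)
        if step == result:
            return result
        result = step

# positive_iteration({(1, 2), (2, 3)}) yields the transitive closure
# {(1, 2), (2, 3), (1, 3)}.
```

The omega-rule of the first calculus mirrors this supremum: to introduce a^+ on the antecedent side, one needs premises for every finite power a^n, which is exactly what makes the system infinitary.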
Semantics, Modelling, and the Problem of Representation of Meaning -- a Brief Survey of Recent Literature
Over the past 50 years many have debated what representation should be used
to capture the meaning of natural language utterances. Recently, research has
raised new requirements for such representations. Here I survey some of the
interesting representations that have been suggested to meet these new needs.
Comment: 15 pages, no figures
Probabilistic Constraint Logic Programming
This paper addresses two central problems for probabilistic processing
models: parameter estimation from incomplete data and efficient retrieval of
most probable analyses. These questions have been answered satisfactorily only
for probabilistic regular and context-free models. We address these problems
for a more expressive probabilistic constraint logic programming model. We
present a log-linear probability model for probabilistic constraint logic
programming. On top of this model we define an algorithm to estimate the
parameters and to select the properties of log-linear models from incomplete
data. This algorithm is an extension of the improved iterative scaling
algorithm of Della-Pietra, Della-Pietra, and Lafferty (1995). Our algorithm
applies to log-linear models in general and is accompanied with suitable
approximation methods when applied to large data spaces. Furthermore, we
present an approach for searching for most probable analyses of the
probabilistic constraint logic programming model. This method can be applied to
the ambiguity resolution problem in natural language processing applications.
Comment: 35 pages, uses sfbart.cl
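The log-linear model underlying the approach, and the retrieval of a most probable analysis over a finite candidate set, can be sketched as follows. The analyses, binary feature vectors, and weights are invented for illustration; the paper's actual estimation procedure (the extended improved iterative scaling algorithm) is not reproduced here.

```python
import math

# Hypothetical candidate analyses of one input, each described by a
# binary feature vector f(x); weights lambda define a log-linear
# distribution p(x) proportional to exp(sum_i lambda_i * f_i(x)).
features = {
    "analysis_a": [1, 0],
    "analysis_b": [0, 1],
    "analysis_c": [1, 1],
}

def log_linear_probs(weights):
    """Normalized log-linear probabilities over the candidate set."""
    scores = {x: math.exp(sum(w * f for w, f in zip(weights, fs)))
              for x, fs in features.items()}
    z = sum(scores.values())  # partition function over the finite set
    return {x: s / z for x, s in scores.items()}

def most_probable(weights):
    """Retrieve the most probable analysis (exhaustively, since the
    candidate set here is small; the paper gives a search method)."""
    probs = log_linear_probs(weights)
    return max(probs, key=probs.get)
```

Parameter estimation fits the weights so that expected feature counts under the model match those observed in (incomplete) training data; the sketch covers only the model and the argmax retrieval step.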