The one-round Voronoi game replayed
We consider the one-round Voronoi game, where player one (``White'', called
``Wilma'') places a set of n points in a rectangular area of aspect ratio r
<=1, followed by the second player (``Black'', called ``Barney''), who places
the same number of points. Each player wins the fraction of the board closest
to one of his points, and the goal is to win more than half of the total area.
This problem has been studied by Cheong et al., who showed that for large
enough n and r=1, Barney has a strategy that guarantees a fraction of 1/2+a,
for some small fixed a>0.
We resolve a number of open problems raised by that paper. In particular, we
give a precise characterization of the outcome of the game for optimal play: We
show that Barney has a winning strategy for n>2 and r>sqrt{2}/n, and for n=2
and r>sqrt{3}/2. Wilma wins in all remaining cases, i.e., for n>=3 and
r<=sqrt{2}/n, for n=2 and r<=sqrt{3}/2, and for n=1. We also discuss complexity
aspects of the game on more general boards, by proving that for a polygon with
holes, it is NP-hard to maximize the area Barney can win against a given set of
points by Wilma.

Comment: 14 pages, 6 figures, LaTeX; revised for journal version, to appear in
Computational Geometry: Theory and Applications. Extended abstract version
appeared in Workshop on Algorithms and Data Structures, Springer Lecture
Notes in Computer Science, vol. 2748, 2003, pp. 150-16
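The characterization stated in the abstract is simple enough to express directly in code. The following sketch encodes only the winner conditions as given (n and r are the number of points and the board's aspect ratio); the function name is ours, and no strategy is computed, just the claimed outcome under optimal play:

```python
import math

def one_round_voronoi_winner(n: int, r: float) -> str:
    """Winner of the one-round Voronoi game under optimal play,
    per the characterization in the abstract (aspect ratio 0 < r <= 1)."""
    if not (0 < r <= 1):
        raise ValueError("aspect ratio r must satisfy 0 < r <= 1")
    if n == 1:
        return "Wilma"                      # Wilma always wins with one point
    if n == 2:
        # Barney wins iff r > sqrt(3)/2
        return "Barney" if r > math.sqrt(3) / 2 else "Wilma"
    # n >= 3: Barney wins iff r > sqrt(2)/n
    return "Barney" if r > math.sqrt(2) / n else "Wilma"
```

For example, a long thin board (n=5, r=0.2 <= sqrt(2)/5) lets Wilma win, while a squarer board (n=5, r=0.3) favors Barney.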
Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing
Is the human language understander a collection of modular processes
operating with relative autonomy, or is it a single integrated process? This
ongoing debate has polarized the language processing community, with two
fundamentally different types of model posited, and with each camp concluding
that the other is wrong. One camp puts forth a model with separate processors
and distinct knowledge sources to explain one body of data, and the other
proposes a model with a single processor and a homogeneous, monolithic
knowledge source to explain the other body of data. In this paper we argue that
a hybrid approach which combines a unified processor with separate knowledge
sources provides an explanation of both bodies of data, and we demonstrate the
feasibility of this approach with the computational model called COMPERE. We
believe that this approach brings the language processing community
significantly closer to offering human-like language processing systems.

Comment: 7 pages, uses aaai.sty macros
A parallel-process model of on-line inference processing
This paper presents a new model of on-line inference processes during text understanding. The model, called ATLAST, integrates inference processing at the lexical, syntactic, and pragmatic levels of understanding, and is consistent with the results of controlled psychological experiments. ATLAST interprets input text through the interaction of independent but communicating inference processes running in parallel. The focus of this paper is on the initial computer implementation of the ATLAST model, and some observations and issues that arise from that implementation.
STRATEGIST : a program that models strategy-driven and content-driven inference behavior
In the course of understanding a text, different readers use different inference strategies to guide their choice of interpretations of the events in the text. This contrasts with previous computer models of understanding, which all use content-driven inference. The separate strategies are theorized to be composed of the same component inference processes, but of different rules for applying those processes, so the use of different strategies occasionally results in different interpretations. We present new experimental data and a working computer program, called STRATEGIST, that models both strategy-driven and content-driven inference behavior. The rules which make up two of these strategies are presented.
Parsing with parallelism : a spreading-activation model of inference processing during text understanding
The past decade of research in Natural Language Processing has universally recognized that, since natural language input is almost always ambiguous with respect to its pragmatic implications, its syntactic parse, and even its lexical analysis (i.e., the choice of the correct word-sense for an ambiguous word), processing natural language input requires decisions about word meanings, syntactic structure, and pragmatic inferences. The lexical, syntactic, and pragmatic levels of inferencing are not as disparate as they have often been treated in both psychological and artificial intelligence research. In fact, these three levels of analysis interact to form a joint interpretation of text.

ATLAST (A Three-level Language Analysis SysTem) is an implemented integration of human language understanding at the lexical, the syntactic, and the pragmatic levels. For psychological validity, ATLAST is based on results of experiments with human subjects. The ATLAST model uses a new architecture developed to incorporate three features: spreading-activation memory, two-stage syntax, and parallel processing of syntax and semantics. It also provides a new framework within which to interpret and tackle unsolved problems through implementation and experimentation.
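To make the spreading-activation idea concrete, here is a minimal generic sketch, not ATLAST's actual mechanism: activation flows from active nodes to neighbors along weighted links, so a word-sense supported by more active context nodes accumulates more activation. All node names, weights, and the decay parameter below are illustrative assumptions:

```python
def spread_activation(graph, activation, decay=0.5, rounds=3):
    """Generic spreading activation over a weighted concept network.

    graph: {node: [(neighbor, weight), ...]}
    activation: {node: initial activation level}
    Each round, every node sends decay * weight * level to its neighbors.
    Illustrative sketch only; not the ATLAST implementation.
    """
    act = dict(activation)
    for _ in range(rounds):
        new = dict(act)
        for node, level in act.items():
            for neighbor, weight in graph.get(node, []):
                new[neighbor] = new.get(neighbor, 0.0) + decay * weight * level
        act = new
    return act

# Hypothetical lexical-ambiguity example: the context word "deposit"
# reinforces the financial sense of "bank" over the river sense.
graph = {
    "bank": [("bank/money", 1.0), ("bank/river", 1.0)],
    "deposit": [("bank/money", 1.0)],
}
result = spread_activation(graph, {"bank": 1.0, "deposit": 1.0})
```

After a few rounds, `result["bank/money"]` exceeds `result["bank/river"]`, which is the basic disambiguation effect spreading-activation models rely on.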
Overview of the 1st international competition on plagiarism detection
The 1st International Competition on Plagiarism Detection, held in conjunction with the 3rd PAN workshop on Uncovering Plagiarism, Authorship, and Social Software Misuse, brought together researchers from many disciplines around the exciting retrieval task of automatic plagiarism detection. The competition was divided into the subtasks external plagiarism detection and intrinsic plagiarism detection, which were tackled by 13 participating groups. An important by-product of the competition is an evaluation framework for plagiarism detection, which consists of a large-scale plagiarism corpus and detection quality measures. The framework may serve as a unified test environment to compare future plagiarism detection research. In this paper we describe the corpus design and the quality measures, survey the detection approaches developed by the participants, and compile the achieved performance results of the competitors.
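As a rough illustration of what an external plagiarism detector compares, here is a crude character n-gram containment measure. It is a toy baseline of our own, not one of the competition submissions or the PAN quality measures; function names and the n-gram length are assumptions:

```python
def char_ngrams(text: str, n: int = 8) -> set:
    """Set of character n-grams of a whitespace-normalized, lowercased text."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def overlap_score(suspicious: str, source: str, n: int = 8) -> float:
    """Fraction of the suspicious document's n-grams that also occur in
    the source document -- a simple containment measure in [0, 1]."""
    sus, src = char_ngrams(suspicious, n), char_ngrams(source, n)
    return len(sus & src) / len(sus) if sus else 0.0
```

A copied passage scores near 1.0 against its source and near 0.0 against unrelated text; real external detectors add candidate retrieval and obfuscation-robust alignment on top of such a core comparison.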
Overview of the 3rd international competition on plagiarism detection
This paper overviews eleven plagiarism detectors that have been developed and evaluated within PAN'11. We survey the detection approaches developed for the two sub-tasks "external plagiarism detection" and "intrinsic plagiarism detection," and we report on their detailed evaluation based on the third revised edition of the PAN plagiarism corpus PAN-PC-11.
Overview of the 2nd international competition on plagiarism detection
This paper overviews 18 plagiarism detectors that have been developed and evaluated within PAN'10. We start with a unified retrieval process that summarizes the best practices employed this year. Then, the detectors' performances are evaluated in detail, highlighting several important aspects of plagiarism detection, such as obfuscation, intrinsic vs. external plagiarism, and plagiarism case length. Finally, all results are compared to those of last year's competition.