
    Introduction to the CoNLL-2001 Shared Task: Clause Identification

    We describe the CoNLL-2001 shared task: dividing text into clauses. We give background information on the data sets, present a general overview of the systems that took part in the shared task, and briefly discuss their performance.

    When Are Tree Structures Necessary for Deep Learning of Representations?

    Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. But there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper we benchmark recursive neural models against sequential recurrent neural models (simple recurrent and LSTM models), enforcing apples-to-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answer-phrases; (3) discourse parsing; (4) semantic relation extraction (e.g., component-whole between nouns). Our goal is to understand better when, and why, recursive models can outperform simpler models. We find that recursive models help mainly on tasks (like semantic relation extraction) that require associating headwords across a long distance, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.
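    The splitting step the abstract describes can be sketched in a few lines. This is a hypothetical illustration of the preprocessing idea only (the function name, the choice of punctuation marks, and the regex are assumptions, not the authors' implementation):

    ```python
    import re

    def split_into_clause_units(sentence):
        """Break a long sentence into clause-like units at punctuation.

        A minimal sketch of the preprocessing step described in the
        abstract: each unit would then be encoded separately by a
        recurrent model before the unit representations are combined.
        """
        # Split at commas, semicolons, and colons; keep non-empty spans.
        units = re.split(r"[,;:]", sentence)
        return [u.strip() for u in units if u.strip()]

    sentence = ("The model, which was trained on long sequences, "
                "outperformed the baseline; however, it was slower.")
    print(split_into_clause_units(sentence))
    ```

    Each resulting unit is short enough for a sequential model to encode reliably, which is the point of the technique: the long-distance association problem is reduced to combining a handful of unit-level representations.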

    The Costs of Changing Our Minds

    This isn’t quite a draft yet – it’s a concept paper. You’ll see after the first 10 pages a good bit of text in brackets, which are primarily notes for me, but it’ll give you a sense of the content of those sections. I’d like to talk through the concept – the “duty” to mitigate emotional distress damages and how courts have struggled with it – as a foray into a broader dichotomy that I see in a number of areas of law that suggests an implicit value in “cognitive liberty.” This is a smaller version of a broader book project, “On Cognitive Liberty,” that I’m writing, but I’d like to talk through how I might structure this as a standalone article. Forgive its brevity and incompleteness, but it’s a great time for me to workshop the concept with you.

    The Predication Semantics Model: The Role of Predicate Class in Text Comprehension and Recall

    This paper presents and tests the predication semantics model, a computational model of text comprehension. It goes beyond previous case grammar approaches to text comprehension in employing a propositional rather than a rigid hierarchical tree notion, attempting to maintain a coherent set of propositions in working memory. The authors' assertion is that predicate class contains semantic information that readers use to make generally accurate predictions about a given proposition. Thus, the main purpose of the model, which works as a series of input and reduction cycles, is to explore the extent to which predicate categories play a role in reading comprehension and recall. In the reduction phase of the model, the propositions entered into memory during the input phase are decreased while coherence is maintained among them. In an examination of working memory at the end of each cycle, the computational model maintained coherence for 70% of cycles. The model appeared prone to serial dependence in errors: the coherence problem appears to occur because (unlike real readers) the simulation does not reread when necessary. Overall, the experiment suggested that the predication semantics model is robust. The results suggested that the model emulates a primary process in text comprehension: predicate categories provide semantic information that helps to initiate and control automatic processes in reading, and allows people to grasp the gist of a text even when they have only minimal background knowledge. While the model needs refinement in several areas presenting minor problems (for example, it lacks a sufficiently complex memory to ensure that when the simulation goes wrong it does not, as at present, stay wrong for successive intervals), its success even at the current restrictive level of detail demonstrates the importance of the semantic information in predicate categories.

    Confronting Memory Loss

    The Confrontation Clause of the Sixth Amendment grants “the accused” in “all criminal prosecutions” a right “to be confronted with the witnesses against him.” A particular problem occurs when there is a gap in time between the testimony that is offered and the cross-examination of it, as where, pursuant to a hearsay exception or exemption, evidence of a current witness’s prior statement is offered and, for some intervening reason, her current memory is impaired. Does this fatally affect the opportunity to “confront” the witness? The Supreme Court has, to date, left unclear the extent to which a memory-impaired witness can afford a criminal defendant her right to confront. Would it, for instance, be of any value to permit a defendant the opportunity to cross-examine a witness claiming no recollection of having seen the crime or identified the defendant as the perpetrator? Should the right to confront simply imply the ability to look one’s accuser in the eye at trial, or should it necessitate some degree of opportunity for substantive cross-examination? Two petitions for certiorari denied by the Supreme Court in December 2019—White v. Louisiana and Tapia v. New York—could have permitted the Court to clarify confrontation rights in memory loss cases. The purpose of this Article is to identify and discuss eight key issues arising in connection with memory impairment in Confrontation Clause witnesses. Although the Court chose not to put these issues to bed in the context of White or Tapia, these are the issues we anticipate federal and state courts will be called upon to answer in the coming years, and we suspect the Supreme Court will eventually need to answer them.

    Neural blackboard architectures of combinatorial structures in cognition

    Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables, and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural ‘blackboard’ architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception.

    Bilingual language processing


    Effects of short-term storage in processing rightward movement
