
    Understanding Teacher Leadership in Middle School Mathematics: A Collaborative Research Effort

    We report findings from a collaborative research effort designed to examine how teachers act as leaders in their schools. We find that teachers educated by the Math in the Middle Institute act as key sources of advice for colleagues within their schools while drawing support from a network consisting of other teachers in the program and university-level advisors. In addition to reporting on our findings, we reflect on our research process, noting some of the practical challenges involved, as well as some of the benefits of collaboration.

    The Grail theorem prover: Type theory for syntax and semantics

    As the name suggests, type-logical grammars are a grammar formalism based on logic and type theory. From the perspective of grammar design, type-logical grammars develop the syntactic and semantic aspects of linguistic phenomena hand-in-hand, letting the desired semantics of an expression inform the syntactic type and vice versa. Prototypical examples of the successful application of type-logical grammars to the syntax-semantics interface include coordination, quantifier scope, and extraction. This chapter describes the Grail theorem prover, a series of tools for designing and testing grammars in various modern type-logical formalisms. All tools described in this chapter are freely available.
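
    As a hedged illustration of the syntax-semantics pairing described above (not Grail itself, which performs full proof search for multimodal type-logical grammars), the toy Python sketch below encodes a tiny Lambek-style lexicon and combines entries by forward and backward application. The names and the type encoding (slash, result, argument) are invented for this example.

    # Toy illustration only: each word carries a syntactic type and a
    # semantic term; types combine by forward and backward application.
    from typing import NamedTuple

    class Entry(NamedTuple):
        syn: object   # syntactic type: 'np', 's', or (slash, result, argument)
        sem: object   # semantic term: a constant or a Python callable

    def apply_right(fn, arg):
        """Forward application: A/B combined with B yields A."""
        slash, result, wanted = fn.syn
        if slash == '/' and wanted == arg.syn:
            return Entry(result, fn.sem(arg.sem))

    def apply_left(arg, fn):
        """Backward application: B combined with B\\A yields A."""
        slash, result, wanted = fn.syn
        if slash == '\\' and wanted == arg.syn:
            return Entry(result, fn.sem(arg.sem))

    john = Entry('np', 'j')
    mary = Entry('np', 'm')
    loves = Entry(('/', ('\\', 's', 'np'), 'np'),
                  lambda y: lambda x: f'love({x},{y})')

    vp = apply_right(loves, mary)     # type np\s, meaning love(_, m)
    sentence = apply_left(john, vp)   # type s, meaning love(j, m)
    print(sentence.syn, sentence.sem)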

    Exploiting Lexical Conceptual Structure for paraphrase generation

    Lexical Conceptual Structure (LCS) represents verbs as semantic structures with a limited number of semantic predicates. This paper explores how LCS can be used to explain the regularities underlying lexical and syntactic paraphrases, such as verb alternation, compound word decomposition, and lexical derivation. We propose a paraphrase generation model which transforms the LCSs of verbs, and then conduct an empirical experiment taking the paraphrasing of Japanese light-verb constructions as an example. Experimental results show that the syntactic and semantic properties of verbs encoded in LCS are useful for semantically constraining the syntactic transformation in paraphrase generation.
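
    A purely hypothetical sketch of the kind of transformation such a model performs: represent the LCS of a light-verb construction and rewrite it into a single-verb paraphrase. The toy predicates (ACT, EVENT, DO), the structures, and the rule below are invented for this illustration and are not the paper's actual LCS inventory or transformation rules.

    # Hypothetical mini-example: rewrite the LCS of a Japanese light-verb
    # construction ("kenkyuu o okonau", roughly "conduct research") into
    # its single-verb paraphrase ("kenkyuu-suru", "to research").
    def paraphrase(lcs):
        """If a light verb ACTs on an EVENT noun, return the rewritten LCS
        and the corresponding noun + '-suru' surface form."""
        op, agent, arg = lcs
        if op == "ACT" and isinstance(arg, tuple) and arg[0] == "EVENT":
            return ("DO", agent, arg[1]), f"{arg[1]}-suru"
        return lcs, None

    lvc_lcs = ("ACT", "x", ("EVENT", "kenkyuu"))
    print(paraphrase(lvc_lcs))   # (('DO', 'x', 'kenkyuu'), 'kenkyuu-suru')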

    Diachronic Variation of Temporal Expressions in Scientific Writing Through the Lens of Relative Entropy

    The abundance of temporal information in documents has led to an increased interest in the NLP community in processing such information by considering temporal expressions. Besides domain adaptation, acquiring knowledge of how temporal expressions vary over time is relevant for improving automatic processing. So far, frequency-based accounts have dominated the investigation of specific temporal expressions. We present an approach to investigating diachronic changes of temporal expressions based on relative entropy, which has the advantage of using conditioned probabilities rather than mere frequencies. While we focus on scientific writing, our approach is generalizable to other domains and is of interest not only in NLP but also in the humanities. This work is partially funded by the Deutsche Forschungsgemeinschaft (DFG) under grant SFB 1102: Information Density and Linguistic Encoding (www.sfb1102.uni-saarland.de).
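
    As a rough sketch of the kind of measure involved (not the paper's exact setup), relative entropy here can be read as the Kullback-Leibler divergence between two probability distributions of temporal expressions estimated from different time slices of a corpus. The smoothing constant and the example counts below are assumptions made for illustration.

    import math
    from collections import Counter

    def relative_entropy(p_counts, q_counts, smoothing=1e-6):
        """Kullback-Leibler divergence D(P || Q) over the shared vocabulary,
        with additive smoothing so unseen items do not produce infinities."""
        vocab = set(p_counts) | set(q_counts)
        p_total = sum(p_counts.values()) + smoothing * len(vocab)
        q_total = sum(q_counts.values()) + smoothing * len(vocab)
        kl = 0.0
        for w in vocab:
            p = (p_counts.get(w, 0) + smoothing) / p_total
            q = (q_counts.get(w, 0) + smoothing) / q_total
            kl += p * math.log2(p / q)
        return kl

    # Invented counts of temporal expressions in two time slices of a corpus.
    early = Counter({"at present": 30, "in recent years": 12, "currently": 8})
    late  = Counter({"at present": 10, "in recent years": 45, "currently": 40})
    print(f"D(early || late) = {relative_entropy(early, late):.3f} bits")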

    The algebra of lexical semantics

    The current generative theory of the lexicon relies primarily on tools from formal language theory and mathematical logic. Here we describe how a different formal apparatus, taken from algebra and automata theory, resolves many of the known problems with the generative lexicon. We develop a finite state theory of word meaning based on machines in the sense of Eilenberg [11], a formalism capable of describing discrepancies between syntactic type (lexical category) and semantic type (number of arguments). This mechanism is compared both to the standard linguistic approaches and to the formalisms developed in AI/KR. 1 Problem Statement. In developing a formal theory of lexicography our starting point will be the informal practice of lexicography, rather than the more immediately related formal theories of Artificial Intelligence (AI) and Knowledge Representation (KR). Lexicography is a relatively mature field, with centuries of work experience an
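
    As a very loose, invented illustration of a machine in Eilenberg's sense, the sketch below is a finite automaton whose transitions carry partial functions over a data set, so an accepting run also computes a value; the toy 'eat' entry only hints at how one lexical item can cover readings with different numbers of arguments. None of this reproduces the paper's actual construction.

    # Toy Eilenberg-style machine: states plus edges labelled with partial
    # functions on an underlying data set; accepting runs return the values
    # computed along the way.
    def run(edges, start, finals, value, max_depth=10):
        results, stack = [], [(start, value, 0)]
        while stack:
            state, v, depth = stack.pop()
            if state in finals:
                results.append(v)
            if depth >= max_depth:
                continue
            for dst, fn in edges.get(state, []):
                out = fn(v)
                if out is not None:
                    stack.append((dst, out, depth + 1))
        return results

    # Invented entry for 'eat': the object argument is optional, so one
    # lexical item yields both a one-argument and a two-argument reading.
    eat = {
        "q0": [("q1", lambda args: args + ["agent"])],
        "q1": [("q2", lambda args: args + ["patient"]),   # transitive
               ("q2", lambda args: args)],                # intransitive
    }
    print(run(eat, "q0", {"q2"}, []))   # [['agent'], ['agent', 'patient']]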

    Abbreviation definition identification based on automatic precision estimates

    Background: The rapid growth of biomedical literature presents challenges for automatic text processing, and one of these challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. Due to the size of databases such as MEDLINE, only a small fraction of abbreviation-definition pairs can be examined manually. An automatic way to estimate the accuracy of abbreviation-definition pairs extracted from text is needed. In this paper we propose an abbreviation definition identification algorithm that employs a variety of strategies to identify the most probable abbreviation definition. In addition, our algorithm produces an accuracy estimate, pseudo-precision, for each strategy without using a human-judged gold standard. The pseudo-precisions determine the order in which the algorithm applies the strategies in seeking to identify the definition of an abbreviation. Results: On the Medstract corpus our algorithm produced 97% precision and 85% recall, which is higher than previously reported results. We also annotated 1250 randomly selected MEDLINE records as a gold standard. On this set we achieved 96.5% precision and 83.2% recall. This compares favourably with the well-known Schwartz and Hearst algorithm. Conclusion: We developed an algorithm for abbreviation identification that uses a variety of strategies to identify the most probable definition for an abbreviation and also produces an estimated accuracy of the result. This process is purely automatic.
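
    The paper's own algorithm and its pseudo-precision estimates are not reproduced here; as a point of reference, the sketch below implements, in simplified form, the core long-form matching step of the Schwartz and Hearst baseline mentioned above: the short form found inside parentheses is aligned right-to-left against the text preceding it. The window size and the regular expression are assumptions made for this sketch.

    import re

    def find_long_form(short, candidate):
        """Align the short form against a candidate definition right-to-left,
        in the spirit of Schwartz & Hearst (2003); return the match or None."""
        s_idx, c_idx = len(short) - 1, len(candidate) - 1
        while s_idx >= 0:
            ch = short[s_idx].lower()
            if not ch.isalnum():
                s_idx -= 1
                continue
            # move left until the character matches; the first character of
            # the short form must additionally start a word in the candidate
            while c_idx >= 0 and (candidate[c_idx].lower() != ch or
                                  (s_idx == 0 and c_idx > 0
                                   and candidate[c_idx - 1].isalnum())):
                c_idx -= 1
            if c_idx < 0:
                return None
            c_idx -= 1
            s_idx -= 1
        return candidate[c_idx + 1:].strip()

    def extract_pairs(sentence):
        """Find 'long form (SHORT)' patterns and validate each one."""
        pairs = []
        for m in re.finditer(r"\(([A-Za-z0-9-]{2,10})\)", sentence):
            short = m.group(1)
            window = " ".join(sentence[:m.start()].split()[-(len(short) + 5):])
            long_form = find_long_form(short, window)
            if long_form:
                pairs.append((short, long_form))
        return pairs

    print(extract_pairs("We use Lexical Conceptual Structure (LCS) here."))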