
    The syntax of the periphrastic progressive in the Septuagint and the New Testament

    In this article, I discuss the use of the periphrastic progressive construction of εἰΌί "be" with a present participle in the Septuagint and the New Testament. I argue that a broad distinction can be made between two main uses, called ‘durative progressive’ and ‘focalized progressive’. In both cases, a number of syntactic frames can be specified in which the periphrastic construction occurs. I conclude the article by discussing the relationship between the Septuagintal and the New Testamental use of the periphrastic construction, arguing that while there are many similarities, this relationship should not be conceived of in terms of imitation, as some scholars have suggested.

    The Pragmatics of Arabic Religious Posts on Facebook: A Relevance-Theoretic Account

    Despite growing interest in the impact of computer-mediated communication on our lives, linguistic studies on such communication conducted in the Arabic language are scarce. Grounded in Relevance Theory, this paper seeks to fill this void by analysing the linguistic structure of Arabic religious posts on Facebook. First, I discuss communication on Facebook, treating it as a relevance-seeking process of writing or sharing posts, with the functions of ‘Like’ and ‘Share’ seen as cues for communicating propositional attitude. Second, I analyse a corpus of around 80 posts, revealing an interesting use of imperatives, interrogatives and conditionals, which shift the interpretation of such posts between descriptive and interpretive readings. I also argue that a rigorous system of incentives is employed in such posts in order to boost their relevance: positive, negative and challenging incentives link the textual to the visual message in an attempt to produce more cognitive effects for the readers.

    Does the Principle of Compositionality Explain Productivity? For a Pluralist View of the Role of Formal Languages as Models

    One of the main motivations for having a compositional semantics is the account of the productivity of natural languages. Formal languages are often part of the account of productivity, i.e., of how beings with finite capacities are able to produce and understand a potentially infinite number of sentences, by offering a model of this process. This account of productivity consists in the generation of proofs in a formal system that is taken to represent the way speakers grasp the meaning of an indefinite number of sentences. The informational basis is restricted to what is represented in the lexicon. This constraint is considered a requirement for the account of productivity, or at least of an important feature of productivity, namely, that we can grasp automatically the meaning of a huge number of complex expressions, far beyond what can be memorized. However, empirical results in psycholinguistics, and especially particular patterns of ERP, show that the brain integrates information from different sources very fast, without any felt effort on the part of the speaker. This shows that formal procedures do not explain productivity. However, formal models are still useful in the account of how we get at the semantic value of a complex expression, once we have the meanings of its parts, even if there is no formal explanation of how we get at those meanings. A practice-oriented view of modeling gives an adequate interpretation of this result: formal compositional semantics may be a useful model for some explanatory purposes concerning natural languages, without being a good model for dealing with other explananda.
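
    To make the productivity claim concrete, here is a minimal sketch of a compositional toy semantics (not from the paper; the lexicon, rules and function names are invented for illustration): a finite lexicon plus a single function-application rule assigns meanings to unboundedly many sentences.

        # Toy compositional semantics: a hedged illustration, not the author's system.
        # A finite lexicon maps words to meanings; one combination rule (function
        # application) derives the meaning of any binary-branching tree over them.

        LEXICON = {
            "Ann":    "ann",
            "Bob":    "bob",
            "sleeps": lambda x: f"sleep({x})",
            "knows":  lambda y: lambda x: f"know({x},{y})",
        }

        def interpret(tree):
            """Compose meanings bottom-up by applying the functional daughter to its sister."""
            if isinstance(tree, str):                   # lexical item
                return LEXICON[tree]
            left, right = (interpret(t) for t in tree)  # binary branching
            return left(right) if callable(left) else right(left)

        # Productivity: novel sentences receive meanings with no new stipulations.
        print(interpret(("Ann", "sleeps")))           # sleep(ann)
        print(interpret(("Bob", ("knows", "Ann"))))   # know(bob,ann)

    The point of the sketch is only that the informational basis is exhausted by the lexicon and the combination rule, which is the constraint the paper argues is challenged by the ERP evidence.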

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures adopted in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; and (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them. Published in the Journal of AI Research (JAIR), volume 61, pp. 75-170; 118 pages, 8 figures, 1 table.
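
    As a concrete, if deliberately simplistic, illustration of generating text from non-linguistic input, the following template-based data-to-text sketch (not from the survey; the record fields, templates and function names are invented) compresses two classic pipeline stages the survey covers, content determination and surface realisation, into a few lines.

        # Hypothetical template-based data-to-text sketch, for illustration only.
        from typing import Dict, List

        def select_content(record: Dict) -> List[str]:
            """Content determination: decide which facts in the input are worth reporting."""
            messages = []
            if record.get("temp_max") is not None:
                messages.append("temp")
            if record.get("rain_mm", 0) > 0:
                messages.append("rain")
            return messages

        def realise(messages: List[str], record: Dict) -> str:
            """Lexicalisation and surface realisation via simple sentence templates."""
            sentences = []
            if "temp" in messages:
                sentences.append(f"Temperatures will reach {record['temp_max']} degrees.")
            if "rain" in messages:
                sentences.append(f"Expect about {record['rain_mm']} mm of rain.")
            return " ".join(sentences)

        record = {"temp_max": 23, "rain_mm": 4}
        print(realise(select_content(record), record))
        # Temperatures will reach 23 degrees. Expect about 4 mm of rain.

    Data-driven NLG systems discussed in the survey replace such hand-written rules with learned models, but the input-to-text mapping being modelled is the same.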