
    Creating Local Coherence: An Empirical Assessment

    Two of the mechanisms for creating natural transitions between adjacent sentences in a text, resulting in local coherence, involve discourse relations and switches of the focus of attention between discourse entities. These two aspects of local coherence have traditionally been discussed and studied separately, but empirical studies have provided strong evidence that we need to understand how the two types of coherence-creating devices interact. Here we present a joint corpus study of discourse relations and entity coherence exhibited in news texts from the Wall Street Journal and test several hypotheses expressed in earlier work about their interaction.

    Coherence in Machine Translation

    Coherence ensures that the individual sentences of a document work together to form a meaningful whole. When properly translated, a coherent document in one language should result in a coherent document in another language. In Machine Translation, however, for reasons of modeling and computational complexity, sentences are pieced together from words or phrases based on short context windows and with no access to extra-sentential context. In this thesis I propose ways to automatically assess the coherence of machine translation output. The work is structured around three dimensions: entity-based coherence, coherence as evidenced via syntactic patterns, and coherence as evidenced via discourse relations. For the first time, I evaluate existing monolingual coherence models on this new task, identifying issues and challenges that are specific to the machine translation setting. To address these issues, I adapt a state-of-the-art syntax model, which also results in improved performance on the monolingual task. The results clearly indicate how much more difficult the new task is than detecting shuffled texts. I propose a new coherence model, exploring the crosslingual transfer of discourse relations in machine translation. This model is novel in that it measures the correctness of a discourse relation by comparison to the source text rather than to a reference translation. I identify patterns of incoherence common across different language pairs, and create a corpus of machine-translated output annotated with coherence errors for evaluation purposes. I then examine lexical coherence in a multilingual context, as a preliminary study for crosslingual transfer.
Finally, I determine how the new and adapted models correlate with human judgements of translation quality, and suggest that general evaluation in machine translation would benefit from a coherence component that evaluates the translation output with respect to the source text.
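The entity-based coherence models referred to in this abstract are typically variants of the entity-grid approach, which reduces a document to a grid recording which entities occur in which sentences and scores coherence from the distribution of entity transitions between adjacent sentences. A minimal sketch of that idea (entity extraction is replaced by pre-extracted mention sets, and syntactic roles are collapsed to presence/absence; both are simplifications, and the example document is invented):

```python
from collections import Counter

def entity_grid(sentences):
    """Build a sentence-by-entity grid: 'X' if the entity is
    mentioned in the sentence, '-' otherwise (syntactic roles
    such as subject/object are collapsed for simplicity)."""
    entities = sorted({e for s in sentences for e in s})
    return [["X" if e in s else "-" for e in entities] for s in sentences]

def transition_probs(grid):
    """Relative frequency of each transition (X->X, X->-, ...) between
    adjacent sentences: the basic feature of entity-grid models."""
    counts = Counter()
    for prev, curr in zip(grid, grid[1:]):
        for a, b in zip(prev, curr):
            counts[(a, b)] += 1
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Hypothetical document, each sentence reduced to its entity mentions.
doc = [{"microsoft", "profits"}, {"microsoft", "revenue"}, {"revenue"}]
probs = transition_probs(entity_grid(doc))
# A high probability of ('X', 'X') transitions means entities persist
# across adjacent sentences, a signal of local coherence.
```

A coherence model is then trained on such transition distributions, ranking an original document above degraded versions, or, in the machine translation setting, above incoherent output.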

    Discourse parsing: Inferring discourse structure, modeling coherence, and its applications

    Ph.D. (Doctor of Philosophy)

    An Exploratory Analysis of TED Talks in English and Lithuanian, Portuguese and Turkish Translations

    CC BY 4.0. This paper contributes to the question of how discourse relations are realised in TED talks. Drawing on an annotated, multilingual discourse corpus of TED talk transcripts, we examine discourse relations in English and in Lithuanian, Portuguese and Turkish translations, concentrating on three aspects: the degree of explicitness in discourse relations, the extent to which explicit and implicit relations are encoded inter- or intra-sententially, and whether the top-level discourse relation senses employed in English differ in the target languages. The study shows that while the target languages differ from English in the first two dimensions, they do not display considerable differences in the third. The paper thus reveals variations in the realisation of discourse relations in translated transcripts of a spoken genre in three languages and offers some methodological insights for dealing with the issues surrounding discourse relations.

    Identity, Effectiveness, and Newness in Transjudicialism's Coming of Age

    This Article attempts to expose and problematize the ideological connections and normative commitments between these theoretical explanations of effectiveness and the pragmatic, process-oriented proposals made in the 1990s, when the United Nations was searching for ways to renew the discipline of international human rights law while avoiding the dual risks of politicization and Third World normative fragmentation. The liberal theory of effective supranational adjudication was the culmination of decade-long efforts by American liberal internationalists to provide a theoretical basis for, and programmatic proposals towards, a more effective international human rights regime. Their theory aims at structuring the interface between the universal and regional human rights systems through a reconfigured account of transjudicialism. It can be taken to exemplify a particular liberal legal sensibility, shared with post-war mainstream human rights lawyers, that the author calls transnationalism-legal-process-antiformalism (TLPAF). TLPAF regards transnational governance through disaggregated processes of cooperation and dialogue in information exchange as instrumental to the development and effectiveness of supranational institutions. Formalist legal structures are, accordingly, rebuffed and displaced by the specification and enforcement of substantive human rights norms through informal, teleological procedural mechanisms.

    The Pinboard and the Paradox of Pain: An Experiment of Post-Epistemological Method in Representing the Lived Experience of Persistent Pain

    This thesis is about the crisis in representation that accompanies the attempt to account for lived experience, with particular reference to bodily pain in social science. The diagnosis of this problem of experience identifies epistemology as an inappropriate means of knowing that initiates a translational paradox unable to satisfy the simultaneous demands of making lived experience familiar in representational form yet retaining the foreignness of the original experience at the same time. This problem of simultaneity is not a problem, however, if it is built into a way of knowing, something that escapes epistemological conditions of possibility with its either/or of singularities. To know in such ‘double vision’, or fractionally, characterises post-epistemological thinking. This thesis draws on a relatively underdeveloped method for practicing a fractional means of knowing from post-actor-network theory, that of the pinboard, and explores how it might be usefully applied to the problem of experience. The thesis constitutes an experiment in producing a social science account of the lived experience of chronic pain using this method as an alternative to conventional epistemological techniques that initiate the problem of experience. Through initial theoretical discussion, followed by reflection on its practical application involving the construction of fractional accounts of lived experience for five participants experiencing chronic pain (interviewed individually over several sessions), the pinboard is developed as a technique that seeks to maintain ‘double vision’ whilst inherently resisting attempts to resolve the juxtaposition it makes visible, enacting and engaging in an ontological politics with conventional methods of social analysis. This includes discussion of how the method might be transported from methodological knowledge spaces to effectively intervene on such conventional methods.

    Fuzzy Coherence: Making Sense of Continuity in Hypertext Narratives

    Hypertexts are digital texts characterized by interactive hyperlinking and a fragmented textual organization. Increasingly prominent since the early 1990s, hypertexts have become a common text type both on the Internet and in a variety of other digital contexts. Although hypertext has been studied widely in disciplines like hypertext theory and media studies, formal linguistic approaches to it remain relatively rare. This study examines coherence negotiation in hypertext with particular reference to hypertext fiction. Coherence, or the quality of making sense, is a fundamental property of textness. Proceeding from the premise that coherence is a subjectively evaluated property rather than an objective quality arising directly from textual cues, the study focuses on the processes through which readers interact with hyperlinks and negotiate continuity between hypertextual fragments. The study begins with a typological discussion of textuality and an overview of the historical and technological precedents of modern hypertexts. Then, making use of text linguistic, discourse analytical, pragmatic, and narratological approaches to textual coherence, the study takes established models developed for analyzing and describing conventional texts, and examines their applicability to hypertext. Primary data derived from a collection of hyperfictions is used throughout to illustrate the mechanisms in practice. Hypertextual coherence negotiation is shown to require the ability to cognitively operate between local and global coherence by means of processing lexical cohesion, discourse topical continuities, inferences and implications, and shifting cognitive frames. The main conclusion of the study is that the style of reading required by hypertextuality fosters a new paradigm of coherence.
Defined as fuzzy coherence, this new approach to textual sensemaking is predicated on an acceptance of the coherence challenges readers experience when the act of reading comes to involve repeated encounters with referentially imprecise hyperlinks and discourse topical shifts. A practical application of fuzzy coherence is shown to be in effect in the way coherence is actively manipulated in hypertext narratives.

Hypertext is a general name for a digital text type usually composed of relatively short fragments that are arranged according to choices made by the reader. Hypertexts became widespread in the early 1990s, especially through the influence of the Internet, and today a large proportion of texts read on a computer screen are hypertexts. The characteristics of hypertexts have been studied over the past 20 years, especially within hypertext theory and media studies, but their linguistic study is in many respects still in its early stages. This study examines the construction of coherence, that is, textual unity, in hypertexts and especially in hypertext narratives. Coherence here refers to the reader-driven experience of the structural meaningfulness of a text: the reader's impression that the different parts of the text belong together as a meaningful whole. For hypertext, the central problem of coherence formation concerns the imprecision of the referential relations of hyperlinks, the dynamics of an indirect interactional situation, and the experience of thematic discontinuity created by the text's fragmented structure. The topic is examined from the standpoints of text linguistics, discourse analysis, pragmatics and narratology. The study presents the various mechanisms involved in interpreting the referential relations of hyperlinks with examples drawn from the primary material, emphasising in particular the differences in the applicability of the theoretical models between hypertexts and traditional linear texts. The study concludes that the hypertextual text type is giving rise to a new way of reading, one that requires a re-evaluation of the concept of coherence. This special type of coherence, which the study calls fuzzy coherence, is based on accepting recurring small coherence problems as part of the reading experience. Especially within hypertext fiction, fuzzy coherence can also be exploited as a narrative device.

    Binarism and indeterminacy in the novels of Thomas Pynchon

    Bibliography: pages 397-401. I attempt in this thesis to graft together a close critical, and predominantly thematic, reading of Thomas Pynchon's novels with selected issues treated in the work of Jacques Derrida on philosophy and textuality, illustrating how this work demands the revision and interrogation of several major critical issues, concepts, dualisms and presuppositions. The thesis consists of an Introduction which sets forth a brief rationale for the graft described above, followed by a short and unavoidably inadequate synopsis of Derrida's work with a brief review and explication of those of his 'concepts' which play an important role in my reading of Pynchon's texts. The Introduction is succeeded by three lengthy chapters in which I discuss, more or less separately, each of Pynchon's three novels to date. These are V. (1963), The Crying of Lot 49 (1966) and Gravity's Rainbow (1973), and I discuss them in the order of their appearance, devoting a chapter to each. I attempt to treat different but related issues, preoccupations, themes and tropes in each of the novels to avoid repeating myself, engaging the apparatuses derived from Derrida's writing where deemed strategic and instructive. I suggest, moreover, that several of the issues examined apropos the novel under consideration in any one chapter apply mutatis mutandis to the other novels. Each chapter therefore to some extent conducts a reading of the novels which it does not treat directly. Finally, supervising these separate chapters is a sustained focus on the epistemology of binarism and digitalism, and the conceptual dualisms which structure and inform major portions of the thematic and rhetorical dimensions of the novels. The thesis concludes with a Bibliography and a summary Epilogue which seeks to assess briefly the 'achievement' of Pynchon's writing.

    Communicating strategically: public relations and organisational legitimacy

    This thesis aims to facilitate an understanding of some of the critical debates in public relations theory and practice. It joins others in contributing to a shift from a functional systems-based public relations paradigm to one where public relations is transparently seen as playing a role in shaping democracy in a global society. The research analyses NATO's communication operations in the Kosovo Campaign against the Federal Republic of Yugoslavia, examining the case from a number of different perspectives. The thesis contributes to the body of knowledge of public relations practices and how, in this case, they were used to inform and persuade publics of the moral cause of a bombing mission to achieve specific strategic organisational and communication goals. Further, it contextualises the case of NATO as an organisation facing a crisis in legitimacy following the end of the Cold War. It demonstrates how the Kosovo Campaign provided a vehicle to transform NATO's identity while retaining military capabilities, to make the organisation relevant to the global demands of the 21st century. In this way, NATO could claim a unique self and maintain its credibility and relevance. The thesis argues that NATO's public relations campaign was successful in maintaining both credibility and popular support for a 78-day bombing campaign within the 19 nations of the Alliance. The campaign allowed NATO to claim that it was the only organisation that could provide security and stability, as well as be the main bulwark of the defence of Western values in a rapidly globalising and changing world. Moreover, by framing the Kosovo air campaign as a humanitarian intervention, NATO was not only able to legitimise its actions but also to present its military might with an acceptable human face in order to achieve its broader ideological goals in Europe.
This thesis demonstrates how military interventions on behalf of powerful interests can be legitimised if an appropriate public relations framework is used and acceptable communication strategies are employed. It suggests how citizens of democratic countries can be led to support decision-makers who present themselves as acting altruistically even when their actions may be self-interested.

    Global Inference for Sentence Compression: An Integer Linear Programming Approach

    Institute for Communicating and Collaborative Systems. In this thesis we develop models for sentence compression. This text rewriting task has recently attracted a lot of attention due to its relevance for applications (e.g., summarisation) and its simple formulation by means of word deletion. Previous models for sentence compression have been inherently local and thus fail to capture the long-range dependencies and complex interactions involved in text rewriting. We present a solution by framing the task as an optimisation problem with local and global constraints, and recast existing compression models into this framework. Using the constraints we instil syntactic, semantic and discourse knowledge that the models otherwise fail to capture. We show that the addition of constraints allows relatively simple local models to reach state-of-the-art performance for sentence compression. The thesis provides a detailed study of sentence compression and its models. The differences between automatically and manually created compression corpora are assessed, along with how compression varies across written and spoken text. We also discuss various techniques for automatically and manually evaluating compression output against a gold standard. Models are reviewed based on their assumptions, training requirements, and scalability. We introduce a general method for extending previous approaches to allow for more global models. This is achieved through the optimisation framework of Integer Linear Programming (ILP). We reformulate three compression models (an unsupervised model, a semi-supervised model and a fully supervised model) as ILP problems and augment them with constraints. These constraints are intuitive for the compression task and are both syntactically and semantically motivated. We demonstrate how they improve compression quality and reduce the requirements on training material.
Finally, we delve into document compression, where the task is to compress every sentence of a document and use the resulting summary as a replacement for the original document. For document-based compression we investigate discourse information and its application to the compression task. Two discourse theories, Centering and lexical chains, are used to automatically annotate documents. These annotations are then used in our compression framework to impose additional constraints on the resulting document. The goal is to preserve the discourse structure of the original document and most of its content. We show how a discourse-informed compression model can outperform a discourse-agnostic state-of-the-art model using a question-answering evaluation paradigm.
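The optimisation framing described in this abstract can be made concrete with a toy example. A real system hands the objective and constraints to an ILP solver; the sketch below instead enumerates the binary deletion decisions by brute force, which keeps the same structure (a local score for each kept word, plus global constraints) visible in a few lines. The word scores and the verb-retention constraint are invented for illustration:

```python
from itertools import product

def compress(words, score, constraints, max_len):
    """Choose the 0/1 keep-vector maximising the summed local scores
    of kept words, subject to global constraints: the decision
    structure that an ILP solver would optimise exactly."""
    best, best_val = None, float("-inf")
    for keep in product([0, 1], repeat=len(words)):
        kept = [w for w, k in zip(words, keep) if k]
        if len(kept) > max_len:                    # global length constraint
            continue
        if not all(c(kept) for c in constraints):  # e.g. syntactic constraints
            continue
        val = sum(score(w) for w in kept)
        if val > best_val:
            best, best_val = kept, val
    return best

# Hypothetical local scores: content words are worth keeping.
scores = {"the": 0.1, "minister": 0.9, "reportedly": 0.2,
          "resigned": 0.9, "yesterday": 0.4}
# A global, syntactically motivated constraint: the main verb survives.
constraints = [lambda kept: "resigned" in kept]

sentence = ["the", "minister", "reportedly", "resigned", "yesterday"]
out = compress(sentence, scores.get, constraints, max_len=3)
# out == ["minister", "resigned", "yesterday"]
```

Brute-force enumeration is exponential in sentence length; the point of the ILP formulation is that the same objective and constraints can be solved optimally and efficiently by a dedicated solver.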