    Learning to Write with Coherence From Negative Examples

    Coherence is one of the critical factors that determine the quality of writing. We propose a writing relevance (WR) training method for neural encoder-decoder natural language generation (NLG) models that improves the coherence of generated continuations by leveraging negative examples. The WR loss regresses the joint vector representation of the context and the generated sentence toward the positive continuation by contrasting it with negatives. We compare our approach with Unlikelihood (UL) training in a text continuation task on commonsense natural language inference (NLI) corpora, to show which method better models coherence by avoiding unlikely continuations. Human evaluators' preference for our approach demonstrates its efficacy in improving coherence.
    Comment: 4+1 pages, 4 figures, 2 tables. ICASSP 2022 rejected.
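
    The abstract describes the WR objective only in words. Below is a minimal PyTorch sketch of one way such a contrastive continuation loss could look; the function name wr_style_loss, the InfoNCE-style softmax form, and the temperature parameter are illustrative assumptions, not the paper's exact formulation.

        import torch
        import torch.nn.functional as F

        def wr_style_loss(ctx, pos, negs, temperature=0.1):
            """Contrastive continuation loss (illustrative sketch, not the paper's exact WR loss).

            ctx:  (B, D) context representations
            pos:  (B, D) representations of the positive continuations
            negs: (B, K, D) representations of K negative continuations
            """
            # Cosine similarity via L2-normalised dot products.
            ctx, pos = F.normalize(ctx, dim=-1), F.normalize(pos, dim=-1)
            negs = F.normalize(negs, dim=-1)
            pos_sim = (ctx * pos).sum(-1, keepdim=True)        # (B, 1)
            neg_sim = torch.einsum('bd,bkd->bk', ctx, negs)    # (B, K)
            # Softmax cross-entropy with the positive at index 0 pulls the
            # context representation toward the positive continuation and
            # away from the negatives.
            logits = torch.cat([pos_sim, neg_sim], dim=-1) / temperature
            labels = torch.zeros(ctx.size(0), dtype=torch.long, device=ctx.device)
            return F.cross_entropy(logits, labels)

    For example, wr_style_loss(torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 4, 256)) returns a scalar that could be added to the usual likelihood loss during fine-tuning.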

    Literal Perceptual Inference

    In this paper, I argue that theories of perception that appeal to Helmholtz's idea of unconscious inference ("Helmholtzian" theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it is defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.

    Effective teaching of inference skills for reading: literature review

    Comprehension, Use Cases and Requirements

    Within requirements engineering it is generally accepted that in writing specifications (or indeed any requirements-phase document), one attempts to produce an artefact that will be simple for the user to comprehend. That is, whether the document is intended for customers to validate requirements or for engineers to understand what the design must deliver, comprehension is an important goal for the author. Indeed, advice on producing 'readable' or 'understandable' documents is often included in courses on requirements engineering. However, few researchers, particularly within the software engineering domain, have attempted either to define or to understand the nature of comprehension and its implications for guidance on the production of quality requirements. In contrast, this paper examines thoroughly the nature of textual comprehension, drawing heavily on research in discourse processing, and suggests some implications for requirements (and other) software documentation. In essence, we find that the guidance on writing requirements prevalent within software engineering may rest on assumptions that oversimplify the nature of comprehension, and that these assumptions may lead to rules which detract from the quality of the requirements document and, thus, from the understanding gained by the reader. Finally, the paper suggests lessons learned which may be useful in formulating future guidance for the production of requirements documentation.

    Exchangeability for sets of desirable gambles

    Sets of desirable gambles constitute a quite general type of uncertainty model with an interesting geometrical interpretation. We study exchangeability assessments for such models, and prove a counterpart of de Finetti's finite representation theorem. We show that this representation theorem has a very nice geometrical interpretation. We also lay bare the relationships between the representations of updated exchangeable models, and discuss conservative inference (natural extension) under exchangeability.
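
    As a gloss on the representation result described above, here is a minimal LaTeX sketch of the symmetrisation step behind de Finetti-style finite representations; the notation (ex_N, T, and the sets D and S) is assumed for illustration rather than taken from the paper.

        % Symmetrising a gamble f on sequences in X^N over all permutations pi:
        \[
          (\mathrm{ex}_N f)(x) \;=\; \frac{1}{N!} \sum_{\pi \in \mathcal{P}_N} f(\pi x),
          \qquad x \in \mathcal{X}^N .
        \]
        % ex_N f depends on x only through its count vector T(x), which records
        % how often each outcome occurs, so it induces a gamble g on count vectors:
        \[
          \mathrm{ex}_N f \;=\; g \circ T .
        \]
        % Roughly, an exchangeable set of desirable gambles D on X^N is then
        % represented by a set S of gambles on count vectors:
        \[
          f \in \mathcal{D} \iff \mathrm{ex}_N f \in \mathcal{S} .
        \]

    Since ex_N is a linear map and sets of desirable gambles are convex cones, a representation of this shape has the geometrical flavour the abstract alludes to.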