4,906 research outputs found

    Totalism without Repugnance

    Totalism is the view that one distribution of well-being is better than another just in case the one contains a greater sum of well-being than the other. Many philosophers, following Parfit, reject totalism on the grounds that it entails the repugnant conclusion: that, for any number of excellent lives, there is some number of lives that are barely worth living whose existence would be better. This paper develops a theory of welfare aggregation—the lexical-threshold view—that allows totalism to avoid the repugnant conclusion, as well as its analogues involving suffering populations and the lengths of individual lives. The theory is grounded in some independently plausible views about the structure of well-being, identifies a new source of incommensurability in population ethics, and avoids some of the implausibly extreme consequences of other lexical views, without violating the intuitive separability of lives.

    Individual and Domain Adaptation in Sentence Planning for Dialogue

    One of the biggest challenges in the development and deployment of spoken dialogue systems is the design of the spoken language generation module. This challenge arises from the need for the generator to adapt to many features of the dialogue domain, user population, and dialogue context. A promising approach is trainable generation, which uses general-purpose linguistic knowledge that is automatically adapted to the features of interest, such as the application domain, individual user, or user group. In this paper we present and evaluate a trainable sentence planner for providing restaurant information in the MATCH dialogue system. We show that trainable sentence planning can produce complex information presentations whose quality is comparable to the output of a template-based generator tuned to this domain. We also show that our method easily supports adapting the sentence planner to individuals, and that the individualized sentence planners generally perform better than models trained and tested on a population of individuals. Previous work has documented and utilized individual preferences for content selection, but to our knowledge, these results provide the first demonstration of individual preferences for sentence planning operations, affecting the content order, discourse structure and sentence structure of system responses. Finally, we evaluate the contribution of different feature sets, and show that, in our application, n-gram features often do as well as features based on higher-level linguistic representations.

    Structural variation in generated health reports

    We present a natural language generator that produces a range of medical reports on the clinical histories of cancer patients, and discuss the problem of conceptual restatement in generating various textual views of the same conceptual content. We focus on two features of our system: the demand for 'loose paraphrases' between the various reports on a given patient, with a high degree of semantic overlap but some necessary amount of distinctive content; and the requirement for paraphrasing at primarily the discourse level.

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures adopted in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them. Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.

    Unsupervised Time Series Extraction from Controller Area Network Payloads

    This paper introduces a method for unsupervised tokenization of Controller Area Network (CAN) data payloads using bit-level transition analysis and a greedy grouping strategy. The primary goal of this proposal is to extract individual time series which have been concatenated together before transmission onto a vehicle's CAN bus. This process is necessary because the documentation for how to properly extract data from a network may not always be available; passenger vehicle CAN configurations are protected as trade secrets. At least one major manufacturer has also been found to deliberately misconfigure their documented extraction methods. Thus, this proposal serves as a critical enabler for robust third-party security auditing and intrusion detection systems which do not rely on manufacturers sharing confidential information. Comment: 2018 IEEE 88th Vehicular Technology Conference (VTC2018-Fall).
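    The bit-level transition analysis and greedy grouping described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the idea is that bits belonging to the same concatenated signal tend to exhibit related flip frequencies across consecutive payloads, so adjacent bits with similar transition rates are greedily merged into candidate fields. The `tol` threshold and the 64-bit payload width are assumptions for the sketch.

    ```python
    # Illustrative sketch: unsupervised CAN payload tokenization via
    # bit-level transition analysis and greedy grouping of adjacent bits.
    from typing import List


    def bit_flip_rates(payloads: List[int], width: int = 64) -> List[float]:
        """For each bit position, the fraction of consecutive payload
        pairs in which that bit changes value."""
        flips = [0] * width
        for prev, curr in zip(payloads, payloads[1:]):
            diff = prev ^ curr  # XOR marks the bits that flipped
            for b in range(width):
                if (diff >> b) & 1:
                    flips[b] += 1
        n = max(len(payloads) - 1, 1)
        return [f / n for f in flips]


    def greedy_group(rates: List[float], tol: float = 0.05) -> List[List[int]]:
        """Greedily merge adjacent bit positions whose flip rates differ
        by at most tol; each group is one candidate signal boundary
        (an assumed, simplified grouping criterion)."""
        groups = [[0]]
        for b in range(1, len(rates)):
            if abs(rates[b] - rates[b - 1]) <= tol:
                groups[-1].append(b)
            else:
                groups.append([b])
        return groups
    ```

    On a payload stream whose low byte is a counter, the least-significant bit flips on every frame, higher counter bits flip progressively less often, and the unused bits never flip, so the grouping separates the active counter bits from the constant region.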

    Book Review: Fairness vs. Welfare

    Reviewing Louis Kaplow & Steven Shavell, Fairness versus Welfare (2002).

    Three Approaches to Generating Texts in Different Styles

    Natural Language Generation (NLG) systems generate texts in English and other human languages from non-linguistic input data. Usually there are a large number of possible texts that can communicate the input data, and NLG systems must choose one of these. We argue that style can be used by NLG systems to choose between possible texts, and explore how this can be done by (1) explicit stylistic parameters, (2) imitating a genre style, and (3) imitating an individual’s style.