
    HELP ME THINK: A Simple Prompting Strategy for Non-experts to Create Customized Content with Models

    Controlling the text generated by language models and customizing the content has been a long-standing challenge. Existing prompting techniques proposed in pursuit of providing control are task-specific and lack generality; this leaves non-expert users with an overwhelming number of choices when searching for a method suited to their task. The effort these techniques demand, such as writing examples, explanations, and instructions, further limits their adoption among non-expert users. In this paper, we propose a simple prompting strategy, HELP ME THINK, in which we encourage GPT-3 to help non-expert users by asking a set of relevant questions and leveraging the user's answers to execute the task. We demonstrate the efficacy of HELP ME THINK on a variety of tasks, focusing in particular on tasks that are hard for average humans and require significant thinking to perform. We hope our work will encourage the development of unconventional ways to harness the power of large language models.
    Comment: ACL 2023 Findings
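
    As a rough illustration of the idea (not the paper's exact prompts), the strategy can be sketched as a two-stage prompt loop: the model is first asked to generate clarifying questions about the task, the user's answers are collected, and both are fed back for the final generation. The query_llm helper below is a hypothetical stand-in for any GPT-3-style completion API.

        def query_llm(prompt: str) -> str:
            """Placeholder for a call to a GPT-3-style completion endpoint (hypothetical)."""
            raise NotImplementedError("plug in your LLM client here")

        def help_me_think(task: str) -> str:
            # Stage 1: instead of asking for the final output, ask the model for relevant questions.
            questions_prompt = (
                f"I want to {task}, but I am not sure which details matter.\n"
                "Ask me the questions you need answered, one per line."
            )
            questions = query_llm(questions_prompt).strip().splitlines()

            # Stage 2: collect the non-expert user's answers to each question.
            answers = [input(f"{q}\n> ") for q in questions]

            # Stage 3: feed the question-answer pairs back so the model executes the task.
            qa_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(questions, answers))
            final_prompt = (
                f"Task: {task}\n\nUser-provided details:\n{qa_block}\n\n"
                "Use these details to complete the task."
            )
            return query_llm(final_prompt)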

    Initiative Taking in Negotiation

    We examine the relationship between initiative behavior in negotiation dialogues and the goals and outcomes of the negotiation. We propose a novel annotation scheme for dialogue initiative, comprising four labels for initiative and response behavior in a dialogue turn. We annotate an existing human-human negotiation dataset and use initiative-based features to predict both negotiation goal and outcome, comparing our results to prior work that uses other (non-initiative) feature sets. Results show that combining initiative features with the other features leads to improvements over either set alone and over a majority-class baseline.
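
    A minimal sketch of that prediction setup, under the assumption of per-dialogue feature vectors (all feature values and labels below are hypothetical placeholders, not the paper's data):

        import numpy as np
        from sklearn.dummy import DummyClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 200
        initiative_feats = rng.random((n, 4))   # e.g. counts of the four initiative/response labels per dialogue
        other_feats = rng.random((n, 10))       # e.g. non-initiative features from prior work
        y = rng.integers(0, 2, size=n)          # negotiation outcome label (placeholder)

        combined = np.hstack([initiative_feats, other_feats])

        # Compare each feature set and their combination against a majority-class baseline.
        for name, X in [("initiative", initiative_feats), ("other", other_feats), ("combined", combined)]:
            acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
            print(f"{name:10s} accuracy: {acc:.3f}")

        baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), combined, y, cv=5).mean()
        print(f"majority-class baseline: {baseline:.3f}")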

    Understanding Questions that Arise When Working with Business Documents

    While digital assistants are increasingly used to help with various productivity tasks, less attention has been paid to employing them in the domain of business documents. To build an agent that can handle users' information needs in this domain, we must first understand the types of assistance that users desire when working on their documents. In this work, we present results from two user studies that characterize the information needs and queries of authors, reviewers, and readers of business documents. In the first study, we used experience sampling to collect users' questions in situ as they worked with their documents; in the second, we built a human-in-the-loop document Q&A system that provided assistance with a variety of users' questions. Our results have implications for the design of document assistants that complement AI with human intelligence, including whether particular skill sets or roles within the document are needed from human respondents, as well as the challenges around building such systems.
    Comment: This paper will appear in CSCW'2

    InstructExcel: A Benchmark for Natural Language Instruction in Excel

    With the evolution of Large Language Models (LLMs), we can solve increasingly complex NLP tasks across various domains, including spreadsheets. This work investigates whether LLMs can generate code (Excel OfficeScripts, a TypeScript API for executing many tasks in Excel) that solves Excel-specific tasks specified via natural-language user instructions. To do so, we introduce a new large-scale benchmark, InstructExcel, created by leveraging the 'Automate' feature in Excel to automatically generate OfficeScripts from users' actions. Our benchmark includes over 10k samples covering 170+ Excel operations across 2,000 publicly available Excel spreadsheets. Experiments across various zero-shot and few-shot settings show that InstructExcel is a hard benchmark for state-of-the-art models like GPT-4. We observe that (1) using GPT-4 over GPT-3.5, (2) providing more in-context examples, and (3) dynamic prompting can help improve performance on this benchmark.
    Comment: Findings of EMNLP 2023, 18 pages
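
    For intuition, the evaluated setting can be sketched as building a prompt from a natural-language instruction plus optional in-context examples and asking the model to emit an Office Script (a TypeScript function of the form main(workbook: ExcelScript.Workbook)). The query_llm stand-in and the example script below are illustrative assumptions, not samples from the benchmark.

        def query_llm(prompt: str) -> str:
            """Hypothetical stand-in for a GPT-3.5 / GPT-4 style completion call."""
            raise NotImplementedError("plug in your LLM client here")

        # One illustrative (instruction, Office Script) pair used as an in-context example.
        EXAMPLES = [
            "Instruction: Bold the first row of the active sheet.\n"
            "Office Script:\n"
            "function main(workbook: ExcelScript.Workbook) {\n"
            "  const sheet = workbook.getActiveWorksheet();\n"
            "  sheet.getRange('1:1').getFormat().getFont().setBold(true);\n"
            "}\n",
        ]

        def generate_office_script(instruction: str, examples: list[str] = EXAMPLES) -> str:
            # Few-shot prompt: task description, in-context examples, then the new instruction.
            # "Dynamic prompting" would instead retrieve examples similar to the instruction.
            prompt = (
                "Write an Excel Office Script (TypeScript) that performs the instruction.\n\n"
                + "\n".join(examples)
                + f"\nInstruction: {instruction}\nOffice Script:\n"
            )
            return query_llm(prompt)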

    Population food intake clusters and cardiovascular disease incidence: a Bayesian quantifying of a prospective population-based cohort study in a low and middle-income country

    Aims: This study was designed to explore the relationship between cardiovascular disease incidence and population clusters established based on daily food intake.
    Methods: The study examined 5,396 Iranian adults (2,627 males and 2,769 females) aged 35 years and older who participated in a 10-year longitudinal population-based study that began in 2001. The frequency of food group consumption over the preceding year (daily, weekly, or monthly) was assessed using a 49-item qualitative food frequency questionnaire (FFQ) administered in a face-to-face interview conducted by an expert dietitian. Participants were clustered based on their dietary intake using the semi-parametric Bayesian approach of the Dirichlet Process, in which individuals with the same multivariate distribution of dietary intake are assigned to the same cluster. The association between the extracted population clusters and the incidence of cardiovascular diseases was examined using Cox proportional hazards models.
    Results: In the 10-year follow-up, 741 participants (401 men and 340 women) were diagnosed with cardiovascular diseases. Individuals were categorized into three primary dietary clusters: healthy, unhealthy, and mixed. After adjusting for potential confounders, subjects in the unhealthy cluster exhibited a higher risk of cardiovascular disease [Hazard Ratio (HR): 2.059; 95% CI: 1.013, 4.184] than those in the healthy cluster. In the unadjusted model, individuals in the mixed cluster demonstrated a higher risk of cardiovascular disease than those in the healthy cluster (HR: 1.515; 95% CI: 1.097, 2.092), but this association was attenuated after adjusting for potential confounders (HR: 1.145; 95% CI: 0.769, 1.706).
    Conclusion: Individuals in the unhealthy dietary cluster had roughly twice the risk of incident cardiovascular disease compared with those in the healthy cluster. These associations need to be confirmed through further prospective investigations.
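
    As a rough sketch of this kind of analysis pipeline (not the authors' code), a truncated Dirichlet Process mixture can be fit with scikit-learn's BayesianGaussianMixture and the resulting cluster labels passed to a Cox proportional hazards model via the lifelines package. The input file and column names below are hypothetical.

        import pandas as pd
        from sklearn.mixture import BayesianGaussianMixture
        from lifelines import CoxPHFitter

        df = pd.read_csv("ffq_cohort.csv")          # hypothetical file: FFQ items plus follow-up data
        food_cols = [c for c in df.columns if c.startswith("food_")]

        # Truncated Dirichlet Process mixture: up to 10 components, unused ones shrink away.
        dpm = BayesianGaussianMixture(
            n_components=10,
            weight_concentration_prior_type="dirichlet_process",
            random_state=0,
        )
        df["cluster"] = dpm.fit_predict(df[food_cols])

        # Cox proportional hazards: time to CVD event, adjusted for confounders (placeholder names).
        model_df = pd.get_dummies(
            df[["followup_years", "cvd_event", "cluster", "age", "sex", "bmi"]],
            columns=["cluster"], drop_first=True,
        )
        cph = CoxPHFitter()
        cph.fit(model_df, duration_col="followup_years", event_col="cvd_event")
        cph.print_summary()   # hazard ratios for each cluster vs. the reference cluster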

    Does History Help? An Experiment on How Context Affects Crowdsourcing Dialogue Annotation

    Crowds of people can potentially solve some problems faster than individuals, and crowd-sourced data can be leveraged to provide information or solutions faster than traditional means. Many tasks needed for developing dialogue systems, such as annotation, can benefit from crowdsourcing as well. We investigate how to outsource dialogue data annotation through Amazon Mechanical Turk. In particular, we are interested in empirically analyzing how much context from earlier parts of the dialogue (e.g., previous dialogue turns) needs to be provided before the target dialogue turn is presented to the annotator. The answer to this question is important for collecting crowd-sourced annotations appropriately and efficiently. We study how presenting different numbers of previous turns to the Turkers when annotating the sentiment of dyadic negotiation dialogues affects inter-annotator reliability and agreement with the gold standard.
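
    A minimal sketch of the underlying comparison, assuming hypothetical dialogues and annotation matrices (one per context-size condition): render each annotation item with k preceding turns of context, then compare inter-annotator agreement across conditions, for example with mean pairwise Cohen's kappa.

        from itertools import combinations
        from sklearn.metrics import cohen_kappa_score

        def build_hit(dialogue: list[str], target_idx: int, n_context: int) -> str:
            """Render one annotation item: n_context previous turns plus the target turn."""
            start = max(0, target_idx - n_context)
            context = "\n".join(dialogue[start:target_idx])
            return f"{context}\n>>> TARGET: {dialogue[target_idx]}\nLabel the sentiment of the TARGET turn."

        def mean_pairwise_kappa(labels_by_annotator: list[list[int]]) -> float:
            """Average Cohen's kappa over all annotator pairs for one context condition."""
            pairs = list(combinations(labels_by_annotator, 2))
            return sum(cohen_kappa_score(a, b) for a, b in pairs) / len(pairs)

        # Example: a toy dialogue shown with one turn of context.
        dialogue = ["I need the firewood.", "I can give you two firewood for all the water.", "That works for me."]
        print(build_hit(dialogue, target_idx=2, n_context=1))

        # Example: three annotators labelling the same six turns under one context condition.
        condition_labels = [
            [1, 0, 1, 2, 2, 0],
            [1, 0, 1, 2, 1, 0],
            [1, 0, 2, 2, 2, 0],
        ]
        print(f"mean pairwise kappa: {mean_pairwise_kappa(condition_labels):.2f}")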