HELP ME THINK: A Simple Prompting Strategy for Non-experts to Create Customized Content with Models
Controlling the text generated by language models and customizing the content has been a long-standing challenge. Existing prompting techniques proposed in pursuit of providing control are task-specific and lack generality; this leaves non-expert users with an overwhelming number of choices when trying to find a suitable method for their task. The effort associated with those techniques, such as writing examples, explanations, and instructions, further limits their adoption among non-expert users. In this paper, we propose a simple prompting strategy, HELP ME THINK, in which we encourage GPT-3 to help non-expert users by asking a set of relevant questions and leveraging the user's answers to execute the task. We demonstrate the efficacy of our technique HELP ME THINK on a variety of tasks. Specifically, we focus on tasks that are hard for average humans and require significant thinking to perform. We hope our work will encourage the development of unconventional ways to harness the power of large language models.
Comment: ACL 2023 Findings
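As a rough illustration of the idea, the sketch below (not the paper's code) shows the question-then-answer flow: the model first generates clarifying questions, the non-expert user answers them, and the answers are fed back as context to execute the task. The `query_model` helper and the prompt wordings are hypothetical placeholders for any GPT-3-style completion call.

```python
def query_model(prompt: str) -> str:
    """Placeholder for a large-language-model completion call."""
    raise NotImplementedError("wire up your model API here")

def help_me_think(task_description: str) -> str:
    # Step 1: ask the model to generate clarifying questions for the user.
    question_prompt = (
        f"I want to {task_description}, but I don't know how to think about it. "
        "Ask me a set of relevant questions that would help."
    )
    questions = [q for q in query_model(question_prompt).splitlines() if q.strip()]

    # Step 2: collect the non-expert user's answers to each question.
    answers = [input(f"{q}\n> ") for q in questions]

    # Step 3: have the model execute the task, using the Q&A pairs as context.
    qa_context = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(questions, answers))
    task_prompt = f"{qa_context}\n\nUsing the answers above, {task_description}."
    return query_model(task_prompt)
```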
Effectiveness of Teaching Metamorphic Testing
This paper is an attempt to understand the effectiveness of teaching metamorphic properties in a senior/graduate software engineering course classroom environment by gauging students' success in identifying these properties on the basis of the lectures and materials provided in class. The main findings were: (1) most of the students either misunderstood what metamorphic properties are or fell short of identifying all the metamorphic properties in their respective projects, (2) most of the students who were successful in finding all the metamorphic properties in their respective projects had incorporated certain arithmetic rules into their project logic, and (3) most of the properties identified were numerical metamorphic properties. A possible reason for this could be that the two relevant lectures given in class cited examples of metamorphic properties that were based on numerical properties. Based on the findings of the case study, pertinent suggestions were made in order to improve the impact of the lectures provided for Metamorphic Testing.
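To make the term concrete, a numerical metamorphic property of the kind the students most often identified looks like the following (an illustrative example, not one of the course projects): rather than asserting an exact expected output, the test asserts a relation between outputs on related inputs.

```python
import math
import random

def check_sine_metamorphic(trials: int = 1000) -> None:
    """Metamorphic test: we cannot state sin(x) exactly for arbitrary x,
    but the relation sin(pi - x) == sin(x) must hold for every input."""
    for _ in range(trials):
        x = random.uniform(-100.0, 100.0)
        assert math.isclose(math.sin(math.pi - x), math.sin(x), abs_tol=1e-9)

check_sine_metamorphic()
```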
Review of Literature on Rural Road Improvement
Rural roads are the tertiary road system within the total road network, providing connectivity for the rural population to markets and other facility centres. In India, rural roads are planned and programmed in favour of overall rural development, with the aim of providing all-weather connectivity, and some level of achievement has been reached. The investment of funds for road development has been guided by policy guidelines and priorities for rural roads. Improvement of rural roads is needed where satisfactory results have not been obtained.
Investigating the Failure Modes of the AUC metric and Exploring Alternatives for Evaluating Systems in Safety Critical Applications
With the increasing importance of safety requirements associated with the use of black-box models, evaluating the selective answering capability of models has become critical. Area under the curve (AUC) is used as a metric for this purpose. We find limitations in AUC; for example, a model with a higher AUC is not always better at selective answering. We propose three alternate metrics that fix the identified limitations. Experimenting with ten models, our results under the new metrics show that newer and larger pre-trained models do not necessarily perform better at selective answering. We hope our insights will help develop better models tailored for safety-critical applications.
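For context, a common way such a selective-answering AUC is computed is to sweep an abstention threshold over model confidence and integrate accuracy over coverage; the sketch below reflects that reading (our assumption, not necessarily the paper's exact formulation). The limitation flagged in the abstract is that two models can share this area while behaving very differently at the coverage levels that matter in safety-critical use.

```python
import numpy as np

def selective_auc(confidences: np.ndarray, correct: np.ndarray) -> float:
    """Area under the accuracy-coverage curve for selective answering."""
    order = np.argsort(-confidences)            # answer most confident first
    correct = correct[order].astype(float)
    n = len(correct)
    coverage = np.arange(1, n + 1) / n          # fraction of questions answered
    accuracy = np.cumsum(correct) / np.arange(1, n + 1)
    return float(np.trapz(accuracy, coverage))  # integrate accuracy over coverage

# Toy usage: confident-and-right scores higher than confident-and-wrong.
conf = np.array([0.9, 0.8, 0.7, 0.6])
selective_auc(conf, np.array([1, 1, 0, 1]))  # higher area
selective_auc(conf, np.array([0, 1, 1, 1]))  # lower area
```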
How Many Data Samples is an Additional Instruction Worth?
The recently introduced instruction paradigm empowers non-expert users to leverage NLP resources by defining a new task in natural language. Instruction-tuned models have significantly outperformed multitask learning models (without instructions); however, they are far from state-of-the-art task-specific models. Conventional approaches to improving model performance, such as creating datasets with a large number of task instances or making architectural changes to the model, may not be feasible for non-expert users. However, such users can write alternate instructions to represent a task. Is instruction augmentation helpful? We augment a subset of tasks in the expanded version of NATURAL INSTRUCTIONS with additional instructions and find that doing so significantly improves model performance (by up to 35%), especially in the low-data regime. Our results indicate that an additional instruction can be equivalent to ~200 data samples on average across tasks.
Comment: EACL 2023 Findings
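The augmentation itself can be pictured as simply repeating each task instance under alternate phrasings of its instruction, so the model trains on several instruction variants instead of one. The minimal sketch below uses invented field names and instructions for illustration; it is not the NATURAL INSTRUCTIONS schema.

```python
def augment_with_instructions(instances, instructions):
    """Pair every (input, output) instance with each alternate instruction."""
    return [
        {"instruction": instr, "input": inp, "output": out}
        for instr in instructions
        for inp, out in instances
    ]

instances = [("I loved this film.", "positive")]
instructions = [
    "Classify the sentiment of the review as positive or negative.",
    "Decide whether the reviewer liked the movie; answer positive or negative.",
]
augmented = augment_with_instructions(instances, instructions)
# 1 instance x 2 instruction variants -> 2 training examples
```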