Interaction with Large Language Models (LLMs) is primarily carried out via
prompting. A prompt is a natural language instruction designed to elicit
certain behavior or output from a model. In theory, natural language prompts
enable non-experts to interact with and leverage LLMs. However, for complex
tasks and tasks with specific requirements, prompt design is not trivial.
Creating effective prompts requires skill and knowledge, as well as significant
iteration to determine model behavior and to guide the model toward a
particular goal. We hypothesize that the way in which users
iterate on their prompts can provide insight into how they think prompting and
models work, as well as the kinds of support needed for more efficient prompt
engineering. To better understand prompt engineering practices, we analyzed
sessions of prompt editing behavior, categorizing the parts of prompts users
iterated on and the types of changes they made. We discuss design implications
and future directions based on these prompt engineering practices.