Large Language Models (LLMs) are revolutionizing the field of computing
education with their powerful code-generating capabilities. Traditional
pedagogical practices have focused on code-writing tasks, but emphasis is now
shifting towards reading, comprehending, and evaluating LLM-generated code.
Alongside this shift, an important new skill is emerging:
the ability to solve programming tasks by constructing good prompts for
code-generating models. In this work we introduce a new type of programming
exercise to hone this nascent skill: 'Prompt Problems'. Prompt Problems are
designed to help students learn how to write effective prompts for AI code
generators. A student solves a Prompt Problem by crafting a natural language
prompt that, when provided as input to an LLM, generates code that successfully
solves a specified programming task. We also present a new web-based tool
called Promptly, which hosts a repository of Prompt Problems and supports the
automated evaluation of prompt-generated code. We deploy Promptly for the first
time in one CS1 and one CS2 course and describe our experiences, which include
student perceptions of this new type of activity and their interactions with
the tool. We find that students are enthusiastic about Prompt Problems and
appreciate how the problems engage their computational thinking skills and
expose them to new programming constructs. We discuss ideas for future
variations of Prompt Problems and the need to carefully
study their integration into classroom practice.
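
To make the evaluation loop described above concrete: a Prompt Problem pipeline takes a student's prompt, asks an LLM for code, and runs the result against instructor-supplied tests. The Python sketch below illustrates one way such a loop could work; the generate_code stub, the reverse() task, and the test cases are illustrative assumptions, not details of the Promptly implementation.

    # Minimal sketch of a Prompt Problem evaluation loop (illustrative only;
    # not the Promptly implementation described in the paper).

    def generate_code(prompt: str) -> str:
        """Hypothetical stand-in for an LLM API call returning Python source."""
        raise NotImplementedError("plug in an LLM client here")

    def evaluate(prompt: str, func_name: str, test_cases: list) -> bool:
        """Run LLM-generated code against instructor-supplied test cases."""
        source = generate_code(prompt)   # student's prompt -> generated code
        namespace = {}
        exec(source, namespace)          # define the generated function
        func = namespace[func_name]
        return all(func(*args) == expected for args, expected in test_cases)

    # Hypothetical task: "Write a function reverse(s) that reverses a string."
    tests = [(("abc",), "cba"), (("",), "")]
    # passed = evaluate(student_prompt, "reverse", tests)

In this sketch, a prompt "solves" the problem exactly when the code it elicits passes every test case, which mirrors the automated evaluation workflow the abstract attributes to Promptly.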