Since the release of LLM-based tools such as GitHub Copilot and ChatGPT, the
media and popular scientific literature, but also journals such as the
Communications of the ACM, have been flooded with opinions on how these tools will
change programming. The opinions range from ``machines will program
themselves'' to ``AI does not help programmers''. Of course, such statements
are meant to stir up a discussion, and should be taken with a grain of salt,
but we argue that such unfounded claims are potentially harmful. Instead,
we propose to investigate which skills are required to develop software using
LLM-based tools.
In this paper we report on an experiment in which we explore if Computational
Thinking (CT) skills predict the ability to develop software using LLM-based
tools. Our results show that the ability to develop software using LLM-based
tools can indeed be predicted by the score on a CT assessment. There are many
limitations to our experiment, and this paper is also a call to discuss how to
approach, preferably experimentally, the question of which skills are required
to develop software using LLM-based tools. We propose to rephrase this question
to specify by what kind of people/programmers, to develop what kind of software,
using what kind of LLM-based tools.