Computer programming is a fundamental tool for life scientists, allowing them
to carry out many essential research tasks. However, despite a variety of
educational efforts, learning to write code can be a challenging endeavor for
both researchers and students in life science disciplines. Recent advances in
artificial intelligence have made it possible to translate human-language
prompts to functional code, raising questions about whether these technologies
can aid (or replace) life scientists' efforts to write code. Using 184
programming exercises from an introductory bioinformatics course, we evaluated
the extent to which one such model -- OpenAI's ChatGPT -- can successfully
complete basic- to moderate-level programming tasks. On its first attempt,
ChatGPT solved 139 (75.5%) of the exercises. For the remaining exercises, we
provided natural-language feedback to the model, prompting it to try different
approaches. Within seven attempts, ChatGPT solved 179 (97.3%) of the
exercises. These findings have important implications for life-sciences
research and education. For many programming tasks, researchers no longer need
to write code from scratch. Instead, machine-learning models may produce usable
solutions. Instructors may need to adapt their pedagogical approaches and
assessment techniques to account for these new capabilities that are available
to the general public.