Developers often dedicate significant time to maintaining and refactoring
existing code. However, most prior work on generative models for code focuses
solely on creating new code, neglecting the unique requirements of editing
existing code. In this work, we explore a multi-round code auto-editing
setting, aiming to predict edits to a code region based on recent changes
within the same codebase. Our model, Coeditor, is a fine-tuned CodeT5 model
with enhancements specifically designed for code editing tasks. We encode code
changes using a line diff format and employ static analysis to build large,
customized model contexts, ensuring the model receives the information
relevant to each prediction.
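As a rough illustration of such a line-diff encoding, consider the minimal
sketch below, built on Python's difflib; the `<add>`/`<del>` markers are
illustrative placeholders and are not necessarily Coeditor's exact token
scheme.

```python
import difflib

def encode_line_diff(before: str, after: str) -> str:
    """Render a code change as a line diff: unchanged lines are kept
    verbatim, deleted lines are prefixed with <del>, and inserted
    lines with <add>. Illustrative sketch only."""
    a, b = before.splitlines(), after.splitlines()
    out = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag in ("replace", "delete"):
            out += [f"<del> {line}" for line in a[i1:i2]]
        if tag in ("replace", "insert"):
            out += [f"<add> {line}" for line in b[j1:j2]]
        if tag == "equal":
            out += a[i1:i2]
    return "\n".join(out)

print(encode_line_diff(
    "def add(a, b):\n    return a + b",
    "def add(a: int, b: int) -> int:\n    return a + b",
))
# <del> def add(a, b):
# <add> def add(a: int, b: int) -> int:
#     return a + b
```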
We collect a code editing dataset from the commit histories of 1650 open-source
Python projects for training and evaluation. In a simplified single-round,
single-edit task, Coeditor significantly outperforms the best code completion
approach -- nearly doubling its exact-match accuracy, despite using a much
smaller model -- demonstrating the benefits of incorporating editing history
for code completion. In a multi-round, multi-edit setting, we observe
substantial gains by iteratively prompting the model with additional user
edits. We open-source our code, data, and model weights to encourage future
research and release a VSCode extension powered by our model for interactive
usage.
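To make the multi-round setting described above concrete, the following is a
minimal sketch of an iterative prompting loop; the `predict` callable and the
prompt layout are hypothetical stand-ins, not Coeditor's actual interface.

```python
from typing import Callable, List

def multi_round_edit(
    predict: Callable[[str], str],  # hypothetical model interface: prompt -> suggested edit
    context: str,                   # code context assembled via static analysis
    user_edits: List[str],          # diff-encoded edits the user makes between rounds
) -> List[str]:
    """Each round rebuilds the prompt from the fixed context plus all
    edits observed so far, so later predictions can condition on the
    user's accumulated changes. Sketch only."""
    history: List[str] = []
    suggestions: List[str] = []
    for edit in user_edits:
        history.append(edit)  # the newest user edit joins the change history
        prompt = context + "\n" + "\n".join(history)
        suggestions.append(predict(prompt))
    return suggestions
```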