Pre-trained large language models such as ChatGPT have substantially improved code
generation. As these models scale, they are increasingly expected to handle more
intricate tasks. In bioinformatics, generating functional programs poses additional
challenges owing to the breadth of domain knowledge required, the complexity of the
data operations involved, and the intricate functional dependencies between those
operations. Here, we present BioCoder, a benchmark developed to evaluate existing
pre-trained models on generating bioinformatics code. For function-level code
generation, BioCoder covers potential package dependencies, class declarations, and
global variables. It
incorporates 1026 functions and 1243 methods in Python and Java from GitHub and
253 examples from the Rosalind Project. BioCoder also includes a fuzz-testing
framework for evaluation, which we have applied to a broad range of models,
including InCoder, CodeGen, CodeGen2, SantaCoder, StarCoder, StarCoder+,
InstructCodeT5+, and ChatGPT. Our detailed analysis of these models emphasizes
the importance of domain knowledge, pragmatic code generation, and contextual
understanding. Our dataset, benchmark, Docker images, and scripts required for
testing are all available at https://github.com/gersteinlab/biocoder.