Code summarization, the task of generating useful natural-language comments for
given source code, has long been of interest. Most existing code summarization models are
trained and validated on widely-used code comment benchmark datasets. However,
little is known about the quality of the benchmark datasets built from
real-world projects. Are the benchmark datasets as good as expected? To bridge
the gap, we conduct a systematic study to assess and improve the quality of
four benchmark datasets widely used for code summarization tasks. First, we
propose an automated code-comment cleaning tool that can accurately detect
noisy data in existing benchmark datasets caused by inappropriate data
preprocessing operations. Then, we apply the tool to assess the data quality
of the four benchmark datasets based on the detected noise. Finally, we
conduct comparative experiments to investigate the impact of noisy data on the
performance of code summarization models. The results show that such
preprocessing noise is widespread across all four benchmark datasets, and that
removing the noisy data leads to a significant improvement in the performance
of code summarization. We believe that these findings and insights will enable a better
understanding of data quality in code summarization tasks, and pave the way for
relevant research and practice.