Multi-span answer extraction, also known as the task of multi-span question
answering (MSQA), is critical for real-world applications, as it requires
extracting multiple pieces of information from a text to answer complex
questions. Despite the active studies and rapid progress in English MSQA
research, there is a notable lack of publicly available MSQA benchmark in
Chinese. Previous efforts for constructing MSQA datasets predominantly
emphasized entity-centric contextualization, resulting in a bias towards
collecting factoid questions and potentially overlooking questions requiring
more detailed descriptive responses. To overcome these limitations, we present
CLEAN, a comprehensive Chinese multi-span question answering dataset that
involves a wide range of open-domain subjects with a substantial number of
instances requiring descriptive answers. Additionally, we provide established
models from relevant literature as baselines for CLEAN. Experimental results
and analysis demonstrate the characteristics and challenges of the newly proposed CLEAN
dataset for the community. Our dataset, CLEAN, will be publicly released at
zhiyiluo.site/misc/clean_v1.0_sample.json.