Automatic speech recognition (ASR) enables the transcription of communications between air traffic controllers (ATCOs) and aircraft pilots. These transcriptions are later used to extract ATC named entities, e.g., aircraft callsigns. One common challenge lies in speech activity detection (SAD) and speaker diarization (SD). When these fail, segments from two or more speakers remain in the same recording, jeopardizing the overall performance. We propose a system that combines SAD and a
BERT model to perform speaker change detection and speaker role
detection (SRD) by chunking ASR transcripts, i.e., SD with a defined number of speakers performed jointly with SRD. The proposed model is
evaluated on real-life public ATC databases. Our baseline BERT SD model reaches up to 10% and 20% token-based Jaccard error rate (JER) on public and private ATC databases, respectively. We also achieve relative improvements of 32% and 7.7% in JER and SD error rates.