Augmenting NER Datasets with LLMs: Towards Automated and Refined Annotation
In the field of Natural Language Processing (NLP), Named Entity Recognition
(NER) is recognized as a critical technology, employed across a wide array of
applications. Traditional methodologies for annotating datasets for NER models
are challenged by high costs and variations in dataset quality. This research
introduces a novel hybrid annotation approach that combines human effort with
the capabilities of Large Language Models (LLMs). This approach not only
mitigates the noise inherent in manual annotations, such as omitted entities,
thereby improving NER model performance, but also does so cost-effectively.
Additionally, a label mixing strategy addresses the class imbalance that
arises in LLM-based annotations.
Experiments across multiple datasets consistently show that this method
outperforms traditional annotation approaches, even under constrained budget
conditions. This study illuminates the potential of leveraging LLMs to improve
dataset quality, introduces a novel technique to mitigate class imbalance, and
demonstrates the feasibility of achieving high-performance NER in a
cost-effective way.
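The abstract does not specify how label mixing is implemented; as one plausible reading, a minimal sketch is shown below in which human- and LLM-annotated sentences are pooled and LLM sentences containing rare entity classes are oversampled to counter class imbalance. The function name `mix_annotations`, the BIO tag scheme, and the `rare_boost` parameter are all assumptions for illustration, not details from the paper.

```python
import random

def mix_annotations(human_sents, llm_sents, rare_labels, rare_boost=2, seed=0):
    """Hypothetical label-mixing sketch (not the paper's actual method).

    Each sentence is a (tokens, bio_tags) pair. LLM-annotated sentences
    that mention a rare entity class are duplicated `rare_boost` times,
    so underrepresented classes appear more often in the mixed dataset.
    """
    mixed = list(human_sents)
    for tokens, tags in llm_sents:
        # Extract the entity type from BIO tags, e.g. "B-DIS" -> "DIS".
        has_rare = any(
            t.split("-")[-1] in rare_labels for t in tags if t != "O"
        )
        copies = rare_boost if has_rare else 1
        mixed.extend([(tokens, tags)] * copies)
    random.Random(seed).shuffle(mixed)
    return mixed

# Toy usage: one human sentence, two LLM sentences; "DIS" is rare.
human = [(["John", "smiled"], ["B-PER", "O"])]
llm = [(["Acme", "Corp"], ["B-ORG", "I-ORG"]),
       (["flu", "season"], ["B-DIS", "O"])]
mixed = mix_annotations(human, llm, rare_labels={"DIS"})
```

Here the rare-class sentence is duplicated once (1 human + 1 ORG + 2 DIS copies = 4 sentences), which is one simple way a mixing scheme could rebalance classes on a fixed budget.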