Information Extraction (IE) stands as a cornerstone in natural language
processing, traditionally segmented into distinct subtasks. The advent of
Large Language Models (LLMs) heralds a paradigm shift, suggesting the
feasibility of a single model addressing multiple IE subtasks. In this vein,
we introduce the General Information Extraction Large Language Model (GIELLM),
which integrates Text Classification, Sentiment Analysis, Named Entity
Recognition, Relation Extraction, and Event Extraction using a uniform
input-output schema. To our knowledge, this is the first model to handle
such a diverse array of IE subtasks simultaneously. Notably, the
GIELLM leverages the Mutual Reinforcement Effect (MRE), enhancing performance
in integrated tasks compared to their isolated counterparts. Our experiments
demonstrate State-of-the-Art (SOTA) results on five of the six Japanese mixed
datasets, significantly surpassing GPT-3.5-Turbo. Further, an independent
evaluation using the novel Text Classification Relation and Event
Extraction (TCREE) dataset corroborates the synergistic advantages of MRE in
text and word classification. This breakthrough paves the way for most IE
subtasks to be subsumed under a single LLM framework, reducing the need for
specialized, task-specific fine-tuned models.
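To make the uniform input-output schema concrete, below is a minimal sketch of how a single prompt might request several IE subtasks at once. The `build_prompt` helper, the task names, and the line-per-task output format are illustrative assumptions, not the exact GIELLM schema.

```python
# Illustrative sketch only: the prompt wording and output format are
# assumptions, not the exact schema used by GIELLM.

def build_prompt(text: str, tasks: list[str]) -> str:
    """Compose one instruction prompt covering several IE subtasks."""
    task_list = "; ".join(tasks)
    return (
        f"Perform the following tasks on the input text: {task_list}.\n"
        f"Input: {text}\n"
        "Output (one line per task, formatted as 'task: result'):"
    )

# Example: one prompt combines text classification, NER, and relation
# extraction, letting the model exploit the Mutual Reinforcement Effect
# across subtasks instead of solving each in isolation.
prompt = build_prompt(
    "XX Corp. acquired YY Inc. in Tokyo last week.",
    ["text classification", "named entity recognition", "relation extraction"],
)
print(prompt)
```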