Large Language Models (LLMs) have demonstrated significant potential in
performing multiple tasks in multimedia applications, ranging from content generation and interactive entertainment to artistic creation. However, the
diversity of downstream tasks in multitask scenarios presents substantial
adaptation challenges for LLMs. While traditional methods often succumb to knowledge confusion in their monolithic dense models, Mixture-of-Experts (MoE) has emerged as a promising solution, with a sparse architecture that enables
effective task decoupling. Inspired by the principles of human cognitive
neuroscience, we design a novel framework \texttt{Intuition-MoR1E} that
leverages the inherent semantic clustering of instances to mimic how the human brain handles multiple tasks, offering implicit guidance to the router for optimized feature allocation. Moreover, we introduce a cutting-edge Rank-1
Experts formulation designed to manage a spectrum of intuitions, demonstrating
enhanced parameter efficiency and effectiveness in multitask LLM finetuning.
Extensive experiments demonstrate that Intuition-MoR1E achieves superior
efficiency and a 2.15\% overall accuracy improvement across 14 public datasets compared with other state-of-the-art baselines.