Knowledge-based dialogue systems with internet retrieval have recently
attracted considerable attention from researchers. These systems overcome a
major limitation of traditional knowledge-based dialogue systems, in which the
timeliness of knowledge cannot be assured, and thus offer greater practical
value. Knowledge-based dialogue systems with internet retrieval are typically
decomposed into three tasks: Retrieval Decision, Query Generation, and
Response Generation. However, many existing studies assume
that all conversations require external knowledge to continue, neglecting the
critical step of determining when retrieval is necessary. This assumption often
leads to an over-dependence on external knowledge, even when it may not be
required. Our work addresses this oversight by employing a single unified model
trained with prompt-based and multi-task learning. This model not only
decides whether retrieval is necessary but also generates retrieval queries and
responses. By integrating these functions, our system leverages the full
potential of pre-trained models and reduces the complexity and costs associated
with deploying multiple models. We conducted extensive experiments to
investigate the mutual enhancement among the three tasks in our system.
Moreover, experimental results on the Wizint and Dusinc datasets demonstrate
not only that our unified model surpasses baseline performance on each
individual task, but also that it achieves results comparable to SOTA systems
that deploy a separate, specialized model for each task.
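
To make the unified formulation concrete, the following is a minimal sketch of
prompt-based multi-task inference with a single seq2seq model, assuming a
T5-style backbone via Hugging Face transformers. The prompt templates, the
model name, and the search_internet helper are illustrative assumptions, not
the paper's actual setup.

```python
# Minimal sketch: one shared seq2seq model handles all three tasks,
# routed by task-specific prompt prefixes.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "t5-base"  # placeholder backbone, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# One prompt template per task; templates here are illustrative.
PROMPTS = {
    "retrieval_decision": "decide retrieval: {dialogue}",
    "query_generation": "generate query: {dialogue}",
    "response_generation": "generate response: {dialogue} knowledge: {knowledge}",
}

def run_task(task: str, dialogue: str, knowledge: str = "") -> str:
    """Format the input with the task's prompt and decode the model output."""
    text = PROMPTS[task].format(dialogue=dialogue, knowledge=knowledge)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def search_internet(query: str) -> str:
    # Hypothetical retrieval helper; a real system would call a search API.
    return ""

# Pipeline: decide whether to retrieve -> (optionally) query -> respond.
dialogue = "User: Who won the most recent Champions League final?"
if run_task("retrieval_decision", dialogue).strip().lower() == "yes":
    query = run_task("query_generation", dialogue)
    knowledge = search_internet(query)
    reply = run_task("response_generation", dialogue, knowledge)
else:
    reply = run_task("response_generation", dialogue)
```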