
    Research on the Communication Thought of Lin Qiansan and Guo Moruo in Music Historiography

    The exchange of music-historiographical ideas between Lin Qiansan and Guo Moruo reflects the exchange of musical academic ideas between China and Japan in the early 20th century: a convergence of historical views, perspectives, and research methods in Chinese and Japanese music historiography, and the formation of their musical ideas under the joint influence of Japanese sinology and Chinese historical thinking. At the beginning of the 20th century, China faced a dilemma, both politically and culturally: should it adopt full Westernization, or adhere to Chinese culture? Many scholars went to Japan seeking answers to this question and hoping to learn from the success of Japan's modern reforms. Chinese and Japanese cultural scholars, including Li Shutong, Zeng Zhimin, Guo Moruo, and Lin Qiansan, all expressed their cultural orientations and research thinking in this dialogue of cultural exchange. Among them, Li Shutong's and Zeng Zhimin's musical practice in the academies, Lin Qiansan's and Guo Moruo's explorations of music theory, Tanabe's "History of Chinese Music", and Lin Qiansan's "Study of Yan Music Tunes in the Sui and Tang Dynasties" all reflect the musical and cultural exchanges between China and Japan in the last century. In particular, Guo Moruo's translation of Lin Qiansan's "Study of Yan Music Tunes in the Sui and Tang Dynasties" became an important academic reference for later scholars studying the Yan music culture of the Sui and Tang dynasties. At the same time, Lin Qiansan's and Guo Moruo's musical thought profoundly influenced the construction of music historiography in both Japan and China. This paper argues that, to explore the spirit of Lin Qiansan's and Guo Moruo's ideas amid the turbulent intersection and collision of Chinese and Japanese cultural thought in the 20th century, one should analyze their cultural and ideological backgrounds, their coinciding academic perspectives, and the significance of the interaction of their musical ideas in their respective times.

    Instruction Tuning for Large Language Models: A Survey

    This paper surveys research in the quickly advancing field of instruction tuning (IT), a crucial technique for enhancing the capabilities and controllability of large language models (LLMs). Instruction tuning refers to further training LLMs on a dataset of (instruction, output) pairs in a supervised fashion, which bridges the gap between the next-word prediction objective of LLMs and users' objective of having LLMs adhere to human instructions. In this work, we systematically review the literature, including the general methodology of IT, the construction of IT datasets, the training of IT models, and applications to different modalities and domains, along with an analysis of aspects that influence the outcome of IT (e.g., the generation of instruction outputs and the size of the instruction dataset). We also review the potential pitfalls of IT and criticism raised against it, point out current deficiencies of existing strategies, and suggest some avenues for fruitful research.
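    To make the supervised objective concrete, the following is a minimal Python sketch of instruction tuning as the abstract describes it: next-token training on (instruction, output) pairs with the loss restricted to the output span. The base model ("gpt2"), the prompt template, and the two toy pairs are illustrative assumptions, not the survey's prescriptions.

        # Minimal sketch of instruction tuning: supervised next-token
        # training on (instruction, output) pairs. Model name, prompt
        # template, and data are placeholders for illustration only.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder base LM
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

        pairs = [  # toy (instruction, output) dataset
            ("Translate to French: Hello", "Bonjour"),
            ("Summarize: The cat sat on the mat.", "A cat sat on a mat."),
        ]

        model.train()
        for instruction, output in pairs:
            prompt = f"Instruction: {instruction}\nResponse: "
            prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
            full_ids = tokenizer(prompt + output + tokenizer.eos_token,
                                 return_tensors="pt").input_ids
            # Mask instruction tokens (-100 is ignored by the loss) so
            # supervision falls only on the output span -- this is the
            # gap-bridging step the abstract refers to.
            labels = full_ids.clone()
            labels[:, : prompt_ids.shape[1]] = -100
            loss = model(input_ids=full_ids, labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

    Masking the prompt tokens is the standard choice here; training on the full sequence is a variant the IT literature also discusses, and which of the two works better is one of the outcome-influencing aspects the survey analyzes.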

    Pushing the Limits of ChatGPT on NLP Tasks

    Despite the success of ChatGPT, its performance on most NLP tasks is still well below supervised baselines. In this work, we look into the causes and find that its subpar performance stems from the following factors: (1) the token limit of the prompt does not allow for full utilization of the supervised datasets; (2) there is a mismatch between the generative nature of ChatGPT and many NLP tasks; (3) LLMs have intrinsic pitfalls, e.g., hallucination and an overreliance on certain keywords. We propose a collection of general modules to address these issues, in an attempt to push the limits of ChatGPT on NLP tasks. The proposed modules include: (1) a one-input-multiple-prompts strategy that employs multiple prompts for one input to accommodate more demonstrations; (2) fine-tuned models for better demonstration retrieval; (3) transforming tasks into formats better suited to generation; (4) reasoning strategies tailored to task-specific complexity; (5) a self-verification strategy to address the hallucination issue of LLMs; and (6) a paraphrase strategy to improve the robustness of model predictions. We conduct experiments on 21 datasets covering 10 representative NLP tasks, including question answering, commonsense reasoning, natural language inference, sentiment analysis, named entity recognition, entity-relation extraction, event extraction, dependency parsing, semantic role labeling, and part-of-speech tagging. Using the proposed ensemble of techniques, we significantly boost the performance of ChatGPT on the selected NLP tasks, achieving performance comparable to or better than supervised baselines, and in some cases matching existing SOTA performance.
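    As one plausible shape for modules (5) and (6), here is a hedged Python sketch of self-verification combined with paraphrase-based voting. The llm() stub, the prompt wording, and the majority-vote aggregation are hypothetical stand-ins introduced for illustration; the paper's actual templates and aggregation may differ.

        # Hedged sketch of self-verification (5) and the paraphrase
        # strategy (6) around a hypothetical llm(prompt) -> str helper.
        # Prompts and aggregation are illustrative assumptions.
        from collections import Counter

        def llm(prompt: str) -> str:
            raise NotImplementedError("plug in a chat-completion client here")

        def self_verify(question: str, answer: str) -> bool:
            # Ask the model to check its own answer, to catch hallucinations.
            check = llm(f"Question: {question}\nProposed answer: {answer}\n"
                        "Is the proposed answer correct? Reply yes or no.")
            return check.strip().lower().startswith("yes")

        def robust_answer(question: str, paraphrases: list[str]) -> str:
            # Answer several rewordings of the same input, drop answers
            # that fail self-verification, and keep the majority vote.
            answers = []
            for q in [question, *paraphrases]:
                a = llm(f"Answer concisely: {q}")
                if self_verify(q, a):
                    answers.append(a.strip())
            return Counter(answers).most_common(1)[0][0] if answers else ""

    The voting step is what buys robustness: an answer that survives rewordings of the input is less likely to hinge on the keyword overfitting the abstract identifies as pitfall (3).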