
    The utilization of paper-level classification system on the evaluation of journal impact

    CAS Journal Ranking, a journal ranking system based on the bibliometric indicator of citation impact, has been widely used in meso- and macro-scale research evaluation in China since its first release in 2004. Its coverage is the set of journals contained in Clarivate's Journal Citation Reports (JCR). This paper introduces the upgraded 2019 version of the CAS journal ranking. Addressing limitations of the indicator and classification system used in earlier editions, as well as the problem of journal interdisciplinarity or multidisciplinarity, we discuss the improvements in the 2019 upgraded version: (1) the CWTS paper-level classification system, a more fine-grained system, is utilized; (2) a new indicator, the Field Normalized Citation Success Index (FNCSI), which is robust not only against extremely highly cited publications but also against wrongly assigned document types, is used; and (3) the indicator is calculated at the paper level. In addition, this paper presents a small part of the ranking results and an interpretation of the robustness of the new FNCSI indicator. By exploring more sophisticated methods and indicators, such as the CWTS paper-level classification system and the new FNCSI indicator, CAS Journal Ranking will continue to serve its original purpose of supporting responsible research evaluation.
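    The abstract does not give the FNCSI formula, but citation success indices in the bibliometric literature are typically pairwise win probabilities: the chance that a random paper from the journal out-cites a random paper from its field. The sketch below illustrates that rank-based idea only; the function name and the toy data are hypothetical, and the published FNCSI additionally normalizes by CWTS paper-level fields and document type.

    ```python
    def citation_success_index(journal_cites, field_cites):
        """Estimate the probability that a randomly drawn paper from the
        journal receives more citations than a randomly drawn paper from
        its field, counting ties as half a win. This is a simplified
        stand-in for the FNCSI described in the abstract, not its
        published definition."""
        wins = 0.0
        for j in journal_cites:
            for f in field_cites:
                if j > f:
                    wins += 1.0
                elif j == f:
                    wins += 0.5
        return wins / (len(journal_cites) * len(field_cites))

    # Toy example: hypothetical citation counts for one journal and its field.
    journal = [12, 8, 30, 5, 14]
    field = [3, 7, 2, 10, 6, 4, 9, 1]
    print(citation_success_index(journal, field))  # → 0.85
    ```

    Because the index is rank-based, a single extremely highly cited paper can contribute at most one win per comparison, which is one way to read the abstract's claim that FNCSI is robust against extreme outliers.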

    Exploring the Potential of Large Language Models in Computational Argumentation

    Computational argumentation has become an essential tool in various fields, including artificial intelligence, law, and public policy. It is an emerging research area in natural language processing (NLP) that is attracting increasing attention. Research on computational argumentation mainly involves two types of tasks: argument mining and argument generation. As large language models (LLMs) have demonstrated strong abilities in understanding context and generating natural language, it is worthwhile to evaluate their performance on various computational argumentation tasks. This work assesses LLMs such as ChatGPT, the Flan models, and the LLaMA2 models under zero-shot and few-shot settings within the realm of computational argumentation. We organize existing tasks into 6 main classes and standardise the format of 14 open-sourced datasets. In addition, we present a new benchmark dataset on counter speech generation, which aims to holistically evaluate the end-to-end performance of LLMs on argument mining and argument generation. Extensive experiments show that LLMs exhibit commendable performance across most of these datasets, demonstrating their capabilities in the field of argumentation. We also highlight the limitations of evaluating computational argumentation and provide suggestions for future research directions in this field.
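    The zero-shot versus few-shot distinction the abstract draws comes down to whether labeled demonstrations are placed in the prompt before the test input. A minimal, model-agnostic sketch of that prompt assembly (the function name, labels, and example texts are hypothetical, not taken from the paper's benchmark):

    ```python
    def build_prompt(instruction, demonstrations, input_text):
        """Assemble an LLM prompt for a classification-style argumentation
        task. With demonstrations=[] this is a zero-shot prompt; with a few
        (text, label) pairs it becomes a few-shot prompt."""
        parts = [instruction]
        for demo_text, demo_label in demonstrations:
            parts.append(f"Argument: {demo_text}\nStance: {demo_label}")
        parts.append(f"Argument: {input_text}\nStance:")
        return "\n\n".join(parts)

    # Hypothetical stance-detection task (a common argument-mining subtask).
    instruction = "Classify the stance of the argument as 'support' or 'attack'."
    demos = [("Vaccines save lives.", "support"),
             ("The policy wastes public funds.", "attack")]

    zero_shot = build_prompt(instruction, [], "Remote work boosts productivity.")
    few_shot = build_prompt(instruction, demos, "Remote work boosts productivity.")
    print(zero_shot)
    print("---")
    print(few_shot)
    ```

    The same template generalizes to the generation tasks mentioned in the abstract (e.g. counter speech) by swapping the label slot for a free-text completion.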