
    Utilising content marketing metrics and social networks for academic visibility

    There are numerous assumptions about research evaluation in terms of the quality and relevance of academic contributions. Researchers are becoming increasingly acquainted with bibliometric indicators, including citation analysis, the impact factor, the h-index, webometrics and academic social networking sites. In this light, this chapter reviews these concepts and considers relevant theoretical underpinnings related to the content marketing of scholars. It critically evaluates previous papers on academic reputation and deliberates on individual researchers' personal branding. It also explains how metrics are currently used to rank the academic standing of journals as well as higher education institutions. In a nutshell, this chapter suggests that scholarly impact depends on a number of factors, including the accessibility of publications, peer review of academic work, and social networking among scholars.

    The pros and cons of the use of altmetrics in research assessment

    © 2020 The Authors. Published by Levi Library Press. This is an open access article available under a Creative Commons licence. The published version can be accessed on the publisher's website: http://doi.org/10.29024/sar.10
    Many indicators derived from the web have been proposed to supplement citation-based indicators in support of research assessments. These indicators, often called altmetrics, are available commercially from Altmetric.com and Elsevier's Plum Analytics, or can be collected directly. These organisations can also deliver altmetrics to support institutional self-evaluations. The potential advantages of altmetrics for research evaluation are that they may reflect important non-academic impacts and may appear before citations when an article is published, thus providing earlier impact evidence. Their disadvantages include susceptibility to gaming, data sparsity, and difficulty translating the evidence into specific types of impact. Despite these limitations, altmetrics have been widely adopted by publishers, apparently to give authors, editors and readers insights into the level of interest in recently published articles. This article summarises evidence for and against extending the adoption of altmetrics to research evaluations. It argues that whilst systematically gathered altmetrics are inappropriate for important formal research evaluations, they can play a role in some other contexts. They can be informative when evaluating research units that rarely produce journal articles, when seeking to identify evidence of novel types of impact during institutional or other self-evaluations, and when selected by individuals or groups to support narrative-based non-academic claims. In addition, Mendeley reader counts are uniquely valuable as early (mainly scholarly) impact indicators to replace citations when gaming is not possible and early impact evidence is needed. Organisations using alternative indicators need to recruit or develop in-house expertise to ensure that they are not misused, however.

    Applied Evaluative Informetrics: Part 1

    This manuscript is a preprint version of Part 1 (General Introduction and Synopsis) of the book Applied Evaluative Informetrics, published by Springer in the summer of 2017. The book presents an introduction to the field of applied evaluative informetrics and is written for interested scholars and students from all domains of science and scholarship. It sketches the field's history, recent achievements, and its potential and limits. It explains the notion of multi-dimensional research performance and discusses the pros and cons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In addition, it presents quantitative research assessment as an evaluation science, focusing on the role of extra-informetric factors in the development of indicators and on the policy context of their application. It also discusses the way forward, both for users and for developers of informetric tools.

    Advantages and Disadvantages of the Webometrics Ranking System

    Today, there are several well-known global lists for ranking the world's universities. While some of them rank only a few hundred of the best and most influential universities, others include a much larger number of scientific institutions. One global list that ranks the largest number of scientific institutions and scientists in the world is the Webometrics list. This list is particularly important for less developed economies and developing countries that have not yet established a sufficiently robust quality-control system for higher education, as it serves as a corrective by providing an international evaluation of a wide range of universities. In such a complex IT system for ranking an extremely large number of institutions and scientists, certain disadvantages appear in the ranking process, which can of course be overcome by introducing improvements within the ranking system. Systems that collect, analyse, and index data have their advantages and disadvantages, which can sometimes lead to misinterpretation of the data collected. Among other things, we consider possible solutions that would improve the rating system and prevent manipulation and uncertainty in the presentation of current and final ranking results.

    Webometrics Ranking and Its Relationship to Quality Education and Research in Academic Institutions in Kenya

    This study examines the relationship between Webometrics ranking and the quality of education and research in academic institutions in Kenya, and suggests appropriate solutions to enhance the practice. A descriptive survey design combined quantitative and qualitative research, with structured questionnaires and document reviews used to collect data from respondents. Webometrics ranking promotes the quality of education and research in academic institutions through visibility that showcases research activities and enriches knowledge. Collaboration and partnerships were the leading strategies for high Webometrics ranking practice and performance in academic institutions. Numerous strategies were recommended for maintaining and improving Webometrics ranking performance, including the use of web champions, marketing, and awareness campaigns. The practice of using Webometrics ranking as a basis for evaluating performance in institutions of higher learning is gaining momentum worldwide. Excellent performance on the Webometrics matrix provides practical lessons, suggestions, and local solutions for other institutions of higher learning. Webometrics ranking promotes quality research and education in institutions of higher learning; provides public information on academic standing and performance; fosters competition among academic institutions; provides evidence that stimulates the evolution of centres of excellence; and offers an additional rationale for the allocation of funds. The practice supports teaching, research-related activities, and scholarly communication, and promotes the development of standards and policies in higher education. It is fundamental in fostering competition and promoting quality education and research through the enrichment of knowledge and content repositories. Strategies for sustainable Webometrics ranking performance need to be stipulated in order to achieve excellent performance in higher education and learning.

    Estimating Open Access Mandate Effectiveness: The MELIBEA Score

    MELIBEA is a Spanish database that uses a composite formula with eight weighted conditions to estimate the effectiveness of Open Access (OA) mandates registered in ROARMAP. We analyzed 68 mandated institutions for publication years 2011-2013 to determine how well the MELIBEA score and its individual conditions predict what percentage of published articles indexed by Web of Knowledge is deposited in each institution's OA repository, and when. We found a small but significant positive correlation (0.18) between MELIBEA score and deposit percentage. We also found that for three of the eight MELIBEA conditions (deposit timing, internal use, and opt-outs), one value of each was strongly associated with deposit percentage or deposit latency (immediate deposit required; deposit required for performance evaluation; unconditional opt-out allowed for the OA requirement but no opt-out for the deposit requirement). When we updated the initial values and weights of the MELIBEA formula to reflect the empirical associations we had found, the score's predictive power doubled (0.36). There are not yet enough OA mandates to test further mandate conditions that might contribute to mandate effectiveness, but these findings already suggest that it would be useful for future mandates to adopt these three conditions so as to maximize their effectiveness, and thereby the growth of OA.
    Comment: 27 pages, 13 figures, 3 tables, 40 references, 7761 words
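    The composite-score idea behind this abstract can be sketched in a few lines: each mandate condition takes a value and a weight, and the score is the weighted mean. The condition names, values, and weights below are purely illustrative assumptions, not the actual MELIBEA formula.

    ```python
    # Minimal sketch of a MELIBEA-style weighted composite score.
    # Condition names and weights are hypothetical, for illustration only.

    def composite_score(conditions, weights):
        """Weighted mean of condition values; both dicts are keyed by condition name."""
        total_weight = sum(weights.values())
        return sum(weights[name] * value for name, value in conditions.items()) / total_weight

    weights = {"deposit_timing": 3, "internal_use": 2, "opt_out": 2, "version": 1}
    mandate = {
        "deposit_timing": 1.0,  # e.g. immediate deposit required
        "internal_use": 1.0,    # e.g. deposit tied to performance evaluation
        "opt_out": 0.5,         # e.g. opt-out allowed for OA, not for deposit
        "version": 1.0,
    }
    print(round(composite_score(mandate, weights), 3))  # 0.875
    ```

    Re-weighting the formula, as the study did empirically, amounts to changing the entries of `weights` and recomputing the score.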

    Research Output and Sustainable Development: Webometric Analysis of Scopus Indexed Publications 2008-2014

    The study evaluated research output indexed by Scopus between 2008 and 2014, as retrieved in April 2015. It assessed the quantity of research publications within the period; the top ten publishing universities, fields, journals (and their impact factors), and cited authors; and the top ten publishing countries. This was with a view to determining author relevance, institutional priority, and the extent of sustainable development in the fields and host countries. Findings revealed that Li, Wei of Harbin Institute of Technology, China; Wang, Wei of Beijing University of Chemical Technology (BUCT), China; Zhang, Wei of Tsinghua University, China; and Li, Hui of Technische Universität Berlin, Germany were foremost among others during the period, and their institutions and countries had more publications. It is therefore recommended that scholars, universities, and other institutions of higher learning, especially in developing nations, emulate these few in order to realize the world's aspiration of sustainable global development.

    Relationship between Webometrics University Rankings and Research Gate Scores, Scopus and Web of Science

    Interest in academic ranking systems has increased substantially over the last two decades. The majority of existing ranking systems are highly exclusive, covering up to 1,500 of the best-positioned world universities. An exception is the Webometrics ranking, which ranks more than 31,000 universities throughout the world. In this study, we examined which factors best predict the Webometrics rankings. The sample consisted of 102 European universities, with Webometrics ranks ranging from 18th to 6969th position. We examined the effects of the number of Web of Science publications, Scopus publications, and ResearchGate-related data on the Webometrics ranking. Data retrieved from the academic social networking site ResearchGate predicted 72% of the variance in the Webometrics ranking. The number of Scopus publications was the single best determinant of whether a university would be positioned among the top 1,000 ranked universities. These results indicate that ResearchGate scores could potentially be used in university rankings and serve as a proxy for universities' excellence. This, in turn, can be useful to government policymakers and university leaders in creating better strategies for enhancing the reputation of universities. https://dorl.net/dor/20.1001.1.20088302.2022.20.3.1.8
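    The "predicted 72% of the variance" claim refers to the R² of a regression of rank on the predictor variables. The sketch below illustrates the calculation on synthetic data (not the study's dataset): the feature names, value ranges, and coefficients are assumptions for demonstration, so the resulting R² will not match the reported 72%.

    ```python
    # Illustrative OLS sketch: predict a Webometrics-style rank from hypothetical
    # ResearchGate-related features and report the share of variance explained (R^2).
    # Synthetic data only; variable names and coefficients are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 102                                # sample size mirrors the study
    members = rng.uniform(50, 3000, n)     # hypothetical RG member counts per university
    rg_score = rng.uniform(100, 20000, n)  # hypothetical total RG scores
    rank = 7000 - 0.3 * rg_score - 0.5 * members + rng.normal(0, 300, n)

    # Fit ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(n), members, rg_score])
    beta, *_ = np.linalg.lstsq(X, rank, rcond=None)

    # R^2 = 1 - residual sum of squares / total sum of squares.
    pred = X @ beta
    r2 = 1 - np.sum((rank - pred) ** 2) / np.sum((rank - np.mean(rank)) ** 2)
    print(f"R^2 = {r2:.2f}")
    ```

    In the study's setting, an R² of 0.72 would mean the fitted model accounts for 72% of the variation in the Webometrics ranks across the sampled universities.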