    Unraveling the dynamics of growth, aging and inflation for citations to scientific articles from specific research fields

    We analyze the time evolution of citations acquired by articles from journals of the American Physical Society (PRA, PRB, PRC, PRD, PRE and PRL). The observed change over time in the number of papers published in each journal is considered an exogenously caused variation in citability that is accounted for by a normalization. The appropriately inflation-adjusted citation rates are found to be separable into a preferential-attachment-type growth kernel and a purely obsolescence-related (i.e., monotonically decreasing as a function of time since publication) aging function. Variations in the empirically extracted parameters of the growth kernels and aging functions associated with different journals point to research-field-specific characteristics of citation intensity and knowledge flow. Comparison with analogous results for the citation dynamics of technology-disaggregated cohorts of patents provides deeper insight into the basic principles of information propagation as indicated by citing behavior. (Comment: 13 pages, 6 figures, Elsevier style, v2: revised version to appear in J. Informetrics)
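    To make the separable form concrete, here is a minimal toy sketch (in Python) of a citation process whose rate is a preferential-attachment kernel multiplied by a monotonically decreasing aging function. The specific functional forms (linear kernel with offset c0, exponential aging with timescale tau) and all parameter values are illustrative assumptions, not the fitted models from the paper.

import math
import random

def citation_rate(c, dt, c0=1.0, tau=5.0):
    """Separable rate: preferential-attachment kernel (c + c0) times a
    monotonically decreasing aging factor exp(-dt / tau).
    c: citations accrued so far; dt: years since publication."""
    return (c + c0) * math.exp(-dt / tau)

def simulate_cohort(n_papers=1000, years=20, citations_per_year=2000, seed=0):
    """Toy simulation: each year, distribute new citations over a cohort of
    same-age papers with probability proportional to citation_rate."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    for year in range(years):
        weights = [citation_rate(c, year) for c in counts]
        for i in rng.choices(range(n_papers), weights=weights, k=citations_per_year):
            counts[i] += 1
    return counts

    In such a sketch, the inflation adjustment described above would amount to rescaling citations_per_year by the observed growth in the number of citable papers per year.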

    Quantifying Success in Science: An Overview

    Quantifying success in science plays a key role in guiding funding allocations, recruitment decisions, and rewards. Recently, significant progress has been made towards quantifying success in science, yet the lack of a detailed analysis and summary remains a practical issue. The literature reports the factors influencing scholarly impact, as well as evaluation methods and indices aimed at overcoming this crucial weakness. We focus on categorizing and reviewing the current developments in evaluation indices of scholarly impact, including paper impact, scholar impact, and journal impact. In addition, we summarize the issues of existing evaluation methods and indices, investigate the open issues and challenges, and provide possible solutions, including the pattern of collaboration impact, unified evaluation standards, implicit success factor mining, dynamic academic network embedding, and scholarly impact inflation. This paper should help researchers obtain a broader understanding of quantifying success in science and identify some potential research directions.
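    As a concrete illustration of the scholar- and journal-level indices such an overview covers, the short Python sketch below implements the h-index and a two-year impact-factor-style ratio. These are standard definitions given for orientation only; they are not code or notation taken from the paper.

def h_index(citations):
    """h-index: the largest h such that h of the papers have at least
    h citations each (a common scholar-impact index)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def two_year_impact(citations_this_year, citable_items_prev_two_years):
    """Journal-level index in the style of a two-year impact factor:
    citations received this year to items published in the previous two
    years, divided by the number of citable items from those years."""
    return citations_this_year / citable_items_prev_two_years

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(two_year_impact(240, 120))  # -> 2.0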

    Aerospace Medicine and Biology, a continuing bibliography with indexes

    This bibliography lists 197 reports, articles and other documents introduced into the NASA scientific and technical information system in November 1984.

    Celebration of Brockport Faculty & Staff Scholarship : 2005-2010

    This bibliography represents nearly 500 faculty/staff publications published at The College at Brockport during 2005‐2010. It includes citations from all of the schools and many academic departments. Citations were gathered from a number of sources including a call to authors and the use of online databases. The bibliography is primarily, but not exclusively, composed of books, book chapters, DVD/films, and refereed scholarly articles.

    Hurricanes and hashtags: Characterizing online collective attention for natural disasters

    We study collective attention paid towards hurricanes through the lens of n-grams on Twitter, a social media platform with global reach. Using hurricane name mentions as a proxy for awareness, we find that the exogenous temporal dynamics are remarkably similar across storms, but that overall collective attention varies widely even among storms causing comparable deaths and damage. We construct 'hurricane attention maps' and observe that hurricanes causing deaths on (or economic damage to) the continental United States generate substantially more attention in English language tweets than those that do not. We find that a hurricane's Saffir-Simpson wind scale category assignment is strongly associated with the amount of attention it receives. Higher category storms receive higher proportional increases of attention per proportional increase in the number of deaths or dollars of damage than lower category storms. The most damaging and deadly storms of the 2010s, Hurricanes Harvey and Maria, generated the most attention and were remembered the longest, respectively. On average, a category 5 storm receives 4.6 times more attention than a category 1 storm causing the same number of deaths and economic damage. (Comment: 31 pages (14 main, 17 Supplemental), 19 figures (5 main, 14 appendix))
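    A minimal sketch of the name-mention (1-gram) proxy for attention described above: count, per day, the tweets whose text contains a storm's name. The tweet record fields ('created_at', 'text') and the two sample records are assumptions made for illustration, not the study's actual pipeline or data.

from collections import Counter
from datetime import datetime

def daily_mentions(tweets, storm_name="harvey"):
    """Count tweets per day that contain the storm name as a word
    (a simple 1-gram proxy for collective attention).
    Assumes each tweet is a dict with 'created_at' (ISO date) and 'text'."""
    counts = Counter()
    target = storm_name.lower()
    for tweet in tweets:
        if target in tweet["text"].lower().split():
            counts[datetime.fromisoformat(tweet["created_at"]).date()] += 1
    return counts

# Tiny made-up example:
sample = [
    {"created_at": "2017-08-26", "text": "Hurricane Harvey makes landfall"},
    {"created_at": "2017-08-26", "text": "Stay safe Houston"},
]
print(daily_mentions(sample, "harvey"))  # -> Counter({datetime.date(2017, 8, 26): 1})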

    Unlocking the power of generative AI models and systems such as GPT-4 and ChatGPT for higher education

    Generative AI technologies, such as large language models, have the potential to revolutionize much of our higher education teaching and learning. ChatGPT is an impressive, easy-to-use, publicly accessible system demonstrating the power of large language models such as GPT-4. Other comparable generative models are available for text processing, images, audio, video, and other outputs, and we expect a massive further performance increase, integration into larger software systems, and diffusion in the coming years. This technological development triggers substantial uncertainty and change in university-level teaching and learning. Students ask questions like: How can ChatGPT or other artificial intelligence tools support me? Am I allowed to use ChatGPT for a seminar or final paper, or is that cheating? How exactly do I use ChatGPT best? Are there other ways to access models such as GPT-4? Given that such tools are here to stay, what skills should I acquire, and what is obsolete? Lecturers ask similar questions from a different perspective: What skills should I teach? How can I test students' competencies rather than their ability to prompt generative AI models? How can I use ChatGPT and other systems based on generative AI to increase my efficiency or even improve my students' learning experience and outcomes? Even if the current discussion revolves around ChatGPT and GPT-4, these are only the forerunners of what we can expect from future generative AI-based models and tools. So even if you think ChatGPT is not yet technically mature, it is worth looking into its impact on higher education.

    This is where this whitepaper comes in. It looks at ChatGPT as a contemporary example of a conversational user interface that leverages large language models. The whitepaper looks at ChatGPT from the perspective of students and lecturers. It focuses on everyday areas of higher education: teaching courses, learning for an exam, crafting seminar papers and theses, and assessing students' learning outcomes and performance. For this purpose, we consider the chances and concrete application possibilities, the limits and risks of ChatGPT, and the underlying large language models. This serves two purposes: First, we aim to provide concrete examples and guidance for individual students and lecturers to find their way of dealing with ChatGPT and similar tools. Second, this whitepaper shall inform the more extensive organizational sensemaking processes on embracing and enclosing large language models or related tools in higher education.

    We wrote this whitepaper based on our experience in information systems, computer science, management, and sociology. We have hands-on experience in using generative AI tools. As professors, postdocs, doctoral candidates, and students, we constantly innovate our teaching and learning. Fully embracing the chances and challenges of generative AI requires adding further perspectives from scholars in various other disciplines (focusing on didactics of higher education and legal aspects), university administrations, and broader student groups. Overall, we have a positive picture of generative AI models and tools such as GPT-4 and ChatGPT. As always, there is light and dark, and change is difficult. However, if we issue clear guidelines on the part of the universities, faculties, and individual lecturers, and if lecturers and students use such systems efficiently and responsibly, our higher education system may improve. We see a great chance for that if we embrace and manage the change appropriately.
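    To illustrate one of the access routes beyond the ChatGPT web interface that the whitepaper alludes to, the snippet below calls a GPT-4-class model programmatically. It is a minimal sketch assuming the OpenAI Python SDK (v1-style chat-completions interface) and an OPENAI_API_KEY environment variable; the model name and prompts are placeholders, not examples from the whitepaper.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a tutor who explains concepts step by step."},
        {"role": "user", "content": "Explain the difference between a seminar paper and a thesis."},
    ],
)
print(response.choices[0].message.content)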