
    Measuring the Global Research Environment: Information Science Challenges for the 21st Century

    “What does the global research environment look like?” This paper presents a summary of efforts to address this question using available indicators of global research production. It was surprising how little information is available, how difficult some of it is to access, and how flawed the data are. The three most useful data sources were UNESCO (United Nations Educational, Scientific and Cultural Organization) Research and Development data (1996-2002), the Institute of Scientific Information publications listings for January 1998 through March 2003, and the World of Learning 2002 reference volume. The data showed that it is difficult to get a good overview of the global research situation from existing sources. Furthermore, inequalities between countries in research capacity are marked and challenging. Information science offers strategies for responding to both of these challenges. In both cases improvements are likely if access to information can be facilitated and the process of integrating information from different sources can be simplified, allowing transformation into effective action. The global research environment thus serves as a case study for the focus of this paper – the exploration of information science responses to challenges in the management, exchange and implementation of knowledge globally.

    Evaluate Your Business School’s Writings As If Your Strategy Matters

    Business school publications are widely criticized for their lack of managerial or teaching relevance. One reason for this criticism is that business school scholarship is typically evaluated purely in terms of one type of work: academic journal articles that are meant to be read by other scholars. However, academics produce multiple types of publications, and business schools serve a wider range of stakeholders. These other stakeholders are often central to the schools’ purposes and may be critical in acquiring resources. They probably prefer to see scholarship that is relevant for students or for practitioners. They may prefer scholarship that is ethically relevant or regionally relevant and otherwise different from the model that dominates U.S. journals. Technologies are now available to measure the impact of writings in a much wider range of venues than is covered by the Social Sciences Citation Index in the Web of Science. Moreover, a wider range of measures, such as the size of writings’ readership, may be needed. We consider these issues and present some recommendations, arguing that faculty evaluations should follow an intentional strategy and not necessarily conform to the traditional default.

    The concordance of field-normalized scores based on Web of Science and Microsoft Academic data: A case study in computer sciences

    In order to assess Microsoft Academic as a useful data source for evaluative bibliometrics, it is crucial to know whether citation counts from Microsoft Academic can be used in common normalization procedures and whether the normalized scores agree with the scores calculated on the basis of established databases. To this end, we calculate the field-normalized citation scores of the publications of a computer science institute based on Microsoft Academic and the Web of Science and estimate the statistical concordance of the scores. Our results suggest that field-normalized citation scores can be calculated with Microsoft Academic and that these scores are in good agreement with the corresponding scores from the Web of Science.
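    The abstract above refers to field normalization, where a publication's raw citation count is divided by the expected (mean) citation count of publications from the same field and publication year. As a minimal illustrative sketch (not the paper's exact procedure, and with an assumed record schema of `field`, `year`, `citations`), the calculation looks like this:

    ```python
    # Sketch of field-normalized citation scoring: each publication's
    # citation count is divided by the mean citation count of all
    # publications sharing its (field, year) reference set.
    from collections import defaultdict

    def field_normalized_scores(pubs):
        """pubs: list of dicts with 'field', 'year', 'citations' keys (assumed schema)."""
        # Accumulate citation sums and counts per (field, year) reference set.
        totals = defaultdict(lambda: [0, 0])
        for p in pubs:
            key = (p["field"], p["year"])
            totals[key][0] += p["citations"]
            totals[key][1] += 1
        # Normalize each publication against its reference-set mean.
        scores = []
        for p in pubs:
            cite_sum, n = totals[(p["field"], p["year"])]
            mean = cite_sum / n
            scores.append(p["citations"] / mean if mean else 0.0)
        return scores

    pubs = [
        {"field": "CS", "year": 2020, "citations": 10},
        {"field": "CS", "year": 2020, "citations": 2},
    ]
    print(field_normalized_scores(pubs))  # reference-set mean is 6
    ```

    A concordance study such as the one above would compute these scores twice, once from each database's citation counts, and compare the resulting score distributions.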

    Report on the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2)

    This technical report records and discusses the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2). The report includes a description of the alternative, experimental submission and review process, two workshop keynote presentations, a series of lightning talks, a discussion on sustainability, and five discussions from the topic areas of exploring sustainability; software development experiences; credit & incentives; reproducibility & reuse & sharing; and code testing & code review. For each topic, the report includes a list of tangible actions that were proposed and that would lead to potential change. The workshop recognized that reliance on scientific software is pervasive in all areas of world-leading research today. The workshop participants then proceeded to explore different perspectives on the concept of sustainability. Key enablers of and barriers to sustainable scientific software were identified from their experiences. In addition, recommendations with new requirements such as software credit files and software prize frameworks were outlined for improving practices in sustainable software engineering. There was also broad consensus that formal training in software development or engineering was rare among the practitioners. Significant strides need to be made in building a sense of community through training in software and technical practices, in increasing the size and scope of such training, and in integrating it directly into graduate education programs. Finally, journals can define and publish policies to improve reproducibility, and reviewers can insist that authors provide sufficient information and access to data and software to allow them to reproduce the results in the paper. Hence a list of criteria is compiled for journals to provide to reviewers so as to make it easier to review software submitted for publication as a “Software Paper.”
