2,061 research outputs found
Changes in the LIS Research Front: Time-Sliced Cocitation Analyses of LIS Journal Articles, 1990–2004
Based on articles published in 21 LIS journals between 1990 and 2004, a set of co-citation analyses was performed to study changes in the research fronts over the last 15 years, to assess where LIS stands now, and to discuss where it is heading. To study research fronts, defined here as current and influential co-cited papers, a citation-among-documents methodology was applied; to study changes, the analyses were time-sliced into three five-year periods. The results show a stable structure of two distinct research fields: informetrics, and information seeking and retrieval (ISR). However, experimental retrieval research and user-oriented research have merged into one ISR field, and IR and informetrics also show signs of converging, sharing research interests and methodologies, which makes informetrics research more visible in mainstream LIS research. Furthermore, the focus on the internet, both in ISR research and in informetrics, where webometrics has quickly become a dominant research area, is an important change. The future is discussed in terms of LIS's dependency on technology, how the integration of research areas as well as technical systems can be expected to continue to characterize LIS research, and how webometrics will continue to develop and find its applications.
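The pair counts underlying such a co-citation analysis are simple to compute: two documents are co-cited whenever they appear together in one citing paper's reference list. A minimal sketch (document labels and data are hypothetical, for illustration only):

```python
from itertools import combinations
from collections import Counter

def cocitation_counts(reference_lists):
    """Count how often each pair of documents is cited together.

    reference_lists: iterable of reference lists, one per citing paper.
    Returns a Counter mapping unordered document pairs (as sorted tuples)
    to their co-citation counts.
    """
    pairs = Counter()
    for refs in reference_lists:
        # each unordered pair of references in one paper is one co-citation
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy data: three citing papers and their reference lists
papers = [
    ["Salton1975", "Garfield1979"],
    ["Salton1975", "Garfield1979", "Small1973"],
    ["Garfield1979", "Small1973"],
]
counts = cocitation_counts(papers)
print(counts[("Garfield1979", "Salton1975")])  # 2
```

Time-slicing as in the study amounts to running this count separately on the citing papers of each five-year window and comparing the resulting networks.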
Editorial for the First Workshop on Mining Scientific Papers: Computational Linguistics and Bibliometrics
The workshop "Mining Scientific Papers: Computational Linguistics and
Bibliometrics" (CLBib 2015), co-located with the 15th International Society of
Scientometrics and Informetrics Conference (ISSI 2015), brought together
researchers in Bibliometrics and Computational Linguistics in order to study
the ways Bibliometrics can benefit from large-scale text analytics and sense
mining of scientific papers, thus exploring the interdisciplinarity of
Bibliometrics and Natural Language Processing (NLP). The goals of the workshop
were to answer questions like: How can we enhance author network analysis and
Bibliometrics using data obtained by text analytics? What insights can NLP
provide on the structure of scientific writing, on citation networks, and on
in-text citation analysis? This workshop is a first step toward fostering
reflection on this interdisciplinarity and on the benefits that the two
disciplines, Bibliometrics and Natural Language Processing, can derive from it.
An evaluation of Bradfordizing effects
The purpose of this paper is to apply and evaluate the bibliometric method Bradfordizing for information retrieval (IR) experiments. Bradfordizing is used to generate core document sets for subject-specific questions and to re-order result sets from distributed searches. The method is applied and tested in a controlled scenario of scientific literature databases from the social and political sciences, economics, psychology and medical science (SOLIS, SoLit, USB Köln Opac, CSA Sociological Abstracts, World Affairs Online, Psyndex and Medline) and 164 standardized topics. An evaluation of the method and its effects is carried out in two laboratory-based information retrieval experiments (CLEF and KoMoHe) using a controlled document corpus and human relevance assessments. The results show that Bradfordizing is a very robust method for re-ranking the main document types (journal articles and monographs) in today's digital libraries (DL). The IR tests show that relevance distributions after re-ranking improve at a significant level if articles in the core are compared with articles in the succeeding zones. Items in the core are significantly more often assessed as relevant than items in zone 2 (z2) or zone 3 (z3). The improvements between the zones are statistically significant according to the Wilcoxon signed-rank test and the paired t-test.
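The re-ranking idea can be sketched as follows: rank the journals in a result set by how many hits they contribute, cut that ranking into three zones holding roughly a third of the articles each, and move documents from core (zone-1) journals to the front. This is a simplified sketch, not the evaluated implementation; the data and journal names are invented:

```python
from collections import Counter

def bradfordize(results):
    """Re-rank a search result set by Bradford zones (a sketch).

    results: list of (doc_id, journal_name) pairs in original rank order.
    Journals are ordered by their article count in the result set; the
    ranking is cut into three zones holding roughly one third of the
    articles each. Documents from core (zone-1) journals come first.
    """
    freq = Counter(journal for _, journal in results)
    ranked = sorted(freq, key=freq.get, reverse=True)
    third = len(results) / 3
    zone, seen = {}, 0
    for journal in ranked:
        # the zone is decided by how many articles precede this journal
        zone[journal] = 1 if seen < third else (2 if seen < 2 * third else 3)
        seen += freq[journal]
    # stable sort: preserves the original order within each zone
    return sorted(results, key=lambda doc: zone[doc[1]])

hits = [("d1", "B"), ("d2", "A"), ("d3", "A"), ("d4", "C"),
        ("d5", "A"), ("d6", "B"), ("d7", "A"), ("d8", "D"), ("d9", "C")]
print(bradfordize(hits)[0])  # ('d2', 'A') -- journal A forms the core
```

The paper's finding is then that documents landing in zone 1 under this ordering are assessed as relevant significantly more often than those in zones 2 and 3.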
Scopus's Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations
Impact factors (and similar measures such as the Scimago Journal Rankings)
suffer from two problems: (i) citation behavior varies among fields of science
and therefore leads to systematic differences, and (ii) there are no statistics
to inform us whether differences are significant. The recently introduced SNIP
indicator of Scopus tries to remedy the first of these two problems, but a
number of normalization decisions are involved which makes it impossible to
test for significance. Using fractional counting of citations, based on the
assumption that impact is proportionate to the number of references in the
citing documents, citations can be contextualized at the paper level and
aggregated impacts of sets can be tested for their significance. It can be
shown that the weighted impact of Annals of Mathematics (0.247) is not much
lower than that of Molecular Cell (0.386), despite a five-fold difference
between their impact factors (2.793 and 13.156, respectively).
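The fractional weighting itself is a one-liner: each citation contributes 1/n, where n is the number of references in the citing document, so a citation from a field with short reference lists (mathematics) weighs more than one from a field with long lists (biomedicine). A minimal sketch with invented numbers:

```python
def fractional_impact(citing_ref_counts):
    """Fractionally counted citation impact (a sketch).

    citing_ref_counts: for each citation received, the number of
    references in the citing document. Each citation contributes
    1/n_refs, normalizing for field-specific citation behavior.
    """
    return sum(1.0 / n for n in citing_ref_counts)

# Two citations from papers with 10 references each outweigh
# five citations from papers with 50 references each.
print(fractional_impact([10, 10]))              # 0.2
print(fractional_impact([50, 50, 50, 50, 50]))  # 0.1
```

Because each citation becomes an observation with a defined weight, the aggregated impacts of two journals can then be compared with a standard significance test rather than by eyeballing impact factors.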
Journal Maps, Interactive Overlays, and the Measurement of Interdisciplinarity on the Basis of Scopus Data (1996-2012)
Using Scopus data, we construct a global map of science based on aggregated
journal-journal citations from 1996-2012 (N of journals = 20,554). This base
map enables users to overlay downloads from Scopus interactively. Using a
single year (e.g., 2012), results can be compared with mappings based on the
Journal Citation Reports at the Web-of-Science (N = 10,936). The Scopus maps
are more detailed at both the local and global levels because of their greater
coverage, including, for example, the arts and humanities. The base maps can be
interactively overlaid with journal distributions in sets downloaded from
Scopus, for example, for the purpose of portfolio analysis. Rao-Stirling
diversity can be used as a measure of interdisciplinarity in the sets under
study. Maps at the global and the local level, however, can be very different
because of the different levels of aggregation involved. Two journals, for
example, can both belong to the humanities in the global map, but participate
in different specialty structures locally. The base map and interactive tools
are available online (with instructions) at
http://www.leydesdorff.net/scopus_ovl. (Accepted for publication in the
Journal of the Association for Information Science and Technology, JASIST.)
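One common form of the Rao-Stirling diversity mentioned above sums, over all unordered pairs of categories, the product of the two categories' portfolio shares and the distance between them. A minimal sketch, with hypothetical category names, shares, and distances (the paper derives its distances from journal-journal citation data):

```python
from itertools import combinations

def rao_stirling(proportions, distance):
    """Rao-Stirling diversity: sum over category pairs of p_i * p_j * d_ij.

    proportions: dict mapping category -> share of the portfolio.
    distance: dict mapping frozenset pairs of categories -> a distance
              in [0, 1], e.g. 1 minus the cosine similarity of the
              categories' citation profiles.
    """
    return sum(proportions[i] * proportions[j] * distance[frozenset((i, j))]
               for i, j in combinations(proportions, 2))

p = {"math": 0.5, "physics": 0.3, "sociology": 0.2}
d = {frozenset(("math", "physics")): 0.2,
     frozenset(("math", "sociology")): 0.9,
     frozenset(("physics", "sociology")): 0.8}
print(rao_stirling(p, d))  # ≈ 0.168
```

The measure rises both when the portfolio is spread over more categories and when the categories involved are more distant from one another, which is why it serves as an interdisciplinarity indicator.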
Science Models as Value-Added Services for Scholarly Information Systems
The paper introduces scholarly Information Retrieval (IR) as a further
dimension that should be considered in the science modeling debate. The IR use
case is seen as a validation model of the adequacy of science models in
representing and predicting structure and dynamics in science. Particular
conceptualizations of scholarly activity and structures in science are used as
value-added search services to improve retrieval quality: a co-word model
depicting the cognitive structure of a field (used for query expansion), the
Bradford law of information concentration, and a model of co-authorship
networks (both used for re-ranking search results). An evaluation of
retrieval quality with these science-model-driven services in place showed
that the proposed models do provide beneficial effects on retrieval quality.
From an IR perspective, the models studied are therefore verified as expressive
conceptualizations of central phenomena in science. Thus, it could be shown
that the IR perspective can contribute significantly to a better understanding
of scholarly structures and activities. (26 pages; to appear in Scientometrics.)
Network-based ranking in social systems: three challenges
Ranking algorithms are pervasive in our increasingly digitized societies,
with important real-world applications including recommender systems, search
engines, and influencer marketing practices. From a network science
perspective, network-based ranking algorithms solve fundamental problems
related to the identification of vital nodes for the stability and dynamics of
a complex system. Despite the ubiquitous and successful applications of these
algorithms, we argue that our understanding of their performance and their
applications to real-world problems face three fundamental challenges: (i)
rankings might be biased by various factors; (ii) their effectiveness might be
limited to specific problems; and (iii) agents' decisions driven by rankings
might result in potentially vicious feedback mechanisms and unhealthy systemic
consequences. Methods rooted in network science and agent-based modeling can
help us to understand and overcome these challenges.
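A canonical example of the network-based ranking algorithms discussed here is PageRank, which scores a node by the (damped) share of rank flowing in along its links. The following is a minimal power-iteration sketch on a toy graph, not any system's production ranker:

```python
def pagerank(edges, damping=0.85, iters=100):
    """Minimal PageRank by power iteration (a sketch).

    edges: list of (source, target) directed links.
    Returns a dict mapping each node to its rank score (summing to 1).
    """
    nodes = {n for edge in edges for n in edge}
    out = {n: [] for n in nodes}
    for s, t in edges:
        out[s].append(t)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for s in nodes:
            if out[s]:
                # spread this node's damped rank over its out-links
                share = damping * rank[s] / len(out[s])
                for t in out[s]:
                    new[t] += share
            else:
                # dangling node: spread its rank evenly over all nodes
                for t in nodes:
                    new[t] += damping * rank[s] / len(nodes)
        rank = new
    return rank

r = pagerank([("a", "b"), ("b", "c"), ("c", "b"), ("a", "c")])
print(r["b"] > r["a"])  # True -- "a" receives no links, so it ranks lowest
```

Even this toy version exhibits the article's first challenge: the scores depend entirely on the recorded link structure, so any bias in which links exist is inherited by the ranking.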