3,199 research outputs found
Soft peer review: social software and distributed scientific evaluation
The debate on the prospects of peer-review in the Internet age and the
increasing criticism leveled against the dominant role of impact factor
indicators are calling for new measurable criteria to assess scientific quality.
Usage-based metrics offer a new avenue to scientific quality assessment but
face the same risks as first generation search engines that used unreliable
metrics (such as raw traffic data) to estimate content quality. In this article I
analyze the contribution that social bookmarking systems can provide to the
problem of usage-based metrics for scientific evaluation. I suggest that
collaboratively aggregated metadata may help fill the gap between traditional
citation-based criteria and raw usage factors. I submit that bottom-up,
distributed evaluation models such as those afforded by social bookmarking
will challenge more traditional quality assessment models in terms of coverage,
efficiency and scalability. Services aggregating user-related quality indicators
for online scientific content will come to occupy a key function in the scholarly
communication system.
Of course we share! Testing Assumptions about Social Tagging Systems
Social tagging systems have established themselves as an important part of
today's web and have attracted interest from the research community in a
variety of investigations. The overall vision of our community is that, simply
through interactions with the system, i.e., through tagging and sharing of
resources, users contribute to building useful semantic structures as well as
resource indexes based on an uncontrolled vocabulary, owing not least to the
easy-to-use mechanics. As a result, a variety of assumptions about social
tagging systems have emerged, yet testing them has been difficult due to the
absence of suitable data. In this work we thoroughly investigate three
prevalent assumptions - e.g., is a tagging system really social? - by examining
live log data gathered from the real-world public social tagging system
BibSonomy. Our empirical results indicate that while some of these assumptions
hold to a certain extent, others need to be reconsidered and viewed in a very
critical light. Our observations have implications for the design of future
search and other algorithms to better reflect actual user behavior.
Posted, Visited, Exported: Altmetrics in the Social Tagging System BibSonomy
In social tagging systems, like Mendeley, CiteULike, and BibSonomy, users can post, tag, visit, or export scholarly publications. In this paper, we compare citations with metrics derived from users’ activities (altmetrics) in the popular social bookmarking system BibSonomy. Our analysis, using a corpus of more than 250,000 publications published before 2010, reveals that overall, citations and altmetrics in BibSonomy are mildly correlated. Furthermore, grouping publications by user-generated tags results in topic-homogeneous subsets that exhibit higher correlations with citations than the full corpus. We find that posts, exports, and visits of publications are correlated with citations and even bear predictive power over future impact. Machine learning classifiers predict whether the number of citations that a publication receives in a year exceeds the median number of citations in that year, based on the usage counts of the preceding year. In that setup, a Random Forest predictor outperforms the baseline on average by seven percentage points.
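The prediction setup described in that abstract can be sketched as follows. The records, feature names, and the single-threshold classifier standing in for the paper's Random Forest are illustrative assumptions, not the study's data or model:

```python
from statistics import median

# Hypothetical toy records: (posts, exports, visits) in year t-1,
# citations received in year t. All numbers are invented for illustration.
records = [
    (5, 2, 40, 12),
    (1, 0, 3, 1),
    (8, 4, 90, 20),
    (0, 1, 5, 0),
    (3, 1, 25, 9),
    (0, 0, 2, 2),
]

citations = [r[3] for r in records]
med = median(citations)

# Binary target as described in the abstract: does the publication
# exceed the median citation count of its year?
labels = [c > med for c in citations]

# Stand-in for the paper's Random Forest: a single decision stump on
# visit counts (the actual study trains a Random Forest on usage features).
def predict(posts, exports, visits, threshold=20):
    return visits > threshold

preds = [predict(p, e, v) for (p, e, v, _) in records]
accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)
print(round(accuracy, 2))
```

On this toy data the stump separates the classes perfectly; the point is only to show how the median-split target and preceding-year usage features fit together.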
Networks of reader and country status: An analysis of Mendeley reader statistics
The number of papers published in journals indexed by the Web of Science core
collection is steadily increasing. In recent years, nearly two million new
papers were published each year; somewhat more than one million when only
primary research papers (articles and reviews, the document types in which
primary research is usually reported or reviewed) are considered.
However, who reads these papers? More precisely, which groups of researchers
from which (self-assigned) scientific disciplines and countries are reading
these papers? Is it possible to visualize readership patterns for certain
countries, scientific disciplines, or academic status groups? One popular
method to answer these questions is a network analysis. In this study, we
analyze Mendeley readership data of a set of 1,133,224 articles and 64,960
reviews with publication year 2012 to generate three different kinds of
networks: (1) The network based on disciplinary affiliations of Mendeley
readers contains four groups: (i) biology, (ii) social science and humanities
(including relevant computer science), (iii) bio-medical sciences, and (iv)
natural science and engineering. In all four groups, the category with the
addition "miscellaneous" prevails. (2) The network of co-readers in terms of
professional status shows that a common interest in papers is mainly shared
among PhD students, Master's students, and postdocs. (3) The country network
focuses on global readership patterns: a group of 53 nations is identified as
core to the scientific enterprise, including Russia and China as well as two
thirds of the OECD (Organisation for Economic Co-operation and Development)
countries.
Comment: 26 pages, 6 figures (also web-based startable), and 2 tables.
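The co-reader networks described in points (2) and (3) are co-occurrence networks: two groups are linked whenever readers from both groups read the same paper. A minimal sketch of that construction, with invented reader counts and status labels (the study uses over a million Mendeley records):

```python
from itertools import combinations
from collections import Counter

# Hypothetical per-paper Mendeley reader counts by professional status.
# All numbers and group names are invented for illustration.
papers = [
    {"PhD student": 12, "Master student": 5, "Postdoc": 3},
    {"PhD student": 7, "Postdoc": 4},
    {"Master student": 2, "Professor": 1, "PhD student": 3},
]

# Co-readership network: an edge between two status groups counts the
# papers that both groups read.
edges = Counter()
for readers in papers:
    for a, b in combinations(sorted(readers), 2):
        edges[(a, b)] += 1

for (a, b), w in sorted(edges.items(), key=lambda x: -x[1]):
    print(a, "--", b, w)
```

The abstract's finding that PhD students, Master's students, and postdocs share the most common interests corresponds to those pairs carrying the heaviest edge weights in such a network.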
Genesis of Altmetrics or Article-level Metrics for Measuring Efficacy of Scholarly Communications: Current Perspectives
Article-level metrics (ALMs), or altmetrics, have become a new trendsetter in
recent times for measuring the impact of scientific publications and their
social outreach to intended audiences. Popular social networks such as
Facebook, Twitter, and LinkedIn and social bookmarking services such as
Mendeley and CiteULike are nowadays widely used for communicating research to
larger transnational audiences. In 2012, the San Francisco Declaration on
Research Assessment was signed by scientific and research communities across
the world. This declaration gives preference to ALMs, or altmetrics, over the
traditional but flawed journal impact factor (JIF)-based assessment of career
scientists. The JIF does not consider impact or influence beyond citation
counts, which are reflected only through Thomson Reuters' Web of Science
database. Furthermore, the JIF is an indicator of the journal, not of an
individual published paper. Thus, altmetrics have become alternative metrics
for the performance assessment of individual scientists and their scholarly
publications. This paper provides a glimpse of the genesis of altmetrics in
measuring the efficacy of scholarly communications and highlights available
altmetric tools, and the social platforms linking to them, that are widely
used in deriving altmetric scores of scholarly publications. The paper argues
that institutions and policy makers should pay more attention to
altmetrics-based indicators for evaluation purposes, but cautions that proper
safeguards and validations are needed before their adoption.
Effective Retrieval of Resources in Folksonomies Using a New Tag Similarity Measure
Social (or folksonomic) tagging has become a very popular way to describe
content within Web 2.0 websites. However, as tags are informally defined,
continually changing, and ungoverned, it has often been criticised for
lowering, rather than increasing, the efficiency of searching. To address this
issue, a variety of approaches have been proposed that recommend to users
which tags to use, both when labeling and when looking for resources. These
techniques work well in dense folksonomies, but they fail to do so when tag
usage exhibits a power-law distribution, as often happens in real-life
folksonomies. To tackle this issue, we propose an approach that induces the
creation of a dense folksonomy in a fully automatic and transparent way: when
users label resources, an innovative tag similarity metric is deployed so as
to enrich the chosen tag set with related tags already present in the
folksonomy. The proposed metric, which represents the core of our approach, is
based on the mutual reinforcement principle. Our experimental evaluation shows
that the accuracy and coverage of searches guaranteed by our metric are higher
than those achieved by applying classical metrics.
Comment: 6 pages, 2 figures, CIKM 2011: 20th ACM Conference on Information and
Knowledge Management.
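The abstract does not spell out the metric, but the mutual reinforcement principle it names is commonly instantiated as a SimRank-style iteration: two tags are similar if they annotate similar resources, and two resources are similar if they are annotated by similar tags. A sketch of that generic idea on invented toy data (the paper's actual metric may differ):

```python
# Toy tag-resource assignments (invented data).
tag_res = {
    "python": {"r1", "r2"},
    "programming": {"r1", "r2", "r3"},
    "cooking": {"r4"},
}
res_tag = {}
for t, rs in tag_res.items():
    for r in rs:
        res_tag.setdefault(r, set()).add(t)

tags = list(tag_res)
resources = list(res_tag)

# Initialize: identical items have similarity 1, everything else 0.
sim_t = {(a, b): float(a == b) for a in tags for b in tags}
sim_r = {(a, b): float(a == b) for a in resources for b in resources}

C = 0.8  # decay factor, as in SimRank
for _ in range(5):  # a few iterations suffice on toy data
    new_t = {}
    for a in tags:
        for b in tags:
            if a == b:
                new_t[(a, b)] = 1.0
                continue
            # Tag similarity reinforced by similarity of their resources.
            pairs = [sim_r[(x, y)] for x in tag_res[a] for y in tag_res[b]]
            new_t[(a, b)] = C * sum(pairs) / (len(tag_res[a]) * len(tag_res[b]))
    new_r = {}
    for a in resources:
        for b in resources:
            if a == b:
                new_r[(a, b)] = 1.0
                continue
            # Resource similarity reinforced by similarity of their tags.
            pairs = [sim_t[(x, y)] for x in res_tag[a] for y in res_tag[b]]
            new_r[(a, b)] = C * sum(pairs) / (len(res_tag[a]) * len(res_tag[b]))
    sim_t, sim_r = new_t, new_r

print(round(sim_t[("python", "programming")], 3))
```

Tags sharing many resources ("python" and "programming") end up with a high similarity, while unrelated tags ("python" and "cooking") stay at zero, which is the behavior a tag-enrichment step would exploit.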
Recommending Items in Social Tagging Systems Using Tag and Time Information
In this work we present a novel item recommendation approach that aims at
improving Collaborative Filtering (CF) in social tagging systems using the
information about tags and time. Our algorithm follows a two-step approach,
where in the first step a potentially interesting candidate item-set is found
using user-based CF and in the second step this candidate item-set is ranked
using item-based CF. Within this ranking step we integrate information about
tag usage and time using the Base-Level Learning (BLL) equation from human
memory theory, which determines the reuse probability of words and tags via a
power-law forgetting function.
The results of our extensive evaluation, conducted on data sets gathered from
three social tagging systems (BibSonomy, CiteULike, and MovieLens), show that
using tag-based and time information via the BLL equation helps to improve the
ranking and recommendation of items, and thus can be used to realize an
effective item recommender that outperforms two alternative algorithms which
also exploit time and tag-based information.
Comment: 6 pages, 2 tables, 9 figures.
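The Base-Level Learning equation the abstract refers to comes from the ACT-R theory of human memory and has the standard form B_i = ln(Σ_j t_j^(-d)), where t_j is the time elapsed since the j-th use of item i and d is a power-law forgetting exponent (commonly 0.5). A minimal sketch of it, with toy timestamps invented for illustration:

```python
import math

def bll_activation(elapsed_times, d=0.5):
    """Base-Level Learning activation: ln(sum of t_j ** -d) over the
    elapsed times t_j since each past use of the item (tag or word)."""
    return math.log(sum(t ** -d for t in elapsed_times))

# A tag used recently and often should score higher than one used
# rarely, long ago (elapsed times in, say, hours since each usage).
recent_frequent = bll_activation([1, 2, 5, 10])
old_rare = bll_activation([200, 500])
print(recent_frequent > old_rare)
```

This frequency-and-recency weighting is what lets the ranking step prefer items whose tags a user has employed both often and recently.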