Rivals for the crown: Reply to Opthof and Leydesdorff
We reply to the criticism of Opthof and Leydesdorff [arXiv:1002.2769] on the
way in which our institute applies journal and field normalizations to citation
counts. We point out why we believe most of the criticism is unjustified, but
we also indicate where we think Opthof and Leydesdorff raise a valid point.
Solutions to Detect and Analyze Online Radicalization: A Survey
Online Radicalization (also called Cyber-Terrorism or Extremism or
Cyber-Racism or Cyber-Hate) is widespread and has become a major and growing
concern to society, governments and law enforcement agencies around the
world. Research shows that various platforms on the Internet (low barrier to
publish content, allows anonymity, provides exposure to millions of users and a
potential of a very quick and widespread diffusion of message) such as YouTube
(a popular video sharing website), Twitter (an online micro-blogging service),
Facebook (a popular social networking website), online discussion forums and
blogosphere are being misused for malicious intent. Such platforms are being
used to form hate groups, racist communities, spread extremist agenda, incite
anger or violence, promote radicalization, recruit members and create virtual
organizations and communities. Automatic detection of online radicalization
is a technically challenging problem because of the vast amount of the data,
unstructured and noisy user-generated content, dynamically changing content and
adversary behavior. There are several solutions proposed in the literature
aiming to combat and counter cyber-hate and cyber-extremism. In this survey, we
review solutions to detect and analyze online radicalization. We review 40
papers published at 12 venues from June 2003 to November 2011. We present a
novel classification scheme to classify these papers. We analyze these
techniques, perform trend analysis, discuss limitations of existing techniques
and identify research gaps.
Forecasting Methods for Marketing: Review of Empirical Research
This paper reviews the empirical research on forecasting in marketing. In addition, it presents results from some small-scale surveys. We offer a framework for discussing forecasts in the area of marketing, and then review the literature in light of that framework. Particular emphasis is given to a pragmatic interpretation of the literature and findings. Suggestions are made on what research is needed.
Keywords: forecasting, marketing, methods, review, research
Effective Unsupervised Author Disambiguation with Relative Frequencies
This work addresses the problem of author name homonymy in the Web of
Science. Aiming for an efficient, simple and straightforward solution, we
introduce a novel probabilistic similarity measure for author name
disambiguation based on feature overlap. Using the researcher-ID available for
a subset of the Web of Science, we evaluate the application of this measure in
the context of agglomeratively clustering author mentions. We focus on a
concise evaluation that shows clearly for which problem setups and at which
time during the clustering process our approach works best. In contrast to most
other works in this field, we are sceptical towards the performance of author
name disambiguation methods in general and compare our approach to the trivial
single-cluster baseline. Our results are presented separately for each correct
clustering size because, when all cases are treated together, the trivial
baseline and more sophisticated approaches are hardly distinguishable
in terms of evaluation results. Our model shows state-of-the-art performance
for all correct clustering sizes without any discriminative training and with
tuning only one convergence parameter.
Comment: Proceedings of JCDL 201
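The feature-overlap similarity and agglomerative clustering of author mentions described above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact model: the feature encoding, the single-link merge rule, and the 0.5 threshold are all assumptions made for the example.

```python
# Hedged sketch: overlap-based similarity between two author mentions,
# followed by greedy single-link agglomerative clustering. Feature names
# and the threshold are illustrative assumptions.

def overlap_similarity(a: set, b: set) -> float:
    """Fraction of shared features relative to the smaller feature set."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def cluster_mentions(mentions, threshold=0.5):
    """Greedily merge clusters of author mentions whose best pairwise
    similarity meets the threshold.

    `mentions` is a list of feature sets (e.g. co-author names, journal
    titles, subject categories) extracted for each name occurrence.
    """
    clusters = [[i] for i in range(len(mentions))]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = max(overlap_similarity(mentions[p], mentions[q])
                          for p in clusters[i] for q in clusters[j])
                if sim >= threshold:
                    clusters[i].extend(clusters[j])
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters

mentions = [
    {"coauthor:smith", "journal:nature", "topic:physics"},
    {"coauthor:smith", "journal:science", "topic:physics"},
    {"coauthor:lee", "journal:jasist", "topic:scientometrics"},
]
print(cluster_mentions(mentions))  # → [[0, 1], [2]]
```

The first two mentions share two of three features (similarity 2/3) and merge; the third shares nothing and stays a singleton cluster, which mirrors the paper's comparison against a trivial single-cluster baseline.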
Exploiting citation networks for large-scale author name disambiguation
We present a novel algorithm and validation method for disambiguating author
names in very large bibliographic data sets and apply it to the full Web of
Science (WoS) citation index. Our algorithm relies only upon the author and
citation graphs available for the whole period covered by the WoS. A pair-wise
publication similarity metric, which is based on common co-authors,
self-citations, shared references and citations, is established to perform a
two-step agglomerative clustering that first connects individual papers and
then merges similar clusters. This parameterized model is optimized using an
h-index based recall measure, favoring the correct assignment of well-cited
publications, and a name-initials-based precision using WoS metadata and
cross-referenced Google Scholar profiles. Despite the use of limited metadata,
we reach a recall of 87% and a precision of 88% with a preference for
researchers with high h-index values. 47 million articles of WoS can be
disambiguated on a single machine in less than a day. We develop an h-index
distribution model, confirming that the prediction is in excellent agreement
with the empirical data, and yielding insight into the utility of the h-index
in real academic ranking scenarios.
Comment: 14 pages, 5 figures
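A pairwise publication similarity of the kind the abstract describes (common co-authors, self-citation links, shared references) could look roughly like this. The weights and the additive combination rule are illustrative assumptions, not the published, optimized model:

```python
# Hedged sketch: combine the signals listed in the abstract into one
# similarity score between two publications that carry the same author
# name string. Weights are illustrative assumptions.

def publication_similarity(p1, p2, w_coauthor=1.0, w_selfcite=2.0, w_ref=0.5):
    """Score how likely two same-name papers share an actual author.

    Each paper is a dict with an 'id' plus 'coauthors' and 'references'
    given as sets of identifiers.
    """
    # Common co-authors: strong evidence of a shared identity.
    score = w_coauthor * len(p1["coauthors"] & p2["coauthors"])
    # Self-citation link: one paper cites the other.
    if p1["id"] in p2["references"] or p2["id"] in p1["references"]:
        score += w_selfcite
    # Shared references: weaker topical evidence.
    score += w_ref * len(p1["references"] & p2["references"])
    return score

p1 = {"id": "A", "coauthors": {"x", "y"}, "references": {"R1", "R2", "B"}}
p2 = {"id": "B", "coauthors": {"x"}, "references": {"R1", "R3"}}
print(publication_similarity(p1, p2))  # → 3.5
```

In the paper these pairwise scores feed a two-step agglomerative clustering (papers first, then cluster merging) whose parameters are tuned against h-index-based recall and name-initials-based precision; that optimization is outside this sketch.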
Three practical field normalised alternative indicator formulae for research evaluation
Although altmetrics and other web-based alternative indicators are now commonplace in publishers' websites, they can be difficult for research evaluators to use because of the time or expense of the data, the need to benchmark in order to assess their values, the high proportion of zeros in some alternative indicators, and the time taken to calculate multiple complex indicators. These problems are addressed here by (a) a field normalisation formula, the Mean Normalised Log-transformed Citation Score (MNLCS), that allows simple confidence limits to be calculated and is similar to a proposal of Lundberg, (b) field normalisation formulae for the proportion of cited articles in a set, the Equalised Mean-based Normalised Proportion Cited (EMNPC) and the Mean-based Normalised Proportion Cited (MNPC), to deal with mostly uncited data sets, (c) a sampling strategy to minimise data collection costs, and (d) free unified software to gather the raw data, implement the sampling strategy, and calculate the indicator formulae and confidence limits. The approach is demonstrated (but not fully tested) by comparing the Scopus citations, Mendeley readers and Wikipedia mentions of research funded by Wellcome, NIH, and MRC in three large fields for 2013–2016. Within the results, statistically significant differences in both citation counts and Mendeley reader counts were found even for sets of articles that were less than six months old. Mendeley reader counts were more precise than Scopus citations for the most recent articles, and all three funders could be demonstrated to have an impact in Wikipedia that was significantly above the world average.
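The core of the MNLCS indicator named above can be sketched as a Lundberg-style log transform, ln(1 + citations), averaged over the evaluated group and divided by the same average over a field/year reference set. This is a minimal sketch under that assumption; the paper's confidence-limit and sampling-strategy details are omitted here.

```python
# Hedged sketch of the Mean Normalised Log-transformed Citation Score
# (MNLCS). Assumes the indicator is the group mean of ln(1 + c) divided
# by the field/year ("world") mean of ln(1 + c); confidence limits from
# the paper are not implemented.
import math

def mnlcs(group_citations, world_citations):
    """MNLCS for a set of articles against a field/year reference set."""
    group_mean = sum(math.log(1 + c) for c in group_citations) / len(group_citations)
    world_mean = sum(math.log(1 + c) for c in world_citations) / len(world_citations)
    return group_mean / world_mean
```

A value above 1 indicates the group attracts more (log-transformed) citations than the field average; the log transform dampens the influence of a few very highly cited articles, which is what makes simple confidence limits feasible for skewed citation data.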