1,239 research outputs found
Analysing Scientific Collaborations of New Zealand Institutions using Scopus Bibliometric Data
Scientific collaborations are among the main enablers of development in small
national science systems. Although analysing scientific collaborations is a
well-established subject in scientometrics, evaluations of scientific
collaborations within a single country remain limited, with studies either
confined to a few fields or based on data too sparse to represent
collaboration at a national level. This study offers a unique view of the
collaborative aspect of scientific activities in New Zealand. We perform a
quantitative study based on all Scopus publications in all subjects for more
than 1500 New Zealand institutions over a period of 6 years to generate an
extensive mapping of scientific collaboration at a national level. The
comparative results reveal the level of collaboration between New Zealand
institutions and business enterprises, government institutions, higher
education providers, and private not-for-profit organisations in 2010-2015.
Constructing a collaboration network of institutions, we observe a power-law
distribution indicating that a small number of New Zealand institutions account
for a large proportion of national collaborations. Network centrality concepts
are deployed to identify the most central institutions of the country in terms
of collaboration. We also provide comparative results on 15 universities and
Crown research institutes based on 27 subject classifications.
Comment: 10 pages, 15 figures, accepted author copy with link to research
data, Analysing Scientific Collaborations of New Zealand Institutions using
Scopus Bibliometric Data. In Proceedings of ACSW 2018: Australasian Computer
Science Week 2018, January 29-February 2, 2018, Brisbane, QLD, Australia
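The network construction and centrality analysis described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the institution names, co-publication counts, and the choice of degree centrality are invented assumptions standing in for the actual Scopus data.

```python
from collections import defaultdict

# Hypothetical co-publication counts between institution pairs
# (illustrative only, not the Scopus data used in the study).
collaborations = [
    ("Univ A", "Univ B", 120),
    ("Univ A", "CRI C", 45),
    ("Univ B", "CRI C", 30),
    ("Univ A", "Firm D", 5),
    ("Univ B", "Govt E", 8),
]

# Build an undirected collaboration network as an adjacency map.
network = defaultdict(dict)
for a, b, papers in collaborations:
    network[a][b] = papers
    network[b][a] = papers

# Degree centrality: fraction of other institutions a node collaborates with.
n = len(network)
centrality = {inst: len(neigh) / (n - 1) for inst, neigh in network.items()}

# Weighted degree (total co-authored papers). In a power-law network,
# ranking institutions by this value exposes the heavy tail: a few nodes
# account for most of the national collaboration links.
strength = {inst: sum(neigh.values()) for inst, neigh in network.items()}
```

Ranking by `strength` or `centrality` then identifies the most central institutions, which is the role network centrality plays in the study.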
Impact Factor: outdated artefact or stepping-stone to journal certification?
A review of Garfield's journal impact factor and its specific implementation
as the Thomson Reuters Impact Factor reveals several weaknesses in this
commonly-used indicator of journal standing. Key limitations include the
mismatch between citing and cited documents, the deceptive display of three
decimals that belies the real precision, and the absence of confidence
intervals. These are minor issues that are easily amended and should be
corrected, but more substantive improvements are needed. There are indications
that the scientific community seeks and needs better certification of journal
procedures to improve the quality of published science. Comprehensive
certification of editorial and review procedures could help ensure adequate
procedures to detect duplicate and fraudulent submissions.
Comment: 25 pages, 12 figures, 6 tables
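As a reminder of what the indicator computes, here is a minimal sketch of the classic two-year impact factor, together with the kind of bootstrap confidence interval the abstract notes is absent from the published figures. The per-article citation counts are invented for illustration.

```python
import random

# Hypothetical citations received in one year by a journal's citable items
# from the two preceding years (made-up numbers, for illustration only).
citations_per_item = [0, 0, 1, 1, 2, 2, 3, 4, 5, 12]

# Classic two-year impact factor: total citations / citable items.
impact_factor = sum(citations_per_item) / len(citations_per_item)

# Bootstrap confidence interval -- the uncertainty estimate the abstract
# argues should accompany the point value.
random.seed(0)
means = []
for _ in range(2000):
    sample = random.choices(citations_per_item, k=len(citations_per_item))
    means.append(sum(sample) / len(sample))
means.sort()
ci_low, ci_high = means[49], means[1949]  # approximate 95% interval

# Reporting one decimal avoids the "deceptive display of three decimals"
# the abstract criticises.
print(f"IF = {impact_factor:.1f}, 95% CI ~ ({ci_low:.1f}, {ci_high:.1f})")
```

The width of the interval, driven here by a single highly cited item, illustrates why a three-decimal point value overstates the indicator's real precision.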
Four Decades of the Journal Law and Human Behavior: A Content Analysis
Although still relatively young, the journal Law and Human Behavior (LHB) has amassed a publication history of more than 1300 full-length articles over four decades. Yet, no systematic analysis of the journal has been done until now. The current research coded all full-length articles to examine trends over time, predictors of the number of Google Scholar citations, and predictors of whether an article was cited by a court case. The predictors of interest included article organization, research topics, areas of law, areas of psychology, first-author gender, first-author country of institutional affiliation, and samples employed. Results revealed a vast and varied field that has shown marked diversification over the years. First authors have consistently become more diversified in both gender and country of institutional affiliation. Overall, the most common research topics were jury/judicial decision-making and eyewitness/memory, the most common legal connections were to criminal law and mental health law, and the most common psychology connection was to social-cognitive psychology. Research in psychology and law has the potential to impact both academic researchers and the legal system. Articles published in LHB appear to accomplish both.
Construction of a Pragmatic Base Line for Journal Classifications and Maps Based on Aggregated Journal-Journal Citation Relations
A number of journal classification systems have been developed in
bibliometrics since the launch of the Citation Indices by the Institute of
Scientific Information (ISI) in the 1960s. These systems are used to normalize
citation counts with respect to field-specific citation patterns. The
best-known system is the so-called "Web-of-Science Subject Categories" (WCs).
In other systems, papers are classified by algorithmic solutions. Using the Journal
Citation Reports 2014 of the Science Citation Index and the Social Science
Citation Index (n of journals = 11,149), we examine options for developing a
new system based on journal classifications into subject categories using
aggregated journal-journal citation data. Combining routines in VOSviewer and
Pajek, a tree-like classification is developed. At each level one can generate
a map of science for all the journals subsumed under a category. Nine major
fields are distinguished at the top level. Further decomposition of the social
sciences is pursued for the sake of example with a focus on journals in
information science (LIS) and science studies (STS). The new classification
system improves on alternative options by avoiding the problem of randomness in
each run that has made algorithmic solutions hitherto irreproducible.
Limitations of the new system are discussed (e.g. the classification of
multi-disciplinary journals). The system's usefulness for field-normalization
in bibliometrics should be explored in future studies.
Comment: accepted for publication in the Journal of Informetrics, 20 July 201
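The core idea of classifying journals from aggregated journal-journal citation data can be sketched with a toy example. The journal names and citation counts below are fabricated, and the deterministic threshold grouping stands in for the much richer VOSviewer/Pajek routines the paper combines; it is shown only to illustrate the reproducibility point the abstract raises, since rerunning it always yields the same partition.

```python
import math

# Hypothetical aggregated journal-journal citation counts (rows cite
# columns); the real study uses the full JCR 2014 matrix of 11,149 journals.
journals = ["JOI", "Scientometrics", "JASIST", "Cell", "Nature"]
cites = {
    "JOI":            [50, 40, 30,  0,  1],
    "Scientometrics": [45, 60, 25,  0,  2],
    "JASIST":         [30, 20, 70,  1,  3],
    "Cell":           [ 0,  0,  1, 90, 40],
    "Nature":         [ 2,  3,  5, 35, 80],
}

def cosine(u, v):
    """Similarity of two journals' citing patterns."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Deterministic single-linkage grouping at a fixed similarity threshold:
# unlike stochastic community detection, every run gives the same result.
threshold = 0.5
groups = []
for j in journals:
    placed = False
    for g in groups:
        if any(cosine(cites[j], cites[k]) >= threshold for k in g):
            g.append(j)
            placed = True
            break
    if not placed:
        groups.append([j])
```

The two groups that emerge (an information-science cluster and a life-science cluster) correspond to the kind of subject categories the tree-like classification assigns at each level.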
The Research Space: using the career paths of scholars to predict the evolution of the research output of individuals, institutions, and nations
In recent years scholars have built maps of science by connecting the
academic fields that cite each other, are cited together, or that cite a
similar literature. But since scholars cannot always publish in the fields they
cite, or that cite them, these science maps are only rough proxies for the
potential of a scholar, organization, or country, to enter a new academic
field. Here we use a large dataset of scholarly publications disambiguated at
the individual level to create a map of science, or research space, where links
connect pairs of fields based on the probability that an individual has
published in both of them. We find that the research space is a significantly
more accurate predictor of the fields that individuals and organizations will
enter in the future than citation-based science maps. At the country level,
however, the research space and citation-based science maps are equally
accurate. These findings show that data on career trajectories (the set of
fields that individuals have previously published in) provide more accurate
predictors of future research output for more focused units, such as
individuals or organizations, than citation-based science maps.
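The linking rule the abstract describes, connecting pairs of fields by the probability that one individual has published in both, can be sketched directly. The career records and the specific conditional-probability estimator below are illustrative assumptions; the study works with a large, individually disambiguated publication dataset.

```python
from itertools import combinations

# Hypothetical career records: the set of fields each scholar has published
# in (stand-ins for the disambiguated publication data the paper uses).
careers = {
    "scholar_1": {"physics", "materials"},
    "scholar_2": {"physics", "materials", "chemistry"},
    "scholar_3": {"chemistry", "biology"},
    "scholar_4": {"biology", "medicine"},
    "scholar_5": {"physics", "materials"},
}

fields = sorted(set().union(*careers.values()))

def proximity(f, g):
    """P(individual published in both f and g | published in at least one)."""
    either = sum(1 for c in careers.values() if f in c or g in c)
    both = sum(1 for c in careers.values() if f in c and g in c)
    return both / either if either else 0.0

# The research space: one proximity value per pair of fields.
research_space = {(f, g): proximity(f, g) for f, g in combinations(fields, 2)}

# Predicting diversification: a unit's likeliest new field is the one
# closest in the research space to the fields it already occupies.
def predict_next(current):
    candidates = [f for f in fields if f not in current]
    return max(candidates, key=lambda f: max(proximity(f, g) for g in current))
```

In this toy data, a physicist is predicted to enter materials science before biology, even if biology papers cite physics heavily, which is the sense in which career-path links can outperform citation-based maps.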
A scientometric analysis and review of fall from height research in construction
Fall from height (FFH) in the construction industry has earned much attention among researchers in recent years. The present review-based study introduced a science mapping approach to evaluate the FFH studies related to the construction industry. This study, through an extensive bibliometric and scientometric assessment, recognized the most active journals, keywords and nations in the field of FFH studies since 2000. Analysis of the authors’ keywords revealed the emerging research topics in the FFH research community. Recent studies have been found to pay more attention to the application of Computer and Information Technology (CIT) tools, particularly building information modelling (BIM), in research related to FFH. Other emerging research areas in the domain of FFH include rule checking and prevention through design. The findings summarized the mainstream research areas (e.g., safety management programs), discussed existing research gaps in the FFH domain (e.g., the adaptability of safety management systems), and suggested future directions in FFH research. The recommended future directions could contribute to improving safety for the FFH research community by evaluating existing fall prevention programs in different contexts; integrating multiple CIT tools across the entire project lifecycle; designing fall safety courses for workers associated with temporary agencies; and developing prototype safety knowledge tools. The current study was restricted to an FFH literature sample that included only journal articles published in English and indexed in Scopus.
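The keyword analysis underpinning such a science-mapping review reduces to frequency and co-occurrence counts. A minimal sketch, using made-up author-keyword lists rather than the review's actual Scopus records:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists from FFH papers (illustrative only).
papers = [
    ["fall from height", "BIM", "rule checking"],
    ["fall from height", "safety management", "BIM"],
    ["prevention through design", "BIM"],
    ["fall from height", "safety management"],
]

# Keyword frequency identifies mainstream topics ...
freq = Counter(k for kws in papers for k in kws)

# ... and pairwise co-occurrence within papers gives the links of the
# keyword map from which emerging topics are read off.
cooccur = Counter()
for kws in papers:
    for a, b in combinations(sorted(kws), 2):
        cooccur[(a, b)] += 1
```

Tools such as VOSviewer visualise exactly this kind of co-occurrence network, with node size from `freq` and edge weight from `cooccur`.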
Empirical Patterns in Google Scholar Citation Counts
Scholarly impact may be metricized using an author's total number of
citations as a stand-in for real worth, but this measure varies in
applicability between disciplines. The number of citations per publication is
nowadays mapped in much more detail on the Web, exposing certain empirical
patterns. This paper explores those patterns, using the citation data from
Google Scholar for a number of authors.
Comment: 6 pages, 8 figures, submitted to Cyberpatterns 201
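The per-publication patterns in question are visible in an author's rank-citation profile. A sketch with invented citation counts (not real Google Scholar data), including the h-index as one common summary of that curve; the paper itself does not prescribe this particular summary:

```python
# Hypothetical per-paper citation counts for one author.
citations = [120, 64, 33, 20, 11, 9, 6, 3, 1, 0]

# Sorting descending gives the rank-citation profile whose empirical
# shape (typically heavy-tailed) such analyses examine.
ranked = sorted(citations, reverse=True)

# h-index: largest h such that h papers each have at least h citations.
h_index = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Total citations: the "stand-in for real worth" the abstract mentions,
# dominated here by a few highly cited papers.
total = sum(ranked)
```

Note how the single top paper contributes almost half of `total`, which is why the aggregate count varies so much in applicability across disciplines with different citation cultures.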
An academic odyssey: Writing over time
In this paper we present and discuss the results of six enquiries into the first author's academic writing over the last fifty years. Our aim is to assess whether or not his academic writing style has changed with age, experience, and cognitive decline. The results of these studies suggest that the readability of textbook chapters written by Hartley has remained fairly stable for over 50 years, with the later chapters becoming easier to read. The format of the titles used for chapters and papers has also remained much the same, with an increase in the use of titles written in the form of questions. It also appears that the format of the chosen titles had no effect on citation rates, but that papers that obtained the highest citation rates were written with colleagues rather than by Hartley alone. Finally, it is observed that Hartley's publication rate has remained much the same for over fifty years, but that this has been achieved at the expense of other academic activities.
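Readability over time is typically tracked with a formula such as Flesch Reading Ease; the abstract does not name its specific instrument, so this is an assumed example, with invented word, sentence, and syllable counts for an early and a late chapter:

```python
# Flesch Reading Ease: higher scores mean easier text.
def flesch_reading_ease(words, sentences, syllables):
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Hypothetical counts: the later chapter uses shorter sentences and
# shorter words, so it scores as easier to read.
early = flesch_reading_ease(words=2000, sentences=80, syllables=3400)
late = flesch_reading_ease(words=2000, sentences=100, syllables=3100)
```

Computing such a score per chapter and plotting it against publication year is the standard way to test whether readability has drifted with age.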
Why replication studies are essential: learning from failure and success
Van Witteloostuijn’s (2016) commentary “What happened to Popperian Falsification?” is an excellent summary of the many problems that plague research in the (Social) Sciences in general and (International) Business & Management in particular. As van Witteloostuijn (2016) admits, his “[...] diagnosis is anything but new – quite the contrary”, nor is it applicable only to the Social Sciences. When preparing this note, I was reminded of Cargo Cult Science, a 1974 Caltech commencement address by physicist Richard Feynman (Feynman, 1974), which – more than four decades ago – made many of the same points, including the pervasive problem of a lack of replication studies, which will be the topic of this short rejoinder.
Conducting replication studies is more difficult in International Business (IB) than it is in many other disciplines. For instance in Psychology – a discipline that favours experimental research – one might be able to replicate a particular study within weeks or, in some cases, even days. However, in IB data collection is typically very time-consuming and fraught with many problems not encountered in purely domestic research (for a summary see Harzing, Reiche & Pudelko, 2013). Moreover, most journals in our field only publish articles with novel research findings and a strong theoretical contribution, and are thus not open to replication studies. To date, most studies in IB are therefore unique and are never replicated. This is regrettable, because even though difficult, replication is even more essential in IB than it is in domestic studies, because differences in cultural and institutional environments might limit generalization from studies conducted in a single home or host country.
Somehow though, pleas for replication studies – however well articulated and however often repeated – seem to be falling on deaf ears. Academics are only human, and many humans learn best from personal stories and examples, especially if they evoke vivid emotions or associations. Hence, in this note, instead of providing yet another essayistic plea for replication, I will attempt to argue “by example”. I present two short case studies from my own research: one in which the lack of replication resulted in the creation of myths, and another in which judicious replication strengthened arguments for a new – less biased – measure of research performance. Finally, I will provide a recommendation on how to move forward that can be implemented immediately without the need for a complete overhaul of our current system of research dissemination.
