Crowdsourcing Scholarly Discourse Annotations
The number of scholarly publications grows steadily every year, making it increasingly difficult to find, assess, and compare scholarly knowledge effectively. Scholarly knowledge graphs have the potential to address these challenges. However, creating such graphs remains a complex task. We propose a method to crowdsource structured scholarly knowledge from paper authors with a web-based user interface supported by artificial intelligence. The interface enables authors to select key sentences for annotation. It integrates multiple machine learning algorithms to assist authors during the annotation, including class recommendation and key sentence highlighting. We envision that the interface is integrated into paper submission processes, for which we define three main task requirements: the task has to be […]. We evaluated the interface with a user study in which participants were assigned the task of annotating one of their own articles. With the resulting data, we determined whether the participants were able to perform the task successfully. Furthermore, we evaluated the interface's usability and the participants' attitude towards the interface with a survey. The results suggest that sentence annotation is a feasible task for researchers and that they do not object to annotating their articles during the submission process.
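As a rough illustration of what the class-recommendation step could look like, the sketch below treats discourse annotation as plain supervised sentence classification. This is a minimal sketch under assumed conditions, not the authors' implementation: the discourse classes, the toy training sentences, and the TF-IDF-plus-logistic-regression model are all illustrative choices.

```python
# Minimal sketch of discourse-class recommendation as supervised
# sentence classification. Illustrative only: labels, training data,
# and model choice are assumptions, not the paper's actual setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set: (sentence, discourse class).
sentences = [
    "Prior work has studied scholarly knowledge graphs.",
    "We propose a web-based annotation interface.",
    "Participants completed the task in 15 minutes on average.",
]
labels = ["background", "method", "result"]

# TF-IDF features feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(sentences, labels)

# Recommend a class for a newly selected key sentence.
new_sentence = "We evaluate the interface in a user study."
print(model.predict([new_sentence])[0])
```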
A comprehensive quality assessment framework for scientific events
Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index) have been developed by different research communities to make such assessments effective. However, most of the metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop such metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and the stakeholders involved. The resulting quality metrics are determined with respect to three general categories: events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data to determine the quality metrics. We show that the metrics' values coincide with the intuitive agreement of the community on its "top conferences". Our results demonstrate that highly ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.
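One bibliometric ingredient named above, the h-index, is simple enough to state precisely: it is the largest h such that at least h items have at least h citations each. Below is a minimal sketch of computing it for one hypothetical event edition; the citation counts are invented for illustration.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h items have >= h citations."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical per-paper citation counts for one event edition.
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
```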
Challenges as enablers for high quality Linked Data: Insights from the Semantic Publishing Challenge
While most challenges organized so far in the Semantic Web domain focus on comparing tools with respect to criteria such as their features and competencies, or on exploiting semantically enriched data, the Semantic Web Evaluation Challenges series, co-located with the ESWC Semantic Web Conference, aims to compare them based on their output, namely the produced dataset. The Semantic Publishing Challenge is one of these challenges. Its goal is to involve participants in extracting data from heterogeneous sources on scholarly publications and producing Linked Data that can be exploited by the community itself. This paper reviews lessons learned from both (i) the overall organization of the Semantic Publishing Challenge, regarding the definition of the tasks, the building of the input dataset, and the design of the evaluation, and (ii) the results produced by the participants, regarding the proposed approaches, the tools used, the preferred vocabularies, and the results produced in the three editions of 2014, 2015, and 2016. We compared these lessons to those of other Semantic Web Evaluation Challenges. In this paper, we (i) distill best practices for organizing such challenges that could be applied to similar events, and (ii) report observations on Linked Data publishing derived from the submitted solutions. We conclude that higher quality may be achieved when Linked Data is produced as the result of a challenge, because the competition becomes an incentive, and that solutions adhere better to Linked Data publishing best practices when they are evaluated against the rules of the challenge.
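To make the challenge's expected output concrete: producing Linked Data about a publication amounts to emitting RDF triples under shared vocabularies. The sketch below uses the rdflib Python library with the common DCTERMS and FOAF vocabularies; the resource URIs and metadata values are invented, and the vocabulary choice is illustrative rather than mandated by the challenge.

```python
# Minimal sketch of publishing one paper as Linked Data with rdflib.
# Resource URIs and metadata values are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, FOAF, RDF

EX = Namespace("http://example.org/papers/")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("foaf", FOAF)

paper = EX["paper-42"]
author = EX["author-7"]

g.add((paper, RDF.type, DCTERMS.BibliographicResource))
g.add((paper, DCTERMS.title, Literal("An Example Scholarly Paper")))
g.add((paper, DCTERMS.creator, author))
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Ada Lovelace")))

# Serialize as Turtle, a common Linked Data exchange format.
print(g.serialize(format="turtle"))
```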
Open Research Knowledge Graph
As we mark the fifth anniversary of the alpha release of the Open Research Knowledge Graph (ORKG), it is both timely and exhilarating to celebrate the significant strides made in this pioneering project. We designed this book as a tribute to the evolution and achievements of the ORKG and as a practical guide encapsulating its essence in a form that resonates with both the general reader and the specialist.

The ORKG has opened a new era in the way scholarly knowledge is curated, managed, and disseminated. By transforming vast arrays of unstructured narrative text into structured, machine-processable knowledge, the ORKG has emerged as an essential service with sophisticated functionalities. Over the past five years, our team has developed the ORKG into a vibrant platform that enhances the accessibility and visibility of scientific research. This book serves as a non-technical guide and a comprehensive reference for new and existing users, outlining the ORKG's approach, its technologies, and its role in revolutionizing scholarly communication. By elucidating how the ORKG facilitates the collection, enhancement, and sharing of knowledge, we invite readers to appreciate the value and potential of this groundbreaking digital tool presented in a tangible form.

Looking ahead, we are thrilled to announce the upcoming unveiling of promising new features and tools at the fifth-year celebration of the ORKG's alpha release. These innovations are set to redefine the boundaries of machine assistance enabled by research knowledge graphs. Among these enhancements, you can expect more intuitive interfaces that simplify the user experience and enhanced machine learning models that improve the automation and accuracy of data curation.

We also included a glossary that clarifies key terms and concepts associated with the ORKG, ensuring that all readers, regardless of their technical background, can fully engage with and understand the content presented. This book transcends the boundaries of a typical technical report. We crafted it as an inspiration for future applications and a testament to the ongoing evolution of scholarly communication, one that invites further collaboration and innovation. Let this book serve as both your guide and your invitation to explore the ORKG as it continues to grow and shape the landscape of scientific inquiry and communication.