Scientific Knowledge Object Patterns
Web technology is revolutionizing the way diverse scientific knowledge is produced and disseminated. In the past few years, a handful of discourse representation models have been proposed for the externalization of the rhetoric and argumentation captured within scientific publications. However, no unified, interoperable pattern has yet come into common use among publishers and individual users. In this paper, we introduce the Scientific Knowledge Object Patterns (SKO Patterns) as a step towards a general scientific discourse representation model, especially for managing knowledge on the emerging social and semantic web. © ACM, 2011. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version is to be published in "Proceedings of the 15th European Conference on Pattern Languages of Programs" (2011), http://portal.acm.org/event.cfm?id=RE197&CFID=8795862&CFTOKEN=1476113
A short survey of discourse representation models
With the advancement of technology and the wide adoption of ontologies as knowledge representation formats, a handful of models have been proposed in the last decade for the externalization of the rhetoric and argumentation captured within scientific publications. Conceptually, most of these models share a similar representation of the scientific publication, i.e. as a series of interconnected elementary knowledge items. The main differences lie in the terminology used, the types of rhetorical and/or argumentation relations connecting the knowledge items, and the foundational theories supporting these relations. This paper analyzes the state of the art and provides a concise comparative overview of the five most prominent discourse representation models, with the goal of sketching a unified model for discourse representation.
Enabling Preserving Bisimulation Equivalence
Most fairness assumptions used for verifying liveness properties are criticised for being too strong or unrealistic. On the other hand, justness, arguably the minimal fairness assumption required for the verification of liveness properties, is not preserved by classical semantic equivalences, such as strong bisimilarity. To overcome this deficiency, we introduce a finer alternative to strong bisimilarity, called enabling preserving bisimilarity. We prove that this equivalence is justness-preserving and a congruence for all standard operators, including parallel composition.
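For orientation, classical strong bisimilarity, the equivalence the paper refines, can be computed on a finite labelled transition system as a greatest fixed point. This is only a naive sketch of the *classical* notion; the enabling preserving variant the paper introduces additionally tracks which enabled actions stay enabled, and is not captured by this check:

```python
def strong_bisimilarity(states, transitions):
    """Greatest-fixed-point computation of strong bisimilarity on a finite LTS.

    `transitions` maps (state, action) -> set of successor states.
    Returns the set of bisimilar state pairs.
    """
    actions = {a for (_, a) in transitions}

    def succ(s, a):
        return transitions.get((s, a), set())

    # Start from the full relation and delete pairs violating the
    # transfer condition until a fixed point is reached.
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok = all(
                all(any((p2, q2) in rel for q2 in succ(q, a)) for p2 in succ(p, a))
                and
                all(any((p2, q2) in rel for p2 in succ(p, a)) for q2 in succ(q, a))
                for a in actions
            )
            if not ok:
                rel.discard((p, q))
                changed = True
    return rel
```

A finer equivalence such as enabling preserving bisimilarity would start from a smaller initial relation and impose extra conditions on matching transitions, so it relates fewer states than the relation computed here.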
A Metadata-Enabled Scientific Discourse Platform
Scientific papers and scientific conferences are still, despite the emergence of several new dissemination technologies, the de facto standard through which scientific knowledge is consumed and discussed. While there is no shortage of services and platforms that aid this process (e.g. scholarly search engines, websites, blogs, conference management programs), a widely accepted platform for capturing and enriching the interactions of a research community has yet to appear. As such, we aim to create new ways for members of, and people interested in, research communities to interact before, during, and after their conferences. Furthermore, as a basis for these interactions, we want not only to obtain, format, and manage a body of legacy and new papers related to the community, but also to aggregate useful information and services into the environment of a discourse platform.
Same but Different: Distant Supervision for Predicting and Understanding Entity Linking Difficulty
Entity Linking (EL) is the task of automatically identifying entity mentions in a piece of text and resolving them to a corresponding entity in a reference knowledge base like Wikipedia. There is a large number of EL tools available for different types of documents and domains, yet EL remains a challenging task where the lack of precision on particularly ambiguous mentions often spoils the usefulness of automated disambiguation results in real applications. A priori approximations of the difficulty of linking a particular entity mention can facilitate flagging of critical cases as part of semi-automated EL systems, while detecting latent factors that affect EL performance, like corpus-specific features, can provide insights on how to improve a system based on the special characteristics of the underlying corpus. In this paper, we first introduce a consensus-based method to generate difficulty labels for entity mentions on arbitrary corpora. The difficulty labels are then exploited as training data for a supervised classification task able to predict the EL difficulty of entity mentions using a variety of features. Experiments over a corpus of news articles show that EL difficulty can be estimated with high accuracy, revealing also latent features that affect EL performance. Finally, evaluation results demonstrate the effectiveness of the proposed method to inform semi-automated EL pipelines.
Comment: Preprint of paper accepted for publication in the 34th ACM/SIGAPP Symposium On Applied Computing (SAC 2019)
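The consensus-based labelling step can be illustrated with a minimal sketch: run several EL tools on the same mention and call the mention hard when they disagree. The tool names, the binary easy/hard scheme, and the agreement threshold below are illustrative assumptions, not the paper's actual procedure:

```python
from collections import Counter


def consensus_difficulty(mention_annotations, agreement_threshold=0.75):
    """Derive a difficulty label for one entity mention from several EL tools.

    `mention_annotations` maps tool name -> the entity the tool linked the
    mention to, or None if the tool produced no link. A mention is "easy"
    when a sufficiently large fraction of tools agree on one entity.
    """
    links = [e for e in mention_annotations.values() if e is not None]
    if not links:
        return "hard"  # no tool could resolve the mention at all
    top_entity, top_count = Counter(links).most_common(1)[0]
    agreement = top_count / len(mention_annotations)
    return "easy" if agreement >= agreement_threshold else "hard"
```

Labels produced this way over a whole corpus could then serve as training data for the supervised difficulty classifier the abstract describes.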
Knowledge and Artifact Representation in the Scientific Lifecycle
This thesis introduces SKOs (Scientific Knowledge Objects), a specification for capturing the knowledge and artifacts produced by scientific research processes. Aiming to address current limitations of scientific production, the specification focuses on reducing the overhead of scientific creation, on being composable and reusable, on allowing continuous evolution, and on facilitating collaboration and discovery among researchers. To do so, it introduces four layers that capture different aspects of scientific knowledge: content, meaning, ordering, and visualization.
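The four-layer split and the composability goal can be sketched as a small data type. The field contents and the merge strategy in `compose` are illustrative assumptions; only the four layer names come from the abstract:

```python
from dataclasses import dataclass, field


@dataclass
class SKO:
    """A Scientific Knowledge Object split into the thesis's four layers."""
    content: dict = field(default_factory=dict)        # raw artifacts: text, data, files
    meaning: dict = field(default_factory=dict)        # semantic annotations over content
    ordering: list = field(default_factory=list)       # presentation/reading order
    visualization: dict = field(default_factory=dict)  # rendering hints per item

    def compose(self, other: "SKO") -> "SKO":
        """Composability sketch: merge another SKO's layers with this one's.

        Dict layers are merged key-wise (the other SKO wins on conflicts);
        the ordering layers are concatenated.
        """
        return SKO(
            content={**self.content, **other.content},
            meaning={**self.meaning, **other.meaning},
            ordering=self.ordering + other.ordering,
            visualization={**self.visualization, **other.visualization},
        )
```

Keeping the layers separate is what lets one SKO reuse another's content while supplying its own ordering or visualization, which is the reuse-and-evolution property the abstract emphasizes.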
Recommended from our members
Proceedings ICPW'07: 2nd International Conference on the Pragmatic Web, 22-23 Oct. 2007, Tilburg: NL
Automatic Generation of Test Cases Using Document Analysis Techniques
In software maintenance, software testing consumes 55% of the total maintenance work. The problem is how to reduce the testing work while still ensuring high-quality software. Some solutions involve software execution automation tools or outsourcing the testing tasks at lower labor rates, but such solutions still depend upon individual skills in the generation of test cases. In contrast, we focused on the generation of the test cases themselves rather than on those skills, and developed a method for the automatic generation of test cases using our natural language document analysis techniques, which use text parsers to extract and complement parameter values from documents. We applied the method to Internet banking system maintenance projects and insurance system maintenance projects. In this paper, we discuss our method and techniques for the automatic generation of test cases and their use in these industry case studies. Our document analysis tool helped automatically generate 95% of the required test cases from the design documents, and the work of creating test cases was reduced by 48% in our case studies.
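The extract-then-combine idea can be illustrated with a toy stand-in for the paper's document analysis: pull parameter names and candidate values out of design-document text with a simple pattern, then enumerate test cases over their combinations. The line format matched here is an assumption for illustration; the actual method uses full text parsers and value complementation:

```python
import itertools
import re


def extract_parameters(design_text):
    """Pull parameter names and candidate values from design-document text.

    Only matches lines shaped like "name: v1 | v2 | v3" (an assumed toy
    format); lines with a single value are skipped.
    """
    params = {}
    for line in design_text.splitlines():
        m = re.match(r"\s*(\w+)\s*:\s*(.+)", line)
        if m:
            name = m.group(1)
            values = [v.strip() for v in m.group(2).split("|")]
            if len(values) > 1:
                params[name] = values
    return params


def generate_test_cases(params):
    """Enumerate one test case per combination of parameter values."""
    names = sorted(params)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(params[n] for n in names))]
```

For example, a document fragment listing two account types and three currencies yields six generated test cases, one per combination; a real tool would additionally prune or prioritise combinations.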