
    Annotations for Rule-Based Models

    The chapter reviews the syntax used to store machine-readable annotations and describes the mapping between rule-based modelling entities (e.g., agents and rules) and these annotations. In particular, we review an annotation framework and the associated guidelines for annotating rule-based models of molecular interactions, encoded in the commonly used Kappa and BioNetGen languages, and present prototypes that can be used to extract and query the annotations. An ontology is used to annotate models and facilitate their description.
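    As a rough illustration of what such extraction and query prototypes do, the sketch below pulls annotation triples out of structured comments in a Kappa-like model and looks up the terms attached to one rule. The '#^ subject predicate object' comment syntax, the identifiers, and the toy model are hypothetical choices made for this example, not the framework's actual conventions.

    import re
    from collections import defaultdict

    # Toy Kappa-like model with annotations embedded as structured comments.
    # The '#^ subject predicate object' syntax and identifiers are hypothetical.
    MODEL = """
    #^ agent:A ex:isVersionOf uniprot:P00533
    #^ rule:bind ex:describes go:GO_0005488
    %agent: A(x)
    %agent: B(y)
    'bind' A(x), B(y) -> A(x!1), B(y!1) @ 0.001
    """

    ANNOTATION = re.compile(r"^#\^\s+(\S+)\s+(\S+)\s+(\S+)\s*$")

    def extract_annotations(model_text):
        """Return {subject: [(predicate, object), ...]} for all annotation comments."""
        triples = defaultdict(list)
        for line in model_text.splitlines():
            match = ANNOTATION.match(line.strip())
            if match:
                subject, predicate, obj = match.groups()
                triples[subject].append((predicate, obj))
        return triples

    annotations = extract_annotations(MODEL)
    # Query: which ontology terms describe the 'bind' rule?
    for predicate, obj in annotations.get("rule:bind", []):
        print(predicate, obj)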

    The Computer Science Ontology: A Large-Scale Taxonomy of Research Areas

    Ontologies of research areas are important tools for characterising, exploring, and analysing the research landscape. Some fields of research are comprehensively described by large-scale taxonomies, e.g., MeSH in Biology and PhySH in Physics. Conversely, current Computer Science taxonomies are coarse-grained and tend to evolve slowly. For instance, the ACM classification scheme contains only about 2K research topics and the last version dates back to 2012. In this paper, we introduce the Computer Science Ontology (CSO), a large-scale, automatically generated ontology of research areas, which includes about 26K topics and 226K semantic relationships. It was created by applying the Klink-2 algorithm on a very large dataset of 16M scientific articles. CSO presents two main advantages over the alternatives: i) it includes a very large number of topics that do not appear in other classifications, and ii) it can be updated automatically by running Klink-2 on recent corpora of publications. CSO powers several tools adopted by the editorial team at Springer Nature and has been used to enable a variety of solutions, such as classifying research publications, detecting research communities, and predicting research trends. To facilitate the uptake of CSO we have developed the CSO Portal, a web application that enables users to download, explore, and provide granular feedback on CSO at different levels. Users can use the portal to rate topics and relationships, suggest missing relationships, and visualise sections of the ontology. The portal will support the publication of and access to regular new releases of CSO, with the aim of providing a comprehensive resource to the various communities engaged with scholarly data.
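    For a sense of how such a taxonomy is consumed programmatically, the sketch below walks a toy super-topic relation to collect every sub-area of a given topic. The topic names and the dictionary encoding are illustrative stand-ins, not the real CSO data or its access API.

    # Toy subset of a research-area taxonomy; the relation mirrors the kind of
    # super-topic structure CSO exposes, but the data here is illustrative.
    SUPER_TOPIC_OF = {
        "computer science": ["artificial intelligence", "computer networks"],
        "artificial intelligence": ["machine learning", "knowledge representation"],
        "machine learning": ["deep learning", "support vector machines"],
    }

    def descendants(topic, taxonomy):
        """Return every topic reachable below `topic` (transitive closure)."""
        found = set()
        stack = [topic]
        while stack:
            current = stack.pop()
            for child in taxonomy.get(current, []):
                if child not in found:
                    found.add(child)
                    stack.append(child)
        return found

    print(sorted(descendants("artificial intelligence", SUPER_TOPIC_OF)))
    # -> ['deep learning', 'knowledge representation', 'machine learning',
    #     'support vector machines']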

    Ontology and medical terminology: Why description logics are not enough

    Ontology is currently perceived as the solution of first resort for all problems related to biomedical terminology, and the use of description logics is seen as a minimal requirement on adequate ontology-based systems. Contrary to common conceptions, however, description logics alone are not able to prevent incorrect representations; this is because they do not come with a theory indicating what is computed by using them, just as classical arithmetic does not tell us anything about the entities that are added or subtracted. In this paper we shall show that ontology is indeed an essential part of any solution to the problems of medical terminology – but only if it is understood in the right sort of way. Ontological engineering, we shall argue, should in every case go hand in hand with a sound ontological theory.
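    The core of the argument can be made concrete with a small sketch: a purely formal consistency check accepts an ontologically wrong hierarchy as readily as a correct one. The toy TBox encoding and the check below are illustrative only and stand in for no real description-logic reasoner.

    # Toy consistency check: it rejects only hierarchies in which some class
    # falls under two explicitly disjoint superclasses.
    def is_consistent(axioms, disjoint_pairs):
        """axioms: list of (subclass, superclass); disjoint_pairs: list of pairs."""
        supers = {}
        for sub, sup in axioms:
            supers.setdefault(sub, set()).add(sup)

        def ancestors(cls, seen=None):
            seen = set() if seen is None else seen
            for parent in supers.get(cls, set()):
                if parent not in seen:
                    seen.add(parent)
                    ancestors(parent, seen)
            return seen

        for cls in supers:
            ups = ancestors(cls)
            if any(a in ups and b in ups for a, b in disjoint_pairs):
                return False
        return True

    DISJOINT = [("BodyPart", "Process")]

    # Ontologically wrong (a fracture is not a body part), yet formally
    # consistent: the logic alone cannot flag the modelling error.
    wrong_tbox = [("Fracture", "BodyPart"), ("Femur", "BodyPart")]
    print(is_consistent(wrong_tbox, DISJOINT))          # True

    # Only an explicit formal contradiction is rejected.
    contradictory_tbox = [("Fracture", "BodyPart"), ("Fracture", "Process")]
    print(is_consistent(contradictory_tbox, DISJOINT))  # False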

    Combining link and content-based information in a Bayesian inference model for entity search

    An architectural model of a Bayesian inference network to support entity search in semantic knowledge bases is presented. The model supports the explicit combination of primitive data-type and object-level semantics under a single computational framework. A flexible query model is also supported, capable of reasoning with whatever simple semantics are available in queries.
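    A toy sketch of the general idea of combining link-based and content-based evidence probabilistically is given below. It uses a naive log-odds combination under a conditional-independence assumption, with made-up entities and scores; it is not the paper's actual inference network.

    import math

    PRIOR = 0.01  # assumed prior probability that an arbitrary entity is relevant

    def posterior(content_likelihood_ratio, link_likelihood_ratio, prior=PRIOR):
        """P(relevant | content, link), treating the two evidence sources as
        conditionally independent given relevance (log-odds addition)."""
        log_odds = (math.log(prior / (1 - prior))
                    + math.log(content_likelihood_ratio)
                    + math.log(link_likelihood_ratio))
        return 1 / (1 + math.exp(-log_odds))

    entities = {
        # entity: (content evidence ratio, link evidence ratio) -- illustrative
        "entity_a": (50.0, 20.0),   # strong text match, well linked
        "entity_b": (50.0, 1.0),    # strong text match, no link support
        "entity_c": (2.0, 2.0),     # weak on both
    }

    ranked = sorted(entities, key=lambda e: posterior(*entities[e]), reverse=True)
    for entity in ranked:
        print(entity, round(posterior(*entities[entity]), 4))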

    A Review on Information Accessing Systems Based on Fuzzy Linguistic Modelling

    This paper presents a survey of several fuzzy linguistic information access systems. The review covers information retrieval systems, filtering systems, recommender systems, and web quality evaluation tools that are based on fuzzy linguistic modelling. Fuzzy linguistic modelling allows us to represent and manage the subjectivity, vagueness, and imprecision that are intrinsic to information search processes, and in this way the resulting systems give users access to quality information in a flexible, user-adapted way.
    Funding: European Union (EU) TIN2007-61079, PET2007-0460; Ministry of Public Works 90/07; Andalusian Excellence Project TIC529.
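    The building block shared by such systems is a linguistic term set with fuzzy membership functions. The sketch below maps a numeric quality score to linguistic labels using triangular membership functions; the five labels and their parameters are illustrative choices rather than those of any surveyed system.

    TERMS = {
        # label: (left, peak, right) of a triangular membership function on [0, 1]
        "very_low":  (0.00, 0.00, 0.25),
        "low":       (0.00, 0.25, 0.50),
        "medium":    (0.25, 0.50, 0.75),
        "high":      (0.50, 0.75, 1.00),
        "very_high": (0.75, 1.00, 1.00),
    }

    def membership(x, left, peak, right):
        """Degree to which x belongs to the triangular fuzzy set (left, peak, right)."""
        if x <= left or x >= right:
            return 1.0 if x == peak else 0.0
        if x <= peak:
            return (x - left) / (peak - left) if peak != left else 1.0
        return (right - x) / (right - peak) if right != peak else 1.0

    def linguistic_label(score):
        """Map a numeric score in [0, 1] to the best-matching linguistic term."""
        degrees = {label: membership(score, *params) for label, params in TERMS.items()}
        return max(degrees, key=degrees.get), degrees

    label, degrees = linguistic_label(0.62)
    print(label, {k: round(v, 2) for k, v in degrees.items() if v > 0})
    # -> medium {'medium': 0.52, 'high': 0.48}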

    Sensemaking on the Pragmatic Web: A Hypermedia Discourse Perspective

    The complexity of the dilemmas we face on an organizational, societal and global scale forces us into sensemaking activity. We need tools for expressing and contesting perspectives that are flexible enough for real-time use in meetings, structured enough to help manage longer-term memory, and powerful enough to filter the complexity of extended deliberation and debate on an organizational or global scale. This has been the motivation for a programme of basic and applied action research into Hypermedia Discourse, which draws on research in hypertext, information visualization, argumentation, modelling, and meeting facilitation. This paper proposes that this strand of work shares a key principle behind the Pragmatic Web concept, namely, the need to take seriously diverse perspectives and the processes of meaning negotiation. Moreover, it is argued that the hypermedia discourse tools described instantiate this principle in practical tools which permit end-user control over modelling approaches in the absence of consensus.

    Mental distress detection and triage in forum posts: the LT3 CLPsych 2016 shared task system

    This paper describes the contribution of LT3 to the CLPsych 2016 Shared Task on automatic triage of mental health forum posts. Our systems use multiclass Support Vector Machines (SVMs), cascaded binary SVMs, and ensembles with a rich feature set. The best systems obtain macro-averaged F-scores of 40% on the full task and 80% on the green versus alarming distinction. Multiclass SVMs with all features score best in terms of F-score, whereas feature filtering with bi-normal separation and classifier ensembling are found to improve the recall of alarming posts.
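    For readers unfamiliar with the setup, the sketch below shows a minimal multiclass linear SVM over TF-IDF features in the spirit of the task. The toy posts, labels, and pipeline are illustrative only and omit the rich feature set the LT3 systems actually used.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import f1_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Tiny made-up training set using the task's four-way label scheme.
    train_posts = [
        "had a lovely weekend hiking with friends",
        "feeling a bit stressed about exams lately",
        "i can't cope anymore, everything is falling apart",
        "thinking about ending it all tonight",
    ]
    train_labels = ["green", "amber", "red", "crisis"]

    test_posts = ["really anxious about tomorrow", "great day at the beach"]
    test_labels = ["amber", "green"]

    # Multiclass linear SVM over unigram and bigram TF-IDF features.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(train_posts, train_labels)

    predictions = model.predict(test_posts)
    print(list(predictions))
    print("macro-F1:", f1_score(test_labels, predictions, average="macro"))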