
    The integration of OntoClean in WebODE

    Enterprises will only be interested in using ontologies if those ontologies have been adequately evaluated. The development of ontology evaluation tools is therefore a crucial matter. We have built the ODEClean module within WebODE, a workbench for building ontologies. ODEClean supports cleaning taxonomies following the OntoClean method, while WebODE provides technical support for the Methontology ontology-building methodology. We approached the development of this module in two steps. First, we integrated the OntoClean method into the conceptualisation activity of Methontology. Second, we designed and implemented ODEClean using a declarative approach for specifying the knowledge used in the evaluation. ODEClean uses: (a) the Top Level of Universals, (b) metaproperties based on philosophical notions, and (c) OntoClean evaluation axioms. The main advantage of this approach is that the system allows the user to relax or tighten the evaluation of the taxonomy simply by selecting more or fewer metaproperties.
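
    The OntoClean evaluation axioms lend themselves to this kind of declarative checking. As a rough illustration only, the Python sketch below tests the best-known axiom, that an anti-rigid class must not subsume a rigid one; the data structures and names are assumptions for this sketch, not the actual ODEClean implementation.

        # Minimal sketch of an OntoClean-style rigidity check; illustrative only,
        # not ODEClean's code. Other metaproperties (identity, unity, dependence)
        # could be added as further, individually selectable checks.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Concept:
            name: str
            rigid: Optional[bool] = None          # True = rigid (+R), False = anti-rigid (~R)
            parents: List["Concept"] = field(default_factory=list)

        def check_rigidity(concepts):
            """Return subsumption links where an anti-rigid class subsumes a rigid one,
            which violates one of the OntoClean evaluation axioms."""
            violations = []
            for concept in concepts:
                if concept.rigid is not True:
                    continue
                for parent in concept.parents:
                    if parent.rigid is False:
                        violations.append((concept.name, parent.name))
            return violations

        # Classic example: Person is rigid, Student is anti-rigid, so a rigid
        # class placed under Student is flagged.
        person = Concept("Person", rigid=True)
        student = Concept("Student", rigid=False, parents=[person])   # acceptable
        human = Concept("Human", rigid=True, parents=[student])       # violation
        print(check_rigidity([person, student, human]))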

    Facets, Tiers and Gems: Ontology Patterns for Hypernormalisation

    There are many methodologies and techniques for easing the task of ontology building. Here we describe the intersection of two of these: ontology normalisation and fully programmatic ontology development. The first describes a standardized organisation for an ontology, with singly inherited self-standing entities and a number of small taxonomies of refining entities. The former are described and defined in terms of the latter and used to manage the polyhierarchy of the self-standing entities. Fully programmatic development is a technique in which an ontology is developed using a domain-specific language embedded in a programming language, meaning that, as well as defining ontological entities, it is possible to add arbitrary patterns or new syntax within the same environment. We describe how new patterns can be used to enable a new style of ontology development that we call hypernormalisation.
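
    As a rough, hypothetical illustration of the normalisation pattern this builds on (the authors use a domain-specific language embedded in a programming language; the plain-Python version and all class names below are assumptions), primitive self-standing classes stay in a single-inheritance tree, while defined classes are generated programmatically from small refining taxonomies and left to a reasoner to classify into a polyhierarchy.

        # Illustrative sketch of normalisation driven from code; the names and
        # the Manchester-syntax-like output are assumptions, not the paper's tooling.

        # Self-standing entities: kept in a single-inheritance tree.
        primitive_tree = {
            "AminoAcid": "ChemicalEntity",
            "ChemicalEntity": "owl:Thing",
        }

        # Small refining taxonomies ("facets") used to describe the primitives.
        facets = {
            "Charge": ["Positive", "Negative", "Neutral"],
            "Size": ["Tiny", "Small", "Large"],
        }

        def defined_class(name, parent, **facet_values):
            """Emit an equivalence axiom for a defined class; a reasoner would
            use such axioms to build the polyhierarchy."""
            restrictions = " and ".join(
                f"(has{facet} some {value})" for facet, value in facet_values.items()
            )
            return f"Class: {name}\n    EquivalentTo: {parent} and {restrictions}"

        # A "pattern": generate one defined class per facet value, systematically
        # rather than by hand, in the spirit of hypernormalisation.
        for charge in facets["Charge"]:
            print(defined_class(f"{charge}AminoAcid", "AminoAcid", Charge=charge))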

    Ontology construction from online ontologies

    One of the main hurdles towards wide adoption of ontologies is the high cost of constructing them. Reuse of existing ontologies offers a much cheaper alternative than building new ones from scratch, yet tools to support such reuse are still in their infancy. However, more ontologies are becoming available on the web, and online libraries for storing and indexing ontologies are increasing in number and demand. Search engines have also started to appear, to facilitate search and retrieval of online ontologies. This paper presents a fresh view on constructing ontologies automatically, by identifying, ranking, and merging fragments of online ontologies.
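
    A hedged sketch of the kind of ranking step involved (the scoring scheme below is an assumption for illustration, not the paper's actual metric): candidate fragments are scored by how many of the requested terms they cover, and the best-ranked ones would then be merged.

        # Hypothetical sketch of ranking ontology fragments by term coverage.
        def rank_fragments(target_terms, fragments):
            """fragments: mapping from fragment id to the set of class labels it defines.
            Returns fragments ordered by the share of target terms they cover."""
            target = {t.lower() for t in target_terms}
            scored = []
            for frag_id, labels in fragments.items():
                covered = target & {label.lower() for label in labels}
                scored.append((len(covered) / len(target), frag_id, sorted(covered)))
            return sorted(scored, reverse=True)

        fragments = {
            "onto-A#students": {"Student", "University", "Course"},
            "onto-B#people": {"Person", "Student", "Employee"},
        }
        for score, frag_id, covered in rank_fragments(["student", "course"], fragments):
            print(f"{frag_id}: {score:.2f} covers {covered}")

        # Merging (not shown) would then align equivalent classes across the
        # top-ranked fragments before assembling the new ontology.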

    Engineering the scientific corpus: routine semantic work in (re)constructing a biological ontology

    In the face of the burgeoning interest in ‘ontology’ in science studies, Michael Lynch (2008) called for a move toward ‘ontography’: talking about ontologies by way of studies in which ontologies (or at least, an ontology) are of demonstrable relevance to the doings of those being studied. This paper provides an ontography, or some part of one, in that it reports on work in ontology development being done by a group of researchers in bioinformatics, drawing its examples largely from a workshop in which some members of that group were participants and which was organised by a research network to which they belonged. Methodologies for building ‘good’ ontologies were part of the interests of this wider research group and were a motivation for the work undertaken. What is evident from our study is that the methods to be applied, the avenues to be explored and even the fundamental purposes were all in the event ‘up for grabs’, and formed a closely interlinked and mutually explicating part of the ‘logic in practice’ deployed. We show how this research work was undertaken with reference to an existing body of knowledge while nonetheless requiring distinctive courses of ‘discovering work’ concerning both method and substantive content, and how its results were examined and re-examined in the light of ongoing, evolving and unanticipated considerations. Describing how the involved participants go about their work is, then, ‘an ontography’ in precisely the sense that Lynch proposes.

    Towards a clinical trial ontology using a concern-oriented approach

    To reduce costs and improve the quality of research in clinical trials (CTs), a more systematic approach to CT automation is needed in order to strengthen interoperability at the various levels of the research process. For this purpose, a conceptual model of CTs has been developed. At the basis of any modelling approach are partitioning criteria that allow us to master the complexity of the universe to be modelled. In this report we introduce an original analysis method, based on stakeholder concerns, for partitioning the conceptual domain of CTs into stakeholder-oriented subdomains. The stakeholders' mental representations relating to each concern are identified as clusters of concepts linked to other concepts. We regard each cluster as a rationale base for the corresponding concern. The concepts found in the rationale bases populate the universe of discourse specific to each stakeholder and make up the stakeholder vocabulary. Some concepts are shared with other stakeholders, while others are specific to a single stakeholder; some concepts are specific to CTs, while others are medical or general concepts. In this way a concern-oriented ontology for CTs can be created. The method is illustrated using the subject selection criteria, one component of a CT design, but it can be applied to any other component of the CT protocol. The taxonomy of the CT concept vocabulary and the network of the related rationale bases provide a possible structure for software development, especially if a service-oriented architecture solution is adopted.
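
    As a purely illustrative sketch of the partitioning idea (stakeholders, concerns, and concepts below are invented examples, not content from the actual CT ontology), each rationale base is a cluster of concepts attached to a stakeholder concern; unions and intersections over these clusters separate stakeholder-specific vocabulary from concepts shared across stakeholders.

        # Hypothetical sketch of concern-oriented partitioning of a clinical-trial
        # vocabulary; all names are invented examples.
        from itertools import combinations

        # Rationale bases: for each (stakeholder, concern) pair, the cluster of
        # concepts that stakeholder relies on for that concern.
        rationale_bases = {
            ("Investigator", "subject selection"): {"EligibilityCriterion", "Diagnosis", "Consent"},
            ("Sponsor", "subject selection"): {"EligibilityCriterion", "EnrollmentTarget"},
            ("Regulator", "subject selection"): {"Consent", "EligibilityCriterion"},
        }

        # Each stakeholder's vocabulary is the union of its rationale bases.
        vocab = {}
        for (stakeholder, _concern), concepts in rationale_bases.items():
            vocab.setdefault(stakeholder, set()).update(concepts)

        # Shared concepts suggest candidates for the common part of the ontology;
        # the rest stays in stakeholder-specific subdomains.
        shared = set.intersection(*vocab.values())
        for a, b in combinations(sorted(vocab), 2):
            print(f"{a} / {b} share: {sorted(vocab[a] & vocab[b])}")
        print("Shared by all stakeholders:", sorted(shared))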

    A core ontological model for semantic sensor web infrastructures

    Semantic Sensor Web infrastructures use ontology-based models to represent the data that they manage; however, up to now, these ontological models have not allowed the representation of all the characteristics of distributed, heterogeneous, and web-accessible sensor data. This paper describes a core ontological model for Semantic Sensor Web infrastructures that covers these characteristics and that has been built with a focus on reusability. This ontological model is composed of different modules that deal, on the one hand, with infrastructure data and, on the other hand, with data from a specific domain, namely coastal flood emergency planning. The paper also presents a set of guidelines, followed during the ontological model development, to satisfy a common set of requirements related to modelling domain-specific features of interest and properties. In addition, the paper includes the results obtained after an exhaustive evaluation of the developed ontologies along different aspects (i.e., vocabulary, syntax, structure, semantics, representation, and context).
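
    As a minimal sketch (using rdflib) of the kind of statement such a model needs to cover, the snippet below links an observation to a domain-specific observed property and feature of interest; the namespaces and term names are placeholders for illustration, not the actual modules of the ontological model described in the paper.

        # Placeholder vocabulary: an infrastructure module and a flood-domain module.
        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import RDF, XSD

        SSW = Namespace("http://example.org/ssw#")        # placeholder infrastructure module
        FLOOD = Namespace("http://example.org/flood#")    # placeholder domain module

        g = Graph()
        g.bind("ssw", SSW)
        g.bind("flood", FLOOD)

        obs = SSW.observation_001
        g.add((obs, RDF.type, SSW.Observation))
        g.add((obs, SSW.observedProperty, FLOOD.WaterLevel))        # domain-specific property
        g.add((obs, SSW.featureOfInterest, FLOOD.RiverSegment_12))  # domain-specific feature of interest
        g.add((obs, SSW.hasResult, Literal(1.8, datatype=XSD.float)))

        print(g.serialize(format="turtle"))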

    Towards a core ontology for information integration

    In this paper, we argue that a core ontology is one of the key building blocks necessary to enable the scalable assimilation of information from diverse sources. A complete and extensible ontology that expresses the basic concepts common across a variety of domains, and that can provide the basis for specialization into domain-specific concepts and vocabularies, is essential for well-defined mappings between domain-specific knowledge representations (i.e., metadata vocabularies) and for the subsequent building of a variety of services such as cross-domain searching, browsing, data mining and knowledge extraction. This paper describes the results of a series of three workshops held in 2001 and 2002 which brought together representatives from the cultural heritage and digital library communities with the goal of harmonizing their knowledge perspectives and producing a core ontology. The knowledge perspectives of these two communities were represented by the CIDOC/CRM [31], an ontology for information exchange in the cultural heritage and museum community, and the ABC ontology [33], a model for the exchange and integration of digital library information. This paper describes the mediation process between these two different knowledge perspectives and the result of this mediation: the harmonization of the ABC and CIDOC/CRM ontologies, which we believe may provide a useful basis for information integration across the wider scope of the communities involved.
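
    To illustrate what such a harmonized core buys in practice, here is a hypothetical sketch (the term names are only loosely indicative of the two vocabularies, not a faithful excerpt of either): once both vocabularies are mapped to shared core concepts, cross-domain equivalents can be looked up mechanically, which is the basis for services such as cross-domain searching.

        # Hypothetical mediation of two metadata vocabularies through a core ontology;
        # terms below are illustrative, not actual ABC or CIDOC/CRM excerpts.
        core_mappings = {
            # (vocabulary, term)        -> shared core concept
            ("ABC", "Manifestation"):      "core:InformationObject",
            ("CIDOC", "InformationObject"): "core:InformationObject",
            ("ABC", "Agent"):              "core:Actor",
            ("CIDOC", "Actor"):            "core:Actor",
        }

        def cross_domain_equivalents(vocabulary, term):
            """Find terms in other vocabularies mapped to the same core concept."""
            core = core_mappings.get((vocabulary, term))
            return [
                (v, t) for (v, t), c in core_mappings.items()
                if c == core and v != vocabulary
            ]

        print(cross_domain_equivalents("ABC", "Manifestation"))
        # -> [('CIDOC', 'InformationObject')]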