
    Populous: A tool for populating ontology templates

We present Populous, a tool for gathering content with which to populate an ontology. Domain experts need to add content that is often repetitive in form, without having to tackle the underlying ontological representation. Populous presents users with a table-based form in which columns are constrained to take values from particular ontologies; the user can select a concept from an ontology via its meaningful label to give a value for a given entity attribute. Populated tables are mapped to patterns that can then be used to automatically generate the ontology's content. Populous's contribution is in the knowledge gathering stage of ontology development. It separates knowledge gathering from the conceptualisation, and also separates the user from the standard ontology authoring environments. As a result, Populous allows knowledge to be gathered in a straightforward manner that can then be used for mass production of ontology content. Comment: In Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott Marshall, Paolo Romano (eds.): Proceedings of the 3rd International Workshop on Semantic Web Applications and Tools for the Life Sciences, Berlin, Germany, December 8-10, 2010
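The table-to-pattern step can be pictured with a short sketch: each populated row is expanded into OWL axioms through a fixed pattern. The Python/rdflib code below is only an illustration of that idea, not Populous's own implementation; the namespace, the part_of property, and the cell/anatomy rows are all hypothetical.

```python
# Minimal sketch of the table-to-pattern idea: each row of a populated
# table is expanded via a fixed pattern, here
# "?term SubClassOf ?parent and part_of some ?whole".
from rdflib import Graph, Namespace, BNode, Literal
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/onto#")  # illustrative namespace
g = Graph()
g.bind("ex", EX)

# Rows gathered from domain experts in the table-based form:
# (new term label, superclass label, anatomical whole it is part of)
rows = [
    ("hepatocyte", "cell", "liver"),
    ("cardiomyocyte", "cell", "heart"),
]

def curie(label: str):
    """Mint an illustrative URI from a label (real tools reuse existing IDs)."""
    return EX[label.replace(" ", "_")]

for term, parent, whole in rows:
    cls = curie(term)
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(term)))
    # Pattern part 1: ?term SubClassOf ?parent
    g.add((cls, RDFS.subClassOf, curie(parent)))
    # Pattern part 2: ?term SubClassOf (part_of some ?whole)
    restriction = BNode()
    g.add((restriction, RDF.type, OWL.Restriction))
    g.add((restriction, OWL.onProperty, EX.part_of))
    g.add((restriction, OWL.someValuesFrom, curie(whole)))
    g.add((cls, RDFS.subClassOf, restriction))

print(g.serialize(format="turtle"))
```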

    User and Developer Interaction with Editable and Readable Ontologies

The process of building ontologies is a difficult task that involves collaboration between ontology developers and domain experts and requires an ongoing interaction between them. This collaboration is made more difficult because they tend to use different tool sets, which can hamper this interaction. In this paper, we propose to decrease this distance between domain experts and ontology developers by creating more readable forms of ontologies, and further to enable editing in normal office environments. Building on a programmatic ontology development environment, such as Tawny-OWL, we are now able to generate these readable/editable forms from the raw ontological source and its embedded comments. We have implemented this translation to HTML for reading; this environment provides rich hyperlinking as well as active features such as hiding the source code in favour of comments. We are now working on a translation to a Word document that also enables editing. Taken together, this should provide a significant new route for collaboration between the ontologist and domain specialist. Comment: 5 pages, 5 figures, accepted at ICBO 2017, License update
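The general idea of rendering a readable view from ontology source and its embedded documentation can be sketched as follows. Note the hedge: Tawny-OWL itself is Clojure-based and works from commented source code, whereas this Python/rdflib sketch derives a hyperlinked HTML page from rdfs:label and rdfs:comment annotations; the input file name is a placeholder.

```python
# Illustrative sketch (not Tawny-OWL's own mechanism): render an
# ontology's classes as a readable, hyperlinked HTML page from their
# rdfs:label and rdfs:comment annotations.
import html
from rdflib import Graph, URIRef
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("ontology.owl", format="xml")  # assumed RDF/XML ontology file

parts = ["<html><body><h1>Ontology documentation</h1>"]
for cls in sorted(set(g.subjects(RDF.type, OWL.Class))):
    if not isinstance(cls, URIRef):
        continue  # skip anonymous class expressions
    label = g.value(cls, RDFS.label) or cls.split("#")[-1]
    comment = g.value(cls, RDFS.comment) or "(no documentation)"
    parts.append(f"<h2>{html.escape(str(label))}</h2>")
    parts.append(f"<p>{html.escape(str(comment))}</p>")
    # Hyperlink named superclasses, mirroring the rich-hyperlinking idea.
    for sup in g.objects(cls, RDFS.subClassOf):
        if isinstance(sup, URIRef):
            sup_label = g.value(sup, RDFS.label) or sup.split("#")[-1]
            parts.append(f'<p>subclass of <a href="{html.escape(str(sup))}">'
                         f"{html.escape(str(sup_label))}</a></p>")
parts.append("</body></html>")
print("\n".join(parts))
```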

    Standard International Trade Classification - From Spreadsheet to OWL-2 Ontology

Trade classifications are a necessary prerequisite for the compilation of trade statistics, and they should – beyond that – be regarded as a valuable base for the definition of shared controlled vocabularies for linked business data that deal with import, export etc. The Standard International Trade Classification (SITC) provided by the UN Statistics Division is a widely used classification mostly applied for scientific and analytical purposes. SITC – as most other trade classifications – is available today only in text or spreadsheet formats. These formats reveal the inner hierarchical structure of SITC to the human reader, because SITC trade codes are built according to the decimal classification scheme, but unfortunately SITC's inner structure is opaque to computer applications in these formats. The paper discusses an approach to set up an OWL-2 ontology for SITC that states subsumption relations between classes of goods. This kind of semantic underpinning of SITC is well suited to ease both the checking and the extension of SITC, and to derive from it a shared controlled vocabulary for linked business data. Some problems of today's SITC (among them missing inner nodes of the trade code hierarchy) are carefully discussed, and the paper motivates several decisions that were taken for the ontology design. Finally, the study introduces the semantic reasoner as a tool for the (at least partial) automatic derivation of structural information for SITC from the trade code building rule. The paper reports on reasoner runtimes observed for different versions of the SITC ontology and for different versions of the Pellet reasoner.
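The "trade code building rule" of a decimal classification can be made concrete: a longer code refines its prefix, so subsumption can be asserted between a code's class and the class of its nearest existing prefix. The sketch below assumes illustrative codes and an example namespace, not the paper's actual URIs or its reasoner-based derivation.

```python
# Minimal sketch of deriving subsumption from the decimal trade code
# building rule: SITC code 0423 specialises 042, which specialises 04.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

SITC = Namespace("http://example.org/sitc#")  # illustrative namespace
g = Graph()
g.bind("sitc", SITC)

codes = {
    "04": "Cereals and cereal preparations",
    "042": "Rice",
    "0423": "Rice, milled, semi-milled",
}

known = set(codes)
for code, label in codes.items():
    cls = SITC[f"C{code}"]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(f"{code} {label}")))
    # Walk up shorter prefixes until an existing code is found; this
    # also tolerates the "missing inner nodes" the paper discusses.
    for length in range(len(code) - 1, 1, -1):
        parent = code[:length]
        if parent in known:
            g.add((cls, RDFS.subClassOf, SITC[f"C{parent}"]))
            break

print(g.serialize(format="turtle"))
```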

    Creation and extension of ontologies for describing communications in the context of organizations

Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science. The use of ontologies is nowadays a sufficiently mature and solid field of work to be considered an efficient alternative for knowledge representation. With the continuing growth of the Semantic Web, this alternative can be expected to become even more prominent in the near future. In the context of a collaboration established between FCT-UNL and the R&D department of a national software company, a new solution entitled ECC – Enterprise Communications Center was developed. This application provides a solution to manage the communications that enter, leave or are made within an organization, and includes intelligent classification of communications and conceptual search techniques over a communications repository. As specificity may be the key to obtaining acceptable results with these processes, the use of ontologies becomes crucial to represent the existing knowledge about the specific domain of an organization. This work allowed us to guarantee a core set of ontologies capable of expressing the general context of the communications made in an organization, together with a methodology, based upon a series of concrete steps, that provides an effective capability for extending the ontologies to any business domain. By applying these steps, the conceptualization and setup effort in new organizations and business domains is minimized. The adequacy of the core set of ontologies chosen and of the methodology specified is demonstrated in this thesis by their effective application to a real case study, which allowed us to work with the different types of sources considered in the methodology and the activities that support its construction and evolution.

    Interoperability and FAIRness through a novel combination of Web technologies

Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories, ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles. The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.
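One way to picture cell-level interoperability in a resource-oriented design is that every data item, down to an individual cell, gets its own URL, and HTTP content negotiation lets a client ask for a machine-readable representation. The sketch below is only a generic illustration of that pattern, not the paper's specific architecture; the URL is hypothetical.

```python
# Generic sketch of resource-oriented access with content negotiation:
# request RDF for a single "cell" resource, falling back gracefully.
import requests

cell_url = "https://repo.example.org/dataset/42/record/7/field/age"  # hypothetical

# Ask for machine-readable RDF (Turtle) first, then JSON-LD.
for mime in ("text/turtle", "application/ld+json"):
    resp = requests.get(cell_url, headers={"Accept": mime}, timeout=10)
    if resp.ok and mime in resp.headers.get("Content-Type", ""):
        print(f"Got {mime} representation:")
        print(resp.text)
        break
else:
    print("No RDF representation offered; falling back to the HTML view.")
```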

    Semantic data mapping technology to solve semantic data problem on heterogeneity aspect

The diversity of applications developed with different programming languages, application/data architectures, database systems and representations of data/information leads to heterogeneity issues. One of the challenges posed by heterogeneity concerns the semantic aspect of data: data that share the same name but have different meanings, or data that have different names but the same meaning. Semantic data mapping is currently the best available solution to this problem, and many semantic data mapping technologies have been used in recent years. This research aims to compare and analyze existing semantic data mapping technologies using five criteria. Based on this comparison and analysis, the research recommends an appropriate semantic data mapping technology. Furthermore, at the end of this research we apply the recommended semantic data mapping technology to real data in a specific application. The result of this research is a semantic data mapping file that contains all the data structures in the application's data source. This semantic data mapping file can be used to map, share and integrate with semantic data mappings from other applications, and can also be integrated with an ontology language.
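To make the notion of a "semantic data mapping file" concrete, the sketch below generates one for a relational source using the W3C R2RML vocabulary, one widely used mapping technology (the abstract does not say which technology the study recommends, so this is an assumption). The table and column names are illustrative.

```python
# Sketch: emit an R2RML mapping file describing how rows of a
# relational table map to RDF resources and properties.
from rdflib import Graph, Namespace, BNode, Literal
from rdflib.namespace import RDF

RR = Namespace("http://www.w3.org/ns/r2rml#")
EX = Namespace("http://example.org/vocab#")  # illustrative vocabulary
g = Graph()
g.bind("rr", RR)
g.bind("ex", EX)

table = "EMPLOYEE"           # illustrative source table
columns = ["ID", "NAME", "DEPT"]

tmap = EX[f"{table}_TriplesMap"]
g.add((tmap, RDF.type, RR.TriplesMap))

logical = BNode()
g.add((tmap, RR.logicalTable, logical))
g.add((logical, RR.tableName, Literal(table)))

subject = BNode()
g.add((tmap, RR.subjectMap, subject))
g.add((subject, RR.template, Literal(f"http://example.org/{table.lower()}/{{ID}}")))
g.add((subject, RR["class"], EX[table.title()]))

# One predicate-object map per column of the source table.
for col in columns:
    pom, omap = BNode(), BNode()
    g.add((tmap, RR.predicateObjectMap, pom))
    g.add((pom, RR.predicate, EX[col.lower()]))
    g.add((pom, RR.objectMap, omap))
    g.add((omap, RR.column, Literal(col)))

print(g.serialize(format="turtle"))
```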

    Semantic Assistance for Data Utilization and Curation

We propose that most data stores for large organizations are ill-designed for the future, due to the limited searchability of the underlying databases. The Semantic Web has been an emerging technology since it was first proposed by Berners-Lee. New vocabularies have emerged, such as the FOAF, Dublin Core, and PROV-O ontologies. Combined, these vocabularies can relate people, places, things, and events. Technologies developed for the Semantic Web, namely the standardized vocabularies for expressing metadata, will make data easier to utilize. We gathered use cases for various data sources, from human resources to big enterprise systems, most of them reflecting real-world data. We developed a software package for transforming data into these semantic vocabularies, and a method of querying via graphical constructs. Development and testing showed the approach to be useful. We conclude that data can be preserved or revived through the use of these Semantic Web metadata techniques.
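A small sketch shows what "transforming data into these semantic vocabularies" can look like for the three vocabularies the abstract names: FOAF for people, Dublin Core for documents, and PROV-O for provenance. The record, URIs, and query below are illustrative, not the paper's software package.

```python
# Sketch: lift a tabular HR record into FOAF / Dublin Core / PROV-O,
# then query it with SPARQL instead of ad hoc SQL.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import FOAF, DCTERMS, RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/hr/")  # illustrative namespace

g = Graph()
g.bind("foaf", FOAF)
g.bind("dcterms", DCTERMS)
g.bind("prov", PROV)

record = {"id": "e1001", "name": "Ada Lovelace", "report": "rpt-7"}

person = EX[record["id"]]
report = EX[record["report"]]
g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal(record["name"])))
g.add((report, RDF.type, PROV.Entity))
g.add((report, DCTERMS.creator, person))
g.add((report, PROV.wasAttributedTo, person))

# Once lifted, the metadata is searchable with a standard query language:
q = """SELECT ?who WHERE {
         ?doc <http://www.w3.org/ns/prov#wasAttributedTo> ?p .
         ?p <http://xmlns.com/foaf/0.1/name> ?who }"""
for row in g.query(q):
    print(row.who)
```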

    Approaches to ontology development by non ontology experts

Untrained users are challenged in the development of ontologies by the formal representation languages that underlie the most common ontology editing tools. To reduce that barrier, much effort has gone into the creation of Controlled Languages (CLs) translatable into ontology structures. However, CLs fall short of addressing a more profound problem: the selection of the most appropriate ontology modelling component for a given modelling problem, regardless of the underlying representation paradigm. With the aim of addressing non-ontology-experts' difficulties in selecting the most appropriate modelling solution, we propose a Natural Language (NL) guided approach based on a repository of Lexico-Syntactic Patterns associated with consensual modelling solutions, i.e., Ontology Design Patterns. By relying on this repository, untrained users can formulate in NL what they want to model in the ontology, and obtain the corresponding design pattern for the modelling issue.
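The matching step can be illustrated with a toy repository: each lexico-syntactic pattern is a surface form that, when matched against the user's sentence, points to an associated Ontology Design Pattern. The two patterns below are invented examples, not the paper's actual repository.

```python
# Toy sketch: match an NL modelling statement against lexico-syntactic
# patterns and return the associated Ontology Design Pattern.
import re

# (pattern regex, suggested ODP) -- illustrative entries only
PATTERN_REPOSITORY = [
    (re.compile(r"^(?P<part>[\w ]+?) is (a )?part of (?P<whole>[\w ]+)$", re.I),
     "PartOf ODP (partOf object property between Part and Whole)"),
    (re.compile(r"^(?P<sub>[\w ]+?) is a (kind|type) of (?P<super>[\w ]+)$", re.I),
     "SubClassOf axiom (taxonomic modelling)"),
]

def suggest_pattern(sentence: str) -> str:
    """Return the ODP suggested by the first matching pattern, with slots."""
    for regex, odp in PATTERN_REPOSITORY:
        match = regex.match(sentence.strip())
        if match:
            slots = ", ".join(f"{k}={v}" for k, v in match.groupdict().items() if v)
            return f"{odp} [{slots}]"
    return "No matching lexico-syntactic pattern found"

print(suggest_pattern("a wheel is part of a car"))
print(suggest_pattern("a laptop is a kind of computer"))
```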