7 research outputs found

    Managing educational information on university websites : a proposal for Unibo.it

    Full text link
    This article focuses on the complexity of finding and analyzing the totality of educational information shared by the University of Bologna on its website over the last twenty years. It specifically emphasizes some issues related to the use of the Wayback Machine, the most important international web archive, and the need for a different research tool that would guarantee more solid analyses of the corpus. This tool could initially be characterized by the use of standard Natural Language Processing techniques (such as tokenization, stop-word removal, parsing, etc.), but more complex solutions should also be taken into consideration, such as text-mining analyses, WordNet integration and an ontological representation of knowledge. Thanks to approaches like the one presented here, future historians will be able to efficiently study the evolution of a university website.
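    The standard preprocessing steps the abstract names (tokenization, stop-word removal) can be sketched in a few lines. This is a minimal illustration, not the authors' tool; a real pipeline would use a full language-specific stop-word list (e.g. from NLTK or spaCy) rather than the tiny set assumed here.

    ```python
    import re

    # Tiny illustrative stop-word set; an assumption for this sketch only.
    STOP_WORDS = {"the", "a", "an", "of", "on", "and", "in", "is", "to"}

    def preprocess(text: str) -> list[str]:
        """Tokenize a page's text and drop stop words."""
        tokens = re.findall(r"[a-z]+", text.lower())        # tokenization
        return [t for t in tokens if t not in STOP_WORDS]   # stop-word removal

    print(preprocess("The evolution of a university website"))
    # → ['evolution', 'university', 'website']
    ```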

    Evaluating QualiCO. An ontology to facilitate qualitative methods sharing to support open science

    Get PDF
    Qualitative science methods have largely been omitted from discussions of open science. Platforms focused on qualitative science that support open science data and method sharing are rare. Sharing and exchanging coding schemas has great potential for supporting traceability in qualitative research as well as for facilitating the reuse of coding schemas. In this study, we present and evaluate QualiCO, an ontology to describe qualitative coding schemas. Twenty qualitative researchers used QualiCO to complete two coding tasks. In our findings, we present task performance and interview data that focus participants' attention on the ontology. Participants used QualiCO to complete the coding tasks, decreasing time on task while improving accuracy, signifying that QualiCO enabled the reuse of qualitative coding schemas. Our discussion elaborates some issues that participants had and highlights how conceptual and prior practice frames their interpretation of how QualiCO can be used. (DIPF/Orig.)

    An Automated System for the Assessment and Ranking of Domain Ontologies

    Get PDF
    As the number of intelligent software applications and the number of semantic websites continue to expand, ontologies are needed to formalize shared terms. Often it is necessary either to find a previously used ontology for a particular purpose or to develop a new one to meet a specific need. Because of the challenge involved in creating a new ontology from scratch, the former option is often preferable. The ability of a user to select an appropriate, high-quality domain ontology from a set of available options would be most useful in knowledge engineering and in developing intelligent applications. Being able to assess an ontology's quality and suitability is also important when an ontology is developed from scratch. These capabilities, however, require good quality assessment mechanisms as well as automated support when there are a large number of ontologies from which to make a selection. This thesis provides an in-depth analysis of the current research in domain ontology evaluation, including the development of a taxonomy to categorize the numerous directions the research has taken. Based on the lessons learned from the literature review, an approach to the automatic assessment of domain ontologies is selected and a suite of ontology quality assessment metrics grounded in semiotic theory is presented. The metrics are implemented in a Domain Ontology Rating System (DoORS), which is made available as an open-source web application. An additional framework is developed that would incorporate this rating system as part of a larger system to find ontology libraries on the web, retrieve ontologies from them, and assess them to select the best ontology for a particular task. An empirical evaluation in four phases shows the usefulness of the work, including a more stringent evaluation of the metrics that assess how well an ontology fits its domain and how well an ontology is regarded within its community of users.
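    To give a flavor of the structural metrics such a rating system might compute, the sketch below implements one well-known measure from the ontology-evaluation literature, "relationship richness": the share of non-taxonomic relations among all relations. The function and its inputs are assumptions for illustration; they are not taken from DoORS itself.

    ```python
    # Relationship richness: non-taxonomic relations / all relations.
    # An ontology that is a bare subclass hierarchy scores 0.0; one rich
    # in domain-specific relations scores closer to 1.0.
    def relationship_richness(subclass_edges: int, other_relations: int) -> float:
        total = subclass_edges + other_relations
        return other_relations / total if total else 0.0

    # e.g. 30 subclass links and 10 domain-specific relations
    print(relationship_richness(30, 10))  # → 0.25
    ```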

    A Knowledge Multidimensional Representation Model for Automatic Text Analysis and Generation: Applications for Cultural Heritage

    Get PDF
    Knowledge is information that has been contextualized in a certain domain, where it can be used and applied. Natural language provides the most direct way to transfer knowledge at different levels of conceptual density. The opportunity provided by the evolution of Natural Language Processing technologies is thus to make the process of knowledge transfer more fluid and universal. Indeed, unfolding domain knowledge is one way to bring to larger audiences contents that would otherwise be restricted to specialists. This has been done so far in an entirely manual way through the skills of divulgators and popular science writers. Technology now provides a way to make this transfer both less expensive and more widespread. Extracting knowledge and then generating from it suitably communicable text in natural language are the two related subtasks that need to be fulfilled in order to attain the general goal. To this aim, two fields from information technology have achieved the needed maturity and can therefore be effectively combined. On the one hand, Information Extraction and Retrieval (IER) can extract knowledge from texts and map it into a neutral, abstract form, hence liberating it from the stylistic constraints in which it originated. From there, Natural Language Generation can take charge by regenerating the extracted knowledge, automatically or semi-automatically, into texts targeting new communities. This doctoral thesis contributes to making this combination substantial through the definition and implementation of a novel multidimensional model for the representation of conceptual knowledge and of a workflow that can produce strongly customized textual descriptions. By exploiting techniques for the generation of paraphrases and by profiling target users, applications and domains, a target-driven approach is proposed to automatically generate multiple texts from the same information core.
An extended case study is described to demonstrate the effectiveness of the proposed model and approach in the Cultural Heritage application domain, so as to compare and position this contribution within the current state of the art and to outline future directions.
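    The target-driven idea of rendering one information core into multiple texts can be sketched as template selection keyed by an audience profile. All names and strings below are illustrative assumptions, not the thesis's actual model or data.

    ```python
    # One abstract information core, rendered differently per audience profile.
    CORE = {"artwork": "Riace Bronzes", "date": "5th century BC", "material": "bronze"}

    # Audience-specific surface templates (hypothetical examples).
    TEMPLATES = {
        "expert": "{artwork}: {material} statuary, dated to the {date}.",
        "tourist": "The {artwork} are {material} statues made in the {date}.",
    }

    def render(core: dict, audience: str) -> str:
        """Generate a description of the core tailored to one audience."""
        return TEMPLATES[audience].format(**core)

    print(render(CORE, "tourist"))
    # → The Riace Bronzes are bronze statues made in the 5th century BC.
    ```

    A fuller system would replace the fixed templates with paraphrase generation over the same core, which is the step the thesis automates.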

    Collaborative Research Practices and Shared Infrastructures for Humanities Computing

    Get PDF
    This volume collects the proceedings of the 2nd Annual Conference of the Italian Association for Digital Humanities (AIUCD 2013), which took place at the Department of Information Engineering of the University of Padua, 11-12 December 2013. The general theme of AIUCD 2013 was “Collaborative Research Practices and Shared Infrastructures for Humanities Computing”, so we particularly welcomed submissions on interdisciplinary work and new developments in the field, encouraging proposals relating to the theme of the conference or, more specifically: interdisciplinarity and multidisciplinarity, legal and economic issues, tools and collaborative methodologies, measurement and impact of collaborative methodologies, sharing and collaboration methods and approaches, cultural institutions and collaborative facilities, infrastructures and digital libraries as collaborative environments, and data resources and technologies sharing.

    IoT to enhance understanding of cultural heritage: Fedro authoring platform, artworks telling their fables

    No full text
    Cultural Heritage has gained great importance in recent years as a way to preserve countries' history and traditions and to support social and economic improvements. Typical IoT smart technologies represent an effective means of supporting the understanding of Cultural Heritage, through their capability to involve different users and to capture their explicit and implicit preferences, behaviors and contributions. This paper presents FEDRO, an authoring platform, as part of the intelligent infrastructures developed in DATABENC to support a cultural exhibition of “talking” sculptures held in Southern Italy in 2015. FEDRO automatically generates textual, user-profiled artwork biographies, which feed a smart app for guiding visitors during the exhibition. A preliminary experimentation revealed a tangible improvement in users' appreciation of the visit experience. Quality estimations of the generated output were also computed from users' feedback, collected through a questionnaire completed at the end of their visit.