    Ontology technology for the development and deployment of learning technology systems - a survey

    The World-Wide Web is currently undergoing dramatic changes. The Semantic Web is an initiative to bring meaning to the Web, with ontology technology – a knowledge representation framework – at its core. We illustrate the importance of this evolutionary development and survey five scenarios demonstrating different forms of application of ontology technologies in the development and deployment of learning technology systems. Ontology technologies are highly useful to organise, personalise, and publish learning content and to discover, generate, and compose learning objects.
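
    As a rough illustration of the kind of organisation and discovery described here, the sketch below annotates a learning object with ontology terms and then queries for matching objects; the vocabulary and resource names are invented for illustration, not taken from the surveyed systems.

```python
# Minimal sketch (hypothetical vocabulary): annotating a learning object
# with ontology terms so it can be discovered and composed.
from rdflib import Graph, Namespace, Literal, RDF

LEARN = Namespace("http://example.org/learning#")  # invented ontology namespace
g = Graph()

lo = LEARN["intro-to-ontologies"]
g.add((lo, RDF.type, LEARN.LearningObject))
g.add((lo, LEARN.topic, LEARN.SemanticWeb))
g.add((lo, LEARN.difficulty, Literal("beginner")))
g.add((lo, LEARN.prerequisite, LEARN["intro-to-rdf"]))

# Discovery: find all beginner-level objects on a given topic.
for s in g.subjects(LEARN.topic, LEARN.SemanticWeb):
    if (s, LEARN.difficulty, Literal("beginner")) in g:
        print(s)
```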

    Transformation Techniques for OCL Constraints

    Constraints play a key role in the definition of conceptual schemas. In the UML, constraints are usually specified by means of invariants written in the OCL. However, due to the high expressiveness of the OCL, the designer has several syntactic alternatives for expressing each constraint. The techniques presented in this paper assist the designer during the definition of constraints by generating equivalent alternatives to the ones initially defined. Moreover, in the context of the MDA, transformations between these alternatives are required as part of the PIM-to-PIM, PIM-to-PSM, or PIM-to-code transformations of the original conceptual schema.
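
    A minimal sketch of the underlying idea, with invented class names: the same cardinality constraint expressed through two equivalent navigations of the association, of the kind such transformation techniques would generate (the corresponding OCL invariants appear in the comments).

```python
# Two syntactically different but semantically equivalent forms of one
# constraint; class and attribute names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Department:
    max_size: int
    employees: List["Employee"] = field(default_factory=list)

@dataclass
class Employee:
    department: Optional[Department] = None

# Alternative 1 (OCL: context Department inv:
#   self.employees->size() <= self.maxSize)
def inv_from_department(d: Department) -> bool:
    return len(d.employees) <= d.max_size

# Alternative 2, navigating from the opposite association end
# (OCL: context Employee inv:
#   self.department.employees->size() <= self.department.maxSize)
def inv_from_employee(e: Employee) -> bool:
    return len(e.department.employees) <= e.department.max_size

# Both alternatives constrain the same population; a PIM-to-code
# transformation may pick whichever is cheaper to check at runtime.
d = Department(max_size=2)
e = Employee(department=d)
d.employees.append(e)
assert inv_from_department(d) and inv_from_employee(e)
```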

    A unified view of data-intensive flows in business intelligence systems: a survey

    Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time and operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and of the challenges of moving towards next-generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, identifying challenges that remain to be addressed and showing how current solutions can be applied to them.
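
    A minimal sketch of a traditionally batched ETL step of the kind the survey classifies; the source file, target table, and schema are hypothetical.

```python
# Batched extract-transform-load: read a source file, conform the records
# to an analysis-ready schema, and load them into a warehouse table.
import csv
import sqlite3

def extract(path):
    # Extract: stream rows from a (hypothetical) CSV source.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Transform: conform source data to the target schema.
    for r in rows:
        yield (r["order_id"], r["region"].strip().upper(), float(r["amount"]))

def load(rows, conn):
    # Load: bulk-insert the conformed rows into the warehouse.
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")  # stand-in for the data warehouse
conn.execute("CREATE TABLE sales (order_id TEXT, region TEXT, amount REAL)")
load(transform(extract("orders.csv")), conn)  # "orders.csv" is hypothetical
```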

    Cloud service localisation

    The essence of cloud computing is the provision of software and hardware services to a range of users in different locations. The aim of cloud service localisation is to facilitate the internationalisation and localisation of cloud services by allowing their adaptation to different locales. We address lingual localisation by providing service-level language translation techniques to adapt services to different languages, and regulatory localisation by providing standards-based mappings to achieve compliance with regionally varying laws, standards and regulations. The aim is to support the explicit modelling of aspects particularly relevant to localisation, together with runtime support consisting of tools and middleware services that automate deployment based on models of locales, driven by these two localisation dimensions. We focus here on an ontology-based conceptual information model that integrates locale specification in a coherent way.
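
    A rough sketch of a locale specification covering the two localisation dimensions, lingual and regulatory; the field names and rule identifiers are illustrative, not taken from the paper's ontology.

```python
# A locale model with the two localisation dimensions, and a derivation of
# deployment configuration from it; all identifiers are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Locale:
    language: str      # lingual dimension, e.g. "de"
    jurisdiction: str  # regulatory dimension, e.g. "EU"

# Standards-based mapping from jurisdiction to required compliance rules
# (hypothetical rule names).
REGULATORY_RULES = {
    "EU": ["gdpr-consent", "data-residency-eu"],
    "US": ["ccpa-notice"],
}

def adapt_service(locale: Locale) -> dict:
    """Derive a deployment configuration from a locale specification."""
    return {
        "translation_target": locale.language,
        "compliance_checks": REGULATORY_RULES.get(locale.jurisdiction, []),
    }

print(adapt_service(Locale(language="de", jurisdiction="EU")))
```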

    The role of metaphor in shaping the identity and agenda of the United Nations: the imagining of an international community and international threat

    This article examines the representation of the United Nations in speeches delivered by its Secretary-General. It focuses on the role of metaphor in constructing a common ‘imagining’ of international diplomacy and legitimising an international organisational identity. The Secretary-General legitimises the organisation, in part, through the delegitimisation of agents, actions and events constructed as threatening to the international community and to the well-being of mankind. A desire to combat the forces of menace or evil is argued to motivate and determine the organisational agenda. This is predicated upon an international ideology of humanity in which difference is silenced and ‘working towards the common good’ is emphasised, and it is exploited to rouse emotions and legitimise institutional power. Polarisation and antithesis are achieved through the use of metaphors designed to enhance positive and negative evaluations. The article further points to the constitutive, persuasive and edifying power of topic- and situationally-motivated metaphors in speech-making.

    CROEQS: Contemporaneous Role Ontology-based Expanded Query Search: implementation and evaluation

    Searching annotated items in multimedia databases is becoming increasingly important. The traditional approach is to build a search engine based on textual metadata. However, in manually annotated multimedia databases, the conceptual level of what is searched for may differ from the level of abstraction of the item annotations. To address this problem, we present CROEQS, a semantically enhanced search engine. It allows the user to query annotated persons not only by their name, but also by the roles they held at the time the multimedia item was broadcast. We also present the ontology used to expand such queries: it allows us to semantically represent domain knowledge about people fulfilling a role during a temporal interval in general, and about politicians holding a political office in particular. The evaluation results show that query expansion using data retrieved from an ontology considerably narrows the result set, although at a performance penalty.
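
    A small sketch of the query-expansion idea, assuming rdflib and an invented role vocabulary: a person query is expanded with the role that person held on the broadcast date.

```python
# Ontology-based query expansion: add the role a person held at broadcast
# time to the search terms. Vocabulary and data are invented for illustration.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/roles#")
g = Graph()
g.add((EX.jdoe, EX.holdsOffice, EX.office1))
g.add((EX.office1, EX.roleLabel, Literal("Minister of Finance")))
g.add((EX.office1, EX.start, Literal("2004-01-01")))
g.add((EX.office1, EX.end, Literal("2008-12-31")))

def expand_query(person, broadcast_date):
    """Return search terms: the person plus roles valid on the given date."""
    terms = [str(person)]
    for office in g.objects(person, EX.holdsOffice):
        start = str(next(g.objects(office, EX.start)))
        end = str(next(g.objects(office, EX.end)))
        if start <= broadcast_date <= end:  # ISO dates compare as strings
            terms.append(str(next(g.objects(office, EX.roleLabel))))
    return terms

print(expand_query(EX.jdoe, "2006-05-01"))
```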

    Achieving interoperability between the CARARE schema for monuments and sites and the Europeana Data Model

    Mapping between different data models in a data aggregation context always presents significant interoperability challenges. In this paper, we describe the challenges faced and the solutions developed when mapping the CARARE schema, designed for archaeological and architectural monuments and sites, to the Europeana Data Model (EDM), a model based on Linked Data principles, for the purpose of integrating more than two million metadata records from national monument collections and databases across Europe into the Europeana digital library. (The final version of this paper is openly published in the proceedings of the Dublin Core 2013 conference; see http://dcevents.dublincore.org/IntConf/dc-2013/paper/view/17.)
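
    A rough sketch of such a mapping, assuming rdflib: a simplified stand-in for a CARARE record is expressed with core EDM classes (edm:ProvidedCHO, ore:Aggregation); the CARARE field names here are placeholders, not the actual schema.

```python
# Map a (simplified, hypothetical) CARARE-style record into EDM triples.
from rdflib import Graph, Namespace, URIRef, Literal, RDF

EDM = Namespace("http://www.europeana.eu/schemas/edm/")
ORE = Namespace("http://www.openarchives.org/ore/terms/")
DC = Namespace("http://purl.org/dc/elements/1.1/")

carare_record = {  # simplified stand-in for a CARARE heritage-asset record
    "id": "http://example.org/monument/42",
    "name": "Roman aqueduct",
    "image": "http://example.org/monument/42.jpg",
}

g = Graph()
cho = URIRef(carare_record["id"])
agg = URIRef(carare_record["id"] + "#aggregation")
g.add((cho, RDF.type, EDM.ProvidedCHO))        # the cultural heritage object
g.add((cho, DC.title, Literal(carare_record["name"])))
g.add((agg, RDF.type, ORE.Aggregation))        # its aggregation in EDM
g.add((agg, EDM.aggregatedCHO, cho))
g.add((agg, EDM.isShownBy, URIRef(carare_record["image"])))
print(g.serialize(format="turtle"))
```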

    Exploring manuscripts: sharing ancient wisdoms across the semantic web

    Recent work in digital humanities has seen researchers increasingly producing online editions of texts and manuscripts, particularly through adoption of the TEI XML format for online publishing. The benefits of semantic web techniques remain underexplored in such research, however, with a lack of sharing and communication of research information. The Sharing Ancient Wisdoms (SAWS) project applies linked data practices to enhance and expand on what is possible with these digital text editions. Focussing on Greek and Arabic collections of ancient wise sayings, which are often related to each other, we use RDF to annotate the TEI documents and extract semantic information from them as RDF triples. This allows researchers to explore the conceptual networks that arise from these interconnected sayings. The SAWS project advocates a semantic-web-based methodology, enhancing rather than replacing current workflow processes, through which digital humanities researchers can share their findings and collectively benefit from each other’s work.
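
    A minimal sketch of the extraction step, assuming rdflib and the standard TEI namespace: related sayings in a made-up TEI fragment are turned into RDF triples. The relation property is a placeholder, not the actual SAWS vocabulary.

```python
# Extract relations between TEI <seg> elements as RDF triples.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, URIRef

TEI_NS = "{http://www.tei-c.org/ns/1.0}"
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"
SAWS = Namespace("http://example.org/saws#")  # placeholder vocabulary

tei = ET.fromstring(
    '<TEI xmlns="http://www.tei-c.org/ns/1.0"><text><body>'
    '<seg xml:id="saying-1" corresp="#saying-2">Know thyself.</seg>'
    '<seg xml:id="saying-2">Gnothi seauton.</seg>'
    '</body></text></TEI>'
)

g = Graph()
for seg in tei.iter(TEI_NS + "seg"):
    seg_id = seg.get(XML_ID)
    target = seg.get("corresp")  # TEI pointer to a related saying
    if seg_id and target:
        g.add((URIRef("urn:" + seg_id),
               SAWS.isRelatedTo,
               URIRef("urn:" + target.lstrip("#"))))
print(g.serialize(format="turtle"))
```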

    Encoding models for scholarly literature

    We examine the issue of digital formats for document encoding, archiving and publishing, through the specific example of "born-digital" scholarly journal articles. We begin by looking at the traditional workflow of journal editing and publication, and at how these practices have made the transition into the online domain. We examine the range of file formats in which electronic articles are currently stored and published, and argue strongly that, despite the prevalence of binary and proprietary formats such as PDF and MS Word, XML is a far superior encoding choice for journal articles. Next, we look at the range of XML document structures (DTDs, schemas) in common use for encoding journal articles, and consider some of their strengths and weaknesses. We suggest that, despite the existence of specialized schemas intended specifically for journal articles (such as NLM), and more broadly used publication-oriented schemas such as DocBook, there are strong arguments in favour of developing a subset or customization of the Text Encoding Initiative (TEI) schema for journal-article encoding; TEI is already in use in a number of journal publication projects, and the scale and precision of the TEI tagset make it particularly appropriate for encoding scholarly articles. We outline the document structure of a TEI-encoded journal article, and look in detail at suggested markup patterns for specific features of journal articles.
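
    A skeletal TEI journal-article structure of the kind outlined above, generated here with Python's ElementTree; the element choices follow common TEI practice rather than a specific customization from the paper.

```python
# Build a minimal TEI document: a header with the article title, and a body
# with one section. Real articles would add publicationStmt, sourceDesc, etc.
import xml.etree.ElementTree as ET

ET.register_namespace("", "http://www.tei-c.org/ns/1.0")
NS = "{http://www.tei-c.org/ns/1.0}"

tei = ET.Element(NS + "TEI")
header = ET.SubElement(tei, NS + "teiHeader")
file_desc = ET.SubElement(header, NS + "fileDesc")
title_stmt = ET.SubElement(file_desc, NS + "titleStmt")
ET.SubElement(title_stmt, NS + "title").text = "Encoding models for scholarly literature"

text = ET.SubElement(tei, NS + "text")
body = ET.SubElement(text, NS + "body")
div = ET.SubElement(body, NS + "div")
ET.SubElement(div, NS + "head").text = "Introduction"
ET.SubElement(div, NS + "p").text = "Article prose goes here."

print(ET.tostring(tei, encoding="unicode"))
```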