
    Cloud service localisation

    The essence of cloud computing is the provision of software and hardware services to a range of users in different locations. The aim of cloud service localisation is to facilitate the internationalisation and localisation of cloud services by allowing their adaptation to different locales. We address lingual localisation by providing service-level language translation techniques that adapt services to different languages, and regulatory localisation by providing standards-based mappings to achieve compliance with regionally varying laws, standards and regulations. The aim is to support and enforce the explicit modelling of aspects particularly relevant to localisation, together with runtime support consisting of tools and middleware services that automate deployment based on models of locales, driven by the two localisation dimensions. We focus here on an ontology-based conceptual information model that integrates locale specification in a coherent way.
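    A minimal sketch of how such a two-dimensional locale specification might look as a data model; the names (Locale, LingualSpec, RegulatorySpec) and fields are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class LingualSpec:
    """Lingual dimension of a locale: the target language for
    service-level translation of descriptions and messages."""
    language: str             # e.g. "de-DE" (hypothetical example)
    translation_service: str  # endpoint used for service-level translation

@dataclass
class RegulatorySpec:
    """Regulatory dimension: standards and regulations the service
    must map onto in a given region."""
    region: str                      # e.g. "EU"
    applicable_standards: list[str]  # e.g. ["GDPR", "ISO 27001"]

@dataclass
class Locale:
    """A locale integrates both localisation dimensions, loosely
    mirroring the ontology-based conceptual model described above."""
    name: str
    lingual: LingualSpec
    regulatory: RegulatorySpec

def select_deployment_profile(locale: Locale) -> dict:
    """Sketch of what deployment middleware might derive per locale."""
    return {
        "translate_to": locale.lingual.language,
        "compliance_mappings": locale.regulatory.applicable_standards,
    }
```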

    Bridging the Semantic Gap with SQL Query Logs in Natural Language Interfaces to Databases

    A critical challenge in constructing a natural language interface to databases (NLIDB) is bridging the semantic gap between a natural language query (NLQ) and the underlying data. Two specific ways in which this challenge manifests are keyword mapping and join path inference. Keyword mapping is the task of mapping individual keywords in the original NLQ to database elements (such as relations, attributes or values). It is challenging due to the ambiguity in mapping the user's mental model and diction to the schema definition and contents of the underlying database. Join path inference is the process of selecting the relations and join conditions in the FROM clause of the final SQL query; it is difficult because NLIDB users lack knowledge of the database schema or SQL and therefore cannot explicitly specify the intermediate tables and joins needed to construct the final SQL query. In this paper, we propose leveraging information from the SQL query log of a database to enhance the performance of existing NLIDBs with respect to these challenges. We present Templar, a system that can be used to augment existing NLIDBs. Our extensive experimental evaluation demonstrates the effectiveness of our approach, yielding up to a 138% improvement in top-1 accuracy in existing NLIDBs by leveraging SQL query log information.
    Comment: Accepted to the IEEE International Conference on Data Engineering (ICDE) 2019.
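    The core idea of scoring candidates by their popularity in the query log can be sketched roughly as follows; the weighting scheme, helper names, and example statistics are illustrative assumptions, not Templar's actual implementation:

```python
import difflib
import heapq
from collections import Counter

# Hypothetical log statistics: how often each schema element appears
# in past SQL queries (extracted offline from the query log).
log_counts = Counter({
    "employee.name": 120, "employee.salary": 80, "department.name": 60,
})

def map_keyword(keyword: str, schema_elements: list[str]) -> str:
    """Keyword mapping: rank schema elements by string similarity to
    the keyword, boosted by how often the query log uses them."""
    total = max(sum(log_counts.values()), 1)
    def score(elem: str) -> float:
        sim = difflib.SequenceMatcher(None, keyword.lower(), elem.lower()).ratio()
        popularity = log_counts[elem] / total
        return sim + popularity  # illustrative weighting
    return max(schema_elements, key=score)

def infer_join_path(graph: dict[str, dict[str, float]], src: str, dst: str) -> list[str]:
    """Join path inference as cheapest path over the schema's FK graph;
    edges seen often in logged FROM clauses would get lower weights."""
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return []
```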

    Quarry: A user-centered big data integration platform

    Obtaining valuable insights and actionable knowledge from data requires cross-analysis of domain data, typically coming from various sources. Doing so inevitably imposes the burdensome processes of unifying different data formats and discovering integration paths, all driven by the specific analytical needs of a data analyst. Along with large volumes of data, the variety of formats, data models, and semantics drastically contributes to the complexity of such processes. Although there have been many attempts to automate various processes along the Big Data pipeline, no unified platform accessible to users without technical skills (like statisticians or business analysts) has been proposed. In this paper, we present Quarry, a Big Data integration platform that uses hypergraph-based metadata to facilitate (and largely automate) the integration of domain data coming from a variety of sources, and provides an intuitive interface to assist end users both in: (1) data exploration with the goal of discovering potentially relevant analysis facets, and (2) consolidation and deployment of data flows which integrate the data and prepare them for further analysis (descriptive or predictive), visualization, and/or publishing. We validate Quarry's functionalities with the use case of World Health Organization (WHO) epidemiologists and data analysts in their fight against Neglected Tropical Diseases (NTDs).
    This work is partially supported by the GENESIS project, funded by the Spanish Ministerio de Ciencia, Innovación y Universidades under project TIN2016-79269-R.
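    As an illustration of the hypergraph-metadata idea (a simplified assumption about the model, not Quarry's actual API), attributes from different sources can be grouped by hyperedges that record a shared concept, and integration candidates discovered by traversing them:

```python
from collections import defaultdict

class MetadataHypergraph:
    """Nodes are source attributes (e.g. "who_ntd.country_iso"); a
    hyperedge groups attributes denoting the same real-world concept."""
    def __init__(self):
        self.hyperedges: dict[str, set[str]] = defaultdict(set)

    def link(self, concept: str, *attributes: str) -> None:
        """Add attributes to the hyperedge for a shared concept."""
        self.hyperedges[concept].update(attributes)

    def integration_candidates(self, attribute: str) -> set[str]:
        """All attributes joinable with `attribute` via a shared concept."""
        out: set[str] = set()
        for members in self.hyperedges.values():
            if attribute in members:
                out |= members - {attribute}
        return out

# Hypothetical example: two epidemiological sources sharing a country concept.
g = MetadataHypergraph()
g.link("country", "who_ntd.country_iso", "surveys.country_code")
print(g.integration_candidates("who_ntd.country_iso"))
# {'surveys.country_code'}
```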

    Shape Expressions Schemas

    We present Shape Expressions (ShEx), an expressive schema language for RDF designed to provide a high-level, user-friendly syntax with intuitive semantics. ShEx allows one to describe the vocabulary and the structure of an RDF graph, and to constrain the allowed values for the properties of a node. It includes an algebraic grouping operator, a choice operator, cardinality constraints on the number of allowed occurrences of a property, and negation. We define the semantics of the language and illustrate it with examples. We then present a validation algorithm that, given a node in an RDF graph and a constraint defined by the ShEx schema, checks whether the node satisfies that constraint. The algorithm outputs a proof that contains trivially verifiable associations of nodes with the constraints that they satisfy. This structure can be used for complex post-processing tasks, such as transforming the RDF graph into other graph or tree structures, verifying more complex constraints, or debugging (w.r.t. the schema). We also show the inherent difficulty of error identification in ShEx.
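    A toy sketch of the validation idea, checking only cardinality constraints on a node's properties over an RDF graph represented as triples, and returning the proof structure described above; this is a heavily simplified subset of ShEx, not the paper's full algorithm:

```python
# Graph as a set of (subject, predicate, object) triples.
graph = {
    ("ex:alice", "ex:name", '"Alice"'),
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:knows", "ex:carol"),
}

# A shape as {predicate: (min_count, max_count)} — cardinality only.
person_shape = {"ex:name": (1, 1), "ex:knows": (0, 5)}

def validate(node: str, shape: dict) -> tuple[bool, dict]:
    """Check the node against the shape. On success, return a trivially
    verifiable proof: the triples matched by each constraint."""
    proof = {}
    for pred, (lo, hi) in shape.items():
        matched = [t for t in graph if t[0] == node and t[1] == pred]
        if not (lo <= len(matched) <= hi):
            return False, {pred: matched}  # the violated constraint
        proof[pred] = matched
    return True, proof

ok, proof = validate("ex:alice", person_shape)
print(ok)  # True
```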

    Proceedings of the 4th Workshop of the MPM4CPS COST Action

    Proceedings of the 4th Workshop of the MPM4CPS COST Action, containing the presentations delivered during the workshop together with papers that extend some of them.