
    Declarative Rules for Annotated Expert Knowledge in Change Management

    In this paper, we use declarative and domain-specific languages for representing expert knowledge in the field of change management in organisational psychology. Expert rules obtained in practical case studies are represented as declarative rules in a deductive database. The expert rules are annotated with information describing their provenance and confidence. Additional provenance information for the whole rule base, or for parts of it, can be given by ontologies. Deductive databases allow the semantics of the expert knowledge to be defined declaratively with rules; the evaluation of the rules can be optimised and the inference mechanisms can be changed, since they are specified in an abstract way. As the logical syntax of rules has been a problem in previous applications of deductive databases, we use specially designed domain-specific languages to make the rule syntax easier for non-programmers. The semantics of the whole knowledge base is declarative. On the data level, the rules are written declaratively in datalogs, an extension of the well-known deductive database language Datalog, and additional datalogs rules can configure the processing of the annotated rules and the ontologies.
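    The annotation mechanism described above can be pictured with a small sketch. The following Python fragment is illustrative only and does not reproduce the paper's datalogs syntax or evaluator: expert rules are stored as Datalog-style facts and rules, each carrying a provenance source and a confidence value, and one naive bottom-up evaluation step propagates the annotations to derived facts (here simply by taking the minimum confidence of the premises and the rule).

```python
# Illustrative only; not the paper's datalogs syntax or inference engine.
# Facts map (predicate, arguments) to their provenance/confidence annotation.
facts = {
    ("resists_change", ("team_a",)): {"source": "case_study_1", "confidence": 0.8},
    ("low_participation", ("team_a",)): {"source": "case_study_2", "confidence": 0.9},
}

# One expert rule with a single variable X; the annotation records its origin.
rule = {
    "head": "needs_intervention",
    "body": ["resists_change", "low_participation"],
    "annotation": {"source": "expert_panel", "confidence": 0.7},
}

def evaluate(facts, rule):
    """Naive bottom-up step: derive head(X) whenever every body atom holds for X.

    The derived fact carries the rule's source and the minimum confidence of
    the premises and the rule (one simple, illustrative combination policy).
    """
    derived = dict(facts)
    bindings = {args for (pred, args) in facts if pred == rule["body"][0]}
    for args in bindings:
        premises = [(pred, args) for pred in rule["body"]]
        if all(p in facts for p in premises):
            conf = min([facts[p]["confidence"] for p in premises]
                       + [rule["annotation"]["confidence"]])
            derived[(rule["head"], args)] = {
                "source": rule["annotation"]["source"], "confidence": conf,
            }
    return derived

for fact, annotation in evaluate(facts, rule).items():
    print(fact, annotation)
```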

    Polyflow: a Polystore-compliant mechanism to provide interoperability to heterogeneous provenance graphs

    Many scientific experiments are modeled as workflows. Workflows usually output massive amounts of data. To guarantee the reproducibility of workflows, they are usually orchestrated by Workflow Management Systems (WfMSs), which capture provenance data. Provenance represents the lineage of a data fragment throughout its transformations by activities in a workflow. Provenance traces are usually represented as graphs, which allow scientists to analyze and evaluate the results produced by a workflow. However, each WfMS has its own proprietary provenance format and captures provenance at a different level of granularity. A challenge therefore arises in more complex scenarios, in which the scientist needs to interpret provenance graphs generated by multiple WfMSs and workflows. To first understand the research landscape, we conduct a Systematic Literature Mapping, assessing existing solutions under several different lenses. With a clearer understanding of the state of the art, we propose a tool called Polyflow, which is based on the concept of Polystore systems, integrating several databases of heterogeneous origin by adopting a global ProvONE schema. Polyflow allows scientists to query multiple provenance graphs in an integrated way. Polyflow was evaluated by experts using provenance data collected from real experiments that generate phylogenetic trees through workflows. The experimental results suggest that Polyflow is a viable solution for interoperating heterogeneous provenance data generated by different WfMSs, from both a usability and a performance standpoint.
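    The core idea of querying heterogeneous provenance graphs through one shared schema can be sketched in a few lines. The snippet below is purely illustrative and does not use Polyflow's actual API or the full ProvONE vocabulary: provenance exported by two hypothetical WfMSs, each with its own field names, is mapped into one directed graph with common relation labels, after which a single lineage query runs across both sources.

```python
# Illustrative sketch (not Polyflow's implementation): normalise provenance
# from two hypothetical WfMS export formats into one shared graph, then run
# one lineage query across data produced by both systems.
import networkx as nx

def add_wfms_a(g, rows):
    """Hypothetical WfMS A exports rows of (activity, consumed, produced)."""
    for activity, consumed, produced in rows:
        g.add_edge(consumed, activity, relation="used")
        g.add_edge(activity, produced, relation="generated")

def add_wfms_b(g, rows):
    """Hypothetical WfMS B exports dicts with different field names."""
    for row in rows:
        g.add_edge(row["in_file"], row["step"], relation="used")
        g.add_edge(row["step"], row["out_file"], relation="generated")

def lineage(g, data_item):
    """All upstream nodes (data items and activities) the item depends on."""
    return nx.ancestors(g, data_item)

g = nx.DiGraph()
add_wfms_a(g, [("align", "reads.fastq", "aligned.bam")])
add_wfms_b(g, [{"step": "build_tree", "in_file": "aligned.bam",
                "out_file": "phylo.tree"}])
print(lineage(g, "phylo.tree"))
# contains: reads.fastq, align, aligned.bam, build_tree
```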

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem, to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself with the development of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints for the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge. AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings a great deal of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies. Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.

    Survey over Existing Query and Transformation Languages

    A widely acknowledged obstacle to realizing the vision of the Semantic Web is the inability of many current Semantic Web approaches to cope with data available in such diverging representation formalisms as XML, RDF, or Topic Maps. A common query language is the first step towards allowing transparent access to data in any of these formats. To further the understanding of the requirements and approaches proposed for query languages in the conventional as well as the Semantic Web, this report surveys a large number of query languages for accessing XML, RDF, or Topic Maps. This is the first systematic survey to consider query languages from all these areas. From the detailed survey of these query languages, a common classification scheme is derived that is useful for understanding and differentiating languages within and among all three areas.

    Four Lessons in Versatility or How Query Languages Adapt to the Web

    Exposing not only human-centered information but also machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled a new kind of application and business in which data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in a different Web format: some providers choose XML, others RDF, others again JSON or OWL for their data, even in similar domains. This fracturing stifles innovation, as application builders have to cope not with one Web stack (e.g., XML technology) but with several, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: in a single query language, XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) We provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats. (2) Both the declarative (logic-based) and the operational semantics are unified in that they apply to querying XML and RDF in the same way. (3) We show that the resulting query language can be implemented reusing traditional database technology, if desirable. Nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear-time and linear-space querying also for many RDF graphs. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards more convenient, yet highly efficient, data access in a “Web of Data”.
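    The interval labelings mentioned above can be illustrated with a minimal sketch. The Python fragment below is not Xcerpt's implementation; it only shows the underlying idea for the tree-shaped case: each node of an XML-like tree is labeled with a pre/post interval in one depth-first pass, after which ancestor tests (and hence many structural query steps) reduce to two integer comparisons.

```python
# Minimal sketch of interval labelling for tree-shaped data (not Xcerpt itself).
def label(tree, root):
    """tree: dict mapping node -> list of children. Returns node -> (pre, post)."""
    intervals, counter = {}, [0]

    def visit(node):
        pre = counter[0]; counter[0] += 1
        for child in tree.get(node, []):
            visit(child)
        post = counter[0]; counter[0] += 1
        intervals[node] = (pre, post)

    visit(root)
    return intervals

def is_ancestor(intervals, u, v):
    """u is an ancestor of v iff u's interval strictly contains v's interval."""
    (u_pre, u_post), (v_pre, v_post) = intervals[u], intervals[v]
    return u_pre < v_pre and v_post < u_post

xml_tree = {"book": ["title", "chapter"], "chapter": ["section"]}
labels = label(xml_tree, "book")
print(is_ancestor(labels, "book", "section"))   # True
print(is_ancestor(labels, "title", "section"))  # False
```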

    Web and Semantic Web Query Languages

    A number of techniques have been developed to facilitate powerful data retrieval on the Web and Semantic Web. Three categories of Web query languages can be distinguished according to the format of the data they can retrieve: XML, RDF, and Topic Maps. This article introduces the spectrum of languages falling into these categories and summarises their salient aspects. The languages are introduced using common sample data and query types. Key aspects of the query languages considered are stressed in a conclusion.

    Knowledge Components and Methods for Policy Propagation in Data Flows

    Data-oriented systems and applications are at the centre of current developments of the World Wide Web (WWW). On the Web of Data (WoD), information sources can be accessed and processed for many purposes. Users need to be aware of any licences or terms of use associated with the data sources they want to use; conversely, publishers need support in assigning the appropriate policies alongside the data they distribute. In this work, we tackle the problem of policy propagation in data flows, an expression that refers to the way data is consumed, manipulated and produced within processes. We pose the question of what kind of components are required, and how they can be acquired, managed, and deployed, to support users in deciding which policies propagate to the output of a data-intensive system from the ones associated with its input. We consider three scenarios: applications of the Semantic Web, workflow reuse in Open Science, and the exploitation of urban data in City Data Hubs. Starting from the analysis of Semantic Web applications, we propose a data-centric approach to semantically describe processes as data flows: the Datanode ontology, which comprises a hierarchy of the possible relations between data objects. By means of Policy Propagation Rules, it is possible to link data flow steps and the policies derivable from semantic descriptions of data licences. We show how these components can be designed, how they can be effectively managed, and how to reason efficiently with them. In a second phase, the developed components are verified using a Smart City Data Hub as a case study, for which we developed an end-to-end solution for policy propagation. Finally, we evaluate our approach and report on a user study aimed at assessing both the quality and the value of the proposed solution.
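    How Policy Propagation Rules connect data flow relations to policies can be pictured with a small example. The following Python sketch is purely illustrative: the relation names, policies, and rules are invented for the example and are not taken from the Datanode ontology or the thesis itself; it only shows the mechanism of carrying input policies along typed relations to a process's outputs.

```python
# Toy policy propagation over a data flow; names and rules are invented.
# Data flow: (source data object, relation, target data object)
data_flow = [
    ("weather_raw", "hasCopy", "weather_backup"),
    ("weather_raw", "hasSample", "weather_preview"),
    ("weather_raw", "hasStatistic", "weather_summary"),
]

# Policies attached to the inputs of the process.
policies = {"weather_raw": {"attribution-required", "non-commercial"}}

# Propagation rules: does a given policy follow a given relation?
propagation_rules = {
    ("attribution-required", "hasCopy"): True,
    ("non-commercial", "hasCopy"): True,
    ("attribution-required", "hasSample"): True,
    ("non-commercial", "hasSample"): True,
    # For this toy example, a purely aggregated statistic is assumed not to
    # carry the original obligations forward.
    ("attribution-required", "hasStatistic"): False,
    ("non-commercial", "hasStatistic"): False,
}

def propagate(data_flow, policies, rules):
    """Return the set of policies attached to every object in the data flow."""
    result = {node: set(p) for node, p in policies.items()}
    for source, relation, target in data_flow:
        for policy in result.get(source, set()):
            if rules.get((policy, relation), False):
                result.setdefault(target, set()).add(policy)
    return result

print(propagate(data_flow, policies, propagation_rules))
```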

    GENERIC AND ADAPTIVE METADATA MANAGEMENT FRAMEWORK FOR SCIENTIFIC DATA REPOSITORIES

    Rapid technological progress has led to manifold advances in data acquisition and data processing across research disciplines. This, in turn, results in an immense growth of the data and metadata generated by scientific experiments. Regardless of the concrete field of research, scientific practice is increasingly characterised by data and metadata. As a consequence, universities, research communities, and funding agencies are intensifying their efforts to curate, store, and analyse scientific data efficiently. The essential goals of scientific data repositories are the establishment of long-term storage, access to data, the provision of data for reuse and referencing, the capture of data provenance for reproducibility, and the provision of metadata, annotations, or references that convey the domain-specific knowledge required to interpret the data. Scientific data repositories are highly complex systems composed of elements from different research fields, such as algorithms for data compression and long-term data archiving, frameworks for metadata and annotation management, workflow provenance and provenance interoperability between heterogeneous workflow systems, authorisation and authentication infrastructures, and visualisation tools for data interpretation.

    This thesis describes a modular architecture for a scientific data repository that supports research communities in orchestrating their data and metadata across the respective life cycle. The architecture consists of components representing four research fields. The first component is a data transfer client, which offers a generic interface for capturing data from, and accessing data in, scientific data acquisition systems. The second component is the MetaStore framework, an adaptive metadata management framework that supports handling both static and dynamic metadata models. To be able to handle arbitrary metadata schemas, the development of the MetaStore framework is based on the component-based dynamic composition design pattern. MetaStore is additionally equipped with an annotation framework for handling dynamic metadata. The third component is an extension of the MetaStore framework for the automated handling of provenance metadata for BPEL-based workflow management systems. To this end, the Prov2ONE algorithm that we designed and implemented automatically translates the structure and the execution traces of BPEL workflow definitions into the ProvONE provenance model. The availability of the complete BPEL provenance data in ProvONE not only enables an aggregated analysis of a workflow definition together with its execution trace, but also ensures the compatibility of provenance data originating from different specification languages. The fourth component of our scientific data repository is the ProvONE Provenance Interoperability Framework (P-PIF), which ensures the interoperability of provenance data from heterogeneous provenance models of different workflow management systems. P-PIF consists of two components: the Prov2ONE algorithm for SCUFL and MoML workflow specifications, and workflow-management-system-specific adapters for extracting, translating, and modelling retrospective provenance data into the ProvONE provenance model. P-PIF can translate both control flow and data flow into ProvONE. The availability of heterogeneous provenance traces in ProvONE makes it possible to compare, analyse, and query provenance data from different workflow systems.

    We evaluated the components of the scientific data repository presented in this thesis as follows. For the data transfer client, we examined the data transfer performance with the standard protocol for nanoscopy data sets. We evaluated the MetaStore framework with respect to two aspects: on the one hand, we tested metadata ingestion and full-text search performance under different database configurations; on the other hand, we show the comprehensive coverage of MetaStore's functionality through a feature-based comparison of MetaStore with existing metadata management systems. For the evaluation of P-PIF, we first proved the correctness and completeness of our Prov2ONE algorithm and, in addition, evaluated the prospective graph patterns in ProvONE generated by the Prov2ONE BPEL algorithm against existing BPEL control-flow patterns. To show that P-PIF is a sustainable framework that adheres to standards, we furthermore compare the features of P-PIF with those of existing provenance interoperability frameworks. These evaluations demonstrate the superiority and the advantages of the individual components developed in this work over existing systems.
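    A rough impression of translating a workflow definition into prospective provenance can be given in a few lines. The Python sketch below is not the Prov2ONE algorithm described above; it merely illustrates the general idea under simplified assumptions: a toy workflow definition (activities plus data links, loosely BPEL-like) is mapped onto ProvONE-style Program, Port, and Channel records.

```python
# Simplified illustration only; not the actual Prov2ONE algorithm.
def to_provone(workflow):
    """workflow: {"activities": [...], "links": [(src_act, data, dst_act), ...]}."""
    doc = {"provone:Program": [], "provone:Port": [], "provone:Channel": []}
    for activity in workflow["activities"]:
        doc["provone:Program"].append({"id": activity})
    for src, data, dst in workflow["links"]:
        out_port, in_port = f"{src}.out.{data}", f"{dst}.in.{data}"
        doc["provone:Port"].extend([{"id": out_port}, {"id": in_port}])
        doc["provone:Channel"].append(
            {"id": f"channel.{data}", "connectsTo": [out_port, in_port]}
        )
    return doc

# A toy, BPEL-like workflow definition with two data links.
bpel_like = {
    "activities": ["receiveOrder", "checkStock", "shipOrder"],
    "links": [("receiveOrder", "order", "checkStock"),
              ("checkStock", "approval", "shipOrder")],
}
print(to_provone(bpel_like))
```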