1,171 research outputs found

    Peer Data Management

    Peer Data Management (PDM) deals with the management of structured data in unstructured peer-to-peer (P2P) networks. Each peer can store data locally and define relationships between its data and the data provided by other peers. Queries posed to any of the peers are then answered by also considering the information implied by those mappings. The overall goal of PDM is to provide semantically well-founded integration and exchange of heterogeneous and distributed data sources. Unlike traditional data integration systems, peer data management systems (PDMSs) allow for full autonomy of each member and need no central coordinator. The promise of such systems is to provide flexible data integration and exchange at low setup and maintenance costs. However, building such systems raises many challenges. Besides the obvious scalability problem, choosing an appropriate semantics that can deal with arbitrary, even cyclic topologies, data inconsistencies, or updates, while at the same time allowing for tractable reasoning, has been an area of active research in the last decade. In this survey we provide an overview of the different approaches suggested in the literature to tackle these problems, focusing on appropriate semantics for query answering and data exchange rather than on implementation-specific problems.
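    A minimal sketch, not taken from the survey, of the query-answering idea this abstract describes: each peer holds local relations and declares mappings to other peers' relations, and a query at one peer recursively follows those mappings, with a visited set so that cyclic topologies terminate. All peer, relation, and class names here are illustrative assumptions.

```python
# Hypothetical model of peers, mappings, and mapping-aware query answering.
class Peer:
    def __init__(self, name):
        self.name = name
        self.relations = {}   # relation name -> set of locally stored tuples
        self.mappings = []    # (other_peer, other_relation, own_relation)

    def add_mapping(self, other_peer, other_relation, own_relation):
        self.mappings.append((other_peer, other_relation, own_relation))

    def answer(self, relation, visited=None):
        """Return all tuples for `relation`, including those implied by mappings."""
        visited = visited if visited is not None else set()
        if (self.name, relation) in visited:      # break cycles in the topology
            return set()
        visited.add((self.name, relation))
        result = set(self.relations.get(relation, set()))
        for other, other_rel, own_rel in self.mappings:
            if own_rel == relation:
                result |= other.answer(other_rel, visited)
        return result

# Usage: two peers with a cyclic mapping between their "person" relations.
a, b = Peer("A"), Peer("B")
a.relations["person"] = {("alice",)}
b.relations["person"] = {("bob",)}
a.add_mapping(b, "person", "person")
b.add_mapping(a, "person", "person")
print(a.answer("person"))   # contains both peers' tuples despite the cycle
```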

    Peer-to-peer systems for simple and flexible information sharing

    Includes abstract. Includes bibliographical references (leaves 76-80). Peer-to-peer (P2P) computing is an architecture that enables applications to access shared resources, with peers having similar capabilities and responsibilities. The ubiquity of P2P computing and its increasing adoption as a decentralized data sharing mechanism have fueled my research interests. P2P networks are useful for sharing content files containing audio, video, and data. This research aims to address the problem of simple and flexible access to data from a variety of data sources across peers with different operating systems, databases, and hardware. The proposed architecture makes use of SQL queries, web services, heterogeneous database servers, and XML data transformation for the peer-to-peer data-sharing prototype. SQL queries and web services provide a data sharing mechanism that allows both simple and flexible data access.
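    A hedged sketch, not the thesis prototype itself, of the data-sharing idea described above: a peer receives a SQL query, runs it against its local database, and returns the result set as XML so that peers on different platforms can consume it uniformly. The database file, schema, and element names are illustrative assumptions; only the Python standard library is used.

```python
import sqlite3
import xml.etree.ElementTree as ET

def query_to_xml(db_path, sql):
    """Execute a read-only SQL query and serialize the rows as an XML document."""
    conn = sqlite3.connect(db_path)
    cursor = conn.execute(sql)
    columns = [desc[0] for desc in cursor.description]
    root = ET.Element("resultset")
    for row in cursor:
        row_elem = ET.SubElement(root, "row")
        for col, value in zip(columns, row):
            ET.SubElement(row_elem, col).text = "" if value is None else str(value)
    conn.close()
    return ET.tostring(root, encoding="unicode")

# Usage: build a tiny local table and share its contents as XML.
conn = sqlite3.connect("peer.db")
conn.execute("CREATE TABLE IF NOT EXISTS song (title TEXT, artist TEXT)")
conn.execute("INSERT INTO song VALUES ('Track 1', 'Artist A')")
conn.commit()
conn.close()
print(query_to_xml("peer.db", "SELECT title, artist FROM song"))
```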

    RDF-Based Data Integration for Workflow Systems

    To meet the requirements of interoperability, the enactment of workflow systems for processes should tackle the problem of data integration for effective data sharing and exchange. This paper aims to flexibly describe the workflow entities and relationships emerging in process-centred environments through ontology engineering, supported by Resource Description Framework (RDF) based languages and tools. Our framework also considers where the ontology level should be positioned within the data integration dimension. Taking a more realistic approach towards interoperability, we present basic constructs of a workflow-specific ontology, with a suite of classes and properties selectively created. In particular, we demonstrate an example description of Event Condition Action (ECA) rules using extensions of RDF. As an inter-lingua, the proposed vocabulary and semantics can be mapped onto other process description languages as well as the simple XML-based data representation of our earlier workflow prototype.
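    A hedged sketch of the kind of RDF description mentioned above: an Event Condition Action (ECA) rule modelled with a small, hypothetical workflow vocabulary. The namespace, class, and property names are assumptions, not the paper's actual ontology; the example uses the third-party rdflib package.

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical workflow vocabulary (not the paper's ontology).
WF = Namespace("http://example.org/workflow#")

g = Graph()
g.bind("wf", WF)

rule = WF["NotifyOnOverdueTask"]
g.add((rule, RDF.type, WF.ECARule))
g.add((rule, WF.onEvent, WF.TaskDeadlinePassed))                  # Event
g.add((rule, WF.hasCondition, Literal("task.status != 'done'")))  # Condition
g.add((rule, WF.triggersAction, WF.SendReminderToAssignee))       # Action

# Serialize the rule description as Turtle for exchange with other tools.
print(g.serialize(format="turtle"))
```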

    Protocols for Integrity Constraint Checking in Federated Databases

    A federated database comprises multiple interconnected database systems that primarily operate independently but cooperate to a certain extent. Global integrity constraints can be very useful in federated databases, but the lack of global queries, global transaction mechanisms, and global concurrency control renders traditional constraint management techniques inapplicable. This paper presents a threefold contribution to integrity constraint checking in federated databases: (1) the problem of constraint checking in a federated database environment is clearly formulated; (2) a family of protocols for constraint checking is presented; (3) the differences across protocols in the family are analyzed with respect to system requirements, properties guaranteed by the protocols, and processing and communication costs. Thus, our work yields a suite of options from which a protocol can be chosen to suit the system capabilities and integrity requirements of a particular federated database environment.
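    A minimal sketch of the setting described above, not one of the paper's protocols: a global key-uniqueness constraint spans two autonomous databases, and the site performing an insert asks the other federation members to run a local check before committing, since no global query or transaction mechanism is assumed. The schema and site names are illustrative assumptions.

```python
import sqlite3

class Site:
    def __init__(self, name):
        self.name = name
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE customer (cust_id TEXT PRIMARY KEY)")

    def has_key(self, cust_id):
        """Local check another site can ask this site to run (one message exchange)."""
        row = self.conn.execute(
            "SELECT 1 FROM customer WHERE cust_id = ?", (cust_id,)).fetchone()
        return row is not None

    def insert_customer(self, cust_id, other_sites):
        # Protocol step: ask every other federation member for a local check
        # before committing; abort if any site already holds the key.
        if self.has_key(cust_id) or any(s.has_key(cust_id) for s in other_sites):
            return False  # the global uniqueness constraint would be violated
        self.conn.execute("INSERT INTO customer VALUES (?)", (cust_id,))
        self.conn.commit()
        return True

# Usage: the second insert of the same key at a different site is rejected.
s1, s2 = Site("db1"), Site("db2")
print(s1.insert_customer("C42", [s2]))  # True
print(s2.insert_customer("C42", [s1]))  # False
```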

    Research for REGI - Lessons Learnt from the Closure of the 2007-13 Programming Period

    This study analyses the closure process for programmes funded under the European Regional Development Fund and the Cohesion Fund in 2007-13. It details the regulatory provisions, guidance and support provided for closure in 2007-13 and assesses the closure experiences of programme authorities before drawing lessons and developing conclusions and recommendations for EU-level institutions and programme authorities.

    Supplemental report: Chronic disease epidemiology capacity findings and recommendations

    "In 2007, CSTE passed a position statement on state-level chronic disease epidemiology capacity. This position statement defined the minimum recommended chronic disease epidemiology workforce as a) at least one senior CDE (doctoral degree with at least 5 years' experience in chronic disease epidemiology or master's degree with at least 10 years' experience in chronic disease epidemiology); b) at least one CDE who is responsible for coordinating/integrating activities across categorical programs; and c) at least five full-time CDEs, at least one of whom has a doctoral degree. Key steps recommended to monitor state chronic disease epidemiology capacity included developing a list of capacity indicators that correspond to the capacity domains described in the 2003 chronic disease epidemiology capacity assessment and developing and conducting an online rapid assessment tool to measure these key indicators every 2 years. In 2009, in follow-up to the position statement, CSTE conducted a second assessment of chronic disease capacity as a supplement to the overall Epidemiology Capacity Assessment (ECA). The results of both the supplement and data collected on chronic disease programs from the 2009 ECA core assessment are presented in this report. In addition, where comparable information was obtained, trends in chronic disease capacity from the 2001, 2004, and 2006 core ECAs and CSTE's 2003 National Assessment of Epidemiologic Capacity in Chronic Disease are described." - p. 9This publication was supported by CDC Cooperative Agreement # 5U38HM0004145U38HM00041

    Final and Cumulative Annual Report for Alfred P. Sloan Foundation Grant G-2015-13903 “The Economics of Socially-Efficient Privacy and Confidentiality Management for Statistical Agencies”

    Final and Cumulative Annual Report, finalized May 2019. Goal: To study the economics of socially efficient protocols for managing research databases containing private information. Metrics: (1) at least four peer-reviewed articles published in journals read by economists, statisticians, and other social scientists; (2) a library of socially efficient algorithms that other researchers can readily implement; (3) a policy handbook or brief to inform key statistical agencies on managing the tradeoffs between enabling data access and maintaining privacy; (4) at least one graduate equipped with unique research and computational skills.

    Web Service Composition Processes: A Comparative Study
