    A model for ranking and selecting integrity tests in a distributed database

    Checking the consistency of a database state generally involves executing integrity tests on the database, which verify whether the database satisfies its constraints. This paper presents the various types of integrity tests reported in previous work and discusses how these tests can significantly improve the performance of constraint checking mechanisms when checking is not limited to a single type of test. Given these alternatives, selecting the most suitable test is an issue that needs to be tackled. The authors therefore propose a model to rank and select the most suitable test to evaluate from among several alternative tests. The model uses the amount of data transferred across the network, the number of sites involved, and the amount of data accessed as the parameters for deciding on the suitable test. Several analyses have been performed to evaluate the proposed model, and the results show that it achieves a higher percentage of local processing than previously proposed selection strategies.
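
    The ranking idea in this abstract can be made concrete with a small cost model. The sketch below is a hypothetical Python illustration: the IntegrityTest fields, the weights, and rank_tests are assumptions for exposition, not the authors' actual model, which the abstract does not specify in detail.

        # Hypothetical sketch: rank candidate integrity tests by a weighted
        # sum of the three parameters named in the abstract. Field names and
        # weights are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class IntegrityTest:
            name: str
            data_transferred: int  # bytes shipped across the network
            sites_involved: int    # number of participating sites
            data_accessed: int     # tuples read while evaluating the test

        def rank_tests(candidates, w_net=0.5, w_sites=0.3, w_io=0.2):
            """Return the candidate tests ordered from cheapest to most
            expensive under a simple weighted-sum cost."""
            def cost(t):
                return (w_net * t.data_transferred
                        + w_sites * t.sites_involved
                        + w_io * t.data_accessed)
            return sorted(candidates, key=cost)

        # A purely local sufficient test outranks a multi-site complete test.
        tests = [
            IntegrityTest("complete", 4096, 3, 1200),
            IntegrityTest("sufficient", 0, 1, 150),
        ]
        print(rank_tests(tests)[0].name)  # -> sufficient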

    Improving Integrity Constraints Checking In Distributed Databases by Exploiting Local Checking

    Integrity constraints are important and useful tools for specifying consistent states of a database. Checking integrity constraints has proven extremely difficult to implement, particularly in distributed databases. The main issue in checking integrity constraints in a distributed database system is how to derive a set of integrity tests (simplified forms) that will reduce the amount of data transferred, the amount of data accessed, and the number of sites involved during the constraint checking process. Most previous approaches derive integrity tests (simplified forms) from the initial integrity constraints with the sufficiency property, since a sufficient test is known to be cheaper to execute than a complete test: it involves less data transfer across the network and can always be evaluated at the target site, i.e., only one site is involved in the checking process, thus achieving local checking. These approaches assume that an update operation will be executed at the site where the relation specified in the update operation is located (the target site), which is not always true. If the update operation is submitted at a different site, the sufficient test is no longer local, as it will necessarily access data from remote sites. An approach is therefore needed so that local checking can be performed regardless of where the update operation is submitted.

    In this thesis we propose an approach for checking integrity constraints in a distributed database system that utilizes as much as possible the information stored at the target site. The proposed constraint simplification approach produces support tests, and these are integrated with the complete and sufficient tests proposed by previous researchers. The approach uses the initial integrity constraint, the update template, and the other integrity constraints to generate the support tests, adopting the substitution technique and the absorption rules to derive them. Since the constraint simplification approach derives several different types of integrity tests for a given update operation and integrity constraint, a strategy for selecting the most suitable test is needed. We propose a model to rank and select the suitable test to be checked based on the properties of the tests, the amount of data transferred across the network, the number of sites involved, and the amount of data accessed.

    Three analyses have been performed to evaluate the proposed constraint checking approach. The first shows that different types of integrity tests have different impacts on the performance of constraint checking with respect to the amount of data transferred across the network, which is considered the most critical factor influencing the performance of the checking mechanism; integrating these various types of integrity tests during constraint checking enhances the performance of the constraint mechanisms. The second shows that the cost of checking integrity constraints is reduced when various combinations of integrity tests are selected. The third shows that in most cases integrity checking can be localized regardless of where the update operation is executed, provided various types of integrity tests are considered.
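
    The locality argument above lends itself to a short sketch. The rule encoded below paraphrases the abstract: a sufficient test is local only when the update is submitted at the target site, whereas a support test is built to keep checking local wherever the update originates. The function names and the fallback behaviour are illustrative assumptions, not the thesis' actual selection algorithm.

        # Hypothetical sketch of the locality rule described in the abstract.
        def is_local(test_type, update_site, target_site):
            """Decide whether a test of the given type can be evaluated at
            the site where the update was submitted."""
            if test_type == "sufficient":
                # Local only when the update arrives at the target site.
                return update_site == target_site
            if test_type == "support":
                # Designed to stay local regardless of the submitting site.
                return True
            return False  # a complete test may need data from remote sites

        def pick_test(candidates, update_site, target_site):
            """Prefer the first test that keeps checking local; in practice
            a cost model (as in the previous sketch) would break ties."""
            for t in candidates:
                if is_local(t, update_site, target_site):
                    return t
            return candidates[0]

        # Update submitted away from the target site -> the support test wins.
        print(pick_test(["sufficient", "support", "complete"], "B", "A"))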

    Development and selection of operational management strategies to achieve policy objectives

    Since the reform of the EU Common Fisheries Policy in 2002, effort has been devoted to addressing the governance, scientific, social and economic issues required to introduce an ecosystem approach to fisheries management (EAFM) in Europe. Fisheries management needs to support the three pillars of sustainability (ecological, social and economic), and Fisheries Ecosystem Plans (FEPs) have been developed as a tool to help managers consider the ecological, social and economic implications of their decisions. Building upon previous studies (e.g. the FP5-funded European Fisheries Ecosystem Plan project), the core concept of the Making the European Fisheries Ecosystem Plan Operational (MEFEPO) project is to deliver operational frameworks (FEPs) for three regional seas. The project focuses on how best to make current institutional frameworks responsive to an EAFM at regional and pan-European levels, in accordance with the principles of good governance. The regional seas selected for the project are the North Sea (NS), North Western Waters (NWW) and South Western Waters (SWW) RAC regions. The aim of this work package (WP5) was to develop operational objectives to achieve the ecological objectives identified for the three regional seas in WP2. This report describes the development and implementation of a transparent and formal process that should lead to identification of the “best” operational management strategies for an EAFM, based on sound scientific information and stakeholder involvement (e.g. regional industry groups, citizen groups, managers and other interest groups).

    Detecting malicious data injections in event detection wireless sensor networks

    Human evaluation of Kea, an automatic keyphrasing system.

    This paper describes an evaluation of the Kea automatic keyphrase extraction algorithm. Tools that automatically identify keyphrases are desirable because document keyphrases have numerous applications in digital library systems but are costly and time-consuming to assign manually. Keyphrase extraction algorithms are usually evaluated by comparison to author-specified keywords, but this methodology has several well-known shortcomings. The results presented in this paper are based on subjective evaluations of the quality and appropriateness of keyphrases by human assessors, and make a number of contributions. First, they validate previous evaluations of Kea that rely on author keywords. Second, they show that Kea's performance is comparable to that of similar systems that have been evaluated by human assessors. Finally, they justify the use of author keyphrases as a performance metric by showing that authors generally choose good keywords.
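
    The conventional baseline this abstract mentions, comparison against author-specified keyphrases, reduces to a few lines of scoring code. The sketch below uses exact-match precision at k after simple normalisation; this metric choice and the sample data are illustrative assumptions, not Kea's published evaluation procedure.

        # Hypothetical sketch: score extracted keyphrases against the
        # author-assigned ones by exact match after simple normalisation.
        def precision_at_k(extracted, author_keyphrases, k=5):
            """Fraction of the top-k extracted keyphrases that match an
            author keyphrase (case-insensitive exact match)."""
            gold = {p.strip().lower() for p in author_keyphrases}
            top_k = [p.strip().lower() for p in extracted[:k]]
            return sum(1 for p in top_k if p in gold) / k

        extracted = ["keyphrase extraction", "digital libraries",
                     "machine learning", "indexing", "metadata"]
        author = ["keyphrase extraction", "digital libraries", "evaluation"]
        print(precision_at_k(extracted, author))  # -> 0.4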

    BlogForever D5.2: Implementation of Case Studies

    This document presents the internal and external testing results for the BlogForever case studies. The evaluation of the BlogForever implementation process is tabulated under the most relevant themes and aspects identified during testing. The case studies provide relevant feedback on the sustainability of the platform in terms of potential users’ needs, together with information on its possible long-term impact.