
    A Review of integrity constraint maintenance and view updating techniques

    Two interrelated problems may arise when updating a database. On one hand, when an update is applied to the database, integrity constraints may become violated. In such a case, the integrity constraint maintenance approach tries to obtain additional updates to keep the integrity constraints satisfied. On the other hand, when updates of derived or view facts are requested, a view updating mechanism must be applied to translate the update request into correct updates of the underlying base facts. This survey reviews the research performed on integrity constraint maintenance and view updating. We propose a general framework to classify and compare methods that tackle integrity constraint maintenance and/or view updating. We then analyze some of these methods in more detail to identify their actual contributions and the main limitations they may present.
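    The two problems can be illustrated with a small, purely hypothetical example (all facts and names below are invented for illustration, not taken from the survey): a repair routine that issues compensating updates when a constraint is violated, and a view-update routine that translates a request on a derived fact into updates of the underlying base facts.

        # Minimal sketch, in a Datalog-like spirit: base facts, one view,
        # and one integrity constraint over the base facts.

        works_in = {("ann", "sales"), ("bob", "it")}   # base facts
        departments = {"sales", "it"}                  # base facts

        def staffed(dept):
            """View: a department is staffed if some employee works in it."""
            return any(d == dept for _, d in works_in)

        # Integrity constraint: every referenced department must exist.
        def violations():
            return [(e, d) for e, d in works_in if d not in departments]

        # Integrity maintenance: repair a violation with an additional
        # update (insert the missing department) rather than reject it.
        def repair():
            for _, d in violations():
                departments.add(d)

        # View updating: a request to delete the derived fact staffed(dept)
        # is translated into deletions of the underlying base facts.
        def unstaff(dept):
            for fact in {(e, d) for e, d in works_in if d == dept}:
                works_in.discard(fact)

        works_in.add(("carl", "hr"))   # violates the constraint ...
        repair()                       # ... maintenance adds "hr"
        unstaff("it")                  # view update removes ("bob", "it")
        assert not violations() and not staffed("it")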

    Structuring the process of integrity maintenance (extended version)

    Two different approaches have traditionally been considered for dealing with the process of integrity constraint enforcement: integrity checking and integrity maintenance. However, while previous research on the first approach has mainly addressed efficiency issues, research on the second has concentrated mainly on generating all possible repairs that falsify an integrity constraint violation. In this paper we address efficiency issues during the process of integrity maintenance. In this sense, we propose a technique which improves the efficiency of existing methods by defining the order in which maintenance of integrity constraints should be performed. Moreover, we also use this technique to handle the integrity constraints in an integrated way.
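    A rough illustration of the ordering idea (the constraint names and dependencies below are invented, not the paper's): if repairing one constraint can violate another, a topological sort of that "may-violate" graph yields an order in which each constraint needs to be maintained at most once.

        # Sketch: order integrity constraints so that repairs for an
        # earlier constraint cannot re-violate an already-handled one.
        from graphlib import TopologicalSorter

        # "repairing X may violate Y": edge X -> Y (assumed dependencies).
        may_violate = {"ic1": {"ic2"}, "ic2": {"ic3"}, "ic3": set()}

        # TopologicalSorter expects predecessor sets, so invert the edges.
        preds = {ic: set() for ic in may_violate}
        for ic, targets in may_violate.items():
            for t in targets:
                preds[t].add(ic)

        order = list(TopologicalSorter(preds).static_order())
        print(order)  # ['ic1', 'ic2', 'ic3']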

    Intelligent Self-Repairable Web Wrappers

    The amount of information available on the Web grows at an incredibly high rate. Systems and procedures devised to extract these data from Web sources already exist, and different approaches and techniques have been investigated in recent years. On the one hand, reliable solutions should provide robust Web data mining algorithms that can automatically cope with possible malfunctions or failures. On the other hand, the literature lacks solutions for the maintenance of these systems. Procedures that extract Web data may be strictly interconnected with the structure of the data source itself; thus, malfunctions or the acquisition of corrupted data could be caused, for example, by structural modifications of the data sources introduced by their owners. Nowadays, verification of data integrity and maintenance are mostly managed manually, in order to ensure that these systems work correctly and reliably. In this paper we propose a novel approach to creating procedures able to extract data from Web sources -- the so-called Web wrappers -- which can cope with malfunctions caused by modifications of the structure of the data source, and can automatically repair themselves.
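    A minimal sketch of the self-repair idea under assumptions of our own (a stored CSS selector plus a value "fingerprint" regex; the class names and pattern are invented, and this is not the authors' algorithm): when the selector stops matching, the wrapper relocates the field by its fingerprint and adopts the new location as the repaired selector.

        import re
        from bs4 import BeautifulSoup  # pip install beautifulsoup4

        class SelfRepairingWrapper:
            def __init__(self, selector, value_pattern):
                self.selector = selector                  # e.g. "span.price"
                self.value_pattern = re.compile(value_pattern)

            def extract(self, html):
                soup = BeautifulSoup(html, "html.parser")
                node = soup.select_one(self.selector)
                if node and self.value_pattern.search(node.get_text()):
                    return node.get_text(strip=True)
                # Repair: scan leaf elements for one whose text still
                # matches the fingerprint, then adopt its tag/class as
                # the new selector.
                for el in soup.find_all(True):
                    text = el.get_text(strip=True)
                    if el.find(True) is None and self.value_pattern.fullmatch(text):
                        cls = ".".join(el.get("class", []))
                        self.selector = el.name + (f".{cls}" if cls else "")
                        return text
                return None

        w = SelfRepairingWrapper("span.price", r"\$\d+\.\d{2}")
        print(w.extract('<div><b class="cost">$19.99</b></div>'))  # "$19.99"
        print(w.selector)  # repaired to "b.cost"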

    Enhance maintenance problem recognition techniques and its application to palm oil mills

    This paper discusses the application of enhanced maintenance problem recognition techniques. The main contribution of this study is the proposed combination of techniques, namely the snapshot model; failure mode, effects and criticality analysis (FMECA); Pareto analysis; and decision analysis, supported by information technology (IT). The snapshot model is part of the maintenance modelling technique, while FMECA, Pareto analysis, and decision analysis are maintenance reliability techniques. Each of the techniques and the proposed combined technique is explained. The case study used for this enhanced technique was a palm oil mill maintenance problem. The results and possible further enhancements are also discussed.
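    To make the combination concrete, here is a small illustrative computation (the failure modes and scores are invented, not the case-study data): FMECA-style risk priority numbers followed by a Pareto cut that isolates the "vital few" failure modes.

        # RPN = severity x occurrence x detectability (1-10 scales),
        # then keep the top modes covering ~80% of total risk.
        failure_modes = {
            "press bearing wear":  (8, 6, 5),
            "boiler tube scaling": (7, 4, 6),
            "conveyor belt slip":  (4, 7, 3),
            "pump seal leak":      (5, 3, 4),
        }

        rpn = {m: s * o * d for m, (s, o, d) in failure_modes.items()}
        ranked = sorted(rpn.items(), key=lambda kv: kv[1], reverse=True)

        total, running, vital_few = sum(rpn.values()), 0, []
        for mode, score in ranked:
            running += score
            vital_few.append(mode)
            if running / total >= 0.8:
                break

        print(vital_few)  # the "vital few" modes to prioritise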

    Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments

    Data centres that use consumer-grade disk drives and distributed peer-to-peer systems are unreliable environments to archive data without enough redundancy. Most redundancy schemes are not completely effective for providing high availability, durability and integrity in the long term. We propose alpha entanglement codes, a mechanism that creates a virtual layer of highly interconnected storage devices to propagate redundant information across a large-scale storage system. Our motivation is to design flexible and practical erasure codes with high fault tolerance to improve data durability and availability even in catastrophic scenarios. By flexible and practical, we mean code settings that can be adapted to future requirements and practical implementations with reasonable trade-offs between security, resource usage and performance. The codes have three parameters. Alpha increases storage overhead linearly but increases the possible paths to recover data exponentially. Two other parameters increase fault tolerance even further without the need for additional storage. As a result, an entangled storage system can provide high availability, durability and offer additional integrity: it is more difficult to modify data undetectably. We evaluate how several redundancy schemes perform in unreliable environments and show that alpha entanglement codes are flexible and practical codes. Remarkably, they excel at code locality; hence, they reduce repair costs and become less dependent on storage locations with poor availability. Our solution outperforms Reed-Solomon codes in many disaster recovery scenarios.
    Comment: The publication has 12 pages and 13 figures. This work was partially supported by Swiss National Science Foundation SNSF Doc.Mobility 162014. Published in the 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).
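    As a toy analogue of the entanglement idea (this is the simple single-strand XOR chain often used to motivate such codes, not the authors' three-parameter construction): each data block is XORed into a running parity, so every block becomes connected to all earlier ones and a lost block can be rebuilt from the two parities on either side of it.

        # Simple entanglement chain: p_i = d_i XOR p_(i-1).
        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        data = [b"blk1", b"blk2", b"blk3"]
        parities = [bytes(4)]                        # p0: all-zero seed
        for d in data:
            parities.append(xor(d, parities[-1]))    # entangle with chain

        # Recover a lost data block from its neighbouring parities:
        # d_i = p_i XOR p_(i-1).
        lost = 1                                     # pretend data[1] is gone
        rebuilt = xor(parities[lost], parities[lost + 1])
        assert rebuilt == data[lost]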