
    A note on data base integrity


    iPDA: An Integrity-Protecting Private Data Aggregation Scheme for Wireless Sensor Networks

    Data aggregation is an efficient mechanism widely used in wireless sensor networks (WSNs) to collect statistics about data of interest. However, the shared-medium nature of communication makes WSNs vulnerable to eavesdropping and to packet tampering/injection by adversaries. Hence, protecting data privacy and data integrity are two major challenges for data aggregation in wireless sensor networks. In this paper, we present iPDA, an integrity-protecting private data aggregation scheme. In iPDA, data privacy is achieved through a data slicing and assembling technique, and data integrity is achieved through redundancy, by constructing disjoint aggregation paths/trees to collect data of interest. In iPDA, the data integrity-protection and data privacy-preservation mechanisms work synergistically. We evaluate the iPDA scheme in terms of the efficacy of privacy preservation, communication overhead, and data aggregation accuracy, comparing it with a typical data aggregation scheme, TAG, which provides no integrity protection or privacy preservation. Both theoretical analysis and simulation results show that iPDA achieves its design goals while maintaining the efficiency of data aggregation.
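    The slicing-and-assembling idea lends itself to a short sketch. Below is a minimal Python toy, not iPDA itself: the function names, the additive-slice construction, and the exchange pattern are assumptions for illustration. Each node splits its reading into random slices that sum back to the original value, exchanges slices with peers, and forwards only blended sums, so the SUM aggregate is preserved while no single exchanged slice reveals any node's reading.

```python
import random

def slice_value(value, num_slices):
    """Split one sensor reading into random additive slices.

    The slices sum back to the original value, so a single
    exchanged slice reveals nothing about the reading itself.
    """
    slices = [random.uniform(-value, value) for _ in range(num_slices - 1)]
    slices.append(value - sum(slices))
    return slices

def assemble(slices):
    """Blend the slices a node holds into one value to forward."""
    return sum(slices)

# Toy exchange: three nodes slice their readings; node i collects
# slice i from every node and forwards only the blended sum upward.
readings = [10.0, 20.0, 30.0]
sliced = [slice_value(r, 3) for r in readings]
blended = [assemble([s[i] for s in sliced]) for i in range(3)]

# The SUM aggregate is preserved exactly despite the slicing.
assert abs(sum(blended) - sum(readings)) < 1e-6
```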

    Structuring the process of integrity maintenance (extended version)

    Two different approaches have traditionally been considered for dealing with the process of integrity constraint enforcement: integrity checking and integrity maintenance. However, while previous research on the first approach has mainly addressed efficiency issues, research on the second has concentrated on generating all possible repairs that falsify an integrity constraint violation. In this paper we address efficiency issues during the process of integrity maintenance. In this sense, we propose a technique that improves the efficiency of existing methods by defining the order in which maintenance of integrity constraints should be performed. Moreover, we also use this technique to handle the integrity constraints in an integrated way.
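    To make the ordering idea concrete, here is a hedged Python sketch; the "may-violate" encoding and the constraint names are invented for illustration, and the paper's actual technique for deriving the order is more involved. The intuition: if a repair for one constraint can introduce a violation of another, the second should be maintained after the first, which amounts to topologically sorting a dependency graph over the constraints.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical encoding: an edge A -> B means "a repair for A can
# introduce a violation of B", so B should be maintained after A.
may_violate = {
    "emp_has_dept":   {"dept_budget_ok"},  # repairing this can change budgets
    "dept_budget_ok": {"audit_logged"},    # budget repairs must be logged
    "audit_logged":   set(),
}

# TopologicalSorter expects predecessor sets (the constraints that must
# be handled *before* each node), so invert the edges first.
preds = {c: set() for c in may_violate}
for c, affected in may_violate.items():
    for a in affected:
        preds[a].add(c)

order = list(TopologicalSorter(preds).static_order())
print(order)  # ['emp_has_dept', 'dept_budget_ok', 'audit_logged']
```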

    Estimating ToE Risk Level using CVSS

    Security management is about calculated risk and requires continuous evaluation to ensure cost, time, and resource effectiveness. Part of this is making future-oriented, cost-benefit investments in security. Security investments must adhere to sound business principles, where both security and financial aspects play an important role. Information on the current and potential risk level is essential to successfully trade off security and financial aspects. Risk level is the combination of the frequency and impact of a potential unwanted event, often referred to as a security threat or misuse. The paper presents a risk level estimation model that derives risk level as a conditional probability over frequency and impact estimates. The frequency and impact estimates are derived from a set of attributes specified in the Common Vulnerability Scoring System (CVSS). The model works at the level of individual vulnerabilities (just as CVSS does) and is able to compose vulnerabilities into service levels. The service levels define the potential risk levels and are modelled as a Markov process, which is then used to predict the risk level at a particular time.
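    A minimal Python sketch of the two ingredients, under stated assumptions: the frequency/impact mappings below are loosely inspired by the CVSS v2 base equations (the numeric metric values used are CVSS v2 constants), combining them by simple multiplication is a simplification of the paper's conditional-probability model, and the transition matrix is invented. The Markov part shows how a service-level distribution is propagated t steps ahead to predict the risk level at a particular time.

```python
import numpy as np

def frequency(access_vector, access_complexity, authentication):
    """Estimate misuse frequency from exploitability-style metrics."""
    return access_vector * access_complexity * authentication

def impact(conf, integ, avail):
    """Combine C/I/A impacts (same form as the CVSS v2 impact term)."""
    return 1 - (1 - conf) * (1 - integ) * (1 - avail)

# One vulnerability: AV:N (1.0), AC:M (0.61), Au:N (0.704),
# partial confidentiality and integrity impact (0.275 each).
f = frequency(1.0, 0.61, 0.704)
i = impact(0.275, 0.275, 0.0)
risk = f * i  # simplified stand-in for the conditional-probability model

# Service levels as a Markov chain: P[j][k] is the probability of
# moving from service level j to k in one step (matrix is invented).
P = np.array([[0.90, 0.08, 0.02],   # normal -> {normal, degraded, failed}
              [0.30, 0.60, 0.10],
              [0.05, 0.25, 0.70]])
p0 = np.array([1.0, 0.0, 0.0])      # start in the normal service level
p_t = p0 @ np.linalg.matrix_power(P, 10)  # distribution after 10 steps
print(risk, p_t)
```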

    CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code

    We present an instrumenting compiler for enforcing data confidentiality in low-level applications (e.g. those written in C) in the presence of an active adversary. In our approach, the programmer marks secret data by writing lightweight annotations on top-level definitions in the source code. The compiler then uses a static flow analysis coupled with efficient runtime instrumentation, a custom memory layout, and custom control-flow integrity checks to prevent data leaks even in the presence of low-level attacks. We have implemented our scheme as part of the LLVM compiler. We evaluate it on the SPEC micro-benchmarks for performance, and on larger, real-world applications (including OpenLDAP, which is around 300 KLoC) for the programmer overhead required to restructure the application when protecting sensitive data such as passwords. We find that the performance overheads introduced by our instrumentation are moderate (12% on average on SPEC), and the programmer effort to port OpenLDAP is only about 160 LoC.
    Comment: Technical report for CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code, appearing at EuroSys 2019
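    As a rough illustration of the static flow-analysis component, here is a toy Python taint propagator. The instruction forms, variable names, and the `send` sink are invented for this sketch; CONFLLVM itself operates on LLVM IR with a custom memory layout and CFI checks, not on a list like this. Labels attached to annotated secret sources flow through assignments, and any flow into a public sink is rejected.

```python
# Straight-line three-address code: ("assign", dst, src) copies a value,
# ("sink", name, src) sends a value to a public channel.
program = [
    ("assign", "pwd", "input_secret"),  # pwd comes from annotated secret data
    ("assign", "tmp", "pwd"),           # the secret label propagates via copies
    ("assign", "n",   "packet_len"),    # public data stays public
    ("sink",   "send", "tmp"),          # leaking tmp must be rejected
    ("sink",   "send", "n"),            # sending public data is fine
]

secret = {"input_secret"}               # sources marked by annotations
for op, dst, src in program:
    if op == "assign":
        if src in secret:
            secret.add(dst)             # label flows with the assignment
    elif op == "sink":
        if src in secret:
            print(f"rejected: secret value '{src}' reaches public sink '{dst}'")
        else:
            print(f"ok: '{src}' may flow to '{dst}'")
```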