
    Design and Development of Key Representation Auditing Scheme for Secure Online and Dynamic Statistical Databases

    A statistical database (SDB) answers statistical queries (such as sum, average, and count) over subsets of records. By combining the answers to several such queries, a malicious user (snooper) may be able to deduce confidential information about individuals. When a user submits a query to a statistical database, the difficult problem is deciding whether the query is answerable; making that decision requires taking past queries into account, a process known as SDB auditing. A major drawback of auditing, however, is the excessive CPU time and storage required to find and retrieve the relevant records from the SDB. The key representation auditing scheme (KRAS) is proposed to guarantee the security of online and dynamic SDBs. The core idea is to convert the original database into a key representation database (KRDB); the scheme also converts each new user query from a string representation into a key representation query (KRQ) and stores it in the Audit Query table (AQ table). Three audit stages are proposed to repel the snooper's attacks on the confidentiality of individuals, together with efficient algorithms for each stage: the First Stage Algorithm (FSA), the Second Stage Algorithm (SSA) and the Third Stage Algorithm (TSA). These algorithms enable the key representation auditor (KRA) to conveniently identify the illegal queries that could lead to disclosure of the SDB. A comparative study between the new scheme and existing methods, comprising a cost estimation and a statistical analysis, illustrates the savings in block accesses (CPU time) and storage space attainable when a KRDB is used. Finally, an implementation of the new scheme is presented and all components of the proposed system are discussed.
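
    To make the auditing idea concrete, the sketch below shows a generic query-set auditor in Python: it records the key sets of answered SUM queries and refuses a new query whose answer, combined with a previously answered one, would isolate a single record. This is an illustration of the general auditing problem only; the class, names and the pairwise rule are invented here and are not the paper's KRAS, FSA, SSA or TSA algorithms.

```python
# Generic sketch of statistical-database query auditing (illustrative only;
# not the paper's KRAS/FSA/SSA/TSA algorithms).

class QuerySetAuditor:
    """Keeps an audit table of the record-key sets of answered SUM queries and
    rejects a new query that targets a single record or differs from a
    previously answered query by exactly one record (a pairwise check only)."""

    def __init__(self):
        self.answered = []                 # list of frozensets of record keys

    def is_safe(self, keys):
        q = frozenset(keys)
        if len(q) <= 1:                    # a single-record query discloses it directly
            return False
        for past in self.answered:
            if len(q ^ past) == 1:         # subtracting the two sums isolates one record
                return False
        return True

    def audit(self, keys):
        if self.is_safe(keys):
            self.answered.append(frozenset(keys))
            return "answer"
        return "refuse"


auditor = QuerySetAuditor()
print(auditor.audit({1, 2, 3}))   # answer
print(auditor.audit({1, 2}))      # refuse: the difference would reveal record 3
```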

    A model of security monitoring

    A model of security monitoring is presented that distinguishes between two types of logging and auditing. Implications for the design and use of security monitoring mechanisms are drawn from this model. The usefulness of the model is then demonstrated by analyzing several different monitoring mechanisms.

    Provenance-based trust for grid computing: Position Paper

    Current evolutions of Internet technology such as Web Services, ebXML, peer-to-peer and Grid computing all point to the development of large-scale open networks of diverse computing systems interacting with one another to perform tasks. Grid systems (and Web Services) are exemplary in this respect and are perhaps some of the first large-scale open computing systems to see widespread use, making them an important testing ground for the trust-management problems that are likely to arise. From this perspective, today's grid architectures suffer from limitations, such as the lack of a mechanism to trace results and the lack of infrastructure for building up trust networks. These are important concerns in open grids, in which "community resources" are owned and managed by multiple stakeholders and are dynamically organised in virtual organisations. Provenance enables users to trace how a particular result was arrived at by identifying the individual services, and the aggregation of services, that produced that output. Against this background, we present a research agenda to design, conceive and implement an industrial-strength open provenance architecture for grid systems. We motivate its use with three complex grid applications, namely aerospace engineering, organ transplant management and bioinformatics. Industrial-strength provenance support includes a scalable and secure architecture, an open proposal for standardising the protocols and data structures, a set of tools for configuring and using the provenance architecture, an open source reference implementation, and a deployment and validation in an industrial context. The provision of such facilities will enrich grid capabilities with new functionality required for solving complex problems, such as provenance data that provides complete audit trails of process execution and supports third-party analysis and auditing. As a result, we anticipate that a larger uptake of grid technology is likely to occur, since unprecedented possibilities will be offered to users, giving them a competitive edge.
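
    As a rough illustration of the kind of traceability the paper argues for, the sketch below records which service produced which output from which inputs and walks the chain backwards from a result. The data structures and function names are hypothetical and are not part of the proposed open provenance architecture.

```python
# Hypothetical provenance trail for a service-based workflow; the structures
# below are illustrative, not the open provenance architecture itself.

from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    service: str        # service that produced the output
    inputs: list        # identifiers of the inputs it consumed
    output: str         # identifier of the result it produced

@dataclass
class ProvenanceStore:
    records: list = field(default_factory=list)

    def record(self, service, inputs, output):
        self.records.append(ProvenanceRecord(service, inputs, output))

    def trace(self, result_id):
        """Walk backwards from a result to every contributing service and input."""
        trail, frontier = [], [result_id]
        while frontier:
            current = frontier.pop()
            for r in self.records:
                if r.output == current:
                    trail.append(r)
                    frontier.extend(r.inputs)
        return trail


store = ProvenanceStore()
store.record("mesh-generator", ["geometry-1"], "mesh-1")
store.record("cfd-solver", ["mesh-1"], "pressure-field-1")
for rec in store.trace("pressure-field-1"):
    print(rec.service, rec.inputs, "->", rec.output)
```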

    Supporting the clinical trial recruitment process through the grid

    Patient recruitment for clinical trials and studies is a large-scale task. To test a given drug, for example, it is desirable to draw on as large a pool of suitable candidates as possible to support reliable assessment of the often moderate effects of the drug. To make such a recruitment campaign successful, it is necessary to efficiently target the petitioning of these potential subjects. Because of the necessarily large numbers involved in such campaigns, this is a problem that naturally lends itself to the paradigm of Grid technology. However, the accumulation and linkage of data sets across clinical domain boundaries poses challenges, due to the sensitivity of the data involved, that are atypical of other Grid domains. These include handling the privacy and integrity of data and, importantly, the process by which data can be collected and used, ensuring for example that patient involvement and consent are dealt with appropriately throughout the clinical trials process. This paper describes a Grid infrastructure developed as part of the MRC-funded VOTES project (Virtual Organisations for Trials and Epidemiological Studies) at the National e-Science Centre in Glasgow that supports these processes and the security requirements specific to this domain.

    The Informatics Audit

    The demand for qualitative and reliable information to support decision-making is continuously increasing. At the same time, the cost of software production and maintenance is rising dramatically as a consequence of the increasing complexity of software systems and the need for better-designed, user-friendly programs. The huge amount of data that organizations face requires human, financial, and material resources to collect, check, analyze and use. All these aspects call for activities that achieve better outcomes with fewer resources; the informatics audit is one such activity. This paper presents some basic concepts of the informatics audit.
    Keywords: IT audit, software cost, maintenance, system complexity

    Context-Awareness Enhances 5G Multi-Access Edge Computing Reliability

    The fifth generation (5G) mobile telecommunication network is expected to support Multi-Access Edge Computing (MEC), which intends to distribute computation tasks and services from the central cloud to the edge clouds. Towards ultra-responsive, ultra-reliable and ultra-low-latency MEC services, the current mobile network security architecture should enable a more decentralized approach for authentication and authorization processes. This paper proposes a novel decentralized authentication architecture that supports flexible and low-cost local authentication with awareness of the context information of network elements such as user equipment and virtual network functions. Based on a Markov model for backhaul link quality, as well as a random walk mobility model with mixed mobility classes and traffic scenarios, numerical simulations demonstrate that the proposed approach is able to achieve a flexible balance between the network operating cost and the MEC reliability. Comment: Accepted by IEEE Access on Feb. 02, 201
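
    For intuition, the snippet below simulates a simple two-state (good/bad) Markov chain for backhaul link quality, in the spirit of the simulation setup mentioned above. The transition probabilities are illustrative placeholders, not values taken from the paper.

```python
# Two-state Markov model for backhaul link quality (good/bad); the transition
# probabilities are illustrative placeholders, not the paper's parameters.

import random

P = {                # P[state] -> probability that the next state is "good"
    "good": 0.95,
    "bad": 0.40,
}

def simulate(steps, state="good", seed=0):
    rng = random.Random(seed)
    history = []
    for _ in range(steps):
        history.append(state)
        state = "good" if rng.random() < P[state] else "bad"
    return history

trace = simulate(10_000)
print("fraction of time the link is good:", trace.count("good") / len(trace))
```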

    Understanding Database Reconstruction Attacks on Public Data

    In 2020 the U.S. Census Bureau will conduct the constitutionally mandated decennial Census of Population and Housing. Because a census involves collecting large amounts of private data under the promise of confidentiality, statistics have traditionally been published only at high levels of aggregation. Published statistical tables are nonetheless vulnerable to database reconstruction attacks (DRAs), in which the underlying microdata is recovered merely by finding a set of records consistent with the published statistical tabulations. A DRA can be performed by using the tables to create a set of mathematical constraints and then solving the resulting set of simultaneous equations. This article shows how such an attack can be addressed by adding noise to the published tabulations, so that the reconstruction no longer yields the original data.
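
    A toy example of the constraint-solving idea: the snippet below brute-forces every combination of ages in a tiny block that is consistent with a published count, mean and median. The block size, age range and statistics are invented for illustration and are not Census data.

```python
# Toy database reconstruction: brute-force every set of ages consistent with
# published summary statistics for a tiny block. All numbers are invented.

from itertools import combinations_with_replacement

PUBLISHED_COUNT = 3        # residents in the block
PUBLISHED_MEAN = 30        # published mean age
PUBLISHED_MEDIAN = 24      # published median age

candidates = [
    ages
    for ages in combinations_with_replacement(range(18, 90), PUBLISHED_COUNT)
    if sum(ages) == PUBLISHED_MEAN * PUBLISHED_COUNT
    and sorted(ages)[PUBLISHED_COUNT // 2] == PUBLISHED_MEDIAN
]
print(len(candidates), "age combinations satisfy the published statistics")
# A small candidate set means the "aggregate" table effectively leaks individual
# ages; adding noise to the published mean and median enlarges this set.
```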

    Shared and Searchable Encrypted Data for Untrusted Servers

    Current security mechanisms pose a risk for organisations that outsource their data management to untrusted servers. Encrypting and decrypting sensitive data at the client side is the normal approach in this situation, but it has high communication and computation overheads if only a subset of the data is required, for example, when selecting records in a database table based on a keyword search. New cryptographic schemes have been proposed that support encrypted queries over encrypted data, but all depend on a single set of secret keys, which implies single-user access or sharing keys among multiple users, with key revocation requiring costly data re-encryption. In this paper, we propose an encryption scheme in which each authorised user in the system has his own keys to encrypt and decrypt data. The scheme supports keyword search, which enables the server to return only the encrypted data that satisfies an encrypted query, without decrypting it. We provide two constructions of the scheme, giving formal proofs of their security. We also report on the results of a prototype implementation. This research was supported by the UK’s EPSRC research grant EP/C537181/1. The authors would like to thank the members of the Policy Research Group at Imperial College for their support.
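
    To illustrate the core mechanism of keyword search over encrypted data, the sketch below stores keyed keyword tags alongside ciphertexts and lets the server match an HMAC search token without decrypting anything. It is deliberately simplified to a single shared key, whereas the paper's contribution is precisely to give each authorised user their own keys; all names and the placeholder ciphertexts are illustrative.

```python
# Simplified single-key sketch of searchable encryption; not the paper's
# multi-user constructions. Ciphertexts are placeholder byte strings.

import hmac, hashlib, secrets

TAG_KEY = secrets.token_bytes(32)      # keyword-tag key held by clients only

def keyword_tag(word: str) -> bytes:
    """Deterministic keyed tag for a keyword; the server only ever sees tags."""
    return hmac.new(TAG_KEY, word.lower().encode(), hashlib.sha256).digest()

server_index = []                      # (record id, ciphertext, keyword tags)

def store(record_id: str, ciphertext: bytes, keywords: list[str]) -> None:
    """Client side: upload a ciphertext together with tags for its keywords."""
    server_index.append((record_id, ciphertext, {keyword_tag(w) for w in keywords}))

def search(token: bytes) -> list[str]:
    """Server side: match the search token against stored tags, never plaintext."""
    return [rid for rid, _ct, tags in server_index if token in tags]

store("rec1", b"<ciphertext-1>", ["audit", "grid"])
store("rec2", b"<ciphertext-2>", ["census"])
print(search(keyword_tag("audit")))    # -> ['rec1']
```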