
    Community Trust Stores for Peer-to-Peer e-Commerce Applications


    State of Alaska Election Security Project Phase 2 Report

    Alaska’s election system is among the most secure in the country, and it has a number of safeguards other states are now adopting. But the technology Alaska uses to record and count votes could be improved, and the state’s huge size, limited road system, and scattered communities also create special challenges for ensuring the integrity of the vote. In this second phase of an ongoing study of Alaska’s election security, we recommend ways of strengthening the system: not only the technology but also the election procedures. The lieutenant governor and the Division of Elections asked the University of Alaska Anchorage to do this evaluation, which began in September 2007. Lieutenant Governor Sean Parnell. State of Alaska Division of Elections. List of Appendices / Glossary / Study Team / Acknowledgments / Introduction / Summary of Recommendations / Part 1 Defense in Depth / Part 2 Fortification of Systems / Part 3 Confidence in Outcomes / Conclusions / Proposed Statement of Work for Phase 3: Implementation / Reference

    Ensuring compliance with data privacy and usage policies in online services

    Online services collect and process a variety of sensitive personal data that is subject to complex privacy and usage policies. Complying with these policies is critical, and often legally binding for service providers, but it is challenging because applications are prone to many disclosure threats. We present two compliance systems, Qapla and Pacer, that ensure efficient policy compliance in the face of direct and side-channel disclosures, respectively. Qapla prevents direct disclosures in database-backed applications (e.g., personnel management systems), which are subject to complex access control, data linking, and aggregation policies. Conventional methods inline policy checks with application code. Qapla instead specifies policies directly on the database and enforces them in a database adapter, thus separating compliance from the application code. Pacer prevents network side-channel leaks in cloud applications. A tenant’s secrets may leak via its network traffic shape, which can be observed at shared network links (e.g., network cards, switches). Pacer implements a cloaked tunnel abstraction, which hides secret-dependent variation in a tenant’s traffic shape but allows variation based on non-secret information, enabling secure and efficient use of network resources in the cloud. Both systems require modest development effort and incur moderate performance overheads, demonstrating their usability. (The abstract is followed by a German translation of the same text, omitted here.)
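The abstract's key design point for Qapla is that policies are attached to the database and enforced in a database adapter, rather than being inlined as checks scattered through application code. A minimal sketch of that separation, assuming a hypothetical policy table keyed by role and illustrative placeholder names (this is not Qapla's actual API):

```python
import sqlite3

# Hypothetical row-level policies: for each table, a SQL predicate per role.
# The adapter appends the predicate, so application queries stay policy-free.
POLICIES = {
    "employees": {
        "hr":       "1=1",               # HR may read every row
        "manager":  "dept = :user_dept", # managers see only their department
        "employee": "id = :user_id",     # employees see only their own row
    }
}

def policy_query(table, columns, role, params):
    """Adapter-side rewrite: build a SELECT with the role's policy
    predicate enforced, keeping compliance out of application code."""
    predicate = POLICIES[table][role]
    sql = f"SELECT {', '.join(columns)} FROM {table} WHERE {predicate}"
    return sql, params

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?,?,?,?)",
                 [(1, "Ada", "eng", 100), (2, "Bob", "eng", 90), (3, "Cyd", "ops", 80)])

sql, params = policy_query("employees", ["name"], "employee", {"user_id": 2})
rows = [r[0] for r in conn.execute(sql, params)]          # Bob's own row only

msql, mparams = policy_query("employees", ["name"], "manager", {"user_dept": "eng"})
mrows = [r[0] for r in conn.execute(msql, mparams)]       # the eng department
```

The application issues the same logical query regardless of who is asking; the adapter narrows the result set per the stored policy, which is the separation of compliance from application code that the abstract describes.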

    Standard operating procedures (SOPS) for health and demographic research data quality assurance: the case of VADU HDSS site

    A research report submitted to the Faculty of Health Sciences, University of the Witwatersrand in partial fulfilment of the requirements for the degree of Master of Science in Epidemiology (Research Data Management), Johannesburg, September 2016. The idea of data quality assurance and security control is to monitor the quality of research data generated from any research activity. This consists of a thorough collection of documentation regarding all aspects of the research. Data management procedures of health and demographic research constantly change or emerge through the iterative processes of data collection and analysis, and require the investigator to make frequent decisions that can alter the course of the study. As a result, audit trails that provide justification for these actions are vital for future analysis. The audit trail provides a mechanism for retroactive assessment of the conduct of the inquiry and a means to address issues related to the authenticity of the research datasets. This research seeks to develop an Information Assurance Policy and Standard Operating Procedures (SOPs) for the Vadu Health and Demographic Surveillance System (HDSS) site, using the ISACA/COBIT 5 family of products and the ISO/IEC ISMS standards as benchmarks. The work proposes data assurance and security controls and measures for any given research project. To develop such SOPs, there is a need to identify existing gaps and inconsistencies within the data management life cycle at the VRHP site, which allows us to establish the areas of focus for the SOPs. We used an interview-based approach to identify the existing gaps associated with the data management life cycle at the VRHP site. The study population included key members of the data management team. The study was conducted using a self-administered questionnaire with structured and open-ended questions.
A purposive sampling method was used to enrol 21 data management team members, consisting of 13 Field Research Assistants, 4 Field Research Supervisors, 1 Field Coordinator, 1 Software Application Developer, 1 Head of Data Management, and 1 Data Manager. Unstructured interviews were conducted to gather information on the respective roles and responsibilities of the members and to ensure maximally open interactions. Data gathering and analysis were done concurrently. Two themes arose from the data: current lapses in data collection at Vadu HDSS, and current lapses in data management at Vadu HDSS. The response rate was 95.5%. We adopted the ISACA/COBIT 5 guidelines and the ISO/IEC ISMS as benchmarks to develop SOPs that guide data management life cycle activities in enforcing data quality assurance. We also included guidelines that can be used to replicate the SOPs at other research institutions.

    The InfoSec Handbook

    Computer science