
    Building regulatory compliant storage systems

    In the past decade, informational records have become entirely digital. These include financial statements, health care records, student records, private consumer information, and other sensitive data. Because of the delicate nature of the data these records contain, Congress and the courts have begun to recognize the importance of properly storing and securing electronic records. Examples of such legislation include the Health Insurance Portability and Accountability Act…

    DBKnot: A Transparent and Seamless, Pluggable Tamper Evident Database

    Database integrity is crucial to organizations that rely on databases of important data, yet such databases are vulnerable to internal fraud. Tampering by malicious insiders who hold high technical privileges over the infrastructure, or by compromised external actors, is an important attack vector. This thesis addresses this challenge for a class of problems where data is append-only and immutable. Examples of operations where data does not change are a) financial institutions (banks, accounting systems, stock markets, etc.), b) registries and notary systems where important data is kept but never changed, and c) system logs that must be kept intact for performance and forensic inspection when needed. The approach targets implementation seamlessness, with little or no change required to existing systems. Transaction tracking for tamper detection is done by a hash chain that serially and cumulatively hashes transactions together, while an external time-stamper and signer signs the resulting linkages. This allows transactions to be tracked without any of the organization’s data leaving its premises for a third party, which also reduces the performance impact of tracking. It is achieved by adding a tracking layer embedded inside the data workflow while keeping it as non-invasive as possible. DBKnot implements these features a) natively inside databases, or b) embedded inside Object-Relational Mapping (ORM) frameworks, and finally c) outlines a direction for implementing it as a stand-alone microservice reverse proxy. A prototype ORM and database layer has been developed and tested for seamlessness of integration and ease of use. Additionally, different optimization models that pipeline and parallelize the hashing/signing process have been tested to check their impact on performance. Stock-market information was used for experimentation with DBKnot; initial results show slightly less than a 100% increase in transaction time for the most basic sequential, synchronous version of DBKnot. Signing and hashing overhead per record does not increase significantly with growing amounts of data, and a number of alternative design optimizations, validated by testing, yielded significant performance improvements.
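
    As a rough illustration of the cumulative hash-chain tracking described above, the following Python sketch chains each appended record onto a running digest; the class name, SHA-256, and the checkpoint method are assumptions made for illustration, not DBKnot's actual code.

        import hashlib
        import json

        # Minimal sketch of a cumulative hash chain over append-only records.
        # SHA-256 and all names here are illustrative assumptions.
        class HashChain:
            def __init__(self):
                self.head = b"\x00" * 32  # genesis link

            def append(self, record: dict) -> str:
                # Chain the new record onto the running hash: head' = H(head || record)
                payload = json.dumps(record, sort_keys=True).encode()
                self.head = hashlib.sha256(self.head + payload).digest()
                return self.head.hex()

            def checkpoint(self) -> bytes:
                # Digest to hand to an external time-stamper/signer; only this
                # 32-byte value leaves the premises, never the data itself.
                return self.head

        chain = HashChain()
        chain.append({"txn": 1, "ticker": "ACME", "price": 101.5})
        digest = chain.checkpoint()  # sign/time-stamp this externally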

    Optimized Model of Ledger Database Management to handle Vehicle Registration

    In recent years, versioning of business data has become increasingly important in enterprise solutions. In the context of big data, where the 5Vs (Volume, Velocity, Variety, Value, and Veracity) play a pivotal role, the management of data versioning gains even greater significance. As enterprises grapple with massive volumes of data generated at varying velocities and in diverse formats, ensuring the accuracy, completeness, and consistency of data throughout its lifecycle becomes paramount. System-of-record solutions, which cover most enterprise solutions, require managing the data’s life history in both system time and business time. This means that information must be stored in a way that covers past, present, and future states, such as a contract with a start date in the past and an end date in the future that may require correction at any point during its lifetime. While some systems offer transaction-time rollback features, they do not address the business life history of a contract or asset, which leaves the developer to code the business rules for these requirements. The relational data model cannot inherently apply relational constraints where a business-time dimension of the data is required, as it presents a “current view” and is not designed for this purpose. There is therefore a need for better autonomous capabilities for version control of data, which would bring new functionality, reduce the cost of application development and maintenance, reduce coding complexity, and increase productivity. This paper presents an approach to relational data management that relieves the developer of the need to code the business rules for versioning. The framework, called Ld8a, works with a standard Oracle database and keeps the developer in the “current view” paradigm while allowing them to specify the point in time at which logical insert, update, and delete events take place, with the infrastructure autonomously maintaining the relational correctness of the dataset across time. The Ld8a framework has been applied to the vehicle registration scenario used by AWS to present the capabilities of its Quantum Ledger Database product. This approach maintains referential integrity across time at the infrastructure level, making version control of data easier and more efficient for developers.
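
    The following Python sketch illustrates only the business-time interval-splitting idea behind such a framework: a logical update effective at a given date closes the overlapping validity interval and opens a new one, so past and future states stay queryable. All names are hypothetical; Ld8a itself operates inside a standard Oracle database.

        from dataclasses import dataclass
        from datetime import date

        # Illustrative sketch of business-time versioning, not Ld8a's code.
        @dataclass
        class Version:
            value: str
            start: date  # business-time validity start (inclusive)
            end: date    # business-time validity end (exclusive)

        def logical_update(history, effective, new_value):
            # Close the interval covering `effective` and open a new one;
            # intervals outside the effective date are kept untouched.
            out = []
            for v in history:
                if v.start <= effective < v.end:
                    out.append(Version(v.value, v.start, effective))
                    out.append(Version(new_value, effective, v.end))
                else:
                    out.append(v)
            return out

        reg = [Version("OWNER-A", date(2020, 1, 1), date(9999, 12, 31))]
        reg = logical_update(reg, date(2023, 6, 1), "OWNER-B")  # ownership change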

    Securing Logs in Operation-based Collaborative Editing

    The Twelfth International Workshop on Collaborative Editing Systems, CSCW'12. In recent years, collaborative editing systems such as wikis, Google Docs, and version control systems have become very popular. To improve reliability, fault tolerance, and availability, shared data is replicated in these systems. User misbehavior can make the system inconsistent or introduce corrupted updates to replicated data. Solutions for securing the data history of state-based replication exist, but they are hardly applicable to operation-based replication. In this paper we propose an approach to securing logs in operation-based optimistic replication systems. Authenticators based on hash values and digital signatures are generated each time a site shares or receives new updates on replicas. These authenticators secure the logs with the security properties of integrity and authenticity. We present detailed algorithms to construct and verify authenticators, and we analyse their complexities.
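
    A minimal sketch of the authenticator idea, assuming a SHA-256 digest over the operation log. A real deployment would use digital signatures as the paper describes; HMAC with a per-site key stands in here only to keep the example self-contained.

        import hashlib
        import hmac

        SITE_KEY = b"site-1-secret"  # hypothetical per-site signing key

        def log_digest(operations):
            # Hash the whole operation log in order.
            h = hashlib.sha256()
            for op in operations:
                h.update(op.encode())
            return h.digest()

        def make_authenticator(operations):
            # Emitted whenever a site shares or receives new updates.
            return hmac.new(SITE_KEY, log_digest(operations), hashlib.sha256).digest()

        def verify(operations, auth):
            return hmac.compare_digest(make_authenticator(operations), auth)

        log = ["insert(3, 'a')", "delete(1)"]
        auth = make_authenticator(log)
        assert verify(log, auth)
        assert not verify(log + ["insert(0, 'x')"], auth)  # tampering detected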

    Authenticating Operation-based History in Collaborative Systems

    In recent years, multi-synchronous collaborative editing systems have become widely used. Multi-synchronous collaboration maintains multiple simultaneous streams of activity which continually diverge and synchronize. These streams of activity are represented by means of logs of operations, i.e. user modifications. A malicious user might tamper with their log of operations; at the moment of synchronization with other streams, the tampered log can generate wrong results. In this paper, we propose a solution relying on hash-chain-based authenticators for authenticating logs, ensuring log authenticity, log integrity, and user accountability. We present algorithms to construct authenticators and verify logs, prove their correctness, and provide theoretical and practical evaluations.
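
    The hash-chain property the paper relies on can be sketched as follows: each authenticator commits to the operation and to the previous authenticator, so rewriting any earlier operation invalidates every later link. Names and encoding here are illustrative, not the paper's algorithms.

        import hashlib

        def chain(operations):
            # link_i = H(link_{i-1} || op_i); the genesis link is all zeros.
            prev = b"\x00" * 32
            links = []
            for op in operations:
                prev = hashlib.sha256(prev + op.encode()).digest()
                links.append(prev)
            return links

        def verify_chain(operations, links):
            # Recompute the whole chain and compare link by link.
            return chain(operations) == links

        ops = ["ins(0,'h')", "ins(1,'i')"]
        links = chain(ops)
        ops[0] = "ins(0,'H')"            # malicious rewrite of history
        print(verify_chain(ops, links))  # False: tampering is detected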

    SmartQC: An Extensible DLT-Based Framework for Trusted Data Workflows in Smart Manufacturing

    Recent developments in Distributed Ledger Technology (DLT), including blockchain, offer new opportunities in the manufacturing domain by providing mechanisms to automate trust services (digital identity, trusted interactions, and auditable transactions) and, when combined with other advanced digital technologies (e.g. machine learning), can provide a secure backbone for trusted data flows between independent entities. This paper presents a DLT-based architectural pattern and technology solution known as SmartQC that aims to provide an extensible and flexible approach to integrating DLT into existing workflows and processes. SmartQC offers an opportunity to make processes more time-efficient, reliable, and robust by providing two key features: i) data integrity through immutable ledgers, and ii) automation of business workflows leveraging smart contracts. The paper presents the system architecture, the extensible data model, and the application of SmartQC in the context of example smart manufacturing applications. Comment: 33 pages, 9 figures, under peer review process.
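
    A hypothetical sketch of how the two features might combine: a smart-contract-style guard on workflow state transitions, with each accepted transition appended to a hash-chained ledger. The QC states and transition table are invented for illustration and are not SmartQC's actual data model.

        import hashlib
        import json

        # Invented quality-control workflow; only these transitions are legal.
        ALLOWED = {"sampled": {"inspected"}, "inspected": {"passed", "failed"}}

        class QCLedger:
            def __init__(self):
                self.entries, self.head = [], b"\x00" * 32

            def transition(self, batch, old, new):
                # Smart-contract-style rule check before anything is recorded.
                if new not in ALLOWED.get(old, set()):
                    raise ValueError(f"illegal transition {old} -> {new}")
                entry = {"batch": batch, "from": old, "to": new}
                # Append to the immutable, hash-chained ledger.
                self.head = hashlib.sha256(
                    self.head + json.dumps(entry, sort_keys=True).encode()
                ).digest()
                self.entries.append((entry, self.head.hex()))

        ledger = QCLedger()
        ledger.transition("B42", "sampled", "inspected")
        ledger.transition("B42", "inspected", "passed")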

    An N-version electronic voting system

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 103-109). The ballot battles of the 2000 US Presidential Election clearly indicate that existing voting technologies and processes are not sufficient to guarantee that every eligible voter is granted their right to vote, and implicitly to have that vote counted, as per the Fifteenth, Nineteenth, Twenty-fourth, and Twenty-sixth Amendments to the US Constitution [1-3]. Developing a voting system that is secure, correct, reliable, and trustworthy is a significant challenge to current technology [3, 4]. The Secure Architecture for Voting Electronically (SAVE) demonstrates that N-version programming increases the reliability and security of its systems and can be used to increase the trustworthiness of systems. Further, SAVE demonstrates how a viable, practical approach to voting can be created using N-version programming. SAVE represents a significant contribution to voting technology research because of its design, and also because it demonstrates the benefits of N-version programming and introduces those benefits to the field of voting technology. By Soyini D. Liburd.
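
    The N-version idea behind SAVE can be sketched as follows: several independently written implementations of the same step run on the same input, and a result is accepted only when a strict majority agrees. The three tally functions below merely stand in for genuinely independent versions and are not SAVE's actual components.

        from collections import Counter

        # Stand-ins for independently developed implementations of one step.
        def tally_v1(ballots): return Counter(ballots)
        def tally_v2(ballots): return Counter(sorted(ballots))
        def tally_v3(ballots): return Counter(reversed(ballots))

        def n_version_tally(ballots, versions):
            # Run every version, then accept a result only if a strict
            # majority of versions produced exactly the same tally.
            results = [frozenset(v(ballots).items()) for v in versions]
            winner, votes = Counter(results).most_common(1)[0]
            if votes <= len(versions) // 2:
                raise RuntimeError("no majority agreement among versions")
            return dict(winner)

        print(n_version_tally(["alice", "bob", "alice"],
                              [tally_v1, tally_v2, tally_v3]))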