DBKnot: A Transparent and Seamless, Pluggable Tamper Evident Database
Database integrity is crucial to organizations that rely on databases of important data, yet such databases are vulnerable to internal fraud. Tampering by malicious insiders who hold high technical authorization over the infrastructure, or by external attackers who compromise such access, is an important attack vector.
This thesis addresses this challenge for a class of problems in which data is append-only and immutable. Examples of operations in which data never changes include a) financial institutions (banks, accounting systems, stock markets, etc.), b) registries and notary systems, where important records are kept but never modified, and c) system logs that must remain intact for performance and forensic inspection if needed. The goal of the approach is seamless deployment, with little or no change required to existing systems.
Transaction tracking for tamper detection uses a hash chain that serially and cumulatively hashes transactions together, while an external time-stamper and signer signs the resulting linkages. Transactions can thus be tracked without any of the organization's data leaving its premises for a third party, which also reduces the performance impact of tracking. This is achieved by embedding a tracking layer inside the data workflow while keeping it as non-invasive as possible.
DBKnot implements these features a) natively inside databases, or b) embedded inside Object-Relational Mapping (ORM) frameworks, and c) outlines a direction for implementing it as a stand-alone microservice reverse proxy. A prototype ORM and database layer has been developed and tested for seamlessness of integration and ease of use. In addition, different optimizations that introduce pipeline parallelism into the hashing/signing process have been tested to measure their impact on performance.
Stock-market data was used for experimentation with DBKnot. Initial results show slightly less than a 100% increase in transaction time for the most basic sequential, synchronous version of DBKnot. The per-record signing and hashing overhead does not increase significantly as the amount of data grows. Several alternative design optimizations were tested and resulted in significant performance improvements.
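The serial, cumulative hashing at the core of this scheme can be sketched as a simple hash chain. The function name and the genesis value below are illustrative assumptions, not DBKnot's actual API, and the external time-stamping/signing step is only indicated in a comment:

```python
import hashlib

def append_record(chain_tip: bytes, record: bytes) -> bytes:
    """Cumulatively hash a new record onto the current chain tip."""
    return hashlib.sha256(chain_tip + record).digest()

# Build a chain over three append-only records.
tip = b"\x00" * 32  # genesis value (assumption)
for rec in [b"txn-1", b"txn-2", b"txn-3"]:
    tip = append_record(tip, rec)

# Tampering with any earlier record changes every later tip, so a
# periodically time-stamped/signed tip is evidence of integrity.
tampered = b"\x00" * 32
for rec in [b"txn-1", b"TAMPERED", b"txn-3"]:
    tampered = append_record(tampered, rec)

assert tip != tampered
```

Because each tip depends on every prior record, only the latest tip needs to be sent to the external time-stamper, which is consistent with the abstract's claim that no organizational data leaves the premises.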
Towards Secure Cloud Data Management
This paper explores the security challenges posed by data-intensive applications deployed in cloud environments that span administrative and network domains. We propose a data-centric view of cloud security and discuss data management challenges in the areas of secure distributed data processing, end-to-end query result verification, and cross-user trust policy management. In addition, we describe our current and future efforts to investigate security challenges in cloud data management using the Declarative Secure Distributed Systems (DS2) platform, a declarative infrastructure for specifying, analyzing, and deploying secure information systems.
A Canonical Form for PROV Documents and its Application to Equality, Signature, and Validation
We present a canonical form for PROV that is a normalized way of representing PROV documents as mathematical expressions. As opposed to the normal form specified by the PROV-CONSTRAINTS recommendation, the canonical form we present is defined for all PROV documents, irrespective of their validity, and it can be serialized in a unique way. The article makes the case for a canonical form for PROV and its potential uses, namely comparison of PROV documents in different formats, validation, and signature of PROV documents. A signature of a PROV document allows the integrity and the author of provenance to be ascertained; since the signature is based on the canonical form, these checks are not tied to a particular encoding, but can be performed on any representation of PROV.
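As a toy illustration of why a canonical form enables encoding-independent signatures, the sketch below fingerprints a document after a key-sorted serialization, so two differently ordered encodings of the same content yield the same fingerprint. This is an invented stand-in for the article's actual canonical form, not the PROV algorithm itself:

```python
import hashlib
import json

def canonicalize(doc: dict) -> str:
    # Sort keys and fix separators so equivalent documents
    # serialize identically (a toy analogue of a canonical form).
    return json.dumps(doc, sort_keys=True, separators=(",", ":"))

def fingerprint(doc: dict) -> str:
    # A signature computed over this digest would not depend
    # on the encoding the document arrived in.
    return hashlib.sha256(canonicalize(doc).encode()).hexdigest()

# Two encodings of the same (hypothetical) provenance content compare equal.
a = {"entity": "e1", "wasGeneratedBy": {"activity": "a1"}}
b = {"wasGeneratedBy": {"activity": "a1"}, "entity": "e1"}
assert fingerprint(a) == fingerprint(b)
```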
Securing configuration, management and migration of virtual network functions using blockchain
The current technologies of network functions virtualization and network service function chaining increase service-provision agility and add intelligence at the core of the network. However, the programmability of the network core and the provision of services by multiple providers bring new vulnerabilities to this scenario. The need for secure provisioning of virtual network service functions (VNFs) becomes even more critical, since simple modifications at the network core can affect multiple network users. This work proposes a blockchain-based architecture for secure management, configuration, and migration of VNFs. The architecture ensures the immutability, non-repudiation, and auditability of VNF configuration and management histories. In addition, it preserves the anonymity of VNFs, tenants, and configuration information, to mitigate the possibility of targeted attacks. A prototype designed for the OPNFV (Open Platform for NFV) platform was developed, and the performance of the proposed architecture was evaluated in terms of parameter trade-offs and bottlenecks.
Constructing provenance-aware distributed systems with data propagation
Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 93-96). By Ian Campbell Jacobi, S.M.

Is it possible to construct a heterogeneous distributed computing architecture capable of solving interesting complex problems? Can we easily use this architecture to maintain a detailed history, or provenance, of the data it processes? Most existing distributed architectures can perform only one operation at a time. While they are capable of tracing possession of data, these architectures do not always track the network of operations used to synthesize new data. This thesis presents a distributed implementation of data propagation, a computational model that provides concurrent processing not constrained to a single distributed operation. The system can distribute computation across a heterogeneous network and allows multiple simultaneous operations to be divided across a single distributed system. I also identify four constraints that may be placed on general-purpose data propagation to allow deterministic computation in such a distributed propagation network. The thesis also presents an application of distributed propagation, illustrating how a generic transformation may be applied to existing propagator networks to maintain data provenance. I show that the modular structure of data propagation permits simple modification of a propagator network design to maintain the histories of data.
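A minimal single-machine sketch of the propagator model this thesis builds on: cells hold values, and propagators fire whenever all their inputs are filled. All names here are illustrative, and the thesis's distributed execution and provenance-tracking machinery are not shown:

```python
class Cell:
    """Holds at most one value and notifies watchers when it arrives."""
    def __init__(self):
        self.value = None
        self.watchers = []

    def add(self, value):
        if self.value is None:
            self.value = value
            for watcher in self.watchers:
                watcher()

def propagator(inputs, output, fn):
    """Recompute fn and fill the output cell once every input has a value."""
    def fire():
        if all(c.value is not None for c in inputs):
            output.add(fn(*[c.value for c in inputs]))
    for c in inputs:
        c.watchers.append(fire)
    fire()  # in case inputs are already filled

# Fahrenheit is derived from Celsius as soon as the input arrives.
celsius, fahrenheit = Cell(), Cell()
propagator([celsius], fahrenheit, lambda x: x * 9 / 5 + 32)
celsius.add(100)
assert fahrenheit.value == 212
```

Because each propagator records its inputs and output cells, attaching provenance amounts to wrapping `fire` so it also logs which cells contributed to each new value, which is the kind of generic network transformation the abstract describes.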
IaaS-cloud security enhancement: an intelligent attribute-based access control model and implementation
The cloud computing paradigm enables efficient utilisation of huge computing resources by multiple users, with minimal expense and deployment effort compared to traditional computing facilities. Although cloud computing has remarkable benefits, some governments and enterprises remain hesitant to move their computing to the cloud because of the associated security challenges. Security is therefore a significant factor in cloud computing adoption. Cloud services consist of three layers: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Cloud computing services are accessed through network connections and used by multiple users who share resources through virtualisation technology. Accordingly, an efficient access control system is crucial to prevent unauthorised access.

This thesis mainly investigates IaaS security enhancement from an access control point of view. [Continues.]
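As a rough illustration of attribute-based access control in an IaaS setting, the sketch below grants access when any policy predicate over subject, resource, and action attributes matches. The attributes and the example policy are invented for illustration and are not taken from the thesis:

```python
def abac_allow(subject: dict, resource: dict, action: str, policies: list) -> bool:
    """Grant access if any policy's attribute predicate matches."""
    return any(p(subject, resource, action) for p in policies)

# Illustrative policy: tenants may start VMs they own, during business hours.
policies = [
    lambda s, r, a: (
        a == "start_vm"
        and s["role"] == "tenant"
        and r["owner"] == s["id"]
        and 9 <= s["request_hour"] < 17
    )
]

subject = {"id": "alice", "role": "tenant", "request_hour": 10}
resource = {"type": "vm", "owner": "alice"}

assert abac_allow(subject, resource, "start_vm", policies)
assert not abac_allow(subject, {"type": "vm", "owner": "bob"}, "start_vm", policies)
```

Unlike role-based schemes, the decision here depends on arbitrary attributes (ownership, time of request), which is what makes ABAC a natural fit for multi-tenant IaaS resources.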
Global Data Plane: A Widely Distributed Storage and Communication Infrastructure
With the advancement of technology, richer computation devices are making their way into everyday life. However, such smarter devices merely act as sources and sinks of information; the storage of information is highly centralized in today's data-centers. Even though such data-centers allow the cost per bit of information to be amortized, their density and distribution are not necessarily representative of human population density. This disparity between where information is produced and consumed and where it is stored only slightly affects the applications of today, but it will be the limiting factor for the applications of tomorrow.

The computation resources at the edge are more powerful than ever, and present an opportunity to address this disparity. We envision that a seamless combination of these edge resources with data-center resources is the way forward. However, the resulting issues of trust and data security are not easy to solve in a world full of complexity. Toward this vision of a federated infrastructure composed of resources at the edge as well as those in data-centers, we describe the architecture and design of a widely distributed system for data storage and communication that attempts to alleviate some of these data-security challenges; we call this system the Global Data Plane (GDP).

The key abstraction in the GDP is a secure, cohesive container of information called a DataCapsule, which provides a layer of uniformity on top of a heterogeneous infrastructure. A DataCapsule represents a secure history of transactions in a persistent form that other applications can build on. Existing applications can be refactored to use DataCapsules as the ground truth of persistent state; such refactoring enables cleaner application design that allows better security analysis of information flows. Beyond cleaner design, the GDP also enables locality of access, for performance and for data privacy, an ever-growing concern in the information age.

The DataCapsules are enabled by an underlying routing fabric, called the GDP network, which provides secure routing for datagrams in a flat namespace. The GDP network is a core component of the GDP that enables the various GDP components to interact with each other. In addition to supporting DataCapsules, this underlying network is available to applications for native communication as well. Flat-namespace networks are known to provide a number of desirable properties, such as location independence and built-in multicast. However, existing architectures for such networks suffer from routing security issues, typically because malicious entities can claim to possess arbitrary names and thus receive traffic intended for arbitrary destinations. The GDP network takes a different approach by defining ownership of a name and the associated mechanisms for participants to delegate routing for such names to others. By integrating directly with the GDP network, applications can enjoy the benefits of flat-namespace networks without compromising routing security.

The Global Data Plane and DataCapsules together represent our vision for secure ubiquitous storage. As opposed to the current approach of perimeter security for infrastructure, i.e. drawing a perimeter around parts of the infrastructure and trusting everything inside it, our vision is to use cryptographic tools to enable intrinsic security for the information itself, regardless of the context in which it lives. In this dissertation, we show how to make this vision a reality, and how to adapt real-world applications to reap the benefits of secure ubiquitous storage.
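The idea of name ownership in a flat namespace can be sketched with self-certifying names: a name is derived from a key, and a router only installs a route whose advertisement verifies against the key that owns the name. In this sketch an HMAC stands in for the asymmetric signatures a real deployment would use, and every identifier is illustrative rather than part of the GDP design:

```python
import hashlib
import hmac

def name_for(key: bytes) -> bytes:
    # In a self-certifying flat namespace, the name is derived from a key,
    # so ownership can be checked without a global naming authority.
    return hashlib.sha256(key).digest()

def advertise(name: bytes, key: bytes, next_hop: str) -> tuple:
    mac = hmac.new(key, name + next_hop.encode(), hashlib.sha256).digest()
    return (name, next_hop, mac)

def accept(adv: tuple, key: bytes) -> bool:
    name, next_hop, mac = adv
    # Reject advertisements for names the key does not own,
    # and advertisements whose authenticator does not verify.
    if name != name_for(key):
        return False
    expected = hmac.new(key, name + next_hop.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

owner_key = b"owner-secret"
adv = advertise(name_for(owner_key), owner_key, "router-7")
assert accept(adv, owner_key)
assert not accept(adv, b"attacker-key")
```

This captures why a malicious entity cannot simply claim an arbitrary name: without the owning key, it can neither produce the name nor a verifying advertisement for it.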
An empirical analysis of smart contracts: platforms, applications, and design patterns
Smart contracts are computer programs that can be consistently executed by a network of mutually distrusting nodes, without the arbitration of a trusted authority. Because of their resilience to tampering, smart contracts are appealing in many scenarios, especially those that require transfers of money to respect certain agreed rules (as in financial services and games). Over the last few years many platforms for smart contracts have been proposed, and some of them have actually been implemented and used. We study how the notion of smart contract is interpreted in some of these platforms. Focussing on the two most widespread ones, Bitcoin and Ethereum, we quantify the usage of smart contracts in relation to their application domain. We also analyse the most common programming patterns in Ethereum, where the source code of smart contracts is available.

Comment: WTSC 201