9 research outputs found

    Making the Distribution Subsystem Secure

    This report presents how the Distribution Subsystem is made secure. A set of different security threats to a shared data programming system are identified. The report presents the extensions necessary to the DSS in order to cope with the identified security threats by maintaining reference security. A reference to a shared data structure cannot be forged or guessed; only by proper delegation can a thread acquire access to data originating at remote processes. Referential security is a requirement for secure distributed applications. By programmatically restricting access to distributed data to trusted nodes, a distributed application can be made secure. However, for this to be true, referential security must be supported at the level of the implementation.
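    As a rough sketch of what reference security means at the implementation level (this is not the DSS mechanism itself; the class and field names are hypothetical), a reference can carry an unguessable identifier plus a MAC under a process-local secret, so it can neither be guessed nor forged and is only obtained by explicit delegation of the full token.

```python
# Minimal sketch (not the DSS implementation): a reference is valid only if it
# carries a MAC computed with a secret known to the owning process, so it can
# neither be guessed nor forged; delegation means handing over the full token.
import hashlib
import hmac
import os
import secrets


class ReferenceAuthority:
    """Owns local shared data and mints unforgeable references to it."""

    def __init__(self):
        self._key = secrets.token_bytes(32)   # process-local secret
        self._store = {}                      # entity_id -> shared data

    def export_reference(self, data):
        entity_id = os.urandom(16)            # unguessable identifier
        self._store[entity_id] = data
        tag = hmac.new(self._key, entity_id, hashlib.sha256).digest()
        return entity_id + tag                # the capability delegated to a peer

    def dereference(self, reference):
        entity_id, tag = reference[:16], reference[16:]
        expected = hmac.new(self._key, entity_id, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise PermissionError("forged or corrupted reference")
        return self._store[entity_id]


authority = ReferenceAuthority()
ref = authority.export_reference({"counter": 0})   # delegated to a trusted node
assert authority.dereference(ref) == {"counter": 0}
```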

    KeyForge: Mitigating Email Breaches with Forward-Forgeable Signatures

    Email breaches are commonplace, and they expose a wealth of personal, business, and political data that may have devastating consequences. The current email system allows any attacker who gains access to your email to prove the authenticity of the stolen messages to third parties -- a property arising from a necessary anti-spam / anti-spoofing protocol called DKIM. This exacerbates the problem of email breaches by greatly increasing the potential for attackers to damage the users' reputation, blackmail them, or sell the stolen information to third parties. In this paper, we introduce "non-attributable email", which guarantees that a wide class of adversaries are unable to convince any third party of the authenticity of stolen emails. We formally define non-attributability, and present two practical system proposals -- KeyForge and TimeForge -- that provably achieve non-attributability while maintaining the important protection against spam and spoofing that is currently provided by DKIM. Moreover, we implement KeyForge and demonstrate that the scheme is practical, achieving competitive verification and signing speed while also requiring 42% less bandwidth per email than RSA2048.
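    The delayed-disclosure intuition behind non-attributability can be sketched as follows (a simplification, not KeyForge's actual forward-forgeable signature construction, which aggregates key material to cut bandwidth): the mail server signs each outgoing message under a short-lived epoch key and publishes the expired private keys once the delivery window has passed, so anyone could have produced "old" signatures. The epoch length and class names below are hypothetical.

```python
# Sketch of delayed key disclosure for non-attributable email (illustrative
# only; KeyForge itself uses forward-forgeable signatures with aggregation).
import time

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

EPOCH_SECONDS = 15 * 60          # hypothetical validity window per signing key


class EpochSigner:
    def __init__(self):
        self._keys = {}          # epoch number -> private key

    def _epoch(self, t):
        return int(t // EPOCH_SECONDS)

    def sign(self, message: bytes, now=None):
        epoch = self._epoch(now if now is not None else time.time())
        sk = self._keys.setdefault(epoch, Ed25519PrivateKey.generate())
        return epoch, sk.sign(message), sk.public_key()

    def disclose_expired(self, now=None):
        """Publish raw private keys for past epochs; once they are public,
        anyone can forge 'old' signatures, so stolen emails stop being
        attributable, while current-epoch signatures still block spoofing."""
        current = self._epoch(now if now is not None else time.time())
        return {
            epoch: sk.private_bytes(
                serialization.Encoding.Raw,
                serialization.PrivateFormat.Raw,
                serialization.NoEncryption(),
            )
            for epoch, sk in self._keys.items()
            if epoch < current
        }


signer = EpochSigner()
epoch, sig, pk = signer.sign(b"message body")
pk.verify(sig, b"message body")   # within the window: verification still works
```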

    Integrita: Protecting View-Consistency in Online Social Network with Federated Servers

    Current designs of Online Social Networks (OSN) deploy a centralized architecture where a central OSN provider handles all the users’ read and write requests over the shared data (e.g., a Facebook wall or a group page). Historical incidents demonstrate that such centralization is leveraged for censorship and for violating view consistency; a corrupted provider deliberately displays different views of the shared data to the users. Integrita provides a data-sharing mechanism that protects view consistency by replacing the centralized architecture with a federated-server model consisting of N providers, N − 1 of which can be malicious and colluding. The state of the shared data is modeled by an append-only data structure, stored at the server side, which contains the history of all the operations performed by the users. The consistency of users’ views of the shared data depends on their access to the intact log of operations. Integrita guarantees that the servers cannot manipulate the log without being detected by the users. Unlike the state of the art, Integrita accomplishes this neither by using storage-inefficient data replication nor by requiring users to exchange their views. Every user, without relying on the presence of other users, can verify whether his operation has been added to the log and is visible to the rest of the users. We introduce and achieve a new level of view consistency named q-detectable consistency, where any inconsistency between users’ views cannot remain undetected for more than q operations, where q is a function of the number of servers. This level of consistency is stronger than what centralized counterparts offer. Also, our proposal reduces the storage overhead imposed by replication-based solutions by a multiplicative factor of 1/N. Furthermore, the application of Integrita is not limited to OSNs; it can be integrated into any log-based system, e.g., version control systems.
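    A minimal sketch of the ingredient the scheme builds on, assuming a single hash-chained append-only operation log; the federated N-server protocol and the q-detectable-consistency argument themselves are not reproduced here, and all names are illustrative.

```python
# Append-only, hash-chained operation log: tampering (reordering, dropping,
# or rewriting entries) is detectable by any user who remembers the last
# digest they saw and checks that their own operation is included.
import hashlib
import json


def entry_digest(prev_digest: bytes, operation: dict) -> bytes:
    payload = prev_digest + json.dumps(operation, sort_keys=True).encode()
    return hashlib.sha256(payload).digest()


class AppendOnlyLog:
    def __init__(self):
        self.entries = []            # list of (operation, digest)

    def append(self, operation: dict) -> bytes:
        prev = self.entries[-1][1] if self.entries else b"\x00" * 32
        digest = entry_digest(prev, operation)
        self.entries.append((operation, digest))
        return digest


def verify_inclusion(entries, known_digest: bytes, my_operation: dict) -> bool:
    """Recompute the chain; succeed only if it is internally consistent,
    extends the digest the user already trusts, and contains the operation."""
    prev = b"\x00" * 32
    seen_known, seen_op = known_digest == b"\x00" * 32, False
    for operation, digest in entries:
        if entry_digest(prev, operation) != digest:
            return False             # the chain was rewritten
        prev = digest
        seen_known |= digest == known_digest
        seen_op |= operation == my_operation
    return seen_known and seen_op


log = AppendOnlyLog()
d1 = log.append({"user": "alice", "op": "post", "data": "hello"})
log.append({"user": "bob", "op": "comment", "data": "hi"})
assert verify_inclusion(log.entries, d1,
                        {"user": "alice", "op": "post", "data": "hello"})
```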

    Wide-address operating system elements


    FRAMEWORK FOR ANONYMIZED COVERT COMMUNICATIONS: A BLOCKCHAIN-BASED PROOF-OF-CONCEPT

    In this dissertation, we present an information hiding approach incorporating anonymity that builds on existing classical steganographic models. Current security definitions are not sufficient to analyze the proposed information hiding approach, as steganography offers data privacy by hiding the existence of data, a property that is distinct from confidentiality (data existence is known but access is restricted) and authenticity (data existence is known but manipulation is restricted). Combinations of the latter two properties are common in analyses, such as Authenticated Encryption with Associated Data (AEAD), yet there is a lack of research on combinations with steganography. This dissertation also introduces the security definition of Authenticated Stegotext with Associated Data (ASAD), which captures steganographic properties even when there is contextual information provided alongside the hidden data. We develop a hierarchical framework of ASAD variants, corresponding to different channel demands. We present a real-world steganographic embedding scheme, Authenticated SteGotext with Associated tRansaction Data (ASGARD), that leverages a blockchain-based application as a medium for sending hidden data. We analyze ASGARD in our framework and show that it meets Level-4 ASAD security. Finally, we implement ASGARD on the Ethereum platform as a proof-of-concept and analyze some of the ways an adversary might detect our embedding activity by analyzing historical Ethereum data.
    Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
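    As a rough illustration of the stegotext-with-associated-data idea (this is not ASGARD's actual encoding or its Level-4 ASAD argument, and the `tx_input` field below is a hypothetical stand-in for Ethereum calldata), an authenticated ciphertext can be bound to the visible transaction metadata so that tampering with either is detected on extraction.

```python
# Illustrative only: AEAD ciphertext carried in a transaction's data field,
# with the visible transaction metadata as the associated data.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def embed(key: bytes, hidden: bytes, sender: str, recipient: str, value: int) -> dict:
    nonce = os.urandom(12)
    associated = f"{sender}|{recipient}|{value}".encode()      # visible tx metadata
    ciphertext = AESGCM(key).encrypt(nonce, hidden, associated)
    return {"from": sender, "to": recipient, "value": value,
            "tx_input": (nonce + ciphertext).hex()}            # opaque calldata-like blob


def extract(key: bytes, tx: dict) -> bytes:
    blob = bytes.fromhex(tx["tx_input"])
    nonce, ciphertext = blob[:12], blob[12:]
    associated = f"{tx['from']}|{tx['to']}|{tx['value']}".encode()
    return AESGCM(key).decrypt(nonce, ciphertext, associated)  # raises if tampered


key = AESGCM.generate_key(bit_length=256)
tx = embed(key, b"covert payload", "0xSender", "0xRecipient", 0)
assert extract(key, tx) == b"covert payload"
```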

    Analysis, Design & Applications of Cryptographic Building Blocks

    This thesis deals with the basic design and rigorous analysis of cryptographic schemes and primitives, especially of authenticated encryption schemes, hash functions, and password-hashing schemes. In the last decade, security issues such as the PS3 jailbreak demonstrate that common security notions are rather restrictive, and it seems that they do not model the real world adequately. As a result, in the first part of this work, we introduce a less restrictive security model that is closer to reality. In this model it turned out that existing (on-line) authenticated encryption schemes can no longer be considered secure, i.e. they can guarantee neither data privacy nor data integrity. Therefore, we present two novel authenticated encryption schemes, namely COFFE and McOE, which are not only secure in the standard model but also reasonably secure in our generalized security model, i.e. both preserve full data integrity. In addition, McOE preserves a reasonable level of data privacy. The second part of this thesis starts with proposing the hash function Twister-Pi, a revised version of the accepted SHA-3 candidate Twister. We not only fixed all known security issues of Twister, but also increased the overall soundness of our hash-function design. Furthermore, we present some fundamental groundwork in the area of password-hashing schemes. This research was mainly inspired by the media omnipresence of password-leakage incidents. We show that the password-hashing scheme scrypt is vulnerable to cache-timing attacks due to the existence of a password-dependent memory-access pattern. Finally, we introduce Catena, the first password-hashing scheme that is both memory-consuming and resistant to cache-timing attacks.
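    The cache-timing observation can be made concrete with a toy contrast, assuming SHA-256 as the only primitive; neither function below is scrypt or Catena, they only illustrate password-dependent versus password-independent memory addressing.

```python
# Toy contrast: in the first function the memory addresses visited depend on
# the secret password, which is what a cache-timing observer can exploit; in
# the second they depend only on the public salt, so the access pattern
# leaks nothing about the password.
import hashlib

MEMORY_WORDS = 1 << 10   # deliberately tiny; real schemes use far more memory


def _h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()


def password_dependent(password: bytes, salt: bytes) -> bytes:
    memory = [_h(password, salt, i.to_bytes(4, "big")) for i in range(MEMORY_WORDS)]
    state = _h(password, salt)
    for _ in range(MEMORY_WORDS):
        index = int.from_bytes(state[:4], "big") % MEMORY_WORDS   # secret-dependent
        state = _h(state, memory[index])
    return state


def password_independent(password: bytes, salt: bytes) -> bytes:
    memory = [_h(password, salt, i.to_bytes(4, "big")) for i in range(MEMORY_WORDS)]
    state = _h(password, salt)
    for i in range(MEMORY_WORDS):
        index = int.from_bytes(_h(salt, i.to_bytes(4, "big"))[:4], "big") % MEMORY_WORDS
        state = _h(state, memory[index])                          # public schedule
    return state
```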

    Evolving a secure grid-enabled, distributed data warehouse: a standards-based perspective

    As digital data-collection has increased in scale and number, it has become an important type of resource serving a wide community of researchers. Cross-institutional data-sharing and collaboration offer a suitable approach to support those research institutions that suffer from a lack of data and related IT infrastructure. Grid computing has become a widely adopted approach to enable cross-institutional resource-sharing and collaboration. It integrates a distributed and heterogeneous collection of locally managed users and resources. This project proposes a distributed data warehouse system, which uses Grid technology to enable data-access and integration, and collaborative operations across multiple distributed institutions in the context of HIV/AIDS research. This study is based on wider research into OGSA-based Grid services architecture, comprising a data-analysis system which utilizes a data warehouse, data marts, and a near-line operational database that are hosted by distributed institutions. Within this framework, specific patterns for collaboration, interoperability, resource virtualization and security are included. The heterogeneous and dynamic nature of the Grid environment introduces a number of security challenges. This study also concerns a set of particular security aspects, including PKI-based authentication, single sign-on, dynamic delegation, and attribute-based authorization. These mechanisms, as supported by the Globus Toolkit’s Grid Security Infrastructure, are used to enable interoperability and establish trust relationships between the various security mechanisms and policies within different institutions; manage credentials; and ensure secure interactions.
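    As a loose illustration of the attribute-based authorization step described above (this is not the Globus Toolkit's GSI, and the attribute and policy names are hypothetical), a resource can grant a request only when the attributes asserted for an already-authenticated, possibly delegated, credential satisfy the resource's policy.

```python
# Illustrative attribute-based authorization check for a federated data mart.
from dataclasses import dataclass, field


@dataclass
class Policy:
    required_attributes: dict = field(default_factory=dict)  # attribute -> allowed values

    def permits(self, subject_attributes: dict) -> bool:
        # Every required attribute must be present and take an allowed value.
        return all(
            subject_attributes.get(name) in allowed
            for name, allowed in self.required_attributes.items()
        )


data_mart_policy = Policy({
    "institution": {"institute-a", "institute-b"},   # federated partners
    "role": {"researcher", "data-curator"},
    "project": {"hiv-aids-study"},
})

delegated_credential = {       # attributes carried with a delegated proxy credential
    "institution": "institute-b",
    "role": "researcher",
    "project": "hiv-aids-study",
}

assert data_mart_policy.permits(delegated_credential)
```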

    Signaling and Reciprocity: Robust Decentralized Information Flows in Social, Communication, and Computer Networks

    Complex networks exist for a number of purposes. The neural, metabolic and food networks ensure our survival, while the social, economic, transportation and communication networks allow us to prosper. Independently of the purposes and particularities of the physical embodiment of the networks, one of their fundamental functions is the delivery of information from one part of the network to another. Gossip and diseases diffuse in the social networks, electrochemical signals propagate in the neural networks and data packets travel in the Internet. Engineering networks for robust information flows is a challenging task. First, the mechanism through which the network forms and changes its topology needs to be defined. Second, within a given topology, the information must be routed to the appropriate recipients. Third, both the network formation and the routing mechanisms need to be robust against a wide spectrum of failures and adversaries. Fourth, the network formation, routing and failure recovery must operate under resource constraints, either intrinsic or extrinsic to the network. Finally, the autonomously operating parts of the network must be incentivized to contribute their resources to facilitate the information flows.

    This thesis tackles the above challenges within the context of several types of networks: 1) peer-to-peer overlays – computers interconnected over the Internet to form an overlay in which participants provide various services to one another, 2) mobile ad-hoc networks – mobile nodes distributed in physical space communicating wirelessly with the goal of delivering data from one part of the network to another, 3) file-sharing networks – networks whose participants interconnect over the Internet to exchange files, 4) social networks – humans disseminating and consuming information through the network of social relationships.

    The thesis makes several contributions. Firstly, we propose a general algorithm which, given a set of nodes embedded in an arbitrary metric space, interconnects them into a network that efficiently routes information. We apply the algorithm to peer-to-peer overlays and experimentally demonstrate its high performance and scalability as well as its resilience to continuous peer arrivals and departures.

    We then shift our focus to the reliability of routing in peer-to-peer overlays. Each overlay peer has limited resources, and when they are exhausted this ultimately leads to delayed or lost overlay messages. All the solutions addressing this problem rely on message redundancy, which significantly increases the resource costs of fault-tolerance. We propose a bandwidth-efficient single-path Forward Feedback Protocol (FFP) for overlay message routing, in which successfully delivered messages are followed by a feedback signal that reinforces the routing paths. Internet testbed evaluation shows that FFP uses 2-5 times less network bandwidth than the existing protocols relying on message redundancy, while achieving comparable fault-tolerance levels under a variety of failure scenarios.

    While the Forward Feedback Protocol is robust to message loss and delays, it is vulnerable to malicious message injection. We address this and other security problems by proposing Castor, a variant of FFP for mobile ad-hoc networks (MANETs). In Castor, we use the same general mechanism as in FFP: each time a message is routed, the routing path is either reinforced or weakened by the feedback signal, depending on whether the routing succeeded or not. However, unlike FFP, Castor employs cryptographic mechanisms for ensuring the integrity and authenticity of the messages. We compare Castor to four other MANET routing protocols. Despite Castor's simplicity, it achieves up to 40% higher packet delivery rates than the other protocols and recovers at least twice as fast in a wide range of attack and failure scenarios.

    Both of our protocols, FFP and Castor, rely on simple signaling to improve routing robustness in peer-to-peer and mobile ad-hoc networks. Given the success of the signaling mechanism in shaping the information flows in these two types of networks, we examine whether signaling plays a similarly crucial role in online social networks. We characterize the propagation of URLs in the social network of Twitter. The data analysis uncovers several statistical regularities in the user activity, the social graph, the structure of the URL cascades, and the communication and signaling dynamics. Based on these results, we propose a propagation model that accurately predicts which users are likely to mention which URLs. We outline a number of applications where social network information flow modeling would be crucial: content ranking and filtering, viral marketing, and spam detection.

    Finally, we consider the problem of freeriding in peer-to-peer file-sharing applications, where users can download data from others but never reciprocate by uploading. To address the problem, we propose a variant of the BitTorrent system in which two peers are only allowed to connect if their owners know one another in the real world. When users know which other users their BitTorrent client connects to, they are more likely to cooperate. The social network becomes the content distribution network, and the freeriding problem is solved by leveraging social norms and reciprocity to stabilize cooperation rather than relying on technological means. Our extensive simulations show that the social network topology is an efficient and scalable content distribution medium, while at the same time providing robustness to freeriding.
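    A minimal sketch of the feedback-reinforcement idea shared by FFP and Castor, under the assumption of a simple per-destination reliability table; the actual protocols, their acknowledgement formats, and Castor's cryptographic protections are not reproduced here, and all names are illustrative.

```python
# Each node keeps a per-destination reliability estimate for every neighbour,
# strengthens the estimate when a delivery is acknowledged by a feedback
# signal, and weakens it on a timeout, so traffic drifts toward paths that
# actually deliver.
import random
from collections import defaultdict

REINFORCE, DECAY = 0.2, 0.3


class FeedbackRouter:
    def __init__(self, neighbours):
        self.neighbours = list(neighbours)
        # reliability[destination][neighbour] -> estimated delivery probability
        self.reliability = defaultdict(lambda: {n: 0.5 for n in self.neighbours})

    def next_hop(self, destination, explore=0.1):
        if random.random() < explore:                  # keep probing alternatives
            return random.choice(self.neighbours)
        table = self.reliability[destination]
        return max(table, key=table.get)

    def feedback(self, destination, neighbour, delivered: bool):
        table = self.reliability[destination]
        if delivered:                                  # ack came back: reinforce the path
            table[neighbour] += REINFORCE * (1.0 - table[neighbour])
        else:                                          # timeout: weaken the path
            table[neighbour] *= 1.0 - DECAY


router = FeedbackRouter(["n1", "n2", "n3"])
hop = router.next_hop("peer-42")
router.feedback("peer-42", hop, delivered=True)
```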

    Security design analysis

    EThOS - Electronic Theses Online Service, United Kingdom