47 research outputs found

    MUST, SHOULD, DON'T CARE: TCP Conformance in the Wild

    Standards define the SHOULD and MUST requirements that protocol implementers must follow to achieve interoperability. In the case of TCP, which carries the bulk of the Internet's traffic, these requirements are defined in RFCs. While it is known that not all optional features are implemented and nonconformance exists, one would assume that TCP implementations at least conform to the minimum set of MUST requirements. In this paper, we use Internet-wide scans to show how Internet hosts and paths conform to these basic requirements. We uncover a non-negligible set of hosts and paths that do not adhere to even basic requirements. For example, we observe hosts that do not correctly handle checksums and cases of middlebox interference with TCP options, and we identify hosts that drop packets when the urgent pointer is set or that simply crash. Our publicly available results highlight that conformance to even fundamental protocol requirements should not be taken for granted but instead checked regularly.
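
    The checksum handling probed above refers to the standard Internet checksum of RFC 1071, computed over the TCP pseudo-header, header, and payload. The following C sketch is only an illustration of the computation a conforming host must perform (it is not the paper's measurement code); a receiver that runs the same routine over a received segment, with the transmitted checksum left in place, must obtain zero.

        #include <stddef.h>
        #include <stdint.h>

        /* One's-complement Internet checksum (RFC 1071).  For TCP, `data`
         * would cover the pseudo-header, the TCP header with the checksum
         * field zeroed, and the payload. */
        static uint16_t inet_checksum(const void *data, size_t len)
        {
            const uint8_t *p = data;
            uint32_t sum = 0;

            while (len > 1) {                 /* sum 16-bit words */
                sum += ((uint32_t)p[0] << 8) | p[1];
                p += 2;
                len -= 2;
            }
            if (len == 1)                     /* pad a trailing odd byte */
                sum += (uint32_t)p[0] << 8;

            while (sum >> 16)                 /* fold carries into 16 bits */
                sum = (sum & 0xFFFF) + (sum >> 16);

            return (uint16_t)~sum;            /* one's complement of the sum */
        }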

    An erasure-resilient and compute-efficient coding scheme for storage applications

    Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development, it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems, with the result that current large-scale storage systems are highly distributed and comprise a large number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes used to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieving failure tolerance is to calculate the block-wise parity in one or more dimensions. With a better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for ever-growing distributed systems. The use of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map well onto current commodity processors. This thesis introduces a versatile coding scheme with fully adjustable fault tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version. This arithmetically intense algorithm is better suited to the wide SIMD units of currently available general-purpose processors, and it also displays significant benefits when used with modern many-core accelerator devices (for instance, the popular general-purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented. The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite-field elements. This property has been used to create suitable matrices that generate a linear systematic erasure-correcting code with significantly increased multiplication performance for the relevant matrix elements. Several approaches to obtaining the optimized generator matrices are elaborated and their implications are discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to special storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
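
    As a sketch of the central idea, the following C routine multiplies two elements of GF(2^8) with pure shift-and-XOR polynomial arithmetic instead of log/antilog table lookups. The abstract does not state the exact field size or reduction polynomial used in the thesis, so the common Reed-Solomon polynomial 0x11D is assumed here purely for illustration; a systematic erasure code would then compute its parity blocks as matrix-vector products over this field using such a multiplication kernel.

        #include <stdint.h>

        /* Assumed reduction polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D);
         * the thesis's actual field parameters are not given in the abstract. */
        #define GF_POLY 0x11Du

        /* Multiply two GF(2^8) elements with carry-less (polynomial)
         * arithmetic.  Each iteration is a branch-light shift/XOR step, the
         * kind of operation that vectorizes well on wide SIMD units when
         * applied to many elements in parallel. */
        static uint8_t gf256_mul(uint8_t a, uint8_t b)
        {
            uint8_t result = 0;

            while (b) {
                if (b & 1)
                    result ^= a;      /* add (XOR) the current multiple of a */
                b >>= 1;
                /* multiply a by x and reduce modulo GF_POLY on overflow */
                a = (uint8_t)((a << 1) ^ ((a & 0x80) ? (GF_POLY & 0xFF) : 0));
            }
            return result;
        }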

    TEDDI: Tamper Event Detection on Distributed Cyber-Physical Systems

    Edge devices, or embedded devices installed along the periphery of a power grid SCADA network, pose a significant threat to the grid, as they give attackers a convenient entry point to access and cause damage to other essential equipment in substations and control centers. Grid defenders would like to protect these edge devices from being accessed and tampered with, but they are hindered by the grid defender's dilemma; more specifically, the range and nature of tamper events faced by the grid (particularly distributed events), the prioritization of grid availability, the high costs of improper responses, and the resource constraints of both grid networks and the defenders that run them make prior work in the tamper and intrusion protection fields infeasible to apply. In this thesis, we give a detailed description of the grid defender's dilemma and introduce TEDDI (Tamper Event Detection on Distributed Infrastructure), a distributed, sensor-based tamper protection system built to solve this dilemma. TEDDI's distributed architecture and use of a factor graph fusion algorithm give grid defenders the power to detect and differentiate between tamper events, and also give defenders the flexibility to tailor specific responses to each event. We also propose the TEDDI Generation Tool, which captures the defender's intuition about tamper events and assists defenders in constructing a custom TEDDI system for their network. To evaluate TEDDI, we collected and constructed twelve different tamper scenarios and show how TEDDI can detect all of these events and solve the grid defender's dilemma. In our experiments, TEDDI demonstrated an event detection accuracy of over 99% at both the information and decision point levels, and could process a 99-node factor graph in under 233 microseconds. We also analyzed the time and resources needed to use TEDDI, and show how it requires less up-front configuration effort than current tamper protection solutions.
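
    The abstract does not spell out TEDDI's factor graph structure or parameters, so the following C sketch only illustrates the elementary fusion step that a factor graph generalizes: potentials contributed by two sensors over a single binary tamper variable are multiplied with a prior and normalized into a posterior. All numbers are placeholders, not values from TEDDI.

        #include <stdio.h>

        /* Illustration of factor-graph style fusion over one binary variable
         * T (0 = benign, 1 = tamper).  TEDDI's actual graph, factors, and
         * thresholds are not described in the abstract; the numbers below
         * are placeholders. */
        int main(void)
        {
            /* factor[s][t] = potential of sensor s's reading given T = t */
            const double factor[2][2] = {
                { 0.70, 0.30 },   /* e.g. enclosure switch: mildly benign reading */
                { 0.10, 0.90 },   /* e.g. accelerometer: strongly tamper-like reading */
            };
            const double prior[2] = { 0.95, 0.05 };   /* placeholder prior over T */

            double unnorm[2], z = 0.0;
            for (int t = 0; t < 2; t++) {
                unnorm[t] = prior[t];
                for (int s = 0; s < 2; s++)
                    unnorm[t] *= factor[s][t];        /* multiply incoming factors */
                z += unnorm[t];
            }
            printf("P(tamper | readings) = %.3f\n", unnorm[1] / z);
            return 0;
        }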

    Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems

    This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies, which is being held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center, March 23-26, 1998. As one of an ongoing series, this conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, and vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.

    Defending against Return-Oriented Programming

    Return-oriented programming (ROP) has become the primary exploitation technique for system compromise in the presence of non-executable page protections. ROP exploits are facilitated mainly by the lack of complete address space randomization coverage or the presence of memory disclosure vulnerabilities, necessitating additional ROP-specific mitigations. Existing defenses against ROP exploits either require source code or symbolic debugging information, or impose a significant runtime overhead, which limits their applicability for the protection of third-party applications. We propose two novel techniques to prevent ROP exploits on third-party applications without requiring their source code or debug symbols, while at the same time incurring a minimal performance overhead. Their effectiveness is based on breaking an invariant of ROP attacks, the attacker's knowledge of the code layout, and a common characteristic, the unrestricted use of indirect branches. When combined, the two techniques retain their applicability and efficiency while maximizing the protection coverage against ROP. The first technique, in-place code randomization, uses narrow-scope code transformations that can be applied statically, without changing the location of basic blocks, allowing the safe randomization of stripped binaries even with partial disassembly coverage. These transformations effectively eliminate 10%, and probabilistically break 80%, of the useful instruction sequences found in a large set of PE files. Since no additional code is inserted, in-place code randomization does not incur any measurable runtime overhead, enabling it to be easily used in tandem with existing exploit mitigations such as address space layout randomization. Our evaluation using publicly available ROP exploits and two ROP code generation toolkits demonstrates that our technique prevents the exploitation of the tested vulnerable Windows 7 applications, including Adobe Reader, as well as the automated construction of alternative ROP payloads that aim to circumvent in-place code randomization using solely the remaining unaffected instruction sequences. The second technique is based on the detection of abnormal control transfers that take place during ROP code execution. This is achieved using hardware features of commodity processors, which incur negligible runtime overhead and allow for completely transparent operation without requiring any modifications to the protected applications. Our implementation for Windows 7, named kBouncer, can be selectively enabled for installed programs in the same fashion as user-friendly mitigation toolkits like Microsoft's EMET. The results of our evaluation demonstrate that kBouncer has a low runtime overhead of up to 4% when stressed with specially crafted workloads that continuously trigger its core detection component, while it has negligible overhead for actual user applications. In our experiments with in-the-wild ROP exploits, kBouncer successfully protected all tested applications, including Internet Explorer, Adobe Flash Player, and Adobe Reader. In addition, we introduce a technique that enables ASLR for executables with stripped relocation information by incrementally adjusting stale absolute addresses at runtime. The technique relies on runtime monitoring of memory accesses and control flow transfers to the original location of a module using page table manipulation. We have implemented a prototype of the proposed technique for Windows 8, which is transparently applicable to third-party stripped binaries. Our results demonstrate that incremental runtime relocation patching is practical, incurs a runtime overhead of up to 83% in most cases for initial runs of protected programs, and has a low runtime overhead of 5% on subsequent runs.
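
    A concrete example of an abnormal-transfer check of this kind is verifying that a return lands immediately after a call site, a property that ROP gadget chains routinely violate. The C fragment below is a deliberately simplified illustration, not kBouncer's implementation: it only pattern-matches a few common x86 call encodings before the return target and, without the processor's branch-tracing hardware and proper instruction decoding, can misfire on coincidental byte patterns.

        #include <stdbool.h>
        #include <stdint.h>

        /* Simplified "call-preceded" test for a return target: a legitimate
         * return should land right after a call instruction.  Only two common
         * x86 call encodings are matched here; this is illustrative, not a
         * robust or complete decoder. */
        static bool is_call_preceded(const uint8_t *target)
        {
            /* E8 rel32: 5-byte near relative call */
            if (target[-5] == 0xE8)
                return true;

            /* FF /2 (indirect near call) or FF /3 (indirect far call):
             * 2 to 7 bytes depending on the ModRM/SIB/displacement encoding */
            for (int len = 2; len <= 7; len++) {
                uint8_t reg = (target[-len + 1] >> 3) & 7;
                if (target[-len] == 0xFF && (reg == 2 || reg == 3))
                    return true;
            }
            return false;
        }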

    Lightweight Protocols and Applications for Memory-Based Intrinsic Physically Unclonable Functions on Commercial Off-The-Shelf Devices

    We are currently living in an era in which the Internet of Things manifests itself through the ever-increasing dissemination of interconnected embedded devices. Although such end-point devices are commonly labeled as "smart gadgets" and thus suggest that they implement some sort of intelligence, from a cyber-security point of view more often than not the opposite holds. Market forces in the field of commercial embedded devices push towards minimizing production costs and time-to-market. This widespread trend has a direct, disastrous impact on the security properties of such devices. The majority of devices currently in use, as well as those that will be produced in the future, implement no or only insufficient security mechanisms. Above all, the lack of secure hardware components often prevents the application of secure protocols and applications. This work is dedicated to a fundamental approach that allows commercial off-the-shelf devices, which otherwise are exposed to various attacks due to the lack of secure hardware components, to be secured retroactively. In particular, we leverage the concept of Physically Unclonable Functions (PUFs) to create hardware-based security anchors in standard hardware components. For this purpose, we exploit manufacturing variations in Static Random-Access Memory (SRAM) and Dynamic Random-Access Memory (DRAM) modules to extract intrinsic memory-based PUF instances and, building on that, to develop secure and lightweight protocols and applications. To this end, we empirically evaluate selected and representative device types with respect to their PUF characteristics. In a further step, we use those device types that qualify, due to the existence of the desired PUF instances, for the subsequent development of security applications and protocols. Subsequently, we present various software-based security solutions that are specially tailored to the characteristic properties of embedded devices. More precisely, the proposed solutions comprise a secure boot architecture as well as an approach to protect the integrity of the firmware by binding it to the underlying hardware. Furthermore, we present a lightweight authentication protocol that leverages a novel DRAM-based PUF type. Finally, we propose a protocol that allows the software state of remote embedded devices to be securely verified.
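
    As a sketch of how a memory-based PUF can be turned into a device fingerprint, the following C helpers majority-vote several (hypothetical) SRAM power-up readouts into a reference pattern at enrollment and later compare a fresh readout by Hamming distance. This is only a baseline illustration under assumed parameters; practical schemes typically add helper data and error correction (a fuzzy extractor) to derive stable keys, which is omitted here.

        #include <stddef.h>
        #include <stdint.h>

        #define PUF_BYTES 128   /* illustrative fingerprint length */

        /* Enrollment: majority-vote bit-by-bit over `n` power-up readouts to
         * obtain a noise-reduced reference fingerprint.  Readout acquisition
         * itself is device-specific and not shown. */
        void puf_enroll(const uint8_t readouts[][PUF_BYTES], size_t n,
                        uint8_t reference[PUF_BYTES])
        {
            for (size_t i = 0; i < PUF_BYTES; i++) {
                uint8_t out = 0;
                for (int bit = 0; bit < 8; bit++) {
                    size_t ones = 0;
                    for (size_t r = 0; r < n; r++)
                        ones += (readouts[r][i] >> bit) & 1;
                    if (2 * ones > n)      /* bit is 1 in the majority of readouts */
                        out |= (uint8_t)(1u << bit);
                }
                reference[i] = out;
            }
        }

        /* Matching: count differing bits between a fresh readout and the
         * reference; the device is accepted if the distance stays below a
         * noise threshold chosen during the empirical device evaluation. */
        size_t puf_distance(const uint8_t a[PUF_BYTES], const uint8_t b[PUF_BYTES])
        {
            size_t dist = 0;
            for (size_t i = 0; i < PUF_BYTES; i++) {
                uint8_t diff = a[i] ^ b[i];
                while (diff) {             /* Kernighan popcount */
                    diff &= (uint8_t)(diff - 1);
                    dist++;
                }
            }
            return dist;
        }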