
    Transparent code authentication at the processor level

    The authors present a lightweight authentication mechanism that verifies the authenticity of code and thereby addresses the virus and malicious-code problems at the hardware level, eliminating the need for trusted extensions in the operating system. The proposed technique tightly integrates the authentication mechanism into the processor core. The authentication latency is hidden behind the memory access latency, allowing seamless on-the-fly authentication of instructions. In addition, the proposed authentication method supports seamless encryption of code (and static data). Consequently, while providing software users with assurance of the authenticity of programs executing on their hardware, the proposed technique also protects software manufacturers’ intellectual property through encryption. The performance analysis shows that, under mild assumptions, the presented technique introduces negligible overhead even for moderate cache sizes.
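
    As an illustration of the idea, the sketch below models per-cache-line verification in software: each instruction cache line carries a small tag computed over the line and its address, and the tag is checked when the line is fetched. The choice of HMAC-SHA256, the 64-byte line size, and the function names are assumptions for illustration only; the paper's actual MAC construction, key handling, and pipeline integration are not reproduced here, and in hardware the tag computation would overlap the memory access rather than run after it.

```python
import hmac
import hashlib

LINE_SIZE = 64  # assumed instruction cache line size in bytes


def line_tag(key: bytes, address: int, line: bytes) -> bytes:
    """MAC over the cache line and its address, so an attacker can neither
    modify a line nor relocate a valid line to a different address."""
    msg = address.to_bytes(8, "little") + line
    return hmac.new(key, msg, hashlib.sha256).digest()[:8]  # truncated tag


def fetch_and_authenticate(key: bytes, address: int, line: bytes, tag: bytes) -> bytes:
    """Check the tag stored alongside the line before its instructions execute;
    decryption of the line would also happen here if the code is shipped encrypted."""
    if not hmac.compare_digest(line_tag(key, address, line), tag):
        raise RuntimeError(f"code authentication failed at 0x{address:x}")
    return line


# Usage: a trusted installation step precomputes a tag for every line of the binary.
key = bytes(32)                      # per-platform secret key (placeholder)
code_line = b"\x90" * LINE_SIZE      # 64 bytes of dummy instructions
tag = line_tag(key, 0x400000, code_line)
fetch_and_authenticate(key, 0x400000, code_line, tag)  # passes; tampering would raise
```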

    Sequential Circuit Design for Embedded Cryptographic Applications Resilient to Adversarial Faults

    In the relatively young field of fault-tolerant cryptography, the main research effort has focused exclusively on protecting the data path of cryptographic circuits. To date, however, we have not found any work that aims at protecting the control logic of these circuits against fault attacks, which thus remains the proverbial Achilles’ heel. Motivated by a hypothetical yet realistic fault-analysis attack that, in principle, could be mounted against any modular exponentiation engine, even one with appropriate data-path protection, we set out to close this remaining gap. In this paper, we present guidelines for the design of multi-fault-resilient sequential control logic based on standard Error-Detecting Codes (EDCs) with large minimum distance. We introduce a metric that measures the effectiveness of the error-detection technique in terms of the effort the attacker has to invest relative to the area overhead spent implementing the EDC. Our comparison shows that the proposed EDC-based technique outperforms regular N-modular redundancy techniques. Furthermore, our technique scales well and does not affect the critical-path delay.
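
    A minimal software sketch of the underlying idea, not the authors' design: encode the controller's state register with an error-detecting code of large minimum distance and raise an alarm whenever the register holds a non-codeword. The toy four-state modular-exponentiation controller and the simple repetition code below are assumptions chosen for brevity; any injected fault that flips fewer bits than the code's minimum distance leaves the register outside the codeword set and is detected.

```python
# Toy controller states for a modular exponentiation engine (hypothetical).
STATES = ["IDLE", "LOAD", "SQUARE", "MULTIPLY"]
COPIES = 3  # repetition factor; the codeword set has minimum Hamming distance 3


def encode(state_index: int) -> int:
    """Repetition-encode the 2-bit state into COPIES concatenated copies."""
    word = 0
    for i in range(COPIES):
        word |= state_index << (2 * i)
    return word


VALID = {encode(i): i for i in range(len(STATES))}  # the only legal register values


def check_and_decode(state_reg: int) -> int:
    """Alarm if the state register holds a non-codeword, i.e. if an injected
    fault flipped fewer bits than the code's minimum distance."""
    if state_reg not in VALID:
        raise RuntimeError("fault detected in control-logic state register")
    return VALID[state_reg]


# Usage: a single adversarial bit flip on the encoded register is detected.
reg = encode(STATES.index("SQUARE"))
reg ^= 1 << 3                        # injected fault
try:
    check_and_decode(reg)
except RuntimeError as alarm:
    print(alarm)
```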

    Autonomic management of virtualized resources in cloud computing

    The last five years have witnessed rapid growth of cloud computing in business, governmental, and educational IT deployment. The success of cloud services depends critically on the effective management of virtualized resources, and a key requirement of cloud management is the ability to dynamically match resource allocations to actual demands. To this end, we aim to design and implement a cloud resource management mechanism that manages the underlying complexity, automates resource provisioning, and controls client-perceived quality of service (QoS) while still achieving resource efficiency. The design of automatic resource management centers on two questions: when to adjust resource allocations and by how much. In a cloud, applications have different definitions of capacity, and cloud dynamics make it difficult to determine a static resource-to-performance relationship. In this dissertation, we have proposed a generic metric that measures application capacity, designed model-independent and adaptive approaches to manage resources, and built a cloud management system scalable to a cluster of machines.

    To understand web system capacity, we propose the productivity index (PI), defined as the ratio of yield to cost, as a metric for measuring system processing capability online. PI is a generic concept that can be applied at different levels to monitor system progress and identify whether more capacity is needed. We applied the concept of PI to the problem of overload prevention in multi-tier websites. The overload predictor built on the PI metric provides more accurate and responsive overload prevention than conventional approaches.

    To address the lack of an accurate server model, we propose a model-independent, fuzzy-control-based approach to CPU allocation. For adaptive and stable control performance, we equip the controller with self-tuning output amplification and flexible rule selection. In addition, we build a QoS provisioning framework that supports multi-objective QoS control and service differentiation. Experiments on a virtual cluster with two service classes show the effectiveness of our approach in both performance and power control.

    To address the complex interplay between resources and process delays in fine-grained multi-resource allocation, we treat capacity management as a decision-making problem and employ reinforcement learning (RL) to optimize the process. The optimization relies on trial-and-error interactions with the cloud system. To improve initial management performance, we propose a model-based RL algorithm in which a neural-network-based environment model, learned from previous management history, generates simulated resource allocations for the RL agent. Experimental results on heterogeneous applications show that our approach makes efficient use of limited interactions and finds near-optimal resource configurations within 7 steps.

    Finally, we present a distributed reinforcement learning approach to cluster-wide cloud resource management. We decompose the cluster-wide resource allocation problem into sub-problems concerning individual VM resource configurations; the cluster-wide allocation is optimized when individual VMs meet their SLAs with high resource utilization. For scalability, we develop an efficient reinforcement learning approach with a continuous state space, and for adaptability we use low-level VM runtime statistics to accommodate workload dynamics. Prototyped in the iBalloon system, the distributed learning approach successfully manages 128 VMs on a 16-node closely correlated cluster.
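
    The abstract defines the productivity index as the ratio of yield to cost; the sketch below shows one way such a metric could drive a capacity decision. The sliding window, the choice of completed requests and CPU seconds as yield and cost, and the threshold are illustrative assumptions, not the dissertation's actual parameters.

```python
from collections import deque


class ProductivityIndex:
    """Sliding-window productivity index PI = yield / cost. Here yield is taken
    to be completed requests and cost to be consumed CPU seconds (assumptions)."""

    def __init__(self, window: int = 30):
        self.samples = deque(maxlen=window)

    def record(self, completed_requests: float, cpu_seconds: float) -> None:
        self.samples.append((completed_requests, cpu_seconds))

    def value(self) -> float:
        total_yield = sum(y for y, _ in self.samples)
        total_cost = sum(c for _, c in self.samples)
        return total_yield / total_cost if total_cost > 0 else float("inf")


def needs_more_capacity(pi: ProductivityIndex, threshold: float = 0.8) -> bool:
    """Flag the tier when productivity degrades, i.e. when additional cost no
    longer translates into yield (threshold is an illustrative value)."""
    return pi.value() < threshold


# Usage: feed per-interval measurements collected from a web tier.
pi = ProductivityIndex()
pi.record(completed_requests=700, cpu_seconds=1000)
if needs_more_capacity(pi):
    print("allocate more capacity or throttle admission to prevent overload")
```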

    Biophotonic Tools in Cell and Tissue Diagnostics.

    In order to maintain the rapid advance of biophotonics in the U.S. and enhance our competitiveness worldwide, key measurement tools must be in place. As part of a wide-reaching effort to improve the U.S. technology base, the National Institute of Standards and Technology sponsored a workshop titled "Biophotonic tools for cell and tissue diagnostics." The workshop focused on diagnostic techniques involving the interaction between biological systems and photons. Through invited presentations by industry representatives and panel discussion, near- and far-term measurement needs were evaluated. As a result of this workshop, this document has been prepared on the measurement tools needed for biophotonic cell and tissue diagnostics. It will become part of the larger measurement road-mapping effort to be presented to the Nation as an assessment of the U.S. Measurement System. The information will be used to highlight measurement needs to the community and to facilitate solutions.

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

    Attack Resilience and Recovery using Physical Challenge Response Authentication for Active Sensors Under Integrity Attacks

    Embedded sensing systems are pervasively used in life- and security-critical systems such as those found in airplanes, automobiles, and healthcare. Traditional security mechanisms for these sensors focus on data encryption and other post-processing techniques, but the sensors themselves often remain vulnerable to attacks in the physical/analog domain. If an adversary manipulates a physical/analog signal prior to digitization, no amount of digital security mechanisms after the fact can help. Fortunately, nature imposes fundamental constraints on how these analog signals can behave. This work presents PyCRA, a physical challenge-response authentication scheme designed to protect active sensing systems against physical attacks occurring in the analog domain. PyCRA provides security for active sensors by continually challenging the surrounding environment via random but deliberate physical probes. By analyzing the responses to these probes, and by using the fact that the adversary cannot change the underlying laws of physics, we provide an authentication mechanism that not only detects malicious attacks but also provides resilience against them. We demonstrate the effectiveness of PyCRA through several case studies using two sensing systems: (1) magnetic sensors like those found in wheel speed sensors in robotics and automotive systems, and (2) commercial RFID tags used in many security-critical applications. Finally, we outline methods and theoretical proofs for further enhancing the resilience of PyCRA to active attacks by means of a confusion phase: a period of low signal-to-noise ratio that makes it more difficult for an attacker to correctly identify and respond to PyCRA's physical challenges. In doing so, we evaluate both the robustness and the limitations of PyCRA, concluding by outlining practical considerations as well as further applications for the proposed authentication mechanism.
    Comment: A shorter version appeared in the ACM Conference on Computer and Communications Security (CCS), 2015.
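
    The sketch below illustrates the challenge-response idea in simplified form: silence the sensor's own probe at randomly chosen instants and treat any significant energy observed during those instants as evidence of an injected signal. The hooks, thresholds, and the simulated attacker are hypothetical; PyCRA's actual detection and confusion-phase machinery is more sophisticated than this.

```python
import random


def physical_challenge_check(measure, set_probe, n_rounds=20, noise_floor=0.05):
    """Toy challenge-response loop in the spirit of PyCRA (not the paper's
    actual detector): turn the sensor's own probe off at random instants;
    any signal energy observed while the probe is off cannot be a genuine
    physical response to the probe and therefore flags an active attacker."""
    for _ in range(n_rounds):
        if random.random() < 0.5:      # unpredictable challenge: probe off
            set_probe(False)
            if abs(measure()) > noise_floor:
                return False           # energy with probe off -> spoofing detected
            set_probe(True)
    return True                        # responses consistent with physics


# Simulated hooks (hypothetical): an attacker injecting a constant spoof signal.
probe_on = True


def set_probe(on: bool) -> None:
    global probe_on
    probe_on = on


def measure() -> float:
    spoof = 0.8                        # attacker's injected signal
    echo = 1.0 if probe_on else 0.0    # genuine environmental response to our probe
    return echo + spoof + random.gauss(0.0, 0.01)


print("authentic" if physical_challenge_check(measure, set_probe) else "attack detected")
```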