
    Recent trends in applying TPM to cloud computing

    Trusted platform modules (TPM) have become important safeguards against a variety of software-based attacks. By providing a limited set of cryptographic services through a well-defined interface, separated from the software itself, the TPM can serve as a root of trust and as a building block for higher-level security measures. This article surveys the literature on applications of the TPM in the cloud-computing environment, covering publications from 2013 to 2018. It identifies the current trends and objectives of this technology in the cloud, and the types of threats it mitigates. Toward the end, the main research gaps are pinpointed and discussed. Since integrity measurement is one of the main usages of the TPM, special attention is paid to the run-time phases and software layers it is applied to.
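    The integrity measurement this survey highlights rests on the TPM's PCR "extend" operation, a hash chain that commits to an ordered sequence of boot-time measurements. A minimal sketch of that chain, modelled in pure Python (a real TPM performs it in hardware; the stage names here are illustrative):

```python
# Sketch of the TPM PCR extend hash chain underlying integrity measurement.
# new PCR = SHA-256(old PCR || SHA-256(measurement))
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """Fold one measurement into a PCR value."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# Each boot stage is hashed into the same PCR, so the final value
# commits to the whole sequence, in order.
pcr0 = bytes(32)  # PCRs start zeroed at platform reset
for stage in [b"bootloader", b"kernel", b"initrd"]:
    pcr0 = pcr_extend(pcr0, stage)

print(pcr0.hex())
```

    Because the chain is order-sensitive, a verifier comparing the final PCR value against a known-good reference detects any modified, missing, or reordered stage.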

    Research on Linux Trusted Boot Method Based on Reverse Integrity Verification


    Securing Critical Infrastructures

    The abstract is in the attachment. (Carelli, Albert)

    Authentication and Data Protection under Strong Adversarial Model

    We are interested in addressing a series of existing and plausible threats to cybersecurity where the adversary possesses unconventional attack capabilities. Such unconventionality includes, in our exploration but not limited to, crowd-sourcing, physical/juridical coercion, substantial (but bounded) computational resources, malicious insiders, etc. Our studies show that unconventional adversaries can be counteracted with a special anchor of trust and/or a paradigm shift on a case-specific basis. Complementing cryptography, hardware security primitives are the last defense in the face of co-located (physical) and privileged (software) adversaries, hence serving as the special trust anchor. Examples of hardware primitives are architecture-shipped features (e.g., with CPU or chipsets), security chips or tokens, and certain features on peripheral/storage devices. We also propose changes of paradigm in conjunction with hardware primitives, such as containing attacks instead of counteracting, pretended compliance, and immunization instead of detection/prevention. In this thesis, we demonstrate how our philosophy is applied to cope with several exemplary scenarios of unconventional threats, and elaborate on the prototype systems we have implemented. Specifically, Gracewipe is designed for stealthy and verifiable secure deletion of on-disk user secrets under coercion; Hypnoguard protects in-RAM data when a computer is in sleep (ACPI S3) in case of various memory/guessing attacks; Uvauth mitigates large-scale human-assisted guessing attacks by receiving all login attempts in an indistinguishable manner, i.e., correct credentials in a legitimate session and incorrect ones in a plausible fake session; Inuksuk is proposed to protect user files against ransomware or other authorized tampering. It augments the hardware access control on self-encrypting drives with trusted execution to achieve data immunization. 
We have also extended the Gracewipe scenario to a network-based enterprise environment, aiming to address slightly different threats, e.g., malicious insiders. We believe the high-level methodology of these research topics can contribute to advancing security research under strong adversarial assumptions, and to promoting software-hardware orchestration in protecting execution integrity.

    Economics-driven approach for self-securing assets in cloud

    This thesis proposes the engineering of an elastic, self-adaptive security solution for the Cloud that treats assets as independent entities with a need for customised, ad-hoc security. The solution exploits agent-based, market-inspired methodologies and learning approaches to manage the changing security requirements of assets, considering the shared and on-demand nature of services and resources while catering for monetary and computational constraints. The use of auction procedures allows the proposed framework to deal with the scale of the problem and the trade-offs that can arise between users and Cloud service provider(s), while the use of a learning technique enables it to operate in a proactive, automated fashion and to arrive at more efficient bidding plans, informed by historical data. A variant of the proposed framework, grounded on a simulated university application environment, was developed to evaluate the applicability and effectiveness of this solution. As the proposed solution is grounded on market methods, this thesis is also concerned with asserting the dependability of market mechanisms. We follow an experimentally driven approach to demonstrate the deficiency of existing market-oriented solutions in facing common market-specific security threats, and provide candidate lightweight defensive mechanisms for securing them against these attacks.
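    The abstract does not specify which auction procedure the framework uses, but the general idea of market-based allocation can be sketched with a sealed-bid second-price (Vickrey) auction. The auction rule, agent names, and bid values below are illustrative assumptions, not details from the thesis:

```python
# Hypothetical sketch: asset agents bid for a scarce security service
# (e.g., an extra monitoring slot on a shared cloud node). The winner
# pays the second-highest bid, which makes truthful bidding rational.

def vickrey_auction(bids):
    """bids: {agent: amount}. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

winner, price = vickrey_auction({"db-asset": 4.0, "web-asset": 2.5, "ml-asset": 3.0})
print(winner, price)  # db-asset wins, pays 3.0
```

    A learning component, as described above, would adjust each agent's bids over successive rounds based on past allocation outcomes.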

    A Grey Theory Based Approach to Big Data Risk Management Using FMEA

    Big data is the term used to denote enormous sets of data that differ from other classic databases in four main ways: (huge) volume, (high) velocity, (much greater) variety, and (big) value. In general, data are stored in a distributed fashion and on computing nodes, as a result of which big data may be more susceptible to attacks by hackers. This paper presents a risk model for big data, which combines Failure Mode and Effects Analysis (FMEA) with Grey Theory, more precisely grey relational analysis. This approach has several advantages: it provides a structured way to incorporate the impact of big data risk factors; it facilitates the assessment of risk by breaking down the overall risk to big data; and its efficient evaluation criteria can help enterprises reduce the risks associated with big data. To illustrate the applicability of the proposal in practice, a numerical example, with realistic data based on expert knowledge, was developed. The numerical example analyzes four dimensions, that is, managing identification and access, registering the device and application, managing the infrastructure, and data governance, and 20 failure modes concerning the vulnerabilities of big data. The results show that the most important aspect of risk to big data relates to data governance.
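    The grey relational analysis the abstract mentions can be sketched as follows. Each failure mode is rated against a worst-case reference series; a higher grey relational grade means the mode is closer to the worst case, i.e., higher risk. The failure-mode names and ratings below are illustrative, not the paper's data:

```python
# Sketch of grey relational analysis (GRA) for ranking FMEA failure modes.
# Ratings are 1-10 on (severity, occurrence, detection); the reference
# series (10, 10, 10) is the worst case.

RHO = 0.5  # distinguishing coefficient, conventionally 0.5

def grey_relational_grades(series, reference):
    deltas = [[abs(x - r) for x, r in zip(s, reference)] for s in series]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    # Grey relational coefficient per criterion, then average into a grade.
    coeffs = [[(d_min + RHO * d_max) / (d + RHO * d_max) for d in row]
              for row in deltas]
    return [sum(row) / len(row) for row in coeffs]

modes = {
    "weak access control": (8, 6, 4),
    "unregistered device": (5, 7, 6),
    "poor data governance": (9, 8, 7),
}
grades = grey_relational_grades(list(modes.values()), reference=(10, 10, 10))
for name, g in sorted(zip(modes, grades), key=lambda p: -p[1]):
    print(f"{name}: {g:.3f}")
```

    With these illustrative ratings, the data-governance mode receives the highest grade, mirroring the abstract's conclusion that data governance carries the greatest risk.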

    Identity and Privacy Governance

