
    Sharing Independence & Relabeling: Efficient Formal Verification of Higher-Order Masking

    The efficient verification of the security of masked hardware implementations is an important issue that hinders the development and deployment of randomness-efficient masking techniques. At EUROCRYPT 2018, Bloem et al. [6] introduced the first practical formal tool to prove the side-channel resilience of masked circuits in the probing model with glitches. More recently, Barthe et al. [2] introduced a more efficient formal tool that builds upon the findings of Bloem et al. for modeling the effects of glitches. While Barthe et al.'s approach greatly improves first-order verification performance, it shows that higher-order verification in the probing model with glitches remains enormously time-consuming for larger circuits such as a second-order AES S-box. Furthermore, the results of Barthe et al. underline the discrepancy between state-of-the-art formal security notions that allow for faster verification of circuits, namely the strong non-interference (SNI) notion, and existing masked hardware implementations that are secure in the probing model with glitches. In this work, we extend and improve the formal approaches of Bloem et al. and Barthe et al. on multiple levels. We first introduce a so-called sharing independence notion which helps to reason about the independence of shared variables. We then show how to use this notion to test for the independence of input and output sharings of a module, which allows speeding up the formal verification of circuits that do not fulfill the SNI notion. With this extension, we are for the first time able to verify the security of a second-order masked DOM AES S-box, which takes about 3 seconds, and up to a fifth-order AES S-box, which requires about 47 days of verification. Furthermore, we discuss in which cases the independence of input and output sharings leads to composability.
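
    To make the terminology concrete, the following is a minimal Python sketch of first-order Boolean masking with an ISW/DOM-style masked AND. It is an illustration of the general technique only, not the paper's gadgets; in particular, it omits the registers that DOM places between computation stages to stop glitch propagation.

        import secrets

        def share(x: int) -> tuple[int, int]:
            """Split bit x into a 2-share Boolean sharing with x0 ^ x1 == x.
            Each share on its own is uniformly random."""
            x0 = secrets.randbits(1)
            return x0, x0 ^ x

        def masked_and(x, y, r):
            """First-order masked AND using one fresh random bit r to
            refresh the cross terms. Returns a sharing of x & y."""
            x0, x1 = x
            y0, y1 = y
            z0 = (x0 & y0) ^ ((x0 & y1) ^ r)
            z1 = (x1 & y1) ^ ((x1 & y0) ^ r)
            return z0, z1

        # Recombining the output shares yields the unmasked AND.
        x, y = share(1), share(1)
        r = secrets.randbits(1)
        z0, z1 = masked_and(x, y, r)
        assert z0 ^ z1 == 1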

    Formal Verification of Masking Countermeasures for Arithmetic Programs

    Cryptographic algorithms are widely used to protect data privacy in many aspects of daily life. Unfortunately, programs implementing cryptographic algorithms may be vulnerable to practical power side-channel attacks, which may infer private data via statistical analysis. To thwart these attacks, several masking schemes have been proposed, giving rise to effective countermeasures for reducing the statistical correlation between private data and power consumption. However, programs that rely on secure masking schemes are not secure a priori. Indeed, designing effective masking programs is a labor-intensive and error-prone task. Although some techniques have been proposed for formally verifying masking countermeasures and for quantifying masking strength, they are currently limited to Boolean programs and suffer from low accuracy. In this work, we propose an approach for formally verifying masking countermeasures of arithmetic programs. Our approach is more accurate for arithmetic programs and more scalable for Boolean programs compared to the existing approaches. It is essentially a synergistic integration of type inference and model-counting based methods, armed with domain-specific heuristics. The type inference system allows a fast deduction of leakage-freeness of most intermediate computations, the model-counting based methods account for completeness, namely, to eliminate spurious flaws, and the heuristics facilitate both type inference and model-counting based reasoning, which improves scalability and efficiency in practice. In case the program does contain leakage, we provide a method to quantify its masking strength. A distinguished feature of our type system lies in its support of compositional reasoning when verifying programs with procedure calls, so the need for inlining procedures can be significantly reduced. We have implemented our methods in a verification tool QMVERIF which has been extensively evaluated on cryptographic benchmarks including full AES, DES and MAC-Keccak. The experimental results demonstrate the effectiveness and efficiency of our approach, especially for compositional reasoning. In particular, our tool is able to automatically prove leakage-freeness of arithmetic programs for which only manual proofs existed so far; it is also significantly faster than the state-of-the-art tools: EasyCrypt on common arithmetic programs, and QMSINFER, SC Sniffer and maskVerif on Boolean programs.
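
    The model-counting idea can be sketched in a few lines of Python. This is an assumption-laden toy, not QMVERIF's actual algorithm: an intermediate expression is leakage-free if, for every value of the secret, its distribution over all assignments to the random masks is identical.

        from itertools import product
        from collections import Counter

        def distribution(expr, secret, n_masks):
            """Count how often expr(secret, masks) takes each value,
            enumerating all assignments to the mask bits."""
            return Counter(expr(secret, masks)
                           for masks in product((0, 1), repeat=n_masks))

        def leakage_free(expr, n_masks):
            """True iff the distribution is independent of the secret bit."""
            dists = [distribution(expr, s, n_masks) for s in (0, 1)]
            return dists[0] == dists[1]

        # x ^ r is perfectly masked; x & r is not (its distribution
        # depends on the secret x).
        assert leakage_free(lambda x, m: x ^ m[0], n_masks=1)
        assert not leakage_free(lambda x, m: x & m[0], n_masks=1)

    Exhaustive enumeration is exponential in the number of masks, which is why a fast, incomplete type inference pass handles most intermediate computations first and the counting step is reserved for the leftovers.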

    A Methodology for the Characterisation of Leakages in Combinatorial Logic

    Glitches represent a great danger for hardware implementations of cryptographic schemes. Their intrinsic random nature makes them difficult to tackle, and their occurrence threatens side-channel protections. Although countermeasures aiming at structurally solving the problem already exist, they usually require some effort to be applied or introduce non-negligible overhead in the design. Our work addresses the gap between such countermeasures and naïve implementations of schemes that are vulnerable in the presence of glitches. Our contribution is twofold: (1) we expand the mathematical framework proposed by Brzozowski and Ésik (FMSD 2003) by meaningfully adding the notion of information leakage, (2) thanks to which we define a formal methodology for the analysis of vulnerabilities in combinatorial circuits when glitches are taken into account.
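
    A toy Python illustration of why glitches leak, in the spirit of (but not reproducing) the paper's framework: in the masked product z = (x0 & y0) ^ (x0 & y1) ^ r, if the fresh mask r arrives late at the XOR gate, the wire transiently carries (x0 & y0) ^ (x0 & y1) = x0 & (y0 ^ y1) = x0 & y, whose distribution depends on the unmasked secret y.

        from itertools import product
        from collections import Counter

        def transient_dist(y):
            """Distribution of the transient wire value over the shares,
            for a fixed secret y, before r reaches the final XOR."""
            counts = Counter()
            for x0, y0 in product((0, 1), repeat=2):
                y1 = y0 ^ y                         # shares of y: y0 ^ y1 == y
                counts[(x0 & y0) ^ (x0 & y1)] += 1  # wire value before r arrives
            return counts

        print(transient_dist(0))  # Counter({0: 4})       -> constant
        print(transient_dist(1))  # Counter({0: 2, 1: 2}) -> shifts with y

    A first-order side-channel observer who can see transient values thus learns something about y even though the stable circuit is perfectly masked.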

    The Survey, Taxonomy, and Future Directions of Trustworthy AI: A Meta Decision of Strategic Decisions

    When making strategic decisions, we are often confronted with overwhelming information to process. The situation can be further complicated when some pieces of evidence contradict each other or are paradoxical. The challenge then becomes how to determine which information is useful and which should be eliminated. This process is known as meta-decision. Likewise, when it comes to using Artificial Intelligence (AI) systems for strategic decision-making, placing trust in the AI itself becomes a meta-decision, given that many AI systems are viewed as opaque "black boxes" that process large amounts of data. Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI). We propose a new approach to address this issue by introducing a novel taxonomy, or framework, of TAI, which encompasses three crucial domains, articulate, authentic, and basic, for different levels of trust. To underpin these domains, we create ten dimensions to measure trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reproducibility, reliability, and sustainability. We aim to use this taxonomy to conduct a comprehensive survey and explore different TAI approaches from a strategic decision-making perspective.