    CacheWarp: Software-based Fault Injection using Selective State Reset

    AMD SEV is a trusted-execution environment (TEE) providing confidentiality and integrity for virtual machines (VMs). With AMD SEV, it is possible to securely run VMs on an untrusted hypervisor. While previous attacks demonstrated architectural shortcomings of earlier SEV versions, AMD claims that SEV-SNP prevents all attacks on integrity. In this paper, we introduce CacheWarp, a new software-based fault attack on AMD SEV-ES and SEV-SNP that exploits the possibility of architecturally reverting modified cache lines of guest VMs to their previous (stale) state. Unlike previous integrity attacks, CacheWarp is not mitigated on the newest SEV-SNP implementation, and it does not rely on specifics of the guest VM. CacheWarp only has to interrupt the VM at an attacker-chosen point to invalidate modified cache lines without them being written back to memory. Consequently, the VM continues with architecturally stale data. In three case studies, we demonstrate an attack on RSA in the Intel IPP crypto library that recovers the entire private key, log into an OpenSSH server without authentication, and escalate privileges to root via the sudo binary. While we implement a proof-of-concept software-based mitigation, we argue that mitigations are difficult, as the root cause lies in the hardware.
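
    The fault primitive is easy to picture with a toy model. The sketch below is purely illustrative (invented class and variable names; it abstracts away the real mechanism of interrupting the guest and dropping dirty cache lines): a modified cache line is invalidated before write-back, so the next read returns the stale value still in memory.

```python
# Toy model of the CacheWarp fault (not attack code): a dirty cache line is
# invalidated before it is written back, so a later read observes stale data.

class CacheLine:
    def __init__(self, mem_value):
        self.mem_value = mem_value   # value currently in DRAM
        self.cached = mem_value      # value visible through the cache
        self.dirty = False

    def write(self, value):
        """Victim writes through the cache; DRAM is not updated yet."""
        self.cached, self.dirty = value, True

    def invalidate(self):
        """Attacker-induced fault: drop the dirty line without write-back."""
        self.cached, self.dirty = self.mem_value, False

    def read(self):
        return self.cached


flag = CacheLine(mem_value=0)   # hypothetical "authenticated?" flag, initially 0
flag.write(1)                   # victim records a successful check
flag.invalidate()               # fault injected at an attacker-chosen point
assert flag.read() == 0         # the victim continues with architecturally stale data
```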

    Signatures with Memory-Tight Security in the Quantum Random Oracle Model

    Memory tightness of reductions in cryptography, in addition to the standard tightness related to advantage and running time, is important when the underlying problem can be solved efficiently with large memory, as discussed by Auerbach, Cash, Fersch, and Kiltz (CRYPTO 2017). Diemert, Gellert, Jager, and Lyu (ASIACRYPT 2021) and Ghoshal, Ghosal, Jaeger, and Tessaro (EUROCRYPT 2022) gave memory-tight proofs for the multi-challenge security of digital signatures in the random oracle model. This paper studies memory-tight reductions for _post-quantum_ signature schemes in the _quantum_ random oracle model. Concretely, we show that signature schemes from lossy identification are multi-challenge secure in the quantum random oracle model via memory-tight reductions. Moreover, we show that signature schemes from lossy identification achieve stronger security notions that consider _quantum_ signing oracles, as proposed by Boneh and Zhandry (CRYPTO 2013) and Alagic, Majenz, Russell, and Song (EUROCRYPT 2020). We additionally show that signature schemes from preimage-sampleable functions achieve those security notions via memory-tight reductions.
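
    As a point of reference (an informal rendering of the Auerbach et al. notion on our part, not a formula taken from this paper), a reduction R that runs an adversary A is considered memory-tight when, roughly,

```latex
\mathrm{Mem}\bigl(\mathcal{R}^{\mathcal{A}}\bigr) \;\le\; \mathrm{Mem}(\mathcal{A}) + \mathrm{poly}(\lambda),
```

    i.e. the reduction's additive memory overhead stays small and, in particular, does not grow with the number of signing, challenge, or (quantum) random-oracle queries the adversary makes.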

    SoK: Prudent Evaluation Practices for Fuzzing

    Fuzzing has proven to be a highly effective approach to uncovering software bugs over the past decade. After AFL popularized the groundbreaking concept of lightweight coverage feedback, the field of fuzzing has seen a vast amount of scientific work proposing new techniques, improving methodological aspects of existing strategies, or porting existing methods to new domains. All such work must demonstrate its merit by showing its applicability to a problem, measuring its performance, and often showing its superiority over existing works in a thorough empirical evaluation. Yet fuzzing is highly sensitive to its target, environment, and circumstances, e.g., randomness in the testing process. After all, relying on randomness is one of the core principles of fuzzing, governing many aspects of a fuzzer's behavior. Combined with an environment that is often highly difficult to control, this makes the reproducibility of experiments a crucial concern that requires a prudent evaluation setup. To address these threats to validity, several works, most notably Evaluating Fuzz Testing by Klees et al., have outlined how a carefully designed evaluation setup should be implemented, but it remains unknown to what extent their recommendations have been adopted in practice. In this work, we systematically analyze the evaluation of 150 fuzzing papers published at top venues between 2018 and 2023. We study how existing guidelines are implemented and observe potential shortcomings and pitfalls. We find a surprising disregard of the existing guidelines regarding statistical tests and systematic errors in fuzzing evaluations. For example, when investigating reported bugs, we find that the search for vulnerabilities in real-world software leads to authors requesting and receiving CVEs of questionable quality. Extending our literature analysis to the practical domain, we attempt to reproduce the claims of eight fuzzing papers. These case studies allow us to assess the practical reproducibility of fuzzing research and to identify archetypal pitfalls in the evaluation design. Unfortunately, our reproduced results reveal several deficiencies in the studied papers, and we are unable to fully support and reproduce the respective claims. To help the field of fuzzing move toward a scientifically reproducible evaluation strategy, we propose updated guidelines for conducting a fuzzing evaluation that future work should follow.
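
    As a concrete illustration of the statistical treatment such guidelines call for (a minimal sketch with made-up coverage numbers, not an analysis from this paper): repeat each campaign several times and compare two fuzzers with a non-parametric test plus an effect size, rather than relying on a single run.

```python
# Hypothetical comparison of two fuzzers over repeated campaigns, using the
# commonly recommended Mann-Whitney U test plus Vargha-Delaney A12 effect size.
from scipy.stats import mannwhitneyu

# Made-up branch coverage from 10 independent campaigns per fuzzer.
fuzzer_a = [1432, 1510, 1388, 1475, 1460, 1499, 1423, 1451, 1480, 1444]
fuzzer_b = [1390, 1405, 1376, 1412, 1399, 1420, 1385, 1401, 1394, 1408]

stat, p_value = mannwhitneyu(fuzzer_a, fuzzer_b, alternative="two-sided")

# A12: probability that a random run of A beats a random run of B (0.5 = tie).
wins = sum((a > b) + 0.5 * (a == b) for a in fuzzer_a for b in fuzzer_b)
a12 = wins / (len(fuzzer_a) * len(fuzzer_b))

print(f"Mann-Whitney U p-value: {p_value:.4f}, A12 effect size: {a12:.2f}")
```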

    Learning Interpretable Models of Aircraft Handling Behaviour by Reinforcement Learning from Human Feedback

    We propose a method to capture the handling abilities of fast-jet pilots in a software model via reinforcement learning (RL) from human preference feedback. We use pairwise preferences over simulated flight trajectories to learn an interpretable rule-based model called a reward tree, which enables the automated scoring of trajectories alongside an explanatory rationale. We train an RL agent to execute high-quality handling behaviour by using the reward tree as the objective, and thereby generate data for iterative preference collection and further refinement of both tree and agent. Experiments with synthetic preferences show reward trees to be competitive with uninterpretable neural network reward models on quantitative and qualitative evaluations.
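
    A minimal sketch of the underlying idea (hypothetical features and thresholds, not the paper's learned reward tree): a rule-based tree scores summary features of a trajectory, and the score difference between two trajectories is mapped to a pairwise preference probability in the usual Bradley-Terry style.

```python
# Toy rule-based "reward tree" and Bradley-Terry-style preference probability.
import math

def reward_tree(traj):
    """Hypothetical hand-written scorer over trajectory summary features."""
    if traj["altitude_error"] > 100.0:   # metres; invented threshold
        return -1.0
    if traj["g_load"] > 6.0:             # invented handling-quality limit
        return 0.0
    return 1.0 - 0.01 * traj["altitude_error"]

def preference_prob(traj_a, traj_b):
    """P(traj_a preferred over traj_b) from the reward difference."""
    return 1.0 / (1.0 + math.exp(reward_tree(traj_b) - reward_tree(traj_a)))

a = {"altitude_error": 12.0, "g_load": 3.2}
b = {"altitude_error": 140.0, "g_load": 4.1}
print(preference_prob(a, b))   # well above 0.5: trajectory a is preferred
```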

    Authentication enhancement in command and control networks: (a study in Vehicular Ad-Hoc Networks)

    Intelligent transportation systems contribute to improved traffic safety by facilitating real-time communication between vehicles. Because they use wireless channels for communication, vehicular networks are susceptible to a wide range of attacks, such as impersonation, modification, and replay. In this context, securing data exchange between intercommunicating terminals, e.g., vehicle-to-everything (V2X) communication, constitutes a technological challenge that needs to be addressed. Hence, message authentication is crucial to safeguard vehicular ad-hoc networks (VANETs) from malicious attacks. The current state of the art for authentication in VANETs relies on conventional cryptographic primitives, introducing significant computation and communication overheads. In this challenging scenario, physical (PHY)-layer authentication has gained popularity; it leverages the inherent characteristics of wireless channels and hardware imperfections to discriminate between wireless devices. However, PHY-layer-based authentication cannot be an alternative to crypto-based methods, as the initial legitimacy detection must be conducted using cryptographic methods to extract the communicating terminal’s secret features. Nevertheless, it can be a promising complementary solution for the re-authentication problem in VANETs, introducing what is known as “cross-layer authentication.” This thesis focuses on designing efficient cross-layer authentication schemes for VANETs, reducing the communication and computation overheads associated with transmitting and verifying a crypto-based signature for each transmission. The following provides an overview of the methodologies employed in the contributions presented in this thesis.
    1. The first cross-layer authentication scheme: a four-step process comprising initial crypto-based authentication, shared-key extraction, re-authentication via a PHY challenge-response algorithm, and adaptive adjustments based on channel conditions. Simulation results validate its efficacy, especially in low signal-to-noise ratio (SNR) scenarios, while proving its resilience against active and passive attacks.
    2. The second cross-layer authentication scheme: leveraging the spatially and temporally correlated wireless channel features, this scheme extracts high-entropy shared keys that can be used to create dynamic PHY-layer signatures for authentication (a toy key-quantisation sketch follows this abstract). A 3-dimensional (3D) scattering Doppler emulator is designed to investigate the scheme’s performance at different speeds of a moving vehicle and at different SNRs. Theoretical and hardware implementation analyses prove the scheme’s capability to support a high detection probability for an acceptable false-alarm value ≤ 0.1 at SNR ≥ 0 dB and speed ≤ 45 m/s.
    3. The third proposal, reconfigurable intelligent surface (RIS) integration for improved authentication: focusing on enhancing PHY-layer re-authentication, this proposal explores integrating RIS technology to improve the SNR directed at designated vehicles. Theoretical analysis and practical implementation of the proposed scheme are conducted using a 1-bit RIS consisting of 64 × 64 reflective units. Experimental results show a significant improvement in the detection probability (Pd), increasing from 0.82 to 0.96 at SNR = −6 dB for multicarrier communications.
    4. The fourth proposal, RIS-enhanced vehicular communication security: tailored for challenging SNR conditions in non-line-of-sight (NLoS) scenarios, this proposal optimises key extraction and defends against denial-of-service (DoS) attacks through selective signal strengthening. Hardware implementation studies prove its effectiveness, showcasing improved key-extraction performance and resilience against potential threats.
    5. The fifth cross-layer authentication scheme: integrating PKI-based initial legitimacy detection and blockchain-based reconciliation techniques, this scheme ensures secure data exchange. Rigorous security analyses and performance evaluations using network simulators and computation metrics showcase its effectiveness, ensuring its resistance against common attacks and its time efficiency in message verification.
    6. The final proposal, group key distribution: employing smart-contract-based blockchain technology alongside PKI-based authentication, this proposal distributes group session keys securely. Its lightweight symmetric-key-cryptography-based method maintains privacy in VANETs, validated via Ethereum’s main network (MainNet) and comprehensive computation and communication evaluations.
    The analysis shows that the proposed methods yield a noteworthy reduction, ranging from approximately 70% to 99%, in both computation and communication overheads compared to conventional approaches. This reduction pertains to the verification and transmission of 1000 messages in total.
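
    The toy sketch referenced in item 2 (hypothetical parameters, not the thesis’ scheme): two terminals quantise correlated measurements of a reciprocal channel into a shared bit string; practical schemes add guard bands and information reconciliation to remove the residual mismatches.

```python
# Toy channel-reciprocity key extraction: correlated gain measurements at two
# terminals are quantised into bits around the median.
import numpy as np

rng = np.random.default_rng(0)
true_gain = rng.normal(0.0, 1.0, size=128)            # common reciprocal channel
alice = true_gain + rng.normal(0.0, 0.05, size=128)   # measurement noise at terminal A
bob   = true_gain + rng.normal(0.0, 0.05, size=128)   # measurement noise at terminal B

def quantise(samples):
    """1-bit quantiser around the median of the local measurements."""
    return (samples > np.median(samples)).astype(int)

key_a, key_b = quantise(alice), quantise(bob)
print("bit disagreement rate:", float(np.mean(key_a != key_b)))
```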

    Sweep-UC: Swapping Coins Privately

    Fair exchange (also referred to as atomic swap) is a fundamental operation in any cryptocurrency that allows users to atomically exchange coins. While a large body of work has been devoted to this problem, most solutions lack on-chain privacy. Thus, coins retain a public transaction history, which is known to degrade the fungibility of a currency. This has led to a flourishing line of related research on fair exchange with privacy guarantees. Existing protocols either rely on heavy scripting (which also degrades fungibility and leads to high transaction fees), do not support atomic swaps across a wide range of currencies, or come with incomplete security proofs. To overcome these limitations, we introduce Sweep-UC (read as “Sweep Ur Coins”), the first fair-exchange protocol that is simultaneously efficient, minimizes scripting, and is compatible with a wide range of currencies (more than the state of the art). We build Sweep-UC from modular sub-protocols and give a rigorous security analysis in the UC framework. Many of our tools and security definitions can be used in a standalone fashion and may serve as useful components for future constructions of fair exchange.
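
    For context on what “heavy scripting” refers to (a background sketch of the classic script-based approach, not part of Sweep-UC): script-based atomic swaps are typically built from hash time-locked contracts, modelled here in a few lines.

```python
# Toy hash time-locked contract (HTLC): coins are claimable by revealing the
# preimage of a hash before a deadline, otherwise refundable to the initiator.
import hashlib
import time

class HTLC:
    def __init__(self, hash_lock, timeout_s):
        self.hash_lock = hash_lock              # H(secret) chosen by the initiator
        self.deadline = time.time() + timeout_s
        self.claimed = False

    def claim(self, secret):
        """Counterparty claims the coins by revealing the secret preimage."""
        if time.time() < self.deadline and hashlib.sha256(secret).digest() == self.hash_lock:
            self.claimed = True
        return self.claimed

    def refund(self):
        """Initiator recovers the coins if the deadline passes unclaimed."""
        return time.time() >= self.deadline and not self.claimed

secret = b"preimage chosen by the initiator"
contract = HTLC(hashlib.sha256(secret).digest(), timeout_s=3600)
print(contract.claim(secret))   # True: revealing the preimage releases the funds
```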

    Distributed Ledger Technology (DLT) Applications in Payment, Clearing, and Settlement Systems: A Study of Blockchain-Based Payment Barriers and Potential Solutions, and DLT Application in Central Bank Payment System Functions

    Payment, clearing, and settlement systems are essential components of the financial markets and exert considerable influence on the overall economy. While there have been considerable technological advancements in payment systems, the conventional systems still depend on a centralized architecture, with inherent limitations and risks. The emergence of distributed ledger technology (DLT) is regarded as a potential solution to transform payment and settlement processes and to address certain challenges posed by the centralized architecture of traditional payment systems (Bank for International Settlements, 2017). While proof-of-concept projects have demonstrated the technical feasibility of DLT, significant barriers still hinder its adoption and implementation. The overarching objective of this thesis is to contribute to the developing area of DLT application in payment, clearing, and settlement systems, which is still in its initial stages of application development and lacks a substantial body of scholarly literature and empirical research. This is achieved by identifying the socio-technical barriers to the adoption and diffusion of blockchain-based payment systems and the solutions proposed to address them. Furthermore, the thesis examines and classifies various applications of DLT in central bank payment system functions, offering insights into the motivations, DLT platforms used, and consensus algorithms for applicable use cases. To achieve these objectives, the methodology employed involved a systematic literature review (SLR) of academic literature on blockchain-based payment systems. Furthermore, we utilized a thematic analysis approach to examine data collected from various sources regarding the use of DLT applications in central bank payment system functions, such as central bank white papers, industry reports, and policy documents. The study's findings on the barriers to blockchain-based payment systems and the proposed solutions challenge the prevailing emphasis on technological and regulatory barriers in the literature and industry discourse regarding the adoption and implementation of blockchain-based payment systems. They highlight the importance of considering the broader socio-technical context and identifying barriers across all five dimensions of the socio-technical framework: technological, infrastructural, user practices/market, regulatory, and cultural. Furthermore, the research identified seven DLT applications in central bank payment system functions. These are grouped into three overarching themes: central banks' operational responsibilities in payment and settlement systems, issuance of central bank digital money, and regulatory oversight/supervisory functions, along with other ancillary functions. Each of these applications has a unique motivation or value proposition, which is the underlying reason for utilizing DLT in that particular use case.

    Routing schemes for hybrid communication networks

    We consider the problem of computing routing schemes in the HYBRID model of distributed computing, where nodes have access to two fundamentally different communication modes. In this problem, nodes have to compute small labels and routing tables that allow for efficient routing of messages in the local network, which typically offers the majority of the throughput. Recent work has shown that using the HYBRID model admits a significant speed-up compared to what would be possible if either communication mode were used in isolation. Nonetheless, if general graphs are used as the input graph, the computation of routing schemes still takes polynomial rounds in the HYBRID model. We bypass this lower bound by restricting the local graph to unit-disc graphs and solve the problem deterministically with running time O(|H|² + log n), label size O(log n), and routing-table size O(|H|² · log n), where |H| is the number of “radio holes” in the network. Our work builds on recent work by Coy et al., who obtain this result in the much simpler setting where the input graph has no radio holes. We develop new techniques to achieve this, including a decomposition of the local graph into path-convex regions, where each region contains a shortest path for any pair of nodes in it.
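
    To see why radio holes are the hard case (a toy sketch, not the paper's routing scheme): purely greedy geometric forwarding in a unit-disc-style graph can get stuck in a local minimum on a hole boundary, which is the kind of situation the per-hole information in the labels and routing tables has to handle.

```python
# Toy greedy geometric routing: forward to the neighbour closest to the
# destination, and report failure when every neighbour moves away from it.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_route(pos, adj, src, dst):
    path, cur = [src], src
    while cur != dst:
        nxt = min(adj[cur], key=lambda v: dist(pos[v], pos[dst]))
        if dist(pos[nxt], pos[dst]) >= dist(pos[cur], pos[dst]):
            return path, False   # stuck at a local minimum (a "hole" blocks the way)
        path.append(nxt)
        cur = nxt
    return path, True

# Hypothetical layout: a hole sits between s and t, forcing a detour via a and b.
pos = {"s": (0.0, 0.0), "a": (0.0, 1.0), "b": (1.0, 1.5), "t": (2.0, 0.0)}
adj = {"s": ["a"], "a": ["s", "b"], "b": ["a", "t"], "t": ["b"]}
print(greedy_route(pos, adj, "s", "t"))   # (['s'], False): greedy fails although s-a-b-t exists
```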

    THE ECONOMICS OF THE MANUSCRIPT AND RARE BOOK TRADE, ca. 1890–1939

    The market for rare books has been characterized as unpredictable and driven by the whims of a small number of rich individuals. Yet behind the headlines announcing new auction records, a range of sources makes it possible to analyze the market as a whole. This book introduces the economics of the trade in manuscripts and rare books during the turbulent period ca. 1890–1939. It demonstrates how surviving sources, even when incomplete and inconsistent, can be used to tackle questions about the operation of the rare book trade, including how books were priced, profit margins, accounting practices, and books as investments, from the perspectives of both dealers and collectors.

    Classical and quantum algorithms for scaling problems

    This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics, and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance-based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature. For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show that no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list and for computing the sum of a list of numbers. We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
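
    For readers unfamiliar with the commutative starting point, the classical matrix-scaling routine looks as follows (a background sketch of Sinkhorn iteration, not one of the thesis' algorithms).

```python
# Sinkhorn iteration for matrix scaling: find positive diagonal matrices X, Y
# such that X A Y is (approximately) doubly stochastic.
import numpy as np

def sinkhorn(A, iters=500):
    A = np.array(A, dtype=float)
    x = np.ones(A.shape[0])
    y = np.ones(A.shape[1])
    for _ in range(iters):
        x = 1.0 / (A @ y)       # rescale rows so that row sums become 1
        y = 1.0 / (A.T @ x)     # rescale columns so that column sums become 1
    return np.diag(x) @ A @ np.diag(y)

B = sinkhorn([[2.0, 1.0], [1.0, 3.0]])
print(B.sum(axis=0), B.sum(axis=1))   # both close to [1, 1]
```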