172 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF

    Threshold Encrypted Mempools: Limitations and Considerations

    Full text link
    Encrypted mempools are a class of solutions aimed at preventing or reducing the negative externalities of MEV extraction using cryptographic privacy. Mempool encryption aims to hide information related to pending transactions until a block including those transactions is committed, with the goal of preventing frontrunning and similar behaviour. Among the various methods of encryption, threshold schemes are particularly interesting for the design of MEV mitigation mechanisms, as their distributed nature and minimal hardware requirements harmonize with the broader goal of decentralization. This work looks beyond the formal and technical cryptographic aspects of threshold encryption schemes to focus on the market and incentive implications of implementing encrypted mempools as MEV mitigation techniques. In particular, this paper argues that deploying such protocols without proper consideration and understanding of their market impact invites several undesired outcomes; the ultimate goal is to stimulate further analysis of this class of solutions beyond purely cryptographic considerations. The paper includes an overview of a series of problems, various candidate solutions in the form of mempool encryption techniques with a focus on threshold encryption, potential drawbacks of these solutions, and Osmosis as a case study. The paper targets a broad audience and remains agnostic to blockchain design where possible, drawing mostly on financial examples.

    Shufflecake: Plausible Deniability for Multiple Hidden Filesystems on Linux

    Get PDF
    We present Shufflecake, a new plausible deniability design that hides the existence of encrypted data on a storage medium, making it very difficult for an adversary to prove that such data exists. Shufflecake can be considered a "spiritual successor" to tools such as TrueCrypt and VeraCrypt, but vastly improved: it works natively on Linux, it supports any filesystem of choice, and it can manage multiple volumes per device, so as to make deniability of the existence of hidden partitions truly plausible. Compared to ORAM-based solutions, Shufflecake is extremely fast and simpler, but it does not offer native protection against multi-snapshot adversaries. However, we discuss security extensions that are made possible by its architecture, and we show evidence of why these extensions might be enough to thwart more powerful adversaries. We implemented Shufflecake as an in-kernel tool for Linux, adding useful features, and we benchmarked its performance, showing only a minor slowdown compared to a baseline encrypted system. We believe Shufflecake represents a useful tool for people whose freedom of expression is threatened by repressive authorities or dangerous criminal organizations, in particular whistleblowers, investigative journalists, and human rights activists in oppressive regimes.

    Cyber Conflict and Just War Theory

    Get PDF

    Evaluating Copyright Protection in the Data-Driven Era: Centering on Motion Picture's Past and Future

    Get PDF
    Since the 1910s, Hollywood has measured audience preferences with rough, industry-created methods. In the 1940s, scientific audience research led by George Gallup began to conduct film audience surveys with traditional statistical and psychological methods, but the quantity, quality, and speed of such research were limited. Things changed dramatically in the internet age. The prevalence of digital data increases the instantaneousness, convenience, breadth, and depth of collecting audience and content data. Advanced data and AI technologies have also allowed machines to provide filmmakers with ideas or even make human-like expressions. This brings new copyright challenges in the data-driven era. Massive amounts of text and data are the premise of text and data mining (TDM), as well as the admission ticket to machine learning technologies. Given the high and uncertain risks of copyright violation in the data-driven creation process, whoever controls the copyrighted film materials can monopolize the data and AI technologies used to create motion pictures in the data-driven era. Considering that copyright should not be the gatekeeper to new technological uses that do not impair the original uses of copyrighted works in existing markets, this study proposes creating a TDM and model-training limitation or exception to copyright and recommends the Singapore legislative model. Motion pictures, as public entertainment media, have inherently limited creative choices. Identifying the human original expression components of data-driven works is also challenging. This study proposes establishing a voluntarily negotiated license institution, backed up by a compulsory license, to enable other filmmakers to reuse film materials in new motion pictures. The film material’s degree of human original authorship, certified by film artists’ guilds, should be a crucial factor in deciding the compulsory license’s royalty rate and terms, so as to encourage retaining human artists. This study argues that international and domestic policymakers should enjoy broad discretion in qualifying copyright protection for data-driven works, because data-driven work is a new category of work. It would be too late to wait until ubiquitous data-driven works block human creative freedom and floods of data-driven-work copyright litigation overwhelm the judicial systems.

    New Random Oracle Instantiations from Extremely Lossy Functions

    Get PDF
    We instantiate two random oracle (RO) transformations using Zhandry's extremely lossy function (ELF) technique (Crypto'16). Firstly, using ELFs and indistinguishability obfuscation (iO), we instantiate a modified version of the Fujisaki-Okamoto (FO) transform, which upgrades a public-key encryption scheme (PKE) from indistinguishability under chosen-plaintext attacks (IND-CPA) to indistinguishability under chosen-ciphertext attacks (IND-CCA). We side-step a prior uninstantiability result for FO by Brzuska, Farshim, and Mittelbach (TCC'15) by (1) hiding the randomness from the (potentially ill-designed) IND-CPA encryption scheme and (2) embedding an additional secret related to the hash function into the secret key of the IND-CCA-secure PKE, an idea put forward by Murphy, O'Neill, and Zaheri (Asiacrypt 2022), who instantiate a modified FO variant, also under ELFs and iO, for the class of lossy PKE. Our transformation applies to all PKE which can be inverted given their randomness. Secondly, we instantiate the hash-then-evaluate paradigm for pseudorandom functions (PRFs), $\mathsf{PRF}_\mathsf{new}(k,x) := \mathsf{wPRF}(k, \mathsf{RO}(x))$. Our construction replaces $\mathsf{RO}$ by $\mathsf{PRF}_\mathsf{old}(k_\mathsf{pub}, \mathsf{elf}(x))$ with a key $k_\mathsf{pub}$ that, unusually, is known to the distinguishing adversary against $\mathsf{PRF}_\mathsf{new}$. We start by observing that several existing weak PRF candidates are plausibly also secure under such distributions of pseudorandom inputs generated by $\mathsf{PRF}_\mathsf{old}$: analogous cryptanalysis applies, and/or an attack using such pseudorandom inputs would imply surprising results such as key agreement from the high-noise version of the Learning Parity with Noise (LPN) assumption. Our simple transformation applies to the entire family of PRF-style functions. Specifically, we obtain results for oblivious PRFs, which are a core building block for password-based authenticated key exchange (PAKE) and private set intersection (PSI) protocols, and we also obtain results for pseudorandom correlation functions (PCFs), which are a key tool for silent oblivious transfer (OT) extension.
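
    To make the shape of the second construction concrete, the following is a minimal illustrative sketch (not taken from the paper) of the hash-then-evaluate composition PRF_new(k, x) := wPRF(k, PRF_old(k_pub, elf(x))). Every primitive here is a hypothetical stand-in: HMAC-SHA256 plays the roles of the weak PRF and of PRF_old, and a plain hash plays the role of the ELF in injective mode; none of these stand-ins provides the ELF, iO, or weak-PRF properties the actual instantiation relies on.

    ```python
    # Illustrative composition only: toy stand-ins for wPRF, PRF_old, and elf.
    import hmac
    import hashlib

    K_PUB = b"public-key-known-to-the-adversary"  # k_pub is public in the paper's setting

    def elf(x: bytes) -> bytes:
        # Hypothetical stand-in for an extremely lossy function in injective mode.
        return hashlib.sha256(b"elf|" + x).digest()

    def prf_old(k_pub: bytes, y: bytes) -> bytes:
        # Hypothetical stand-in for PRF_old, keyed with the public key k_pub.
        return hmac.new(k_pub, y, hashlib.sha256).digest()

    def wprf(k: bytes, y: bytes) -> bytes:
        # Hypothetical stand-in for the weak PRF, which only ever sees pseudorandom inputs.
        return hmac.new(k, y, hashlib.sha256).digest()

    def prf_new(k: bytes, x: bytes) -> bytes:
        # PRF_new(k, x) := wPRF(k, PRF_old(k_pub, elf(x)))
        return wprf(k, prf_old(K_PUB, elf(x)))

    print(prf_new(b"secret-key", b"example input").hex())
    ```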

    Clinton's Grand Strategy

    Get PDF
    This book is available as open access through the Bloomsbury Open Access programme and is available on www.bloomsburycollections.com. President Clinton's time in office coincided with historic global events following the end of the Cold War. The collapse of Communism called for a new US Grand Strategy to address the emerging geopolitical era that brought upheavals in Somalia and the Balkans, economic challenges in Mexico and Europe and the emergence of new entities such as the EU, NAFTA and the WTO. Clinton's handling of these events was crucial to the development of world politics at the dawn of the twenty-first century. Only by understanding Clinton's efforts to address the challenges of the post-Cold War era can we understand the strategies of his immediate successors, George W. Bush and Barack Obama, both of whom inherited and continued Clinton-era policies and practices. James D. Boys sheds new light on the evolution and execution of US Grand Strategy from 1993 to 2001. He explores the manner in which policy was devised and examines the actors responsible for its development, including Bill Clinton, Anthony Lake, Samuel Berger, Warren Christopher, Madeline Albright and Richard Holbrook. He examines the core components of the strategy (National Security, Prosperity Promotion and Democracy Promotion) and how they were implemented, revealing a hitherto unexplored continuity from campaign trail to the White House. Covering the entire duration of Clinton’s presidential odyssey, from his 1991 Announcement Speech to his final day in office, the book draws extensively on newly declassified primary materials and interviews by the author with key members of the Clinton administration to reveal for the first time the development and implementation of US Grand Strategy from deep within the West Wing of the Clinton White House

    Report of the Attorney General's Cyber Digital Taskforce (2018)

    Get PDF

    Assuming Data Integrity and Empirical Evidence to The Contrary

    Get PDF
    Background: Not all survey respondents apply their minds or understand the questions posed, and as such provide answers which lack coherence; this threatens the integrity of the research. Casual inspection and limited research on the 10-item Big Five Inventory (BFI-10), included in the dataset of the World Values Survey (WVS), suggested that random responses may be common. Objective: To specify the percentage of cases in the BFI-10 which include incoherent or contradictory responses, and to test the extent to which removing these cases improves the quality of the dataset. Method: The WVS data on the BFI-10, measuring the Big Five Personality (B5P) in South Africa (N = 3 531), was used. Incoherent or contradictory responses were removed, and the cases in the cleaned-up dataset were then analysed for their theoretical validity. Results: Only 1 612 (45.7%) of cases were identified as free of incoherent or contradictory responses. The cleaned-up data did not mirror the B5P structure, as was envisaged. The test for common method bias was negative. Conclusion: In most cases the responses were incoherent. Cleaning up the data did not improve the psychometric properties of the BFI-10. This raises concerns about the quality of the WVS data, the BFI-10, and the universality of B5P theory. Given these results, it would be unwise to use the BFI-10 in South Africa. Researchers are alerted to properly assess the psychometric properties of instruments before using them, particularly in a cross-cultural setting.
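
    As a rough illustration of the screening step described above, the sketch below flags a case as contradictory when it gives the same extreme answer (e.g. 1 or 5 on a 5-point scale) to both items of a trait pair, one of which is reverse-keyed. This operationalization, the column names, and the item pairing are assumptions made for illustration; the paper's actual screening rules may differ.

    ```python
    # Hypothetical screening sketch for BFI-10 responses (pandas).
    import pandas as pd

    # Hypothetical column names: one regular and one reverse-keyed item per Big Five trait.
    TRAIT_PAIRS = [
        ("bfi_1", "bfi_6"),   # Extraversion
        ("bfi_2", "bfi_7"),   # Agreeableness
        ("bfi_3", "bfi_8"),   # Conscientiousness
        ("bfi_4", "bfi_9"),   # Neuroticism
        ("bfi_5", "bfi_10"),  # Openness
    ]

    def flag_contradictory(df: pd.DataFrame) -> pd.Series:
        """Mark cases that give the same extreme answer to both items of any trait pair."""
        flags = pd.Series(False, index=df.index)
        for item_a, item_b in TRAIT_PAIRS:
            same_extreme = df[item_a].isin([1, 5]) & (df[item_a] == df[item_b])
            flags |= same_extreme
        return flags

    # Usage (file name is hypothetical):
    # wvs = pd.read_csv("wvs_south_africa_bfi10.csv")
    # cleaned = wvs[~flag_contradictory(wvs)]
    ```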