
    Keeping Authorities "Honest or Bust" with Decentralized Witness Cosigning

    The secret keys of critical network authorities - such as time, name, certificate, and software update services - represent high-value targets for hackers, criminals, and spy agencies wishing to use these keys secretly to compromise other hosts. To protect authorities and their clients proactively from undetected exploits and misuse, we introduce CoSi, a scalable witness cosigning protocol ensuring that every authoritative statement is validated and publicly logged by a diverse group of witnesses before any client will accept it. A statement S collectively signed by W witnesses assures clients that S has been seen, and not immediately found erroneous, by those W observers. Even if S is compromised in a fashion not readily detectable by the witnesses, CoSi still guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to risk that the compromise will soon be detected by one of the W witnesses. Because clients can verify collective signatures efficiently without communication, CoSi protects clients' privacy, and offers the first transparency mechanism effective against persistent man-in-the-middle attackers who control a victim's Internet access, the authority's secret key, and several witnesses' secret keys. CoSi builds on existing cryptographic multisignature methods, scaling them to support thousands of witnesses via signature aggregation over efficient communication trees. A working prototype demonstrates CoSi in the context of timestamping and logging authorities, enabling groups of over 8,000 distributed witnesses to cosign authoritative statements in under two seconds. (20 pages, 7 figures.)
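
    The aggregation at CoSi's core can be sketched with a plain Schnorr multisignature: each witness contributes a commitment, the commitments combine into one collective challenge, and the summed responses verify against the product of the public keys. The Python below is a minimal toy sketch under stated assumptions: flat aggregation instead of CoSi's communication trees, and tiny insecure group parameters instead of the Ed25519 deployment the paper describes; all names are illustrative.

```python
import hashlib
import secrets

# Toy group parameters (NOT secure; for illustration only).
# p = 2q + 1 is a safe prime; g = 4 generates the order-q subgroup.
p, q, g = 467, 233, 4

def challenge(V, statement):
    h = hashlib.sha256()
    h.update(str(V).encode())
    h.update(statement.encode())
    return int(h.hexdigest(), 16) % q

class Witness:
    def __init__(self):
        self.x = secrets.randbelow(q - 1) + 1   # secret key
        self.X = pow(g, self.x, p)              # public key X_i = g^x_i
    def commit(self):
        self.v = secrets.randbelow(q - 1) + 1   # fresh nonce
        return pow(g, self.v, p)                # commitment V_i = g^v_i
    def respond(self, c):
        return (self.v - c * self.x) % q        # response r_i = v_i - c*x_i

def cosign(statement, witnesses):
    V = 1
    for Vi in (w.commit() for w in witnesses): # phase 1: aggregate commitments
        V = V * Vi % p
    c = challenge(V, statement)                # one collective challenge
    r = sum(w.respond(c) for w in witnesses) % q  # phase 2: sum responses
    return (c, r)

def verify(statement, sig, pubkeys):
    c, r = sig
    X = 1
    for Xi in pubkeys:                         # aggregate public key
        X = X * Xi % p
    V = pow(g, r, p) * pow(X, c, p) % p        # recompute V = g^r * X^c
    return c == challenge(V, statement)        # holds since g^(v-cx) * g^xc = g^v

witnesses = [Witness() for _ in range(8)]
sig = cosign("statement S", witnesses)
assert verify("statement S", sig, [w.X for w in witnesses])
```

    Note the property the abstract emphasizes: the verifier needs only the fixed-size pair (c, r) and the witnesses' public keys, so verification requires no communication regardless of how many witnesses signed.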

    Protocols for Integrity Constraint Checking in Federated Databases

    A federated database is composed of multiple interconnected database systems that primarily operate independently but cooperate to a certain extent. Global integrity constraints can be very useful in federated databases, but the lack of global queries, global transaction mechanisms, and global concurrency control renders traditional constraint management techniques inapplicable. This paper presents a threefold contribution to integrity constraint checking in federated databases: (1) the problem of constraint checking in a federated database environment is clearly formulated; (2) a family of protocols for constraint checking is presented; (3) the differences across protocols in the family are analyzed with respect to system requirements, properties guaranteed by the protocols, and processing and communication costs. Thus, our work yields a suite of options from which a protocol can be chosen to suit the system capabilities and integrity requirements of a particular federated database environment.
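
    To make the setting concrete, here is a minimal sketch of one conceivable protocol in such a family, under illustrative assumptions: two autonomous SQLite databases stand in for the member systems, and the global constraint is referential - every order at site A must name a customer that exists at site B. With no global transactions available, site A validates each insert with a remote existence query before committing locally. The schema and names are hypothetical, not taken from the paper.

```python
import sqlite3

site_a = sqlite3.connect(":memory:")  # autonomous site holding orders
site_b = sqlite3.connect(":memory:")  # autonomous site holding customers

site_a.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer INTEGER)")
site_b.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
site_b.execute("INSERT INTO customers VALUES (1, 'alice')")
site_b.commit()

def insert_order(order_id, customer_id):
    # Remote check: one message to site B per update (the communication cost
    # this protocol pays for a stronger guarantee at update time).
    exists = site_b.execute(
        "SELECT 1 FROM customers WHERE id = ?", (customer_id,)).fetchone()
    if not exists:
        raise ValueError(f"constraint violation: no customer {customer_id}")
    site_a.execute("INSERT INTO orders VALUES (?, ?)", (order_id, customer_id))
    site_a.commit()

insert_order(10, 1)        # accepted: customer 1 exists at site B
try:
    insert_order(11, 2)    # rejected: would violate the global constraint
except ValueError as e:
    print(e)
```

    Without global concurrency control, site B could delete the customer between the check and site A's commit; whether and how a protocol closes that window is exactly the kind of guarantee-versus-cost difference the paper's analysis compares.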

    Integrity Constraint Checking in Federated Databases

    A federated database is composed of multiple interconnected databases that cooperate in an autonomous fashion. Global integrity constraints are very useful in federated databases, but the lack of global queries, global transaction mechanisms, and global concurrency control renders traditional constraint management techniques inapplicable. The paper presents a threefold contribution to integrity constraint checking in federated databases: (1) the problem of constraint checking in a federated database environment is clearly formulated; (2) a family of cooperative protocols for constraint checking is presented; (3) the differences across protocols in the family are analyzed with respect to system requirements, properties guaranteed, and costs involved. Thus, we provide a suite of options with protocols for various environments with specific system capabilities and integrity requirements.
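
    As a contrast to the remote-query sketch above, here is another conceivable point in the same design space: instead of messaging site B on every update, site A checks inserts against a locally cached copy of site B's keys, trading per-update communication for possible staleness between refreshes. Again, all names are hypothetical illustrations of the trade-offs the paper analyzes.

```python
cached_customer_ids = {1}          # last known snapshot of site B's key set

def insert_order_local(orders, order_id, customer_id):
    # Purely local test: no message to the remote site at update time,
    # but the cached key set may be stale.
    if customer_id not in cached_customer_ids:
        raise ValueError(f"possible violation: customer {customer_id} unknown")
    orders.append((order_id, customer_id))

def refresh_cache(remote_ids):
    # Periodic refresh is this protocol's communication cost, paid in bulk.
    cached_customer_ids.clear()
    cached_customer_ids.update(remote_ids)

orders = []
insert_order_local(orders, 10, 1)  # accepted against the cached key set
refresh_cache({1, 2})              # site B has since gained customer 2
insert_order_local(orders, 11, 2)  # now accepted after the refresh
```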

    A Survey of Traditional and Practical Concurrency Control in Relational Database Management Systems

    Traditionally, database theory has focused on concepts such as atomicity and serializability, asserting that concurrent transaction management must enable correctness above all else. Textbooks and academic journals detail a vision of unbounded rationality, where reduced throughput due to concurrency protocols is not of tremendous concern. This thesis surveys the traditional basis for concurrency in relational database management systems and contrasts it with actual practice. SQL-92, the current standard for concurrency in relational database management systems, defines isolation levels - the allowable degrees of concurrency - and these are examined. Some ways in which DB2, a popular database, interprets these levels and finesses extra concurrency through performance enhancement are detailed. SQL-92 standardizes de facto relational database management system features. Given this, and a superabundance of articles in professional journals detailing steps for fine-tuning transaction concurrency, the future of performance tuning seems bright, even at the expense of serializability. Are the practical changes wrought by non-academic professionals killing traditional database concurrency ideals? Not really. Reasoned changes for performance gains advocate compromise: using complex concurrency controls when necessary for the job at hand and relaxing standards otherwise. The idea of relational database management systems is only twenty years old, and standards are still evolving. Is there still an interplay between tradition and practice? Of course. Current practice uses tradition pragmatically, not idealistically. Academic ideas help drive the systems available for use, and perhaps current practice will now help academic ideas define concurrency control concepts for relational database management systems.
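
    The isolation trade-off at issue can be shown with a toy single-process model (illustrative only; it mimics lock behavior with a pinned-value dictionary rather than showing DB2's actual locking internals): under READ COMMITTED a transaction re-reads the current committed state and can observe a non-repeatable read, while REPEATABLE READ behaves as if a read lock were held until commit.

```python
db = {"x": 100}   # committed state, shared by all transactions

class Txn:
    def __init__(self, isolation):
        self.isolation = isolation
        self.pinned = {}              # values "locked" on first read
    def read(self, key):
        if self.isolation == "REPEATABLE READ":
            # Model a read lock held until commit: later reads return the
            # value seen first, so concurrent committed writes stay invisible.
            if key not in self.pinned:
                self.pinned[key] = db[key]
            return self.pinned[key]
        return db[key]                # READ COMMITTED: always re-read

t1 = Txn("READ COMMITTED")
assert t1.read("x") == 100
db["x"] = 50                          # another transaction commits an update
assert t1.read("x") == 50             # non-repeatable read: value changed

t2 = Txn("REPEATABLE READ")
assert t2.read("x") == 50
db["x"] = 75
assert t2.read("x") == 50             # repeatable: first-read value is kept
```

    The weaker level admits the anomaly but never blocks or retains state per key, which is precisely the throughput-for-correctness bargain the survey examines.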

    QuePaxa: Escaping the tyranny of timeouts in consensus

    Leader-based consensus algorithms are fast and efficient under normal conditions, but they lack robustness to adverse conditions due to their reliance on timeouts for liveness. We present QuePaxa, the first protocol offering state-of-the-art normal-case efficiency without depending on timeouts. QuePaxa uses a novel randomized asynchronous consensus core to tolerate adverse conditions such as denial-of-service (DoS) attacks, while a one-round-trip fast path preserves the normal-case efficiency of Multi-Paxos or Raft. By allowing simultaneous proposers without destructive interference, and by using short hedging delays instead of conservative timeouts to limit redundant effort, QuePaxa permits rapid recovery after leader failure without risking costly view changes due to false timeouts. By treating leader choice and hedging delay as a multi-armed-bandit optimization, QuePaxa achieves responsiveness to prevailing conditions and can choose the best leader even if the current one has not failed. Experiments with a prototype confirm that QuePaxa achieves normal-case LAN and WAN throughput of 584k and 250k cmd/sec, respectively, comparable to Multi-Paxos. Under conditions such as DoS attacks, misconfigurations, or slow leaders that severely impact existing protocols, QuePaxa remains live with median latency under 380 ms in WAN experiments.
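
    The hedging-delay idea can be sketched in a few lines of asyncio (an illustrative simulation, not QuePaxa's actual API): submit to the preferred proposer first, and if no reply arrives within a short hedging delay, also try the next replica rather than declaring the leader failed. The replica latencies below are simulated assumptions.

```python
import asyncio

async def propose_via(replica, cmd):
    # Pretend replica 0 is slow (e.g., under DoS); the others respond fast.
    await asyncio.sleep(1.0 if replica == 0 else 0.02)
    return f"committed {cmd!r} via replica {replica}"

async def hedged_submit(cmd, replicas, hedge_delay=0.05):
    pending = set()
    for r in replicas:
        pending.add(asyncio.create_task(propose_via(r, cmd)))
        # Wait one hedging delay; a timely answer avoids redundant work.
        done, pending = await asyncio.wait(
            pending, timeout=hedge_delay, return_when=asyncio.FIRST_COMPLETED)
        if done:
            break
    else:
        # All replicas tried; take whichever answers first.
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()                    # drop the now-redundant attempts
    return done.pop().result()

print(asyncio.run(hedged_submit("put x=1", replicas=[0, 1, 2])))
```

    Unlike a conservative failover timeout, a hedging delay that fires spuriously costs only some duplicate work; it never triggers a view change, which is why it can be tuned aggressively (e.g., by the bandit optimization the abstract mentions).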

    Proof of Latency Using a Verifiable Delay Function

    In this thesis I present an interactive public-coin protocol called Proof of Latency (PoL) that aims to improve connections in peer-to-peer networks by measuring latencies with logical clocks built from verifiable delay functions (VDFs). PoL is a tuple of three algorithms, Setup(e, λ), VCOpen(c, e), and Measure(g, T, l_p, l_v). Setup creates a vector commitment (VC); VCOpen takes from it an opening corresponding to a collaborator's public key, which is then used to create the common reference string used in Measure. If neither party detects cheating, a signed proof is ready for advertising. PoL is agnostic to the particular VC and VDF implementations used. As a proof of concept, I present a state machine implemented in Rust that uses RSA-2048, Catalano-Fiore vector commitments, and Wesolowski's VDF to demonstrate PoL. As VDFs have already proven useful in timestamping, they seem to work as a measure of time in this context as well, albeit requiring each peer to publish a performance metric that the other party uses to judge whether the measurement was honest. PoL has many conceivable use cases, such as proving a geographical location, serving as a benchmark query, or using the resulting latency proofs as inputs to further VDF computations between peers. As it stands, PoL works as a distance-bounding protocol between two participants, provided their computing performance is relatively similar. More work is needed to verify the soundness of PoL as a publicly verifiable proof that a third party can believe in.
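
    The measurement step rests on using an iterated computation as a logical clock. The toy sketch below substitutes repeated SHA-256 hashing for Wesolowski's VDF (repeated squaring in an RSA group, which the thesis actually uses, and which unlike plain hashing is publicly verifiable): a party ticks the clock from the common reference string, waits out a round trip to the peer, and converts elapsed ticks into seconds via a calibrated tick rate. Names and the simulated round trip are illustrative.

```python
import hashlib
import threading
import time

class LogicalClock(threading.Thread):
    def __init__(self, seed: bytes):
        super().__init__(daemon=True)
        self.state, self.ticks, self.running = seed, 0, True
    def run(self):
        while self.running:
            # Each tick depends on the last, so ticks advance sequentially.
            self.state = hashlib.sha256(self.state).digest()
            self.ticks += 1

clock = LogicalClock(b"common reference string")
clock.start()

# Calibrate this machine's tick rate (PoL instead has each peer publish a
# performance metric the other party checks against).
t0_ticks, t0 = clock.ticks, time.time()
time.sleep(0.1)
rate = (clock.ticks - t0_ticks) / (time.time() - t0)

start = clock.ticks
time.sleep(0.05)                  # stands in for the real ping to the peer
rtt_ticks = clock.ticks - start
clock.running = False

print(f"estimated RTT: {rtt_ticks / rate:.3f} s")   # roughly 0.05
```

    This also makes the stated limitation visible: a peer with much faster hardware ticks more per second, so the latency estimate is only meaningful when the published performance metrics are comparable and honest.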

    Time Moves Faster When There is Nothing You Anticipate: The Role of Time in MEV Rewards

    We present a novel analysis of a competitive dynamic on Ethereum known as "waiting games", where validators can exploit the monopoly position of their assigned slots to delay block proposals and optimize returns through Maximal Extractable Value (MEV) payments, a type of incentive outside the Proof-of-Stake incentive scheme. This strategy, however, risks block exclusion due to missed slots or potential orphaning. Our analysis reveals that, although there are substantial incentives to undertake the risks, validators are not capitalizing on waiting games, leaving potential profits unrealized. Moreover, we present an agent-based model to test the consensus disruption caused by waiting games under different settings, arguing that such disruption only occurs with significant delay strategies. Ultimately, this research provides in-depth insight into Ethereum's waiting games, illuminating the trade-offs and potential profit opportunities for validators in this evolving blockchain landscape.
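
    The core trade-off can be captured in a toy expected-value model (all parameters are illustrative assumptions, not measured Ethereum values, and the paper's agent-based model is far richer): MEV payments accrue roughly linearly while the proposer delays, but the probability of missing the slot or being orphaned grows with the delay, so expected reward peaks at a finite waiting time.

```python
def expected_reward(delay, base=0.05, mev_rate=0.02, risk_rate=0.30):
    # base: baseline block reward (ETH); mev_rate: MEV accrued per second of
    # waiting; risk_rate: per-second chance the block is excluded/orphaned.
    p_included = max(0.0, 1.0 - risk_rate * delay)   # crude linear risk model
    return p_included * (base + mev_rate * delay)

# Scan delays from 0.0 to 3.0 seconds in 0.1 s steps.
best = max((expected_reward(d / 10), d / 10) for d in range(31))
print(f"optimal delay = {best[1]:.1f} s, expected reward = {best[0]:.4f} ETH")
```

    With these toy numbers the optimum sits around 0.4 s of delay, illustrating why modest waiting can be individually rational while only much larger delays threaten consensus.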