16 research outputs found

    Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding

    Full text link
    We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any n-round interactive protocol using N rounds over an adversarial channel that corrupts up to ρN transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate ρ, its communication complexity N, and its computational complexity. We give the first coding scheme for the standard setting that performs optimally in all three measures: our randomized non-adaptive coding scheme has near-linear computational complexity and tolerates any error rate ρ < 1/4 with linear communication complexity N = Θ(n). This improves over prior results, each of which performed well in only two of these measures. We also give results for other settings of interest, namely the first computationally and communication-efficient schemes that tolerate ρ < 2/7 adaptively, ρ < 1/3 if only one party is required to decode, and ρ < 1/2 if list decoding is allowed. These are the optimal tolerable error rates for the respective settings, and these coding schemes also have near-linear computational and communication complexity. The results are obtained via two techniques: a general black-box reduction that reduces unique decoding, in various settings, to list decoding, and a method for boosting the computational and communication efficiency of any list decoder to near linear. Comment: preliminary version
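    To make the parameters concrete, the following is a minimal Python sketch (not taken from the paper) of the setting the abstract describes: an n-round protocol is simulated in N = Θ(n) rounds, and an adversary may corrupt at most ρN of the N transmitted symbols. The random-flip channel, the constant c, and all names are illustrative assumptions; the paper's adversary may choose corrupted positions arbitrarily.

```python
import math
import random

def adversarial_channel(transcript, rho, rng=None):
    """Corrupt up to floor(rho * N) symbols of an N-symbol bit transcript.

    Illustrative model only: this adversary flips randomly chosen bits,
    whereas the adversary in the paper may corrupt any positions it likes.
    """
    rng = rng or random.Random(0)
    N = len(transcript)
    budget = math.floor(rho * N)
    corrupted = list(transcript)
    for i in rng.sample(range(N), budget):
        corrupted[i] ^= 1  # flip the bit at a corrupted position
    return corrupted

# An n-round protocol simulated with linear overhead N = c * n (c = 4 is arbitrary).
n, c, rho = 1000, 4, 0.24  # rho < 1/4, the optimal rate in the non-adaptive setting
N = c * n
sent = [random.Random(1).randint(0, 1) for _ in range(N)]
received = adversarial_channel(sent, rho)
flips = sum(s != r for s, r in zip(sent, received))
print(f"simulated {n} rounds with N = {N}; adversary flipped {flips} <= {math.floor(rho * N)} bits")
```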

    Unclonable Secret Keys

    Full text link
    We propose a novel concept of securing cryptographic keys which we call “Unclonable Secret Keys,” where any cryptographic object is modified so that its secret key is an unclonable quantum bit-string while all other parameters such as messages, public keys, ciphertexts, signatures, etc., remain classical. We study this model in the authentication and encryption settings, giving a plethora of definitions and positive results, as well as several applications that are impossible in a purely classical setting. In the authentication setting, we define the notion of one-shot signatures, a fundamental element in building unclonable keys, where the signing key is not only unclonable but also restricted to signing a single message, even in the paradoxical scenario where it is generated dishonestly. We propose a construction relative to a classical oracle and prove its unconditional security. Moreover, we provide numerous applications, including a signature scheme where an adversary can sign as many messages as it wants, yet it cannot generate two signing keys for the same public key. We show that one-shot signatures are sufficient to build a proof-of-work-based decentralized cryptocurrency with several ideal properties: it does not make use of a blockchain, it allows sending money over insecure classical channels, and it admits several smart contracts. Moreover, we demonstrate that a weaker version of one-shot signatures, namely privately verifiable tokens for signatures, is sufficient to reduce any classically queried stateful oracle to a stateless one. This effectively eliminates, in a provable manner, resetting attacks on hardware devices (modeled as oracles). In the encryption setting, we study different forms of unclonable decryption keys. We give constructions that vary in their security guarantees and flexibility. We start with the simplest setting of secret key encryption with honestly generated keys and show that it exists in the quantum random oracle model. We provide a range of extensions, such as public key encryption with dishonestly generated keys, predicate encryption, broadcast encryption, and more
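    As a reading aid, the Python sketch below spells out the syntax of a one-shot signature scheme as described above: messages, public keys, and signatures are classical, while the signing key is the one-shot object. This is only an illustrative interface with hypothetical names; it does not, and classically cannot, enforce unclonability, which the paper obtains from quantum signing keys relative to a classical oracle.

```python
from dataclasses import dataclass
from typing import Protocol, Tuple

@dataclass
class OneShotKey:
    """Placeholder for a signing key meant to be usable at most once.

    In the paper the key is an unclonable quantum state, so 'one-shot' holds
    even against whoever generated it; a classical flag can only approximate
    this for honest users.
    """
    material: bytes
    used: bool = False

class OneShotSignatureScheme(Protocol):
    """Syntax of a one-shot signature scheme (interface sketch only)."""

    def gen(self) -> Tuple[OneShotKey, bytes]:
        """Return a fresh (signing key, classical public key) pair."""
        ...

    def sign(self, key: OneShotKey, message: bytes) -> bytes:
        """Produce a classical signature and consume the key; a second call
        with the same key must fail."""
        ...

    def verify(self, public_key: bytes, message: bytes, signature: bytes) -> bool:
        """Classical verification against the classical public key."""
        ...
```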

    Delegating RAM Computations with Adaptive Soundness and Privacy

    Get PDF
    We consider the problem of delegating RAM computations over persistent databases. A user wishes to delegate a sequence of computations over a database to a server, where each computation may read and modify the database and the modifications persist between computations. Delegating RAM computations is important because it has the distinct feature that the run-time of a computation may be sub-linear in the size of the database. We present the first RAM delegation scheme that provides both soundness and privacy guarantees in the adaptive setting, where the sequence of delegated RAM programs is chosen adaptively, depending potentially on the encodings of the database and previously chosen programs. Prior works either achieved only adaptive soundness without privacy [Kalai and Paneth, ePrint '15], or only security in the selective setting where all RAM programs are chosen statically [Chen et al. ITCS '16, Canetti and Holmgren ITCS '16]. Our scheme assumes the existence of indistinguishability obfuscation (iO) for circuits and the decisional Diffie-Hellman (DDH) assumption. However, our techniques are quite general and, in particular, might be applicable even in settings where iO is not used. We provide a security-lifting technique that lifts any proof of selective security satisfying certain special properties into a proof of adaptive security, for arbitrary cryptographic schemes. We then apply this technique to the delegation scheme of Chen et al. and its selective security proof, obtaining that their scheme is essentially already adaptively secure. Because of this general approach, our results also easily extend to delegating parallel RAM (PRAM) computations. We believe that the security-lifting technique can potentially find other applications and is of independent interest
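    To illustrate the setting, here is a small Python sketch of the interaction pattern only, not the cryptographic scheme: the database encoding persists on the server, each delegated program may read and write a few cells (so it runs in time sub-linear in the database size), and the client checks a proof per result. The encoding, the proof, and all names are placeholders assumed for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class EncodedDatabase:
    cells: Dict[int, int] = field(default_factory=dict)  # stands in for the database encoding

@dataclass
class Result:
    output: int
    proof: bytes  # placeholder; in the real scheme, verification does not scan the database

def server_run(db: EncodedDatabase, program: Callable[[Dict[int, int]], int]) -> Result:
    """Server: execute one delegated RAM program; its writes persist in `db`."""
    return Result(output=program(db.cells), proof=b"<succinct proof>")

def client_verify(result: Result) -> bool:
    """Client: check the proof (trivially accepted in this toy sketch)."""
    return result.proof == b"<succinct proof>"

# Two adaptively chosen programs; each touches O(1) cells of a large database,
# and the write made by the first program is visible to the second.
db = EncodedDatabase(cells={i: 0 for i in range(10**6)})
r1 = server_run(db, lambda cells: cells.__setitem__(42, 7) or cells[42])
r2 = server_run(db, lambda cells: cells[42] + 1)
assert client_verify(r1) and client_verify(r2) and r2.output == 8
```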

    Encapsulated Search Index: Public-Key, Sub-linear, Distributed, and Delegatable

    Get PDF
    We build the first sub-linear (in fact, potentially constant-time) public-key searchable encryption system:
    − the server can publish a public key PK;
    − anybody can build an encrypted index for a document D under PK;
    − a client holding the index can obtain a token z_w from the server to check if a keyword w belongs to D;
    − search using z_w is almost as fast (e.g., sub-linear) as non-private search;
    − the server granting the token does not learn anything about the document D, beyond the keyword w;
    − yet, the token z_w is specific to the pair (D, w): the client does not learn if other keywords w′ ≠ w belong to D, or if w belongs to other, freshly indexed documents D′;
    − the server cannot fool the client by giving a wrong token z_w.
    We call such a primitive an Encapsulated Search Index (ESI). Our ESI scheme can be made (t, n)-distributed among n servers in the best possible way: non-interactive, verifiable, and resilient to any coalition of up to (t − 1) malicious servers. We also introduce the notion of a delegatable ESI and show how to extend our construction to this setting. Our solution, including public indexing, sub-linear search, delegation, and distributed token generation, is deployed as a commercial application by Atakama
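    The properties listed above translate into a small client/server interface. The Python sketch below shows one plausible shape for it; the method names, argument types, and the idea that the token request is derived from the index are assumptions made for illustration, not the paper's API or Atakama's product interface, and the real scheme additionally supports (t, n)-distributed and delegatable token generation.

```python
from typing import Protocol, Tuple

class EncapsulatedSearchIndex(Protocol):
    """Interface sketch of an Encapsulated Search Index (ESI)."""

    def server_keygen(self) -> Tuple[bytes, bytes]:
        """Server: generate (PK, SK) and publish PK."""
        ...

    def build_index(self, public_key: bytes, document: str) -> bytes:
        """Anyone: build an encrypted index for a document D under PK."""
        ...

    def issue_token(self, secret_key: bytes, request: bytes, keyword: str) -> bytes:
        """Server: issue a token z_w tied to the pair (D, w); the server learns
        the keyword w but nothing else about D, and a wrong token is detectable."""
        ...

    def search(self, index: bytes, token: bytes) -> bool:
        """Client: test whether w occurs in D, ideally in sub-linear time; the
        token reveals nothing about other keywords or other documents."""
        ...
```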

    More Constructions of Re-splittable Threshold Public Key Encryption

    No full text

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Operations Research in action

    Get PDF
    As indicated by the title, this thesis is based on an Operations Research project conducted at the Austrian telecommunications provider Telekom Austria between 2006 and 2009. An increasing number of internet users, new internet applications, and the growing competition from mobile internet access force fixed-line providers like Telekom Austria to offer higher data transmission rates over their access networks. As a consequence, the access networks have to be upgraded, which requires investments of significant size. Minimizing these investments through cost-optimal network planning therefore becomes a key issue.
    The main goal of the project was to support the planning process with discrete optimization methods from the field of network design. The key results presented in this thesis are algorithms for facility location. Before dealing with the theory and the solutions, both in practice and in this thesis, a thorough analysis of the stated problem is undertaken. To provide background, the telecommunications market before 2006, and especially between 2006 and 2009, is reviewed. The industry had already developed different strategies to improve fixed-line infrastructure, and their relevance for the stated problem is worked out. Furthermore, the most important problem specifications, gathered in cooperation with the practitioners during the evaluation phase of the project, are listed and discussed in detail. A first solution was based on a dynamic program for the facility location problem derived from these specifications. Conditions under which this algorithm yields an optimal solution, together with their proofs, conclude Chapter 1.
    It turned out that this first solution did not provide the desired result; rather, it fostered the discussion between operations researchers and practitioners. The list of specifications was incomplete and had to be changed and extended. The planners dismissed the first solutions because they were not efficient enough: they contained facilities that were underutilized, i.e., too few customers were assigned to them. Such facilities had to be removed from the solutions, and the remaining ones had to be repositioned so as to maximize the number of customers covered at a prescribed minimum transmission rate. This strategy was realized with the concept of the k-median problem: subject to the constraint that the number of facilities is bounded by a constant k, an optimal assignment of customers to facilities is sought. Solving k-median problems for different values of k and reporting the minimum facility utilization and the coverage achieved by each solution enables the practitioner to balance efficient facility utilization against coverage of customer demand. Chapter 2 first describes the events and discussions that made a change of strategy necessary and presents the changed and new specifications; it then introduces the theory of the k-median problem together with a basic algorithm from the literature. At the end of the chapter, a variant of this algorithm better suited to the specific requirements is developed: the algorithm from the literature inserts facilities one by one, approaching the bound k from below, but since the expected number of facilities in access networks is rather large, it is more advantageous to approach the bound from above by successively removing facilities from the solution.
    Chapter 3 provides an extensive empirical study of 106 different local access areas. Its purpose is to give a concrete impression of how the adapted and developed methods can be used to prepare the planning process: strategic questions can be analyzed in advance (e.g., the effect of enforcing the CO circle, or the balance between facility utilization and coverage), and runtime analyses help to set up an appropriate planning workflow for the future users. In addition, the two k-median methods presented in this thesis, the ascending and the descending variant, are compared with respect to their runtime behavior
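    To illustrate the descending strategy described above, here is a minimal greedy sketch in Python: it starts with every candidate facility open and closes one facility per iteration, always the one whose removal increases the assignment cost the least, until only k facilities remain. This is an illustrative heuristic under a plain distance-based cost model, not the exact algorithm developed in the thesis, which additionally accounts for minimum utilization, coverage rates, and transmission constraints.

```python
from typing import Dict, List, Sequence

def assignment_cost(open_facilities: Sequence[int],
                    distances: Dict[str, Dict[int, float]]) -> float:
    """Total cost when every customer is assigned to its nearest open facility."""
    return sum(min(d[f] for f in open_facilities) for d in distances.values())

def k_median_descending(candidates: List[int],
                        distances: Dict[str, Dict[int, float]],
                        k: int) -> List[int]:
    """Start from all candidate facilities and repeatedly close the facility
    whose removal increases the assignment cost the least, until k remain."""
    open_facilities = list(candidates)
    while len(open_facilities) > k:
        best_f, best_cost = None, float("inf")
        for f in open_facilities:
            cost = assignment_cost([g for g in open_facilities if g != f], distances)
            if cost < best_cost:
                best_f, best_cost = f, cost
        open_facilities.remove(best_f)
    return open_facilities

# Toy instance: customer -> {candidate facility: distance}, choose k = 2 of 3 sites.
dist = {
    "c1": {1: 1.0, 2: 5.0, 3: 9.0},
    "c2": {1: 2.0, 2: 1.0, 3: 8.0},
    "c3": {1: 9.0, 2: 2.0, 3: 1.0},
}
print(k_median_descending([1, 2, 3], dist, k=2))  # -> [1, 3]
```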