
    Assumptions, Efficiency and Trust in Non-Interactive Zero-Knowledge Proofs

    We live in a digital world. A significant part of our lives happens online, we use the internet for ever more purposes, and we rely on increasingly advanced technology. It is therefore important to protect against malicious actors who may try to exploit this reliance for their own gain. Cryptography is a key part of the answer to protecting internet users. Historically, cryptography has mainly been focused on the confidentiality of communication, ensuring that no one can read private messages sent between people. In recent decades, cryptography has become concerned with creating protocols which guarantee privacy even as they support more complex actions. A crucial cryptographic tool to ensure that these protocols are indeed followed is the zero-knowledge proof. A zero-knowledge proof is a process where two parties, a prover and a verifier, exchange messages to convince the verifier that the prover followed the protocol correctly (if indeed the prover did so) without revealing any private information to the verifier. It is often desirable to create a non-interactive zero-knowledge proof (NIZK), where the prover only sends one message to the verifier. NIZKs have found a number of different applications, which makes them an attractive object of study. A NIZK has a variety of different properties, and improving any of these advances our collective cryptographic knowledge. In the first paper in this thesis, we construct a new non-interactive zero-knowledge proof for languages based on algebraic sets. This paper is based on work by Couteau and Hartmann (Crypto 2020), which showed how to convert a particular interactive zero-knowledge proof into a NIZK. We follow their approach, but we start with a different interactive zero-knowledge proof. This leads to an improvement over their work in several ways, in particular in terms of both assumptions and efficiency. In the second paper in this thesis, we study the subversion zero-knowledge property of non-interactive zero-knowledge proofs. It is impossible to create a NIZK without relying on a common reference string (CRS) generated by a trusted party. However, a NIZK with the subversion zero-knowledge property guarantees that no one learns any private information from the proof even if the CRS was generated dishonestly. In this paper, we create a new cryptographic primitive (verifiably-extractable one-way functions) and show how this primitive relates to NIZKs with subversion zero-knowledge.
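
For readers new to the notion, the sketch below illustrates one classic way to remove interaction: a Schnorr proof of knowledge of a discrete logarithm made non-interactive with the Fiat-Shamir transform, where a hash of the transcript stands in for the verifier's challenge. It is a toy with insecure parameters, it relies on treating the hash as a random oracle rather than on a common reference string, and it is not the construction studied in the thesis.

```python
# Minimal Fiat-Shamir-transformed Schnorr proof of knowledge of a discrete log.
# Toy, insecure parameters; illustrative only.
import hashlib
import secrets

p = 2039          # small safe prime, p = 2q + 1 (toy parameter)
q = 1019          # prime order of the subgroup
g = 4             # generator of the order-q subgroup of Z_p*

def hash_challenge(*values):
    """Fiat-Shamir challenge: hash the transcript into Z_q."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, h):
    """One-message proof of knowledge of x such that h = g^x mod p."""
    r = secrets.randbelow(q)          # commitment randomness
    a = pow(g, r, p)                  # commitment
    e = hash_challenge(g, h, a)       # challenge derived by hashing
    z = (r + e * x) % q               # response
    return (a, z)

def verify(h, proof):
    a, z = proof
    e = hash_challenge(g, h, a)
    return pow(g, z, p) == (a * pow(h, e, p)) % p

x = secrets.randbelow(q)              # prover's secret
h = pow(g, x, p)                      # public statement
assert verify(h, prove(x, h))
```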

    Numerical algebraic fan of a design for statistical model building


    Algebraic Dependencies and PSPACE Algorithms in Approximative Complexity

    Testing whether a set $\mathbf{f}$ of polynomials has an algebraic dependence is a basic problem with several applications. The polynomials are given as algebraic circuits. The algebraic independence testing question is wide open over finite fields (Dvir, Gabizon, Wigderson, FOCS'07). The best known complexity bound is $\mathrm{NP}^{\#\mathrm{P}}$ (Mittmann, Saxena, Scheiblechner, Trans. AMS'14). In this work we put the problem in $\mathrm{AM} \cap \mathrm{coAM}$. In particular, dependence testing is unlikely to be NP-hard and joins the league of problems of "intermediate" complexity, e.g., graph isomorphism and integer factoring. Our proof method is algebro-geometric: we estimate the size of the image/preimage of the polynomial map $\mathbf{f}$ over the finite field. A gap in this size is utilized in the AM protocols. Next, we study the open question of testing whether every annihilator of $\mathbf{f}$ has zero constant term (Kayal, CCC'09). We give a geometric characterization using the Zariski closure of the image of $\mathbf{f}$, introducing a new problem called approximate polynomials satisfiability (APS). We show that APS is NP-hard and, using projective algebraic-geometry ideas, we put APS in PSPACE (the prior best was EXPSPACE, via Gröbner basis computation). As an unexpected application to approximative complexity theory, we get that, over any field, a hitting-set for $\overline{\mathrm{VP}}$ can be designed in PSPACE. This solves an open problem posed in (Mulmuley, FOCS'12, J. AMS 2017), greatly mitigating the GCT chasm (exponentially, in terms of space complexity).
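
As a concrete illustration of the central notion (not of the paper's AM protocol), the sketch below exhibits an algebraic dependence among three polynomials by computing an annihilator via a Gröbner basis elimination; it assumes SymPy is available.

```python
# A small worked example of algebraic dependence, assuming SymPy.
# The polynomials f1 = x + y, f2 = x*y, f3 = x^2 + y^2 are algebraically
# dependent: the annihilator A(u1, u2, u3) = u1^2 - 2*u2 - u3 vanishes when
# each u_i is replaced by f_i. An elimination order recovers it.
from sympy import symbols, groebner, expand

x, y, u1, u2, u3 = symbols('x y u1 u2 u3')
f = [x + y, x*y, x**2 + y**2]

# Eliminate x, y from the ideal <u1 - f1, u2 - f2, u3 - f3>; basis elements
# involving only u1, u2, u3 are annihilators of f.
G = groebner([u1 - f[0], u2 - f[1], u3 - f[2]], x, y, u1, u2, u3, order='lex')
annihilators = [g for g in G.exprs if g.free_symbols <= {u1, u2, u3}]
print(annihilators)   # expect something equivalent to u1**2 - 2*u2 - u3

# Sanity check: the annihilator vanishes identically after substitution.
A = u1**2 - 2*u2 - u3
assert expand(A.subs({u1: f[0], u2: f[1], u3: f[2]})) == 0
```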

    Algebraic Methods in Computational Complexity

    Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the seminar aimed to play an important role in educating a diverse community about the latest techniques.

    Proof complexity lower bounds from algebraic circuit complexity

    We give upper and lower bounds on the power of subsystems of the Ideal Proof System (IPS), the algebraic proof system recently proposed by Grochow and Pitassi, where the circuits comprising the proof come from various restricted algebraic circuit classes. This mimics an established research direction in the Boolean setting for subsystems of Extended Frege proofs, where proof lines are circuits from restricted Boolean circuit classes. With one exception, all of the subsystems considered in this paper can simulate the well-studied Nullstellensatz proof system, and prior to this work there were no known lower bounds when proof size is measured by the algebraic complexity of the polynomials (only with respect to degree or sparsity). We give two general methods for converting certain algebraic lower bounds into proof complexity ones. Our methods require stronger notions of lower bounds, which bound not only a polynomial but an entire family of polynomials it defines. Our techniques are reminiscent of existing methods for converting Boolean circuit lower bounds into related proof complexity results, such as feasible interpolation. We obtain the relevant types of lower bounds for a variety of classes (sparse polynomials, depth-3 powering formulas, read-once oblivious algebraic branching programs, and multilinear formulas) and infer the relevant proof complexity results. We complement our lower bounds by giving short refutations of the previously studied subset-sum axiom using IPS subsystems, allowing us to conclude strict separations between some of these subsystems.
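
For a concrete baseline, the sketch below (assuming SymPy) checks a toy Nullstellensatz certificate for a small system that is unsatisfiable over the rationals; the IPS subsystems studied in the paper measure such proofs by the algebraic circuit complexity of the coefficient polynomials, rather than only by their degree or sparsity.

```python
# A toy Nullstellensatz refutation, assuming SymPy. Over the rationals the
# system {x^2 - x = 0, x + 1 = 0} has no common root, and the certificate
# below exhibits g1, g2 with g1*(x^2 - x) + g2*(x + 1) = 1 identically.
# This only illustrates what a Nullstellensatz proof certifies; it is not an
# example from the paper.
from sympy import symbols, Rational, expand

x = symbols('x')
axioms = [x**2 - x, x + 1]
coeffs = [Rational(1, 2), -(x - 2) / 2]   # certificate coefficients g1, g2

combination = sum(c * f for c, f in zip(coeffs, axioms))
assert expand(combination) == 1           # the identity witnesses unsatisfiability
```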

    Trade-Offs Between Size and Degree in Polynomial Calculus

    Building on [Clegg et al. '96], [Impagliazzo et al. '99] established that if an unsatisfiable k-CNF formula over n variables has a refutation of size S in the polynomial calculus resolution proof system, then this formula also has a refutation of degree k + O(√(n log S)). The proof works by converting a small-size refutation into a small-degree one, but at the expense of increasing the proof size exponentially. This raises the question of whether it is possible to achieve both small size and small degree in the same refutation, or whether the exponential blow-up is inherent. Using and extending ideas from [Thapen '16], who studied the analogous question for the resolution proof system, we prove that a strong size-degree trade-off is necessary.

    Polynomial systems : graphical structure, geometry, and applications

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018, by Diego Cifuentes. Includes bibliographical references (pages 199-208).
    Solving systems of polynomial equations is a foundational problem in computational mathematics that has several applications in the sciences and engineering. A closely related problem, also prevalent in applications, is that of optimizing polynomial functions subject to polynomial constraints. In this thesis we propose novel methods for both of these tasks. By taking advantage of the graphical and geometrical structure of the problem, our methods can achieve higher efficiency, and we can also prove better guarantees. Various problems in areas such as robotics, power systems, computer vision, cryptography, and chemical reaction networks can be modeled by systems of polynomial equations, and in many cases the resulting systems have a simple sparsity structure. In the first part of this thesis we represent this sparsity structure with a graph and study the algorithmic and complexity consequences of this graphical abstraction. Our main contribution is the introduction of a novel data structure, chordal networks, which always preserves the underlying graphical structure of the system. Remarkably, many interesting families of polynomial systems admit compact chordal network representations (of size linear in the number of variables), even though the number of components is exponentially large. Our methods outperform existing techniques by orders of magnitude in applications from algebraic statistics and vector addition systems. We then turn our attention to the study of graphical structure in the computation of matrix permanents, a classical problem from computer science. We provide a novel algorithm that requires Õ(n·2^w) arithmetic operations, where w is the treewidth of the matrix's bipartite adjacency graph. We also investigate the complexity of some related problems, including mixed discriminants, hyperdeterminants, and mixed volumes. Although seemingly unrelated to polynomial systems, our results have natural implications for the complexity of solving sparse systems.
    The second part of this thesis focuses on the problem of minimizing a polynomial function subject to polynomial equality constraints. This problem captures many important applications, including Max-Cut, tensor low-rank approximation, the triangulation problem, and rotation synchronization. Although these problems are nonconvex, tractable semidefinite programming (SDP) relaxations have been proposed. We introduce a methodology to derive more efficient (smaller) relaxations by leveraging the geometrical structure of the underlying variety. The main idea behind our method is to describe the variety with a generic set of samples, instead of relying on an algebraic description. Our methods are particularly appealing for varieties that are easy to sample from, such as SO(n), Grassmannians, or rank-k tensors. For arbitrary varieties we can take advantage of tools from numerical algebraic geometry. Optimization problems from applications usually involve parameters (e.g., the data), and there is often a natural value of the parameters for which SDP relaxations solve the (polynomial) problem exactly. The final contribution of this thesis is to establish sufficient conditions (and quantitative bounds) under which SDP relaxations will continue to be exact as the parameter moves in a neighborhood of the original one. Our results can be used to show that several statistical estimation problems are solved exactly by SDP relaxations in the low-noise regime. In particular, we prove this for the triangulation problem, rotation synchronization, rank-one tensor approximation, and weighted orthogonal Procrustes.
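
For context on the permanent result, the sketch below computes the permanent with Ryser's inclusion-exclusion formula and checks it against the definition. This is a generic O(2^n·n^2) baseline, not the treewidth-based Õ(n·2^w) algorithm of the thesis; it is only meant to make the quantity being computed concrete.

```python
# Permanent of a square matrix: brute force (definition) vs. Ryser's formula.
# Generic baselines only; not the structured, treewidth-based algorithm of the thesis.
from itertools import combinations, permutations
from math import prod

def permanent_ryser(A):
    """Ryser: perm(A) = (-1)^n * sum_S (-1)^|S| * prod_i sum_{j in S} A[i][j]."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            total += (-1) ** k * prod(sum(A[i][j] for j in S) for i in range(n))
    return (-1) ** n * total

def permanent_definition(A):
    """Sum over all permutations of the products of matched entries."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
assert permanent_ryser(A) == permanent_definition(A)
```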

    Cryptanalysis of ARX-based White-box Implementations

    At CRYPTO'22, Ranea, Vandersmissen, and Preneel proposed a new way to design white-box implementations of ARX-based ciphers using so-called implicit functions and quadratic-affine encodings. They suggest the Speck block cipher as an example target. In this work, we describe practical attacks on the construction. For the implementation without one of the external encodings, we describe a simple algebraic key-recovery attack. If both external encodings are used (the main scenario suggested by the authors), we propose optimization and inversion attacks, followed by our main result: a multiple-step round decomposition attack and a decomposition-based key-recovery attack. Our attacks only use the white-box round functions as oracles and do not rely on their description. We implemented and experimentally verified the attacks on white-box instances of Speck-32/64 and Speck-64/128. We conclude that a single ARX round is too weak to be used as a white-box round.
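
For reference, the sketch below implements the public Speck-32/64 specification in the clear, to show the add-rotate-xor round structure that the white-box encodings wrap and that the attacks decompose. It is not the implicit-function white-box implementation itself, and the example values are arbitrary.

```python
# Plain (unprotected) Speck-32/64: 16-bit words, rotations (7, 2), 22 rounds.
MASK = 0xFFFF
ALPHA, BETA = 7, 2
ROUNDS = 22

def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def speck_round(x, y, k):
    """One ARX round: rotate, add modulo 2^16, XOR the round key, rotate, XOR."""
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def expand_key(key_words):
    """Key schedule; the 64-bit key is given MSB-first as four 16-bit words."""
    *ls, k0 = key_words            # key_words = (l2, l1, l0, k0)
    l = list(reversed(ls))         # l[0] = l0, l[1] = l1, l[2] = l2
    k = [k0]
    for i in range(ROUNDS - 1):
        l.append(((k[i] + ror(l[i], ALPHA)) & MASK) ^ i)
        k.append(rol(k[i], BETA) ^ l[-1])
    return k

def encrypt(x, y, key_words):
    for rk in expand_key(key_words):
        x, y = speck_round(x, y, rk)
    return x, y

# Example usage with arbitrary toy values (a white-box attacker would instead
# query the encoded round functions as oracles):
print([hex(w) for w in encrypt(0x1234, 0x5678, (0x0011, 0x2233, 0x4455, 0x6677))])
```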

    Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling

    We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph G can be reversibly pebbled in time t and space s if and only if there is a Nullstellensatz refutation of the pebbling formula over G in size t+1 and degree s (independently of the field in which the Nullstellensatz refutation is made). We use this correspondence to prove a number of strong size-degree trade-offs for Nullstellensatz, which to the best of our knowledge are the first such results for this proof system
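
As a toy illustration of the objects in this correspondence (not of the proof), the sketch below, assuming SymPy, writes down a pebbling formula for the single-edge DAG s -> t and checks a Nullstellensatz certificate for it; the certificate has degree 2, matching the reversible pebbling space of that graph.

```python
# Pebbling formula for the DAG s -> t, in one common polynomial encoding:
# "s is true", "s implies t", "t is false". The system has no common root, and
# the coefficients below form a Nullstellensatz certificate of degree 2.
# Toy example only; the encoding and size/degree accounting in the paper may differ.
from sympy import symbols, expand

xs, xt = symbols('x_s x_t')
axioms = [1 - xs,          # source s is pebbled (true)
          xs * (1 - xt),   # if s is true then t is true
          xt]              # sink t is false
coeffs = [1, 1, xs]        # Nullstellensatz coefficients

identity = sum(c * f for c, f in zip(coeffs, axioms))
assert expand(identity) == 1   # 1 lies in the ideal, so the axioms are unsatisfiable
```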

    Essays on strategic trading

    This dissertation discusses various aspects of strategic trading using both analytical modeling and numerical methods. Strategic trading, in short, encompasses models of trading, most notably models of optimal execution and portfolio selection, in which one seeks to rigorously account for the various costs, both explicit and implicit, that stem from the act of trading itself. The strategic trading approach, rooted in the market microstructure literature, contrasts with many classical finance models in which markets are assumed to be frictionless and traders can, for the most part, take prices as given. Introducing trading costs into dynamic models of financial markets tends to complicate matters. First, the objectives of the traders become more nuanced, since overtrading now leads to poor outcomes due to increased trading costs. Second, when trades affect prices and there are multiple traders in the market, the traders start to behave in a more calculated fashion, taking into account both their own objectives and the perceived actions of others. Acknowledging this strategic behavior is especially important when the traders are asymmetrically informed. These new features allow the models discussed to better reflect aspects of real-world trading, for instance intraday trading patterns, and enable one to ask and answer new questions, for instance about the interactions between different traders. To analyze the models put forth efficiently, numerical methods must be utilized. This is, as is to be expected, the price one must pay for the added complexity. However, it also opens an opportunity to take a closer look at the numerical approaches themselves. This opportunity is capitalized on, and new computational procedures influenced by the growing field of numerical real algebraic geometry are introduced and employed. These procedures are usable beyond the scope of this dissertation and enable one to sharpen the analysis of dynamic equilibrium models.