23 research outputs found

    More Efficient Commitments from Structured Lattice Assumptions

    We present a practical construction of an additively homomorphic commitment scheme based on structured lattice assumptions, together with a zero-knowledge proof of opening knowledge. Our scheme is a design improvement over the previous work of Benhamouda et al. in that it is not restricted to being statistically binding. While it is possible to instantiate our scheme to be statistically binding or statistically hiding, it is most efficient when both hiding and binding properties are only computational. This results in approximately a factor of 4 reduction in the size of the proof and a factor of 6 reduction in the size of the commitment over the aforementioned scheme.
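    To make the additive homomorphism concrete, here is a minimal toy sketch of a commitment with the generic structure C = A·r + B·m (mod q); it is purely illustrative and is not the construction or parameter choice from the paper.

```python
# Toy sketch of an additively homomorphic commitment of the form
# C = A @ r + B @ m (mod q).  Illustration of the homomorphic structure
# only; NOT the scheme from the paper, and the parameters below are far
# too small to be binding or hiding.
import numpy as np

q, n, k = 2**13, 8, 16          # toy modulus and dimensions (illustrative only)
rng = np.random.default_rng(0)
A = rng.integers(0, q, (n, k))  # public matrix multiplying the randomness
B = rng.integers(0, q, (n, k))  # public matrix multiplying the message

def commit(m, r):
    """Commit to message vector m with randomness vector r."""
    return (A @ r + B @ m) % q

m1, m2 = rng.integers(0, q, k), rng.integers(0, q, k)
r1, r2 = rng.integers(0, q, k), rng.integers(0, q, k)

# Additive homomorphism: Com(m1; r1) + Com(m2; r2) = Com(m1 + m2; r1 + r2)
lhs = (commit(m1, r1) + commit(m2, r2)) % q
rhs = commit((m1 + m2) % q, (r1 + r2) % q)
assert np.array_equal(lhs, rhs)
```

    In real lattice-based schemes the randomness must be a short vector and binding rests on the hardness of finding short solutions; the toy code above ignores both constraints.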

    On One-way Functions and Kolmogorov Complexity

    We prove the equivalence of two fundamental problems in the theory of computing. For every polynomial t(n) ≥ (1+ε)n, ε > 0, the following are equivalent:
    - One-way functions exist (which in turn is equivalent to the existence of secure private-key encryption schemes, digital signatures, pseudorandom generators, pseudorandom functions, commitment schemes, and more);
    - t-time-bounded Kolmogorov complexity, K^t, is mildly hard-on-average (i.e., there exists a polynomial p(n) > 0 such that no PPT algorithm can compute K^t for more than a 1 - 1/p(n) fraction of n-bit strings).
    In doing so, we present the first natural, and well-studied, computational problem characterizing the feasibility of the central private-key primitives and protocols in Cryptography.
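    For reference, the notions used above can be stated as follows (standard formulations; the paper's exact definitions may differ in details).

```latex
% Standard definitions (paraphrased; the paper's exact formulation may differ).
% Time-bounded Kolmogorov complexity of x, with respect to a fixed universal
% Turing machine U and time bound t:
\[
  K^{t}(x) \;=\; \min \bigl\{\, |\Pi| \;:\; U(\Pi) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\bigr\}
\]
% Mild average-case hardness: there is a polynomial p such that for every
% probabilistic polynomial-time algorithm A and all sufficiently large n,
\[
  \Pr_{x \leftarrow \{0,1\}^{n}} \bigl[ A(x) = K^{t}(x) \bigr] \;<\; 1 - \tfrac{1}{p(n)} .
\]
```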

    Software test and evaluation study phase I and II : survey and analysis

    Issued as Final report, Project no. G-36-661 (continues G-36-636; includes A-2568).

    Commitments with Efficient Zero-Knowledge Arguments from Subset Sum Problems

    We present a cryptographic string commitment scheme that is computationally hiding and binding based on (modular) subset sum problems. It is believed that these NP-complete problems provide post-quantum security, in contrast to the number-theoretic assumptions currently used in cryptography. Using techniques recently introduced by Feneuil, Maire, Rivain and Vergnaud, this simple commitment scheme enables an efficient zero-knowledge proof of knowledge for committed values as well as proofs showing Boolean relations amongst the committed bits. In particular, one can prove that committed bits m_0, m_1, ..., m_ℓ satisfy m_0 = C(m_1, ..., m_ℓ) for any Boolean circuit C (without revealing any information on those bits). The proof system achieves good communication and computational complexity since, for a security parameter λ, the protocol's communication complexity is Õ(|C|λ + λ^2) (compared to Õ(|C|λ^2) for the best code-based protocol due to Jain, Krenn, Pietrzak and Tentes).
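    As a rough illustration of how a commitment can be built from modular subset sums (a toy sketch in the spirit of subset-sum hashing, not the paper's actual scheme or parameters): the committer encodes the message bits together with random padding bits as a 0/1 vector and outputs its weighted sum with public random weights modulo q.

```python
# Toy subset-sum style bit commitment (illustration only; not the paper's
# scheme, and the parameters are far too small for any real security).
import secrets

Q = 2**61 - 1          # toy modulus (illustrative choice)
N_MSG, N_RAND = 32, 96
# Public random weights, one per committed bit (message bits + random bits).
WEIGHTS = [secrets.randbelow(Q) for _ in range(N_MSG + N_RAND)]

def commit(msg_bits):
    """Commit to a list of N_MSG bits; returns (commitment, opening)."""
    assert len(msg_bits) == N_MSG and all(b in (0, 1) for b in msg_bits)
    rand_bits = [secrets.randbelow(2) for _ in range(N_RAND)]
    bits = msg_bits + rand_bits
    com = sum(w * b for w, b in zip(WEIGHTS, bits)) % Q   # modular subset sum
    return com, rand_bits

def verify(com, msg_bits, rand_bits):
    """Check that (msg_bits, rand_bits) opens the commitment."""
    bits = msg_bits + rand_bits
    return com == sum(w * b for w, b in zip(WEIGHTS, bits)) % Q

# Usage: commit to a bit string and later reveal it.
m = [secrets.randbelow(2) for _ in range(N_MSG)]
c, r = commit(m)
assert verify(c, m, r)
```

    The zero-knowledge proofs of opening and of circuit relations from the paper are entirely omitted here; the sketch only shows where the subset-sum structure enters.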

    “Think of it as Money”: A History of the VISA Payment System, 1970–1984

    This dissertation is a historical case study of the payment system designed, built, and operated by Visa International Services Association (VISA, hereafter “Visa”). The system is analyzed as a sociotechnical one, consisting of both social and technical elements that mutually constitute and shape one another. The historical narrative concentrates on the period of 1970 to 1984, which roughly corresponds to the tenure of the system’s founder and first CEO, Dee Ward Hock. It also focuses primarily upon the events that took place within the United States. After establishing a theoretical and historical context, I describe why and how the organization now known as Visa was formed. I then explain how the founder and his staff transformed the disintegrated, paper-based credit card systems of the 1960s into the unified, electronic value exchange system we know today. Special attention is paid throughout this narrative to the ways in which the technologies were shaped by political, legal, economic, and cultural forces, as well as the ways in which the system began to alter those social relations in return. In the final chapter, I offer three small extensions to the literature on payment systems, cooperative networks, and technology and culture.

    Camera based Display Image Quality Assessment

    This thesis presents the outcomes of research carried out by the PhD candidate Ping Zhao from 2012 to 2015 at Gjøvik University College. The underlying research was part of the HyPerCept project, in the program of Strategic Projects for University Colleges, funded by The Research Council of Norway. The research was carried out under the supervision of Professor Jon Yngve Hardeberg and the co-supervision of Associate Professor Marius Pedersen, from The Norwegian Colour and Visual Computing Laboratory in the Faculty of Computer Science and Media Technology of Gjøvik University College, as well as the co-supervision of Associate Professor Jean-Baptiste Thomas, from the Laboratoire Electronique, Informatique et Image in the Faculty of Computer Science of Université de Bourgogne. The main goal of this research was to develop a fast and inexpensive camera-based display image quality assessment framework. Due to the limited time frame, we decided to focus only on projection displays with static images displayed on them. However, the proposed methods are not limited to projection displays; they are expected to work with other types of displays, such as desktop monitors, laptop screens, smart phone screens, etc., with limited modifications. The primary contributions of this research can be summarized as follows:
    1. We proposed a camera-based display image quality assessment framework, originally designed for projection displays but usable for other types of displays with limited modifications.
    2. We proposed a method to calibrate the camera in order to eliminate the unwanted vignetting artifact, which is mainly introduced by the camera lens.
    3. We proposed a method to optimize the camera's exposure with respect to the measured luminance of incident light, so that after calibration all camera sensors share a common linear response region.
    4. We proposed a marker-less and view-independent method to register a captured image with its original at a sub-pixel level, so that existing full-reference image quality metrics can be incorporated without modification.
    5. We identified spatial uniformity, contrast and sharpness as the most important image quality attributes for projection displays, and we used the proposed framework to evaluate the prediction performance of state-of-the-art image quality metrics with respect to these attributes.
    The proposed image quality assessment framework is the core contribution of this research. Compared to conventional image quality assessment approaches, which are largely based on colorimeter or spectroradiometer measurements, using a camera as the acquisition device has the advantages of recording all displayed pixels quickly in one shot with a relatively inexpensive instrument. The time and resources consumed by image quality assessment can therefore be largely reduced. We proposed a method to calibrate the camera in order to eliminate the unwanted vignetting artifact primarily introduced by the camera lens: we used a hazy sky as a nearly uniform light source, and the vignetting mask was generated from the median sensor responses over only a few rotated shots of the same spot on the sky. We also proposed a method to quickly determine whether all camera sensors share a common linear response region.
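    A minimal sketch of the flat-field style vignetting correction just described, assuming a small stack of captures of a near-uniform light source is available; the synthetic arrays and function names are placeholders, not the thesis's actual code.

```python
# Sketch of flat-field (vignetting) correction: build a mask from the
# per-pixel median of several captures of a near-uniform light source,
# then divide test captures by the normalized mask.  Illustrative only;
# the thesis's actual pipeline may differ in details.
import numpy as np

def build_vignetting_mask(flat_frames):
    """flat_frames: list of 2-D arrays captured against a uniform source."""
    stack = np.stack([f.astype(np.float64) for f in flat_frames])
    median = np.median(stack, axis=0)          # robust per-pixel response
    return median / median.max()               # normalize so the peak is 1.0

def correct_vignetting(image, mask, eps=1e-6):
    """Divide out the lens fall-off estimated by the mask."""
    return image.astype(np.float64) / np.maximum(mask, eps)

# Hypothetical usage with synthetic data standing in for real captures:
rng = np.random.default_rng(0)
flats = [rng.uniform(0.9, 1.0, (480, 640)) for _ in range(5)]
mask = build_vignetting_mask(flats)
corrected = correct_vignetting(rng.uniform(0, 1, (480, 640)), mask)
```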
    In order to incorporate existing full-reference image quality metrics without modifying them, an accurate registration of pairs of pixels between a captured image and its original is required. We proposed a marker-less and view-independent image registration method to solve this problem. The experimental results showed that the proposed method worked well in viewing conditions with low ambient light. We further identified spatial uniformity, contrast and sharpness as the most important image quality attributes for projection displays, and we subsequently used the developed framework to objectively evaluate the prediction performance of state-of-the-art image quality metrics for these attributes in a robust manner. In this process, the metrics were benchmarked with respect to the correlations between their predictions and the perceptual ratings collected from subjective experiments. The analysis of the experimental results indicated that our proposed methods were effective and efficient. Subjective experiments are an essential component of image quality assessment; however, they can be time- and resource-consuming, especially when additional image distortion levels are required to extend existing subjective experimental results. For this reason, we investigated the possibility of extending subjective experiments with a baseline adjustment method, and we found that the method can work well if appropriate strategies are applied. These strategies concern which distortion levels to include in the baseline, as well as how many of them.
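    One standard way to realize a marker-less, view-independent registration like the one described above is feature matching followed by a RANSAC-estimated homography; the OpenCV sketch below illustrates the idea and is not necessarily the method developed in the thesis.

```python
# Generic marker-less registration of a captured photo of a display to the
# original image via ORB features + RANSAC homography (OpenCV).  This is a
# standard approach, not necessarily the thesis's exact method.
import cv2
import numpy as np

def register_to_reference(captured_gray, reference_gray):
    """Warp the captured frame into the reference frame's coordinates."""
    orb = cv2.ORB_create(nfeatures=4000)
    k1, d1 = orb.detectAndCompute(captured_gray, None)
    k2, d2 = orb.detectAndCompute(reference_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference_gray.shape[:2]
    return cv2.warpPerspective(captured_gray, H, (w, h))
```

    Once the captured frame is warped into the reference's pixel grid, existing full-reference metrics can be applied to the pair without modification.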

    Special Libraries, September 1980

    Volume 71, Issue 9

    On One-way Functions and the Worst-case Hardness of Time-Bounded Kolmogorov Complexity

    Whether one-way functions (OWF) exist is arguably the most important problem in Cryptography, and beyond. While lots of candidate constructions of one-way functions are known, and recently also problems whose average-case hardness characterizes the existence of OWFs have been demonstrated, the question of whether there exists some worst-case hard problem that characterizes the existence of one-way functions has remained open since their introduction in 1976. In this work, we present the first "OWF-complete" promise problem: a promise problem whose worst-case hardness w.r.t. BPP (resp. P/poly) is equivalent to the existence of OWFs secure against PPT (resp. non-uniform PPT) algorithms. The problem is a variant of the Minimum Time-bounded Kolmogorov Complexity problem (MK^tP[s] with a threshold s), where we condition on instances having small "computational depth". We furthermore show that depending on the choice of the threshold s, this problem characterizes either "standard" (polynomially-hard) OWFs, or quasi-polynomially- or subexponentially-hard OWFs. Additionally, when the threshold is sufficiently small (e.g., 2^{O(√n)} or poly log n), then sublinear hardness of this problem suffices to characterize quasi-polynomial/sub-exponential OWFs. While our constructions are black-box, our analysis is non-black-box; we additionally demonstrate that fully black-box constructions of OWFs from the worst-case hardness of this problem are impossible. We finally show that, under Rudich's conjecture and standard derandomization assumptions, our problem is not inside coAM; as such, it yields the first candidate problem believed to be outside of AM ∩ coAM, or even SZK, whose worst-case hardness implies the existence of OWFs.
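    For orientation, the objects referred to above can be formalized roughly as follows; these are standard textbook-style definitions, and the paper's precise variant (including the exact conditioning on small computational depth) may differ.

```latex
% Rough, standard formalizations (the paper's exact variant may differ).
% The minimum time-bounded Kolmogorov complexity problem with threshold s:
\[
  \mathrm{MK}^{t}\mathrm{P}[s] \;=\; \bigl\{\, x \in \{0,1\}^{*} \;:\; K^{t}(x) \le s(|x|) \,\bigr\}
\]
% One common notion of "computational depth" is the gap between time-bounded
% and unbounded Kolmogorov complexity,
\[
  \mathrm{cd}^{t}(x) \;=\; K^{t}(x) - K(x),
\]
% and "small computational depth" then refers to restricting attention to
% instances x for which cd^t(x) lies below some bound.
```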

    End-to-end security in active networks

    Active network solutions have been proposed for many of the problems caused by the increasing heterogeneity of the Internet. These systems allow nodes within the network to process data passing through them in several ways. Allowing code from various sources to run on routers introduces numerous security concerns, which have been addressed by research into safe languages, restricted execution environments, and other related areas. But little attention has been paid to an even more critical question: the effect of active flow manipulation on end-to-end security. This thesis first examines the threat model implicit in active networks. It develops a framework of security protocols in use at various layers of the networking stack, and their utility to multimedia transport and flow processing, and asks whether it is reasonable to give active routers access to the plaintext of these flows. After considering the various security problems introduced, such as vulnerability to attacks on intermediaries or to coercion, it concludes that it is not. We then ask whether active network systems can be built that maintain end-to-end security without seriously degrading the functionality they provide. We describe the design and analysis of three such protocols: a distributed packet filtering system that can be used to adjust multimedia bandwidth requirements and defend against denial-of-service attacks; an efficient composition of link- and transport-layer reliability mechanisms that increases the performance of TCP over lossy wireless links; and a distributed watermarking service that can efficiently deliver media flows marked with the identity of their recipients. In all three cases, the protocols provide functionality similar to that of designs that do not maintain end-to-end security. Finally, we reconsider traditional end-to-end arguments in both networking and security, and show that they have continuing importance for Internet design. Our watermarking work adds the concept of splitting trust throughout a network to that model; we suggest further applications of this idea.