104 research outputs found

    Fine-Grained Cryptography

    Fine-grained cryptographic primitives are ones that are secure against adversaries with an a-priori bounded polynomial amount of resources (time, space or parallel-time), where the honest algorithms use fewer resources than the adversaries they are designed to fool. Such primitives were previously studied in the context of time-bounded adversaries (Merkle, CACM 1978), space-bounded adversaries (Cachin and Maurer, CRYPTO 1997) and parallel-time-bounded adversaries (Håstad, IPL 1987). Our goal is to come up with fine-grained primitives (in the setting of parallel-time-bounded adversaries) and to show unconditional security of these constructions when possible, or to base security on widely believed separations of worst-case complexity classes. We show: 1. NC¹-cryptography: Under the assumption that NC¹ ≠ ⊕L/poly, we construct one-way functions, pseudo-random generators (with sub-linear stretch), collision-resistant hash functions and, most importantly, public-key encryption schemes, all computable in NC¹ and secure against all NC¹ circuits. Our results rely heavily on the notion of randomized encodings pioneered by Applebaum, Ishai and Kushilevitz, and crucially, make non-black-box use of randomized encodings for logspace classes. 2. AC⁰-cryptography: We construct (unconditionally secure) pseudo-random generators with arbitrary polynomial stretch, weak pseudo-random functions, secret-key encryption and, perhaps most interestingly, collision-resistant hash functions, computable in AC⁰ and secure against all AC⁰ circuits. Previously, one-way permutations and pseudo-random generators (with linear stretch) computable in AC⁰ and secure against AC⁰ circuits were known from the works of Håstad and Braverman. United States. Defense Advanced Research Projects Agency (Contract W911NF-15-C-0226). United States. Army Research Office (Contract W911NF-15-C-0226).
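    To make the setting concrete, the flavor of such a fine-grained security definition can be sketched as follows; this is an illustrative paraphrase against a generic circuit class C (think NC¹ or AC⁰) with a placeholder inversion bound δ(n), not the paper's exact formulation.

```latex
% Illustrative template of a fine-grained one-way function against a class C
% (think C = NC^1 or C = AC^0); the inversion bound \delta(n) is a generic
% placeholder and may differ from the bound used in the paper.
A family $f=\{f_n:\{0,1\}^n\to\{0,1\}^{m(n)}\}$ is a $\mathcal{C}$-fine-grained
one-way function if (i) $f$ is computable by uniform $\mathcal{C}$ circuits, and
(ii) for every $\mathcal{C}$-circuit family $A=\{A_n\}$ and all sufficiently
large $n$,
\[
  \Pr_{x\leftarrow\{0,1\}^n}\Bigl[f_n\bigl(A_n(f_n(x))\bigr)=f_n(x)\Bigr]\;\le\;\delta(n),
\]
so both the honest evaluator and the adversary are drawn from the same small
class $\mathcal{C}$, with the honest party using strictly fewer resources.
```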

    On Randomness Extraction in AC0

    We consider randomness extraction by AC0 circuits. The main parameter, n, is the length of the source, and all other parameters are functions of it. The additional extraction parameters are the min-entropy bound k=k(n), the seed length r=r(n), the output length m=m(n), and the (output) deviation bound epsilon=epsilon(n). For k <= n/log^(omega(1))(n), we show that AC0-extraction of r+1 bits is possible if and only if k * r > n/poly(log(n)). For k >= n/log^(O(1))(n), we show that AC0-extraction of r+Omega(r) bits is possible when r=O(log(n)), but leave open the question of whether more bits can be extracted in this case. The impossibility result is for constant epsilon, and the possibility result supports epsilon=1/poly(n). The impossibility result is for (possibly) non-uniform AC0, whereas the possibility result holds for uniform AC0. All our impossibility results hold even for the model of bit-fixing sources, where k coincides with the number of non-fixed (i.e., random) bits. We also consider deterministic AC0 extraction from various classes of restricted sources. In particular, for any constant delta>0, we give explicit AC0 extractors for poly(1/delta) independent sources that are each of min-entropy rate delta; and four sources suffice for delta=0.99. Also, we give non-explicit AC0 extractors for bit-fixing sources of entropy rate 1/poly(log(n)) (i.e., having n/poly(log(n)) unfixed bits). This shows that the known analysis of the "restriction method" (for making a circuit constant by fixing as few variables as possible) is tight for AC0 even if the restriction is picked deterministically depending on the circuit.
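    To illustrate the threshold k * r > n/poly(log(n)) stated above, here is a worked instantiation; the specific parameter choices are ours, picked purely for illustration, and are not taken from the paper.

```latex
% Worked instantiation of the threshold k \cdot r > n/\mathrm{poly}(\log n);
% the parameter choices below are illustrative, not from the paper.
Take a source of length $n$ with min-entropy $k = n/2^{\sqrt{\log n}}$, which lies
in the low-entropy regime $k \le n/\log^{\omega(1)} n$.  With seed length
$r = 2^{\sqrt{\log n}}$ the product is
\[
  k \cdot r \;=\; \frac{n}{2^{\sqrt{\log n}}}\cdot 2^{\sqrt{\log n}} \;=\; n
  \;>\; \frac{n}{\mathrm{poly}(\log n)},
\]
so AC0-extraction of $r+1$ bits is possible in this regime.  With the much shorter
seed $r = \log n$, however, $k \cdot r = n\log n / 2^{\sqrt{\log n}}$ is smaller
than $n/\log^{c} n$ for every constant $c$, so the product fails the threshold and
even one bit beyond the seed cannot be extracted in AC0.
```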

    Spread spectrum-based video watermarking algorithms for copyright protection

    Digital technologies have seen unprecedented expansion in recent years. The consumer can now benefit from hardware and software that was considered state-of-the-art only a few years ago. The advantages offered by digital technology are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but due to the analogue environment the subsequent copies had an inherent loss in quality. This was a natural way of limiting the multiple copying of video material. With digital technology this barrier disappears: it is possible to make as many copies as desired, without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system compliant with the recommendations drawn up by the EBU for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust yet invisible mark. The combination of these methods led to a major improvement, but the system was still not robust to several important geometrical attacks. To achieve this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain. By using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of the attack and reverse it. Once the attack has been reversed, the main watermark is recovered. The final result is a high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks. BBC Research & Development.
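    The core spread-spectrum embedding and correlation detection behind such systems can be illustrated with a minimal Python sketch. The host coefficient array, strength parameter alpha and key below are illustrative stand-ins; in the thesis the mark is cast on wavelet coefficients, shaped by a human visual model and protected by error correction, so this is a sketch of the general principle rather than the actual algorithm.

```python
import numpy as np

def embed_ss(coeffs, bit, key, alpha=0.05):
    """Additive spread-spectrum embedding of one bit into host coefficients.

    coeffs : 1-D array of host transform coefficients (e.g., a wavelet subband)
    bit    : payload bit (0 or 1), mapped to -1/+1
    key    : seed of the pseudorandom chip sequence (shared with the detector)
    alpha  : embedding strength (a perceptual model would modulate this locally)
    """
    rng = np.random.default_rng(key)
    chips = rng.choice([-1.0, 1.0], size=coeffs.shape)   # PN chip sequence
    return coeffs + alpha * (2 * bit - 1) * chips

def detect_ss(received, key):
    """Blind correlation detector: the sign of the correlation recovers the bit."""
    rng = np.random.default_rng(key)
    chips = rng.choice([-1.0, 1.0], size=received.shape)
    corr = float(np.dot(received, chips)) / received.size
    return 1 if corr > 0 else 0

if __name__ == "__main__":
    host = np.random.randn(4096)                     # stand-in for a wavelet subband
    marked = embed_ss(host, bit=1, key=42)
    noisy = marked + 0.1 * np.random.randn(4096)     # mild noise-like attack
    print(detect_ss(noisy, key=42))                  # 1 with high probability
```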

    New FPGA design tools and architectures


    Pseudorandom sequence generation using binary cellular automata

    A printed copy of this thesis is held at the İstanbul Şehir University Library. Random numbers are an integral part of many applications, from computer simulations, gaming and security protocols to the practices of applied mathematics and physics. As randomness plays more critical roles, cheap and fast generation methods are becoming a point of interest for both scientific and technological use. Cellular Automata (CA) is a class of functions which attracts attention mostly due to the potential it holds for modeling complex phenomena in nature, along with its discreteness and simplicity. Several studies in the literature express its potential for generating randomness and present its advantages over commonly used random number generators. Most of the research in the CA field focuses on one-dimensional 3-input CA rules. In this study, we perform an exhaustive search over the set of 5-input CA to find the rules with high randomness quality. As the measure of quality, the outcomes of the NIST Statistical Test Suite are used. Since the set of 5-input CA rules is very large (more than 4.2 billion rules), poor-quality rules are eliminated before testing. In the literature, entropy is generally used as the elimination criterion, but we preferred mutual information. The main motive behind that choice is to find a metric for elimination which is computed directly on the truth table of the CA rule instead of on the generated sequence. As the test results collected on 3- and 4-input CA indicate, all rules with very good statistical performance have zero mutual information. By exploiting this observation, we limit the set to be tested to the rules with zero mutual information. The reasons and consequences of this choice are discussed. In total, more than 248 million rules are tested. Among them, 120 rules show outstanding performance with all attempted neighborhood schemes. Along with these tests, one of them is subjected to more detailed testing and the results are included. Keywords: Cellular Automata, Pseudorandom Number Generators, Randomness Tests
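    The basic mechanism of a CA-based PRNG of the kind studied here can be sketched in a few lines of Python. The function below is illustrative: it uses the classic 3-input elementary rule 30 (also tested in the thesis) as the example and handles wider rules simply by widening the neighborhood; it is not the exact generator implementation used in the thesis.

```python
import numpy as np

def ca_prng_bits(rule, width, steps, seed_state, n_inputs=3):
    """Generate a pseudorandom bit stream from a 1-D binary cellular automaton.

    rule       : rule number; its binary expansion is the truth table over
                 2**n_inputs neighborhood patterns (Wolfram convention)
    width      : number of cells, with periodic (circular) boundary
    steps      : number of time steps, i.e., output bits
    seed_state : initial binary state, an iterable of 0/1 of length `width`
    n_inputs   : neighborhood size (3 for elementary CA, 5 for the rules
                 studied in the thesis)
    Returns the sequence of centre-cell values, a common way to tap a CA PRNG.
    """
    table = [(rule >> p) & 1 for p in range(2 ** n_inputs)]
    state = np.array(seed_state, dtype=np.uint8)
    half = n_inputs // 2
    out = []
    for _ in range(steps):
        # Encode each cell's neighborhood as an integer index into the table,
        # leftmost neighbor as the most significant bit.
        idx = np.zeros(width, dtype=np.int64)
        for offset in range(-half, half + 1):
            idx = (idx << 1) | np.roll(state, -offset)
        state = np.array([table[i] for i in idx], dtype=np.uint8)
        out.append(int(state[width // 2]))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    init = rng.integers(0, 2, size=257)
    bits = ca_prng_bits(rule=30, width=257, steps=64, seed_state=init)
    print("".join(map(str, bits)))
```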

    High-capacity Optical Wireless Communication by Directed Narrow Beams


    Error Correction and Concealment of Block-Based, Motion-Compensated Temporal Prediction, Transform Coded Video

    Error Correction and Concealment of Block-Based, Motion-Compensated Temporal Prediction, Transform Coded Video. David L. Robie, 133 pages. Directed by Dr. Russell M. Mersereau. The use of the Internet and wireless networks to bring multimedia to the consumer continues to expand. The transmission of these products is always subject to corruption due to errors such as bit errors or lost and ill-timed packets; however, in many cases, such as real-time video transmission, retransmission via automatic repeat request (ARQ) is not practical. Therefore, receivers must be capable of recovering from corrupted data. Errors can be mitigated using forward error correction in the encoder or error concealment techniques in the decoder. This thesis investigates the use of forward error correction (FEC) techniques in the encoder and error concealment in the decoder in block-based, motion-compensated, temporal-prediction, transform codecs. It shows improvement over standard FEC applications and improvements in error concealment relative to the Moving Picture Experts Group (MPEG) standard. To this end, this dissertation describes the following contributions and proofs-of-concept in the area of error concealment and correction in block-based video transmission: a temporal error concealment algorithm which uses motion-compensated macroblocks from previous frames; a spatial error concealment algorithm which uses the Hough transform to detect edges in both foreground and background colors and applies directional interpolation or directional filtering to provide improved edge reproduction; a codec which uses data hiding to transmit error correction information; an enhanced codec which builds upon the last by improving the performance of the codec in the error-free environment while maintaining excellent error recovery capabilities; and a method to allocate Reed-Solomon (R-S) packet-based forward error correction that decreases distortion (using a PSNR metric) at the receiver compared to standard FEC techniques. Finally, under the constraint of a constant bit rate, the tradeoff between traditional R-S FEC and alternate forward concealment information (FCI) is evaluated. Each of these developments is compared and contrasted with state-of-the-art techniques and shown to provide improvements using widely accepted metrics. The dissertation concludes with a discussion of future work. Ph.D. Committee Chair: Mersereau, Russell; Committee Member: Altunbasak, Yucel; Committee Member: Fekri, Faramarz; Committee Member: Lanterman, Aaron; Committee Member: Zhou, Haomin
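    As an illustration of the temporal concealment idea (filling lost regions with motion-compensated macroblocks copied from the previous frame), here is a minimal Python sketch. The function name, block size and the way motion vectors are supplied are assumptions made for the example; the dissertation's actual algorithms (including the Hough-transform spatial concealment and the data-hiding codec) are considerably more involved.

```python
import numpy as np

def conceal_temporal(frame, prev_frame, lost_blocks, motion_vectors, block=16):
    """Fill lost macroblocks of `frame` with motion-compensated blocks copied
    from the previously decoded frame.

    frame          : 2-D array, current frame with corrupted macroblocks
    prev_frame     : 2-D array, previously decoded frame
    lost_blocks    : list of (row, col) macroblock indices flagged as lost
    motion_vectors : dict mapping (row, col) -> (dy, dx); in practice the vector
                     is estimated from correctly received neighboring macroblocks
    block          : macroblock size in pixels
    """
    h, w = prev_frame.shape
    repaired = frame.copy()
    for (r, c) in lost_blocks:
        dy, dx = motion_vectors.get((r, c), (0, 0))   # fall back to zero motion
        y0 = int(np.clip(r * block + dy, 0, h - block))
        x0 = int(np.clip(c * block + dx, 0, w - block))
        repaired[r*block:(r+1)*block, c*block:(c+1)*block] = \
            prev_frame[y0:y0 + block, x0:x0 + block]
    return repaired

if __name__ == "__main__":
    prev = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    cur = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    fixed = conceal_temporal(cur, prev, lost_blocks=[(1, 2)],
                             motion_vectors={(1, 2): (3, -2)})
    print(fixed.shape)   # concealed frame, same size as the input
```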