
    Comparison analysis of stream cipher algorithms for digital communication

    Get PDF
    The broadcast nature of radio communication, such as in the HF (High Frequency) spectrum, exposes the transmitted information to unauthorized third parties. Confidentiality is ensured by employing a cipher system. For bulk transmission of data, stream ciphers are ideal choices over block ciphers due to their faster implementation speed and the absence of error propagation. The stream cipher algorithms evaluated are based on the linear feedback shift register (LFSR) with a nonlinear combining function. Using a common key length and worst-case conditions, the strength of several stream cipher algorithms is evaluated using statistical tests, correlation attack, linear complexity profile and a nonstandard test. The best algorithm is the one that passes all of the tests.
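    The LFSR-with-nonlinear-combiner construction mentioned above can be sketched as follows; the register lengths, taps, and the Geffe-style combining function are illustrative assumptions, not the specific algorithms evaluated in the paper.

```python
def lfsr(state, taps):
    """Fibonacci LFSR: yields one output bit per step.

    state -- initial register contents (list of 0/1, nonzero)
    taps  -- positions XORed together to form the feedback bit
    """
    state = list(state)
    while True:
        out = state[-1]
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]  # shift right, insert feedback
        yield out

def geffe_keystream(n):
    """Combine three LFSRs with the nonlinear Geffe function
    f(x1, x2, x3) = x1*x2 XOR (1 - x1)*x3 (illustrative parameters)."""
    a = lfsr([1, 0, 1], [0, 2])        # 3-bit register
    b = lfsr([1, 1, 0, 1], [0, 3])     # 4-bit register
    c = lfsr([1, 0, 0, 1, 1], [0, 4])  # 5-bit register
    out = []
    for _ in range(n):
        x1, x2, x3 = next(a), next(b), next(c)
        out.append((x1 & x2) ^ ((x1 ^ 1) & x3))
    return out
```

    A correlation attack of the kind the paper applies exploits the fact that such a combiner's output is statistically correlated with individual registers.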

    Linear Complexity Profile Test

    Get PDF
    A new test for assessing the quality of random sequences is developed, based on the linear complexity profile. The test operates on the number of jumps of the linear complexity and in certain cases is considerably more effective than the corresponding test from the NIST suite. The task of constructing new effective criteria for detecting deviations from randomness is relevant. The test, which the authors call the LP test (Linear Profile test), is based on the random variable S_n = (N_n − n/4)/√(n/8). The running speed of the test is approximately the same as that of the linear complexity test from the NIST package, since both tests rely on the Berlekamp-Massey algorithm. However, the LP test is somewhat simpler to implement, because its statistic has a standard normal distribution, unlike the specific distribution of the statistic in the NIST test. The authors' investigations showed that the LP test is much more effective at least on the following types of low-quality input sequences (here "noise" means inverting each bit with probability p): a. Linear recurrent sequences with noise. b. Sequences formed by regular or random alternation of segments of different linear recurrent sequences. c. Sequences formed by regular or random alternation of segments of linear recurrent sequences and segments produced by a good pseudorandom number generator. d. Sequences formed as in the previous item, with noise. e. Sequences formed from linear recurrent sequences by random deletion of bits.
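    The LP-test statistic can be sketched in code; this Berlekamp-Massey implementation and the jump-counting convention are our reconstruction from the abstract, not the authors' code, and we read the statistic as S_n = (N_n − n/4)/√(n/8) where N_n is the number of jumps of the linear complexity profile.

```python
import math

def berlekamp_massey(bits):
    """Berlekamp-Massey over GF(2); returns the final linear complexity L
    and the number of jumps N_n of the linear complexity profile."""
    n = len(bits)
    c = [0] * n
    b = [0] * n
    c[0] = b[0] = 1
    L, m, jumps = 0, -1, 0
    for i in range(n):
        # Discrepancy between the next bit and the current LFSR's prediction.
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            p = i - m
            for j in range(n - p):
                c[j + p] ^= b[j]
            if 2 * L <= i:  # the linear complexity L jumps at this step
                L, m, b = i + 1 - L, i, t
                jumps += 1
    return L, jumps

def lp_statistic(bits):
    """S_n = (N_n - n/4) / sqrt(n/8), approximately standard normal
    for a truly random input sequence."""
    n = len(bits)
    _, jumps = berlekamp_massey(bits)
    return (jumps - n / 4) / math.sqrt(n / 8)
```

    On a strongly non-random input such as the alternating sequence 0101…, the statistic takes a large negative value because the profile jumps only once, which is how the test flags linear recurrent inputs.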

    Postprocessing for quantum random number generators: entropy evaluation and randomness extraction

    Full text link
    Quantum random-number generators (QRNGs) can, in principle, offer a means to generate information-theoretically provable random numbers. In practice, unfortunately, the quantum randomness is inevitably mixed with classical randomness due to classical noises. To distill this quantum randomness, one needs to quantify the randomness of the source and apply a randomness extractor. Here, we propose a generic framework for evaluating quantum randomness of real-life QRNGs by min-entropy, and apply it to two different existing quantum random-number systems in the literature. Moreover, we provide a guideline for QRNG data postprocessing, for which we implement two information-theoretically provable randomness extractors: the Toeplitz-hashing extractor and Trevisan's extractor. Comment: 13 pages, 2 figures
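    The Toeplitz-hashing extractor mentioned above can be sketched as follows; the bit-list interface and the dimensions are illustrative assumptions, and practical implementations use FFT-based matrix-vector multiplication rather than this direct loop.

```python
def toeplitz_extract(raw_bits, seed_bits, m):
    """Extract m nearly uniform output bits from n raw input bits by
    multiplying (over GF(2)) with an m x n binary Toeplitz matrix,
    which is fully determined by n + m - 1 seed bits."""
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1, "Toeplitz seed must have n + m - 1 bits"
    out = []
    for i in range(m):
        acc = 0
        for j in range(n):
            # Toeplitz property: entry (i, j) depends only on the
            # diagonal index i - j, shifted to address seed_bits.
            acc ^= seed_bits[i - j + n - 1] & raw_bits[j]
        out.append(acc)
    return out
```

    The output length m is chosen from the min-entropy estimate of the source (fewer output bits than the entropy available), which is exactly the role of the entropy-evaluation framework in the paper.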

    Non-Cooperative Rational Interactive Proofs

    Get PDF
    Interactive-proof games model the scenario where an honest party interacts with powerful but strategic provers, to elicit from them the correct answer to a computational question. Interactive proofs are increasingly used as a framework to design protocols for computation outsourcing. Existing interactive-proof games largely fall into two categories: either as games of cooperation such as multi-prover interactive proofs and cooperative rational proofs, where the provers work together as a team; or as games of conflict such as refereed games, where the provers directly compete with each other in a zero-sum game. Neither of these extremes truly capture the strategic nature of service providers in outsourcing applications. How to design and analyze non-cooperative interactive proofs is an important open problem. In this paper, we introduce a mechanism-design approach to define a multi-prover interactive-proof model in which the provers are rational and non-cooperative - they act to maximize their expected utility given others' strategies. We define a strong notion of backwards induction as our solution concept to analyze the resulting extensive-form game with imperfect information. We fully characterize the complexity of our proof system under different utility gap guarantees. (At a high level, a utility gap of u means that the protocol is robust against provers that may not care about a utility loss of 1/u.) We show, for example, that the power of non-cooperative rational interactive proofs with a polynomial utility gap is exactly equal to the complexity class P^{NEXP}.

    When Can Limited Randomness Be Used in Repeated Games?

    Full text link
    The central result of classical game theory states that every finite normal form game has a Nash equilibrium, provided that players are allowed to use randomized (mixed) strategies. However, in practice, humans are known to be bad at generating random-like sequences, and true random bits may be unavailable. Even if the players have access to enough random bits for a single instance of the game, their randomness might be insufficient if the game is played many times. In this work, we ask whether randomness is necessary for equilibria to exist in finitely repeated games. We show that for a large class of games containing arbitrary two-player zero-sum games, approximate Nash equilibria of the n-stage repeated version of the game exist if and only if both players have Ω(n) random bits. In contrast, we show that there exists a class of games for which no equilibrium exists in pure strategies, yet the n-stage repeated version of the game has an exact Nash equilibrium in which each player uses only a constant number of random bits. When the players are assumed to be computationally bounded, if cryptographic pseudorandom generators (or, equivalently, one-way functions) exist, then the players can base their strategies on "random-like" sequences derived from only a small number of truly random bits. We show that, in contrast, in repeated two-player zero-sum games, if pseudorandom generators do not exist, then Ω(n) random bits remain necessary for equilibria to exist.

    POPE: Partial Order Preserving Encoding

    Get PDF
    Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption/encoding (OPE), which results in ciphertexts that preserve the relative order of the underlying plaintexts, thus allowing range and comparison queries to be performed directly on ciphertexts. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads as are common in "big data" applications, while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(n^ε) non-persistent client storage for 0 < ε < 1, our POPE scheme provides extremely fast batch insertion consisting of a single round, and efficient search with O(1) amortized cost for up to O(n^{1-ε}) search queries. This improved security and performance makes our scheme better suited for today's insert-heavy databases. Comment: Appears in ACM CCS 2016 Proceedings