18,060 research outputs found

    Quantifying Shannon's Work Function for Cryptanalytic Attacks

    Attacks on cryptographic systems are limited by the available computational resources. A theoretical understanding of these resource limitations is needed to evaluate the security of cryptographic primitives and procedures. This study uses an Attacker versus Environment game formalism based on computability logic to quantify Shannon's work function and evaluate resource use in cryptanalysis. A simple cost function is defined that makes it possible to quantify a wide range of theoretical and real computational resources. With this approach, the use of custom hardware, e.g., FPGA boards, in cryptanalysis can be analyzed. Applied to real cryptanalytic problems, it suggests, for instance, that the computer time needed to break some simple 90-bit strong cryptographic primitives might theoretically be less than two years.
    Comment: 19 pages
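    For a sense of scale, the sketch below computes the expected brute-force time for a 90-bit keyspace. The aggregate key-test rate is a hypothetical figure chosen to land near the paper's headline estimate; the paper's work function accounts for much more than raw key-testing throughput.

```python
# Toy brute-force timing estimate for an n-bit keyspace. The rate below
# is an assumed figure, not a number taken from the paper.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def brute_force_years(key_bits: int, keys_per_second: float) -> float:
    """Expected years to find a key by exhaustive search."""
    expected_trials = 2 ** (key_bits - 1)  # on average, half the keyspace
    return expected_trials / keys_per_second / SECONDS_PER_YEAR

# Assumption: custom hardware testing 10**19 keys per second in aggregate.
print(f"{brute_force_years(90, 1e19):.2f} years")  # roughly two years
```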

    Quantifying Resource Use in Computations

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance in cryptanalysis, and in neuroscience, for instance in comparative neuroanatomy. A System versus Environment game formalism based on Computability Logic is proposed that makes it possible to define a computational work function describing the theoretical and physical resources needed to perform any purely algorithmic computation. Within this formalism, the cost of a computation is defined as the sum of information storage over the steps of the computation. The size of the computational device, e.g., the action table of a Universal Turing Machine, the number of transistors in silicon, or the number and complexity of synapses in a neural net, is explicitly included in the computational cost. The proposed cost function leads in a natural way to known computational trade-offs and can be used to estimate the computational capacity of real silicon hardware and neural nets. The theory is applied to a historical case of 56-bit DES key recovery as an example of application to cryptanalysis. Furthermore, the relative computational capacities of human brain neurons and the C. elegans nervous system are estimated as an example of application to neural nets.
    Comment: 26 pages, no figures
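    The cost definition lends itself to a direct transcription. Below is a minimal sketch of the idea as stated in the abstract (information storage summed over the steps, with the device size counted at every step); the Turing-machine encoding and the units are illustrative assumptions, not the paper's definitions.

```python
# Minimal sketch of the abstract's cost idea: total cost is the sum,
# over all steps, of the information stored at that step, with the
# size of the device itself (here, a Turing machine's transition
# table) counted in every step. Encoding and units (bits) are
# illustrative assumptions, not the paper's definitions.
import math

def device_bits(transitions: dict) -> float:
    """Rough size of the machine: bits to encode its transition table."""
    n = len(transitions)
    return n * math.log2(max(n, 2))  # crude encoding estimate

def run_cost(transitions, tape, state="q0", head=0, max_steps=10_000):
    dev = device_bits(transitions)
    cost = 0.0
    for _ in range(max_steps):
        if state == "halt":
            return cost
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        cost += dev + len(tape)  # device size + tape cells in use, per step
    raise RuntimeError("step budget exceeded")

# A machine that writes three 1s and halts.
T = {("q0", "_"): ("q1", "1", "R"),
     ("q1", "_"): ("q2", "1", "R"),
     ("q2", "_"): ("halt", "1", "R")}
print(run_cost(T, {}))
```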

    Swinging and Tumbling of Fluid Vesicles in Shear Flow

    The dynamics of fluid vesicles in simple shear flow is studied using mesoscale simulations of dynamically triangulated surfaces, as well as a theoretical approach based on two variables, a shape parameter and the inclination angle, which has no adjustable parameters. We show that between the well-known tank-treading and tumbling states, a new "swinging" state can appear. We predict the dynamic phase diagram as a function of the shear rate, the viscosities of the membrane and the internal fluid, and the reduced vesicle volume. Our results agree well with recent experiments.
    Comment: 4 pages, 4 figures
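    For orientation, the classic Keller-Skalak reduced model (a single-variable precursor of the two-variable model used here, not the paper's model) already separates the two well-known states through the inclination angle alone: a stable fixed angle means tank-treading, while a monotonically rotating angle means tumbling.

```python
# Sketch of single-variable Keller-Skalak inclination dynamics,
# included only to illustrate the tank-treading/tumbling distinction;
# the paper's model adds a shape parameter as a second variable.
# theta' = 0.5 * shear_rate * (B * cos(2*theta) - 1), where B
# encapsulates geometry and the viscosity contrast.
import math

def evolve_angle(B, shear_rate=1.0, theta0=0.3, dt=1e-3, steps=200_000):
    theta = theta0
    for _ in range(steps):
        theta += dt * 0.5 * shear_rate * (B * math.cos(2 * theta) - 1.0)
    return theta

for B in (1.5, 0.8):
    theta_end = evolve_angle(B)
    if B > 1:
        # Stable stationary angle theta* = 0.5*acos(1/B): tank-treading.
        print(f"B={B}: fixed angle {0.5 * math.acos(1 / B):.3f}, "
              f"got {theta_end:.3f}")
    else:
        # theta' < 0 everywhere: continuous rotation, i.e. tumbling.
        print(f"B={B}: angle keeps decreasing (tumbling), "
              f"theta={theta_end:.1f}")
```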

    Coulomb impurity in graphene

    We consider the problem of screening of an electrically charged impurity in a clean graphene sheet. When electron-electron interactions are neglected, the screening charge has a sign opposite to that of the impurity and is localized near the impurity. Interactions between electrons smear out the induced charge density, giving a large-distance tail that follows approximately, but not exactly, an r^{-2} behavior, with the same sign as the impurity charge.
    Comment: 10 pages, 3 figures; (v2) corrected sign error in Eq. (13); (v3) corrected figure
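    A quick way to see why an r^{-2} tail is natural is a standard scaling argument (not the paper's derivation). Massless Dirac electrons in two dimensions have no intrinsic length scale: the kinetic energy \hbar v_F k and the Coulomb energy e^2/(\epsilon r) both scale as an inverse length. Any induced density at distance r from the impurity must therefore be built from r alone,

\[
  \delta n(r) \propto \frac{C(Z\alpha)}{r^{2}},
  \qquad
  \alpha = \frac{e^{2}}{\epsilon \hbar v_{F}},
\]

    with C a dimensionless coefficient. The slow logarithmic running of \alpha with distance is what makes the actual tail only approximately r^{-2}.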

    Species Differentiation Of Fish Samples By Restriction Fragment Length Polymorphism Analysis Of Cytochrome B Gene

    A method measuring the polymorphism of the fragments produced when polymerase chain reaction products are cut by specific restriction enzymes (polymerase chain reaction-restriction fragment length polymorphism, RFLP-PCR) was used to differentiate several species of raw fish. The mitochondrial cytochrome b site, amplified with universal primers, was cut with four restriction enzymes (Bfa I, Hinf I, Msp I, Mbo II) so that its short fragments could be analyzed. The results obtained from digestion with these restriction enzymes could indeed be used to distinguish each sampled fish species. This study shows that PCR and RFLP-PCR constitute a sensitive method that can be carried out quickly to differentiate various species of raw fish.
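    As an illustration of the in-silico counterpart of this fingerprinting step, the sketch below derives approximate fragment-length patterns from recognition-site positions. The sequence is invented, only the forward strand is scanned, and Mbo II (which cuts downstream of its GAAGA site) is treated only approximately.

```python
# Approximate in-silico restriction digest: fragment lengths are derived
# from recognition-site positions on the forward strand only. This is a
# simplification (Mbo II actually cuts 8 nt downstream of GAAGA, and
# real digests consider both strands); the sequence below is invented,
# not a real cytochrome b amplicon.
import re

RECOGNITION = {
    "BfaI":  "CTAG",
    "HinfI": "GA[ACGT]TC",  # GANTC
    "MspI":  "CCGG",
    "MboII": "GAAGA",
}

def fragment_lengths(seq: str, pattern: str) -> list[int]:
    cuts = [m.start() for m in re.finditer(f"(?={pattern})", seq)]
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:]) if b > a]

amplicon = "ATGCTAGGACCGGTTGAATCCGAAGATTCCGGACTAGCC"  # invented
for enzyme, pattern in RECOGNITION.items():
    print(enzyme, fragment_lengths(amplicon, pattern))
```

    Each enzyme yields a distinct length pattern; comparing such patterns across samples is the basis of the species differentiation described above.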

    Note on cosmology of dimensionally reduced gravitational Chern-Simons

    We present cosmological solutions derived from the dimensionally reduced gravitational Chern-Simons term and obtain a smooth transition solution from the decelerated phase (AdS) to the accelerated phase (dS).
    Comment: 3 pages, minor changes, references added, version to appear in PR

    An Analysis of the Search Spaces for Generate and Validate Patch Generation Systems

    We present the first systematic analysis of the characteristics of patch search spaces for automatic patch generation systems. We analyze the search spaces of two current state-of-the-art systems, SPR and Prophet, with 16 different search space configurations. Our results are derived from an analysis of 1104 different search spaces and 768 patch generation executions. Together, these experiments consumed over 9000 hours of CPU time on Amazon EC2. The analysis shows that 1) correct patches are sparse in the search spaces (typically at most one correct patch per search space per defect), 2) incorrect patches that nevertheless pass all of the test cases in the validation test suite are typically orders of magnitude more abundant, and 3) leveraging information other than the test suite is therefore critical for enabling the system to successfully isolate correct patches. We also characterize a key tradeoff in the structure of the search spaces. Larger and richer search spaces that contain correct patches for more defects can actually cause systems to find fewer, not more, correct patches. We identify two reasons for this phenomenon: 1) increased validation times because of the presence of more candidate patches and 2) more incorrect patches that pass the test suite and block the discovery of correct patches. These fundamental properties, which are all characterized for the first time in this paper, help explain why past systems often fail to generate correct patches and help identify challenges, opportunities, and productive future directions for the field.
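    The tradeoff admits a crude back-of-the-envelope model (an illustration with invented numbers, not the paper's data): under a fixed validation budget, enlarging the space both dilutes the single correct patch and multiplies the test-suite-passing incorrect patches that can be accepted first.

```python
# Toy model of the search-space tradeoff described above. All numbers
# are invented for illustration; the paper's measurements are not used.
# Assumptions: candidates are validated in uniformly random order under
# a fixed budget, and a defect is "solved" only if a correct patch is
# the first test-suite-passing patch encountered within the budget.

def p_solved(space_size, n_plausible_incorrect, budget, n_correct=1):
    """Probability the first passing patch seen within the budget is
    correct -- a crude independence approximation."""
    passing = n_correct + n_plausible_incorrect
    p_first_correct = n_correct / passing
    p_seen = min(1.0, budget * passing / space_size)
    return p_first_correct * p_seen

small = p_solved(space_size=10_000, n_plausible_incorrect=3, budget=5_000)
large = p_solved(space_size=1_000_000, n_plausible_incorrect=300,
                 budget=5_000)
print(f"small space: {small:.3f}, large space: {large:.3f}")
# The larger, richer space yields a far lower per-defect success rate.
```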

    Does parton saturation at high density explain hadron multiplicities at LHC?

    An addendum to our previous papers in Phys. Lett. B539 (2002) 46 and Phys. Lett. B502 (2001) 51, contributed to the CERN meeting "First data from the LHC heavy ion run", March 4, 2011.
    Comment: 6 pages, contribution to the CERN meeting "First data from the LHC heavy ion run", March 4, 2011