    Guesswork, large deviations and Shannon entropy

    How hard is it to guess a password? Massey showed that a simple function of the Shannon entropy of the distribution from which the password is selected is a lower bound on the expected number of guesses, but one which is not tight in general. In a series of subsequent papers, under ever less restrictive stochastic assumptions, an asymptotic relationship, as password length grows, between scaled moments of the guesswork and specific Rényi entropy was identified. Here we show that, when appropriately scaled, as the password length grows the logarithm of the guesswork satisfies a Large Deviation Principle (LDP), providing direct estimates of the guesswork distribution when passwords are long. The rate function governing the LDP possesses a specific, restrictive form that encapsulates underlying structure in the nature of guesswork. Returning to Massey's original observation, a corollary to the LDP shows that the expectation of the logarithm of the guesswork is the specific Shannon entropy of the password selection process.
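    For reference, the standard relations this abstract alludes to can be written explicitly in the i.i.d. case; the sketch below uses illustrative notation, with G(X^n) the position of X^n in an optimal guessing order and H_α the Rényi entropy of order α.

```latex
% Massey (1994): a Shannon-entropy lower bound on expected guesswork
% (valid when H(X) >= 2 bits); it is not tight in general.
\[
  \mathbb{E}\!\left[ G(X) \right] \;\ge\; \tfrac{1}{4}\, 2^{H(X)} + 1 .
\]

% Arikan (1996), i.i.d. case: scaled moments of guesswork are governed by
% the Renyi entropy of order 1/(1+\rho).
\[
  \lim_{n\to\infty} \frac{1}{n} \log \mathbb{E}\!\left[ G(X^n)^{\rho} \right]
  \;=\; \rho\, H_{\frac{1}{1+\rho}}(X), \qquad \rho > 0 .
\]

% Corollary to the LDP described above: the expected logarithm of the
% guesswork is asymptotically the specific Shannon entropy.
\[
  \lim_{n\to\infty} \frac{1}{n}\, \mathbb{E}\!\left[ \log G(X^n) \right]
  \;=\; H(X) .
\]
```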

    Guesswork with Quantum Side Information

    What is the minimum number of guesses needed on average to guess a realization of a random variable correctly? The answer to this question led to the introduction of a quantity called guesswork by Massey in 1994, which can be viewed as an alternative security criterion to entropy. In this paper, we consider the guesswork in the presence of quantum side information, and show that a general sequential guessing strategy is equivalent to performing a single quantum measurement and choosing a guessing strategy based on the outcome. We use this result to deduce entropic one-shot and asymptotic bounds on the guesswork in the presence of quantum side information, and to formulate a semi-definite program (SDP) to calculate the quantity. We evaluate the guesswork for a simple example involving the BB84 states, both numerically and analytically, and we prove a continuity result that certifies the security of slightly imperfect key states when the guesswork is used as the security criterion.
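    For orientation, the guesswork of a plain classical distribution (no side information) is straightforward to compute; a minimal Python sketch with an illustrative four-outcome distribution follows. The SDP formulation for the quantum-side-information case described in the abstract is not reproduced here.

```python
import numpy as np

def guesswork(p):
    """Expected number of guesses for an optimal guesser: sort outcomes by
    decreasing probability and guess in that order, so E[G] = sum_i i * p_(i)."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]   # decreasing probabilities
    ranks = np.arange(1, len(p) + 1)
    return float(np.sum(ranks * p))

# Illustrative four-outcome distribution (hypothetical numbers).
p = np.array([0.5, 0.25, 0.15, 0.10])
shannon = -np.sum(p * np.log2(p))

print(f"guesswork       = {guesswork(p):.2f}")   # 1.85 expected guesses
print(f"Shannon entropy = {shannon:.2f} bits")   # a related but distinct criterion
```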

    Investigating the Distribution of Password Choices

    In this paper we will look at the distribution with which passwords are chosen. Zipf's Law is commonly observed in lists of chosen words. Using password lists from four different on-line sources, we will investigate whether Zipf's Law is a good candidate for describing the frequency with which passwords are chosen. We look at a number of standard statistics used to measure the security of password distributions, and see whether modelling the data using Zipf's Law produces good estimates of these statistics. We then look at the similarity of the password distributions from each of our sources, using guessing as a metric. This shows that these distributions provide effective tools for cracking passwords. Finally, we will show how to shape the distribution of passwords in use by occasionally asking users to choose a different password.
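    As an illustration of the kind of fit involved (not the paper's exact methodology), a Zipf exponent can be estimated from a rank-frequency list by least squares in log-log space, and a standard guessing statistic such as the min-entropy computed alongside it; the counts below are hypothetical.

```python
import numpy as np

def fit_zipf(frequencies):
    """Least-squares fit of a Zipf law f(r) ~ C * r^(-s) to a rank-frequency
    list, done in log-log space. Returns the exponent s and prefactor C."""
    f = np.sort(np.asarray(frequencies, dtype=float))[::-1]
    r = np.arange(1, len(f) + 1)
    slope, intercept = np.polyfit(np.log(r), np.log(f), 1)
    return -slope, np.exp(intercept)

def min_entropy(frequencies):
    """Min-entropy (in bits) of the empirical password distribution, one of
    the standard statistics used to measure guessing resistance."""
    f = np.asarray(frequencies, dtype=float)
    p = f / f.sum()
    return -np.log2(p.max())

# Hypothetical counts standing in for a leaked password list.
counts = [12000, 6100, 4200, 3050, 2500, 2100, 1800, 1600, 1400, 1300]
s, C = fit_zipf(counts)
print(f"Zipf exponent s = {s:.2f}, min-entropy = {min_entropy(counts):.2f} bits")
```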

    Guessing a password over a wireless channel (on the effect of noise non-uniformity)

    A string is sent over a noisy channel that erases some of its characters. Knowing the statistical properties of the string's source and which characters were erased, a listener that is equipped with an ability to test the veracity of a string, one string at a time, wishes to fill in the missing pieces. Here we characterize the influence of the stochastic properties of both the string's source and the noise on the channel on the distribution of the number of attempts required to identify the string, its guesswork. In particular, we establish that the average noise on the channel is not a determining factor for the average guesswork and illustrate simple settings where one recipient with, on average, a better channel than another recipient, has higher average guesswork. These results stand in contrast to those for the capacity of wiretap channels and suggest the use of techniques such as friendly jamming with pseudo-random sequences to exploit this guesswork behavior. Comment: Asilomar Conference on Signals, Systems & Computers, 201
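    The point that average noise does not determine average guesswork can be seen in a toy calculation: assuming the erased characters are i.i.d. uniform bits, the expected number of guesses to recover k erasures is (2^k + 1)/2, which is convex in k, so two channels with the same average number of erasures can yield very different average guesswork. A small sketch under those assumptions:

```python
# Toy illustration (assumptions: erased characters are i.i.d. uniform bits,
# and the guesser tries the 2^k possible completions in some fixed order).
def expected_guesswork_given_k(k):
    """Expected guesses to recover k erased uniform bits: (2^k + 1) / 2."""
    return (2 ** k + 1) / 2

def average_guesswork(erasure_count_dist):
    """Average guesswork for a channel described by a distribution over the
    number of erased characters, given as {k: probability}."""
    return sum(p * expected_guesswork_given_k(k)
               for k, p in erasure_count_dist.items())

# Two channels with the same *average* number of erasures (2 characters):
channel_a = {2: 1.0}           # always erases exactly 2 characters
channel_b = {0: 0.5, 4: 0.5}   # erases 0 or 4 characters, equally likely

print(average_guesswork(channel_a))  # 2.5
print(average_guesswork(channel_b))  # 4.75 -- worse, despite equal average noise
```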

    Exact Probability Distribution versus Entropy

    The problem addressed concerns the determination of the average number of successive attempts needed to guess a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is to guess words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements concerning both memory and CPU time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent). For many probability distributions the density of the logarithm of probability products is close to a normal distribution. For those cases it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average, compared to the total number, decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
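    In the same spirit, the exact average number of guesses can be computed by brute force when the alphabet and word length are tiny (the paper's approximations target the realistic sizes where such enumeration is impossible); the letter probabilities below are hypothetical.

```python
import itertools
import math

# Exact computation for a first-order approximation: letters drawn
# independently, words guessed in decreasing order of probability.
letter_probs = [0.5, 0.3, 0.2]   # hypothetical letter probabilities
word_length = 6                  # kept tiny so enumeration stays feasible

word_probs = sorted(
    (math.prod(c) for c in itertools.product(letter_probs, repeat=word_length)),
    reverse=True,
)
avg_guesses = sum(rank * p for rank, p in enumerate(word_probs, start=1))

entropy_per_letter = -sum(p * math.log2(p) for p in letter_probs)
print(f"exact average guesses: {avg_guesses:.1f}")
print(f"2^(word entropy):      {2 ** (word_length * entropy_per_letter):.1f}")
```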

    Centralized vs Decentralized Multi-Agent Guesswork

    We study a notion of guesswork where multiple agents intend to launch a coordinated brute-force attack to find a single binary secret string, and each agent has access to side information generated through either a binary erasure channel (BEC) or a binary symmetric channel (BSC). The average number of trials required to find the secret string grows exponentially with the length of the string, and the rate of the growth is called the guesswork exponent. We compute the guesswork exponent for several multi-agent attacks. We show that a multi-agent attack reduces the guesswork exponent compared to a single agent, even when the agents do not exchange information to coordinate their attack and instead individually guess the secret string using a predetermined scheme in a decentralized fashion. Further, we show that the guesswork exponent of two agents who do coordinate their attack is strictly smaller than that of any finite number of agents individually performing decentralized guesswork. Comment: Accepted at IEEE International Symposium on Information Theory (ISIT) 201
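    As a baseline for these multi-agent exponents (the BEC/BSC side-information results themselves are not reproduced here), the single-agent exponent without side information for an i.i.d. Bernoulli(p) secret is the Rényi entropy of order 1/2, per Arikan's moment result; the exact computation below illustrates the exponential growth and the rate it approaches.

```python
import math

def exact_expected_guesswork(n, p):
    """Exact E[G(X^n)] for an i.i.d. Bernoulli(p) secret string of length n,
    guessed in decreasing order of probability (p < 0.5 assumed, so strings
    with fewer ones are guessed first)."""
    total, rank_offset = 0.0, 0
    for k in range(n + 1):                       # k = number of ones
        count = math.comb(n, k)
        prob = (p ** k) * ((1 - p) ** (n - k))   # probability of each such string
        # ranks rank_offset+1 .. rank_offset+count all carry probability `prob`
        total += prob * (count * rank_offset + count * (count + 1) / 2)
        rank_offset += count
    return total

p, n = 0.2, 40
exponent_estimate = math.log2(exact_expected_guesswork(n, p)) / n
renyi_half = 2 * math.log2(math.sqrt(p) + math.sqrt(1 - p))
print(f"(1/n) log2 E[G]     ~= {exponent_estimate:.3f}")
print(f"Renyi entropy H_1/2  = {renyi_half:.3f}  (single-agent exponent, no side information)")
```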