
    Naturally Rehearsing Passwords

    We introduce quantitative usability and security models to guide the design of password management schemes --- systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing usability assumptions. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and validated through empirical studies. Given rehearsal requirements and a user's visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of his passwords as a measure of the usability of the password scheme. Our usability model leads us to a key observation: password reuse benefits users not only by reducing the number of passwords that the user has to memorize, but more importantly by increasing the natural rehearsal rate for each password. We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks. Observing that current password management schemes are either insecure or unusable, we present Shared Cues --- a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem to achieve these competing goals.
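
    The "extra rehearsals" usability measure can be illustrated with a short sketch. The following Python snippet is a minimal, illustrative model only: it assumes an expanding rehearsal schedule (each interval a fixed factor longer than the last) and a given list of natural visit times, and counts the rehearsal intervals that no visit covers. The function name, the interval parameters, and the example schedules are hypothetical and not taken from the paper.

    ```python
    from bisect import bisect_left

    def extra_rehearsals(visits, horizon_days, base=1.0, factor=2.0):
        """Count rehearsals the user must do on top of natural logins.

        visits       -- sorted days on which the user naturally logs in to
                        some account that rehearses this secret
        horizon_days -- length of the period we account for
        base, factor -- expanding rehearsal schedule: the i-th interval has
                        length base * factor**i (illustrative choice)
        """
        extra = 0
        t, length = 0.0, base
        while t < horizon_days:
            # A rehearsal requirement for [t, t + length) is satisfied
            # naturally if some visit falls inside that interval.
            i = bisect_left(visits, t)
            if i >= len(visits) or visits[i] >= t + length:
                extra += 1          # no visit: one extra rehearsal needed
            t += length
            length *= factor        # intervals expand after each rehearsal
        return extra

    if __name__ == "__main__":
        # A secret shared across frequently visited accounts vs. a
        # dedicated password for a rarely visited account.
        frequent = sorted([0.5, 1.5, 3.0, 6.5, 13.0, 27.0, 55.0])
        rare = [20.0, 80.0]
        print("shared, frequent visits:", extra_rehearsals(frequent, 100))
        print("dedicated, rare visits: ", extra_rehearsals(rare, 100))
    ```

    Under these assumptions the shared secret needs far fewer extra rehearsals than the rarely used dedicated password, which mirrors the paper's observation that reuse raises the natural rehearsal rate.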

    Towards Human Computable Passwords

    An interesting challenge for the cryptography community is to design authentication protocols that are so simple that a human can execute them without relying on a fully trusted computer. We propose several candidate authentication protocols for a setting in which the human user can only receive assistance from a semi-trusted computer --- a computer that stores information and performs computations correctly but does not provide confidentiality. Our schemes use a semi-trusted computer to store and display public challenges $C_i \in [n]^k$. The human user memorizes a random secret mapping $\sigma:[n]\rightarrow\mathbb{Z}_d$ and authenticates by computing responses $f(\sigma(C_i))$ to a sequence of public challenges, where $f:\mathbb{Z}_d^k\rightarrow\mathbb{Z}_d$ is a function that is easy for the human to evaluate. We prove that any statistical adversary needs to sample $m=\tilde{\Omega}(n^{s(f)})$ challenge-response pairs to recover $\sigma$, for a security parameter $s(f)$ that depends on two key properties of $f$. To obtain our results, we apply the general hypercontractivity theorem to lower bound the statistical dimension of the distribution over challenge-response pairs induced by $f$ and $\sigma$. Our lower bounds apply to arbitrary functions $f$ (not just to functions that are easy for a human to evaluate), and generalize recent results of Feldman et al. As an application, we propose a family of human computable password functions $f_{k_1,k_2}$ in which the user needs to perform $2k_1+2k_2+1$ primitive operations (e.g., adding two digits or remembering $\sigma(i)$), and we show that $s(f) = \min\{k_1+1, (k_2+1)/2\}$. For these schemes, we prove that forging passwords is equivalent to recovering the secret mapping. Thus, our human computable password schemes can maintain strong security guarantees even after an adversary has observed the user log in to many different accounts.

    Comment: Fixed bug in definition of $Q^{f,j}$ and modified proofs accordingly.
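
    The challenge-response flow can be sketched in a few lines of Python. This is a simplified stand-in, not the paper's construction: the secret mapping $\sigma$, the challenge format $C \in [n]^k$, and the shape of the response $f(\sigma(C))$ follow the abstract, but the sum-mod-$d$ function used for $f$ is chosen only for readability and lacks the structure (and security parameter $s(f)$) of the proposed family $f_{k_1,k_2}$. All parameter values are illustrative.

    ```python
    import random

    # Illustrative parameters: n objects, k objects per challenge, digits mod d.
    n, k, d = 30, 4, 10

    def keygen(rng):
        """The human-memorized secret: a random mapping sigma: [n] -> Z_d."""
        return [rng.randrange(d) for _ in range(n)]

    def challenge(rng):
        """Public challenge C in [n]^k, stored and displayed by the
        semi-trusted computer (which never learns sigma)."""
        return tuple(rng.randrange(n) for _ in range(k))

    def respond(sigma, C):
        """Human computes f(sigma(C)); here f is a plain sum mod d
        (a stand-in for the paper's f_{k1,k2})."""
        return sum(sigma[i] for i in C) % d

    rng = random.SystemRandom()
    sigma = keygen(rng)
    C = challenge(rng)
    print("challenge:", C, "response digit:", respond(sigma, C))
    ```

    In this setting an account's password is the sequence of response digits to its challenges; the semi-trusted computer only ever handles the public challenges and the responses, never the mapping itself.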