
    Naturally Rehearsing Passwords

    We introduce quantitative usability and security models to guide the design of password management schemes: systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and the discrete logarithm), we quantify usability by introducing usability assumptions. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and validated through empirical studies. Given rehearsal requirements and a user's visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of their passwords as a measure of the usability of the password scheme. Our usability model leads us to a key observation: password reuse benefits users not only by reducing the number of passwords that the user has to memorize, but, more importantly, by increasing the natural rehearsal rate for each password. We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks. Observing that current password management schemes are either insecure or unusable, we present Shared Cues, a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem to achieve these competing goals.
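    To make the usability measure concrete, the count of extra rehearsals can be sketched as follows; the expanding-interval deadlines, function names, and schedules below are illustrative assumptions, not the paper's exact model.

    ```python
    from bisect import bisect_left

    def extra_rehearsals(deadlines, visit_times):
        """Count rehearsal requirements NOT met by natural logins.

        deadlines: sorted times (days) by which the shared cue must be
        rehearsed again; visit_times: sorted login times for accounts
        that use the cue. The requirement for the interval
        [prev, deadline) is met naturally if some visit falls inside it.
        """
        extra, prev = 0, 0.0
        for deadline in deadlines:
            i = bisect_left(visit_times, prev)
            if not (i < len(visit_times) and visit_times[i] < deadline):
                extra += 1  # no natural rehearsal: an extra one is needed
            prev = deadline
        return extra

    # Expanding-rehearsal deadlines (days) and two natural logins:
    # intervals [0,1) and [1,3) are covered, [3,7) and [7,15) are not.
    print(extra_rehearsals([1, 3, 7, 15], [0.5, 2.0]))  # -> 2
    ```

    Reusing a cue across frequently visited accounts increases the visit density in every interval, which is exactly why reuse lowers the extra-rehearsal count in the paper's usability model.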

    Characterizations on microencapsulated sunflower oil as self-healing agent using In situ polymerization method

    This paper focuses on the characterization of microencapsulated sunflower oil as a self-healing agent. The microcapsules were prepared by in-situ polymerization. The microencapsulated sunflower oil was characterized in terms of microcapsule yield, microcapsule morphology, and Fourier Transform Infrared (FTIR) spectroscopy. The main parameter optimized was the reaction time of the microencapsulation process, in the range of 2, 3, and 4 h. A longer reaction time resulted in a higher yield of microcapsules: the yield increased from 46 to 53% as the reaction time increased from 2 to 4 h. A surface morphology study, together with measurements of microcapsule diameter, was used to analyse the prepared microcapsules. The microcapsules were round in shape with smooth micro-surfaces. The average diameter of the microcapsules after a 4 h reaction time was 70.53 μm, measured before the microcapsules were filtered with solvent and dried in a vacuum oven. After the filtering and drying stage, the diameter of the microcapsules observed under Field Emission Scanning Electron Microscopy (FESEM) was 2.33 μm, which may be due to the removal of suspended oil surrounding the microcapsules. FTIR confirmed the chemical composition of the microcapsules, with sunflower oil as the core and urea formaldehyde (UF) as the shell: stretching peaks at 1537.99 - 1538.90 cm-1 (-H in -CH2), 1235.49 - 1238.77 cm-1 (C-O-C vibrations of esters), and 1017.65 - 1034.11 cm-1 (C-OH stretching vibrations). The results show that sunflower oil can be considered an alternative natural resource for self-healing agents in the microencapsulation process.
The characterization of microencapsulated sunflower oil prepared by in-situ polymerization showed that sunflower oil is a viable self-healing agent to be encapsulated and incorporated in metal coatings.

    Digital image watermarking: its formal model, fundamental properties and possible attacks

    While formal definitions and security proofs are well established in some fields like cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present their development is usually informal, ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state of the art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance exemplified for different image applications. We also define a set of possible attacks using our model, showing different winning scenarios depending on the adversary's capabilities. It is envisaged that, with a proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes.
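    As a rough illustration of the generic model's shape (key generation, embedding, and extraction as separate component functions), here is a toy keyed LSB scheme; the function names and the LSB embedding itself are illustrative assumptions, not the paper's formal model.

    ```python
    import random

    def generate_key(seed, n_pixels, n_bits):
        # Key = secret pixel positions chosen pseudorandomly (illustrative).
        rng = random.Random(seed)
        return rng.sample(range(n_pixels), n_bits)

    def embed(image, message_bits, key):
        # Hide each message bit in the least significant bit of a keyed pixel.
        marked = list(image)
        for pos, bit in zip(key, message_bits):
            marked[pos] = (marked[pos] & ~1) | bit
        return marked

    def extract(image, key):
        # Recover the hidden bits from the keyed pixel positions.
        return [image[pos] & 1 for pos in key]

    image = [120, 7, 200, 45, 33, 90, 18, 64]
    key = generate_key(seed=42, n_pixels=len(image), n_bits=4)
    msg = [1, 0, 1, 1]
    marked = embed(image, msg, key)
    assert extract(marked, key) == msg
    ```

    Separating the three component functions, and confining the secret to the key, is the structural point a formal model makes precise; an adversary who lacks the key cannot tell which pixels carry the mark.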

    Authentication with Distortion Criteria

    In a variety of applications, there is a need to authenticate content that has experienced legitimate editing in addition to potential tampering attacks. We develop one formulation of this problem based on a strict notion of security, and characterize and interpret the associated information-theoretic performance limits. The results can be viewed as a natural generalization of classical approaches to traditional authentication. Additional insights into the structure of such systems and their behavior are obtained by further specializing the results to the Bernoulli and Gaussian cases. The associated systems are shown to be substantially better in terms of performance and/or security than commonly advocated approaches based on data hiding and digital watermarking. Finally, the formulation is extended to obtain efficient layered authentication system constructions. (22 pages, 10 figures)
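    The core decision rule in distortion-based authentication can be caricatured as a threshold test: accept content whose distortion from the reference falls within the level allowed for legitimate editing, reject it otherwise. The mean-squared-error metric and threshold below are illustrative assumptions, not the paper's information-theoretic construction.

    ```python
    def mse(x, y):
        # Mean squared error between two equal-length signals.
        return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

    def authenticate(received, reference, d_max):
        """Accept iff the received content is within distortion d_max of
        the reference: legitimate edits cause small distortion, while
        tampering typically causes large distortion."""
        return mse(received, reference) <= d_max

    reference = [10.0, 12.0, 11.0, 9.0]
    lightly_edited = [10.5, 11.8, 11.2, 9.1]  # small, legitimate edit
    tampered = [10.0, 12.0, 30.0, 9.0]        # large local change

    print(authenticate(lightly_edited, reference, d_max=1.0))  # True
    print(authenticate(tampered, reference, d_max=1.0))        # False
    ```

    The paper's contribution is to characterize the fundamental trade-off such a rule faces between the distortion admitted for legitimate editing and the security against tampering, rather than this particular metric or threshold.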

    Multifactor Authentication Methods: A Framework for Their Comparison and Selection

    There are multiple techniques for users to authenticate themselves in software applications, such as text passwords, smart cards, and biometrics. Two or more of these techniques can be combined to increase security, which is known as multifactor authentication. Systems commonly utilize authentication as part of their access control with the objective of protecting the information stored within them. However, the decision of which authentication technique to implement in a system is often taken by the software development team in charge of it. A poor decision at this step could lead to a fatal security mistake, creating the need for a method that systematizes this task. Thus, this book chapter presents a theoretical decision framework that tackles this issue by providing guidelines based on the evaluated application's characteristics and target context. These guidelines were defined through an extensive action-research methodology applied in collaboration with experts from a multinational software development company.

    Towards Human Computable Passwords

    An interesting challenge for the cryptography community is to design authentication protocols that are so simple that a human can execute them without relying on a fully trusted computer. We propose several candidate authentication protocols for a setting in which the human user can only receive assistance from a semi-trusted computer: a computer that stores information and performs computations correctly but does not provide confidentiality. Our schemes use a semi-trusted computer to store and display public challenges $C_i\in[n]^k$. The human user memorizes a random secret mapping $\sigma:[n]\rightarrow\mathbb{Z}_d$ and authenticates by computing responses $f(\sigma(C_i))$ to a sequence of public challenges, where $f:\mathbb{Z}_d^k\rightarrow\mathbb{Z}_d$ is a function that is easy for the human to evaluate. We prove that any statistical adversary needs to sample $m=\tilde{\Omega}(n^{s(f)})$ challenge-response pairs to recover $\sigma$, for a security parameter $s(f)$ that depends on two key properties of $f$. To obtain our results, we apply the general hypercontractivity theorem to lower bound the statistical dimension of the distribution over challenge-response pairs induced by $f$ and $\sigma$. Our lower bounds apply to arbitrary functions $f$ (not just to functions that are easy for a human to evaluate), and generalize recent results of Feldman et al. As an application, we propose a family of human computable password functions $f_{k_1,k_2}$ in which the user needs to perform $2k_1+2k_2+1$ primitive operations (e.g., adding two digits or remembering $\sigma(i)$), and we show that $s(f) = \min\{k_1+1, (k_2+1)/2\}$. For these schemes, we prove that forging passwords is equivalent to recovering the secret mapping.
Thus, our human computable password schemes can maintain strong security guarantees even after an adversary has observed the user log in to many different accounts. (Comment: fixed bug in definition of $Q^{f,j}$ and modified proofs accordingly)
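    The protocol shape described above can be sketched as follows; the simple sum-mod-d response function is an illustrative stand-in for the paper's $f_{k_1,k_2}$, which is carefully chosen for both human computability and security, and the parameter values are arbitrary.

    ```python
    import random

    n, d, k = 20, 10, 5   # image-set size, digit base, challenge length

    def memorize_secret(seed):
        # The user's secret mapping sigma: [n] -> Z_d, one digit per image.
        rng = random.Random(seed)
        return [rng.randrange(d) for _ in range(n)]

    def respond(sigma, challenge):
        # Simplified human-computable response: sum of secret digits mod d.
        # (Stand-in for the paper's f_{k1,k2}; shows the protocol shape only.)
        return sum(sigma[i] for i in challenge) % d

    sigma = memorize_secret(seed=7)
    challenge = [3, 14, 1, 5, 9]      # public challenge C in [n]^k
    print(respond(sigma, challenge))  # a single digit in Z_d
    ```

    The semi-trusted computer only ever displays the public challenge; the secret mapping exists in the user's head, so even full observation of challenge-response traffic leaves the adversary with the statistical recovery problem the paper lower bounds.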

    Protocol for a Systematic Literature Review on Security-related Research in Ubiquitous Computing

    Context: This protocol is a supplementary document to our review paper, which investigates security-related challenges and solutions that have occurred during the past decade (from January 2003 to December 2013). Objectives: The objective of this systematic review is to identify security-related challenges, security goals, and defenses in ubiquitous computing by answering three main research questions. First, demographic data and trends will be given by analyzing where, when, and by whom the research has been carried out. Second, we will identify security goals that occur in ubiquitous computing, along with the attacks, vulnerabilities, and threats that have motivated the research. Finally, we will examine how addressing security in ubiquitous computing differs from doing so in traditional distributed systems. Method: In order to provide an overview of the security-related challenges, goals, and solutions proposed in the literature, we will use a systematic literature review (SLR). This protocol describes the steps to be taken to identify papers relevant to the objective of our review. The first phase of the method is planning, in which we define the scope of our review by identifying the main research questions, the search procedure, and the inclusion and exclusion criteria. Data extracted from the relevant papers are used in the second phase of the method, data synthesis, to answer our research questions. The review ends by reporting the results. Results and conclusions: The expected results of the review should provide an overview of the attacks, vulnerabilities, and threats that occur in ubiquitous computing and that have motivated the research in the last decade. Moreover, the review will indicate which security goals are gaining significance in the era of ubiquitous computing and provide a categorization of the security-related countermeasures, mechanisms, and techniques found in the literature.
(authors' abstract) Series: Working Papers on Information Systems, Information Business and Operations