2,626 research outputs found

    Universal Neural-Cracking-Machines: Self-Configurable Password Models from Auxiliary Data

    We develop the first universal password model -- a password model that, once pre-trained, can automatically adapt to any password distribution. To achieve this result, the model does not need to access any plaintext passwords from the target set. Instead, it exploits users' auxiliary information, such as email addresses, as a proxy signal to predict the underlying target password distribution. The model uses deep learning to capture the correlation between the auxiliary data of a group of users (e.g., users of a web application) and their passwords. It then exploits those patterns to create a tailored password model for the target community at inference time. No further training steps, targeted data collection, or prior knowledge of the community's password distribution is required. Besides defining a new state of the art for password strength estimation, our model enables any end-user (e.g., system administrators) to autonomously generate tailored password models for their systems without the often unworkable requirement of collecting suitable training data and fitting the underlying password model. Ultimately, our framework enables the democratization of well-calibrated password models to the community, addressing a major challenge in the deployment of password security solutions on a large scale.
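    A minimal sketch of the idea, not the paper's actual architecture: a character-level password model conditioned on a "community embedding" computed from users' email addresses, so the predicted password distribution can be tailored to a target community at inference time. All layer names, sizes, and the encoding scheme below are illustrative assumptions.

```python
# Sketch only: conditioning a next-character password model on auxiliary data
# (email addresses). Sizes, names, and encoding are assumptions, not the paper's.
import torch
import torch.nn as nn

CHARS = list("abcdefghijklmnopqrstuvwxyz0123456789@._-")
STOI = {c: i + 1 for i, c in enumerate(CHARS)}  # 0 is reserved for padding

def encode(s, max_len=32):
    ids = [STOI.get(c, 0) for c in s.lower()[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

class CommunityConditionedPasswordModel(nn.Module):
    def __init__(self, vocab=len(CHARS) + 1, emb=32, hid=64):
        super().__init__()
        self.char_emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.email_proj = nn.Linear(emb, hid)   # email set -> community embedding
        self.rnn = nn.GRU(emb + hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)        # next-character logits

    def community_embedding(self, email_batch):
        # Average the character embeddings of all emails in the community.
        e = self.char_emb(email_batch).mean(dim=(0, 1))
        return torch.tanh(self.email_proj(e))

    def forward(self, password_batch, community):
        x = self.char_emb(password_batch)
        c = community.expand(x.size(0), x.size(1), -1)
        h, _ = self.rnn(torch.cat([x, c], dim=-1))
        return self.out(h)  # per-position next-character logits

# Usage: condition on a (hypothetical) target community's emails at inference,
# with no further training on that community's passwords.
emails = torch.stack([encode("alice@example.org"), encode("bob@example.org")])
pwds = torch.stack([encode("password1"), encode("letmein")])
model = CommunityConditionedPasswordModel()
logits = model(pwds, model.community_embedding(emails))
print(logits.shape)  # (2, 32, vocab): community-tailored character distribution
```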

    Advances in Information Security and Privacy

    With the recent pandemic emergency, many people are spending their days working remotely and have increased their use of digital resources for both work and entertainment. As a result, the amount of digital information handled online has dramatically increased, and we can observe a significant increase in the number of attacks, breaches, and hacks. This Special Issue aims to establish the state of the art in protecting information by mitigating information risks. This objective is reached by presenting both surveys on specific topics and original approaches and solutions to specific problems. In total, 16 papers have been published in this Special Issue.

    On the Gold Standard for Security of Universal Steganography

    While symmetric-key steganography is quite well understood both in the information-theoretic and in the computational setting, many fundamental questions about its public-key counterpart resist persistent attempts to solve them. The computational model for public-key steganography was proposed by von Ahn and Hopper at EUROCRYPT 2004. At TCC 2005, Backes and Cachin gave the first universal public-key stegosystem - i.e., one that works on all channels - achieving security against replayable chosen-covertext attacks (SS-RCCA) and asked whether security against non-replayable chosen-covertext attacks (SS-CCA) is achievable. Later, Hopper (ICALP 2005) provided such a stegosystem for every efficiently sampleable channel, but did not achieve universality. He posed the question whether universality and SS-CCA-security can be achieved simultaneously. No progress on this question has been made in more than a decade. In our work we solve Hopper's problem in an essentially complete manner: as our main positive result we design an SS-CCA-secure stegosystem that works for every memoryless channel. On the other hand, we prove that this result is the best possible in the context of universal steganography. We provide a family of 0-memoryless channels - where the already sent documents have only marginal influence on the current distribution - and prove that no SS-CCA-secure steganography for this family exists in the standard non-look-ahead model. Comment: EUROCRYPT 2018, LLNCS style.
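    For context, a minimal sketch of the classic rejection-sampling embedding over an efficiently sampleable channel, the basic trick behind many universal stegosystems. This is a toy symmetric-key illustration, not the paper's SS-CCA-secure public-key construction; the channel, key, and retry bound below are assumed placeholders.

```python
# Toy rejection-sampling steganography over a sampleable "channel" of covertexts.
# Not the paper's construction; key, channel, and bound are illustrative only.
import hmac, hashlib, random

PHRASES = ["hello", "hi there", "good morning", "how are you",
           "see you soon", "nice weather today", "take care", "cheers"]
CHANNEL = [f"{p} ({i})" for p in PHRASES for i in range(4)]  # 32 covertexts

def sample_channel(rng):
    # Stand-in for sampling the next document of a memoryless covertext channel.
    return rng.choice(CHANNEL)

def doc_bit(key, doc):
    # Keyed hash of the document, truncated to one bit (a pseudorandom function).
    return hmac.new(key, doc.encode(), hashlib.sha256).digest()[0] & 1

def embed_bit(key, bit, rng, max_tries=64):
    # Rejection sampling: draw covertexts until one hashes to the desired bit.
    for _ in range(max_tries):
        d = sample_channel(rng)
        if doc_bit(key, d) == bit:
            return d
    return d  # give up after the bound; causes a (rare) decoding error

def embed(key, bits, rng):
    return [embed_bit(key, b, rng) for b in bits]

def extract(key, docs):
    return [doc_bit(key, d) for d in docs]

rng = random.Random(7)
key = b"shared-secret"
msg = [1, 0, 1, 1, 0]
stegotext = embed(key, msg, rng)
print(stegotext)
print(extract(key, stegotext) == msg)  # True (with overwhelming probability)
```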

    Passwords and the evolution of imperfect authentication

    Theory on passwords has lagged practice, where large providers use back-end smarts to survive with imperfect technology. This is the author accepted manuscript. The final version is available from ACM via http://dx.doi.org/10.1145/269939

    Optimisation of John the Ripper in a clustered Linux environment

    To aid system administrators in enforcing strict password policies, password cracking tools such as Cisilia (C.I.S.I.ar, 2003) and John the Ripper (Solar Designer, 2002) have been employed as software utilities to look for weak passwords. John the Ripper (JtR) attempts to crack passwords by using a dictionary, brute-force or other mode of attack. The computational intensity of cracking passwords has led to the utilisation of parallel-processing environments to increase the speed of the password-cracking task. Parallel-processing environments can consist of either single systems with multiple processors, or a collection of separate computers working together as a single, logical computer system; both of these configurations allow operations to run concurrently. This study aims to optimise and compare the execution of JtR on a pair of Beowulf clusters, which are collections of computers configured to run in a parallel manner. Each of the clusters will run the Rocks cluster distribution, which is a Linux RedHat based cluster toolkit. An implementation of the Message Passing Interface (MPI), MPICH, will be used for inter-node communication, allowing the password cracker to run in a parallel manner. Experiments were performed to test the reliability of cracking a single set of password samples on both a 32-bit and a 64-bit Beowulf cluster, comprised of Intel Pentium and AMD64 Opteron processors respectively. These experiments were also used to compare the effectiveness of JtR's brute-force attack against its dictionary attack. The results from this thesis may provide assistance to organisations in enforcing strong password policies on user accounts through the use of computer clusters, and also examine the possibility of using JtR as a tool to reliably measure password strength.
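    A minimal sketch of the parallelisation idea, not John the Ripper itself: a dictionary attack whose wordlist is split across MPI ranks, in the spirit of running a cracker over MPICH on a Beowulf cluster. The hash type, wordlist, target hash, and script name are toy placeholders.

```python
# Sketch only: an MPI-parallel dictionary attack with mpi4py, illustrating how
# cracking work can be spread over cluster nodes. Run with e.g.
# `mpiexec -n 4 python this_script.py` (script name is hypothetical).
import hashlib
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Toy stand-ins for a real wordlist file and a stolen hash (MD5 of "dragon").
WORDLIST = ["123456", "password", "letmein", "qwerty", "dragon", "monkey"]
TARGET = hashlib.md5(b"dragon").hexdigest()

# Each rank takes an interleaved slice of the dictionary (rank, rank+size, ...).
hit = None
for word in WORDLIST[rank::size]:
    if hashlib.md5(word.encode()).hexdigest() == TARGET:
        hit = word
        break

# Gather every rank's result on rank 0 and report the first match, if any.
hits = comm.gather(hit, root=0)
if rank == 0:
    found = next((h for h in hits if h is not None), None)
    print("cracked:", found if found else "not in wordlist")
```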

    A Task Allocation Algorithm with Weighted Average Velocity Based on Online Active Period

    Some complex scientific calculations require very large computing resources. To a certain extent, improvements in computing hardware have met the needs of many such computations, but many more complex calculations still cannot be solved effectively. Volunteer computing is a computational method that divides a complex computing task into simple subtasks and collects the results computed by volunteers' resources to solve those subtasks. In this process, the task assignment module is an extremely important part of the whole computing platform. Many existing task allocation algorithms (TAAs) group volunteer computers by similar conditions. The TAA used in this work groups computers with similar online active periods, and computation efficiency is improved by using the weighted average velocity as a parameter. The experimental results showed that a TAA with weighted average velocity based on the online active period can effectively improve the performance of a volunteer computing platform. Keywords: Volunteer computing; Task allocation algorithm; Weighted average velocity; Online active period
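    A minimal sketch of the idea under assumed details (the paper's exact weighting and grouping rules may differ): volunteers are grouped by similar online active periods, each volunteer's weighted average velocity favours recent measurements, and subtasks are dispatched to the currently active, fastest volunteers.

```python
# Sketch only: grouping volunteers by online active period and ranking them by
# a weighted average velocity. The weighting scheme below is an assumption.
from dataclasses import dataclass, field

@dataclass
class Volunteer:
    name: str
    active_hours: range                          # e.g. range(18, 23) = online 18:00-23:00
    speeds: list = field(default_factory=list)   # measured speeds, oldest first

    def weighted_avg_velocity(self):
        # Later measurements get linearly larger weights (assumed scheme).
        if not self.speeds:
            return 0.0
        weights = range(1, len(self.speeds) + 1)
        return sum(w * v for w, v in zip(weights, self.speeds)) / sum(weights)

def group_by_active_period(volunteers, hour):
    # Keep only volunteers whose active period covers the current hour.
    return [v for v in volunteers if hour in v.active_hours]

def assign_tasks(volunteers, tasks, hour):
    # Dispatch subtasks to currently active volunteers, fastest first.
    active = sorted(group_by_active_period(volunteers, hour),
                    key=lambda v: v.weighted_avg_velocity(), reverse=True)
    return {t: active[i % len(active)].name for i, t in enumerate(tasks)} if active else {}

pool = [Volunteer("pc-a", range(9, 18), [2.0, 2.5, 3.0]),
        Volunteer("pc-b", range(18, 24), [5.0, 4.0]),
        Volunteer("pc-c", range(9, 18), [1.0, 1.5])]
print(assign_tasks(pool, ["sub1", "sub2", "sub3"], hour=10))
# {'sub1': 'pc-a', 'sub2': 'pc-c', 'sub3': 'pc-a'} -- pc-b is offline at 10:00
```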

    The Enhancement of Graduate Digital Forensics Education via the DC3 Digital Forensics Challenge
