
    Password Cracking and Countermeasures in Computer Security: A Survey

    With the rapid development of internet technologies, social networks, and related areas, user authentication has become increasingly important for protecting user data. Password authentication is one of the most widely used methods for authenticating legitimate users and defending against intruders. Many password cracking methods have been developed over the years, and countermeasures against them are continually being designed, yet there has been little survey work covering this research area. This paper gives a brief review of password cracking methods, the important technologies behind them, and the countermeasures against password cracking, which are usually designed at two stages: the password design stage (e.g. user education, dynamic passwords, use of tokens, computer-generated passwords) and after design (e.g. reactive password checking, proactive password checking, password encryption, access control). The main objective of this work is to offer novice IT security professionals and general audiences some knowledge about computer security and password cracking, and to promote the development of this area.
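
    Of the countermeasures named above, proactive password checking is the easiest to make concrete: a candidate password is vetted against policy rules and a dictionary of common passwords before it is accepted. A minimal sketch follows; the length threshold, character-class rule, and tiny word list are illustrative assumptions, not taken from the survey.

```python
# Minimal proactive password checker: weak passwords are rejected at
# design time, before they ever enter the system. The thresholds and
# the dictionary below are illustrative assumptions.
import string

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def check_password(candidate: str, min_length: int = 10) -> list[str]:
    """Return a list of policy violations; an empty list means accepted."""
    problems = []
    if len(candidate) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if candidate.lower() in COMMON_PASSWORDS:
        problems.append("appears in a common-password dictionary")
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    if sum(any(c in cls for c in candidate) for cls in classes) < 3:
        problems.append("uses fewer than 3 character classes")
    return problems

if __name__ == "__main__":
    for pw in ["letmein", "Tr0ub4dor&3"]:
        issues = check_password(pw)
        print(pw, "->", "accepted" if not issues else issues)
```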

    Investigations into the feasibility of an on-line test methodology

    This thesis aims to understand how information coding and the protocol that it supports can affect the characteristics of electronic circuits. More specifically, it investigates an on-line test methodology called IFIS (If it Fails It Stops) and its impact on the design, implementation and subsequent characteristics of circuits intended for application-specific IC (ASIC) technology. The first study investigates the influences of information coding and protocol on the characteristics of IFIS systems. The second study investigates methods of circuit design applicable to IFIS cells and identifies the technique possessing the characteristics most suitable for on-line testing. The third study investigates the characteristics of a 'real-life' commercial UART re-engineered using the techniques resulting from the previous two studies. The final study investigates the effects of the halting properties endowed by the protocol on failure diagnosis within IFIS systems. The outcome of this work is an identification and characterisation of the factors that influence behaviour, implementation costs and the ability to test and diagnose IFIS designs.
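
    The abstract does not spell out the IFIS protocol itself, but its halt-on-failure idea can be modelled behaviourally: each cell checks the code word it receives and, on a violation, stops producing output so that the failure cannot propagate downstream. The toy model below is a sketch under that assumption; the parity check merely stands in for whatever information coding the real IFIS cells use.

```python
# Behavioural toy model of a halt-on-failure ("If it Fails It Stops")
# cell chain. The even-parity code here is an assumption standing in
# for the real IFIS information coding.

def parity_encode(value: int) -> tuple[int, int]:
    """Attach an even-parity bit to an 8-bit value."""
    return value, bin(value).count("1") % 2

class IFISCell:
    def __init__(self, name: str):
        self.name = name
        self.halted = False

    def step(self, word: tuple[int, int] | None) -> tuple[int, int] | None:
        if self.halted or word is None:
            return None  # a halted cell emits nothing: failures cannot propagate
        value, parity = word
        if bin(value).count("1") % 2 != parity:
            self.halted = True  # code violation detected: stop
            print(f"{self.name}: code violation, halting")
            return None
        return parity_encode(value + 1 & 0xFF)  # some computation, re-encoded

chain = [IFISCell(f"cell{i}") for i in range(3)]
word = parity_encode(0x2A)
word = (word[0] ^ 0x01, word[1])  # inject a single-bit fault
for cell in chain:
    word = cell.step(word)
print("chain output:", word)  # None: the chain stopped at the first failure
```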

    Efficient modular arithmetic units for low power cryptographic applications

    The demand for high security in energy-constrained devices such as mobiles and PDAs is growing rapidly. This leads to the need for efficient designs of cryptographic algorithms that offer data integrity, authentication, non-repudiation, and confidentiality of the encrypted data and communication channels. Public-key cryptography is an ideal choice for data integrity, authentication, and non-repudiation, whereas private-key cryptography ensures the confidentiality of the transmitted data. The latter has an extremely high encryption speed, but it has certain limitations that make it unsuitable for some applications. Numerous public-key cryptographic algorithms are available in the literature, comprising modular arithmetic modules such as modular addition, multiplication, inversion, and exponentiation. Recently, numerous cryptographic algorithms based on modular arithmetic have been proposed that are scalable, perform word-based operations, and are efficient in various respects. The modular arithmetic modules play a crucial role in the overall performance of the cryptographic processor; hence, better results can be obtained by designing efficient arithmetic modules such as modular addition, multiplication, exponentiation, and squaring. This thesis is organized into three papers. The first paper describes the efficient implementation of modular arithmetic units and the application of these modules in the International Data Encryption Algorithm (IDEA). The second paper describes the implementation of IDEA using existing techniques and using the proposed efficient modular units. The third paper describes the fault-tolerant design of a modular unit with online self-checking capability.
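
    The modular units mentioned here map directly onto IDEA's three group operations on 16-bit words: XOR, addition modulo 2^16, and multiplication modulo 2^16 + 1, where the all-zero word represents 2^16. The sketch below is a software reference model of those operations only; the thesis itself concerns their hardware implementation.

```python
# Software reference model of IDEA's three 16-bit group operations.
# A hardware design would realize these as dedicated modular units.

MASK16 = 0xFFFF
MOD_MUL = 0x10001  # 2**16 + 1 is prime, so the nonzero words form a group

def add_mod(a: int, b: int) -> int:
    """Addition modulo 2**16."""
    return (a + b) & MASK16

def mul_mod(a: int, b: int) -> int:
    """Multiplication modulo 2**16 + 1, with 0 representing 2**16."""
    a = a or 0x10000
    b = b or 0x10000
    return (a * b) % MOD_MUL & MASK16

def xor16(a: int, b: int) -> int:
    return a ^ b

assert add_mod(0xFFFF, 1) == 0
assert mul_mod(5, 3) == 15
assert mul_mod(0, 0) == 1  # 2**16 * 2**16 = 2**32 = 1 (mod 2**16 + 1)
```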

    Performance and area optimization for reliable FPGA-based shifter design

    This thesis addresses the problem of implementing reliable FPGA-based shifters. An FPGA-based design requires optimization between performance and resource utilization, and an effective verification methodology to validate design behavior. The FPGA-based implementation of a large shifter design is restricted by an I/O resource bottleneck. The verification of the design behavior presents a further challenge due to the 'black-box' nature of FPGAs. To tackle these design challenges, we propose a novel approach to implement FPGA-based shifters. The proposed design alleviates the I/O bottleneck while significantly reducing the logic resources required. This is achieved with a minimal increase in the design delay. The design is seamlessly scalable to a multi-FPGA chip setup to improve performance or to implement larger shifters. It is configured using assertion checkers for efficient design verification. The assertion-based design is further optimized to alleviate the performance degradation caused by the assertion checkers.
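
    As a behavioural illustration of configuring a design with assertion checkers, the sketch below pairs a barrel-rotator model with a simple run-time invariant check. The specific assertion, that a rotation preserves the population count, is an assumption chosen for illustration; it is not a checker from the thesis.

```python
# Behavioural model of a barrel rotator with an attached assertion checker.
# In the FPGA flow described above, such checkers would be synthesized into
# the design to observe its 'black-box' behavior at run time.

WIDTH = 32
MASK = (1 << WIDTH) - 1

def rotate_left(value: int, amount: int) -> int:
    amount %= WIDTH
    return ((value << amount) | (value >> (WIDTH - amount))) & MASK

def checked_rotate(value: int, amount: int) -> int:
    result = rotate_left(value, amount)
    # Assertion checker: a rotation must preserve the number of set bits.
    assert bin(result).count("1") == bin(value).count("1"), \
        f"assertion failed for value={value:#x}, amount={amount}"
    return result

if __name__ == "__main__":
    print(hex(checked_rotate(0x8000_0001, 4)))  # -> 0x18
```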

    Towards cooperative software verification with test generation and formal verification

    There are two major methods for software verification: testing and formal verification. To increase our confidence in software on a large scale, we require tools that apply these methods automatically and reliably. Testing with manually written tests is widespread, but for automatically generated tests for the C programming language there is no standardized format. This makes the use and comparison of automated test generators expensive. In addition, testing can never provide full confidence in software—it can show the presence of bugs, but not their absence. In contrast, formal verification uses established, standardized formats and can prove the absence of bugs. Unfortunately, even successful formal-verification techniques suffer from different weaknesses. Compositions of multiple techniques try to combine the strengths of complementing techniques, but such combinations are often designed as cohesive, monolithic units. This makes them inflexible and it is costly to replace components. To improve on this state of the art, we work towards an off-the-shelf cooperation between verification tools through standardized exchange formats. First, we work towards standardization of automated test generation for C. We increase the comparability of test generators through a common benchmarking framework and reliable tooling, and provide means to reliably compare the bug-finding capabilities of test generators and formal verifiers. Second, we introduce new concepts for the off-the-shelf cooperation between verifiers (both test generators and formal verifiers). We show the flexibility of these concepts through an array of combinations and through an application to incremental verification. We also show how existing, strongly coupled techniques in software verification can be decomposed into stand-alone components that cooperate through clearly defined interfaces and standardized exchange formats. All our work is backed by rigorous implementation of the proposed concepts and thorough experimental evaluations that demonstrate the benefits of our work. Through these means we are able to improve the comparability of automated verifiers, allow the cooperation between a large array of existing verifiers, increase the effectiveness of software verification, and create new opportunities for further research on cooperative verification
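
    One concrete instance of such a standardized exchange format is an XML test case that records the sequence of input values a test generator derived, so that any conforming test executor can replay it against the program. The sketch below emits such a file; the element names and DOCTYPE follow my recollection of the Test-Comp test-format and should be checked against the official specification.

```python
# Emit a test case as a standalone XML file, in the spirit of the
# standardized exchange formats discussed above. The exact schema
# (element names, DOCTYPE) is an assumption modeled on the
# Test-Comp test-format.

def write_testcase(path: str, input_values: list[str]) -> None:
    lines = ['<?xml version="1.0" encoding="UTF-8" standalone="no"?>',
             '<!DOCTYPE testcase PUBLIC "+//IDN sosy-lab.org//DTD test-format '
             'testcase 1.1//EN" "https://sosy-lab.org/test-format/testcase-1.1.dtd">',
             "<testcase>"]
    # One <input> element per value returned by the program's input
    # functions, in the order the test generator observed them.
    lines += [f"  <input>{v}</input>" for v in input_values]
    lines.append("</testcase>")
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")

write_testcase("testcase-1.xml", ["42", "-1", "0"])
```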

    Hybrid Modular Redundancy: Exploring Modular Redundancy Approaches in RISC-V Multi-Core Computing Clusters for Reliable Processing in Space

    Space Cyber-Physical Systems (S-CPS) such as spacecraft and satellites strongly rely on the reliability of onboard computers to guarantee the success of their missions. Relying solely on radiation-hardened technologies is extremely expensive, and developing inflexible architectural and microarchitectural modifications to introduce modular redundancy within a system leads to significant area increase and performance degradation. To mitigate the overheads of traditional radiation hardening and modular redundancy approaches, we present a novel Hybrid Modular Redundancy (HMR) approach, a redundancy scheme that features a cluster of RISC-V processors with a flexible on-demand dual-core and triple-core lockstep grouping of computing cores with runtime split-lock capabilities. Further, we propose two recovery approaches, software-based and hardware-based, trading off performance and area overhead. Running at 430 MHz, our fault-tolerant cluster achieves up to 1160 MOPS on a matrix multiplication benchmark when configured in non-redundant mode and 617 and 414 MOPS in dual and triple mode, respectively. A software-based recovery in triple mode requires 363 clock cycles and occupies 0.612 mm², representing a 1.3% area overhead over a non-redundant 12-core RISC-V cluster. As a high-performance alternative, a new hardware-based method provides rapid fault recovery in just 24 clock cycles and occupies 0.660 mm², a ~9.4% area overhead over the baseline non-redundant RISC-V cluster. The cluster is also enhanced with split-lock capabilities to enter one of the redundant modes with minimum performance loss, allowing execution of a mission-critical or a performance section, with <400 clock cycles overhead for entry and exit. The proposed system is the first to integrate these functionalities on an open-source RISC-V-based compute device, enabling finely tunable reliability vs. performance trade-offs.
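
    The core mechanism behind the triple-core lockstep mode is majority voting over redundant outputs, with any mismatch triggering recovery of the disagreeing core. The sketch below is a generic behavioural illustration of that mechanism, not the HMR microarchitecture itself; the voter and recovery hook are illustrative assumptions.

```python
# Behavioural sketch of triple-modular-redundant (TMR) lockstep voting.
# Three cores execute the same step; a majority vote masks one faulty core.

from collections import Counter

def vote(outputs: list[int]) -> tuple[int, list[int]]:
    """Return the majority value and the indices of disagreeing cores."""
    majority, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: uncorrectable disagreement")
    dissenters = [i for i, v in enumerate(outputs) if v != majority]
    return majority, dissenters

def lockstep_step(cores, state):
    outputs = [core(state) for core in cores]
    value, dissenters = vote(outputs)
    for i in dissenters:
        # Recovery hook: e.g. re-synchronize the faulty core's architectural
        # state from the voted result (software- or hardware-managed).
        print(f"core {i} disagreed; resynchronizing")
    return value

def healthy(s): return s + 1
def faulty(s): return s ^ 0x80  # bit-flip fault model

print(lockstep_step([healthy, healthy, faulty], 10))  # -> 11, core 2 resynced
```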

    Robust design of deep-submicron digital circuits

    The design of circuits that operate in critical environments, such as the control-command systems of nuclear power plants, aircraft, and space applications, is becoming a great challenge with technology scaling, as the probability of faults in digital circuits increases. This thesis is the result of a CIFRE cooperation between Électricité de France (EDF) R&D and Télécom ParisTech. EDF, one of the largest energy producers in the world, operates numerous nuclear power plants whose control-command systems are based on electronic devices; owing to the criticality of the nuclear environment, these devices must be certified according to industrial standards such as IEC 62566, IEC 60987, and IEC 61513. In particular, the use of programmable devices such as FPGAs can be considered a challenge, since the functionality of the device is defined by the designer only after its physical design. In the aforementioned standards, reliability is treated as a main consideration, and methods to analyze and improve circuit reliability are highly required. This dissertation introduces methods to analyze and to improve the reliability of circuits in order to facilitate their qualification according to these standards. Concerning reliability analysis, we first present a fault-injection based tool used to assess the reliability of digital circuits. Next, we introduce a method to evaluate the reliability of circuits that takes into account the ability of a given application to tolerate errors. Concerning reliability improvement techniques, two different strategies to selectively harden a circuit are first proposed. Finally, a method to automatically partition a TMR design based on a given reliability requirement is introduced.
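
    To give a flavour of the fault-injection based analysis mentioned above, the sketch below estimates the sensitivity of a small gate-level circuit by Monte Carlo injection of single bit-flips on internal nets. The circuit (a full adder), the fault model, and the trial count are illustrative assumptions, not elements of the dissertation's tool.

```python
# Monte Carlo fault injection on a toy gate-level netlist: estimate the
# probability that a random single bit-flip on an internal net changes
# the circuit output. A rate below 1.0 reflects logical masking.
import random

def full_adder(a, b, cin, flip=None):
    """Gate-level full adder; 'flip' names one internal net to invert."""
    def net(name, value):
        return value ^ 1 if name == flip else value
    x1 = net("x1", a ^ b)
    a1 = net("a1", a & b)
    a2 = net("a2", x1 & cin)
    s = net("s", x1 ^ cin)
    cout = net("cout", a1 | a2)
    return s, cout

NETS = ["x1", "a1", "a2", "s", "cout"]

def estimate_failure_rate(trials=10_000):
    failures = 0
    for _ in range(trials):
        a, b, cin = (random.randint(0, 1) for _ in range(3))
        golden = full_adder(a, b, cin)
        if full_adder(a, b, cin, flip=random.choice(NETS)) != golden:
            failures += 1
    return failures / trials

print(f"observed failure rate: {estimate_failure_rate():.3f}")
```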

    DESIGN FOR TESTABILITY TECHNIQUES FOR VIDEO CODING SYSTEMS

    Motion estimation (ME) algorithms are used in various video coding systems. Focusing on the testing of ME in a video coding system, this work presents an error detection and data recovery (EDDR) design, based on the residue-and-quotient (RQ) code, that is embedded into the ME for video coding testing applications. An error in the processing elements (PEs), the key components of an ME, can be detected and recovered effectively using the proposed EDDR design. This paper therefore describes a novel testing scheme for motion estimation, whose key aim is to offer high reliability for the motion estimation architecture. Experimental results show that the design achieves 100% fault coverage, and its main advantages are minimal performance degradation, small hardware overhead, and the benefit of at-speed testing.
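
    The residue-and-quotient code encodes a value N with a modulus m as the pair (Q, R) = (N div m, N mod m); a corrupted result is detected by re-deriving the pair, and, since N = Q*m + R, the data is recovered from an intact copy of the code. The toy sketch below wraps one PE operation with detection and recovery; the modulus choice, the golden-value prediction, and the single-fault model are illustrative assumptions.

```python
# Toy model of residue-and-quotient (RQ) coding for error detection and
# data recovery (EDDR) around a processing element. Choosing m = 2**k
# makes the quotient/residue split a cheap shift/mask in hardware.

M = 1 << 6  # modulus, an illustrative choice

def rq_encode(n: int) -> tuple[int, int]:
    return n // M, n % M          # (quotient Q, residue R)

def rq_decode(q: int, r: int) -> int:
    return q * M + r              # N = Q*m + R

def eddr_check(result: int, q_pred: int, r_pred: int) -> int:
    """Compare a PE result against its predicted RQ code; recover on error."""
    if rq_encode(result) != (q_pred, r_pred):
        print("error detected; recovering from RQ code")
        return rq_decode(q_pred, r_pred)
    return result

# The PE computes a sum of absolute differences; the RQ code of the
# correct result is predicted independently (here from a golden value).
golden = sum(abs(a - b) for a, b in [(3, 7), (10, 2), (5, 5)])
q, r = rq_encode(golden)
faulty_result = golden ^ 0x10     # inject a single-bit fault in the PE output
print("final:", eddr_check(faulty_result, q, r))  # detects and recovers 12
```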