7 research outputs found

    Analysis of the Use of a High-Performance Implementation of SHA-512 Hash Functions for the Development of Software Systems / АНАЛІЗ ВИКОРИСТАННЯ ВИСОКОЕФЕКТИВНОЇ РЕАЛІЗАЦІЇ ФУНКЦІЙ ХЕШУВАННЯ SHA-512 ДЛЯ РОЗРОБКИ ПРОГРАМНИХ СИСТЕМ

    Hash functions play both an applied and a fundamental role in the modern cryptographic protection of programs and data. Typically, such functions take input data of finite length and produce a small, fixed-size digest. With the avalanche-like growth in the volume of data requiring fast verification, hash-function throughput is becoming a key factor. According to research published in the technical literature, one of the fastest implementations of SHA-512 (a member of the SHA-2 family) provides a throughput of over 1550 Mbit/s, although faster functions exist, such as Whirlpool, whose throughput exceeds 4896 Mbit/s. Many papers discussing hardware implementations of SHA-512 have been published. The implementations considered are usually aimed either at high throughput or at efficient use of computing resources. In general, it is impossible to know in advance which choice of functional design for a given component will best achieve a specific design goal. After implementing and running the algorithm with different components, it was possible to carry out a systematic analysis and comment on the quality of each implementation, depending on whether the goal is high throughput or low overall computational cost. We systematized the results of all the computations performed, analyzed each implementation separately, and gave a detailed description of the message expansion and compression stages. The hash-update stage is likewise mentioned at various points, but its implementation is not always clearly defined. One reason for omitting the details of the preceding stage and of the hash-update stage is the assumption that these steps will be implemented so as to minimize their negative impact on performance. The data-mixing function considered in the article does not claim the highest throughput of the algorithm, but it proved sufficiently resistant to third-party decoding. Summarizing our research in the field of cryptographic protection by various methods, we can state that the application software developed on the basis of the SHA-512 algorithm meets the following technical requirements: verification of the integrity of programs and data, and a sufficiently reliable authentication algorithm.
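
    As an illustration of the integrity-verification use case described in this abstract, the following minimal Python sketch computes a SHA-512 digest of a file and compares it against a previously recorded reference value. The file path, chunk size and reference digest are placeholder assumptions, and the sketch uses the standard hashlib routines rather than the high-performance implementation analyzed in the paper.

```python
import hashlib
import hmac

def sha512_digest(path: str, chunk_size: int = 1 << 16) -> str:
    """Compute the SHA-512 digest of a file, reading it in chunks."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path: str, expected_hex: str) -> bool:
    """Return True if the file's SHA-512 digest matches a recorded reference value."""
    return hmac.compare_digest(sha512_digest(path), expected_hex)
```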

    A solution for strong authentication in sensor-based healthcare environments / Uma solução de autenticação forte para ambientes de saúde baseados em sensores

    Medical devices equipped with network interfaces, classified as sensors, transmit sensitive information over the network. This information needs to be secured by applying security mechanisms in order to mitigate vulnerabilities. Because of these vulnerabilities, strong means of authentication have been investigated. However, existing strong authentication solutions require user interaction and do not respect the user's individuality. This paper proposes a strong authentication solution for sensor-based healthcare environments in order to support the authentication process of patients with special needs. The authentication is based on a combination of two factors acquired from sensors in a healthcare environment: biometrics and location. In addition, the standards provided by ISO/IEC 27799 and SBIS were followed for safe development.
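
    The combination of a biometric factor and a location factor described above can be sketched as a simple decision function. The thresholds, the similarity score and the notion of a permitted zone in the Python sketch below are illustrative assumptions, not the authentication scheme specified in the paper.

```python
from dataclasses import dataclass
import math

@dataclass
class Reading:
    biometric_score: float   # similarity to the enrolled template, in [0, 1]
    latitude: float
    longitude: float

def within_zone(lat: float, lon: float, zone_lat: float, zone_lon: float,
                radius_m: float = 50.0) -> bool:
    """Rough distance check against a permitted care zone (equirectangular approximation)."""
    dx = math.radians(lon - zone_lon) * math.cos(math.radians(zone_lat))
    dy = math.radians(lat - zone_lat)
    return 6_371_000 * math.hypot(dx, dy) <= radius_m

def authenticate(r: Reading, zone_lat: float, zone_lon: float,
                 biometric_threshold: float = 0.9) -> bool:
    """Grant access only when both factors, biometrics and location, agree."""
    return (r.biometric_score >= biometric_threshold
            and within_zone(r.latitude, r.longitude, zone_lat, zone_lon))
```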

    Data Integrity Protection For Security in Industrial Networks

    Modern industrial systems are increasingly based on computer networks. Network-based control systems connect the devices at the field level of industrial environments together and to the devices at the upper levels for monitoring, configuration and management purposes. Contrary to traditional industrial networks, which are considered stand-alone and proprietary networks, modern industrial networks are highly connected systems which use open protocols and standards at different levels. This new structure of industrial systems has made them vulnerable to security attacks. Among the various security needs of computer networks, data integrity protection is the major issue in industrial networks. Any unauthorized modification of information during transmission could result in significant damage in industrial environments. In this thesis, the security needs of industrial environments are considered first. The need for security in industrial systems, the challenges of security in these systems and the security status of protocols used in industrial networks are presented. Furthermore, the hardware implementation of the Secure Hash Algorithm (SHA), which is used in security protocols for data integrity protection, is the main focus of this thesis. A scheme has been proposed for the implementation of the SHA-1 and SHA-512 hash functions on FPGAs with fault detection capability. The proposed scheme is based on time redundancy and pipelining and is capable of detecting permanent as well as transient faults. The implementation results of the proposed scheme on Xilinx FPGAs show small area and timing overhead compared to the original implementation without fault detection. Moreover, the implementation of SHA-1 and SHA-512 on wireless sensor boards has been presented, taking into account their memory usage and execution time. There is an improvement in the execution time of the proposed implementation compared to previous works.
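
    The fault-detection scheme in the thesis is a hardware design based on time redundancy and pipelining. As a rough software analogue of the time-redundancy idea only, the sketch below computes the SHA-512 digest twice and flags a mismatch, which would indicate a transient fault during the computation; the function and exception names are illustrative.

```python
import hashlib

class TransientFaultSuspected(Exception):
    """Raised when two redundant digest computations disagree."""

def sha512_with_recompute(data: bytes) -> bytes:
    """Compute SHA-512 twice (time redundancy) and compare the results.

    In the hardware scheme the recomputation is overlapped by pipelining;
    here it is simply run sequentially for illustration.
    """
    first = hashlib.sha512(data).digest()
    second = hashlib.sha512(data).digest()
    if first != second:
        raise TransientFaultSuspected("redundant SHA-512 computations disagree")
    return first
```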

    Secure Remote Control and Configuration of FPX Platform in Gigabit Ethernet Environment

    Because of its flexibility and high performance, reconfigurable logic implemented on the Field-programmable Port Extender (FPX) is well suited for network processing tasks such as packet classification, filtering and intrusion detection. This project focuses on two key aspects of the FPX system. One is providing a Gigabit Ethernet interface by designing logic for an FPGA located on a line card. Address Resolution Protocol (ARP) packets are handled in hardware, and Ethernet frames are processed and transformed into cells suitable for standard FPX applications. The other effort is to provide a secure channel that enables remote control and configuration of the FPX system over the public Internet. A suite of security hardware cores was implemented, including the Advanced Encryption Standard (AES), Triple Data Encryption Standard (3DES), Hashed Message Authentication Code (HMAC), Message Digest Version 5 (MD5) and Secure Hash Algorithm (SHA-1). An architecture and an associated protocol have been developed which provide a secure communication channel between a control console and a hardware-based reconfigurable network node. This solution is unique in that it does not require a software process to run on the network stack, so it offers higher performance and prevents the node from being hacked through traditional vulnerabilities found in common operating systems. The mechanism can be applied to the design and implementation of remotely managed FPX systems. A hardware module called the Secure Control Packet Processor (SCPP) has been designed for an FPX-based firewall. It utilizes AES or 3DES in Error Propagation Block Chaining (EPBC) mode to ensure data confidentiality and data integrity. There is also an authentication engine that uses HMAC to generate the acknowledgments. The system can protect the FPX system against attacks that may be sent over the control and configuration channel. Based on this infrastructure, an enhanced protocol is presented that provides higher efficiency and can defend against replay attacks. To support it, a control cell encryption module was designed and tested in the FPX system.
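
    The paper pairs encryption with an HMAC-based authentication engine for control traffic. Purely as a software illustration of that idea, the sketch below tags and verifies a control packet with HMAC-SHA-1 over a shared key; the packet layout, key handling and function names are assumptions, not the SCPP protocol itself.

```python
import hashlib
import hmac

def tag_control_packet(key: bytes, payload: bytes) -> bytes:
    """Append an HMAC-SHA-1 tag to a control packet payload."""
    return payload + hmac.new(key, payload, hashlib.sha1).digest()

def verify_control_packet(key: bytes, packet: bytes) -> bytes:
    """Check the trailing HMAC tag and return the payload, or raise on failure."""
    payload, tag = packet[:-20], packet[-20:]     # SHA-1 digests are 20 bytes
    expected = hmac.new(key, payload, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("control packet failed HMAC verification")
    return payload
```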

    Comparative Analysis of the Hardware Implementations of Hash Functions SHA-1 and SHA-512

    Hash functions are among the most widespread cryptographic primitives, and are currently used in multiple cryptographic schemes and security protocols such as IPSec and SSL. In this paper, we compare and contrast hardware implementations of the newly proposed draft hash standard SHA-512 and the old standard, SHA-1. In our implementation based on Xilinx Virtex FPGAs, the throughput of SHA-512 is 670 Mbit/s, compared to 530 Mbit/s for SHA-1. Our analysis shows that the newly proposed hash standard is not only orders of magnitude more secure, but also significantly faster than the old standard. The basic iterative architectures of both hash functions are faster than the basic iterative architectures of symmetric-key ciphers with equivalent security.
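
    The throughput figures above refer to FPGA implementations. As an informal point of comparison only, the following sketch measures the software throughput of SHA-1 and SHA-512 on the local machine with hashlib; the buffer size and iteration count are arbitrary choices, and the results are not comparable to the hardware numbers in the paper.

```python
import hashlib
import time

def throughput_mbit_s(name: str, size: int = 1 << 20, rounds: int = 200) -> float:
    """Measure hashing throughput in Mbit/s for the given hashlib algorithm."""
    data = b"\x00" * size
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(name, data).digest()
    elapsed = time.perf_counter() - start
    return (size * rounds * 8) / (elapsed * 1e6)

if __name__ == "__main__":
    for algo in ("sha1", "sha512"):
        print(f"{algo}: {throughput_mbit_s(algo):.0f} Mbit/s")
```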

    Analysis and Test of the Effects of Single Event Upsets Affecting the Configuration Memory of SRAM-based FPGAs

    SRAM-based FPGAs are increasingly relevant in a growing number of safety-critical application fields, ranging from automotive to aerospace. These application fields are characterized by a harsh radiation environment that can cause the occurrence of Single Event Upsets (SEUs) in digital devices. These faults have particularly adverse effects on SRAM-based FPGA systems because not only can they temporarily affect the behaviour of the system by changing the contents of flip-flops or memories, but they can also permanently change the functionality implemented by the system itself, by changing the content of the configuration memory. Designing safety-critical applications requires accurate methodologies to evaluate the system's sensitivity to SEUs as early as possible during the design process. Moreover, it is necessary to detect the occurrence of SEUs during the system's lifetime. For this purpose, test patterns should be generated during the design process and then applied to the inputs of the system during its operation. In this thesis we propose a set of software tools that can be used by designers of SRAM-based FPGA safety-critical applications to assess the sensitivity of the system to SEUs and to generate test patterns for in-service testing. The main feature of these tools is that they implement a model of SEUs affecting the configuration bits controlling the logic and routing resources of an FPGA device, a model that has been demonstrated to be much more accurate than the classical stuck-at and open/short models commonly used in the analysis of faults in digital devices. By taking this accurate fault model into account, the proposed tools are more accurate than similar academic and commercial tools available today for the analysis of faults in digital circuits, which do not take into account the features of the FPGA technology. In particular, three tools have been designed and developed: (i) ASSESS: Accurate Simulator of SEUs affecting the configuration memory of SRAM-based FPGAs, a simulator of SEUs affecting the configuration memory of an SRAM-based FPGA system for the early assessment of the sensitivity to SEUs; (ii) UA2TPG: Untestability Analyzer and Automatic Test Pattern Generator for SEUs Affecting the Configuration Memory of SRAM-based FPGAs, a static analysis tool for the identification of untestable SEUs and for the automatic generation of test patterns for in-service testing of 100% of the testable SEUs; and (iii) GABES: Genetic Algorithm Based Environment for SEU Testing in SRAM-FPGAs, a genetic-algorithm-based environment for the generation of an optimized set of test patterns for in-service testing of SEUs. The proposed tools have been applied to some circuits from the ITC'99 benchmark. The results obtained from these experiments have been compared with results obtained from similar experiments in which we considered the stuck-at fault model instead of the more accurate model for SEUs. From the comparison of these experiments we have been able to verify that the proposed software tools are indeed more accurate than similar tools available today. In particular, the comparison between the results obtained using ASSESS and those obtained by fault injection has shown that the proposed fault simulator has an average error of 0.1% and a maximum error of 0.5%, while using a stuck-at fault simulator the average error with respect to the fault injection experiment was 15.1%, with a maximum error of 56.2%. Similarly, the comparison between the results obtained using UA2TPG for the accurate SEU model and the results obtained for stuck-at faults has shown an average difference in untestability of 7.9%, with a maximum of 37.4%. Finally, the comparison between the fault coverage obtained by test patterns generated for the accurate SEU model and the fault coverage obtained by test patterns designed for stuck-at faults shows that the former detect 100% of the testable faults, while the latter reach an average fault coverage of 78.9%, with a minimum of 54% and a maximum of 93.16%.
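
    As a toy illustration of the genetic-algorithm idea behind GABES only, the sketch below evolves a small set of test patterns toward higher coverage. The pattern width, set size and coverage function are stand-in assumptions; in the real tool the fitness would come from fault simulation against the accurate SEU model.

```python
import random

PATTERN_BITS = 8     # width of one test pattern (illustrative)
SET_SIZE = 4         # number of patterns in a test set

def coverage(test_set: list[int]) -> float:
    """Placeholder fitness: in GABES this would be fault coverage from SEU simulation."""
    return len({p & 0x0F for p in test_set}) / 16.0

def mutate(test_set: list[int]) -> list[int]:
    """Flip one random bit in one random pattern."""
    child = test_set[:]
    i = random.randrange(SET_SIZE)
    child[i] ^= 1 << random.randrange(PATTERN_BITS)
    return child

def evolve(generations: int = 200, population: int = 20) -> list[int]:
    """Keep the fittest half of the population and refill it with mutated copies."""
    pop = [[random.randrange(1 << PATTERN_BITS) for _ in range(SET_SIZE)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=coverage, reverse=True)
        pop = pop[: population // 2]                                   # selection
        pop += [mutate(random.choice(pop)) for _ in range(population - len(pop))]
    return max(pop, key=coverage)

if __name__ == "__main__":
    best = evolve()
    print("best test set:", best, "coverage:", coverage(best))
```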