44 research outputs found

    A Literature Study on Parallel Key Cryptographic Algorithm

    In the field of computer security, a large number of papers discuss cryptography. Cryptography is the art of sending data to the intended recipient while preserving the integrity, confidentiality and authenticity of the data. It includes techniques such as microdots, merging words with images, and other ways to hide information in storage or transit. In today's computer-centric world, however, cryptography is most often associated with converting plain-text (ordinary text, also referred to as clear-text) into cipher-text (a process called encryption) and back again into the original plain-text message (a process known as decryption). The main objectives of cryptography are confidentiality (the message cannot be understood by anyone other than the intended recipient), integrity (the message cannot be altered during its storage or transmission), non-repudiation (the creator/sender of the information cannot later deny his or her intentions in the creation or transmission of the information) and authentication (the sender and receiver can confirm each other's identity and the origin/destination of the information).
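
    As a concrete illustration of these objectives, the minimal sketch below uses Python's third-party cryptography package (our choice of library, not one named in the paper) to show how an authenticated cipher such as AES-GCM provides confidentiality, integrity and authenticity in a single primitive; the key, nonce and messages are hypothetical.

        # Minimal sketch: AES-GCM gives confidentiality (ciphertext hides the
        # message) plus integrity/authenticity (tampering makes decryption fail).
        # Assumes the third-party `cryptography` package is installed.
        import os
        from cryptography.exceptions import InvalidTag
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=128)  # shared secret key
        aesgcm = AESGCM(key)
        nonce = os.urandom(12)                     # must be unique per message

        plaintext = b"meet at noon"
        ciphertext = aesgcm.encrypt(nonce, plaintext, None)          # encryption
        assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext  # decryption

        # Any modification of the ciphertext is detected, which covers the
        # integrity and authenticity objectives.
        tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]
        try:
            aesgcm.decrypt(nonce, tampered, None)
        except InvalidTag:
            print("tampering detected")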

    Hardware accelerated authentication system for dynamic time-critical networks

    The secure and efficient operation of time-critical networks, such as vehicular networks, the smart grid and other smart infrastructures, is of primary importance in today's society. It is crucial to minimize the impact of security mechanisms on such networks so that the safe and reliable operation of time-critical systems is not interfered with. Even though several security mechanisms exist, their application to smart-infrastructure and Internet of Things (IoT) deployments may not meet the ubiquitous and time-sensitive needs of these systems: existing security mechanisms either introduce significant computation and communication overhead, or they do not scale to a large number of IoT components. In particular, as a primary authentication mechanism, existing digital signatures cannot meet the real-time processing requirements of time-critical networks, and they do not fully benefit from advancements in the underlying hardware/software of IoT devices. In this thesis, we create a reliable and scalable authentication system that ensures the secure and reliable operation of dynamic time-critical networks, such as vehicular networks, through hardware acceleration. The system is implemented on Systems-on-Chip (SoC), leveraging the parallel processing capabilities of the embedded Graphics Processing Units (GPUs) alongside the Central Processing Units (CPUs). We identify a set of cryptographic authentication mechanisms whose operations are highly parallelizable while still maintaining high standards of security against various malicious adversaries. We also build a fully functional prototype, which we call the "Dynamic Scheduler", that schedules messages for signing or verification based on their priority level and the number of messages currently in the system, so as to obtain maximum throughput or minimum latency, whichever is required.
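
    A minimal sketch of how such a dynamic scheduler might be organised is given below. All names (DynamicScheduler, submit, dispatch) and the batching policy are our own illustrative assumptions rather than the thesis's actual implementation, and the GPU and CPU paths are stubs.

        # Sketch: priority-driven scheduling of signature operations.
        # Assumption: batching many messages amortises GPU launch overhead
        # (throughput), while a short queue is served immediately (latency).
        import heapq
        import itertools

        def gpu_verify_batch(batch):
            print(f"GPU batch of {len(batch)} messages")  # stub for a GPU kernel

        def cpu_verify(message):
            print(f"CPU verified {message!r}")            # stub for CPU fallback

        class DynamicScheduler:
            def __init__(self, batch_size=64):
                self.queue = []                   # min-heap ordered by priority
                self.counter = itertools.count()  # tie-breaker for equal priorities
                self.batch_size = batch_size

            def submit(self, message, priority):
                # Lower number = more urgent.
                heapq.heappush(self.queue, (priority, next(self.counter), message))

            def dispatch(self):
                if len(self.queue) >= self.batch_size:
                    # Enough work queued: hand a whole batch to the GPU.
                    batch = [heapq.heappop(self.queue)[2]
                             for _ in range(self.batch_size)]
                    gpu_verify_batch(batch)
                elif self.queue:
                    # Few messages: verify at once to minimise latency.
                    cpu_verify(heapq.heappop(self.queue)[2])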

    A distributed authentication architecture and protocol

    Most user authentication methods rely on a single verifier stored at a central location within the information system. Such storage of sensitive information presents a single point of compromise from a security perspective: if the system is compromised and the verifier can be extracted, users' digital identities are directly threatened. This paper proposes a distributed authentication environment in which there is no such single point of compromise. We propose an architecture that does not rely on a single verifier to authenticate users, but rather distributes the authentication process among several independent authentication servers. Each server independently performs its own authentication of the user, for example by asking the user to complete a challenge in order to prove his or her claim to a digital identity. The proposed architecture allows each server to use any authentication factor. We provide a security analysis of the proposed architecture and protocol, which shows they are secure against the attacks chosen in the analysis.
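
    The sketch below illustrates the core idea under our own simplifying assumptions: each server holds only its own verifier and runs an independent HMAC challenge-response, and the user is authenticated only if every server accepts. All class and function names are hypothetical.

        # Sketch: distributed authentication with no single point of compromise.
        # Each server stores only the verifier for its own factor; compromising
        # one server does not expose the other factors.
        import hashlib
        import hmac
        import secrets

        class AuthServer:
            def __init__(self, verifier_key: bytes):
                self.verifier_key = verifier_key   # stored locally, never shared

            def challenge(self) -> bytes:
                return secrets.token_bytes(16)     # fresh random challenge

            def verify(self, challenge: bytes, response: bytes) -> bool:
                expected = hmac.new(self.verifier_key, challenge,
                                    hashlib.sha256).digest()
                return hmac.compare_digest(expected, response)

        def user_respond(factor_secret: bytes, challenge: bytes) -> bytes:
            # The user proves knowledge of each factor independently.
            return hmac.new(factor_secret, challenge, hashlib.sha256).digest()

        # Each server uses a different authentication factor.
        factors = {"password": b"pw-derived-key", "otp": b"otp-seed",
                   "device": b"device-key"}
        servers = {name: AuthServer(key) for name, key in factors.items()}

        # Authentication succeeds only if every independent server accepts.
        ok = True
        for name, server in servers.items():
            c = server.challenge()
            ok &= server.verify(c, user_respond(factors[name], c))
        print("authenticated:", ok)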

    Improving GPU SIMD Control Flow Efficiency via Hybrid Warp Size Mechanism

    High single-instruction multiple-data (SIMD) efficiency and low power consumption have made graphics processing units (GPUs) an ideal platform for many complex computational applications. Thousands of threads can be created by programmers and grouped into fixed-size SIMD batches known as warps. High throughput is then achieved by concurrently executing such warps with minimal control overhead. However, if a branch instruction occurs that assigns different paths to different threads, the warp is broken into multiple warps that must be executed serially, reducing the efficiency advantage of SIMD. In this thesis, the contemporary fixed-size warp design is abandoned and a hybrid warp size (HWS) mechanism is proposed. Mixed-size warps are generated according to HWS and are scheduled and issued flexibly. Once a branch divergence occurs, split warps are squeezed according to the proposed algorithm, and warp sizes are downscaled wherever applicable. Based on the updated warp sizes, the warp scheduler calculates the number of cycles the current warp needs and issues the next warp accordingly. As a result, hybrid warps are pushed into the pipeline as soon as possible, and more pipeline stages are overlapped. Simulation results show that this mechanism yields an average speedup of 1.20 over the baseline architecture for a wide variety of general-purpose GPU applications. This work also integrates HWS with dynamic warp formation (DWF), a well-known branch handling mechanism that improves SIMD utilization by forming new warps out of split warps in real time. The warp forming policy is modified to better tolerate warp conflicts, and squeeze operations are added before a warp merges with other warps. Simulation shows that the combination of DWF and HWS generates an average speedup of 1.27 over the DWF-only platform for the same set of GPU benchmarks.
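
    As a rough illustration of the squeeze idea (not the thesis's actual algorithm), the sketch below simulates splitting a 32-thread warp at a branch and downscaling each split warp to the smallest supported size that still covers its active threads. The set of supported warp sizes and all names are our own assumptions.

        # Toy simulation of hybrid warp sizing at a branch divergence: a
        # 32-thread warp splits into taken/not-taken groups, and each group is
        # "squeezed" into the smallest supported warp size that covers it,
        # so fewer idle SIMD lanes are issued.
        WARP_SIZES = (4, 8, 16, 32)  # assumed hardware-supported warp widths

        def squeeze(threads):
            """Smallest supported warp size that can hold the active threads."""
            for size in WARP_SIZES:
                if len(threads) <= size:
                    return size
            return WARP_SIZES[-1]

        def diverge(warp, branch_taken):
            taken = [t for t in warp if branch_taken(t)]
            not_taken = [t for t in warp if not branch_taken(t)]
            return [(g, squeeze(g)) for g in (taken, not_taken) if g]

        warp = list(range(32))
        # Example branch taken only by threads 0-5.
        for group, size in diverge(warp, lambda t: t < 6):
            print(f"{len(group):2d} threads -> warp size {size:2d} "
                  f"(lane utilisation {len(group) / size:.0%})")
        # With fixed 32-wide warps the 6-thread path would occupy 32 lanes
        # (19% utilisation); after squeezing it issues as an 8-wide warp (75%).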

    Exploring ICMetrics to detect abnormal program behaviour on embedded devices

    Execution of unknown or malicious software on an embedded system may trigger harmful system behaviour aimed at stealing sensitive data and/or causing damage to the system, and is thus considered a significant threat to the security of embedded systems. Generally, the resource-constrained nature of commercial off-the-shelf (COTS) embedded devices, such as embedded medical equipment, does not allow computationally expensive protection solutions to be deployed on these devices, rendering them vulnerable. This paper proposes a Self-Organising Map (SOM) based approach and a Fuzzy C-means based approach for detecting abnormal program behaviour to boost embedded system security. The presented technique extracts features derived from the processor's Program Counter (PC) and Cycles Per Instruction (CPI), and then uses these features to identify abnormal behaviour. Results achieved in our experiment show that the proposed SOM based and Fuzzy C-means based methods can identify unknown program behaviours not included in the training set with 90.9% and 98.7% accuracy, respectively.
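
    A minimal sketch of the SOM-based detection step is shown below, using the third-party MiniSom library (our choice, not necessarily the authors'). The PC/CPI features are synthetic stand-ins, and flagging samples by their quantization error against a percentile threshold is a common convention, not the paper's exact procedure.

        # Sketch: train a SOM on features of normal program behaviour (stand-ins
        # for program-counter and cycles-per-instruction statistics), then flag
        # samples with a large quantization error as abnormal.
        # Assumes the third-party `minisom` package.
        import numpy as np
        from minisom import MiniSom

        rng = np.random.default_rng(0)
        normal = rng.normal(loc=[0.5, 1.2], scale=0.05, size=(500, 2))

        som = MiniSom(8, 8, input_len=2, sigma=1.0, learning_rate=0.5,
                      random_seed=0)
        som.random_weights_init(normal)
        som.train_random(normal, 2000)

        # Threshold: e.g. the 99th percentile of training quantization errors.
        errors = np.linalg.norm(normal - som.quantization(normal), axis=1)
        threshold = np.percentile(errors, 99)

        def is_abnormal(sample):
            err = np.linalg.norm(sample - som.quantization(sample[None, :])[0])
            return err > threshold

        print(is_abnormal(normal[0]))             # close to training data: False
        print(is_abnormal(np.array([3.0, 9.0])))  # far from training data: True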

    Data Partitioning and Asynchronous Processing to Improve the Embedded Software Performance on Multicore Processors

    Ensuring information security is an urgent necessity today, and embedded systems and the IoT are developing rapidly, so research on securing embedded software is an active focus. However, optimizing embedded software on multi-core processors to both ensure information security and increase performance has received little attention. This paper proposes and develops a method to improve embedded software performance on multi-core processors based on data partitioning and asynchronous processing. Data are held globally so that any thread can access them. The data are divided into partitions, and the program is implemented as a multi-threaded model in which each thread handles one partition. The size of each partition is proportional to the processing speed and cache size of the corresponding core in the multi-core processor. Threads run in parallel and need no synchronization beyond a shared global variable used to check the executing status of the system. Since our research targets data security in embedded software, we tested and assessed the method with several block ciphers, such as AES and DES, on a Raspberry Pi 3. The average performance improvement achieved was 59.09%.
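
    A minimal sketch of this partitioning scheme, under our own assumptions (made-up core weights, SHA-256 as a stand-in for the per-partition cipher work, and Python threads even though CPU-bound work in CPython would normally call for processes), might look as follows.

        # Sketch: split a globally shared buffer into per-core partitions whose
        # sizes are proportional to assumed core speed/cache weights, then let
        # each thread process its own partition with no synchronisation beyond
        # a shared status flag.
        import hashlib
        import threading

        data = bytes(range(256)) * 4096       # globally shared input buffer
        core_weights = [1.0, 1.0, 1.5, 1.5]   # assumed relative core capacities
        running = True                        # shared executing-status flag

        def partition_sizes(total, weights):
            sizes = [int(total * w / sum(weights)) for w in weights]
            sizes[-1] += total - sum(sizes)   # last core absorbs the remainder
            return sizes

        results = [None] * len(core_weights)

        def worker(index, chunk):
            if running:                       # check the shared status flag
                # Stand-in for per-partition encryption (e.g. AES per block).
                results[index] = hashlib.sha256(chunk).hexdigest()

        threads, offset = [], 0
        for i, size in enumerate(partition_sizes(len(data), core_weights)):
            t = threading.Thread(target=worker,
                                 args=(i, data[offset:offset + size]))
            offset += size
            threads.append(t)
            t.start()
        for t in threads:
            t.join()
        print(results)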

    Computer Science I Like: Proceedings of the Miniconference on 4.11.2011

    A Method for Detecting Abnormal Program Behavior on Embedded Devices

    A potential threat to embedded systems is the execution of unknown or malicious software capable of triggering harmful system behavior, aimed at the theft of sensitive data or causing damage to the system. Commercial off-the-shelf embedded devices, such as embedded medical equipment, are more vulnerable because these types of products cannot be amended conventionally or have limited resources to implement protection mechanisms. In this paper, we present a self-organizing map (SOM)-based approach to enhance embedded system security by detecting abnormal program behavior. The proposed method extracts features derived from the processor's program counter and cycles per instruction, and then uses the features to identify abnormal behavior using the SOM. Results achieved in our experiment show that the proposed method can identify unknown program behaviors not included in the training set with over 98.4% accuracy.

    Doctor of Philosophy

    As the base of the software stack, system-level software is expected to provide efficient and scalable storage, communication, security and resource management functionalities. However, there are many computationally expensive functionalities at the system level, such as encryption, packet inspection, and error correction, all of which require substantial computing power. What's more, today's application workloads have entered gigabyte and terabyte scales, which demand even more computing power. To meet the rapidly increasing computing power demand at the system level, this dissertation proposes using parallel graphics processing units (GPUs) in system software. GPUs excel at parallel computing, and their parallel performance is also improving much faster than that of central processing units (CPUs). However, system-level software was originally designed to be latency-oriented, while GPUs are designed for long-running computation and large-scale data processing, which are throughput-oriented. This mismatch makes it difficult to fit system-level software to GPUs. This dissertation presents generic principles of system-level GPU computing developed while creating our two general frameworks for integrating GPU computing into storage and network packet processing. The principles are generic design techniques and abstractions for dealing with common system-level GPU computing challenges. These principles have been evaluated in concrete cases, including storage and network packet processing applications augmented with GPU computing. The significant performance improvement found in the evaluation shows the effectiveness and efficiency of the proposed techniques and abstractions. This dissertation also presents a literature survey of the relatively young field of system-level GPU computing, introducing the state of the art in both applications and techniques, as well as their future potential.
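
    One recurring principle when fitting latency-oriented system software to throughput-oriented GPUs is to batch small requests before dispatch while bounding the extra queuing delay. The sketch below is our own illustration of that trade-off, not a framework from the dissertation; the GPU kernel is a stub and all names are hypothetical.

        # Sketch: batch small system-level requests (e.g. packets to inspect)
        # for a throughput-oriented device, but flush early when a latency
        # deadline expires.
        import queue
        import time

        STOP = object()  # sentinel that shuts the loop down

        def gpu_process_batch(batch):
            print(f"dispatched batch of {len(batch)}")  # stub for a GPU kernel

        def batching_loop(requests, max_batch=128, max_wait_s=0.002):
            batch, deadline = [], None
            while True:
                timeout = (None if deadline is None
                           else max(0.0, deadline - time.monotonic()))
                try:
                    item = requests.get(timeout=timeout)
                except queue.Empty:
                    item = None                   # deadline expired
                if item is STOP:
                    if batch:
                        gpu_process_batch(batch)  # drain what is left
                    return
                if item is not None:
                    if not batch:                 # first item starts the clock
                        deadline = time.monotonic() + max_wait_s
                    batch.append(item)
                # Flush on a full batch (throughput) or a missed deadline (latency).
                if batch and (len(batch) >= max_batch or item is None):
                    gpu_process_batch(batch)
                    batch, deadline = [], None

    Here max_batch and max_wait_s trade device throughput against the worst-case added latency per request.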