
    A Parallel Computational Approach for String Matching- A Novel Structure with Omega Model

    In recent days, the parallel string matching problem has caught the attention of many researchers because of its importance in applications such as IRS, genome sequencing, and data cleaning. While the problem is easily stated and many simple algorithms perform very well in practice, numerous works have been published on the subject and research is still very active. In this paper we propose an omega parallel computing model for parallel string matching. The algorithm is designed to work on the omega-model parallel architecture, where the text is divided for parallel processing and special searching at the division points is required for consistent and complete matching. The algorithm reduces the number of comparisons, and parallelization improves the time efficiency. Experimental results show that, on a multi-processor system, the omega-model implementation of the proposed parallel string matching algorithm can reduce string matching time.
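The division-point handling described above can be illustrated with a minimal Python sketch of the general idea (not the paper's omega-model implementation or its hardware mapping): the text is divided among workers, and each division is extended by len(pattern) - 1 characters so that occurrences straddling a division point are still found exactly once.

```python
from concurrent.futures import ProcessPoolExecutor

def find_all(chunk, pattern, offset=0):
    """Return absolute start positions of pattern occurrences in chunk."""
    hits, i = [], chunk.find(pattern)
    while i != -1:
        hits.append(i + offset)
        i = chunk.find(pattern, i + 1)
    return hits

def parallel_match(text, pattern, workers=4):
    """Divide the text among workers; each division is extended by
    len(pattern) - 1 characters so that a match crossing a division
    point is still seen by exactly one worker."""
    n, m = len(text), len(pattern)
    size = -(-n // workers)                    # ceiling division
    futures = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for k in range(workers):
            lo = k * size
            if lo >= n:
                break
            hi = min(n, lo + size + m - 1)     # overlap at the division point
            futures.append(pool.submit(find_all, text[lo:hi], pattern, lo))
        return sorted(h for f in futures for h in f.result())

if __name__ == "__main__":
    print(parallel_match("abracadabra" * 4, "cadab"))
```

Because the extension is only m - 1 characters long, no occurrence can be reported by two workers, so no deduplication step is needed in this sketch.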

    A study on the effect of stroop test on the formation of students discipline by using the Heart Rate Variability (HRV) technique

    Discipline refers to self-control and individual behaviour. Beyond that, discipline is an important element in the formation of integrity. The objective of the study is to assess the effects of using the Stroop test as a biofeedback protocol to evaluate an individual's level of discipline. A clinical study was conducted on 50 participants, all undergraduate students from Universiti Malaysia Pahang, who were divided into two groups: the first group consisted of high academic achievers and the second of low academic achievers. The Heart Rate Variability (HRV) technique was used in the assessment of this protocol. The findings show a positive relationship between the Stroop test and students' discipline: those who excelled obtained a higher score in the LF spectrum compared with HF and VLF, while students with lower achievement showed higher VLF and HF spectrum scores than LF. In conclusion, this test is one of the tests that can be used to increase an individual's level of discipline.
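For readers unfamiliar with the VLF/LF/HF measures referred to above, the sketch below shows one conventional way such band powers can be estimated from RR intervals (Welch spectral estimation over the commonly used 0.003-0.04, 0.04-0.15, and 0.15-0.4 Hz bands). The study's actual acquisition hardware and analysis pipeline are not described here, so every detail of the code is an assumption for illustration only.

```python
import numpy as np
from scipy.signal import welch

# Conventional HRV frequency bands (Hz); assumed, not taken from the study.
BANDS = {"VLF": (0.003, 0.04), "LF": (0.04, 0.15), "HF": (0.15, 0.40)}

def hrv_band_powers(rr_ms, fs=4.0):
    """Estimate VLF/LF/HF power from RR intervals given in milliseconds.

    The irregularly spaced RR series is resampled onto an even time grid
    at fs Hz, then Welch's method gives the power spectral density.
    """
    t = np.cumsum(rr_ms) / 1000.0              # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)    # uniform time grid
    tachogram = np.interp(grid, t, rr_ms)      # resampled RR series
    f, psd = welch(tachogram - tachogram.mean(), fs=fs,
                   nperseg=min(256, len(grid)))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (f >= lo) & (f < hi)
        powers[name] = np.trapz(psd[mask], f[mask])   # integrate PSD over band
    return powers
```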

    FPGA based Network Security Architecture for High Speed Networks

    Cryptography and network security in high-speed networks demand specialized hardware in order to keep up with the network speed. These hardware modules are realized using reconfigurable FPGA technology to support heavy computation. Our work is mainly based on designing an efficient architecture for a cryptographic module and a network intrusion detection system for a high-speed network. All the designs are coded in VHDL and synthesized using Xilinx ISE to verify their functionality. The Virtex II Pro FPGA is chosen as the target device for realizing the proposed design. In the cryptographic module, the International Data Encryption Algorithm (IDEA), a symmetric-key block cipher, is chosen as the algorithm for implementation. The design goal is to increase the data conversion rate, i.e., the throughput, to a substantial value so that the design can be used as a cryptographic coprocessor in high-speed network applications. We propose a new n-bit multiplier that generates fewer than n/2 partial products and operates on operands in diminished-one representation. The multiplication is based on Radix-8 Booth recoding with different combinations of outer-round and inner-round pipelining, and a substantially high throughput-to-area ratio is achieved. The Network Intrusion Detection System (NIDS) module is designed for scanning suspicious patterns in data packets incoming to the network. Scanning a data packet against multiple patterns in quick time is a highly computation-intensive task. A string matching module is realized using a memory-efficient multi-hashing data structure called a Bloom filter, in which multiple patterns can be matched in a single clock cycle. A separate parallel hash module is also designed to eliminate packets that are false positives. The string matching module is coded and functionally verified in VHDL targeting the Virtex II Pro FPGA, and performance evaluation is made in terms of speed and resource utilization.
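The Bloom-filter matching step can be sketched in software as follows (the actual module is a VHDL hardware design; this Python sketch only illustrates the data structure): patterns are hashed into a bit array by k hash functions, every sliding window of incoming data is tested for membership, and because a Bloom filter can raise false positives, windows that hit are verified against the exact pattern set, mirroring the role of the separate parallel hash module. All names and parameters below are illustrative assumptions.

```python
import hashlib

class BloomMatcher:
    """Software sketch of Bloom-filter pattern matching with exact
    verification of hits (analogous to eliminating false positives).
    For simplicity all patterns must share one length."""

    def __init__(self, patterns, m_bits=1 << 16, k=4):
        lengths = {len(p) for p in patterns}
        assert len(lengths) == 1, "sketch assumes equal-length patterns"
        self.plen = lengths.pop()
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)
        self.exact = set(patterns)            # used to discard false positives
        for p in patterns:
            for i in self._hashes(p):
                self.bits[i // 8] |= 1 << (i % 8)

    def _hashes(self, s):
        # Derive k bit positions from one SHA-256 digest (an assumption
        # adequate for a sketch, not the hardware hash functions).
        digest = hashlib.sha256(s.encode()).digest()
        return [int.from_bytes(digest[4 * j: 4 * j + 4], "big") % self.m
                for j in range(self.k)]

    def _maybe(self, s):
        return all(self.bits[i // 8] & (1 << (i % 8)) for i in self._hashes(s))

    def scan(self, payload):
        """Return (offset, pattern) pairs confirmed in the payload."""
        hits = []
        for i in range(len(payload) - self.plen + 1):
            window = payload[i:i + self.plen]
            if self._maybe(window) and window in self.exact:   # verify hit
                hits.append((i, window))
        return hits

matcher = BloomMatcher(["attack", "evilbt"])
print(matcher.scan("normal traffic with attack payload"))
```

In hardware, the k hash lookups and the verification run in parallel, which is what allows a membership test per clock cycle; the sequential loop here is only for clarity.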

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves the IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the automated AI/ML approaches previously introduced for VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications at various abstraction levels in the future to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Techniques For Accelerating Large-Scale Automata Processing

    The big-data era has brought new challenges to computer architectures because of the large scale of computation and data. The problem becomes critical in several domains where the computation is also irregular, among which we focus on automata processing in this dissertation. Automata are widely used in applications from different domains such as network intrusion detection, machine learning, and parsing. Large-scale automata processing is challenging for traditional von Neumann architectures. To this end, many accelerator prototypes have been proposed; Micron's Automata Processor (AP) is an example. However, as a spatial architecture, it is unable to handle large automata programs without repeated reconfiguration and re-execution. We found that a large number of automata states are never enabled during execution but are still configured on the AP chips, leading to underutilization. To address this issue, we propose a lightweight offline profiling technique to predict the never-enabled states and keep them out of the AP. Furthermore, we develop SparseAP, a new execution mode for the AP that handles mispredictions efficiently. Our software and hardware co-optimization obtains a 2.1x speedup over the baseline AP execution across 26 applications. Since the AP is not publicly available, we also aim to reduce the performance gap between a general-purpose accelerator, the Graphics Processing Unit (GPU), and the AP. We identify excessive data movement in the GPU memory hierarchy and propose optimization techniques to reduce it. Although these techniques significantly alleviate the memory-related bottlenecks, a side effect is the static assignment of work to cores, which leads to poor compute utilization as GPU cores are wasted on idle automata states. Therefore, we propose a new dynamic scheme that effectively balances compute utilization with reduced memory usage. Our combined optimizations provide a significant improvement over the previous state-of-the-art GPU implementations of automata. Moreover, they enable current GPUs to outperform the AP across several applications while performing within an order of magnitude for the rest. To make automata processing on GPUs more generic for tasks with different amounts of parallelism, we propose AsyncAP, a lightweight approach that scales with the input length. Threads run asynchronously in AsyncAP, alleviating the bottleneck of thread-block synchronization. The evaluation and detailed analysis demonstrate that AsyncAP achieves a significant speedup, or at least comparable performance, under various scenarios for most of the applications. Future work aims to design automatic ways to generate optimizations and mappings between automata and computation resources for different GPUs. We will broaden the scope of this dissertation to domains such as graph computing.
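The automata-processing kernel that both the AP and the GPU implementations accelerate can be summarised in a short sequential sketch (illustrative Python, not the dissertation's code): on each input symbol, the currently enabled states whose symbol classes match will fire and enable their successors, and recording which states are ever enabled is essentially the information that the offline profiling described above relies on. All class and state names below are made up for the example.

```python
from collections import defaultdict

class NFA:
    """Tiny homogeneous NFA: each state accepts a set of symbols and has
    outgoing edges; a sequential sketch of automata processing."""

    def __init__(self):
        self.symbols = {}                  # state -> accepted symbol set
        self.edges = defaultdict(set)      # state -> successor states
        self.start, self.accept = set(), set()

    def add_state(self, name, symbols, start=False, accept=False):
        self.symbols[name] = set(symbols)
        if start:
            self.start.add(name)
        if accept:
            self.accept.add(name)

    def add_edge(self, src, dst):
        self.edges[src].add(dst)

    def run(self, text):
        """Return (match positions, set of states ever enabled)."""
        ever_enabled, matches = set(), []
        active = set(self.start)           # states enabled for the next symbol
        for pos, ch in enumerate(text):
            ever_enabled |= active
            fired = {s for s in active if ch in self.symbols[s]}
            matches += [pos for s in fired if s in self.accept]
            # successors of fired states, plus always-enabled start states
            active = set(self.start) | {d for s in fired for d in self.edges[s]}
        return matches, ever_enabled

# Recognize "ab": a start state matching 'a' feeds an accepting state matching 'b'.
nfa = NFA()
nfa.add_state("q0", "a", start=True)
nfa.add_state("q1", "b", accept=True)
nfa.add_edge("q0", "q1")
print(nfa.run("xabyab"))   # states absent from ever_enabled could be pruned offline
```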