
    Hybrid Intelligent Testing in Simulation-Based Verification


    Accelerating Coverage Closure For Hardware Verification Using Machine Learning

    Functional verification confirms that the logic of a design meets its specification. The most commonly used method for verifying complex designs is simulation-based verification, whose quality depends on the quality and diversity of the tests that are simulated. However, it is time-consuming and compute-intensive, because a large volume of tests must be simulated to exhaustively exercise the design functionality and find and fix logic bugs. A common measure of success is functional coverage, typically reported as the percentage of functionality exercised by the test suite. This thesis proposes a novel methodology that constructs models using support vector machines (SVMs), gradient boosting classifiers, and neural networks, aimed at replacing random test generation to speed up coverage collection.
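    To make the idea concrete, the sketch below trains a classifier to predict whether a candidate random test is likely to hit not-yet-covered functionality, so that only promising tests are simulated. This is a minimal illustration, not the thesis's actual method: the feature encoding, the choice of `GradientBoostingClassifier`, and the synthetic data are all assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each row encodes one test's generator knob
# settings; the label says whether simulating it hit a new coverage bin.
X_past = rng.random((500, 8))                              # already-simulated tests
y_past = (X_past[:, 0] + X_past[:, 3] > 1.0).astype(int)   # synthetic labels

clf = GradientBoostingClassifier().fit(X_past, y_past)

# Generate many candidate random tests, but only simulate the ones the
# model rates most likely to add new coverage.
candidates = rng.random((10_000, 8))
scores = clf.predict_proba(candidates)[:, 1]
to_simulate = candidates[np.argsort(scores)[-100:]]        # top-100 candidates
print(f"selected {len(to_simulate)} of {len(candidates)} candidate tests")
```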

    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail through inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.
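    As a small illustration of point (i), the sketch below defines a tiny fully convolutional network for binary medical image segmentation (e.g. lesion vs. background). The architecture, input size, and loss are illustrative assumptions only, not a model surveyed in the paper.

```python
import tensorflow as tf

# A tiny fully convolutional network mapping a greyscale scan to a
# per-pixel probability mask.
def build_segmenter(size=128):
    inp = tf.keras.Input(shape=(size, size, 1))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # pixel-wise mask
    return tf.keras.Model(inp, out)

model = build_segmenter()
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(images, masks, ...) once paired scans and expert masks are available
```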

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies for such tasks are largely manual, and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms; this, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Deep learning : enhancing the security of software-defined networks

    Software-defined networking (SDN) is a communication paradigm that promotes network flexibility and programmability by separating the control plane from the data plane, consolidating the logic of network devices into a single entity known as the controller. This architecture raises significant security challenges tied to its defining characteristics, notably programmability and centralisation, and security flaws pose a risk to controller integrity, confidentiality and availability.

    By detaching the control logic from switching and routing devices, SDN forms a central plane, or network controller, that mediates communications between applications and devices. The architecture enhances network resilience, simplifies management procedures and supports network policy enforcement, but it is also vulnerable to new attack vectors that target the controller. A centralised entity that controls the entire network is a primary target for intruders. The controller sits between the application plane and the data plane, communicating with each through its northbound and southbound interfaces, respectively. These communications are prone to attacks such as eavesdropping and tampering, and the controller software itself is vulnerable to attacks such as buffer and stack overflow, which enable remote code execution and can give attackers control of the entire network. Traditional network attacks also become more destructive in this centralised setting.

    Current security solutions rely on traditional measures such as firewalls or intrusion detection systems (IDS). An IDS can take one of two approaches: signature-based or anomaly-based detection. The signature-based approach cannot detect zero-day attacks, while anomaly-based detection suffers from high false-positive and false-negative alarm rates. Such inaccuracies can have significant consequences, particularly for threats that target the controller; improving the accuracy of the IDS therefore enhances controller security and, in turn, SDN security.

    This thesis introduces a threat detection approach aimed at improving the accuracy and efficiency of the IDS. To ground the proposed framework, an empirical study of SDN controller security was conducted to identify, formalise and quantify security concerns related to the SDN architecture, in particular threats originating from the existence of the control plane. The framework comprises two stages: deep learning (DL) algorithms first reduce the dimensionality of the inputs, which are then forwarded to clustering algorithms. Rather than using the output of the neural network directly, the framework compresses the features of each input record to a single value, its reconstruction error, which simplifies and improves the performance of the clustering algorithm. Pre-training with a DL algorithm thereby addresses the dimensionality problem of k-means clustering, and the use of unsupervised algorithms facilitates the discovery of new attacks.
Further, this study compares generative energy-based models (restricted Boltzmann machines) with non-probabilistic models (autoencoders). The framework was implemented in TensorFlow, evaluated in four scenarios, and benchmarked on the KDD99 dataset; simulation results were statistically analysed using a confusion matrix and compared with similar related work. The results show that using the DL algorithm to reduce dimensionality significantly improved detection accuracy and reduced false-positive and false-negative alarm rates, and extensive simulation studies on benchmark tasks demonstrated that the proposed framework consistently outperforms competing approaches. The framework, adapted from existing similar approaches, yielded promising outcomes and offers a robust prospect for deployment in modern threat detection systems in SDN, a further step towards a reliable IDS that enhances the security of SDN controllers.
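    The sketch below shows one plausible reading of the described pipeline: a Keras autoencoder compresses each record to a single reconstruction-error value, and k-means then clusters those values. The layer sizes, the synthetic data standing in for KDD99 records, and the two-cluster choice are assumptions, not the thesis's exact configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.random((2000, 41)).astype("float32")   # stand-in for 41 KDD99 features

# Train a small autoencoder on (mostly benign) traffic records.
inp = tf.keras.Input(shape=(41,))
h = tf.keras.layers.Dense(16, activation="relu")(inp)
code = tf.keras.layers.Dense(4, activation="relu")(h)
h = tf.keras.layers.Dense(16, activation="relu")(code)
out = tf.keras.layers.Dense(41, activation="sigmoid")(h)
ae = tf.keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=5, batch_size=64, verbose=0)

# Stage 1: compress each record to one value, its reconstruction error.
err = np.mean((X - ae.predict(X, verbose=0)) ** 2, axis=1)

# Stage 2: cluster the 1-D errors; the high-error cluster is flagged as attacks.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(err.reshape(-1, 1))
```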

    Acceleration of hardware code coverage closure using machine learning

    Abstract. With ever-increasing system-on-chip (SoC) design complexity, the verification of such systems is becoming more challenging and extremely time-consuming, and human effort alone is no longer sufficient or efficient enough to keep pace with escalating market demands. In this work, we present a practical way of utilizing machine learning (ML) to reduce the resource overhead of hardware design verification. Our focus in this thesis is the time spent on coverage closure, which usually occupies a great deal of the whole verification time. Both deep learning (DL) and reinforcement learning (RL) are deployed for this purpose, in two separate experiments, to determine the most effective way to accomplish coverage closure. On one hand, neural networks (NNs) were used to assess whether a stimulus is worth simulating, by predicting the coverage it would generate. On the other hand, Q-learning was used to find the minimal set of tests needed to reach a given code coverage goal, by optimizing and reducing the test set while still achieving the same coverage levels. Both experiments yielded encouraging results. First, the root mean square error (RMSE) of the neural network models was about 3 and 5 in predicting two different coverage values, respectively, which is quite good for training on a small dataset. Second, our Q-agent outperformed the coverage-ranking utility of the simulation tool by almost 43%, reducing the number of tests from the 63 suggested by the simulator to 36. This should markedly reduce the number of simulations required in weekly regressions and hence yield a large saving in time and resources. Both approaches aim to reduce engineering effort by accelerating and automating the verification process, freeing engineers' time to focus on more important matters.
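    A minimal tabular Q-learning sketch of the test-minimisation idea follows: the agent learns which test to add next, given the coverage points already hit, and a greedy rollout then extracts a small suite. The toy coverage map, reward shaping, and hyperparameters are illustrative assumptions, not the thesis's setup.

```python
import random
from collections import defaultdict

# Hypothetical coverage map: test id -> set of code-coverage points it hits.
TESTS = {0: {0, 1, 2}, 1: {2, 3}, 2: {4, 5}, 3: {0, 5, 6}, 4: {6, 7}}
GOAL = set().union(*TESTS.values())

def train(episodes=3000, alpha=0.1, gamma=0.9, eps=0.2):
    Q = defaultdict(float)  # (frozenset of covered points, test) -> value
    for _ in range(episodes):
        covered, used = frozenset(), set()
        while covered != GOAL:
            pool = [t for t in TESTS if t not in used]
            a = (random.choice(pool) if random.random() < eps
                 else max(pool, key=lambda t: Q[(covered, t)]))
            nxt = covered | TESTS[a]
            # Reward newly covered points, penalise the cost of one more test.
            r = len(nxt) - len(covered) - 0.5
            rest = [t for t in pool if t != a]
            best = max((Q[(nxt, t)] for t in rest), default=0.0)
            Q[(covered, a)] += alpha * (r + gamma * best - Q[(covered, a)])
            covered, used = nxt, used | {a}
    return Q

def smallest_suite(Q):
    covered, used, suite = frozenset(), set(), []
    while covered != GOAL:
        pool = [t for t in TESTS if t not in used]
        a = max(pool, key=lambda t: Q[(covered, t)])
        suite.append(a)
        covered, used = covered | TESTS[a], used | {a}
    return suite

print("reduced suite:", smallest_suite(train()))
```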

    A One-Class Support Vector Machine Calibration Method for Time Series Change Point Detection

    It is important to identify the change point of a system's health status, which usually signifies an incipient fault under development. The One-Class Support Vector Machine (OC-SVM) is a popular machine learning model for anomaly detection and could hence be used to identify change points; however, it is often difficult to obtain an OC-SVM model that performs well on sensor-measurement time series for identifying change points in system health status. In this paper, we propose a novel approach to calibrating OC-SVM models that uses a heuristic search to find a set of input data and hyperparameters yielding a well-performing model. Our results on the C-MAPSS dataset demonstrate that OC-SVM can achieve satisfactory accuracy in detecting change points in time series with less training data than state-of-the-art deep learning approaches require. In our case study, the OC-SVM calibrated by the proposed method proves useful especially in scenarios with a limited amount of training data.
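    A minimal sketch of the calibration idea using scikit-learn's `OneClassSVM`: a random search over `nu` and `gamma` picks the model that best separates a known-healthy window from later data, and a persistence rule locates the change point. The synthetic degradation signal and the scoring heuristic are assumptions, not the paper's actual procedure.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
# Synthetic run-to-failure signal: healthy noise, then a drifting fault.
healthy = rng.normal(0.0, 1.0, (300, 4))
faulty = rng.normal(0.0, 1.0, (200, 4)) + np.linspace(0, 3, 200)[:, None]
series = np.vstack([healthy, faulty])

train = series[:150]                   # assumed-healthy early window
best, best_score = None, -np.inf
for _ in range(50):                    # heuristic random search
    nu = rng.uniform(0.01, 0.2)
    gamma = 10 ** rng.uniform(-3, 0)
    m = OneClassSVM(nu=nu, gamma=gamma).fit(train)
    # Prefer models that accept the healthy window but reject the tail.
    score = (m.decision_function(train).mean()
             - m.decision_function(series[-100:]).mean())
    if score > best_score:
        best, best_score = m, score

flags = best.predict(series) == -1     # -1 marks anomalies
# Change point: first index where anomalies persist for 10 consecutive steps.
run = np.convolve(flags, np.ones(10, int), "valid")
print("estimated change point near index", int(np.argmax(run == 10)))
```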