
    Accelerating Deterministic and Stochastic Binarized Neural Networks on FPGAs Using OpenCL

    Recent technological advances have proliferated the available computing power, memory, and speed of modern Central Processing Units (CPUs), Graphics Processing Units (GPUs), and Field Programmable Gate Arrays (FPGAs). Consequently, the performance and complexity of Artificial Neural Networks (ANNs) are burgeoning. While GPU-accelerated Deep Neural Networks (DNNs) currently offer state-of-the-art performance, they consume large amounts of power. Training such networks on CPUs is inefficient, as data throughput and parallel computation are limited. FPGAs are considered a suitable candidate for performance-critical, low-power systems, e.g. Internet of Things (IoT) edge devices. Using the Xilinx SDAccel or Intel FPGA SDK for OpenCL development environments, networks described using the high-level OpenCL framework can be accelerated on heterogeneous platforms. Moreover, the resource utilization and power consumption of DNNs can be further reduced by utilizing regularization techniques that binarize network weights. In this paper, we introduce, to the best of our knowledge, the first FPGA-accelerated stochastically binarized DNN implementations, and compare them to implementations accelerated using both GPUs and FPGAs. Our developed networks are trained and benchmarked using the popular MNIST and CIFAR-10 datasets, and achieve near state-of-the-art performance, while offering a >16-fold improvement in power consumption compared to conventional GPU-accelerated networks. Both our FPGA-accelerated deterministic and stochastic BNNs reduce inference times on MNIST and CIFAR-10 by >9.89x and >9.91x, respectively.
    Comment: 4 pages, 3 figures, 1 table
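
    The abstract does not spell out the binarization rule, but stochastic weight binarization is commonly realized with a "hard sigmoid" sampling rule, as in BinaryConnect-style training. The Python sketch below illustrates that idea under this assumption; the function names are illustrative, not taken from the paper.

    import numpy as np

    def stochastic_binarize(w, rng=None):
        # Probability of drawing +1 follows the "hard sigmoid"
        # p = clip((w + 1) / 2, 0, 1); weights are mapped to {-1, +1}.
        rng = rng or np.random.default_rng()
        p = np.clip((w + 1.0) / 2.0, 0.0, 1.0)
        return np.where(rng.random(w.shape) < p, 1.0, -1.0)

    def deterministic_binarize(w):
        # Deterministic counterpart: take the sign of each weight.
        return np.where(w >= 0.0, 1.0, -1.0)

    # Example: binarize a small weight matrix both ways.
    w = np.array([[0.3, -0.8], [0.05, 0.9]])
    print(stochastic_binarize(w))
    print(deterministic_binarize(w))

    The stochastic variant acts as a regularizer during training, since a weight near zero is almost equally likely to binarize to +1 or -1, while on binarized hardware both variants reduce multiplications to sign flips.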

    Network Intrusion Detection System using Deep Learning Technique

    The rise in internet usage in recent times has led to tremendous growth in computer networks, with large volumes of information transported daily. This growth has also generated numerous security threats and privacy concerns about networks and data. To tackle these issues, several protective measures have been developed, including Intrusion Detection Systems (IDSs). IDSs form a major backbone of network security and provide an extra layer of security on top of other defence mechanisms in a network. However, existing signature-based IDSs, such as Snort, are unable to detect unknown and novel threats. Anomaly-detection-based IDSs that use Machine Learning (ML) approaches do not scale well when enormous amounts of data are presented: the runtime during modelling grows with dataset size, demanding high computational resources to meet runtime requirements. This thesis proposes a Feedforward Deep Neural Network (FFDNN) for an intrusion detection system that performs binary classification on the popular NSL-Knowledge Discovery and Data Mining (NSL-KDD) dataset. The model was developed with the Keras API integrated into TensorFlow in the Google Colaboratory environment. Three variants of the FFDNN were trained on the NSL-KDD dataset; each network consisted of two hidden layers, with 64 and 32, 32 and 16, and 512 and 256 neurons respectively, each using the ReLU activation function. The sigmoid activation function was used in the output layer for binary classification, and binary cross-entropy was used as the loss function. Regularization was set to a dropout rate of 0.2 and the Adam optimizer was used. The three networks were trained for 16, 20, and 20 epochs with batch sizes of 256, 64, and 128, respectively. After evaluating the performance of the FFDNNs on the training data, predictions were made on the test data, achieving accuracies of 89%, 84%, and 87%. The experiment was also conducted on the same training dataset (NSL-KDD) using conventional machine learning algorithms (Random Forest, K-nearest neighbor, Logistic Regression, Decision Tree, and Naïve Bayes), whose predictions on the test data gave accuracies of 81%, 76%, 77%, 77%, and 77%, respectively. The performance of the FFDNNs was calculated using several important metrics (FPR, FAR, F1 measure, Precision) and compared to the conventional ML algorithms; the outcome shows that the deep neural networks performed best, as their dense architecture scaled with the large dataset and offered faster run times during training, in contrast to the slow run times of the conventional ML algorithms. This implies that when the dataset is large and fast computation is required, an FFDNN is the better choice for best performance accuracy.
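
    For concreteness, a minimal Keras sketch of the first FFDNN variant described above (two hidden layers of 64 and 32 ReLU neurons, dropout rate 0.2, sigmoid output, binary cross-entropy, Adam, 16 epochs, batch size 256) might look as follows. The input width of 122 assumes one-hot encoding of the NSL-KDD categorical features, the placement of dropout after each hidden layer is an assumption, and the random arrays are placeholders for the preprocessed dataset; none of these details are stated in the thesis abstract.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    n_features = 122  # assumed width of one-hot-encoded NSL-KDD records

    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.2),                    # dropout regularization, rate 0.2
        layers.Dense(32, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),  # binary output: normal vs. attack
    ])

    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Placeholder data; substitute the preprocessed NSL-KDD training split.
    x_train = np.random.rand(1000, n_features).astype("float32")
    y_train = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

    model.fit(x_train, y_train, epochs=16, batch_size=256)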