2 research outputs found

    FPGA-Based Reconfigurable Convolutional Neural Network Accelerator Using Sparse and Convolutional Optimization

    Nowadays, the data flow architecture is considered a general solution for the acceleration of a deep neural network (DNN) because of its higher parallelism. However, the conventional DNN accelerator offers only restricted flexibility for diverse network models. In order to overcome this, a reconfigurable convolutional neural network (RCNN) accelerator, a class of DNN accelerator, is required to be developed on the field-programmable gate array (FPGA) platform. In this paper, the sparse optimization of weights (SOW) and convolutional optimization (CO) are proposed to improve the performance of the RCNN accelerator. The combination of SOW and CO is used to optimize the feature map and weight sizes of the RCNN accelerator; therefore, the hardware resources consumed by this RCNN on the FPGA are minimized. The performance of RCNN-SOW-CO is analyzed by means of feature map size, weight size, sparseness of the input feature map (IFM), weight parameter proportion, block random access memory (BRAM), digital signal processing (DSP) elements, look-up tables (LUTs), slices, delay, power, and accuracy. The existing architectures OIDSCNN, LP-CNN, and DPR-NN are used to justify the efficiency of RCNN-SOW-CO. The LUT count of RCNN-SOW-CO with AlexNet designed on the Zynq-7020 is 5150, which is less than that of OIDSCNN and DPR-NN.
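    The core idea behind sparse weight optimization can be sketched as magnitude pruning: weights below a threshold are zeroed so only the nonzero entries need on-chip storage. This is a minimal illustrative sketch, not the paper's SOW algorithm; the threshold value and function names are assumptions.

    ```python
    # Hypothetical sketch of weight sparsification by magnitude pruning.
    # Weights with |w| below `threshold` are zeroed; on hardware, only the
    # surviving nonzero entries (plus a mask or index list) would be stored,
    # shrinking the memory footprint of the weight buffers.
    import numpy as np

    def sparsify_weights(weights, threshold=0.1):
        """Return (pruned weights, boolean keep-mask)."""
        mask = np.abs(weights) >= threshold
        return weights * mask, mask

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.2, size=(8, 8))
    sparse_w, mask = sparsify_weights(w, threshold=0.1)
    density = mask.mean()  # fraction of weights retained after pruning
    ```

    A real accelerator would pair this with a compressed storage format (e.g. index-value pairs) so the zeroed weights consume no BRAM at all.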

    A Long Short-Term Memory Network-Based Radio Resource Management for 5G Network

    Nowadays, the Long-Term Evolution-Advanced system is widely used to provide 5G communication due to its improved network capacity and lower delay during communication. The main issues in the 5G network are insufficient user resources and burst errors, as these cause losses in data transmission. In order to overcome this, an effective Radio Resource Management (RRM) is required to be developed in the 5G network. In this paper, the Long Short-Term Memory (LSTM) network is proposed for radio resource management in the 5G network. The proposed LSTM-RRM is used for assigning adequate power and bandwidth to the desired user equipment of the network. Moreover, Grid Search Optimization (GSO) is used for identifying the optimal hyperparameter values for the LSTM. In radio resource management, a request queue is used to avoid unwanted resource allocation in the network. Moreover, the losses during transmission are minimized by using frequency interleaving and guard level insertion. The performance of the LSTM-RRM method has been analyzed in terms of throughput, outage percentage, dual connectivity, User Sum Rate (USR), Threshold Sum Rate (TSR), Outdoor Sum Rate (OSR), threshold guaranteed rate, indoor guaranteed rate, and outdoor guaranteed rate. The indoor guaranteed rate of LSTM-RRM at 1400 m of building distance improved by up to 75.38% compared to the existing QOC-RRM.
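    Grid Search Optimization as described here amounts to exhaustively scoring every combination of candidate hyperparameter values and keeping the best. The sketch below is a hypothetical illustration: the hyperparameter names, grid values, and the stand-in scoring function are assumptions, standing in for training the LSTM and measuring validation performance.

    ```python
    # Hypothetical grid-search sketch for LSTM hyperparameter selection.
    # Each (hidden_units, learning_rate) pair would normally be evaluated by
    # training the RRM model and measuring a validation metric; here a
    # stand-in score peaks at an assumed optimum of (128, 0.01).
    from itertools import product

    def validation_score(hidden_units, learning_rate):
        # Stand-in for a real training-and-validation run.
        return -abs(hidden_units - 128) / 128 - abs(learning_rate - 0.01)

    grid = {
        "hidden_units": [64, 128, 256],
        "learning_rate": [0.001, 0.01, 0.1],
    }

    best = max(
        product(grid["hidden_units"], grid["learning_rate"]),
        key=lambda hp: validation_score(*hp),
    )
    # `best` now holds the highest-scoring (hidden_units, learning_rate) pair.
    ```

    Exhaustive search is tractable here because the grid is tiny; larger hyperparameter spaces usually call for random or Bayesian search instead.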