
    Quantum Gauss Jordan Elimination

    Full text link
    In this paper we construct the Quantum Gauss-Jordan Elimination (QGJE) algorithm and estimate the time complexity of computing the Reduced Row Echelon Form (RREF) of an $N \times N$ matrix using the QGJE procedure. The main theorem asserts that QGJE has computation time of order $2^{N/2}$.
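
    For orientation, below is a minimal classical Gauss-Jordan reduction to RREF, the O(N^3) procedure whose quantum counterpart the paper analyzes. The function name `rref` and the partial-pivoting details are illustrative choices, not taken from the paper.

```python
import numpy as np

def rref(a, tol=1e-12):
    """Reduce a copy of matrix `a` to reduced row echelon form (classical O(N^3))."""
    m = a.astype(float).copy()
    rows, cols = m.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Partial pivoting: pick the largest entry in this column for stability.
        pivot = pivot_row + np.argmax(np.abs(m[pivot_row:, col]))
        if abs(m[pivot, col]) < tol:
            continue                       # no usable pivot in this column
        m[[pivot_row, pivot]] = m[[pivot, pivot_row]]
        m[pivot_row] /= m[pivot_row, col]  # scale so the pivot entry is 1
        # Eliminate this column from every other row.
        for r in range(rows):
            if r != pivot_row:
                m[r] -= m[r, col] * m[pivot_row]
        pivot_row += 1
    return m

print(rref(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])))
```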

    Real-Time Network Slicing with Uncertain Demand: A Deep Learning Approach

    Full text link
    © 2019 IEEE. Practical and efficient network slicing often faces real-time dynamics of network resources and uncertain customer demands. This work provides an optimal and fast resource slicing solution under such dynamics by leveraging the latest advances in deep learning. Specifically, we first introduce a novel system model which allows the network provider to effectively allocate its combinatorial resources, i.e., spectrum, computing, and storage, to various classes of users. To allocate resources to users while taking into account the dynamic demands of users and the resource constraints of the network provider, we employ a semi-Markov decision process framework. To obtain the optimal resource allocation policy for the network provider without requiring environment parameters, e.g., uncertain service time and resource demands, a Q-learning algorithm is adopted. Although this algorithm can maximize the revenue of the network provider, its convergence to the optimal policy is particularly slow, especially for problems with large state/action spaces. To overcome this challenge, we propose a novel approach using an advanced deep Q-learning technique, called deep dueling, which can reach the optimal policy a few thousand times faster than the conventional Q-learning algorithm. Simulation results show that our proposed framework can improve the long-term average return of the network provider by up to 40% compared with other current approaches.
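
    As a rough sketch of the dueling architecture the abstract invokes: a dueling Q-network splits into a state-value stream V(s) and an advantage stream A(s, a) that are recombined into Q-values. The layer sizes and names below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling DQN head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s,a)

    def forward(self, state):
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        # Subtract the mean advantage so V and A are identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

q = DuelingQNet(state_dim=8, n_actions=4)
print(q(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```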

    Offloading Energy Efficiency with Delay Constraint for Cooperative Mobile Edge Computing Networks

    Full text link
    © 2018 IEEE. We propose a novel edge computing network architecture that enables edge nodes to cooperate in sharing computing and radio resources, so as to minimize the total energy consumption of mobile users while meeting their delay requirements. To find the optimal task offloading decisions for mobile users, we first formulate the joint task offloading and resource allocation optimization problem as a mixed-integer non-linear program (MINLP). The optimization involves both binary (offloading decisions) and real variables (resource allocations), making it an NP-hard and computationally intractable problem. To circumvent this, we relax the binary decision variables to transform the MINLP into a relaxed optimization problem with real variables. After proving that the relaxed problem is convex, we propose two solutions, namely ROP and IBBA: ROP is adapted from the interior-point method, and IBBA is developed from the branch-and-bound algorithm. Through numerical results, we show that our proposed approaches minimize the total energy consumption while meeting all delay requirements of mobile users.
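
    A schematic of an IBBA-style branch-and-bound loop of the kind described above, with the convex relaxation abstracted behind a `solve_relaxation` callback (a placeholder interface, not the paper's solver): branch on a fractional offloading variable and prune nodes whose relaxed bound cannot beat the incumbent.

```python
import math

def branch_and_bound(solve_relaxation, n_vars, eps=1e-6):
    """Minimize over binary offloading variables x in {0,1}^n.

    `solve_relaxation(fixed)` takes a dict {index: 0 or 1} and returns
    (objective, x) for the convex relaxation with those variables fixed,
    or (math.inf, None) if infeasible.  Placeholder, not the paper's API.
    """
    best_obj, best_x = math.inf, None
    stack = [{}]                          # nodes = partial assignments
    while stack:
        fixed = stack.pop()
        obj, x = solve_relaxation(fixed)
        if obj >= best_obj:               # prune: bound can't beat incumbent
            continue
        frac = [i for i in range(n_vars)
                if i not in fixed and min(x[i], 1 - x[i]) > eps]
        if not frac:                      # relaxation is integral: new incumbent
            best_obj, best_x = obj, x
            continue
        i = max(frac, key=lambda j: min(x[j], 1 - x[j]))  # most fractional
        stack.append({**fixed, i: 0})     # branch x_i = 0
        stack.append({**fixed, i: 1})     # branch x_i = 1
    return best_obj, best_x
```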

    Reinforcement Learning Approach for RF-Powered Cognitive Radio Network with Ambient Backscatter

    Full text link
    © 2018 IEEE. In an RF-powered cognitive radio network with ambient backscattering capability, while the primary channel is busy, the RF-powered secondary user (RSU) can either backscatter the primary signal to transmit its own data or harvest energy from the primary signal (and store it in its battery). The harvested energy can then be used to transmit data when the primary channel becomes idle. To maximize the throughput of the secondary system, it is critical for the RSU to decide when to backscatter and when to harvest energy. This optimal decision has to account for the dynamics of the primary channel, the energy storage capability, and the data to be sent. To tackle this problem, we propose a Markov decision process (MDP)-based framework that optimizes the RSU's decisions based on its current state, e.g., energy, data, and the primary channel state. As the state information may not be readily available at the RSU, we then design a low-complexity online reinforcement learning algorithm that guides the RSU to the optimal solution without requiring prior and complete information about the environment. Extensive simulation results show that the proposed solution achieves throughputs up to 50% higher than those of conventional methods.
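
    A minimal tabular Q-learning update of the kind such an online algorithm builds on; the state encoding (energy level, data queue, channel state) and the two-action set are illustrative placeholders, not the paper's exact model.

```python
import random
from collections import defaultdict

# State: (energy_level, data_queue, channel_busy).
# Actions: 0 = harvest energy, 1 = backscatter/transmit.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.95, 0.1
ACTIONS = (0, 1)

def choose_action(state):
    # Epsilon-greedy exploration over the two RSU actions.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[state, a])

def q_update(state, action, reward, next_state):
    # Standard Q-learning temporal-difference update.
    target = reward + gamma * max(Q[next_state, a] for a in ACTIONS)
    Q[state, action] += alpha * (target - Q[state, action])
```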

    Learning Latent Distribution for Distinguishing Network Traffic in Intrusion Detection System

    Full text link
    © 2019 IEEE. We develop a novel deep learning model, the Multi-distributed Variational AutoEncoder (MVAE), for network intrusion detection. To make the traffic more distinguishable, MVAE introduces the label information of data samples into the Kullback-Leibler (KL) term of the loss function of the Variational AutoEncoder (VAE). This label information allows MVAE to partition network data samples of different classes into different regions of the latent feature space. As a result, the network traffic samples are more distinguishable in the new representation space (i.e., the latent feature space of MVAE), thereby improving the accuracy of intrusion detection. To evaluate the efficiency of the proposed solution, we carry out intensive experiments on two popular network intrusion datasets, i.e., NSL-KDD and UNSW-NB15, under four conventional classifiers: Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF). The experimental results demonstrate that our proposed approach can improve the accuracy of intrusion detection algorithms by up to 24.6% compared to the original representation (using the area-under-the-curve metric).
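
    One way to read "introduces label information into the KL term": give each traffic class its own Gaussian prior mean in latent space, so the KL regularizer pulls samples of different classes toward different regions. A hedged PyTorch sketch under that assumption, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def mvae_loss(x_recon, x, mu, logvar, labels, class_means):
    """VAE loss with a class-conditional prior N(m_c, I).

    KL(N(mu, diag(sigma^2)) || N(m_c, I))
      = 0.5 * sum(sigma^2 + (mu - m_c)^2 - 1 - log sigma^2)

    `class_means` is a (num_classes, latent_dim) tensor of prior means;
    how the classes are actually placed is an assumption here.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")
    m_c = class_means[labels]                    # per-sample prior mean
    kl = 0.5 * torch.sum(logvar.exp() + (mu - m_c) ** 2 - 1.0 - logvar)
    return recon + kl
```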

    A dynamic edge caching framework for mobile 5G networks

    Full text link
    © 2002-2012 IEEE. Mobile edge caching has emerged as a new paradigm that provides computing, networking, and storage resources for a variety of mobile applications. It helps achieve low latency and high reliability, and improves efficiency in handling the very large number of smart devices and emerging services (e.g., IoT, industry automation, virtual reality) in mobile 5G networks. Nonetheless, the development of mobile edge caching is challenged by the decentralized nature of edge nodes, their small coverage, and their limited computing and storage resources. In this article, we first give an overview of mobile edge caching in 5G networks. After that, its key challenges and current approaches are discussed. We then propose a novel caching framework that allows an edge node to authorize legitimate users and to dynamically predict and update their content demands using the matrix factorization technique. Based on the prediction, the edge node can adopt advanced optimization methods to determine the optimal content to store so as to maximize its revenue and minimize the average delay of its mobile users. Through numerical results, we demonstrate that our proposed framework provides not only an effective caching approach but also an efficient economic solution for the mobile service provider.
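
    A compact sketch of the matrix factorization step used for demand prediction: learn user and content factor matrices whose product approximates the observed demand matrix, via SGD on the observed entries only. Hyperparameters and names are illustrative, not the article's settings.

```python
import numpy as np

def matrix_factorization(R, mask, k=10, lr=0.01, reg=0.1, epochs=200, seed=0):
    """Factor demand matrix R ~= U @ V.T using only observed entries (mask == 1)."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - U[u] @ V[i]
            # Gradient step on the squared error with L2 regularization.
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V  # predicted demand matrix: U @ V.T
```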

    Distributed Deep Learning at the Edge: A Novel Proactive and Cooperative Caching Framework for Mobile Edge Networks

    Full text link
    © 2012 IEEE. We propose two novel proactive cooperative caching approaches that use deep learning (DL) to predict users' content demand in a mobile edge caching network. In the first approach, a content server (CS) is responsible for collecting information from all mobile edge nodes (MENs) in the network and then runs the proposed DL algorithm to predict the content demand for the whole network. However, such a centralized approach may disclose private information because MENs have to share their local users' data with the CS. Thus, in the second approach, we propose a novel distributed deep learning (DDL)-based framework. The DDL framework allows MENs in the network to collaborate and exchange information to reduce the error of content demand prediction without revealing the private information of mobile users. Through simulation results, we show that our proposed approaches can enhance prediction accuracy by reducing the root mean squared error (RMSE) by up to 33.7% and reduce the service delay by 47.4% compared with other machine learning algorithms.
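
    One plausible reading of the privacy-preserving exchange in the second approach is peer-to-peer parameter averaging: each MEN trains on its local data and shares only model weights, never raw user data. A minimal numpy sketch under that reading; the actual DDL protocol is specified in the paper.

```python
import numpy as np

def average_models(peer_weights):
    """Average model parameters received from peer MENs, layer by layer.

    `peer_weights` is a list of models, each a list of numpy arrays;
    only these weight tensors cross the network, never the users' data.
    """
    return [np.mean(layers, axis=0) for layers in zip(*peer_weights)]

def local_round(model_weights, train_fn, received_peer_weights):
    # 1) train on local data only (train_fn is a placeholder);
    # 2) average the result with the weights received from peers.
    updated = train_fn(model_weights)
    return average_models([updated] + received_peer_weights)
```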

    Proof-of-Stake Consensus Mechanisms for Future Blockchain Networks: Fundamentals, Applications and Opportunities

    Full text link
    © 2013 IEEE. The rapid development of blockchain technology and its numerous emerging applications has received huge attention in recent years. The distributed consensus mechanism is the backbone of a blockchain network and plays a key role in ensuring the network's security, integrity, and performance. Most current blockchain networks deploy proof-of-work consensus mechanisms, in which consensus is reached through intensive mining processes. However, this mechanism has several limitations, e.g., energy inefficiency, delay, and vulnerability to security threats. To overcome these problems, a new consensus mechanism has been developed recently, namely proof of stake, which achieves consensus by proving stake ownership. This mechanism is expected to become a cutting-edge technology for future blockchain networks. This paper is dedicated to investigating proof-of-stake mechanisms, from fundamental knowledge to advanced proof-of-stake-based protocols, along with performance analysis, e.g., energy consumption, delay, and security, as well as their promising applications, particularly in the field of Internet of Vehicles. The formation of stake pools and its effects on the network's stake distribution are also analyzed and simulated. The results show that the ratio between the block reward and the total network stake has a significant impact on the decentralization of the network. Technical challenges and potential solutions are also discussed.
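
    The core selection rule of proof of stake can be illustrated in a few lines: validators are chosen with probability proportional to their stake, so the block-reward-to-total-stake ratio studied above directly shapes how selection weight compounds over time. A deliberately simplified sketch (real protocols add randomness beacons, slashing, epochs, etc.):

```python
import random

def select_validator(stakes, rng=random):
    """Pick the next block proposer with probability proportional to stake."""
    total = sum(stakes.values())
    r = rng.uniform(0, total)
    cum = 0.0
    for validator, stake in stakes.items():
        cum += stake
        if r <= cum:
            return validator
    return validator  # fallback for floating-point edge cases

stakes = {"A": 50.0, "B": 30.0, "C": 20.0}   # hypothetical stake holdings
reward = 5.0                                 # block reward per round
winner = select_validator(stakes)
stakes[winner] += reward  # the reward compounds the winner's future odds
```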

    Energy Demand Prediction with Federated Learning for Electric Vehicle Networks

    Full text link
    In this paper, we propose novel approaches using state-of-the-art machine learning techniques to predict energy demand for electric vehicle (EV) networks. These methods can learn and find the correlations of complex hidden features to improve prediction accuracy. First, we propose an energy demand learning (EDL)-based prediction solution in which a charging station provider (CSP) gathers information from all charging stations (CSs) and then performs the EDL algorithm to predict the energy demand for the considered area. However, this approach requires frequent data sharing between the CSs and the CSP, thereby incurring communication overhead and raising privacy concerns for the EVs and CSs. To address this problem, we propose a federated energy demand learning (FEDL) approach which allows the CSs to share information without revealing their real datasets. Specifically, the CSs only need to send their trained models to the CSP for processing. In this way, we can significantly reduce the communication overhead and effectively protect the data privacy of EV users. To further improve the effectiveness of FEDL, we then introduce a novel clustering-based EDL approach for EV networks that groups the CSs into clusters before applying the EDL algorithms. Through experimental results, we show that our proposed approaches can improve the accuracy of energy demand prediction by up to 24.63% and decrease communication overhead by 83.4% compared with other baseline machine learning algorithms.
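
    The FEDL step where CSs send only trained models to the CSP matches the federated-averaging pattern. A minimal sketch of the CSP-side aggregation, weighting each model by local dataset size; the weighting choice is an assumption, the paper may aggregate differently.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """CSP-side aggregation: average CS models, weighted by local data size.

    `client_weights`: list of models, each a list of numpy arrays.
    `client_sizes`: number of local training samples at each charging station.
    """
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    # Combine corresponding layers across all charging stations.
    return [sum(c * layer for c, layer in zip(coeffs, layers))
            for layers in zip(*client_weights)]
```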

    Time Series Analysis for Encrypted Traffic Classification: A Deep Learning Approach

    Full text link
    © 2018 IEEE. We develop a novel time series feature extraction technique to address the encrypted traffic/application classification problem. The proposed method consists of two main steps. First, we propose a feature engineering technique to extract significant attributes of encrypted network traffic behavior by analyzing the time series of received packets. In the second step, we develop a deep learning-based technique that exploits the correlation of time series data samples of the encrypted network applications. To evaluate the efficiency of the proposed solution on the encrypted traffic classification problem, we carry out intensive experiments on a raw network traffic dataset, namely VPN-nonVPN, with three conventional classification metrics: precision, recall, and F1 score. The experimental results demonstrate that our proposed approach can significantly improve the performance of identifying encrypted application traffic in terms of accuracy and computational efficiency.
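
    The first step, turning a stream of packet timestamps into fixed-length time-series features, can be sketched as computing inter-arrival statistics per flow; the particular statistics chosen here are illustrative, not the paper's feature set.

```python
import numpy as np

def flow_features(arrival_times, sizes, n_bins=8):
    """Fixed-length feature vector from one flow's packet timestamps and sizes."""
    t = np.asarray(arrival_times, dtype=float)
    iat = np.diff(t) if len(t) > 1 else np.zeros(1)  # inter-arrival times
    hist, _ = np.histogram(iat, bins=n_bins)
    return np.concatenate([
        [iat.mean(), iat.std(), np.median(iat)],     # timing statistics
        [np.mean(sizes), np.std(sizes)],             # packet-size statistics
        hist / max(len(iat), 1),                     # normalized IAT histogram
    ])
```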