
    TACT: A Transfer Actor-Critic Learning Framework for Energy Saving in Cellular Radio Access Networks

    Recent work has validated the possibility of improving energy efficiency in radio access networks (RANs) by dynamically turning some base stations (BSs) on and off. In this paper, we extend the research on BS switching operations, which should match traffic load variations. Instead of depending on dynamic traffic loads, which remain quite challenging to forecast precisely, we first formulate the traffic variations as a Markov decision process. Then, in order to foresightedly minimize the energy consumption of RANs, we design a BS switching scheme based on a reinforcement learning framework. Furthermore, to avoid the underlying curse of dimensionality in reinforcement learning, we propose a transfer actor-critic algorithm (TACT), which utilizes learning expertise transferred from historical periods or neighboring regions, and prove its convergence. Finally, we evaluate the proposed scheme through extensive simulations under various practical configurations and show that the TACT algorithm provides a performance jumpstart and demonstrates that significant energy efficiency improvement is feasible at the expense of a tolerable loss in delay performance.
    Comment: 11 figures, 30 pages, accepted in IEEE Transactions on Wireless Communications 2014. IEEE Trans. Wireless Commun., Feb. 201
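
    As a rough, hypothetical illustration of the jumpstart idea, the sketch below initializes a tabular actor-critic from policy preferences transferred from a neighboring region, so early switching decisions are biased toward the transferred knowledge. The toy traffic-state MDP, the energy/delay reward, and all parameter values are invented for this example and are not the authors' formulation.

        import numpy as np

        rng = np.random.default_rng(0)
        n_states, n_actions = 5, 3          # toy traffic-load levels / numbers of active BSs
        gamma, alpha_v, alpha_p = 0.9, 0.1, 0.05

        # Hypothetical policy preferences transferred from a neighboring region.
        transferred_pref = rng.normal(size=(n_states, n_actions))

        def softmax(x):
            z = np.exp(x - x.max())
            return z / z.sum()

        def toy_env_step(s, a):
            """Invented dynamics: reward trades off energy (fewer BSs) against delay."""
            energy_cost = a + 1                      # more active BSs -> more energy
            delay_cost = max(0, s - a)               # unserved load -> delay
            reward = -(energy_cost + 2.0 * delay_cost)
            s_next = np.clip(s + rng.integers(-1, 2), 0, n_states - 1)
            return s_next, reward

        # Jumpstart: start the actor from the transferred preferences instead of zeros.
        pref = transferred_pref.copy()
        value = np.zeros(n_states)

        s = 0
        for t in range(5000):
            a = rng.choice(n_actions, p=softmax(pref[s]))
            s_next, r = toy_env_step(s, a)
            td_error = r + gamma * value[s_next] - value[s]   # critic update
            value[s] += alpha_v * td_error
            pref[s, a] += alpha_p * td_error                  # actor update
            s = s_next

        print("Greedy number of active BSs per traffic level:", pref.argmax(axis=1))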

    GAN-powered Deep Distributional Reinforcement Learning for Resource Management in Network Slicing

    Network slicing is a key technology in 5G communication systems. Its purpose is to dynamically and efficiently allocate resources to diversified services with distinct requirements over a common underlying physical infrastructure. Therein, demand-aware resource allocation is of significant importance to network slicing. In this paper, we consider a scenario with several slices in a radio access network whose base stations share the same physical resources (e.g., bandwidth or slots). We leverage deep reinforcement learning (DRL) to solve this problem by treating the varying service demands as the environment state and the allocated resources as the environment action. In order to reduce the effects of the randomness and noise embedded in the received service level agreement (SLA) satisfaction ratio (SSR) and spectrum efficiency (SE), we first propose a generative adversarial network-powered deep distributional Q network (GAN-DDQN) to learn the action-value distribution by minimizing the discrepancy between the estimated and the target action-value distributions. We also put forward a reward-clipping mechanism to stabilize GAN-DDQN training against the effects of widely-spanning utility values. Moreover, we develop Dueling GAN-DDQN, which uses a specially designed dueling generator to learn the action-value distribution by estimating the state-value distribution and the action advantage function. Finally, we verify the performance of the proposed GAN-DDQN and Dueling GAN-DDQN algorithms through extensive simulations.
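
    The following is a schematic sketch (not the paper's code) of one GAN-DDQN-style training step as described above: a generator produces particles of the per-action return distribution, a target generator supplies bootstrapped target particles, rewards are clipped, and both networks are trained with an ordinary GAN objective. The network sizes, the BCE-based loss, and the reward_clip bound are placeholder assumptions; the periodic sync of the target generator is omitted.

        import torch
        import torch.nn as nn

        STATE_DIM, N_ACTIONS, N_PARTICLES, NOISE_DIM = 8, 4, 32, 16
        GAMMA = 0.9

        class Generator(nn.Module):
            """Maps (state, noise) to particles approximating the return distribution per action."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(STATE_DIM + NOISE_DIM, 128), nn.ReLU(),
                    nn.Linear(128, N_ACTIONS * N_PARTICLES),
                )
            def forward(self, state, noise):
                out = self.net(torch.cat([state, noise], dim=-1))
                return out.view(-1, N_ACTIONS, N_PARTICLES)

        class Discriminator(nn.Module):
            """Scores a set of particles conditioned on the state-action pair."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(STATE_DIM + N_ACTIONS + N_PARTICLES, 128), nn.ReLU(),
                    nn.Linear(128, 1),
                )
            def forward(self, state, action_onehot, particles):
                return self.net(torch.cat([state, action_onehot, particles], dim=-1))

        gen, gen_target, disc = Generator(), Generator(), Discriminator()
        gen_target.load_state_dict(gen.state_dict())
        opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
        opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
        bce = nn.BCEWithLogitsLoss()

        def train_step(state, action, reward, next_state, reward_clip=1.0):
            """One update on a batch; `action` is a LongTensor of action indices."""
            batch = state.shape[0]
            action_onehot = torch.nn.functional.one_hot(action, N_ACTIONS).float()
            reward = reward.clamp(-reward_clip, reward_clip)        # reward-clipping mechanism

            with torch.no_grad():
                noise = torch.randn(batch, NOISE_DIM)
                next_particles = gen_target(next_state, noise)       # [B, A, P]
                best = next_particles.mean(dim=-1).argmax(dim=-1)    # greedy w.r.t. mean value
                target = reward.unsqueeze(-1) + GAMMA * next_particles[torch.arange(batch), best]

            noise = torch.randn(batch, NOISE_DIM)
            fake = gen(state, noise)[torch.arange(batch), action]    # particles of taken action

            # Discriminator step: "real" = bootstrapped target particles, "fake" = generated ones.
            d_loss = bce(disc(state, action_onehot, target), torch.ones(batch, 1)) + \
                     bce(disc(state, action_onehot, fake.detach()), torch.zeros(batch, 1))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator step: fool the discriminator.
            g_loss = bce(disc(state, action_onehot, fake), torch.ones(batch, 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
            return d_loss.item(), g_loss.item()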

    Traffic Prediction Based on Random Connectivity in Deep Learning with Long Short-Term Memory

    Traffic prediction plays an important role in evaluating the performance of telecommunication networks and attracts intense research interest. A significant number of algorithms and models have been put forward to analyse traffic data and make predictions. In the recent big data era, deep learning has been exploited to mine the profound information hidden in the data. In particular, Long Short-Term Memory (LSTM), a kind of Recurrent Neural Network (RNN), has attracted a lot of attention due to its capability of processing the long-range dependencies embedded in sequential traffic data. However, LSTM has a considerable computational cost, which cannot be tolerated in tasks with stringent latency requirements. In this paper, we propose a deep learning model based on LSTM, called Random Connectivity LSTM (RCLSTM). Compared to the conventional LSTM, RCLSTM changes how the neural network is formed: neurons are connected in a stochastic manner rather than fully connected. With this intrinsic sparsity, the RCLSTM leaves many neural connections absent, which reduces both the number of parameters to be trained and the computational cost. We apply the RCLSTM to traffic prediction and validate that the RCLSTM with as little as 35% neural connectivity still shows satisfactory performance. As we gradually add training samples, the performance of RCLSTM becomes increasingly close to that of the baseline LSTM. Moreover, for input traffic sequences of sufficient length, the RCLSTM exhibits even better prediction accuracy than the baseline LSTM.
    Comment: 6 pages, 9 figure
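
    A minimal sketch of the random-connectivity idea is shown below: the weight matrix of each LSTM gate is multiplied by a fixed random binary mask so that only a chosen fraction of connections (35% here) exists. Dimensions, initialization, and the toy input sequence are illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(42)

        def random_mask(shape, connectivity=0.35):
            """Keep each weight with probability `connectivity`; drop the rest."""
            return (rng.random(shape) < connectivity).astype(float)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        input_dim, hidden_dim = 1, 32
        # One weight matrix and mask per gate (input, forget, output, candidate).
        W = {g: rng.normal(scale=0.1, size=(hidden_dim, input_dim + hidden_dim)) for g in "ifoc"}
        b = {g: np.zeros(hidden_dim) for g in "ifoc"}
        M = {g: random_mask(W[g].shape) for g in "ifoc"}   # fixed sparse connectivity

        def rclstm_step(x_t, h_prev, c_prev):
            """One forward step of an LSTM cell whose weights are masked to ~35% connectivity."""
            z = np.concatenate([x_t, h_prev])
            i = sigmoid((W["i"] * M["i"]) @ z + b["i"])
            f = sigmoid((W["f"] * M["f"]) @ z + b["f"])
            o = sigmoid((W["o"] * M["o"]) @ z + b["o"])
            g = np.tanh((W["c"] * M["c"]) @ z + b["c"])
            c = f * c_prev + i * g
            h = o * np.tanh(c)
            return h, c

        # Run the sparse cell over a toy traffic sequence.
        h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
        for x in rng.random(24):                 # 24 toy traffic samples
            h, c = rclstm_step(np.array([x]), h, c)
        print("kept weights per gate:", {g: int(M[g].sum()) for g in "ifoc"})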

    Deep Learning with Long Short-Term Memory for Time Series Prediction

    Time series prediction can be generalized as a process that extracts useful information from historical records and then determines future values. Learning the long-range dependencies embedded in time series is often an obstacle for most algorithms, whereas Long Short-Term Memory (LSTM) solutions, as a specific kind of scheme in deep learning, promise to effectively overcome this problem. In this article, we first give a brief introduction to the structure and forward propagation mechanism of the LSTM model. Then, aiming to reduce the considerable computing cost of LSTM, we put forward the Random Connectivity LSTM (RCLSTM) model and test it by predicting traffic and user mobility in telecommunication networks. In contrast to LSTM, RCLSTM is formed via stochastic connectivity between neurons, a significant departure in how the neural network architecture is formed. In this way, the RCLSTM model exhibits a certain level of sparsity, which leads to an appealing decrease in computational complexity and makes the RCLSTM model more applicable in latency-stringent application scenarios. In the field of telecommunication networks, the prediction of traffic series and mobility traces could directly benefit from this improvement, as we further demonstrate that the prediction accuracy of RCLSTM is comparable to that of the conventional LSTM no matter how we change the number of training samples or the length of the input sequences.
    Comment: 9 pages, 5 figures, 14 reference
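
    Since the experiments vary the number of training samples and the length of the input sequences, the sketch below shows one common way such a study is set up: sliding windows over a synthetic traffic trace, a train/test split, and a naive persistence baseline against which an (RC)LSTM predictor could be compared. All names and numbers are illustrative assumptions, not the paper's pipeline.

        import numpy as np

        def make_windows(series, seq_len, horizon=1):
            """Turn a 1-D series into (input sequence, next value) training pairs."""
            X, y = [], []
            for i in range(len(series) - seq_len - horizon + 1):
                X.append(series[i:i + seq_len])
                y.append(series[i + seq_len + horizon - 1])
            return np.array(X)[..., None], np.array(y)   # [samples, seq_len, 1], [samples]

        # Toy hourly traffic trace; a real telecom series would be used in practice.
        t = np.arange(24 * 60)
        traffic = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(1).normal(size=t.size)

        for seq_len in (10, 30, 60):                      # vary the input-sequence length
            X, y = make_windows(traffic, seq_len)
            n_train = int(0.8 * len(X))                   # vary the number of training samples here
            X_train, y_train = X[:n_train], y[:n_train]
            X_test, y_test = X[n_train:], y[n_train:]
            # A naive persistence baseline to compare any (RC)LSTM model against.
            mse = np.mean((X_test[:, -1, 0] - y_test) ** 2)
            print(f"seq_len={seq_len}: {len(X_train)} training samples, persistence MSE={mse:.4f}")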

    Electropolymerization of Polysilanes with Functional Groups


    Light-driven kinetic resolution of α-functionalized acids enabled by engineered Fatty Acid Photodecarboxylase

    Multifunctional chiral molecules such as unnatural α-amino acids and α-hydroxy acids are valuable precursors to a variety of medicines and natural products.[1] Biocatalysis provides a greener and more sustainable process than transition metal catalysts with complex chiral ligands. For example, keto reductases (KRED) and imine reductases (IRED) have been successfully used to convert α-keto acids into α-hydroxy/amino acids.[2] Another widely used approach is kinetic resolution (KR) or dynamic kinetic resolution (DKR) employing lipases.[3] Herein, we describe variants of fatty acid photodecarboxylase (CvFAP), an enzyme originally used to convert long-chain fatty acids into hydrocarbons,[4] that catalyze the kinetic resolution of α-amino acids and α-hydroxy acids with high conversion and excellent stereoselectivity for the unreacted (R)-configured substrate (ee up to 99%). This efficient light-driven process requires neither NADPH recycling nor the prerequisite preparation of esters, in contrast with other biocatalytic methods (Scheme 1). To our delight, although most biocatalysts are hardly universal, the best mutant G462Y displayed a satisfactory substrate scope (Figure 1). A structure-guided engineering strategy was introduced via large-size amino acid scanning at hot positions to narrow the substrate-binding tunnel. We believe that this research conforms to the conference topic of enzyme promiscuity, evolution and dynamics.
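
    For reference, the enantiomeric excess quoted above and the selectivity factor commonly used to characterize a kinetic resolution are given by the standard relations (general definitions, not specific to this work):

        \[
          \mathrm{ee}_s = \frac{[R]-[S]}{[R]+[S]}, \qquad
          E = \frac{\ln\bigl[(1-c)\,(1-\mathrm{ee}_s)\bigr]}{\ln\bigl[(1-c)\,(1+\mathrm{ee}_s)\bigr]}
        \]

    where c is the conversion and ee_s is the enantiomeric excess of the remaining (R)-configured substrate.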

    Large-scale Spatial Distribution Identification of Base Stations in Cellular Networks

    The performance of a cellular system significantly depends on its network topology, where the spatial deployment of base stations (BSs) plays a key role in the downlink scenario. Moreover, cellular networks are undergoing a heterogeneous evolution, which introduces unplanned deployment of smaller BSs and thus complicates the performance evaluation even further. In this paper, based on a large amount of real BS location data, we present a comprehensive analysis of the spatial modeling of cellular network structure. Unlike related works, we divide the BSs into different subsets according to geographical factors (e.g., urban or rural) and functional type (e.g., macrocells or microcells), and perform a detailed spatial analysis of each subset. After examining the accuracy of the Poisson point process (PPP) for modeling BS locations, we consider Gibbs point processes as well as Neyman-Scott point processes and compare their accuracy in a large-scale modeling test. Finally, we establish the inaccuracy of the PPP model and reveal the general clustering nature of BS deployment, which distinctly violates the traditional assumption. This paper carries out the first such large-scale identification in the available literature and provides more realistic and more general results that contribute to the performance analysis of forthcoming heterogeneous cellular networks.
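
    As a toy illustration (not the paper's dataset or fitting procedure), the sketch below simulates BS locations from a homogeneous PPP and from a Thomas (Neyman-Scott) cluster process in an invented 10 km x 10 km window and compares their mean nearest-neighbour distances; the clustered pattern should yield the smaller value, mirroring the clustering behaviour reported for real deployments. All intensities are assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        WIDTH = HEIGHT = 10.0                      # km, toy observation window

        def simulate_ppp(intensity):
            """Homogeneous Poisson point process: Poisson count, uniform locations."""
            n = rng.poisson(intensity * WIDTH * HEIGHT)
            return rng.uniform(0, [WIDTH, HEIGHT], size=(n, 2))

        def simulate_thomas(parent_intensity, mean_offspring, sigma):
            """Neyman-Scott (Thomas) process: Gaussian clusters around Poisson parents."""
            parents = simulate_ppp(parent_intensity)
            points = []
            for p in parents:
                k = rng.poisson(mean_offspring)
                points.append(p + rng.normal(scale=sigma, size=(k, 2)))
            pts = np.vstack(points) if points else np.empty((0, 2))
            inside = (pts >= 0).all(axis=1) & (pts[:, 0] <= WIDTH) & (pts[:, 1] <= HEIGHT)
            return pts[inside]

        def mean_nn_distance(pts):
            """Average nearest-neighbour distance; clustering pulls this value down."""
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            return d.min(axis=1).mean()

        ppp_bs = simulate_ppp(intensity=2.0)                       # ~2 BSs per km^2
        thomas_bs = simulate_thomas(parent_intensity=0.4, mean_offspring=5, sigma=0.3)
        print(f"PPP:    {len(ppp_bs):4d} BSs, mean NN distance {mean_nn_distance(ppp_bs):.3f} km")
        print(f"Thomas: {len(thomas_bs):4d} BSs, mean NN distance {mean_nn_distance(thomas_bs):.3f} km")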

    Deep Reinforcement Learning for Resource Management in Network Slicing

    Network slicing has emerged as a new business opportunity for operators, allowing them to sell customized slices to various tenants at different prices. In order to provide better-performing and cost-efficient services, network slicing involves challenging technical issues and urgently calls for intelligent innovations that make resource management consistent with users' activities per slice. In that regard, deep reinforcement learning (DRL), which focuses on how to interact with the environment by trying alternative actions and reinforcing the tendency toward actions that produce more rewarding consequences, is considered a promising solution. In this paper, after briefly reviewing the fundamental concepts of DRL, we investigate its application to some typical resource management scenarios for network slicing, including radio resource slicing and priority-based core network slicing, and demonstrate the advantage of DRL over several competing schemes through extensive simulations. Finally, we discuss possible challenges of applying DRL to network slicing from a general perspective.
    Comment: The manuscript has been accepted by IEEE Access in Nov. 201
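
    As a hedged stand-in for the DRL agent described above, the sketch below runs tabular Q-learning on an invented radio-resource-slicing toy: the state is the discretised per-slice demand, an action is a bandwidth split across slices, and the reward penalizes unmet demand as a crude SLA proxy. The environment, reward, and discretisation are assumptions for illustration only; the paper uses deep networks and realistic service models.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy radio-resource-slicing problem: split a fixed bandwidth among 3 slices.
        N_SLICES, TOTAL_BW = 3, 10                       # bandwidth in discrete units
        actions = [a for a in np.ndindex(*(TOTAL_BW + 1,) * N_SLICES) if sum(a) == TOTAL_BW]
        demand_levels = 4                                # per-slice demand discretised to 0..3

        def env_step(demand, allocation):
            """Invented reward: served demand minus a penalty for unmet demand (SLA proxy)."""
            served = np.minimum(demand, allocation)
            reward = served.sum() - 2.0 * (demand - served).sum()
            next_demand = rng.integers(0, demand_levels, size=N_SLICES)
            return next_demand, reward

        def state_index(demand):
            return int(np.ravel_multi_index(demand, (demand_levels,) * N_SLICES))

        Q = np.zeros((demand_levels ** N_SLICES, len(actions)))
        gamma, alpha, eps = 0.9, 0.1, 0.1

        demand = rng.integers(0, demand_levels, size=N_SLICES)
        for t in range(20000):
            s = state_index(demand)
            a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
            next_demand, r = env_step(demand, np.array(actions[a]))
            s_next = state_index(next_demand)
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])   # Q-learning update
            demand = next_demand

        d = np.array([3, 1, 2])
        print("Demand", d, "-> allocation", actions[int(Q[state_index(d)].argmax())])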