6 research outputs found

    Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach

    Full text link
    In this work, we consider the problem of network parameter optimization for rate maximization. We frame this as a joint optimization problem of power control, beamforming, and interference cancellation. We consider the setting where multiple Base Stations (BSs) communicate with multiple user equipment (UEs). Because of the exponential computational complexity of a brute-force search, we instead solve this nonconvex optimization problem using deep reinforcement learning (RL) techniques. Modern communication systems are notoriously difficult to model exactly, which limits the use of RL-based algorithms, since the agent must interact with the environment to explore and learn efficiently. Further, it is ill-advised to deploy the algorithm in the real world for exploration and learning because of the high cost of failure. In contrast to previously proposed RL-based solutions, such as deep Q-network (DQN) based control, we suggest an offline model-based approach. We specifically consider discrete batch-constrained deep Q-learning (BCQ) and show that performance similar to DQN can be achieved with only a fraction of the data and without exploration. This maximizes sample efficiency and minimizes the risk of deploying a new algorithm in commercial networks. We provide the entire project resource, including code and data, at the following link: https://github.com/Heasung-Kim/safe-rl-deployment-for-5g. Comment: 10 pages, 8 figures
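    The batch-constrained action selection the abstract refers to can be illustrated with a minimal sketch. The layer sizes, the threshold tau, and the state/action dimensions below are assumptions for illustration only and are not taken from the linked repository.

```python
# Minimal sketch of a discrete BCQ action-selection rule (illustrative, not
# the authors' released implementation).
import torch
import torch.nn as nn

class DiscreteBCQPolicy(nn.Module):
    def __init__(self, state_dim, num_actions, tau=0.3):
        super().__init__()
        # Q-network trained from logged (offline) transitions only.
        self.q_net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, num_actions))
        # Behavior-cloning head estimating the logged policy's action probabilities.
        self.bc_net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, num_actions))
        self.tau = tau

    def act(self, state):
        q_values = self.q_net(state)
        bc_probs = torch.softmax(self.bc_net(state), dim=-1)
        # Only actions sufficiently likely under the logged behavior policy are
        # eligible; this batch constraint is what removes the need to explore.
        eligible = (bc_probs / bc_probs.max(dim=-1, keepdim=True).values) > self.tau
        constrained_q = torch.where(eligible, q_values, torch.full_like(q_values, -1e8))
        return constrained_q.argmax(dim=-1)

# Example: pick a discrete power/beamforming action index for a batch of states.
policy = DiscreteBCQPolicy(state_dim=16, num_actions=8)
states = torch.randn(4, 16)
print(policy.act(states))
```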

    A Deep Learning Approach for Automotive Radar Interference Mitigation

    Full text link
    In automotive systems, radar is a key component of autonomous driving. Using the signal transmitted to and reflected from a target, we can estimate the target's range and velocity. However, when interference signals exist, the noise floor rises and severely degrades the detectability of target objects. For these reasons, previous studies have proposed methods to cancel the interference or reconstruct the original signals. However, conventional signal processing methods for canceling the interference or reconstructing the transmit signals are difficult to apply and have many restrictions. In this work, we propose a novel approach to mitigating interference using deep learning. The proposed method provides high performance under various interference conditions and has low processing time. Moreover, we show that our proposed method achieves better performance than existing signal processing methods. Comment: Accepted at a 2018 VTC workshop
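    The general idea of learning-based interference mitigation can be sketched as a network that maps an interference-corrupted radar snapshot to a cleaned signal. The 1-D convolutional architecture, channel layout, and signal length below are assumptions for illustration and are not the paper's exact model.

```python
# Illustrative sketch: denoise an interference-corrupted radar beat signal
# (real and imaginary parts as two channels) with a small 1-D CNN.
import torch
import torch.nn as nn

class InterferenceMitigator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 2, kernel_size=9, padding=4),  # back to I/Q channels
        )

    def forward(self, corrupted):
        # Predict the clean signal directly from the corrupted snapshot.
        return self.net(corrupted)

# Training would minimize, e.g., the MSE between the network output and an
# interference-free reference signal generated in simulation.
model = InterferenceMitigator()
corrupted = torch.randn(8, 2, 1024)   # batch of 8 snapshots, 1024 samples each
clean_estimate = model(corrupted)
print(clean_estimate.shape)
```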

    Action-Bounding for Reinforcement Learning in Energy Harvesting Communication Systems

    No full text
    In this paper, we consider a power allocation problem for energy harvesting communication systems, where a transmitter wants to send the desired messages to the receiver using the harvested energy in its rechargeable battery. We propose a new power allocation strategy based on a deep reinforcement learning technique to maximize the expected total transmitted data for a given random energy arrival and random channel process. The key idea of our scheme is to steer the transmitter away from learning undesirable power allocation policies through an action-bounding technique that uses only causal knowledge of the energy and channel processes. This technique helps traditional reinforcement learning algorithms work more accurately in these systems and improves their performance. Moreover, we show that the proposed scheme achieves better performance, in terms of the expected total transmitted data, than existing power allocation strategies.
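    One way to realize the action-bounding idea is to squash the policy's raw output into the feasible transmit-energy range implied by the current battery level, so the learner never evaluates infeasible allocations. The function below is a minimal sketch; the variable names and interface are assumptions, not the paper's exact formulation.

```python
# Illustrative action-bounding helper for an energy-harvesting transmitter.
import numpy as np

def bound_action(raw_action, battery_level, max_transmit_energy):
    """Map an unconstrained policy output to a feasible transmit energy."""
    # Squash the raw action into (0, 1), then scale by what is actually
    # available: the smaller of the battery content and the hardware limit.
    fraction = 1.0 / (1.0 + np.exp(-raw_action))   # sigmoid in (0, 1)
    return fraction * min(battery_level, max_transmit_energy)

# Example: the policy requests a large action, but only 0.4 units of harvested
# energy remain, so the bounded action stays feasible.
print(bound_action(raw_action=3.2, battery_level=0.4, max_transmit_energy=1.0))
```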

    Rate Maximization with Reinforcement Learning for Time-Varying Energy Harvesting Broadcast Channels

    No full text
    In this paper, we consider a power allocation optimization technique for a time-varying fading broadcast channel in energy harvesting communication systems, in which a transmitter with a rechargeable battery transmits messages to receivers using the harvested energy. We first prove that the optimal online power allocation policy for the sum-rate maximization of the transmitter is an increasing function of the harvested energy, the remaining battery, and each user's channel gain. We then construct an appropriate neural network by relying on the increasing behavior of the optimal policy. This two-step approach, by providing an effective function approximation as well as a fundamental guideline for neural network design, prevents us from wasting the representational capacity of neural networks. On the basis of this neural network, we apply the policy gradient method to solve the power allocation problem. To validate the performance of our approach, we compare it with the closed-form optimal policy in a partially observable Markov problem. Through further experiments, we observe that our online solution achieves performance close to the theoretical upper bound in a time-varying fading broadcast channel.
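    One common way to build the increasing behavior mentioned above directly into a network is to constrain every weight to be non-negative, so the output power is monotone in each input. The sketch below uses a softplus reparameterization for this; the layer sizes and the exact monotonicity mechanism are assumptions for illustration and may differ from the paper's design.

```python
# Illustrative sketch of a monotone (increasing) power-allocation network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Non-negative weights preserve monotonicity in every input feature.
        return F.linear(x, F.softplus(self.raw_weight), self.bias)

class MonotonePowerPolicy(nn.Module):
    def __init__(self, in_features=3):
        super().__init__()
        self.l1 = MonotoneLinear(in_features, 16)
        self.l2 = MonotoneLinear(16, 1)

    def forward(self, features):
        # ReLU is increasing, so the composition remains increasing.
        return F.relu(self.l2(F.relu(self.l1(features))))

policy = MonotonePowerPolicy()
features = torch.tensor([[0.5, 0.8, 1.2]])  # [harvested energy, battery, channel gain]
print(policy(features))
```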

    An Efficient Neural Network Architecture for Rate Maximization in Energy Harvesting Downlink Channels

    No full text
    This paper deals with the power allocation problem for achieving the upper bound of the sum-rate region in energy harvesting downlink channels. We prove that the optimal power allocation policy that maximizes the sum rate is an increasing function of the harvested energy, the channel gains, and the remaining battery, regardless of the number of users in the downlink channels. We use this proof as a mathematical basis for constructing a shallow neural network that fully reflects the increasing property of the optimal policy. This scheme helps us avoid large neural networks, which require huge computational resources and are prone to overfitting. Through experiments, we reveal the inefficiencies and risks of deep neural networks that are not sufficiently tailored to the desired policy, and show that our approach learns a robust policy even under severe environmental randomness.

    miR-200b/200a/429 Cluster Stimulates Ovarian Cancer Development by Targeting ING5

    No full text
    Ovarian cancer is the second most common gynaecological malignancy, and microRNAs (miRNAs) play an important role in cancer development. Here, we found that the level of miR-200b/200a/429 was significantly increased in the serum and tumor tissues of patients with stage-I ovarian cancer. Consistent with these results, we detected increased expression levels of miR-200b/200a/429 in ovarian cancer cell lines compared with the human nontumorigenic ovarian epithelial cell line T80. The overexpression of miR-200b/200a/429 in T80 cells stimulated proliferation and caused their growth in soft agar and tumor formation in nude mice. Furthermore, we determined that miR-200b/200a/429 targets inhibitor of growth family 5 (ING5) and that the overexpression of ING5 can block miR-200b/200a/429-induced T80 cell transformation and tumorigenesis. Our findings suggest that miR-200b/200a/429 may be a useful biomarker for the early detection of ovarian cancer and that it significantly contributes to ovarian cancer development by targeting ING5.