
    Energy sustainability of next generation cellular networks through learning techniques

    The trend for the next generation of cellular networks, the Fifth Generation (5G), predicts a 1000x increase in capacity demand with respect to 4G, which leads to new infrastructure deployments. In this respect, it is estimated that the energy consumption of ICT might reach 51% of global electricity production by 2030, mainly due to mobile networks and services. Consequently, the cost of energy may also become predominant in the operating expenses of a Mobile Network Operator (MNO). Efficient control of the energy consumption of 5G networks is therefore not only desirable but essential; indeed, energy sustainability is one of the pillars in the design of next generation cellular networks. In the last decade, the research community has paid close attention to the Energy Efficiency (EE) of radio communication networks, with particular attention to the dynamic ON/OFF switching of Base Stations (BSs). Moreover, 5G architectures will introduce the Heterogeneous Network (HetNet) paradigm, where Small BSs (SBSs) are deployed to assist the standard macro BS in satisfying the high traffic demand while reducing the impact on energy consumption. However, only with the introduction of Energy Harvesting (EH) capabilities can networks reach the energy savings needed to mitigate both the high costs and the environmental impact. In HetNets with EH capabilities, the erratic and intermittent nature of renewable energy sources has to be considered, which entails additional complexity. Solar energy has been chosen as the reference EH source due to its widespread adoption and its high efficiency in terms of energy produced relative to cost. To this end, in the first part of the thesis, a harvested solar energy model is presented, based on accurate stochastic Markov processes describing the energy scavenged by outdoor solar sources.
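    A Markov model of the harvested energy lends itself to a compact simulation. Below is a minimal sketch, assuming a hypothetical two-state (sunny/cloudy) chain with invented transition probabilities and per-slot energy incomes; it illustrates the modeling idea only and does not reproduce the thesis's fitted parameters.

```python
import random

# Hypothetical two-state Markov chain for per-slot solar energy income.
# States, transition probabilities and energy values are illustrative.
TRANSITIONS = {"sunny": {"sunny": 0.8, "cloudy": 0.2},
               "cloudy": {"sunny": 0.4, "cloudy": 0.6}}
ENERGY_PER_STATE = {"sunny": 50.0, "cloudy": 10.0}  # energy units per slot

def simulate_harvest(slots, start="sunny", seed=0):
    """Return the sequence of harvested energy over `slots` time slots."""
    rng = random.Random(seed)
    state, trace = start, []
    for _ in range(slots):
        trace.append(ENERGY_PER_STATE[state])
        # Sample the next weather state from the current row of the chain.
        state = "sunny" if rng.random() < TRANSITIONS[state]["sunny"] else "cloudy"
    return trace

trace = simulate_harvest(24)  # one day of hourly slots
```

    Richer models in the literature use more states (e.g. per-hour irradiance levels) fitted to measured solar traces; the two-state chain keeps the sketch readable.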
    The typical HetNet scenario involves dense deployments with a high level of flexibility, which suggests the use of distributed control systems rather than centralized ones, where scalability can quickly become a bottleneck. For this reason, in the second part of the thesis, we propose to model the SBS tier as a Multi-agent Reinforcement Learning (MRL) system, where each SBS is an intelligent and autonomous agent that learns by directly interacting with the environment and by properly exploiting past experience. The agents implemented in each SBS independently learn a suitable ON/OFF switching control policy, so as to jointly maximize the system performance in terms of throughput, drop rate and energy consumption, while adapting to the dynamic conditions of the environment in terms of energy inflow and traffic demand. However, MRL may suffer from coordination problems when all the agents must simultaneously find a solution that is good for the whole system. Consequently, the Layered Learning paradigm has been adopted to simplify the problem by decomposing it into subtasks. In particular, the global solution is obtained in a hierarchical fashion: the learning process of a subtask is aimed at facilitating the learning of the next higher subtask layer. The first layer implements an MRL approach and is in charge of the local online optimization at the SBS level as a function of the traffic demand and the energy income. The second layer is in charge of the network-wide optimization and is based on Artificial Neural Networks aimed at estimating a model of the overall network.
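    The per-SBS learning loop behind such ON/OFF control can be sketched with tabular Q-learning. Everything below, including the two-action ON/OFF set, the toy high/low traffic states, and the reward shaping, is an illustrative assumption rather than the thesis's exact formulation.

```python
import random

ACTIONS = ("on", "off")

class SBSAgent:
    """One SBS as an independent tabular Q-learning agent (illustrative)."""
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        self.q = {}                                   # Q[(state, action)] -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)

    def act(self, state):
        if self.rng.random() < self.eps:              # epsilon-greedy exploration
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, s, a, reward, s_next):
        best_next = max(self.q.get((s_next, b), 0.0) for b in ACTIONS)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)

# Toy reward: staying ON under high traffic and OFF under low traffic is good.
def reward(traffic, action):
    return 1.0 if (traffic == "high") == (action == "on") else -1.0

agent = SBSAgent()
for step in range(2000):                              # alternating traffic pattern
    s = "high" if step % 2 == 0 else "low"
    a = agent.act(s)
    agent.learn(s, a, reward(s, a), "low" if s == "high" else "high")
```

    In the multi-agent setting of the thesis, many such agents learn concurrently, which is precisely where the coordination problem addressed by Layered Learning arises.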

    Distributed deep reinforcement learning for functional split control in energy harvesting virtualized small cells

    To meet the growing quest for enhanced network capacity, mobile network operators (MNOs) are deploying dense infrastructures of small cells. This, in turn, increases the power consumption of mobile networks, thus impacting the environment. As a result, we have seen a recent trend of powering mobile networks with harvested ambient energy to achieve both environmental and cost benefits. In this paper, we consider a network of virtualized small cells (vSCs) powered by energy harvesters and equipped with rechargeable batteries, which can opportunistically offload baseband (BB) functions to a grid-connected edge server depending on their energy availability. We formulate the corresponding grid energy and traffic drop rate minimization problem, and propose a distributed deep reinforcement learning (DDRL) solution. Coordination among vSCs is enabled via the exchange of battery state information. The evaluation of the network performance in terms of grid energy consumption and traffic drop rate confirms that enabling coordination among the vSCs via knowledge exchange achieves performance close to the optimum. Numerical results also confirm that the proposed DDRL solution provides higher network performance, better adaptation to the changing environment, and higher cost savings with respect to a tabular multi-agent reinforcement learning (MRL) solution used as a benchmark.
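    The battery-state exchange that enables coordination can be illustrated as follows: each vSC discretizes battery charges into levels and appends its neighbours' levels to its own observation before feeding its RL policy. The function names, bin count and capacity below are hypothetical, not taken from the paper.

```python
# Sketch of building a coordination-aware RL observation (illustrative names).
def battery_level(charge, capacity, n_bins=4):
    """Discretize a battery charge into one of n_bins levels (0 .. n_bins-1)."""
    frac = max(0.0, min(charge / capacity, 1.0))      # clamp to [0, 1]
    return min(int(frac * n_bins), n_bins - 1)        # full battery maps to top bin

def build_observation(own_charge, neighbour_charges, capacity=100.0):
    """Local state = own battery level plus the exchanged neighbour levels."""
    own = battery_level(own_charge, capacity)
    neighbours = [battery_level(c, capacity) for c in neighbour_charges]
    return [own] + neighbours

obs = build_observation(35.0, [80.0, 5.0])  # own level plus two neighbours
```

    Exchanging coarse battery levels rather than raw charges keeps the signalling overhead between vSCs small while still informing each agent about its neighbours' energy availability.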

    Network Management, Optimization and Security with Machine Learning Applications in Wireless Networks

    Wireless communication networks are evolving rapidly, bringing many challenges and ambitions. The requirements that modern wireless networks are expected to meet are complex, multi-dimensional, and sometimes contradictory. In this thesis, we investigate several types of emerging wireless networks and tackle some of the challenges they raise. We focus on three main challenges: Resource Optimization, Network Management, and Cyber Security. We present multiple views of these three aspects and propose solutions for probable scenarios. The first challenge (Resource Optimization) is studied in Wireless Powered Communication Networks (WPCNs). WPCNs are considered a very promising approach towards sustainable, self-sufficient wireless sensor networks. We consider a WPCN with Non-Orthogonal Multiple Access (NOMA) and study two decoding schemes, aiming to optimize performance with and without interference cancellation. This leads to solving convex and non-convex optimization problems. The second challenge (Network Management) is studied for cellular networks and handled using Machine Learning (ML). Two scenarios are considered. First, we target energy conservation: we propose an ML-based approach to turn Multiple Input Multiple Output (MIMO) technology on or off depending on certain criteria. Turning off MIMO can save a considerable share of the total site energy consumption. To control enabling and disabling MIMO, a Neural Network (NN) based approach is used: it learns some network features and decides whether the site can achieve satisfactory performance with MIMO off. In the second scenario, we take a deeper look into the cellular network, aiming for more control over its features. We propose a Reinforcement Learning based approach to control three features of the network (relative CIOs, transmission power, and the MIMO feature).
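    As a toy illustration of an NN-based MIMO on/off decision, the sketch below trains a single sigmoid neuron on invented (normalized load, normalized user count) features, labeling a site as safe to switch MIMO off when both are low. The thesis's approach uses a full neural network and operator-measured features; everything here is an assumption for illustration.

```python
import math
import random

def train(data, epochs=500, lr=0.5, seed=0):
    """Train one sigmoid neuron with SGD on log-loss (toy stand-in for an NN)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))            # sigmoid activation
            g = p - y                                 # gradient of the log-loss
            w = [w[i] - lr * g * x[i] for i in range(2)]
            b -= lr * g
    return w, b

def mimo_off_ok(x, w, b):
    """Predict whether the site still performs acceptably with MIMO off."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Invented training set: label 1 = "MIMO can be switched off" (low load, few users).
data = [((0.1, 0.2), 1), ((0.2, 0.1), 1), ((0.9, 0.8), 0), ((0.8, 0.9), 0)]
w, b = train(data)
```

    The same train/predict split applies to the full NN case: offline training on logged site measurements, then a cheap inference step per decision epoch.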
The proposed approach delivers a stable state of the cellular network and enables the network to self-heal after any change or disturbance in its surroundings. For the third challenge (Cyber Security), we propose an NN-based approach to detect False Data Injection (FDI) in industrial data. FDI attacks corrupt sensor measurements to deceive the industrial platform. The proposed approach uses an Autoencoder (AE) for FDI detection; in addition, a Denoising AE (DAE) is used to clean the corrupted data for further processing.
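The AE-based detection principle (train on clean data, then flag samples whose reconstruction error is large) can be sketched with a minimal linear autoencoder. The sensor data, network size, learning rate and correlation pattern below are invented for illustration and are not the thesis's setup.

```python
import random

def train_ae(data, epochs=500, lr=0.05, seed=0):
    """Train a tiny 2 -> 1 -> 2 linear autoencoder by SGD on squared error."""
    rng = random.Random(seed)
    enc = [rng.uniform(-0.5, 0.5) for _ in range(2)]   # encoder weights
    dec = [rng.uniform(-0.5, 0.5) for _ in range(2)]   # decoder weights
    for _ in range(epochs):
        for x in data:
            h = enc[0] * x[0] + enc[1] * x[1]          # latent code
            err = [dec[i] * h - x[i] for i in range(2)]
            back = err[0] * dec[0] + err[1] * dec[1]   # gradient through decoder
            for i in range(2):
                dec[i] -= lr * err[i] * h
                enc[i] -= lr * back * x[i]
    return enc, dec

def recon_error(x, enc, dec):
    """Squared reconstruction error, used as the FDI anomaly score."""
    h = enc[0] * x[0] + enc[1] * x[1]
    return sum((dec[i] * h - x[i]) ** 2 for i in range(2))

# Invented clean readings: two correlated sensors with x2 = 2 * x1.
clean = [(t / 10.0, 2 * t / 10.0) for t in range(1, 11)]
enc, dec = train_ae(clean)
```

A sample such as (0.5, 0.1), which breaks the learned correlation, yields a much larger reconstruction error than the clean readings, so thresholding this score flags the injection.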

    Traffic control for energy harvesting virtual small cells via reinforcement learning

    Due to the rapid growth of mobile data traffic, future mobile networks are expected to support at least 1000 times more capacity than 4G systems. This trend leads to an increasing energy demand from mobile networks, which raises both economic and environmental concerns. Energy costs are becoming an important part of the OPEX of Mobile Network Operators (MNOs). As a result, the shift towards energy-oriented design and operation of 5G and beyond systems has been emphasized by academia, industry, and standards bodies. In particular, the Radio Access Network (RAN) is the major energy-consuming part of cellular networks. To increase RAN efficiency, the Cloud Radio Access Network (CRAN) has been proposed to enable centralized cloud processing of baseband functions, while Base Stations (BSs) are reduced to simple Remote Radio Heads (RRHs). The connection between the RRHs and the central cloud is provided by a high-capacity, very-low-latency fronthaul. Flexible functional splits between local BS sites and a central cloud have then been proposed to relax the CRAN fronthaul requirements via partial processing of baseband functions at the local BS sites. Moreover, Network Function Virtualization (NFV) and Software Defined Networking (SDN) enable flexibility in the placement and control of network functions. Relying on SDN/NFV with flexible functional splits, the network functions of small BSs can be virtualized and placed at different sites of the network; these small BSs are known as virtual Small Cells (vSCs). More recently, Multi-access Edge Computing (MEC) has been introduced, whereby BSs can leverage cloud computing capabilities and offer computational resources on demand. On the other hand, Energy Harvesting (EH) is a promising technology ensuring both cost effectiveness and carbon footprint reduction. However, EH comes with challenges, mainly due to intermittent and unreliable energy sources.
In EH Base Stations (EHBSs), it is important to intelligently manage the harvested energy as well as to ensure energy storage provisioning. Consequently, MEC-enabled EHBSs can open a new frontier in energy-aware processing and in the sharing of processing units according to flexible functional split options. The goal of this PhD thesis is to propose energy-aware control algorithms for EH-powered vSCs, for efficient utilization of the harvested energy and for lowering the grid energy consumption of the RAN, the most power-consuming part of the network. We leverage virtualization and MEC technologies for the dynamic provisioning of computational resources according to the functional split options employed by the vSCs. After describing the state of the art, the first part of the thesis focuses on offline optimization for efficient harvested energy utilization via dynamic functional split control in vSCs powered by EH. For this purpose, dynamic programming is applied to determine the performance bound, and a comparison is drawn against static configurations. The second part of the thesis focuses on online control methods, where reinforcement learning based controllers are designed and evaluated. In particular, focus is given to the design of multi-agent reinforcement learning to overcome the complexity and scalability limitations of centralized approaches. Both tabular and deep reinforcement learning algorithms are tailored to a distributed architecture, with emphasis on enabling coordination among the agents. Policy comparisons among the online controllers and against the offline bound, as well as the energy and cost saving benefits, are also analyzed.
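The offline dynamic-programming bound can be sketched as a finite-horizon backward recursion over battery states, choosing a functional split per time slot to minimize total grid energy given a known harvest sequence. The two splits, power figures and integer battery model below are illustrative assumptions, not the thesis's parameters.

```python
# Hedged sketch of an offline DP bound for functional split control.
SPLITS = {
    "local":  (3, 0.0),   # (local BS power draw, edge-server grid power) per slot
    "remote": (1, 1.5),   # offloading BB functions: cheap locally, but the
}                         # grid-connected edge server draws grid energy
CAPACITY = 5              # battery capacity, in integer energy units

def dp_min_grid(harvest):
    """Backward recursion over (slot, battery level); returns the minimal
    total grid energy starting from an empty battery."""
    best = {b: 0.0 for b in range(CAPACITY + 1)}       # cost-to-go at horizon
    for t in reversed(range(len(harvest))):
        cur = {}
        for b in range(CAPACITY + 1):
            cur[b] = float("inf")
            for p_local, p_edge in SPLITS.values():
                avail = min(b + harvest[t], CAPACITY)  # charge first, then spend
                grid = max(0, p_local - avail) + p_edge  # shortfall bought from grid
                b_next = max(0, avail - p_local)
                cur[b] = min(cur[b], grid + best[b_next])
        best = cur
    return best[0]
```

With no harvest the recursion prefers the remote split in every slot, whereas abundant harvest makes fully local processing free of grid energy; the online RL controllers of the thesis are then benchmarked against this kind of bound.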