
    Secure Multi-Path Selection with Optimal Controller Placement Using Hybrid Software-Defined Networks with Optimization Algorithm

    The Internet's growing popularity requires computer networks that are both agile and resilient, needs that traditional networking systems have recently been unable to satisfy. Software Defined Networking (SDN) is regarded as a paradigm shift in the networking industry, and many organizations have adopted SDN because of its transmission efficiency. Striking the right balance between SDN and legacy switching capabilities will enable successful network scenarios in architecture networks. This work therefore targets a hybrid-network scenario in which the external perimeter transport device in the service provider network is replaced with an SDN device. As networks migrate away from legacy designs, hybrid SDN includes both legacy and SDN switches. Existing SDN models suffer from limitations such as overfitting, trapping in local optima, and poor path-selection efficiency. This paper proposes a Deep Kronecker Neural Network (DKNN), combined with a moderate optimization method, to improve the efficiency of multipath selection in SDN. Dynamic resource scheduling is used for the reward function, and learning performance is improved by the deep reinforcement learning (DRL) technique. The centralised SDN controller acts as the network's brain in the control plane, and selecting the best SDN controller is among the network's most important duties. The controller is vulnerable to intrusions and can become a network bottleneck. This study therefore presents an intrusion detection system (IDS) based on the SDN model that runs as an application module within the controller, performing feature extraction and classification with a contractive auto-encoder and a triple-attention-based classifier. Additionally, this study leverages OpenDayLight (ODL), one of the best-performing SDN controllers and the basis of many others, which provides an open northbound API and supports multiple southbound protocols.
Consequently, the multi-controller placement problem (CPP) is one of the main issues that must be addressed in the SDN setting, particularly when aspects such as interruption, capacity, authenticity and load distribution are considered. Introducing the scenario concept, CPP is formulated as a robust optimization problem that accounts for changes in network status due to power outages, controller capacity, load fluctuations and changes in switch demand. Therefore, to improve network performance, the optimal set of controller placements is refined by simulated annealing over different topologies using the modified Dragonfly optimization algorithm (MDOA).
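As a concrete illustration of casting controller placement as a combinatorial optimization problem, the sketch below applies plain simulated annealing (a stand-in for the MDOA described above, whose details are not given here) to choose k controller sites that minimize average switch-to-controller latency. The latency matrix, parameters, and function names are illustrative assumptions.

```python
import math
import random

def placement_cost(latency, controllers):
    """Average latency from each switch to its nearest chosen controller."""
    return sum(min(row[c] for c in controllers) for row in latency) / len(latency)

def anneal_placement(latency, k, temp=1.0, cooling=0.995, steps=5000, seed=0):
    """Simulated annealing over k-subsets of candidate controller sites."""
    rng = random.Random(seed)
    n = len(latency)
    current = rng.sample(range(n), k)
    cost = placement_cost(latency, current)
    best, best_cost = list(current), cost
    for _ in range(steps):
        # Neighbour move: swap one chosen site for an unused one.
        candidate = list(current)
        candidate[rng.randrange(k)] = rng.choice(
            [i for i in range(n) if i not in current])
        c_cost = placement_cost(latency, candidate)
        # Accept improvements always; accept worse moves with
        # Boltzmann probability so the search can escape local optima.
        if c_cost < cost or rng.random() < math.exp((cost - c_cost) / temp):
            current, cost = candidate, c_cost
            if cost < best_cost:
                best, best_cost = list(current), cost
        temp *= cooling
    return best, best_cost
```

The same skeleton accommodates the robust-CPP objective above by folding outage, capacity and load terms into `placement_cost`.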

    Methods and Techniques for Dynamic Deployability of Software-Defined Security Services

    With the recent trend of “network softwarisation”, enabled by emerging technologies such as Software-Defined Networking and Network Function Virtualisation, system administrators of data centres and enterprise networks have started replacing dedicated hardware-based middleboxes with virtualised network functions running on servers and end hosts. This radical change has facilitated the provisioning of advanced and flexible network services, ultimately helping system administrators and network operators to cope with rapid changes in service requirements and networking workloads. This thesis investigates the challenges of provisioning network security services in “softwarised” networks, where the security of residential and business users can be provided by sets of software-based network functions running on high-performance servers or on commodity devices. The study is approached from the perspective of the telecom operator, whose goal is to protect customers from network threats and, at the same time, maximise the number of provisioned services, and thereby revenue. Specifically, the overall aim of the research presented in this thesis is to propose novel techniques for optimising the resource usage of software-based security services, and hence to increase the chances for the operator to accommodate more service requests while respecting the desired level of network security of its customers. In this direction, the contributions of this thesis are the following: (i) a solution for the dynamic provisioning of security services that minimises the utilisation of computing and network resources, and (ii) novel methods based on Deep Learning and Linux kernel technologies for reducing the CPU usage of software-based security network functions, with specific focus on the defence against Distributed Denial of Service (DDoS) attacks.
The experimental results reported in this thesis demonstrate that the proposed solutions for service provisioning and DDoS defence require fewer computing resources than similar approaches available in the scientific literature or adopted in production networks.
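One way to picture resource-minimising service provisioning is as a consolidation problem: pack the CPU demands of software-based security functions onto as few servers as possible. The sketch below uses the classical first-fit-decreasing bin-packing heuristic as a stand-in; it is not the thesis's actual provisioning algorithm, and all names and numbers are illustrative.

```python
def first_fit_decreasing(demands, capacity):
    """Greedy consolidation: sort CPU demands largest-first, place each
    on the first server with room, and open a new server only when
    none fits. Returns (function -> server assignment, servers used)."""
    servers = []      # remaining capacity of each open server
    assignment = {}   # demand index -> server index
    for idx, d in sorted(enumerate(demands), key=lambda x: -x[1]):
        for s, free in enumerate(servers):
            if free >= d:
                servers[s] -= d
                assignment[idx] = s
                break
        else:
            servers.append(capacity - d)
            assignment[idx] = len(servers) - 1
    return assignment, len(servers)
```

For example, demands of 0.5, 0.5, 0.4, 0.3 and 0.3 of a CPU fit on two unit-capacity servers rather than five dedicated boxes, which is exactly the consolidation effect that increases the number of service requests an operator can accommodate.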

    Power Management Strategies for Wired Communication Networks.

    With the exponential traffic growth and the rapid expansion of communication infrastructures worldwide, the energy expenditure of the Internet has become a major concern in our IT-reliant society. This energy problem has motivated urgent demands for new strategies to reduce the consumption of telecommunication networks, with a particular focus on IP networks. In addition to the development of a new generation of energy-efficient network equipment, a significant body of research has concentrated on incorporating power/energy-awareness into network control and management, which aims at reducing network power/energy consumption either by dynamically scaling the speed of each active network component so that it adapts to its current load, or by putting lightly loaded network elements to sleep and reconfiguring the network. However, the fundamental challenge of greening the Internet is to achieve a balance between power/energy savings and quality-of-service (QoS) performance, an issue that has received less attention but is becoming a major problem in future green network designs. In this dissertation, we study how energy consumption can be reduced through different power/energy- and QoS-aware strategies for wired communication networks. To reduce energy consumption sufficiently while meeting the desired QoS requirements, we introduce several schemes combining power management techniques with different scheduling strategies, which can be classified into experimental power management (EPM) and algorithmic power management (APM). In these proposed schemes, the power management techniques we focus on are speed scaling and sleep mode. When the network processor is active, its speed and supply voltage can be decreased to reduce energy consumption (speed scaling); when the processor is idle, it can be put into a low-power mode to save energy (sleep mode).
The resulting problem is to determine how and when to adjust processor speeds and/or put a device into sleep mode. In this dissertation, we first discuss three families of dynamic voltage/frequency scaling (DVFS) based, QoS-aware EPM schemes, which aim to reduce energy consumption in network equipment by using different packet scheduling strategies while adhering to the QoS requirements of supported applications. Then, we explore the problem of energy minimization under QoS constraints through a mathematical programming model: a DVFS-based, delay-aware APM scheme combining the speed scaling technique with the existing rate-monotonic scheduling policy. Among these speed-scaling-based schemes, dynamic power savings of up to 26.76% of the total power consumption can be achieved. In addition to speed scaling, we further propose a sleep-based, traffic-aware EPM scheme, which reduces power consumption by rerouting light loads and putting the freed network equipment into sleep mode according to twelve flow-traffic-density changes over the 24 hours of an arbitrarily selected day. Meanwhile, a speed scaling technique that does not violate network QoS performance is also applied in this scheme when traffic is rerouted. Applying this sleep-based strategy can lead to power savings of up to 62.58% of the total power consumption.
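The trade-off between the two techniques above can be sketched with the standard cubic dynamic-power model (P ≈ k·f³, since supply voltage scales roughly with frequency f). Below is a minimal, illustrative comparison of "race-to-sleep" (finish at full speed, then idle in a low-power state) against speed scaling (stretch the work over the whole interval at the lowest deadline-meeting frequency); all units, parameter values and function names are assumptions, not the dissertation's models.

```python
def dynamic_energy(load_cycles, freq, k=1.0):
    """Dynamic energy under P = k * f**3: since time = cycles / f,
    E = P * t = k * f**2 * cycles."""
    return k * freq ** 2 * load_cycles

def race_to_sleep_energy(load_cycles, f_max, interval, p_sleep, k=1.0):
    """Finish the work at full speed, then sleep (drawing p_sleep)
    for the remainder of the interval."""
    busy = load_cycles / f_max
    return dynamic_energy(load_cycles, f_max, k) + p_sleep * (interval - busy)

def speed_scaled_energy(load_cycles, interval, k=1.0):
    """Run at the minimum frequency that still finishes the work
    exactly at the deadline (DVFS-style speed scaling)."""
    f = load_cycles / interval
    return dynamic_energy(load_cycles, f, k)
```

With half a unit of work per unit interval, f_max = 1 and a sleep power of 0.05, race-to-sleep costs 0.525 energy units while speed scaling costs 0.125, which is why DVFS dominates at moderate loads; with very low sleep power and bursty load, the comparison can reverse.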

    Stochastic Model Predictive Control and Machine Learning for the Participation of Virtual Power Plants in Simultaneous Energy Markets

    The emergence of distributed energy resources in the electricity system gives rise to new scenarios in which domestic consumers (end-users) can be aggregated to participate in energy markets, acting as prosumers. Every prosumer is considered an individual energy node, with its own renewable generation source, its controllable and non-controllable energy loads, and even its own individual tariffs to trade. The nodes can form aggregations that are managed by a system operator. Participation in energy markets is not trivial for individual prosumers, owing to aspects such as the technical requirements that must be satisfied or the need to trade a minimum volume of energy. These obstacles can be overcome by defining aggregated participations. In this context, the aggregator handles the difficult task of coordinating and stabilizing the prosumers' operations, not only at an individual level but also at a system level, so that the set of energy nodes behaves as a single entity with respect to the market. The system operator can act as a trading-and-distributing company, or only as a trading one. For this reason, the optimization model must consider not only aggregated tariffs but also individual tariffs, to allow individual billing for each energy node. Each energy node must have the required technical and legal competences, as well as the necessary equipment, to manage its participation in energy markets or to delegate it to the system operator. This aggregation according to business rules, rather than only physical location, is known as a virtual power plant. Optimizing the aggregated participation in the different energy markets requires introducing the concept of dynamic storage virtualization; therefore, every energy node in the system under study will have a battery installed to store excess energy.
This dynamic virtualization defines logical partitions in the storage system so that it can be used for different purposes. As an example, two partitions can be defined: one for aggregated participation in the day-ahead market, and the other for the demand-response program. Several criteria must be considered when defining the participation strategy. A risky strategy will yield greater trading profits; however, it will also be more likely to incur penalties for failing to meet the contract due to uncertainties or operation errors. On the other hand, a conservative strategy will perform worse economically in terms of trading, but it will reduce these potential penalties. The inclusion of dynamic intent profiles allows risky bids to be placed when the forecast error in generation, load or failures is expected to be low, and conservative bids otherwise. The system operator is the agent who decides how much energy is reserved for trading, how much for energy-node self-consumption, how much for demand-response program participation, and so on. The large number of variables and states makes this problem too complex to be solved by classical methods, especially since even slightly wrong decisions can have significant economic consequences in the short term. The concept of dynamic storage virtualization has been studied and implemented to allow simultaneous participation in multiple energy markets. The simultaneous participations can be optimized with the objective of maximizing potential profit, minimizing potential risk, or a combination of both under more advanced criteria reflecting the system operator's know-how. Day-ahead bidding algorithms, demand-response participation optimization and a penalty-reducing operation control algorithm have been developed. A stochastic layer has been defined and implemented to improve the robustness of this inherently forecast-dependent system.
This layer has been developed with chance constraints, and includes the possibility of combining an intelligent agent based on an encoder-decoder architecture built with neural networks composed of gated recurrent units. The formulation and the implementation fully decouple the algorithms, with no dependencies among them; nevertheless, they remain coordinated, because each individual execution considers both the current scenario and the selected strategy. This enables a broader, better-defined context and more realistic, accurate situation awareness. In addition to the relevant simulation runs, the platform has also been tested on a real system composed of 40 energy nodes, over one year, on the German island of Borkum. This experience allowed very satisfactory conclusions to be drawn about deploying the platform in real environments.
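The dynamic storage virtualization described above can be sketched as logical partitions over one physical battery, with partition fractions that can be redefined at run time. This is a minimal illustrative model: the class name, the two partition names, and the clip-on-repartition policy are assumptions, not the thesis's implementation.

```python
class VirtualStorage:
    """Logical partitions over one physical battery (capacity in kWh).
    Each partition (e.g. day-ahead market vs. demand-response) owns a
    fraction of the capacity; fractions can be redefined dynamically."""

    def __init__(self, capacity_kwh, fractions):
        assert abs(sum(fractions.values()) - 1.0) < 1e-9
        self.capacity = capacity_kwh
        self.fractions = dict(fractions)
        self.level = {name: 0.0 for name in fractions}

    def limit(self, name):
        """Capacity currently allotted to one logical partition."""
        return self.fractions[name] * self.capacity

    def charge(self, name, kwh):
        """Store energy into a partition, clipped at its limit;
        returns the amount actually absorbed."""
        absorbed = min(kwh, self.limit(name) - self.level[name])
        self.level[name] += absorbed
        return absorbed

    def repartition(self, fractions):
        """Dynamic re-virtualization: resize the partitions, capping
        each stored level to its new limit (illustrative policy)."""
        assert abs(sum(fractions.values()) - 1.0) < 1e-9
        self.fractions = dict(fractions)
        for name in self.level:
            self.level[name] = min(self.level[name], self.limit(name))
```

For instance, a 10 kWh battery split 70/30 between day-ahead and demand-response can be re-split 50/50 when a dynamic intent profile turns conservative, without any physical change to the storage.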

    Achieving Energy Efficiency on Networking Systems with Optimization Algorithms and Compressed Data Structures

    To cope with the increasing quantity, capacity and energy consumption of transmission and routing equipment in the Internet, the energy efficiency of communication networks has attracted more and more attention from researchers around the world. In this dissertation, we propose three methodologies for achieving energy efficiency on networking devices: NP-complete problem formulations with heuristics, compressed data structures, and a combination of the two. We first consider the problem of achieving energy efficiency in Data Center Networks (DCN). We generalize the energy-efficient networking problem in data centers as an optimal flow assignment problem, which is NP-complete, and then propose a heuristic called CARPO, a correlation-aware power optimization algorithm, that dynamically consolidates traffic flows onto a small set of links and switches in a DCN and then shuts down unused network devices for power savings. We then achieve energy efficiency on Internet routers by using a compressed data structure: the Probabilistic Bloom Filter (PBF), a novel structure that extends the classical Bloom filter in a probabilistic direction so that it can effectively identify heavy hitters with a small memory footprint, reducing the energy consumption of network measurement. To achieve energy efficiency on Wireless Sensor Networks (WSN), we developed a data collection protocol called EDAL (Energy-efficient Delay-aware Lifetime-balancing data collection). Based on the Open Vehicle Routing problem, EDAL exploits the topology requirements of Compressive Sensing (CS) and then applies CS to save more energy on sensor nodes.
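The PBF itself is not specified here, but the classical counting-Bloom-style sketch it extends can be illustrated as follows: rows of counters indexed by independent hashes, with the row-minimum bounding a flow's count for heavy-hitter detection in a small memory footprint. All names and parameters below are illustrative, not the PBF's actual design.

```python
import hashlib

class CountingFilter:
    """Counting-Bloom-style sketch for flagging heavy-hitter flows.
    Counts are over-estimates only (hash collisions can inflate them),
    so taking the minimum across rows tightens the estimate."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.counters = [[0] * width for _ in range(depth)]

    def _slots(self, key):
        # One independent hash position per row, derived by salting
        # SHA-256 with the row index (illustrative hashing scheme).
        for d in range(self.depth):
            h = hashlib.sha256(f"{d}:{key}".encode()).hexdigest()
            yield d, int(h, 16) % self.width

    def add(self, key, count=1):
        for d, i in self._slots(key):
            self.counters[d][i] += count

    def estimate(self, key):
        # Minimum over rows bounds the overcounting from collisions.
        return min(self.counters[d][i] for d, i in self._slots(key))

    def is_heavy(self, key, threshold):
        return self.estimate(key) >= threshold
```

A router can thus track per-flow packet counts in a few kilobytes of counters instead of per-flow state, which is the memory (and hence energy) saving the PBF targets for network measurement.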