27 research outputs found

    Towards Zero Touch Next Generation Network Management

    The current trend in user services places an ever-growing demand for higher data rates, near-real-time latencies, and near-perfect quality of service. To meet such demands, fundamental changes were made to the fronthaul, midhaul, and backbone networking segments servicing them. One of the main changes was virtualizing the networking components to allow for faster deployment and reconfiguration when needed. However, adopting such technologies poses several challenges, such as improving the performance and efficiency of these systems by properly orchestrating services onto the ideal edge device. A second challenge is ensuring the backbone optical network maximizes and maintains throughput levels under more dynamically varying conditions. A third challenge is addressing the limitations of placement techniques in O-RAN. In this thesis, we propose using various optimization modeling and machine learning techniques in three segments of network systems to lower the need for human intervention, targeting zero-touch networking. In particular, the first part of the thesis applies optimization modeling, heuristics, and segmentation to improve locally driven orchestration techniques, which are used to place demands on edge devices, ensuring efficient and resilient placement decisions. The second part of the thesis proposes using reinforcement learning (RL) techniques on a per-node basis to address the dynamic nature of demands within an optical networking paradigm. The RL techniques keep blocking rates to a minimum by tailoring each agent's behavior to its node's demand intake throughout the day. The third part of the thesis proposes using transfer-learning-augmented reinforcement learning to drive a network-slicing-based solution in O-RAN, addressing the stringent and divergent demands of 5G applications. The main contributions of the thesis consist of three broad parts. 
The first is developing optimal and heuristic orchestration algorithms that improve demands' performance and reliability in an edge computing environment. The second is using reinforcement learning to determine the appropriate spectral placement for demands within isolated optical paths, ensuring lower fragmentation and better throughput utilization. The third is developing heuristic-controlled, transfer-learning-augmented reinforcement learning network slicing in an O-RAN environment, ensuring improved reliability while maintaining lower complexity than traditional placement techniques.
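The spectral-placement idea in the second part can be illustrated with a toy sketch. Everything below is a hypothetical model for illustration, not the thesis's code: a single link of spectrum slots, demands of random width, and a tabular Q-learning agent that chooses which end of the spectrum to scan with first-fit, earning a reward for each placement and a penalty when a demand is blocked.

```python
import random

random.seed(0)

N_SLOTS = 16  # spectrum slots on the toy link

def first_fit(slots, width, from_left=True):
    """Scan for `width` contiguous free slots; return start index or None (blocked)."""
    candidates = range(len(slots) - width + 1)
    for i in (candidates if from_left else reversed(candidates)):
        if not any(slots[i:i + width]):
            return i
    return None

# Tabular Q-learning: state = how many slots are occupied (coarse),
# action = which end of the spectrum to scan from.
Q = {}
alpha, gamma, eps = 0.1, 0.9, 0.2
ACTIONS = (True, False)  # scan from left / scan from right

for episode in range(200):
    slots = [False] * N_SLOTS      # fresh, empty link
    done = False
    while not done:
        state = sum(slots)
        if state not in Q:
            Q[state] = {a: 0.0 for a in ACTIONS}
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(Q[state], key=Q[state].get)
        width = random.randint(1, 4)   # incoming demand size
        start = first_fit(slots, width, from_left=a)
        if start is None:
            reward, done = -1.0, True  # demand blocked: penalize, end episode
        else:
            reward = 1.0               # demand placed successfully
            for j in range(start, start + width):
                slots[j] = True
        next_state = sum(slots)
        best_next = max(Q[next_state].values()) if next_state in Q else 0.0
        Q[state][a] += alpha * (reward + gamma * best_next - Q[state][a])
```

A per-node deployment, as the thesis describes, would train one such agent per node with its own demand profile; the state, action, and reward definitions here are deliberately simplified stand-ins.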

    A framework for traffic flow survivability in wireless networks prone to multiple failures and attacks

    Transmitting packets over a wireless network has always been challenging due to failures arising from many types of wireless connectivity issues. These failures have caused significant outages, and delayed discovery and diagnostic testing of these failures have exacerbated their impact on service, economic damage, and social elements such as technological trust. There has been research on wireless network failures, but little on multiple failures such as node-node, node-link, and link-link failures. The problem of capacity efficiency and fast recovery from multiple failures has also not received attention. This research develops a capacity-efficient evolutionary swarm survivability framework, which encompasses enhanced genetic algorithm (EGA) and ant colony system (ACS) survivability models to swiftly resolve node-node, node-link, and link-link failures for improved service quality. The capacity-efficient models were tested on such failures at different locations on both small and large wireless networks. The proposed models were able to generate optimal alternative paths and the bandwidth required for fast rerouting, minimize transmission delay, and ensure the rerouting paths' fitness and good transmission times for rerouting voice, video, and multimedia messages. Experiments with increasing numbers of link failures reveal that as failures increase, the bandwidth used for rerouting and the transmission time also increase. This implies that failures increase bandwidth usage, which leads to transmission delay, which in turn slows down message rerouting. The suggested framework performs better than the popular Dijkstra algorithm and the proactive, adaptive, and reactive models in terms of throughput, packet delivery ratio (PDR), speed of transmission, transmission delay, and running time. 
According to the simulation results, the capacity-efficient ACS has a PDR of 0.89, the Dijkstra model 0.86, the reactive model 0.83, the proactive model 0.83, and the adaptive model 0.81. Another evaluation compared the proposed model's running time to that of the other routing models: the capacity-efficient ACS runs in 169.89 ms on average, while the adaptive model takes 1837 ms and Dijkstra 280.62 ms. With these results, the capacity-efficient ACS outperforms the other evaluated routing algorithms in terms of PDR and running time. The mean throughput determined for the evaluated routing algorithms is as follows: the capacity-efficient EGA has a mean throughput of 621.6, Dijkstra 619.3, proactive (DSDV) 555.9, and reactive (AODV) 501.0. Since Dijkstra is closest to the proposed models in performance, the capacity-efficient EGA was compared to Dijkstra directly: Dijkstra runs in 3.8908 ms and the EGA in 3.6968 ms. In terms of running time and mean throughput, the capacity-efficient EGA also outperforms the other evaluated routing algorithms. The alternative paths generated in these investigations demonstrate that the proposed framework works well in preventing data loss in transit and ameliorating the congestion that results from multiple failures and server overload (which manifests when processes hang). The optimal solution paths will in turn improve business activities through quality data communications for wireless service providers.
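For context on the Dijkstra baseline the framework is benchmarked against, here is a minimal sketch of Dijkstra-based rerouting after link failures. The graph shape, link weights, and function names are illustrative assumptions, not the authors' implementation:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by cumulative link cost; graph = {node: {neighbor: cost}}."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None  # destination unreachable
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def reroute_after_failure(graph, src, dst, failed_links):
    """Drop failed (u, v) links in both directions, then re-run Dijkstra."""
    pruned = {u: {v: w for v, w in nbrs.items()
                  if (u, v) not in failed_links and (v, u) not in failed_links}
              for u, nbrs in graph.items()}
    return dijkstra(pruned, src, dst)
```

Where this baseline must recompute from scratch after every failure, the ACS/EGA models described above precompute capacity-aware alternative paths, which is where the reported running-time advantage comes from.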

    The Built Environment in a Changing Climate

    The papers included in this Special Issue tackle multiple aspects of how cities, districts, and buildings could evolve along with climate change and how this would impact our way of conceiving and applying design criteria, policies, and urban plans. Despite the multidisciplinary nature of the collection, some transversal take-home messages emerge:
    • Today's energy-efficient paradigms may lose their virtuosity in the future unless accurate estimates of future scenarios are used to design modelling platforms and to inform legislative frameworks;
    • Acting at the local scale is key. Future climate change adaptation will be implemented at the local level, and overlooking regional and local specificities will lead to inaccurate and inefficient action plans. As such, the smaller scale will become vital in predicting future urban metabolic rates and the corresponding comfort-driven strategies;
    • Energy poverty, heat vulnerability, and social injustice are emerging as critical factors for planning and acting for future-proof cities, on a par with micro- and meso-climatological factors;
    • Given that the impacts of climate change will persist for many years, adaptation should be prioritized by removing any prominent barriers and by enabling combinations of different mitigation technologies.
    These topics will gain global reach within a few decades, since developing and underdeveloped countries are also starting their fight against local climate change, with cities at the forefront.