
    Energy Efficient Core Networks with Clouds

    The popularity of cloud-based applications, stemming from the high volume of connected mobile devices, has led to a huge increase in Internet traffic. In order to enable easy access to cloud applications, infrastructure providers have invested in geographically distributed databases and servers. However, intelligent and energy-efficient high-capacity transport networks with near-ubiquitous connectivity are needed to serve these requirements adequately and sustainably. In this thesis, network virtualisation has been identified as a networking paradigm that can contribute to network agility and energy efficiency improvements in core networks with clouds. The work first introduces a new virtual network embedding architecture for core networks with clouds, together with a compute and bandwidth resource provisioning mechanism aimed at reducing power consumption in core networks and data centres. Further, quality of service measures in compute and bandwidth resource provisioning, such as delay and customer location, have been investigated and their impact on energy efficiency established. Data centre location optimisation for energy efficiency in the virtual network embedding infrastructure has been investigated by developing a MILP model that selects optimal data centre locations in the core network. The work also introduces an optical OFDM based physical layer in virtual network embedding to optimise power consumption and optical spectrum utilisation. In addition, virtual network embedding schemes aimed at profit maximisation for cloud infrastructure providers, as well as greenhouse gas emission reduction in cloud infrastructure networks, have been investigated. GreenTouch, a consortium of industrial and academic experts on energy efficiency in ICTs, has adopted the work in this thesis as one of the measures for improving energy efficiency in core networks.
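    As a rough illustration of the kind of MILP-based data centre location optimisation described above (a sketch only, not the thesis model), the following Python/PuLP programme selects a fixed number of data centre sites in a small core network so as to minimise combined data centre and transport power. The topology, traffic demands, power coefficients and number of sites are illustrative assumptions.

    # Hypothetical example: MILP choosing data centre locations in a small core
    # network to minimise power (data centre power + per-hop transport power).
    import pulp

    nodes = ["A", "B", "C", "D", "E"]                        # candidate core nodes
    demand = {"A": 40, "B": 25, "C": 60, "D": 30, "E": 45}   # Gb/s sourced at each node
    hop = {("A", "B"): 1, ("A", "C"): 2, ("A", "D"): 2, ("A", "E"): 1,   # assumed
           ("B", "C"): 1, ("B", "D"): 2, ("B", "E"): 2,                  # shortest-path
           ("C", "D"): 1, ("C", "E"): 2, ("D", "E"): 1}                  # hop counts

    def hops(i, j):
        return 0 if i == j else hop.get((i, j), hop.get((j, i)))

    P_DC = 500.0      # W per active data centre (assumed)
    P_BIT_HOP = 2.0   # W per Gb/s per traversed hop (assumed)
    K = 2             # number of data centres to place

    prob = pulp.LpProblem("dc_location", pulp.LpMinimize)
    open_dc = pulp.LpVariable.dicts("open", nodes, cat="Binary")
    assign = pulp.LpVariable.dicts("assign",
                                   [(i, j) for i in nodes for j in nodes],
                                   cat="Binary")

    # Objective: power drawn by the data centres plus transport power in the core.
    prob += (pulp.lpSum(P_DC * open_dc[j] for j in nodes)
             + pulp.lpSum(P_BIT_HOP * demand[i] * hops(i, j) * assign[(i, j)]
                          for i in nodes for j in nodes))

    for i in nodes:
        prob += pulp.lpSum(assign[(i, j)] for j in nodes) == 1   # serve every node
        for j in nodes:
            prob += assign[(i, j)] <= open_dc[j]                 # only open sites serve
    prob += pulp.lpSum(open_dc[j] for j in nodes) == K           # place exactly K sites

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("chosen data centre sites:", [j for j in nodes if open_dc[j].value() == 1])
    print("total power (W):", pulp.value(prob.objective))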

    A framework for traffic flow survivability in wireless networks prone to multiple failures and attacks

    Transmitting packets over a wireless network has always been challenging because of failures arising from many types of wireless connectivity issues. These failures have caused significant outages, and the delayed discovery and diagnosis of these failures have exacerbated their impact in terms of service disruption, economic damage, and social factors such as trust in the technology. There has been research on wireless network failures, but little on multiple failures such as node-node, node-link, and link-link failures. The problem of capacity efficiency and fast recovery from multiple failures has also not received attention. This research develops a capacity efficient evolutionary swarm survivability framework, which encompasses enhanced genetic algorithm (EGA) and ant colony system (ACS) survivability models, to swiftly resolve node-node, node-link, and link-link failures for improved service quality. The capacity efficient models were tested on such failures at different locations in both small and large wireless networks. The proposed models generated optimal alternative paths and the bandwidth required for fast rerouting, minimised transmission delay, and ensured rerouting path fitness and good transmission times for rerouting voice, video and multimedia messages. Experiments with increasing numbers of link failures reveal that as failures increase, the bandwidth used for rerouting and the transmission time also increase. This implies that failures increase bandwidth usage, which leads to transmission delay and in turn slows down message rerouting. The proposed framework performs better than the popular Dijkstra algorithm and the proactive, adaptive and reactive models in terms of throughput, packet delivery ratio (PDR), speed of transmission, transmission delay and running time. According to the simulation results, the capacity efficient ACS has a PDR of 0.89, the Dijkstra model 0.86, the reactive model 0.83, the proactive model 0.83, and the adaptive model 0.81. A further evaluation compared the proposed model's running time with that of the other evaluated routing models: the capacity efficient ACS model has an average running time of 169.89 ms, while the adaptive model runs in 1837 ms and Dijkstra in 280.62 ms. With these results, the capacity efficient ACS outperforms the other evaluated routing algorithms in terms of PDR and running time. In terms of mean throughput, the capacity efficient EGA achieves 621.6, Dijkstra 619.3, the proactive model (DSDV) 555.9, and the reactive model (AODV) 501.0. Since Dijkstra is closest to the proposed models in performance, the capacity efficient EGA was compared with Dijkstra: Dijkstra has a running time of 3.8908 ms and EGA 3.6968 ms. In terms of running time and mean throughput, the capacity efficient EGA therefore also outperforms the other evaluated routing algorithms. The alternative paths generated in these investigations demonstrate that the proposed framework prevents data loss in transit and ameliorates the congestion that results from multiple failures and server overload. The optimal solution paths will in turn improve business activities through quality data communications for wireless service providers.
    School of Computing, Ph.D. (Computer Science)
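    To illustrate the kind of rerouting the ant colony system (ACS) model performs, the sketch below lets a small ant colony search for an alternative path around a failed link. The topology, pheromone parameters and failed link are hypothetical; this is a minimal illustration of the technique, not the thesis framework.

    # Hypothetical example: ant colony search for an alternative path after a
    # link failure on a small wireless topology.
    import random

    graph = {  # adjacency list with link costs (e.g. delay)
        "S": {"A": 1, "B": 2},
        "A": {"S": 1, "B": 1, "C": 2},
        "B": {"S": 2, "A": 1, "C": 2, "D": 3},
        "C": {"A": 2, "B": 2, "T": 1},
        "D": {"B": 3, "T": 2},
        "T": {"C": 1, "D": 2},
    }
    failed = {("A", "C"), ("C", "A")}       # the failed link, both directions

    tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone per link
    ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.1, 1.0

    def walk(src, dst):
        """One ant builds a loop-free path that avoids failed links."""
        path, node = [src], src
        while node != dst:
            choices = [(v, c) for v, c in graph[node].items()
                       if v not in path and (node, v) not in failed]
            if not choices:
                return None                 # dead end: the ant gives up
            weights = [tau[(node, v)] ** ALPHA * (1.0 / c) ** BETA for v, c in choices]
            node = random.choices([v for v, _ in choices], weights=weights)[0]
            path.append(node)
        return path

    def cost(path):
        return sum(graph[u][v] for u, v in zip(path, path[1:]))

    best = None
    for _ in range(200):                    # colony iterations
        p = walk("S", "T")
        if p is None:
            continue
        if best is None or cost(p) < cost(best):
            best = p
        for k in tau:                       # evaporate pheromone everywhere
            tau[k] *= (1 - RHO)
        for u, v in zip(p, p[1:]):          # deposit pheromone on the ant's path
            tau[(u, v)] += Q / cost(p)

    print("alternative path after the failure:", best, "cost:", cost(best))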

    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring tremendous amounts of data to centralized cloud servers, fog/edge computing can reduce processing delay and network traffic significantly. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure. Aiming to explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects 10 articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, and can help readers grasp recent trends as well as state-of-the-art algorithms in fog/edge computing technologies.
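    As a minimal sketch of the latency argument above, the snippet below estimates end-to-end delay for processing a task at a nearby edge node versus a distant cloud data centre; the round-trip times, bandwidths, compute capacities and task sizes are assumed figures for illustration only.

    # Hypothetical example: comparing task latency at an edge node vs. the cloud.
    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        rtt_ms: float         # network round-trip time to the site
        uplink_mbps: float    # available uplink bandwidth
        cpu_gops: float       # compute capacity (giga-operations per second)

    def task_latency_ms(site: Site, data_mb: float, work_gops: float) -> float:
        transfer = data_mb * 8 / site.uplink_mbps * 1000   # upload time (ms)
        compute = work_gops / site.cpu_gops * 1000         # processing time (ms)
        return site.rtt_ms + transfer + compute

    edge = Site("edge node", rtt_ms=5, uplink_mbps=200, cpu_gops=50)
    cloud = Site("cloud DC", rtt_ms=80, uplink_mbps=50, cpu_gops=500)

    for site in (edge, cloud):
        print(site.name, round(task_latency_ms(site, data_mb=4, work_gops=20), 1), "ms")

    With these assumed numbers the edge node comes out ahead despite its slower processor, because the saving in transfer and round-trip time outweighs the extra compute time.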

    Performance controls for distributed telecommunication services

    As the Internet and Telecommunications domains merge, open telecommunication service architectures such as TINA, PARLAY and PINT are becoming prevalent. Distributed Computing is a common engineering component in these technologies and promises to bring improvements to the scalability, reliability and flexibility of telecommunications service delivery systems. This distributed approach to service delivery introduces new performance concerns. As service logic is decomposed into software components and distributed across network resources, significant additional resource loading is incurred due to inter-node communications. This makes the choice of distribution of components in the network, and the distribution of load between these components, critical design and operational issues which must be resolved to guarantee a high level of service for the customer and a profitable network for the service operator. Previous research in the computer science domain has addressed optimal placement of components from the perspectives of minimising run time, minimising communications costs or balancing load between network resources. This thesis proposes a more extensive optimisation model which, we argue, is more useful for addressing concerns pertinent to the telecommunications domain. The model focuses on providing optimal throughput and profitability of network resources and on overload protection, whilst allowing flexibility in terms of the cost of installing component copies and differentiation in the treatment of service types, in terms of fairness to the customer and profitability to the operator. Both static (design-time) component distribution and dynamic (run-time) load distribution algorithms are developed using Linear and Mixed Integer Programming techniques. An efficient, but sub-optimal, run-time solution employing Market-based control is also proposed. The performance of these algorithms is investigated using a simulation model of a distributed service platform, based on TINA service components interacting with the Intelligent Network through gateways. Simulation results are verified using Layered Queuing Network analytic modelling. Results show significant performance gains over simpler methods of performance control and demonstrate how trade-offs between network profitability, fairness and network cost are possible.
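    As a rough sketch of the run-time load distribution problem described above (not the thesis model itself), the following Python/PuLP linear programme splits offered service traffic across component copies hosted on different nodes so as to maximise revenue-weighted throughput while respecting node capacities, i.e. providing overload protection. The services, revenues, per-call processing costs and node capacities are assumed values.

    # Hypothetical example: LP distributing service load across nodes to maximise
    # revenue without overloading any node.
    import pulp

    revenue = {"serviceA": 5.0, "serviceB": 3.0}        # revenue per admitted call
    offered = {"serviceA": 120.0, "serviceB": 200.0}    # offered load (calls/s)
    capacity = {"node1": 150.0, "node2": 100.0}         # node capacity (units/s)
    load = {("serviceA", "node1"): 1.0, ("serviceA", "node2"): 1.2,   # processing
            ("serviceB", "node1"): 0.8, ("serviceB", "node2"): 0.7}   # units per call

    prob = pulp.LpProblem("load_distribution", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("rate", load.keys(), lowBound=0)        # calls/s routed

    prob += pulp.lpSum(revenue[s] * x[(s, n)] for s, n in load)       # maximise revenue
    for s in revenue:                                  # admit no more than is offered
        prob += pulp.lpSum(x[(s, n)] for n in capacity) <= offered[s]
    for n in capacity:                                 # overload protection per node
        prob += pulp.lpSum(load[(s, n)] * x[(s, n)] for s in revenue) <= capacity[n]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for (s, n), var in x.items():
        print(f"{s} -> {n}: {var.value():.1f} calls/s")
    print("revenue per second:", pulp.value(prob.objective))

    Adding binary variables for installing component copies, with associated installation costs, would extend this run-time LP toward the static, design-time placement problem that the thesis also addresses with Mixed Integer Programming.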

    Expanding the Horizons of Manufacturing: Towards Wide Integration, Smart Systems and Tools

    This research topic aims at enterprise-wide modeling and optimization (EWMO) through the development and application of integrated modeling, simulation and optimization methodologies, and computer-aided tools, for reliable and sustainable improvement opportunities within the entire manufacturing network (raw materials, production plants, distribution, retailers, and customers) and its components. This integrated approach incorporates information from the local primary control and supervisory modules into the scheduling/planning formulation. It thereby makes it possible to react dynamically to incidents that occur in the network components at the appropriate decision-making level, requiring fewer resources, emitting less waste, and allowing better responsiveness to changing market requirements and operational variations, thus reducing cost, waste, energy consumption and environmental impact while increasing the benefits. More recently, the integration of new technologies, such as semantic models and formal knowledge models, has allowed domain, human, and expert knowledge to be captured and utilized toward comprehensive intelligent management. In addition, the development of advanced technologies and tools, such as cyber-physical systems, the Internet of Things, the Industrial Internet of Things, Artificial Intelligence, Big Data, Cloud Computing, and Blockchain, has captured the attention of manufacturing enterprises and steered them toward intelligent manufacturing systems.