
    Optimising Networks For Ultra-High Definition Video

    The increase in real-time ultra-high definition video services is a challenging issue for current network infrastructures. The high-bitrate traffic generated by ultra-high definition content reduces the effectiveness of current live video distribution systems. Transcoders and application layer multicasting (ALM) can reduce traffic in a video delivery system, but both are limited by the static nature of their implementations. To overcome the restrictions of current static video delivery systems, an OpenFlow-based migration system is proposed. This system enables an almost seamless migration of a transcoder or ALM node while delivering real-time ultra-high definition content. In addition, a novel heuristic algorithm is presented to optimise the control of migration events and their destination. The combination of the migration system and the heuristic algorithm provides an improved video delivery system, capable of migrating resources during operation with minimal disruption to clients. With the rise in popularity of consumer-based live streaming, it is necessary to develop and improve architectures that can support these new types of applications. Current architectures introduce a large delay to video streams, which is problematic for certain applications. To overcome this, an improved infrastructure for delivering real-time streams is also presented. The proposed system uses OpenFlow within a content delivery network (CDN) architecture to improve several aspects of current CDNs. Aside from the reduction in stream delay, other improvements include switch-level multicasting to reduce duplicate traffic and smart load balancing for server resources. Furthermore, a novel max-flow algorithm is also presented. This algorithm aims to optimise traffic within a system such as the proposed OpenFlow CDN, with a focus on distributing traffic across the network in order to reduce the probability of blocking.
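The abstract does not specify the novel max-flow algorithm itself; as a point of reference, the classical Edmonds-Karp method sketched below illustrates the basic max-flow computation that such traffic-distribution algorithms build on. All names and the example graph are illustrative, not taken from the thesis.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.

    capacity: dict mapping (u, v) -> remaining capacity (mutated in place).
    Returns the total flow pushed from source to sink.
    """
    # Build adjacency and add zero-capacity reverse edges for the residual graph.
    adj = {}
    for (u, v) in list(capacity):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
        capacity.setdefault((v, u), 0)

    total = 0
    while True:
        # BFS for the shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and capacity[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total  # no augmenting path left: flow is maximum
        # Walk back from the sink to find the bottleneck, then update residuals.
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[e] for e in path)
        for (u, v) in path:
            capacity[(u, v)] -= bottleneck
            capacity[(v, u)] += bottleneck
        total += bottleneck
```

A routing optimiser can run this per content stream to check how much traffic a topology admits before requests start blocking.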

    Local Market Mechanisms: how Local Markets can shape the Energy Transition

    Europe has embarked on a journey towards a zero-emission system, with the power system at its core. From electricity generation to electric vehicles, the European power system must transform into an interconnected, intelligent network. To achieve this vision, active user participation is crucial, ensuring transparency, efficiency, and inclusivity. Europe has therefore increasingly focused on the concept of markets in all their facets. This thesis seeks to answer the following questions: How can markets, often considered abstract and accessible only to high-level users, be integrated for end-users? How can market mechanisms be leveraged across the various phases of the electrical system? Why is a market-driven approach essential for resolving network congestion and even for influencing planning? These questions shape the core of this research. The analysis unfolds in three layers, each aligned with milestones leading to 2050. The first explores how market mechanisms can be integrated into system operator development plans, enhancing system resilience in the face of change. This step addresses how a market can be integrated into the development plans of a network and how network planning can account for uncertainties. The analysis then highlights the importance of sector coupling in network planning, proposing a study in which various energy vectors lead to a multi-energy system. In line with the roadmap to 2030, this layer demonstrates how markets can manage several components of the gas and electrical networks. The results show that, although robust optimisation increases the final market cost, it shields the system operator from uncertainties. The second step delves into the concept of network congestion.
While congestion management is primarily the domain of operators, this step explores how technical and economic collaboration between operators and system users, via flexibility markets, can enhance resilience amid demand uncertainties and aggressive market behaviour. In addition to flexibility markets, other congestion markets are proposed, some radically different, such as locational marginal pricing, and others more innovative, such as redispatching markets for distribution. Building upon the first analysis, this section addresses how various energy vectors can be used not only to meet demand but also to manage the uncertainties associated with each resource. This second part therefore revisits the concept of sector coupling, demonstrating how various energy vectors can be managed through flexibility markets to resolve network congestion while simultaneously handling the uncertainties related to different vectors. The results demonstrate the usefulness of the flexibility market in managing sector coupling and the uncertainties of the various energy vectors. The third and most innovative step proposes energy and service markets for low-voltage users, employing distributed ledger technology. Since this step addresses topics that are still too novel to be deployed in practice, it offers a comparative study between centralised and decentralised markets using blockchain technology, highlighting which aspects of distributed ledger technology deserve attention and which aspects of low-voltage markets need revision. The results show that blockchain technology is still at an early stage of its evolution, and several improvements are needed before it can be fully applied in real-world settings. To sum up, this thesis explores the evolving role of markets in the energy transition. Its insights are aimed at assisting system operators and network planners in effectively integrating market mechanisms at all levels of
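As a rough illustration of the market mechanisms this abstract discusses, the sketch below clears a uniform-price merit-order auction, the textbook building block behind day-ahead and redispatching markets. It is a generic mechanism, not the thesis's model; the function name, offer format, and figures are all illustrative.

```python
def clear_market(offers, demand):
    """Uniform-price merit-order clearing.

    offers: list of (price, quantity) supply offers.
    Accepts the cheapest offers until demand is covered and returns
    (clearing_price, accepted), where accepted maps offer index -> quantity
    and the marginal (last accepted) offer sets the uniform price.
    """
    order = sorted(range(len(offers)), key=lambda i: offers[i][0])
    accepted, remaining, price = {}, demand, None
    for i in order:
        if remaining <= 0:
            break
        p, q = offers[i]
        take = min(q, remaining)
        accepted[i] = take
        remaining -= take
        price = p  # marginal offer sets the clearing price
    if remaining > 0:
        raise ValueError("insufficient supply to cover demand")
    return price, accepted
```

Congestion-oriented variants (locational marginal pricing, redispatch) repeat this clearing per node or per constrained zone instead of once system-wide.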

    Impact of peer-to-peer trading and flexibility on local energy systems

    To meet the 2050 net-zero emission targets, energy systems around the globe are being revisited to achieve multi-vector decarbonisation across electricity, transport, heating and cooling. As energy systems become more decentralised and digitised, local energy systems will have greater potential to self-sustain and hence decrease reliance on fossil-fuelled central generation. While the uptake of electric vehicles, heat pumps, solar and battery systems offers a solution, the resulting increase in electricity demand poses challenges in terms of higher peak demand, imbalance and overloading. Additionally, the current energy market structure prevents these assets in the distribution network from reaching their true techno-economic potential in flexibility services and energy trading. Peer-to-peer energy trading and community-level control algorithms achieve better matching of local demand and supply through transactive energy markets, load shifting and peak-shaving techniques. Some existing research addresses the challenges of local energy markets, while other work investigates the effect of increased distributed assets on the network. However, capturing the combined techno-economic effect requires the co-simulation of both market and network levels, coupled with simultaneous consideration of system balance, cost and carbon intensity. Using bottom-up coordination and user-centric optimisation, this project investigated the potential of network-aware peer-to-peer trading and community-level control to increase self-sufficiency and self-consumption in energy communities. The techno-economic effects of these strategies are modelled while maintaining user comfort levels and healthy operation of the network and assets. The proposed strategies are evaluated according to their economic benefit, environmental impact and network stress.
A case study in Scotland was employed to demonstrate the benefits of peer-to-peer trading and community self-consumption using future projections of demand, generation and storage. Additionally, the concept of energy smart contracts, embedded in blockchains, is proposed and demonstrated to overcome the major challenges of monitoring and contracting. The results indicate benefits for various energy system stakeholders. Distribution system end-users benefit from lower energy costs, while system operators obtain better visibility of local-level flexibility along with the associated technical challenges in terms of losses, imbalance and loading. From a commercial perspective, community energy companies may use this study to inform investment decisions regarding storage, distributed generation and transactive market solutions. Additionally, the insights about energy smart contracts allow blockchain and related technology sectors to recognise the opportunities and challenges of smart contracts and distributed ledger technologies that are specific to the energy sector. On a broader scale, energy system operators, regulators and high-level decision-makers can compare the simulated impact of a community-led energy transition on the net-zero goals with large-scale top-down initiatives.
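The matching step of a peer-to-peer energy market can be illustrated with a simple continuous double auction: the highest buy bids meet the lowest sell offers while a mutually beneficial price exists. This is a generic sketch, not the project's actual trading algorithm; the midpoint pricing rule and all names are assumptions.

```python
def match_p2p(bids, asks):
    """Greedy double-auction matching of prosumer bids and offers.

    bids: list of (buyer, price, kwh) buy bids.
    asks: list of (seller, price, kwh) sell offers.
    A trade clears at the bid/ask midpoint while the best bid >= best ask.
    Returns a list of (buyer, seller, kwh, price) trades.
    """
    bids = sorted(bids, key=lambda b: -b[1])  # highest willingness to pay first
    asks = sorted(asks, key=lambda a: a[1])   # cheapest supply first
    trades, i, j = [], 0, 0
    while i < len(bids) and j < len(asks):
        buyer, bp, bq = bids[i]
        seller, sp, sq = asks[j]
        if bp < sp:
            break  # no further mutually beneficial trades
        qty = min(bq, sq)
        trades.append((buyer, seller, qty, (bp + sp) / 2))
        bids[i] = (buyer, bp, bq - qty)
        asks[j] = (seller, sp, sq - qty)
        if bids[i][2] == 0:
            i += 1
        if asks[j][2] == 0:
            j += 1
    return trades
```

A network-aware variant would additionally reject or curtail trades whose power flows violate feeder limits before committing them.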

    Energy Efficiency of P2P and Distributed Clouds Networks

    Since its inception, the Internet has witnessed two major approaches to delivering digital content to end users: peer-to-peer (P2P) and client/server (C/S) networks. Both approaches require high-bandwidth, low-latency physical underlying networks to meet users' escalating demands. Network operators typically have to overprovision their systems to guarantee acceptable quality of service (QoS) and availability while delivering content. However, more physical devices have led to more ICT power consumption over the years. An effective approach to confronting these challenges is to jointly optimise the energy consumption of content providers and transport networks. This thesis proposes a number of energy-efficient mechanisms to optimise BitTorrent-based P2P networks and cloud-based C/S content distribution over IP/WDM core optical networks. For P2P systems, a mixed integer linear programming (MILP) optimisation, two heuristics and an experimental testbed are developed to minimise the power consumption of IP/WDM networks that deliver traffic generated by an overlay layer of homogeneous BitTorrent users. The approach optimises peer selection, where the goal is to minimise IP/WDM network power consumption while maximising peers' download rates. The results are compared to typical C/S systems. We also considered heterogeneous BitTorrent peers and developed models that optimise P2P systems to compensate for the different behaviour of peers after they finish downloading. We investigated the impact of the core network physical topology on the energy efficiency of BitTorrent systems. We also investigated the power consumption of Video on Demand (VoD) services using CDN, P2P and hybrid CDN-P2P architectures over IP/WDM networks and addressed content providers' efforts to balance the load among their data centres. For cloud systems, a MILP and a heuristic were developed to minimise the content-delivery-induced power consumption of both the clouds and the IP/WDM network.
This was done by optimally determining the number, location and internal capability, in terms of servers, LAN and storage, of each cloud, subject to daily traffic variation. Different replication schemes were studied, revealing that replicating content into multiple clouds based on content popularity is the optimal approach with respect to energy. The model was extended to study Storage as a Service (StaaS). We also studied the problem of virtual machine placement in IP/WDM networks and showed that VM slicing is the best approach, compared to migration and replication schemes, for minimising energy. Finally, we investigated the use of renewable energy sources, represented by solar cells and wind farms, in BitTorrent networks and content delivery clouds, respectively. Comprehensive modelling and simulation as well as experimental demonstrations were developed, leading to key contributions in the field of energy-efficient telecommunications.
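The popularity-driven replication finding can be illustrated with a toy energy trade-off: replicate an item into every cloud only when the transport energy saved by serving requests locally outweighs the storage energy of the extra copies. The function, parameters and figures below are illustrative simplifications, not the thesis's MILP formulation.

```python
def plan_replicas(items, clouds, transport_j_per_req, storage_j):
    """Popularity-driven replication sketch.

    items: dict of content name -> requests per day.
    clouds: number of candidate cloud sites.
    transport_j_per_req: energy (J) to haul one request across the core.
    storage_j: daily energy (J) to store one extra replica.
    With a single copy, a fraction (clouds-1)/clouds of requests are
    remote on average; full replication makes them all local hits.
    Returns dict of name -> replica count (1 or clouds in this toy model).
    """
    plan = {}
    for name, requests in items.items():
        saved = requests * transport_j_per_req * (clouds - 1) / clouds
        cost = storage_j * (clouds - 1)
        plan[name] = clouds if saved > cost else 1
    return plan
```

Popular ("hot") items thus earn wide replication while rarely requested items stay at one site, which mirrors the popularity-based result reported above.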

    Optimising Structured P2P Networks for Complex Queries

    With network-enabled consumer devices becoming increasingly popular, the number of connected devices and available services is growing considerably, with the number of connected devices estimated to surpass 15 billion by 2015. In this increasingly large and dynamic environment it is important that users have a comprehensive, yet efficient, mechanism to discover services. Many existing wide-area service discovery mechanisms are centralised and do not scale to large numbers of users. Additionally, centralised services suffer from issues such as a single point of failure, high maintenance costs, and difficulty of management. As such, this Thesis takes a Peer to Peer (P2P) approach. Distributed Hash Tables (DHTs) are well known for their high scalability, low financial barrier to entry, and ability to self-manage. They can be used to provide not just a platform on which peers can offer and consume services, but also a means for users to discover such services. Traditionally, DHTs provide a distributed key-value store with no search functionality. In recent years many P2P systems have been proposed that support a subset of complex query types, such as keyword search, range queries, and semantic search. This Thesis presents a novel algorithm for performing any type of complex query, from keyword search, to complex regular expressions, to full-text search, over any structured P2P overlay. This is achieved by efficiently broadcasting the search query, allowing each peer to process the query locally, and then efficiently routing responses back to the originating peer. Through experimentation, this technique is shown to be successful when the network is stable; however, performance degrades under high levels of network churn. To address the issue of network churn, this Thesis proposes a number of enhancements that can be made to existing P2P overlays in order to improve the performance of both the existing DHT and the proposed algorithm.
Through two case studies, these enhancements are shown to improve not only the performance of the proposed algorithm under churn, but also the performance of traditional lookup operations in these networks.
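The broadcast-then-gather idea can be sketched as a tree-structured traversal: each peer evaluates the query locally and delegates disjoint halves of the remaining ID range to two children, so the query reaches all n peers in O(log n) forwarding depth and results flow back up the same tree. This is a simplified stand-in for the thesis's overlay-specific algorithm; the node representation and `match` predicate are assumptions.

```python
def broadcast(nodes, query, match):
    """Tree-structured broadcast of a complex query over a structured overlay.

    nodes: list of peers (here plain dicts with an "items" list),
    ordered as they appear on the overlay's ID ring.
    match(item, query) -> bool is evaluated locally at each peer.
    Returns all matching items, gathered back to the originator.
    """
    def visit(peers):
        if not peers:
            return []
        head, rest = peers[0], peers[1:]
        # Local evaluation at this peer: any predicate works, which is
        # what allows regex or full-text queries rather than exact keys.
        results = [item for item in head["items"] if match(item, query)]
        mid = len(rest) // 2
        # Delegate disjoint halves of the remaining range to two children.
        return results + visit(rest[:mid]) + visit(rest[mid:])
    return visit(nodes)
```

Because every peer runs the predicate locally, the overlay never needs to index query-specific keys, at the cost of contacting all peers once per query.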

    A new priority rule cloud scheduling technique that utilizes gaps to increase the efficiency of jobs distribution

    In recent years, the concept of cloud computing has gained traction as a way to provide dynamically scalable access to shared computing resources (software and hardware) via the Internet. Cloud computing's ability to supply mission-critical services has made job scheduling a central subject in the field. However, the efficient utilisation of cloud resources remains a challenge, often resulting in wasted capacity or degraded service performance due to poor scheduling. Existing research has focused on queue-based job scheduling techniques, where jobs are scheduled based on specific deadlines or job lengths. Numerous researchers have sought to improve existing Priority Rule (PR) cloud schedulers by developing dynamic scheduling algorithms, but these have fallen short on user-satisfaction metrics such as flowtime, makespan, and total tardiness. These limitations of current PR schedulers are mainly caused by blocking from jobs at the head of the queue, and they lead to poor performance of cloud-based mobile applications and other cloud services. To address this issue, the main objective of this research is to improve existing PR cloud schedulers by developing a new dynamic scheduling algorithm that exploits the gaps in the cloud job schedule. In this thesis, a Priority-Based Fair Scheduling (PBFS) algorithm is first introduced to schedule jobs so that they gain access to the required resources at optimal times. Then, a backfilling strategy called Shortest Gap Priority-Based Fair Scheduling (SG-PBFS) is proposed, which attempts to exploit the gaps in the schedule of cloud jobs.
Finally, the performance evaluation demonstrates that the proposed SG-PBFS algorithm outperforms SG-SJF, SG-LJF, SG-FCFS, SG-EDF, and SG-(MAX-MIN) in terms of flow time, makespan time, and total tardiness, which conclusively demonstrates its effectiveness. The experimental results show that, for 500 jobs, the flow time, makespan time, and tardiness of SG-PBFS are 9%, 4%, and 7% lower, respectively, than those of PBFS.
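The shortest-gap idea behind SG-PBFS can be sketched as best-fit placement into free schedule gaps: each job goes into the smallest gap it fits in, so large gaps stay available for later long jobs and head-of-queue blocking is reduced. This single-resource simplification is hypothetical and omits the priority rules, deadlines and cloud model of the actual algorithm.

```python
def schedule_with_gaps(jobs, horizon):
    """Shortest-gap (best-fit) backfilling sketch.

    jobs: list of (job_id, length) already ordered by priority.
    horizon: length of the scheduling window on a single resource.
    Gaps are free intervals (start, end) on the timeline; each job is
    placed at the start of the *smallest* gap that fits it.
    Returns dict of job_id -> start time for the jobs placed this round.
    """
    gaps = [(0, horizon)]
    placement = {}
    for job_id, length in jobs:
        fits = [g for g in gaps if g[1] - g[0] >= length]
        if not fits:
            continue  # job waits for a future scheduling round
        start, end = min(fits, key=lambda g: g[1] - g[0])
        placement[job_id] = start
        gaps.remove((start, end))
        if start + length < end:
            gaps.append((start + length, end))  # leftover gap stays usable
    return placement
```

In a real backfilling scheduler the gaps come from reservations of already-scheduled jobs rather than a single empty window, but the best-fit selection step is the same.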

    Design of a collaborative architectural model for pervasive computing at the edge of mobile networks

    Advances in peer-to-peer and wireless communication technologies have increasingly enabled the integration of mobile and pervasive devices into distributed systems and computing architectures under the Internet of Things paradigm. These devices are subject to continuous technological development and tend to be further miniaturised with each generation, over which they come to be regarded as de facto devices. The fruit of this progress is the emergence of collaborative, mobile and pervasive computing, notably integrated into the architectural models of the Internet of Things. The most important benefit of this form of computing is the ease of connecting a large number of pervasive and portable devices on the move across the different networks available. Despite these continual advances, mobile and pervasive intelligent systems (networks, devices, software and connection technologies) still suffer from various limitations at several levels, such as maintaining connectivity, computing power, data storage capacity, communication speed, the lifetime of power sources, and the efficiency of processing large tasks in terms of partitioning, scheduling and load balancing. The accelerated technological development of the equipment and devices in these mobile models is always accompanied by their intensive use. Given this reality, more effort is needed both in their structural design, in hardware and software alike, and in the way they are managed. This involves improving, on the one hand, the architecture of these models and their communication technologies and, on the other hand, the scheduling and load-balancing algorithms that allow their workloads to run efficiently on their devices. Our goal is to make these pervasive models more autonomous, intelligent and collaborative by strengthening the capabilities of their devices, their connectivity technologies and the applications that perform their tasks. To this end, we established an autonomous, pervasive and collaborative architectural model for the edge of networks. This model relies on various modern connection technologies such as wireless, peer-to-peer radio communication, and the technologies offered by Pycom's LoPy4, such as LoRa, BLE, Wi-Fi, radio Wi-Fi and Bluetooth. The integration of these technologies makes it possible to maintain continuity of communication in various environments, even the most severe. Within this model, we also designed and evaluated a load-balancing and scheduling algorithm to strengthen and improve its efficiency and quality of service (QoS) in different environments. The evaluation of this architectural model shows benefits such as improved connectivity and more efficient task execution.
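The load-balancing component of such an edge model can be illustrated by least-loaded assignment over heterogeneous devices: each task goes to the device with the lowest load-to-capacity ratio, keeping weak and strong devices proportionally busy. This is a generic sketch under assumed names and capacities, not the algorithm evaluated in the thesis.

```python
def assign_tasks(tasks, devices):
    """Least-loaded task assignment across heterogeneous edge devices.

    tasks: list of (task_id, cost) compute costs.
    devices: dict of device name -> relative capacity (compute power).
    Larger tasks are placed first; each goes to the device with the
    lowest current load/capacity ratio.
    Returns dict of task_id -> device name.
    """
    load = {d: 0.0 for d in devices}
    assignment = {}
    for task_id, cost in sorted(tasks, key=lambda t: -t[1]):
        # Pick the device that is proportionally least busy right now.
        target = min(load, key=lambda d: load[d] / devices[d])
        assignment[task_id] = target
        load[target] += cost
    return assignment
```

A deployed scheduler would refresh the load figures from device telemetry and also weigh link quality, since edge devices may reach the coordinator over different radio technologies.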