
    Cloud Computing cost and energy optimization through Federated Cloud SoS

    The two most significant differentiators amongst contemporary Cloud Computing service providers are increased green energy use and datacenter resource utilization. This work addresses these two issues from a systems-architecture optimization viewpoint. The proposed approach allows multiple cloud providers to utilize their individual computing resources in three ways: (1) cutting the number of datacenters needed, (2) scheduling available datacenter grid energy via aggregators to reduce costs and power outages, and (3) utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, the proposed approach creates an alternative paradigm for a Federated Cloud SoS. This paradigm employs a novel control methodology tuned to obtain both financial and environmental advantages, and it supports dynamic expansion and contraction of computing capabilities, both for handling sudden variations in service demand and for maximizing use of time-varying green energy supplies. The work analyzes the core SoS requirements, concept synthesis, and functional architecture with an eye on avoiding inadvertent cascading conditions, and suggests a physical architecture that simulates the primary SoS emergent behavior to diminish unwanted outcomes while encouraging desirable results. In this approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means. The report also analyzes optimal computing generation methods, optimal energy utilization for computing, and a procedure for building optimal datacenters using a unique hardware computing system design based on the Open Compute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation must support to protect its constituents, its constituents' tenants, and itself from security risks.
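
A minimal sketch of the kind of green-energy- and cost-aware scheduling such a federation could perform is shown below; the datacenter attributes, weights and selection rule are illustrative assumptions, not the dissertation's actual control methodology.

```python
# Illustrative sketch: pick the federated datacenter for a workload by
# weighing grid energy cost against available renewable supply.
# All attributes and weights are hypothetical, not taken from the thesis.
from dataclasses import dataclass

@dataclass
class Datacenter:
    name: str
    grid_price: float        # $/kWh offered via the energy aggregator
    renewable_kw: float      # currently available green capacity (kW)
    free_capacity_kw: float  # spare compute capacity expressed in kW

def select_datacenter(datacenters, load_kw, cost_weight=0.6, green_weight=0.4):
    """Return the feasible datacenter with the best cost/green trade-off."""
    feasible = [d for d in datacenters if d.free_capacity_kw >= load_kw]
    if not feasible:
        return None  # the federation would defer or shed the workload
    def score(d):
        green_fraction = min(d.renewable_kw / load_kw, 1.0)
        # Lower grid price and a higher green fraction are both better.
        return cost_weight * d.grid_price - green_weight * green_fraction
    return min(feasible, key=score)

if __name__ == "__main__":
    fleet = [
        Datacenter("dc-east", grid_price=0.11, renewable_kw=40, free_capacity_kw=120),
        Datacenter("dc-west", grid_price=0.09, renewable_kw=150, free_capacity_kw=90),
    ]
    print(select_datacenter(fleet, load_kw=80).name)  # -> dc-west
```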

    Edge computing infrastructure for 5G networks: a placement optimization solution

    This thesis focuses on how to optimize the placement of the Edge Computing infrastructure for upcoming 5G networks. To this aim, the core contributions of this research are twofold: 1) a novel heuristic called Hybrid Simulated Annealing to tackle the NP-hard nature of the problem and 2) a framework called EdgeON providing a practical tool for real-life deployment optimization. In more detail, Edge Computing has grown into a key solution for meeting 5G latency, reliability and scalability requirements. By bringing computing, storage and networking resources to the edge of the network, delay-sensitive applications, location-aware systems and upcoming real-time services leverage the benefits of a reduced physical and logical path between the end-user and the data or service host. Nevertheless, the edge node placement problem raises critical concerns regarding deployment and operational expenditures (mainly due to the number of nodes to be deployed), current backhaul network capabilities and non-technical placement limitations. Common approaches to the placement of edge nodes are based on Mobile Edge Computing (MEC), where the processing capabilities are deployed at the Radio Access Network nodes, and on Facility Location Problem variations, where a simplistic cost function is used to determine where to optimally place the infrastructure. However, these methods typically lack the flexibility to be used for edge node placement under the strict technical requirements identified for 5G networks, and they fail to place resources at the network edge for 5G ultra-dense networking environments in a network-aware manner. This doctoral thesis rigorously defines the Edge Node Placement Problem (ENPP) for 5G use cases and proposes a novel framework called EdgeON aimed at reducing the overall expenses of deploying and operating an Edge Computing network, taking into account the usage and characteristics of the in-place backhaul network and the strict requirements of a 5G-EC ecosystem. The developed framework implements several placement and optimization strategies, thoroughly assessing their suitability for solving the network-aware ENPP. The core of the framework is an in-house developed heuristic called Hybrid Simulated Annealing (HSA), which seeks to address the high complexity of the ENPP while avoiding the non-convergent behavior that other traditional heuristics exhibit when applied to similar problems. The findings of this work validate our approach to solving the network-aware ENPP, the effectiveness of the proposed heuristic and the overall applicability of EdgeON. Thorough performance evaluations were conducted on the core placement solutions implemented, revealing the superiority of HSA when compared to widely used heuristics and common edge placement approaches (e.g., a MEC-based strategy). Furthermore, the practicality of EdgeON was tested through two main case studies placing services and virtual network functions over the previously optimally placed edge nodes. Overall, our proposal is an easy-to-use, effective and fully extensible tool that can be used by operators seeking to optimize the placement of computing, storage and networking infrastructure in the users' vicinity.
Therefore, our main contributions not only set strong foundations towards a cost-effective deployment and operation of an Edge Computing network, but also directly impact the feasibility of upcoming 5G services/use cases and the extensive existing research regarding the placement of services and even network service chains at the edge.
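
To make the placement idea concrete, below is a minimal, generic simulated-annealing sketch for choosing edge sites from a set of candidate locations; the candidate sites, cost function, neighbourhood move and cooling schedule are illustrative placeholders and do not reproduce the thesis's Hybrid Simulated Annealing heuristic or the EdgeON cost model.

```python
# Generic simulated annealing for edge node placement (illustrative only).
# The 1-D candidate sites and the toy cost model are hypothetical stand-ins
# for the network-aware ENPP formulation described in the thesis.
import math
import random

def placement_cost(selected, demands, deploy_cost=1.0):
    """Toy cost: deployment cost per node plus each demand's distance
    to its nearest selected site (a proxy for latency/backhaul load)."""
    if not selected:
        return float("inf")
    latency = sum(min(abs(d - s) for s in selected) for d in demands)
    return deploy_cost * len(selected) + latency

def simulated_annealing(candidates, demands, iters=5000, t0=10.0, cooling=0.999):
    current = set(random.sample(candidates, k=max(1, len(candidates) // 3)))
    best, best_cost = set(current), placement_cost(current, demands)
    temp = t0
    for _ in range(iters):
        # Neighbour move: toggle one candidate site in or out of the solution.
        site = random.choice(candidates)
        neighbour = set(current)
        neighbour.symmetric_difference_update({site})
        delta = placement_cost(neighbour, demands) - placement_cost(current, demands)
        # Accept improvements always, worse moves with temperature-dependent probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbour
            cost = placement_cost(current, demands)
            if cost < best_cost:
                best, best_cost = set(current), cost
        temp *= cooling
    return best, best_cost

if __name__ == "__main__":
    random.seed(0)
    sites = list(range(0, 100, 5))                 # candidate edge locations
    users = [random.randint(0, 99) for _ in range(200)]  # demand points
    print(simulated_annealing(sites, users))
```

The probabilistic acceptance of worse solutions is what lets the search escape local minima, which is the behaviour the abstract contrasts with non-convergent traditional heuristics.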

    Automated Service Negotiation Between Autonomous Computational Agents

    Multi-agent systems are a new computational approach for solving real world, dynamic and open system problems. Problems are conceptualized as a collection of decentralised autonomous agents that collaborate to reach the overall solution. Because of the agents' autonomy, their limited rationality, and the distributed nature of most real world problems, the key issue in multi-agent system research is how to model interactions between agents. Negotiation models have emerged as suitable candidates to solve this interaction problem due to their decentralised nature, emphasis on mutual selection of an action, and the prevalence of negotiation in real social systems. The central problem addressed in this thesis is the design and engineering of a negotiation model for autonomous agents for sharing tasks and/or resources. To solve this problem a negotiation protocol and a set of deliberation mechanisms are presented which together coordinate the actions of a multiple agent system. In more detail, the negotiation protocol constrains the action selection problem solving of the agents through the use of normative rules of interaction. These rules temporally order communication utterances according to the agents' roles, specifying both who can say what, as well as when. Specifically, the presented protocol is a repeated, sequential model where offers are iteratively exchanged. Under this protocol, agents are assumed to be fully committed to their utterances, and utterances are private between the two agents. The protocol is distributed, symmetric, and supports bi- and/or multi-agent negotiation as well as distributive and integrative negotiation. In addition to coordinating the agent interactions through normative rules, a set of mechanisms is presented that coordinates the deliberation process of the agents during the ongoing negotiation. Whereas the protocol normatively describes the orderings of actions, the mechanisms describe the possible set of agent strategies in using the protocol. These strategies are captured by a negotiation architecture that is composed of responsive and deliberative decision mechanisms. Decision making with the former mechanism is based on a linear combination of simple functions called tactics, which manipulate the utility of deals. The latter mechanisms are subdivided into trade-off and issue manipulation mechanisms. The trade-off mechanism generates offers that manipulate the value, rather than the overall utility, of the offer. The issue manipulation mechanism aims to increase the likelihood of an agreement by adding and removing issues from the negotiation set. Taken together, these mechanisms represent a continuum of possible decision making capabilities: ranging from behaviours that exhibit greater awareness of environmental resources and less of solution quality, to behaviours that attempt to acquire a given solution quality independently of the resource consumption. The protocol and mechanisms are empirically evaluated and have been applied to real world task distribution problems in the domains of business process management and telecommunication management.
The main contributions and novelty of this research are: i) a domain-independent computational model of negotiation that agents can use to support a wide variety of decision making strategies, ii) an empirical evaluation of the negotiation model for a given agent architecture in a number of different negotiation environments, and iii) the application of the developed model to a number of target domains. An increased strategy set is needed because the developed protocol is less restrictive and less constrained than traditional ones, thus supporting the development of strategic interaction models better suited to open systems. Furthermore, because of the combination of the large number of environmental possibilities and the size of the set of possible strategies, the model has been empirically investigated to evaluate the success of strategies in different environments. These experiments have facilitated the development of general guidelines that can be used by designers interested in developing strategic negotiating agents. The developed model is grounded in requirement considerations from both the business process management and telecommunication application domains. It has also been successfully applied to five other real world scenarios.
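
As a concrete illustration of the responsive mechanism's "linear combination of tactics", here is a minimal sketch blending a time-dependent concession tactic with an imitative tactic for a single price issue; the tactic forms, weights and numeric ranges are illustrative assumptions rather than the exact functions defined in the thesis.

```python
# Illustrative responsive negotiation tactics for a single issue (price).
# The specific tactic forms and weights are hypothetical examples of the
# "linear combination of tactics" idea, not the thesis's definitions.

def time_dependent_offer(t, deadline, reserve, target, beta=2.0):
    """Time-dependent tactic (Boulware/Conceder family): concede from target
    towards reserve as the deadline nears (beta > 1 concedes early, beta < 1 late)."""
    alpha = min(t / deadline, 1.0) ** (1.0 / beta)
    return target + alpha * (reserve - target)

def imitative_offer(opponent_last, opponent_prev, my_last):
    """Tit-for-tat style tactic: mirror the opponent's last concession."""
    return my_last + (opponent_prev - opponent_last)

def next_offer(t, deadline, reserve, target, my_last,
               opp_last, opp_prev, w_time=0.7, w_imit=0.3):
    """Linear combination of tactics producing the next counter-offer."""
    offer = (w_time * time_dependent_offer(t, deadline, reserve, target)
             + w_imit * imitative_offer(opp_last, opp_prev, my_last))
    # A seller (target above reserve) never offers below its reservation price.
    return max(offer, reserve) if target > reserve else min(offer, reserve)

if __name__ == "__main__":
    # Hypothetical seller: target price 100, reservation price 60, 10-round deadline.
    print(round(next_offer(t=3, deadline=10, reserve=60, target=100,
                           my_last=95, opp_last=70, opp_prev=65), 2))
```

Changing the weights shifts behaviour along the continuum the abstract describes, from resource-aware (deadline-driven) concession to behaviour that conditions more strongly on the opponent's moves.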

    Improved planning and resource management in next generation green mobile communication networks

    In upcoming years, mobile communication networks will experience a disruptive reinventing process through the deployment of post-5th Generation (5G) mobile networks. Profound impacts are expected on network planning processes, maintenance and operations, on mobile services and subscribers, with major changes in their data consumption and generation behaviours, as well as on the devices themselves, with a myriad of different equipment communicating over such networks. Post-5G will be characterized by a profound transformation of several aspects: process, technological, economic and social, but also environmental, with energy efficiency and carbon neutrality playing an important role. It will represent a network of networks, where different types of access networks coexist with an increasing diversity of devices, massive cloud computing utilization and subscribers with unprecedented data-consumption behaviours, all at greater throughput and quality of service than seen in previous generations. The present research work uses the latest 5G New Radio (NR) release as a baseline for developing the research activities, with future networks beyond 5G NR in focus. Two approaches were followed: i) method re-engineering, to propose new mechanisms and overcome existing or predictably existing limitations, and ii) concept design and innovation, to propose and present innovative methods or mechanisms to enhance and improve the design, planning, operation, maintenance and optimization of 5G networks. Four main research areas were addressed, focusing on optimization and enhancement of future 5G NR networks, the usage of edge virtualized functions, subscribers' behaviour towards the generation of data, and a carbon sequestering model aiming to achieve carbon neutrality. Several contributions have been made and demonstrated, through models or methodologies that, in each of the research areas, provide significant improvements and enhancements from the planning phase to the operational phase, always focusing on optimizing resource management. All the contributions are backward compatible with 5G NR and can also be applied to what is starting to be foreseen as future mobile networks. From the subscriber's perspective, with the ultimate goal of providing the best possible quality of experience, while still considering the mobile network operator's (MNO) perspective, the different proposed or developed approaches resulted in optimization methods for the numerous problems identified throughout the work. Each of these contributes individually, and together they improve and enhance future mobile networks as a whole. Therefore, an answer to the main question was provided: how to further optimize a next-generation network, developed with optimization in mind, making it even more efficient while simultaneously becoming neutral with respect to carbon emissions. The developed model for MNOs to achieve carbon neutrality through CO2 sequestration, together with the subscriber behaviour model (topics still not deeply explored today), are two of the main contributions of this thesis and of utmost importance for post-5G networks.
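
A minimal back-of-the-envelope sketch of the kind of carbon-neutrality accounting the sequestration model implies: convert a network's annual energy draw into CO2 emissions via a grid emission factor and estimate how much sequestration capacity would offset it. The emission factor and per-tree sequestration rate below are rough illustrative ballpark figures, not values or methods from the thesis.

```python
# Toy carbon-neutrality estimate for a mobile network operator.
# The emission factor and per-tree sequestration rate are rough public
# ballpark figures used for illustration, not values from the thesis.
import math

GRID_EMISSION_FACTOR = 0.4   # assumed kg CO2 per kWh of grid electricity
TREE_SEQUESTRATION = 22.0    # assumed kg CO2 absorbed per tree per year

def annual_emissions_kg(sites, avg_site_kw, renewable_share=0.0):
    """CO2 from the non-renewable share of the network's yearly energy use."""
    energy_kwh = sites * avg_site_kw * 24 * 365
    return energy_kwh * (1.0 - renewable_share) * GRID_EMISSION_FACTOR

def trees_needed(emissions_kg):
    """Number of trees whose yearly sequestration offsets the given emissions."""
    return math.ceil(emissions_kg / TREE_SEQUESTRATION)

if __name__ == "__main__":
    co2 = annual_emissions_kg(sites=2000, avg_site_kw=3.0, renewable_share=0.3)
    print(f"{co2 / 1000:.0f} t CO2/year -> {trees_needed(co2)} trees to offset")
```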

    Incentive-driven QoS in peer-to-peer overlays

    A well known problem in peer-to-peer overlays is that no single entity has control over the software, hardware and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS-overlays: resource allocation protocols that provide strategic peers with participation incentives, while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to sybil attacks, and allows peers to achieve time- and space-deferred contribution reciprocity. Then, we present a novel, QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction, while at the same time allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be attributed entirely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays, and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives, in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation.
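
As an illustration of the Vickrey-auction allocation idea, the following sketch runs a sealed-bid second-price auction among peers bidding contribution credits for a single service slot; the bid representation, credit unit and tie handling are illustrative assumptions, not the PledgeRoute-based protocol itself.

```python
# Sealed-bid second-price (Vickrey) auction for a single service slot.
# Bids are expressed in contribution credits; the data model is an
# illustrative assumption, not the thesis's overlay protocol.

def vickrey_auction(bids):
    """bids: dict of peer_id -> credits bid.
    Returns (winner, price): the highest bidder wins but pays only the
    second-highest bid, which makes truthful bidding a dominant strategy."""
    if not bids:
        return None, 0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0  # assume a zero reserve price
    return winner, price

if __name__ == "__main__":
    offers = {"peerA": 12, "peerB": 9, "peerC": 15}
    winner, price = vickrey_auction(offers)
    print(winner, price)  # -> peerC pays 12 (peerA's bid)
```

The second-price rule is what lets the allocation reward higher contributors without giving them an incentive to misreport their valuations.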

    Naming and discovery in networks : architecture and economics

    In less than three decades, the Internet was transformed from a research network available to the academic community into an international communication infrastructure. Despite its tremendous success, there is a growing consensus in the research community that the Internet has architectural limitations that need to be addressed in an effort to design a future Internet. Among the main technical limitations are the lack of mobility support and the lack of security and trust. The Internet, and particularly TCP/IP, identifies endpoints using a location/routing identifier, the IP address. Coupling the endpoint identifier to the location identifier hinders mobility and poorly identifies the actual endpoint. On the other hand, the lack of security has been attributed to limitations in both the network and the endpoint. Authentication, for example, is one of the main concerns in the architecture and is hard to implement partly due to the lack of identity support. The general problem that this dissertation is concerned with is that of designing a future Internet. Towards this end, we focus on two specific sub-problems. The first problem is the lack of a framework for thinking about architectures and their design implications. It was obvious after surveying the literature that the majority of the architectural work remains idiosyncratic and descriptions of network architectures are mostly idiomatic. This has led to the overloading of architectural terms, and to the emergence of a large body of network architecture proposals with no clear understanding of their cross-similarities, compatibility points, unique properties, and architectural performance and soundness. On the other hand, the second problem concerns the limitations of traditional naming and discovery schemes in terms of service differentiation and economic incentives. One of the recurring themes in the community is the need to separate an entity's identifier from its locator to enhance mobility and security. Separation of identifier and locator is a widely accepted design principle for a future Internet. Separation, however, requires a process to translate from the identifier to the locator when discovering a network path to some identified entity. We refer to this process as identifier-based discovery, or simply discovery, and we recognize two limitations that are inherent in the design of traditional discovery schemes. The first limitation is the homogeneity of the service, where all entities are assumed to have the same discovery performance requirements. The second limitation is the inherent incentive mismatch as it relates to sharing the cost of discovery. This dissertation addresses both sub-problems: the architectural framework as well as the naming and discovery limitations.
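
To make the identifier/locator split concrete, here is a minimal sketch of an identifier-based discovery step: a flat, self-certifying identifier (a hash of a public key, for illustration) is resolved to the endpoint's current locator through a mapping service, so the endpoint can move without changing its identity. The naming scheme and API are illustrative assumptions, not a specific proposal from the dissertation.

```python
# Illustrative identifier/locator separation: endpoints are named by a
# flat, self-certifying identifier (hash of a public key) and resolved to
# a possibly changing locator at discovery time. The in-memory dict stands
# in for whatever distributed resolution infrastructure a real design uses.
import hashlib

def make_identifier(public_key: bytes) -> str:
    """Self-certifying identifier: hash of the endpoint's public key."""
    return hashlib.sha256(public_key).hexdigest()[:16]

class MappingService:
    def __init__(self):
        self._table = {}  # identifier -> current locator (e.g. "IP:port")

    def register(self, identifier, locator):
        self._table[identifier] = locator   # endpoint re-registers on mobility

    def resolve(self, identifier):
        return self._table.get(identifier)  # discovery: identifier -> locator

if __name__ == "__main__":
    ms = MappingService()
    eid = make_identifier(b"endpoint-public-key")
    ms.register(eid, "198.51.100.7:443")   # initial attachment point
    ms.register(eid, "203.0.113.20:443")   # endpoint moves; identifier is stable
    print(eid, "->", ms.resolve(eid))
```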