
    Climbing Up Cloud Nine: Performance Enhancement Techniques for Cloud Computing Environments

    With the transformation of cloud computing technologies from an attractive trend to a business reality, the need is more pressing than ever for efficient cloud service management tools and techniques. As cloud technologies continue to mature, the service models, resource allocation methodologies, energy efficiency models and general service management schemes still leave considerable room for improvement. The burden of making all of this work smoothly falls on cloud providers. Economies of scale, existing infrastructure and a large workforce work in their favour, but operating at that scale is far from straightforward. Performance and service delivery still depend on the providers' algorithms and policies, which affect every operational area. With that in mind, this thesis tackles a set of the more critical challenges faced by cloud providers, with the purpose of enhancing cloud service performance and reducing providers' costs. This is done by exploring innovative resource allocation techniques and developing novel tools and methodologies in the context of cloud resource management, power efficiency, high availability and solution evaluation. Optimal and suboptimal solutions to the resource allocation problem in cloud data centers, from both the computational and the network sides, are proposed. Next, a deep dive into the energy efficiency challenge in cloud data centers is presented. Consolidation-based and non-consolidation-based solutions containing a novel dynamic virtual machine idleness prediction technique are proposed and evaluated. An investigation of the problem of simulating cloud environments follows. Available simulation solutions are comprehensively evaluated, and a novel design framework for cloud simulators covering multiple variations of the problem is presented. Moreover, the challenge of evaluating the performance of cloud resource management solutions in terms of high availability is addressed. An extensive framework is introduced to design high availability-aware cloud simulators, and a prominent cloud simulator (GreenCloud) is extended to implement it. Finally, the evaluation of real cloud application scenarios is demonstrated using the new tool. The primary argument made in this thesis is that the proposed resource allocation and simulation techniques can serve as the basis for effective solutions that mitigate the performance and cost challenges faced by cloud providers pertaining to resource utilization, energy efficiency and client satisfaction.
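    To make the consolidation idea concrete, the minimal sketch below shows a first-fit-decreasing placement heuristic of the kind such suboptimal allocation work builds on. It is an illustration only, not the thesis's actual algorithm; the Host class, capacity figures and VM demands are invented for the example.

```python
# Illustrative consolidation-oriented placement (first-fit decreasing):
# pack VMs onto as few hosts as possible so that idle hosts can be powered down.
from dataclasses import dataclass, field

@dataclass
class Host:
    cpu_capacity: float
    cpu_used: float = 0.0
    vms: list = field(default_factory=list)

    def fits(self, demand: float) -> bool:
        return self.cpu_used + demand <= self.cpu_capacity

def place_vms(vm_demands, hosts):
    """Place each VM on the first host with spare capacity, largest VMs first."""
    placements = {}
    for vm_id, demand in sorted(vm_demands.items(), key=lambda kv: kv[1], reverse=True):
        for i, host in enumerate(hosts):
            if host.fits(demand):
                host.cpu_used += demand
                host.vms.append(vm_id)
                placements[vm_id] = i
                break
        else:
            raise RuntimeError(f"no capacity left for {vm_id}")
    return placements

hosts = [Host(cpu_capacity=8.0) for _ in range(4)]
print(place_vms({"vm1": 3.0, "vm2": 2.5, "vm3": 4.0, "vm4": 1.0}, hosts))
```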

    Quality of Experience monitoring and management strategies for future smart networks

    One of the major driving forces of the service and network provider market is the user's perceived service quality and expectations, referred to as the user's Quality of Experience (QoE). QoE is particularly critical for network providers, who are challenged with the multimedia engineering problems (e.g. processing, compression) typical of traditional networks. They need the right QoE monitoring and management mechanisms in place to have a significant impact on their budget (e.g. by reducing user churn). Moreover, due to the rapid growth of mobile networks and multimedia services, it is crucial for Internet Service Providers (ISPs) to accurately monitor and manage the QoE of the delivered services while keeping computational resources and power consumption at low levels. The objective of this thesis is to investigate QoE monitoring and management for future networks. The research, developed during the PhD programme, first describes the state of the art and the concept of Virtual Probes (vProbes). I then propose a QoE monitoring and management solution, two agent-based solutions for QoE monitoring in LTE-Advanced networks, a QoE monitoring solution for multimedia services in 5G networks, and an SDN-based approach for QoE management of multimedia services.
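    As a rough illustration of the vProbe idea, the sketch below maps a few measured network metrics to a coarse quality score and flags flows that would need a management action. The scoring formula and thresholds are placeholders, not the thesis's QoE model.

```python
# Map basic per-flow measurements to a rough 1-5 quality score and decide
# whether a management action is needed. All weights are illustrative.
def estimate_qoe(packet_loss_pct: float, delay_ms: float, jitter_ms: float) -> float:
    score = 5.0
    score -= min(2.5, packet_loss_pct * 0.5)   # loss dominates perceived quality
    score -= min(1.0, delay_ms / 400.0)        # penalise high one-way delay
    score -= min(0.5, jitter_ms / 100.0)       # mild penalty for jitter
    return max(1.0, score)

def needs_action(metrics: dict, threshold: float = 3.5) -> bool:
    """True when the estimated QoE falls below the acceptable threshold."""
    return estimate_qoe(**metrics) < threshold

print(needs_action({"packet_loss_pct": 2.0, "delay_ms": 180.0, "jitter_ms": 30.0}))
```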

    Demand-Response in Smart Buildings

    This book represents the Special Issue of Energies, entitled “Demand-Response in Smart Buildings”, that was published in the section “Energy and Buildings”. The Special Issue is a collection of original scientific contributions and review papers that deal with smart buildings and communities. Demand response (DR) offers the capability to apply changes in the energy usage of consumers, relative to their normal consumption patterns, in response to changes in energy pricing over time. This leads to lower energy demand during peak hours or during periods when an electricity grid’s reliability is put at risk. Demand response is therefore a reduction in demand designed to reduce peak load or avoid system emergencies, and it can be more cost-effective than adding generation capacity to meet peak and/or occasional demand spikes. The underlying objective of DR is to actively engage customers in modifying their consumption in response to pricing signals. Demand response is expected to increase energy market efficiency and the security of supply, which will ultimately benefit customers through options for managing their electricity costs, leading to reduced environmental impact.
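    A minimal sketch of the price-signal mechanism described above: a building controller curtails its flexible load whenever the tariff crosses a threshold. The prices, the 50 kW baseline and the 20% flexible share are made-up illustrative values.

```python
# Price-responsive load adjustment for one pricing interval.
def respond_to_price(baseline_kw: float, flexible_share: float,
                     price_eur_kwh: float, price_threshold: float) -> float:
    """Return the adjusted demand (kW) for the current interval."""
    if price_eur_kwh > price_threshold:
        return baseline_kw * (1.0 - flexible_share)   # curtail flexible loads
    return baseline_kw                                # normal consumption

hourly_prices = [0.10, 0.12, 0.35, 0.40, 0.15]        # EUR/kWh
demand = [respond_to_price(50.0, 0.20, p, 0.30) for p in hourly_prices]
print(demand)   # the two peak-price intervals drop from 50 kW to 40 kW
```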

    Computational resource management for data-driven applications with deadline constraints

    Recent advances in the type and variety of sensing technologies have led to extraordinary growth in the volume of data being produced, and to a number of streaming applications that make use of these data. Sensors typically monitor environmental or physical phenomena at predefined time intervals or when triggered by user-defined events. Understanding how such streaming content (the raw data or events) can be processed within a time threshold remains an important research challenge. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content while offering quality of service guarantees. In particular, we contextualize our approach using an electric vehicle (EV) charging scenario, where vehicles need to connect to the electrical grid to charge their batteries. There has been emerging interest in EV aggregators (primarily intermediate brokers able to estimate the aggregate charging demand for a collection of EVs) to coordinate the charging process. We consider predicting EV charging demand as a potential workload with execution time constraints. We assume that an EV aggregator manages a number of geographic areas and a pool of computational resources of a cloud computing cluster to support the scheduling of EV charging. The objective is to ensure that there is enough computational capacity to satisfy the requirements for managing EV battery charging requests within specific time constraints.
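    The sketch below illustrates the deadline-driven capacity question in its simplest form: given per-area forecasting jobs with known average runtimes, estimate how many cloud workers are needed to finish before the next scheduling round. The job counts, runtimes and deadline are invented, not values from the paper.

```python
# Lower-bound estimate of the worker pool size needed to meet a deadline,
# assuming the per-area forecasting jobs can run in parallel.
import math

def workers_needed(job_runtimes_s, deadline_s):
    total = sum(job_runtimes_s)
    longest = max(job_runtimes_s)
    if longest > deadline_s:
        raise ValueError("a single job already exceeds the deadline")
    return max(1, math.ceil(total / deadline_s))

runtimes = [42.0, 35.0, 58.0, 61.0, 47.0]           # seconds per geographic area
print(workers_needed(runtimes, deadline_s=120.0))   # -> 3 workers
```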

    Supervisor virtual machine for VoLTE service on a cloud environment

    With the continuing growth of Voice over Long Term Evolution (VoLTE) networks, coupled with the need of mobile operators to reduce maintenance costs, deploying the service on a cloud platform is becoming common practice. This study was designed to create a method capable of improving the operations and monitoring activities of a VoLTE service deployed on a cloud platform. We present the constituent elements of a VoLTE network and review in detail the features of the Telephony Application Server (TAS); the TAS used for this study was the open TAS. The study also includes a general explanation of the behavior of an OpenStack cloud. The presented method involves creating a Supervisor virtual machine and deploying it in the cloud. This virtual machine is capable of establishing SSH connections with the open TAS to extract the Clear Codes report, which identifies the state in which calls were terminated, for analysis. The virtual machine holds defined limits and checks whether they have been exceeded; if a limit is exceeded, it notifies the system operator of an incident. This study presents the possibilities of such an implementation in a cloud environment to improve and automate Operations and Maintenance functions in a telecommunications network.
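    A hedged sketch of the Supervisor's check loop follows: open an SSH session to the TAS, pull the Clear Codes report and raise an incident when a code exceeds its limit. The host, credentials, report path, report format and per-code thresholds are all placeholders; the real report layout is TAS-specific.

```python
# Supervisor-style check: fetch Clear Codes over SSH and compare against limits.
import paramiko

LIMITS = {"503": 100, "486": 500}   # illustrative per-code thresholds

def fetch_clear_codes(host: str, user: str, password: str) -> dict:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        # Placeholder command and "code,count" format; adjust to the real report.
        _, stdout, _ = client.exec_command("cat /var/log/tas/clear_codes.csv")
        counts = {}
        for line in stdout.read().decode().splitlines():
            if "," not in line:
                continue
            code, count = line.split(",", 1)
            counts[code.strip()] = int(count)
        return counts
    finally:
        client.close()

def check_limits(counts: dict) -> list:
    """Return the clear codes whose counts exceeded their configured limit."""
    return [code for code, limit in LIMITS.items() if counts.get(code, 0) > limit]

# incidents = check_limits(fetch_clear_codes("tas.example.net", "supervisor", "secret"))
# if incidents: ...notify the system operator (notification channel left abstract)
```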

    SLA-driven dynamic cloud resource management

    As the size and complexity of Cloud systems increase, the manual management of these solutions becomes a challenging issue, as more personnel, resources and expertise are needed. Service Level Agreement (SLA)-aware autonomic cloud solutions enable the management of large-scale infrastructures while supporting multiple dynamic requirements from users. This paper contributes to these topics by introducing Cloudcompaas, an SLA-aware PaaS Cloud platform that manages the complete resource lifecycle. The platform features an extension of the SLA specification WS-Agreement, tailored to the specific needs of Cloud Computing. In particular, Cloudcompaas provides Cloud providers with a generic SLA model to deal with higher-level metrics, closer to end-user perception, and with flexible composition of the requirements of multiple actors in the computational scene. Moreover, Cloudcompaas provides a framework for general Cloud computing applications that can be dynamically adapted to correct QoS violations by using the elasticity features of Cloud infrastructures. The effectiveness of this solution is demonstrated through a simulation that considers several realistic workload profiles, where Cloudcompaas achieves minimum cost and maximum efficiency under highly heterogeneous utilization patterns. © 2013 Elsevier B.V. All rights reserved. This work was developed with the support of the programme Formación de Personal Investigador de Carácter Predoctoral, grant number BFPI/2009/103, from the Conselleria d'Educació of the Generalitat Valenciana. The authors also wish to thank the financial support received from the Spanish Ministry of Education and Science for the project 'CodeCloud', reference TIN2010-17804. García García, A.; Blanquer Espert, I.; Hernández García, V. (2014). SLA-driven dynamic cloud resource management. Future Generation Computer Systems, 31:1-11. https://doi.org/10.1016/j.future.2013.10.005
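    As a toy illustration of the SLA-driven elasticity loop, the sketch below compares an observed high-level metric against the agreed objective and decides a scaling action. The metric, objective and step size are invented; Cloudcompaas itself works with WS-Agreement terms rather than this simplified structure.

```python
# Evaluate one SLA term and return the adjusted replica count.
from dataclasses import dataclass

@dataclass
class SLATerm:
    metric: str
    objective: float        # e.g. maximum acceptable response time in seconds

def scaling_decision(term: SLATerm, observed: float, replicas: int,
                     max_replicas: int = 10) -> int:
    if observed > term.objective and replicas < max_replicas:
        return replicas + 1              # violation: scale out
    if observed < 0.5 * term.objective and replicas > 1:
        return replicas - 1              # ample headroom: scale in, save cost
    return replicas

term = SLATerm(metric="response_time_s", objective=2.0)
print(scaling_decision(term, observed=2.7, replicas=3))   # -> 4
```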

    Convergence of Intelligent Data Acquisition and Advanced Computing Systems

    This book is a collection of published articles from the Sensors Special Issue on "Convergence of Intelligent Data Acquisition and Advanced Computing Systems". It includes extended versions of the conference contributions from the 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS’2019), Metz, France, as well as external contributions.