12 research outputs found

    Elastic management of tasks in virtualized environments

    Nowadays, service providers in the Cloud offer complex services that are ready to be used as if they were a commodity, like water or electricity. A key technology for this approach is virtualization, which facilitates the provider's management and provides on-demand virtual environments that are isolated and consolidated in order to achieve better utilization of the provider's resources. However, exploiting some virtualization capabilities, such as the creation of virtual environments, requires extra effort from the user. To avoid this problem, we contribute to the research community the EMOTIVE (Elastic Management of Tasks in Virtualized Environments) middleware, which executes tasks and provides virtualized environments to users efficiently and without any extra effort on their part. EMOTIVE is a virtualized environment manager that aims to provide virtual machines fulfilling the user's requirements in terms of software and system capabilities. Furthermore, it supports fine-grained local resource management and provides facilities for developing scheduling policies such as migration and checkpointing.
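
    A minimal sketch of the kind of placement decision such a virtualized environment manager makes, matching a user's VM request (software plus system capabilities) against hosts that can satisfy it; the data structures and names are assumptions for illustration, not the EMOTIVE API.

```python
# Hypothetical sketch, not the EMOTIVE API: pick a host that satisfies a user's
# VM request in terms of software and system capabilities.
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class VMRequest:
    cpus: int
    mem_gb: int
    software: Set[str]          # e.g. {"java", "mysql"}

@dataclass
class Host:
    name: str
    free_cpus: int
    free_mem_gb: int
    images: Set[str] = field(default_factory=set)   # software available on this host

def place(request: VMRequest, hosts: List[Host]) -> Optional[Host]:
    """Return the first host able to satisfy the request, or None if none can."""
    for host in hosts:
        if (host.free_cpus >= request.cpus
                and host.free_mem_gb >= request.mem_gb
                and request.software <= host.images):
            return host
    return None

hosts = [Host("node1", 4, 8, {"java"}), Host("node2", 8, 32, {"java", "mysql"})]
print(place(VMRequest(cpus=2, mem_gb=16, software={"java", "mysql"}), hosts).name)  # node2
```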

    Addressing the use of cloud computing for web hosting providers

    Nobody doubts that cloud computing is, and will continue to be, a sea change for Information Technology. Specifically, we address an application of this emerging paradigm to web hosting providers. We create the Cloud Hosting Provider (CHP): a web hosting provider that uses outsourcing to take advantage of cloud computing infrastructures (i.e. cloud-based outsourcing) in order to provide scalability and availability to the web applications it hosts. Hence, the main goal is to maximize the revenue obtained by the provider through both the analysis of Service Level Agreements (SLAs) and the application of economic functions. This target is achieved by means of SLA-aware management of resources (i.e. web servers). Through experimentation we demonstrate that the CHP is able to maximize the hosting provider's revenue while offering the web applications scalability, availability, and very good performance.
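
    As a rough illustration of the revenue-maximization idea, the sketch below weighs request revenue against the cost of outsourced servers and an SLA penalty for excessive response time, then picks the most profitable number of servers; the economic function and all parameter names are assumptions, not the CHP's actual model.

```python
# Illustrative sketch, not the CHP implementation: a simple economic function and a
# brute-force search over candidate numbers of outsourced (cloud) web servers.
def profit(requests, revenue_per_request, servers, price_per_server,
           response_time, sla_threshold, penalty_per_request):
    revenue = requests * revenue_per_request
    cost = servers * price_per_server
    # The SLA penalty is paid on every request when the response-time target is violated.
    penalty = requests * penalty_per_request if response_time > sla_threshold else 0.0
    return revenue - cost - penalty

def best_allocation(requests, candidates, estimate_response_time, **params):
    """Return the number of servers that maximizes the provider's profit."""
    return max(candidates,
               key=lambda n: profit(requests, params["revenue_per_request"], n,
                                    params["price_per_server"],
                                    estimate_response_time(requests, n),
                                    params["sla_threshold"],
                                    params["penalty_per_request"]))

# Toy response-time model: more servers -> lower response time.
rt = lambda reqs, n: reqs / (100.0 * n)
print(best_allocation(1000, range(1, 20), rt, revenue_per_request=0.01,
                      price_per_server=0.5, sla_threshold=1.0, penalty_per_request=0.02))
```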

    Adaptive Resource Allocation and Provisioning in Multi-Service Cloud Environments

    In the current cloud business environment, the cloud provider (CP) can provide a means of offering the required quality of service (QoS) to multiple classes of clients. We consider the cloud market where various resources such as CPUs, memory, and storage, in the form of Virtual Machine (VM) instances, can be provisioned and then leased to clients with QoS guarantees. Unlike existing works, we propose a novel Service Level Agreement (SLA) framework for cloud computing in which a price control parameter is used to meet the QoS demands of all classes in the market. The framework uses reinforcement learning (RL) to derive a VM hiring policy that can adapt to changes in the system, including the service cost, the system capacity, and the demand for service, in order to guarantee the QoS for all client classes. In existing solutions, when the CP leases more VMs to one class of clients, the QoS is degraded for the other classes due to an inadequate number of VMs. Our approach, in contrast, integrates computing resource adaptation with service admission control based on the RL model. To the best of our knowledge, this study is the first attempt to facilitate this integration in order to enhance the CP's profit and avoid SLA violations. Numerical analysis demonstrates the ability of our approach to avoid SLA violations while maximizing the CP's profit under varying cloud environment conditions.
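
    A minimal tabular Q-learning sketch of an admission/hiring decision is shown below; the state encoding, the action set and the reward shape are illustrative assumptions, not the paper's RL formulation.

```python
# Hypothetical sketch of an RL-based VM hiring / admission policy (tabular Q-learning).
import random
from collections import defaultdict

ACTIONS = ("admit", "reject")           # admit a request (lease a VM) or reject it
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount factor, exploration rate

Q = defaultdict(float)                  # Q[(state, action)] -> expected return

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def reward(action, client_class, free_vms, price, penalty):
    """Revenue (shaped by a per-class price) for admitting with free capacity,
    an SLA penalty for admitting without capacity, nothing for rejecting."""
    if action == "reject":
        return 0.0
    return price[client_class] if free_vms > 0 else -penalty[client_class]
```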

    Developing resource consolidation frameworks for moldable virtual machines in clouds

    This paper considers the scenario where multiple clusters of Virtual Machines (termed Virtual Clusters) are hosted in a Cloud system consisting of a cluster of physical nodes. Multiple Virtual Clusters (VCs) cohabit in the physical cluster, with each VC offering a particular type of service for the incoming requests. In this context, VM consolidation, which strives to use a minimal number of nodes to accommodate all VMs in the system, plays an important role in reducing resource consumption. Most existing consolidation methods proposed in the literature regard VMs as “rigid” during consolidation, i.e., the VMs’ resource capacities remain unchanged. In VC environments, QoS is usually delivered by a VC as a single entity. Therefore, there is no reason why a VM’s resource capacity cannot be adjusted as long as the whole VC is still able to maintain the desired QoS. Treating VMs as “moldable” during consolidation may therefore allow them to be packed onto even fewer nodes. This paper investigates this issue and develops a Genetic Algorithm (GA) to consolidate moldable VMs. The GA evolves an optimized system state, which represents the VM-to-node mapping and the resource capacity allocated to each VM. After the new system state is computed by the GA, the Cloud transitions from the current system state to the new one. The transition time represents overhead and should be minimized. In this paper, a cost model is formalized to capture the transition overhead, and a reconfiguration algorithm is developed to move the Cloud to the optimized system state with low transition overhead. Experiments have been conducted to evaluate the performance of the GA and the reconfiguration algorithm.
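
    The sketch below illustrates, under assumed data structures, what a fitness function for moldable-VM consolidation could look like: a candidate state maps each VM to a node and assigns it a capacity, it is feasible only if no node is overloaded and every VC keeps enough aggregate capacity for its QoS, and states that use fewer nodes score higher. It is not the paper's GA.

```python
# Illustrative sketch, not the paper's GA: fitness of a candidate system state.
def fitness(state, node_capacity, vc_min_capacity):
    """state: list of (vm_id, vc_id, node_id, capacity) tuples."""
    used_per_node, capacity_per_vc = {}, {}
    for _vm, vc, node, cap in state:
        used_per_node[node] = used_per_node.get(node, 0) + cap
        capacity_per_vc[vc] = capacity_per_vc.get(vc, 0) + cap

    # Infeasible if any node is overloaded ...
    if any(used > node_capacity[n] for n, used in used_per_node.items()):
        return float("-inf")
    # ... or if any virtual cluster falls below the capacity its QoS requires.
    if any(capacity_per_vc.get(vc, 0) < need for vc, need in vc_min_capacity.items()):
        return float("-inf")

    # Moldable consolidation: the fewer active nodes, the better.
    return -len(used_per_node)

state = [("vm1", "vcA", "n1", 2), ("vm2", "vcA", "n1", 2), ("vm3", "vcB", "n2", 4)]
print(fitness(state, {"n1": 8, "n2": 8}, {"vcA": 4, "vcB": 4}))  # -> -2 (two nodes used)
```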

    Energy-Efficient Software

    The energy consumption of ICT is growing at an unprecedented pace. The main drivers of this growth are the widespread diffusion of mobile devices and the proliferation of datacenters, the most power-hungry IT facilities. In addition, the demand for ICT technologies and services is predicted to keep increasing in the coming years. Finding solutions to decrease the ICT energy footprint is, and will remain, a top priority for researchers and professionals in the field. As a matter of fact, hardware technology has improved substantially over the years: modern ICT devices are definitely more energy efficient than their predecessors in terms of performance per watt. However, as recent studies show, these improvements are not effectively reducing the growth rate of ICT energy consumption. This suggests that these devices are not used in an energy-efficient way. Hence, we have to look at software. Modern software applications are not designed and implemented with energy efficiency in mind. As hardware became more and more powerful (and cheaper), software developers were no longer concerned with optimizing resource usage. Rather, they focused on providing additional features, adding layers of abstraction and complexity to their products. This ultimately resulted in bloated, slow software applications that waste hardware resources -- and, consequently, energy. In this dissertation, the relationship between software behavior and hardware energy consumption is explored in detail. For this purpose, the abstraction levels of software are traversed upwards, from source code to architectural components. Empirical research methods and evidence-based software engineering approaches serve as a basis. First of all, this dissertation shows the relevance of software to energy consumption. Secondly, it gives examples of best practices and tactics that can be adopted to improve software energy efficiency, or to design energy-efficient software from scratch. Finally, this knowledge is synthesized in a conceptual framework that gives the reader an overview of possible strategies for software energy efficiency, along with examples and suggestions for future research.

    Semantic resource management and interoperability between distributed computing platforms

    Distributed Computing is the paradigm in which application execution is distributed across different computers connected by a communication network. Distributed Computing platforms have evolved very fast during the last decades: starting from Clusters, where a set of computers work together in a single location; then evolving into Grids, where computing resources are shared by different entities, creating a global computing infrastructure available to different user communities; and finally becoming what is currently known as the Cloud, where computing and data resources are provided on demand, in a very dynamic fashion, following the Utility Computing model in which you pay only for what you consume. Different types of companies and institutions are exploring the potential benefits of moving their IT services and applications to Cloud infrastructures, in order to decouple the management of computing resources from their core business processes and become more productive. Nevertheless, migrating software to Clouds is not an easy task, since it requires a deep knowledge of the technology in order to decompose the application, of the capabilities offered by providers, and of how to use them. Besides this complex deployment process, the current cloud marketplace has several providers offering resources with different capabilities, prices and quality, and each provider uses its own properties and APIs for describing and accessing its resources. Therefore, when customers want to execute an application on a provider's resources, they must understand the different providers' descriptions, compare them, and select the most suitable resources for their interests. Once the provider and resources have been selected, developers have to interoperate with the different providers' interfaces to perform the application execution steps. To do all of the above, application developers have to deal with the design and implementation of complex integration procedures. This thesis presents several contributions to overcome the aforementioned problems by providing a platform that facilitates and automates the integration of applications into different providers' infrastructures, lowering the barrier to adopting new distributed computing infrastructures such as Clouds. The achievement of this objective has been split into several parts. In the first part, we study how semantic web technologies help to describe applications and to automatically infer a model for deploying them on a distributed platform. Once the application deployment model has been inferred, the second step is finding the resources on which to deploy and execute the different application components; here, we study how semantic web technologies can be applied to the resource allocation problem. Once the different components have been allocated to the providers' resources, it is time to deploy and execute them on these resources by invoking a workflow of provider API calls. However, every provider defines its own management interfaces, so the workflow to perform the same actions is different depending on the selected provider. In this thesis, we propose a framework, based on AI planning, to automatically infer the workflow of provider interface calls required to perform any resource management task. In the last part of the thesis, we study how to introduce the benefits of software agents for coordinating application management in distributed platforms. We propose a multi-agent system which is in charge of coordinating the different steps of the application deployment in a distributed way, as well as monitoring the correct execution of the application on the computing resources. The different contributions have been validated with a prototype implementation and a set of use cases.
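
    A toy illustration of the semantic resource-allocation step, using rdflib with a made-up vocabulary rather than the thesis ontology: provider resources are described as RDF and a SPARQL query selects the ones that satisfy an application component's requirements.

```python
# Hypothetical vocabulary, not the thesis ontology: semantic matching of provider
# resources against an application component's requirements.
from rdflib import Graph

resources = """
@prefix ex: <http://example.org/cloud#> .

ex:vm_small a ex:Resource ; ex:cpus 2 ; ex:memGB 4  ; ex:provider ex:providerA .
ex:vm_big   a ex:Resource ; ex:cpus 8 ; ex:memGB 32 ; ex:provider ex:providerB .
"""

query = """
PREFIX ex: <http://example.org/cloud#>
SELECT ?res ?provider WHERE {
    ?res a ex:Resource ; ex:cpus ?c ; ex:memGB ?m ; ex:provider ?provider .
    FILTER (?c >= 4 && ?m >= 16)   # the component requires at least 4 CPUs and 16 GB
}
"""

g = Graph()
g.parse(data=resources, format="turtle")
for row in g.query(query):
    print(row.res, row.provider)    # -> ...cloud#vm_big ...cloud#providerB
```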

    Building the Future Internet through FIRE

    The Internet as we know it today is the result of a continuous effort to improve network communications, end-user services, computational processes and information technology infrastructures. The Internet has become a critical infrastructure for humankind, offering complex networking services and end-user applications that together have transformed many aspects of our lives, above all economic ones. Recently, with the advent of new paradigms, the progress in wireless technology, sensor networks and information systems, and the inexorable shift towards the everything-connected paradigm, first known as the Internet of Things and lately envisioned as the Internet of Everything, a data-driven society has been created. In a data-driven society, productivity, knowledge, and experience depend on increasingly open, dynamic, interdependent and complex Internet services. The challenge for the design of the Future Internet is to build robust enabling technologies, to implement and deploy adaptive systems, and to create business opportunities in the face of increasing uncertainties and emergent systemic behaviors, where humans and machines seamlessly cooperate.

    Classification and resolution of defects in software maintenance using ODC and solution history

    Nowadays, with customers demanding more from support services, such as high availability and performance, and software maintenance and hosting enterprises facing pressure for low-cost operations, resolving software incidents in less time and preventing defects have become key concerns. Moreover, software maintenance is one of the most time-consuming and effort-demanding, and consequently costly, phases of the software development life cycle. Balancing effectiveness and cost is a challenge for any enterprise that provides software maintenance support. The approach used in this research defines a process to classify defects and to delimit a set of best solutions associated with these classes, derived from the defect history and the customer knowledge base. The classification also separates problems of different complexity so that they can be handed to the appropriate support teams. The base method is ODC (Orthogonal Defect Classification), together with proposed extensions oriented to software support. This research shows that classifying defects and associating solutions with each problem class can reduce incident resolution time in software maintenance. Using four service-provider samples from two different customers (X and Y), it was verified that defect classification, correct redirection to support teams, and grouping of best solutions helped reduce resolution time for 70% of the incidents of customer Y and 92.5% of the incidents of customer X. The reduction was achieved through fewer transfers between support teams and a reduction in the number of incidents. The process is incremental, since it builds on the growing body of historical information and on the observed effectiveness of the proposed solutions. This process can help reduce the human resources needed to support computational systems at service providers.
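
    A toy illustration of the routing idea, with invented defect classes, teams and solutions rather than the thesis data: infer an ODC-style defect type from the incident description, direct the incident to the matching support team, and propose the best solutions recorded for that class.

```python
# Hypothetical categories, teams and solutions, not the thesis data.
ODC_KEYWORDS = {
    "function":   ["feature missing", "wrong result", "capability"],
    "interface":  ["api", "parameter", "integration"],
    "timing":     ["timeout", "deadlock", "race"],
    "assignment": ["null", "not initialized", "wrong value"],
}

TEAM_FOR_TYPE = {"function": "development", "interface": "integration",
                 "timing": "infrastructure", "assignment": "development"}

SOLUTION_HISTORY = {                       # best solutions grouped by defect class
    "timing": ["increase connection timeout", "review lock ordering"],
    "interface": ["check API version compatibility"],
}

def classify(description: str) -> str:
    """Map an incident description to an ODC-style defect type via keywords."""
    text = description.lower()
    for defect_type, keywords in ODC_KEYWORDS.items():
        if any(k in text for k in keywords):
            return defect_type
    return "function"                      # default bucket when nothing matches

def route(description: str):
    """Return (defect type, responsible team, candidate solutions) for an incident."""
    defect_type = classify(description)
    return defect_type, TEAM_FOR_TYPE[defect_type], SOLUTION_HISTORY.get(defect_type, [])

print(route("Batch job fails with a timeout when the database is under load"))
```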