13 research outputs found

    Residual Resource Defragmentation Based on ECRC (Enhanced Cloud Resource Consolidating)

    Get PDF
    Abstract: In cloud computing, server consolidation is an area that relatively few works address in depth. By consolidating unused server space, memory can be reused for further data allocation. The objective of this paper is to improve resource utilization. Residual resource fragmentation describes the state of a data center in which a sufficient amount of residual resources is available for new VM allocation. To achieve this, three steps are followed: active physical servers are identified; the maximum utilization of the resources is determined; and finally the resources are allocated and scheduled using the developed algorithm. In this work we propose a new algorithm, the Enhanced Cloud Resource Consolidating (ECRC) algorithm, which improves on several qualities of the existing cloud consolidating algorithm. The allocation technique is based on cost and memory.
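    The abstract only names the three steps, so the fragment below is a minimal Python sketch of what a cost- and memory-driven placement step of this kind could look like. The data model, the fallback to idle hosts, and the scoring rule are assumptions for illustration; they are not the paper's actual ECRC algorithm.

```python
# Minimal sketch of a cost- and memory-driven consolidation step.
# The data model and scoring are assumptions for illustration only;
# they are not the paper's ECRC algorithm.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    mem_capacity: int          # total memory (MB)
    cost_per_mb: float         # running cost attributed to this server (assumed metric)
    mem_used: int = 0
    vms: list = field(default_factory=list)

    @property
    def residual(self):
        return self.mem_capacity - self.mem_used

def place_vm(servers, vm_name, vm_mem):
    """Step 1: keep only active servers; step 2: rank them by cost and
    residual memory; step 3: allocate the VM to the best candidate."""
    active = [s for s in servers if s.mem_used > 0]            # already powered-on hosts
    candidates = [s for s in active if s.residual >= vm_mem] or \
                 [s for s in servers if s.residual >= vm_mem]  # fall back to idle hosts
    if not candidates:
        return None                                            # no single host fits: residual fragmentation
    best = min(candidates, key=lambda s: (s.cost_per_mb, s.residual))
    best.vms.append(vm_name)
    best.mem_used += vm_mem
    return best.name

servers = [Server("pm1", 8192, 0.02, mem_used=6000),
           Server("pm2", 16384, 0.03, mem_used=1000),
           Server("pm3", 8192, 0.01)]
print(place_vm(servers, "vm-a", 4096))   # picks an active, cheap host with enough residual memory
```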

    Efficient and elastic management of computing infrastructures

    Full text link
    Thesis by compendium. [EN] Modern data centers integrate a large number of computers and electronic devices. However, some reports state that the mean usage of a typical data center is around 50% of its peak capacity, and that the mean usage of each server is between 10% and 50%. A great deal of energy is spent powering computer hardware that remains idle most of the time. It would therefore be possible to save energy simply by powering off the parts of the data center that are not actually in use, and powering them on again as they are needed. Most data centers contain computing clusters used for intensive computing, which have recently been evolving towards an on-premises Cloud service model. Beyond the use of low-consumption components, higher energy savings can be achieved by dynamically adapting the system to the actual workload. The main approach is to apply energy-saving criteria when scheduling jobs or virtual machines onto the working nodes, with the aim of powering off idle servers automatically; however, the power management of the servers must be planned so as to minimize the impact on the end users and their applications. The objective of this thesis is the elastic and efficient management of cluster infrastructures, with the aim of reducing the costs associated with idle components. This objective is addressed by automating the power management of the working nodes in a computing cluster, and by proactively steering the load distribution, by means of memory overcommitment and live migration of virtual machines, so that idle resources appear and can be powered off. This automation is also of interest for virtual clusters, which suffer from the same problem: while in physical clusters idle working nodes waste energy, in virtual clusters built from virtual machines the idle working nodes waste money in a commercial Cloud or computational resources in an on-premises Cloud.
    Alfonso Laguna, C. D. (2015). Efficient and elastic management of computing infrastructures [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57187
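    As a rough illustration of the automation described above, the sketch below powers off working nodes that have been idle beyond a threshold and powers nodes back on when queued jobs exceed the free capacity. The node model, the timeout, and the power_on/power_off hooks are assumptions for illustration; this is not the system developed in the thesis.

```python
# Minimal sketch of elastic power management for a cluster, under an
# assumed node/job model; not the thesis's actual implementation.
import time

IDLE_TIMEOUT = 600   # seconds a node may stay idle before being powered off (assumed)

class Node:
    def __init__(self, name, slots):
        self.name, self.slots = name, slots
        self.running_jobs = 0
        self.powered_on = True
        self.idle_since = time.time()

    def is_idle(self):
        return self.powered_on and self.running_jobs == 0

def reconcile(nodes, pending_jobs, power_on, power_off):
    """Power nodes on when jobs are queued; power off nodes idle for too long."""
    now = time.time()
    free_slots = sum(n.slots - n.running_jobs for n in nodes if n.powered_on)
    if pending_jobs > free_slots:
        for n in nodes:
            if not n.powered_on:
                power_on(n)                 # e.g. wake-on-LAN, or start a cloud VM (hook supplied by caller)
                n.powered_on = True
                free_slots += n.slots
                if pending_jobs <= free_slots:
                    break
    else:
        for n in nodes:
            if n.is_idle() and now - n.idle_since > IDLE_TIMEOUT:
                power_off(n)                # e.g. IPMI shutdown, or terminate the VM (hook supplied by caller)
                n.powered_on = False
```

    A monitoring loop would call `reconcile` periodically with the current queue length and callbacks appropriate to the physical or virtual cluster at hand.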

    Learning-based run-time power and energy management of multi/many-core systems: current and future trends

    Get PDF
    Multi/many-core systems are prevalent in several application domains targeting different scales of computing, such as embedded and cloud computing. These systems are able to fulfil ever-increasing performance requirements by exploiting their parallel processing capabilities. However, effective power/energy management is required during system operation for several reasons, such as increasing the operational time of battery-operated systems, reducing the energy cost of data centers, and improving thermal efficiency and reliability. This article provides an extensive survey of learning-based run-time power/energy management approaches, including a taxonomy of such approaches, which perform design-time and/or run-time power/energy management by employing learning principles such as reinforcement learning. The survey also highlights the trends followed by learning-based run-time power management approaches, their upcoming directions, and open research challenges.
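    Reinforcement learning is named as one of the learning principles covered by the survey; the toy Q-learning loop below shows the general shape of such a run-time manager, with a coarse utilization level as the state and a frequency setting as the action. The states, actions, and reward model are assumptions, not any specific scheme from the survey.

```python
# Toy Q-learning power manager: pick a frequency level from a coarse
# utilization state. States, actions, and rewards are illustrative
# assumptions only.
import random

STATES = ["low", "med", "high"]          # coarse core-utilization levels (assumed)
ACTIONS = [0.8, 1.2, 1.6, 2.0]           # available frequencies in GHz (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose(state):
    if random.random() < EPSILON:                        # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])     # otherwise exploit the best-known action

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step(state, observe_next_state, perf, power):
    """One interaction: act, observe, and learn. perf/power are assumed models
    supplied by the environment; the reward trades performance against power."""
    action = choose(state)
    reward = perf(state, action) - power(action)
    next_state = observe_next_state(state, action)
    update(state, action, reward, next_state)
    return next_state
```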

    Planning and Management of Cloud Computing Networks

    Get PDF
    Abstract: The evolution of the Internet has a great impact on a large part of the population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium has made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the power consumption of telecommunication networks and data centers were considered as the power consumption of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that, besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced by up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed on servers connected to the Internet that users access from their computers and mobile devices.
    The first advantage of this architecture is reduced application deployment time and improved interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available continuously, from anywhere and from any device with Internet access. Moreover, servers and computing resources can be assigned to applications dynamically, according to the number of users and the workload; this is what is called application elasticity.
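    The thesis optimizes allocation jointly for cost, QoS, power, and CO2; the fragment below sketches one common way to express such a trade-off as a weighted score used to pick a data center. The weights and attribute names are assumptions for illustration, not the thesis's actual formulation.

```python
# Sketch of a weighted multi-objective score for placing load on a data
# center, combining cost, response time, power, and CO2. Weights and
# attribute names are assumed for illustration only.
def placement_score(dc, demand, w_cost=1.0, w_latency=0.5, w_power=0.3, w_co2=0.2):
    """Lower is better; each term grows with the demand to be served."""
    cost = dc["capex"] + dc["opex_per_unit"] * demand
    latency = dc["base_latency_ms"] * (1 + dc["load"] / dc["capacity"])
    power = dc["watts_per_unit"] * demand
    co2 = power * dc["co2_per_watt"]        # depends on the local energy mix
    return w_cost * cost + w_latency * latency + w_power * power + w_co2 * co2

def best_data_center(data_centers, demand):
    # Choose the data center with the lowest combined score for this demand.
    return min(data_centers, key=lambda dc: placement_score(dc, demand))
```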

    Network-aware virtual machine placement in cloud data centers with multiple traffic-intensive components

    Get PDF
    Following a shift from computing as a purchasable product to computing as a deliverable service to consumers over the Internet, cloud computing has emerged as a novel paradigm with unprecedented success in turning utility computing into a reality. Like any emerging technology, its advent also brought new challenges to be addressed. This work studies network- and traffic-aware virtual machine (VM) placement in a special cloud computing scenario from a provider's perspective, in which certain infrastructure components have a predisposition to be the endpoints of a large number of intensive flows whose other endpoints are VMs located on physical machines (PMs). In the scenarios of interest, the performance of any VM is strictly dependent on the infrastructure's ability to meet its intensive traffic demands. We first introduce, and attempt to maximize, the total value of a metric named "satisfaction" that reflects the performance of a VM when placed on a particular PM. The problem of finding a perfect assignment for a set of given VMs is NP-hard, and there is no polynomial-time algorithm that can yield optimal solutions for large problems. Therefore, we introduce several offline heuristic-based algorithms that yield nearly optimal solutions given the communication patterns and flow demand profiles of the subject VMs. With extensive simulation experiments we evaluate and compare the effectiveness of our proposed algorithms against each other and against naïve approaches. © 2015 Elsevier B.V. All rights reserved.
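    The paper's heuristics are not reproduced here; the greedy sketch below only illustrates the idea of maximizing an aggregate "satisfaction" value when assigning VMs to PMs. The satisfaction function and the data model are invented for illustration.

```python
# Greedy sketch of network-aware VM placement that maximizes a total
# "satisfaction" value. The satisfaction function and data model are
# assumptions, not the paper's algorithms.
def satisfaction(vm, pm):
    """1.0 when the PM can fully serve the VM's flow demand toward the
    traffic-intensive endpoint, decaying toward 0 otherwise (assumed)."""
    if pm["free_bw"] <= 0 or pm["free_cpu"] < vm["cpu"]:
        return 0.0
    return min(1.0, pm["free_bw"] / vm["flow_demand"])

def greedy_place(vms, pms):
    placement, total = {}, 0.0
    # Place the most demanding VMs first so they get the best-connected PMs.
    for vm in sorted(vms, key=lambda v: v["flow_demand"], reverse=True):
        best = max(pms, key=lambda pm: satisfaction(vm, pm))
        s = satisfaction(vm, best)
        if s == 0.0:
            continue                        # no feasible PM for this VM
        placement[vm["id"]] = best["id"]
        best["free_bw"] -= vm["flow_demand"]
        best["free_cpu"] -= vm["cpu"]
        total += s
    return placement, total
```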

    Resource management in the cloud: An end-to-end approach

    Get PDF
    Philosophiae Doctor - PhD. Cloud Computing enables users to achieve ubiquitous, on-demand, and convenient access to a variety of shared computing resources, such as servers, networks, storage, applications, and more. As a business model, Cloud Computing has been openly welcomed by users and has become one of the research hotspots in the field of information and communication technology, because it provides users with on-demand customization and pay-per-use resource acquisition.

    Control Plane in Software Defined Networks and Stateful Data Planes

    Get PDF
    The abstract is provided in the attachment.

    Next generation control of transport networks

    Get PDF
    It is widely understood by telecom operators and industry analysts that bandwidth demand is increasing dramatically, year on year, with typical growth figures of 50% for Internet-based traffic [5]. This trend means that consumers will have both a wide variety of devices attaching to their networks and a range of high-bandwidth service requirements. It also places a corresponding burden on the traffic-engineered network (often referred to as the "transport network") to support the current rate of traffic growth and to meet predicted future demands. As traffic demands increase and newer services continuously arise, novel network elements are needed to provide more flexibility, scalability, resilience, and adaptability in today's transport network. The transport network provides transparent, traffic-engineered communication of user, application, and device traffic between attached clients (software and hardware), establishing and maintaining point-to-point or point-to-multipoint connections. The research documented in this thesis was based on three initial research questions posed while performing research at British Telecom's research labs and investigating the control of future transport networks: 1. How can we meet Internet bandwidth growth yet minimise network costs? 2. Which enabling network technologies might be leveraged to control network layers and functions cooperatively, instead of controlling each network layer and technology separately? 3. Is it possible to utilise both centralised and distributed control mechanisms for automation and traffic optimisation? This thesis aims to provide the classification, motivation, invention, and evolution of a next generation control framework for transport networks, with special consideration given to delivering broadcast video traffic to UK subscribers. The document outlines pertinent telecoms technology and current art, the requirements I gathered and the research I conducted, how the functional components of the transport control framework were identified and selected, and how the architecture was implemented and applied to key research projects requiring next generation control capabilities, both at British Telecom and in the wider research community. Finally, in the closing chapters, the thesis outlines the next steps for ongoing research and development of the transport network framework and key areas for further study.
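    One of the research questions concerns combining centralised and distributed control for traffic optimisation; the fragment below is only a toy centralised path computation over link costs, of the kind a transport controller might run, and is not the control framework developed in the thesis. The topology and cost values are invented for illustration.

```python
# Toy centralized path computation over a transport-network topology,
# illustrating the kind of decision a central controller can make.
import heapq

def shortest_path(topology, src, dst):
    """Dijkstra over link costs; returns (total_cost, node list) or None."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, link_cost in topology.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + link_cost, nxt, path + [nxt]))
    return None

topology = {"London": {"Bristol": 2, "Leeds": 3},
            "Bristol": {"Cardiff": 1},
            "Leeds": {"Cardiff": 4},
            "Cardiff": {}}
print(shortest_path(topology, "London", "Cardiff"))   # (3, ['London', 'Bristol', 'Cardiff'])
```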

    Symmetry-Adapted Machine Learning for Information Security

    Get PDF
    Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that relies on the principle of anticipating future events by learning from past events or historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without the intervention of human authorities. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In our Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach can provide effective methods to handle the dynamic nature of security attacks through the extraction and analysis of data to identify hidden patterns in the data. The main topics of this Issue include malware classification, intrusion detection systems, image watermarking, color image watermarking, a battlefield target aggregation behavior recognition model, IP cameras, Internet of Things (IoT) security, service function chains, indoor positioning systems, and cryptanalysis.