    Towards an Environment for Efficient and Transparent Virtual Machine Operations: The ENTICE Approach

    Cloud computing is based on Virtual Machines (VMs) or containers, which provide their own software execution environment that can be deployed by facilitating technologies on top of various physical hardware. The use of VMs or containers is an efficient way to automate the overall software engineering and operations life-cycle. The benefits include elasticity and high scalability, which increase utilization efficiency and decrease operational costs. VMs and containers, as software artifacts, are created using provider-specific templates and are stored in proprietary or public repositories for further use. However, technology-specific choices may reduce their portability and lead to vendor lock-in, particularly when applications need to run in federated Clouds. In this paper, we present the current state of development of ENTICE, a novel VM repository and operational environment for federated Clouds. The ENTICE environment has been designed to receive unmodified and functionally complete VM images from its users and to transparently tailor and optimise them for specific Cloud infrastructures with respect to their size, configuration, and geographical distribution, so that they are loaded, delivered, and executed faster and with improved QoS compared to their current behaviour. Furthermore, a specific use case scenario for the ENTICE environment is provided and the underlying novel technologies are presented.
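
    A minimal sketch of the image-tailoring step the abstract describes, in Python; the function, the package-list data model and the target-cloud tag are illustrative assumptions, not the actual ENTICE interface:

    # Hypothetical sketch of an image-optimisation step in the spirit of
    # ENTICE; names and the package model are illustrative only.
    def optimise_image(image_packages, required_packages, target_cloud):
        """Keep only the packages the application actually needs, then
        tag the resulting image for a specific target cloud."""
        # Drop everything the application does not depend on.
        kept = [p for p in image_packages if p in required_packages]
        removed = set(image_packages) - set(kept)
        return {
            "cloud": target_cloud,
            "packages": kept,
            "saved_packages": len(removed),
        }

    base = ["kernel", "libc", "python3", "gcc", "texlive", "x11"]
    app_needs = {"kernel", "libc", "python3"}
    print(optimise_image(base, app_needs, "eu-west-1"))
    # -> image with 3 packages kept and 3 removed for the 'eu-west-1' target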

    Distributed Environment for Efficient Virtual Machine Image Management in Federated Cloud Architectures

    The use of Virtual Machines (VMs) in Cloud computing provides various benefits across the software engineering lifecycle, including efficient elasticity mechanisms that result in higher resource utilization and lower operational costs. VMs as software artifacts are created using provider-specific templates, called VM images (VMIs), and are stored in proprietary or public repositories for further use. However, some technology-specific choices can limit interoperability among Cloud providers and bundle the VMIs with non-essential or redundant software packages, leading to increased storage size, prolonged VMI delivery, slow VMI instantiation and, ultimately, vendor lock-in. To address these challenges, we present a set of novel functionalities and design approaches for the efficient operation of distributed VMI repositories, specifically tailored to enable: (i) simplified creation of lightweight, size-optimized VMIs tuned for specific application requirements; (ii) multi-objective VMI repository optimization; and (iii) an efficient reasoning mechanism that helps optimize complex VMI operations. The evaluation results confirm that the presented approaches can reduce VMI size by up to 55% while trimming image creation time by 66%. Furthermore, the repository optimization algorithms can reduce VMI delivery time by up to 51% and cut storage expenses by 3%. Moreover, by implementing replication strategies, the optimization algorithms can increase system reliability by 74%.
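
    The repository optimization can be illustrated with a toy placement routine; a weighted-sum scalarisation of delivery time and storage cost stands in here for the paper's actual multi-objective algorithms, and all names and weights are assumptions:

    # Illustrative sketch (not the ENTICE algorithms): pick repository
    # sites for a VMI replica by trading off expected delivery time
    # against storage cost, two of the objectives the abstract mentions.
    def place_replicas(sites, n_replicas, w_delivery=0.7, w_cost=0.3):
        """sites: list of dicts with 'name', 'delivery_ms', 'cost_gb_month'."""
        def score(site):
            # Lower is better for both objectives; weights are assumptions.
            return (w_delivery * site["delivery_ms"]
                    + w_cost * site["cost_gb_month"] * 1000)
        return sorted(sites, key=score)[:n_replicas]

    sites = [
        {"name": "eu-central", "delivery_ms": 40,  "cost_gb_month": 0.025},
        {"name": "us-east",    "delivery_ms": 120, "cost_gb_month": 0.021},
        {"name": "ap-south",   "delivery_ms": 200, "cost_gb_month": 0.018},
    ]
    print([s["name"] for s in place_replicas(sites, n_replicas=2)])
    # -> ['eu-central', 'us-east']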

    Optimization of an Earth Observation Data Processing and Distribution System

    Conventional Earth Observation Payload Data Ground Segments (PDGS) continuously receive variable requests for data processing and distribution. However, their architecture was conceived to run on the premises of satellite operators and has intrinsic limitations in offering such variable services. In this chapter, we consider cloud computing technology as an alternative for offering variable services. For that purpose, a cloud infrastructure based on OpenNebula and the PDGS used in the Deimos-2 mission were adapted with the objective of optimizing them using the ENTICE open-source middleware. Preliminary results with a realistic satellite recording scenario are presented.

    ENTICE VM Image Analysis and Optimised Fragmentation

    Virtual machine (VM) images (VMIs) often share common parts of significant size, yet they are stored individually. Applying existing de-duplication techniques to such images is non-trivial, imposes serious technical challenges, and requires direct access to clouds' proprietary image storage, which is not always feasible. We propose an alternative approach that splits images into shared parts, called fragments, which are stored only once. Our solution requires a reasonably small set of base images available in the cloud; beyond these, only the increments are stored, without the contents of the base images, providing significant storage space savings. Composite images, consisting of a base image and one or more fragments, are assembled on demand at VM deployment. Our technique can be used in conjunction with practically any popular cloud solution, and the storage of fragments is independent of the proprietary image storage of the cloud provider.
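
    The fragment idea lends itself to a small sketch; the path-to-hash data model below is an assumption for illustration, not the paper's actual image format:

    # Minimal sketch of fragment-based image storage: an image is a
    # mapping from file path to content hash; a fragment stores only
    # what the base image lacks, and the composite is rebuilt on demand.
    def make_fragment(image, base):
        """Everything in `image` that is not already provided by `base`."""
        return {path: h for path, h in image.items() if base.get(path) != h}

    def compose(base, fragment):
        """Assemble the composite image on demand at deployment time."""
        composite = dict(base)
        composite.update(fragment)   # fragment entries override the base
        return composite

    base  = {"/bin/sh": "a1", "/lib/libc.so": "b2"}
    image = {"/bin/sh": "a1", "/lib/libc.so": "b2", "/opt/app": "c3"}

    frag = make_fragment(image, base)          # {'/opt/app': 'c3'}
    assert compose(base, frag) == image        # round-trips losslessly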

    Multi-Criteria Decision-Making Approach for Container-based Cloud Applications: The SWITCH and ENTICE Workbenches

    Many emerging smart applications rely on the Internet of Things (IoT) to provide solutions to time-critical problems. When building such applications, a software engineer must address multiple Non-Functional Requirements (NFRs), including requirements for fast response time, low communication latency, high throughput, high energy efficiency, low operational cost, and the like. Existing container-based software engineering approaches promise to improve the software lifecycle; however, they fall short of tools and mechanisms for NFR management and optimisation. Our work addresses this problem with a new decision-making approach based on Pareto Multi-Criteria optimisation. Using different instance configurations in various geo-locations, we demonstrate the suitability of our method, which narrows the search space to only the optimal instances for the deployment of the containerised microservice. This solution is included in two advanced software engineering environments: the SWITCH workbench, which includes an Interactive Development Environment (IDE), and the ENTICE Virtual Machine and container images portal. The developed approach is particularly useful when building, deploying and orchestrating IoT applications across multiple computing tiers, from Edge-Cloudlet to Fog-Cloud data centres.
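
    The core of such a decision-making step is a non-dominated (Pareto) filter over candidate configurations; the sketch below uses illustrative metrics and names, not the SWITCH/ENTICE implementation:

    # Pareto filtering over instance configurations. All metrics here
    # are "lower is better" (latency, cost); a throughput metric would
    # be negated before being added to the tuple.
    def dominates(a, b):
        """a dominates b if it is no worse on every metric and better on one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(candidates):
        """candidates: list of (name, (latency_ms, cost_per_hour)) tuples."""
        return [c for c in candidates
                if not any(dominates(o[1], c[1])
                           for o in candidates if o is not c)]

    configs = [
        ("edge-small",  (12, 0.30)),
        ("fog-medium",  (25, 0.12)),
        ("cloud-large", (80, 0.10)),
        ("cloud-xl",    (90, 0.40)),   # dominated by every other option
    ]
    print([name for name, _ in pareto_front(configs)])
    # -> ['edge-small', 'fog-medium', 'cloud-large']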

    A reactive architecture for cloud-based system engineering

    PhD thesis. Software system engineering is increasingly practised across globally distributed locations, a practice termed Global Software Development (GSD). GSD has become a business necessity, mainly because of the scarcity of resources, cost, and the need to locate development closer to customers. GSD is highly dependent on requirements management, but system requirements change continuously. Poorly managed requirement changes affect the overall cost, schedule and quality of GSD projects. It is particularly challenging to manage and trace such changes, and hence a rigorous requirement change management (RCM) process is required. RCM is not trivial even in collocated software development; the geographical, cultural, social and temporal factors present in GSD make it profoundly more difficult, and existing RCM methods do not take these issues into consideration. Considering the state of the art in RCM, design and analysis of architecture, and cloud accountability, this work contributes: (1) an alternative and novel mechanism for effective information and knowledge sharing towards RCM and traceability; (2) a novel methodology for the design and analysis of small-to-medium size cloud-based systems, with a particular focus on the trade-off of quality attributes; (3) a dependable framework that facilitates the RCM and traceability method for cloud-based system engineering; (4) a novel methodology for assuring cloud accountability in terms of dependability; and (5) a cloud-based framework to facilitate the cloud accountability methodology. The results show a traceable RCM linkage between system engineering processes and stakeholder requirements for cloud-based GSD projects that improves on existing approaches, as well as improved dependability assurance for systems interfacing with the unpredictable cloud environment. We conclude that RCM with a clear focus on traceability, facilitated by a dependable framework, improves the chance of developing a cloud-based GSD project successfully.

    A service broker for Intercloud computing

    This thesis aims to assist users in finding the most suitable Cloud resources, taking into account their functional and non-functional SLA requirements. A key feature of the work is a Cloud service broker acting as a mediator between consumers and Clouds. The research involves the implementation and evaluation of two SLA-aware match-making algorithms in a simulation environment. The work also investigates the optimal deployment of multi-Cloud workflows on Intercloud environments.
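
    A toy version of SLA-aware match-making, assuming hypothetical offer attributes (region, price, availability) rather than the thesis' actual algorithms, could look like this:

    # Offers are first filtered on hard functional requirements, then
    # ranked by how well they satisfy the non-functional SLA targets.
    def match(offers, required_region, max_price, min_availability):
        feasible = [o for o in offers
                    if o["region"] == required_region
                    and o["price"] <= max_price]
        # Rank by SLA headroom: how far availability exceeds the minimum.
        ranked = sorted(feasible,
                        key=lambda o: o["availability"] - min_availability,
                        reverse=True)
        return [o for o in ranked if o["availability"] >= min_availability]

    offers = [
        {"name": "cloudA", "region": "eu", "price": 0.10, "availability": 0.999},
        {"name": "cloudB", "region": "eu", "price": 0.08, "availability": 0.990},
        {"name": "cloudC", "region": "us", "price": 0.05, "availability": 0.999},
    ]
    print([o["name"] for o in match(offers, "eu", 0.12, 0.995)])  # -> ['cloudA']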

    A cloud infrastructure for scalable computing on population imaging databanks

    This article describes the software architecture designed to cope with the computing demands of research usage of complex data from the imaging biobank of the Regional Ministry of Health in the Valencia Region (CS). It proposes the use of self-configured virtual clusters on top of on-premises and public cloud infrastructures. It uses a model based on recipes and autoconfiguration to deploy virtual elastic clusters that adjust themselves to the actual workload of the study, thereby reducing operating costs and avoiding the need for up-front investments by either the imaging biobank or the final user. All the software used is released under open-source licenses.
    Blanquer Espert, I.; Caballer Fernández, M.; Martí-Bonmatí, L.; Alberich Bayarri, A.; De La Iglesia Vayá, MDLD.; Martínez, J. (2015). A cloud infrastructure for scalable computing on population imaging databanks. International Journal of Image Mining, 1(2/3), 175-187. doi:10.1504/IJIM.2015.073015
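
    The elasticity model can be illustrated with a toy sizing rule; the one-job-per-node model and the bounds are assumptions for illustration, not the deployed system's policy:

    # The virtual cluster grows with the job queue and shrinks when
    # nodes sit idle, within fixed bounds, so capacity tracks workload.
    def target_size(current_nodes, queued_jobs, running_jobs,
                    min_nodes=1, max_nodes=16):
        busy = running_jobs                  # one job per node, assumed
        if queued_jobs > 0:                  # backlog: grow the cluster
            wanted = busy + queued_jobs
        else:                                # no backlog: release idle nodes
            wanted = busy
        return max(min_nodes, min(max_nodes, wanted))

    print(target_size(current_nodes=4,  queued_jobs=10, running_jobs=4))  # -> 14
    print(target_size(current_nodes=14, queued_jobs=0,  running_jobs=3))  # -> 3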

    Efficient and elastic management of computing infrastructures

    Thesis by compendium. Modern data centers integrate large numbers of computing and electronic devices. However, some reports state that the mean usage of a typical data center is around 50% of its peak capacity, and that the mean usage of each server is between 10% and 50%. A great deal of energy is thus spent powering hardware that remains idle most of the time, so it would be possible to save energy simply by powering off the parts of the data center that are not actually being used and powering them on again as they are needed. Most data centers contain computing clusters used for intensive computing, recently evolving towards an on-premises Cloud service model. While low-consumption components help, higher energy savings can be achieved by dynamically adapting the system to the actual workload. The main approach is to apply energy-saving criteria when scheduling jobs or virtual machines onto the working nodes, with the aim of powering off idle servers automatically. However, the power management of the servers must be planned so as to minimize the impact on end users and their applications. The objective of this thesis is the elastic and efficient management of cluster infrastructures, with the aim of reducing the costs associated with idle components. This objective is addressed by automating the power management of the working nodes in a computing cluster, and by proactively steering the load distribution, through memory overcommitment and live migration of virtual machines, to obtain idle resources that can be powered off. This automation is also of interest for virtual clusters, which suffer from the same problem: while idle working nodes in physical clusters waste energy, idle working nodes in virtual clusters built from virtual machines waste money in commercial Clouds or computational resources in an on-premises Cloud.
    Alfonso Laguna, C. D. (2015). Efficient and elastic management of computing infrastructures [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57187
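
    A minimal sketch of the power-management decision described above, assuming a simple VM-count data model rather than the thesis' actual scheduler:

    # Idle nodes (hosting no VMs) are powered off, except those kept
    # on to absorb pending work; thresholds and model are illustrative.
    def plan_power(nodes, pending_jobs):
        """nodes: dict name -> number of VMs currently hosted."""
        idle = [n for n, vms in nodes.items() if vms == 0]
        keep_on = idle[:pending_jobs]        # reserve capacity for the queue
        return {"keep_on": keep_on, "power_off": idle[len(keep_on):]}

    cluster = {"wn1": 3, "wn2": 0, "wn3": 0, "wn4": 1}
    print(plan_power(cluster, pending_jobs=1))
    # -> {'keep_on': ['wn2'], 'power_off': ['wn3']}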