
    Size Estimation of Cloud Migration Projects with Cloud Migration Point (CMP)

    Refereed conference paper (2011–2012), accepted manuscript, published
    One major obstacle to enterprise adoption of cloud technologies has been the lack of visibility into migration effort and cost. In this paper, we present a methodology, called Cloud Migration Point (CMP), for estimating the size of cloud migration projects by recasting a well-known software size estimation model, Function Point (FP), into the context of cloud migration. We empirically evaluate our CMP model by performing a cross-validation on six different small-scale cloud migration projects and show that our size estimation model can be used as a reliable predictor for effort estimation. Furthermore, we prove that our CMP model satisfies the fundamental properties of a software size measure. Department of Computing.
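    As a hedged illustration of the Function-Point-style counting that CMP adapts, the sketch below computes a size score as a weighted sum over migration task categories. The categories and weights are invented for illustration; they are not the calibrated CMP component types from the paper.

```python
# Function-Point-style size count in the spirit of CMP. The task
# categories and weights below are illustrative assumptions, not the
# calibrated values from the paper.
WEIGHTS = {"data_migration": 7, "code_adaptation": 5, "config_change": 3}

def migration_points(counts):
    """Weighted sum of counted migration tasks, analogous to unadjusted FP."""
    return sum(WEIGHTS[task] * n for task, n in counts.items())

# A small project: 2 data stores, 4 modules to adapt, 6 config changes.
print(migration_points({"data_migration": 2, "code_adaptation": 4, "config_change": 6}))
```

    The resulting size score would then feed a regression against observed effort, as the paper's cross-validation suggests.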

    A Model Driven Framework for Portable Cloud Services

    Cloud computing is an evolving technology that offers significant benefits: you pay only for what you use, resources scale with demand, and less in-house staff and hardware are required. These benefits have led to a tremendous increase in the number of applications and services hosted in the cloud, which in turn has increased the number of cloud providers in the market. Cloud service providers are highly heterogeneous in the resources they use: each has its own servers, cloud infrastructure, APIs, and methods for accessing cloud resources. Despite these benefits, the lack of standards among service providers causes a high level of vendor lock-in when a software developer tries to change cloud provider. In this paper we give an overview of ongoing and current trends in the area of cloud service portability and propose a new cloud portability platform. Our platform is based on establishing feature models that offer the desired cloud portability. Our solution, DSkyL, uses feature models and domain model analysis to support development, customization, and deployment of application components across multiple clouds. The main goal of our approach is to reduce the effort and time needed to port applications across different clouds.
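    The feature-model idea behind DSkyL can be sketched as a coverage check: an application declares the features it requires, and only providers whose advertised feature set covers them are portability targets. The provider names and features below are illustrative assumptions, not DSkyL's actual models.

```python
# Toy feature-model check: a provider is a viable migration target only
# if its feature set covers everything the application requires.
# Provider names and feature sets are invented for illustration.
PROVIDERS = {
    "ProviderA": {"java", "mysql", "autoscaling", "object_storage"},
    "ProviderB": {"java", "postgres", "object_storage"},
    "ProviderC": {"java", "mysql", "object_storage"},
}

def portable_targets(required, providers=PROVIDERS):
    """Providers whose features cover everything the application needs."""
    return sorted(name for name, feats in providers.items()
                  if required <= feats)

print(portable_targets({"java", "mysql", "object_storage"}))
```

    A real feature model would also capture cross-tree constraints (e.g. a feature requiring or excluding another), which a plain subset test does not express.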

    Efficient and elastic management of computing infrastructures

    Full text link
    [EN] Modern data centers integrate many computers and electronic devices. However, some reports state that the mean usage of a typical data center is around 50% of its peak capacity, and the mean usage of each server is between 10% and 50%. A lot of energy is spent powering computer hardware that remains idle most of the time. It would therefore be possible to save energy simply by powering off the parts of the data center that are not actually in use, and powering them on again as they are needed. Most data centers have computing clusters that are used for intensive computing, recently evolving towards an on-premises Cloud service model. Although low-consumption components help, higher energy savings can be achieved by dynamically adapting the system to the actual workload. The main approach is to apply energy-saving criteria when scheduling jobs or virtual machines onto the working nodes, with the aim of powering off idle servers automatically. However, the power management of the servers must be scheduled so as to minimize the impact on end users and their applications. The objective of this thesis is the elastic and efficient management of cluster infrastructures, with the aim of reducing the costs associated with idle components. This objective is addressed by automating the power management of the working nodes in a computing cluster, and by proactively steering the load distribution, by means of memory overcommitment and live migration of virtual machines, to obtain idle resources that can be powered off. This automation is also of interest for virtual clusters, as they suffer from the same problem: while in physical clusters idle working nodes waste energy, in virtual clusters built from virtual machines the idle working nodes waste money in commercial Clouds or computational resources in an on-premises Cloud. Alfonso Laguna, CD. (2015). Efficient and elastic management of computing infrastructures [Unpublished doctoral thesis, by compendium]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57187
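    The consolidation idea, packing the virtual machines onto as few nodes as possible so the rest can be powered off, can be sketched with a simple first-fit-decreasing heuristic. This heuristic and the capacities below are illustrative stand-ins, not the thesis's actual scheduler.

```python
# First-fit-decreasing sketch of VM consolidation: place the largest
# loads first, opening a new node only when no powered-on node has room.
# Nodes left over can be powered off. Loads are in percent of capacity;
# all figures are invented for illustration.
def nodes_needed(vm_loads, capacity=100):
    nodes = []  # residual free capacity of each powered-on node
    for load in sorted(vm_loads, reverse=True):
        for i, free in enumerate(nodes):
            if load <= free:
                nodes[i] -= load
                break
        else:
            nodes.append(capacity - load)  # power on another node
    return len(nodes)

cluster_size = 6
vms = [50, 40, 30, 30, 20, 20, 10]
print(cluster_size - nodes_needed(vms))  # nodes that could be powered off
```

    In practice the decision must also weigh live-migration cost and power-on latency, which is why the thesis emphasizes scheduling the power management rather than reacting instantly.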

    Model-Driven Machine Learning for Predictive Cloud Auto-scaling

    Cloud provisioning of resources requires continuous monitoring and analysis of the workload on virtual computing resources. However, cloud providers offer only rule-based and schedule-based auto-scaling services. Auto-scaling is a cloud mechanism that reacts to real-time metrics and adjusts service instances based on predefined scaling policies. The challenge for this reactive approach is coping with fluctuating load. For data management applications, the workload changes continuously and needs to be forecast from historical trends and integrated with the auto-scaling service. We aim to discover changes and patterns across multiple resource-usage metrics: CPU, memory, and networking. To address this problem, learning-and-inference-based prediction is adopted to anticipate needs before the provisioning action. First, we develop a novel machine-learning-based auto-scaling process that learns multiple metrics for cloud auto-scaling decisions. This technique is used for continuous model training and workload forecasting, and the forecasting result triggers the auto-scaling process automatically. We also build the serverless functions of this process, including monitoring, machine learning, model selection, and scheduling, as microservices, orchestrating these independent services through platform- and language-orthogonal APIs. We demonstrate this architectural implementation on AWS and Microsoft Azure and show the prediction results from machine learning on the fly. Results show significant cost reductions by our proposed solution compared to a general threshold-based auto-scaling. Still, the machine learning prediction needs to be integrated with the auto-scaling system, which increases the deployment effort of devising the additional machine learning components.
    We therefore present a model-driven framework that defines first-class entities to represent machine learning algorithm types, inputs, outputs, parameters, and evaluation scores, together with rules for validating these machine learning entities. The connection between machine learning and the auto-scaling system is captured by two levels of abstraction: a cloud-platform-independent model and a cloud-platform-specific model. We automate both the model-to-model and model-to-deployment transformations, and integrate the model-driven approach with DevOps so that models are deployable and executable on a target cloud platform. We demonstrate our method with scaling configuration and deployment of two open-source benchmark applications, the Dell DVD Store and Netflix NDBench, on three cloud platforms: AWS, Azure, and Rackspace. The evaluation shows that our model-driven, inference-based auto-scaling reduces deployment effort by approximately 27% compared to ordinary auto-scaling.
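    The contrast between reactive threshold scaling and the forecast-driven decision can be sketched as follows, assuming a single CPU-utilization metric. The naive linear extrapolation below is a stand-in for the trained model, not the paper's actual forecasting technique.

```python
# Reactive vs. predictive scaling decisions on one CPU metric (0..1).
# The linear extrapolation is an illustrative stand-in for a trained
# machine learning forecaster; thresholds are invented.
def reactive_scale(cpu_now, instances, high=0.8, low=0.3):
    """Rule-based policy: act only after a threshold is already crossed."""
    if cpu_now > high:
        return instances + 1
    if cpu_now < low and instances > 1:
        return instances - 1
    return instances

def predictive_scale(cpu_history, instances, high=0.8):
    """Scale out ahead of time if the forecast crosses the threshold."""
    trend = cpu_history[-1] - cpu_history[-2]
    forecast = cpu_history[-1] + trend
    return instances + 1 if forecast > high else instances

# Load is rising: the reactive policy waits, the predictive one scales out early.
print(reactive_scale(0.75, 2), predictive_scale([0.5, 0.65, 0.75], 2))
```

    Acting on the forecast rather than the current reading is what lets new instances be ready before the load actually peaks, which is where the reported cost reduction comes from.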

    A Knowledge Based Decision Making Tool to Support Cloud Migration Decision Making

    Cloud computing is changing the way that IT services are delivered within enterprises. It promises to reduce the cost of computing services and to provide on-demand computing resources under a pay-per-use model. However, there are numerous challenges for enterprises planning to migrate to a cloud computing environment, as cloud computing affects multiple aspects of an enterprise and the implications of migration vary between enterprises. This paper discusses the development of a holistic model to support strategic decision making for cloud computing migration. The proposed model uses a hybrid approach, combining the analytic hierarchy process (AHP) with case-based reasoning (CBR) to provide a knowledge-based decision support model, and takes into account five factors identified from the secondary research as covering all aspects of cloud migration decision making. The paper discusses the different phases of the model and describes the next stage of the research, which will include the development of a prototype tool and its use to evaluate the model in a real-life context.
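    The AHP side of the proposed hybrid can be sketched as deriving factor weights from a pairwise comparison matrix via row geometric means. The three factors and the judgments below are illustrative assumptions, not the five factors identified in the paper.

```python
import math

# AHP priority vector from a reciprocal pairwise comparison matrix,
# using the row geometric mean. Factors (cost, security, effort) and
# judgments (1 = equal, 3 = moderate, 5 = strong preference) are
# invented for illustration.
def ahp_weights(matrix):
    """Normalized priority vector of a reciprocal comparison matrix."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

m = [[1,   3,   5],    # cost vs. security, cost vs. effort
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
print([round(w, 2) for w in ahp_weights(m)])
```

    In the paper's model these AHP weights would then be combined with CBR, retrieving past migration cases similar to the enterprise at hand.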

    Cloud computing: developing a cost estimation model for customers

    Cloud computing is an essential part of the digital transformation journey. It offers many benefits to organisations, including scalability and agility. Cloud customers see cloud computing as a moving train that every organisation needs to catch, so adoption decisions are made quickly in order to keep up with the trend. Such quick decisions have led to many disappointments for cloud customers and have raised questions about the cost of the cloud. This is partly because there is a lack of criteria or guidelines to help cloud customers get a complete picture of what is required of them before they move to the cloud. From another perspective, as new technologies force changes to organisational structures and business processes, it is important to understand how cloud computing changes IT and non-IT departments and how this can be translated into costs. Accordingly, this research uses the total cost of ownership approach and transaction cost theory to develop a customer-centric model for estimating the cost of cloud computing. The research methodology follows the Design Science Research approach: expert interviews were used to develop the model, which was then validated using four case studies. The model, named Sunny, identifies many costs that need to be estimated, helping to make the cloud-based digital transformation journey less cloudy. These costs include meta services, continuous contract management, monitoring, and ITSM adjustment. From an academic perspective, this research highlights the management effort required for cloud computing and how misleading the rapid provisioning potential of cloud resources can be. From a business perspective, proper estimation of these costs would help customers make informed decisions and vendors make realistic promises.
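    The "hidden" cost categories the Sunny model names can be illustrated with a toy total-cost-of-ownership roll-up. All figures below are invented placeholders, not results from the study.

```python
# Toy TCO roll-up including the management-side cost categories the
# Sunny model identifies alongside the direct usage fees. All annual
# figures are invented placeholders.
def cloud_tco(costs):
    """Total cost of ownership as the sum over all cost categories."""
    return sum(costs.values())

annual = {
    "compute_and_storage": 120_000,  # the part vendors usually quote
    "meta_services": 8_000,          # e.g. identity, logging add-ons
    "contract_management": 5_000,    # continuous contract management
    "monitoring": 6_000,
    "itsm_adjustment": 9_000,        # adapting IT service processes
}
print(cloud_tco(annual))
```

    The point of the model is that the four management-side lines, easy to omit in a quick adoption decision, are a material share of the total.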

    Application Migration Effort in the Cloud

    Over the last years, the utilization of cloud resources has been steadily rising, and an increasing number of enterprises are moving applications to the cloud. A leading trend is the adoption of Platform as a Service (PaaS) to support rapid application deployment. By providing a managed environment, cloud platforms take away much of the complex configuration effort required to build scalable applications. However, application migrations to and between clouds require development effort and open up new risks of vendor lock-in. This is problematic because frequent migrations may be necessary in the dynamic and fast-changing cloud market. So far, the effort of application migration in PaaS environments and the typical issues experienced in this task are hardly understood. To improve this situation, we present a cloud-to-cloud migration of a real-world application to seven representative cloud platforms. In this case study, we analyze the feasibility of the migrations in terms of portability, as well as their effort. We present a Docker-based deployment system that enables isolated and reproducible measurement of deployments to platform vendors, thus allowing platforms to be compared for a particular application. Using this system, the study identifies key problems during migrations and quantifies the differences between platforms with distinctive metrics.

    Measuring the orientations of hidden subvertical joints in highways rock cuts using ground penetrating radar in combination with LIDAR

    Mapping discontinuities in rock cuts and measuring their orientations is crucial in assessing the stability of rock masses. This is usually done with manual methods such as scanline surveys or with advanced techniques such as LIDAR. However, these methods map only exposed discontinuities, which may lead to underestimation in slope stability assessment. Accordingly, ground penetrating radar (GPR) has recently been used to detect such hidden discontinuities. A 400 MHz monostatic GPR antenna was able to detect and map hidden subvertical joints within 4 m depth in five sandstone highway rock cuts and within 3 m depth in two ignimbrite highway rock cuts in the State of Missouri. Manual 2D migration was performed to estimate, in 2D and 3D radargrams, the slope-face-perpendicular depths, measured from three coplanar etched points (the three index points) on each rock cut surface to the corresponding points on each plane of the detected subvertical joints. The orientations of the detected hidden joints were then determined using the three-point equation and the calibrated LIDAR coordinates. Some of these measurements were confirmed by closely matching field verification measurements. The results of this GPR-and-LIDAR-based investigation demonstrate that the proposed approach is straightforward and understandable, and can be valuable in rock engineering applications and rock cut design, providing the orientations of joints as well as the number of joint sets, which may build a clearer view of rock cut stability than before --Abstract, page iii
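    The classic three-point solution referenced in the abstract recovers a plane's orientation from three non-collinear points on it: the cross product of two in-plane vectors gives the normal, from which dip and dip direction follow. The coordinates below (x east, y north, z up) are illustrative, not survey data from the study.

```python
import math

# Three-point solution for a plane's orientation: normal via cross
# product, then dip angle and dip direction (degrees). Axes: x east,
# y north, z up. Example coordinates are invented for illustration.
def plane_orientation(p1, p2, p3):
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = [u[1]*v[2] - u[2]*v[1],          # normal = u x v
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
    if n[2] < 0:                          # make the normal point upward
        n = [-c for c in n]
    dip = math.degrees(math.atan2(math.hypot(n[0], n[1]), n[2]))
    dip_dir = math.degrees(math.atan2(n[0], n[1])) % 360  # azimuth from north
    return round(dip, 1), round(dip_dir, 1)

# A plane dipping 45 degrees toward the east (azimuth 090).
print(plane_orientation((0, 0, 0), (0, 1, 0), (1, 0, -1)))
```

    With an upward-pointing normal, the horizontal component of the normal points in the dip direction, which is why the azimuth is taken directly from `(n[0], n[1])`.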