
    IMPLEMENTATION OF A SCALING TECHNIQUE IN A WEB-BASED SERVER LOAD-BALANCING MANAGEMENT SYSTEM

    The growing number of internet users can increase the number of visitors to a website. As visitor numbers rise, server performance becomes suboptimal in providing the resources needed to accept requests; one consequence is that the server can go down. A solution to this problem is to apply a scaling technique to redirect the requests arriving at the server. With scaling, the server can control its storage when a high volume of requests occurs, so the system can balance load across servers. In this study, two Amazon Elastic Compute Cloud (EC2) servers are used. The scaling technique manages the server network using horizontal scaling, with CPU and memory as the monitored parameters. The system uses the httperf tool to generate requests and iptables to reject all connection requests over the ICMP protocol at the server. Rejecting requests at the server lowers CPU and memory usage. The output comprises three data sets: CPU usage, memory usage, and response time, each measured with and without server scaling. The final results show that rejected requests affect CPU and memory usage, with average decreases of 6.07% per second and 2.9 MB per second, respectively. The average response time without scaling is 564.4 ms, whereas response time with scaling has no request value because the connections are rejected by the server.
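
    A minimal Python sketch of the horizontal-scaling check this abstract describes; the thresholds, metric values, and function names are illustrative assumptions, not taken from the paper (the actual study drives load with httperf and rejects ICMP via iptables):

        def should_scale_out(cpu_percent, memory_mb, cpu_limit=80.0, memory_limit=900.0):
            """Flag scale-out when either monitored resource crosses its threshold."""
            return cpu_percent >= cpu_limit or memory_mb >= memory_limit

        # Example: a request burst pushes CPU to 85% on the first EC2 server,
        # so new requests are redirected to the second server.
        if should_scale_out(cpu_percent=85.0, memory_mb=640.0):
            print("scale out: redirect requests to the second EC2 server")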

    OCSO: Off-the-cloud service optimization for green efficient service resource utilization

    Many efforts have been made to optimize cloud service resource management for efficient service provision and delivery, yet little research addresses how to consume the provisioned service resources efficiently. Meanwhile, typical existing resource scaling management approaches often rest on statistics from a single monitoring category and are driven by fixed threshold algorithms, so they usually fail to function effectively when dealing with complicated and unpredictable workload patterns. Fundamentally, this is due to the inflexibility of using static monitor, threshold, and scaling parameters. This paper presents Off-the-Cloud Service Optimization (OCSO), a novel user-side optimization solution that specifically addresses service resource consumption efficiency from the service consumer's perspective. OCSO rests on an intelligent resource scaling algorithm that relies on multiple service monitor metrics plus dynamic threshold and scaling parameters. It can achieve proactive and continuous service optimization for both real-world IaaS and PaaS services through the OCSO cloud service API. Two series of experiments conducted over Amazon EC2 and ElasticBeanstalk using the OCSO prototype demonstrate that the proposed approach can significantly improve on Amazon's native automated service provision and scaling options, whether scaling up/down or in/out
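
    An illustrative Python sketch of a multi-metric scaling decision with dynamic thresholds, in the spirit of what the abstract describes; the adaptation rule, metric names, and slack factor are assumptions, not OCSO's actual algorithm:

        from statistics import mean

        def dynamic_threshold(history, slack=0.15):
            # Adapt the threshold to the recent workload instead of fixing it.
            return mean(history) * (1.0 + slack)

        def scale_decision(metrics):
            # Scale out if any metric's latest sample exceeds its adaptive threshold.
            for name, history in metrics.items():
                if history[-1] > dynamic_threshold(history[:-1]):
                    return "scale out (%s above adaptive threshold)" % name
            return "hold"

        print(scale_decision({"cpu": [40, 45, 42, 75], "net_kbps": [300, 310, 290, 305]}))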

    A service broker for Intercloud computing

    This thesis aims at assisting users in finding the most suitable Cloud resources, taking into account their functional and non-functional SLA requirements. A key feature of the work is a Cloud service broker acting as a mediator between consumers and Clouds. The research involves the implementation and evaluation of two SLA-aware match-making algorithms using a simulation environment. The work also investigates the optimal deployment of Multi-Cloud workflows on Intercloud environments
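
    A toy match-making pass, sketched in Python under assumed offer and requirement fields; the thesis's actual SLA-aware algorithms and simulation environment are not reproduced here:

        def satisfies(offer, req):
            # Functional filter: the offer must meet every hard requirement.
            return offer["vcpus"] >= req["vcpus"] and offer["region"] == req["region"]

        def rank(offers, req):
            # Non-functional ranking: cheapest offer with sufficient availability.
            feasible = [o for o in offers
                        if satisfies(o, req) and o["availability"] >= req["availability"]]
            return sorted(feasible, key=lambda o: o["price_per_hour"])

        offers = [
            {"vcpus": 4, "region": "eu", "availability": 99.9, "price_per_hour": 0.20},
            {"vcpus": 8, "region": "eu", "availability": 99.5, "price_per_hour": 0.15},
        ]
        print(rank(offers, {"vcpus": 4, "region": "eu", "availability": 99.9}))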

    A SIMPLE MOVING AVERAGE PREDICTION MODEL FOR AUTO-SCALING IN CLOUD COMPUTING

    Simple Moving Average (SMA) is a time-series-based prediction method whose computation is simple compared with other methods. The SMA prediction model is analyzed under stable and fluctuating conditions to evaluate its performance against the test parameters Mean Time Between Failures (MTBF), Mean Time To Repair (MTTR), Operational Availability (AO), down time, and up time. The implemented model achieves an AO value above 96%
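
    A minimal Python sketch of the two quantities the abstract names; the window size and sample values are illustrative only:

        def sma(series, window=3):
            # Simple Moving Average: the mean of the last `window` observations.
            return sum(series[-window:]) / window

        def operational_availability(up_time, down_time):
            # AO = UpTime / (UpTime + DownTime).
            return up_time / (up_time + down_time)

        load = [52.0, 55.0, 61.0, 58.0]  # e.g. CPU load samples
        print("next-step forecast: %.1f" % sma(load))                        # 58.0
        print("AO: %.2f%%" % (100 * operational_availability(960.0, 40.0)))  # 96.00%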

    Empirical Evaluation of Cloud IAAS Platforms using System-level Benchmarks

    Cloud Computing is an emerging paradigm in the field of computing where scalable IT-enabled capabilities are delivered ‘as-a-service’ using Internet technology. The Cloud industry adopted three basic types of computing service models based on the level of software abstraction: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Infrastructure-as-a-Service allows customers to outsource fundamental computing resources such as servers, networking, and storage, as well as services, where the provider owns and manages the entire infrastructure and customers pay only for the resources they consume. In a fast-growing IaaS market with multiple cloud platforms offering IaaS services, the user's decision on selecting the best IaaS platform is quite challenging. It is therefore very important for organizations to evaluate and compare the performance of different IaaS cloud platforms in order to minimize cost and maximize performance. Using a vendor-neutral approach, this research focused on four of the top IaaS cloud platforms: Amazon EC2, Microsoft Azure, Google Compute Engine, and Rackspace cloud services. It compared the performance of these IaaS cloud platforms using system-level parameters including server, file I/O, and network. System-level benchmarking provides an objective comparison of the IaaS cloud platforms from a performance perspective. Unixbench, Dbench, and Iperf are the system-level benchmarks chosen to test the performance of the server, file I/O, and network, respectively. To capture performance variability, the benchmark tests were performed at different time periods on weekdays and weekends. Each IaaS platform's performance was also tested using various parameters. The benchmark tests conducted on different virtual machine (VM) configurations should help cloud users select the best IaaS platform for their needs and, based on their applications' requirements, give them a clearer picture of which VM configuration to choose. In addition to the performance evaluation, the price-per-performance value of all the IaaS cloud platforms was also examined
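
    A hedged Python sketch of the price-per-performance comparison the abstract mentions; the scores and hourly prices below are made-up placeholders, not the study's measurements:

        def price_per_performance(benchmark_score, price_per_hour):
            # Higher is better: benchmark units obtained per dollar-hour.
            return benchmark_score / price_per_hour

        platforms = {
            "EC2":   (1450.0, 0.10),  # (Unixbench-style score, USD/hour) - placeholders
            "Azure": (1380.0, 0.09),
        }
        for name, (score, price) in platforms.items():
            print("%s: %.0f score-units per $/hour" % (name, price_per_performance(score, price)))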

    Elastic Build System in a Hybrid Cloud Environment

    Linux-based operating systems such as MeeGo consist of thousands of modular packages. Compiling source code and packaging software is an automated but computationally heavy task. Fast and cost-efficient software building is one of the requirements for rapid software development and testing. Meanwhile, the arrival of cloud services makes it easier to buy computing infrastructure and platforms over the Internet. The difference from earlier hosting services is agility: services are accessible within minutes of the request, and the customer pays only for what is used. This thesis examines how cloud services could be leveraged to ensure sufficient computing capacity for a software build system. The chosen system is Open Build Service, a centrally managed distributed build system capable of building packages for MeeGo among other distributions. As the load on a build cluster can vary greatly, a local infrastructure is difficult to provision efficiently, so virtual machines from the cloud could be acquired temporarily to accommodate the fluctuating demand. The main issues are whether the cloud can be utilized safely and whether it is time-efficient to transfer computational jobs to an outside service. A MeeGo-enabled instance of Open Build Service was first set up in-house, running a management server and a server for the workers that build the packages. A virtual machine template for cloud workers was created; virtual machines created from this template start the worker program and connect to the management server through a secured tunnel. A service manager script was then implemented to monitor jobs and worker usage and to decide whether new machines should be requested from the cloud or idle ones terminated. This elasticity is automated and can scale up in a matter of minutes. The service manager also features cost optimizations implemented with a specific cloud service (Amazon Web Services) in mind. The latency between the in-house infrastructure and the cloud did not prove insurmountable, but as each virtual machine from the cloud has a starting delay of about three minutes, the system reacts fairly slowly to increasing demand. The main advantage of cloud usage is the seemingly infinite number of machines available, ideal for building a large number of packages in parallel. Packages may need other packages during building, which prevents the system from building all packages in parallel, so powerful workers are needed to quickly build larger bottleneck packages. Finding the balance between the number and performance of workers is one of the issues for future research. To ensure high availability, improvements should be made to the service manager, and a separate virtual infrastructure manager should be used to utilize multiple cloud providers. In addition, mechanisms are needed to keep proprietary source code on in-house workers and to ensure that malicious code cannot be injected into the system via packages originating from open development communities
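
    A simplified Python sketch of the kind of decision loop the service manager described above might run; the thresholds and actions are assumptions, not the thesis's actual script:

        def plan(pending_jobs, idle_workers, jobs_per_worker=2):
            # Request cloud workers under backlog; release idle ones under slack.
            if pending_jobs > jobs_per_worker and idle_workers == 0:
                return "request new cloud worker"   # ~3 min start-up delay applies
            if pending_jobs == 0 and idle_workers > 0:
                return "terminate one idle worker"  # pay-per-use cost optimization
            return "hold"

        print(plan(pending_jobs=5, idle_workers=0))  # -> request new cloud worker
        print(plan(pending_jobs=0, idle_workers=2))  # -> terminate one idle worker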

    Model-Driven Machine Learning for Predictive Cloud Auto-scaling

    Get PDF
    Cloud provisioning of resources requires continuous monitoring and analysis of the workload on virtual computing resources, yet cloud providers typically offer only rule-based and schedule-based auto-scaling services. Auto-scaling reacts to real-time metrics and adjusts service instances based on predefined scaling policies; the challenge of this reactive approach is coping with fluctuating load. For data management applications, the workload changes over time and needs to be forecast from historical trends and integrated with the auto-scaling service. We aim to discover changes and patterns across multiple resource usage metrics: CPU, memory, and networking. To address this problem, learning-and-inference-based prediction is adopted to predict needs before the provisioning action. First, we develop a novel machine learning-based auto-scaling process that learns multiple metrics for the cloud auto-scaling decision; this technique is used for continuous model training and workload forecasting, and the forecasting result triggers the auto-scaling process automatically. We also build the serverless functions of this machine learning-based process, including monitoring, machine learning, model selection, and scheduling, as microservices, orchestrating these independent services through platform- and language-orthogonal APIs. We demonstrate this architectural implementation on AWS and Microsoft Azure and show the prediction results from machine learning on the fly. Results show significant cost reductions from our proposed solution compared to a general threshold-based auto-scaling. Integrating the machine learning prediction with the auto-scaling system, however, increases the deployment effort of devising the additional machine learning components. We therefore present a model-driven framework that defines first-class entities to represent machine learning algorithm types, inputs, outputs, parameters, and evaluation scores, with rules for validating machine learning entities. The connection between the machine learning and auto-scaling systems is represented by two levels of abstraction: a cloud platform independent model and a cloud platform specific model. We automate the model-to-model and model-to-deployment transformations and integrate the model-driven approach with DevOps to make models deployable and executable on a target cloud platform. We demonstrate our method with scaling configuration and deployment of two open-source benchmark applications, Dell DVD Store and Netflix NDBench, on three cloud platforms: AWS, Azure, and Rackspace. The evaluation shows that our model-driven, inference-based auto-scaling reduces deployment effort by approximately 27% compared to ordinary auto-scaling
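
    An illustrative Python sketch of prediction-triggered (proactive) scaling of the kind the abstract describes; the linear-trend model, threshold, and sample values are assumptions, not the paper's method:

        import numpy as np

        def forecast_next(cpu_history):
            # Fit a linear trend to recent CPU samples and extrapolate one step ahead.
            t = np.arange(len(cpu_history))
            slope, intercept = np.polyfit(t, cpu_history, 1)
            return slope * len(cpu_history) + intercept

        cpu = [35.0, 42.0, 50.0, 58.0, 66.0]
        predicted = forecast_next(cpu)
        # Proactive: act on the forecast before the threshold is actually crossed.
        action = "scale out ahead of load" if predicted > 70.0 else "hold"
        print("forecast %.1f%% -> %s" % (predicted, action))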