Scaling Virtualized Smartphone Images in the Cloud
One of the goals of this Bachelor's thesis was to deploy the Android-x86 smartphone platform to a cloud environment and to determine whether the chosen instance type is sufficient for deploying a virtualized smartphone platform and how much load it can bear. The thesis used Amazon's M1 Small instance, which was sufficient for deploying the virtualized Android platform but performed worse than the mobile phone on which the tests were run. The M1 Medium instance type was more suitable and showed better results than the phone.
Load tests were carried out with the dedicated tool Tsung to see how many concurrent users an instance can bear. To run the tests, we installed a Tomcat server on the Dalvik instance.
After the single-instance tests, we attached Elastic Load Balancing and the Amazon Auto Scaling tool for automatic scaling. The former distributed load between the instances. We used the automatic scaling tool to apply horizontal scaling to our Android-x86 instances. When CPU usage rose above 60% for longer than one minute, an instance identical to the previous one was created and subsequent load was routed to it. This procedure was repeated as needed, up to a maximum of ten instances. Our implementation suffered setbacks: the Elastic Load Balancer timed out after 60 seconds, so we did not receive responses to all of the requests we sent. Writing the file sent to the server and compiling it were costly operations, so not all of them completed within 60 seconds. The tests run with the Load Balancer did not yield enough data to conclude whether the virtualized Android smartphone platform scales well or poorly.
In this thesis we deployed a smartphone image in an Amazon EC2 instance and ran stress tests on it to learn how many users one instance can bear and how scalable it is. We measured how long a method runs on a physical Android device and in a cloud instance. We deployed CyanogenMod and Dalvik on a single instance and used Tsung for stress testing. For those tests we also set up a Tomcat server on the Dalvik instance that accepted the incoming file; the file was compiled with the Java compiler, its class file was wrapped into dex, a Dalvik executable file, and the result was executed with Dalvik. Three instances formed a Tsung cluster that sent load to a Dalvik Virtual Machine instance. For scaling we used the Amazon Auto Scaling tool and an Elastic Load Balancer that divided the incoming load between the instances.
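The scale-out rule the thesis describes (CPU above 60% for over a minute triggers an identical new instance, up to ten) can be sketched as a toy replay function. The threshold, breach window, and instance cap come from the abstract; the function itself, its name, and the sampling interval are illustrative assumptions, not the AWS Auto Scaling API.

```python
# Toy model of the scale-out policy described above: if CPU stays above
# 60% for a full minute, add an identical instance, capped at 10 instances.
# Thresholds mirror the thesis setup; the rest is a hypothetical sketch.

CPU_THRESHOLD = 60.0   # percent, from the thesis
BREACH_SECONDS = 60    # sustained breach required before scaling out
MAX_INSTANCES = 10     # cap used in the thesis

def instances_needed(cpu_samples, sample_interval=10, current=1):
    """cpu_samples: chronological CPU readings, one every sample_interval
    seconds. Returns the instance count after replaying the scale-out rule."""
    breach = 0
    count = current
    for cpu in cpu_samples:
        if cpu > CPU_THRESHOLD:
            breach += sample_interval
            if breach >= BREACH_SECONDS and count < MAX_INSTANCES:
                count += 1
                breach = 0  # restart the timer after each scale-out
        else:
            breach = 0      # breach must be sustained, not cumulative
    return count
```

For example, six consecutive 10-second samples above 60% add one instance, while any dip below the threshold resets the timer.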
A Minimum-Cost Flow Model for Workload Optimization on Cloud Infrastructure
Recent technology advancements in the areas of compute, storage and
networking, along with the increased demand for organizations to cut costs
while remaining responsive to increasing service demands have led to the growth
in the adoption of cloud computing services. Cloud services provide the promise
of improved agility, resiliency, scalability and a lowered Total Cost of
Ownership (TCO). This research introduces a framework for minimizing cost and
maximizing resource utilization by using an Integer Linear Programming (ILP)
approach to optimize the assignment of workloads to servers on Amazon Web
Services (AWS) cloud infrastructure. The model is based on the classical
minimum-cost flow model, known as the assignment model.
Comment: 2017 IEEE 10th International Conference on Cloud Computing
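The assignment model the abstract refers to can be illustrated on a tiny example. The cost matrix below is hypothetical, and the brute-force enumeration is only viable for small inputs; the paper solves the same structure with ILP / minimum-cost flow formulations, which scale to realistic workload counts.

```python
from itertools import permutations

def cheapest_assignment(costs):
    """Assignment model on a square cost matrix: costs[w][s] is the
    (hypothetical) cost of placing workload w on server s. Returns the
    minimum total cost and the chosen workload-to-server mapping."""
    n = len(costs)
    best_cost, best_map = float("inf"), None
    for perm in permutations(range(n)):        # perm[w] = server for workload w
        total = sum(costs[w][perm[w]] for w in range(n))
        if total < best_cost:
            best_cost, best_map = total, perm
    return best_cost, best_map
```

With `costs = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]`, the optimum places workload 0 on server 1, workload 1 on server 0, and workload 2 on server 2, for a total cost of 5.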
Extending sensor networks into the cloud using Amazon web services
Sensor networks provide a method of collecting environmental data for use in a variety of distributed applications. However, to date, limited support has been provided for the development of integrated environmental monitoring and modeling applications. Specifically, environmental dynamism makes it difficult to provide computational resources that are sufficient to deal with changing environmental conditions. This paper argues that the Cloud Computing model is a good fit with the dynamic computational requirements of environmental monitoring and modeling. We demonstrate that Amazon EC2 can meet the dynamic computational needs of environmental applications. We also demonstrate that EC2 can be integrated with existing sensor network technologies to offer an end-to-end environmental monitoring and modeling solution.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Historically, high energy physics computing has been performed on large
purpose-built computing systems. These began as single-site compute facilities,
but have evolved into the distributed computing grids used today. Recently,
there has been an exponential increase in the capacity and capability of
commercial clouds. Cloud resources are highly virtualized and intended to be
able to be flexibly deployed for a variety of computing tasks. There is a
growing interest among the cloud providers to demonstrate the capability to
perform large-scale scientific computing. In this paper, we discuss results
from the CMS experiment using the Fermilab HEPCloud facility, which utilized
both local Fermilab resources and virtual machines in the Amazon Web Services
Elastic Compute Cloud. We discuss the planning, technical challenges, and
lessons learned involved in performing physics workflows on a large-scale set
of virtualized resources. In addition, we will discuss the economics and
operational efficiencies when executing workflows both in the cloud and on
dedicated resources.
Comment: 15 pages, 9 figures
Distributed Feature Extraction Using Cloud Computing Resources
The need to expand the computational resources in a massive surveillance network is clear, but the traditional approach of purchasing new equipment for short-term tasks every year is wasteful. In this work I will provide evidence in support of utilizing a cloud computing infrastructure to perform computationally intensive feature extraction tasks on data streams. Efficient off-loading of computational tasks to cloud resources will require minimizing the time needed to expand the cloud resources, an efficient model of communication, and a study of the interplay between the in-network computational resources and remote resources in the cloud. This report provides strong evidence that the use of cloud computing resources in a near real-time distributed sensor network surveillance system, ASAP, is feasible. A face detection web service operating on an Amazon EC2 instance is shown to provide processing of 10-15 frames per second.
Umakishore Ramachandran - Faculty Mentor; Rajnish Kumar - Committee Member/Second Reader
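The 10-15 frames per second figure above lends itself to a back-of-the-envelope capacity estimate: given a camera network's aggregate frame rate, how many such instances would the off-loaded detection service need? The camera counts and frame rates below are hypothetical inputs, not values from the report.

```python
import math

def instances_for_cameras(num_cameras, fps_per_camera, fps_per_instance=10):
    """Estimate EC2 instances needed to keep up with a camera network.
    Sized conservatively against the low end (10 fps) of the reported
    per-instance face-detection throughput."""
    total_fps = num_cameras * fps_per_camera
    return math.ceil(total_fps / fps_per_instance)
```

For instance, eight hypothetical cameras at 5 fps each produce 40 fps of load, which at 10 fps per instance calls for four instances.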
Survey and Analysis of Production Distributed Computing Infrastructures
This report has two objectives. First, we describe a set of the production
distributed infrastructures currently available, so that the reader has a basic
understanding of them. This includes explaining why each infrastructure was
created and made available and how it has succeeded and failed. The set is not
complete, but we believe it is representative.
Second, we describe the infrastructures in terms of their use, which is a
combination of how they were designed to be used and how users have found ways
to use them. Applications are often designed and created with specific
infrastructures in mind, with both an appreciation of the existing capabilities
provided by those infrastructures and an anticipation of their future
capabilities. Here, the infrastructures we discuss were often designed and
created with specific applications in mind, or at least specific types of
applications. The reader should understand how the interplay between the
infrastructure providers and the users leads to such usages, which we call
usage modalities. These usage modalities are really abstractions that exist
between the infrastructures and the applications; they influence the
infrastructures by representing the applications, and they influence the
applications by representing the infrastructures.