EPOBF: Energy Efficient Allocation of Virtual Machines in High Performance Computing Cloud
Cloud computing has become a popular way of provisioning computing resources, under the virtual machine (VM) abstraction, for high performance computing (HPC) users to run their applications; an HPC cloud is such a cloud computing environment. One of the challenges of energy-efficient resource allocation for VMs in an HPC cloud is the trade-off between minimizing the total energy consumption of physical machines (PMs) and satisfying Quality of Service (e.g., performance). On the one hand, cloud providers want to maximize their profit by reducing power cost (e.g., by using the smallest number of running PMs). On the other hand, cloud customers (users) want the highest performance for their applications. In this paper, we focus on the scenario in which the scheduler has no global information about future user jobs and applications. Users request short-term resources at fixed start times and for uninterrupted durations. We then propose a new allocation heuristic, named Energy-aware and Performance-per-watt oriented Best-fit (EPOBF), that uses a performance-per-watt metric (e.g., maximum MIPS per watt) to choose the most energy-efficient PM for each VM. Using information from Feitelson's Parallel Workload Archive to model HPC jobs, we compare EPOBF to state-of-the-art heuristics on heterogeneous PMs (each PM has a multicore CPU). Simulations show that EPOBF significantly reduces total energy consumption in comparison with state-of-the-art allocation heuristics.
Comment: 10 pages, in Proceedings of the International Conference on Advanced Computing and Applications, Journal of Science and Technology, Vietnamese Academy of Science and Technology, ISSN 0866-708X, Vol. 51, No. 4B, 201
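The core placement step the abstract describes (pick the feasible, most energy-efficient PM by MIPS per watt for each arriving VM) can be sketched as follows. This is a minimal illustration under assumed data structures and a linear power model; the paper's actual heuristic and power model may differ.

```python
# Illustrative sketch of an EPOBF-style placement step (hypothetical data
# model; not the authors' implementation).
from dataclasses import dataclass, field

@dataclass
class PM:
    name: str
    mips: float            # total CPU capacity in MIPS
    watts: float           # power draw at full load
    free_mips: float = field(default=0.0)

    def __post_init__(self):
        if self.free_mips == 0.0:
            self.free_mips = self.mips

def epobf_place(vm_mips: float, pms: list) -> "PM | None":
    """Among PMs with enough free capacity, pick the one with the
    highest performance-per-watt ratio (MIPS per watt)."""
    feasible = [p for p in pms if p.free_mips >= vm_mips]
    if not feasible:
        return None
    best = max(feasible, key=lambda p: p.mips / p.watts)
    best.free_mips -= vm_mips
    return best

pms = [PM("slow", mips=4000, watts=200),    # 20 MIPS/W
       PM("fast", mips=12000, watts=400)]   # 30 MIPS/W
chosen = epobf_place(2000, pms)
print(chosen.name)  # the more energy-efficient PM: "fast"
```

Note the contrast with a pure best-fit on free capacity: here the tie-breaking criterion is energy efficiency, which is what lets the heuristic concentrate load on efficient hosts.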
High-Level Object Oriented Genetic Programming in Logistic Warehouse Optimization
DisertaÄŤnĂ práce je zaměřena na optimalizaci prĹŻbÄ›hu pracovnĂch operacĂ v logistickĂ˝ch skladech a distribuÄŤnĂch centrech. HlavnĂm cĂlem je optimalizovat procesy plánovánĂ, rozvrhovánĂ a odbavovánĂ. JelikoĹľ jde o problĂ©m patĹ™ĂcĂ do tĹ™Ădy sloĹľitosti NP-teĹľkĂ˝, je vĂ˝poÄŤetnÄ› velmi nároÄŤnĂ© nalĂ©zt optimálnĂ Ĺ™ešenĂ. MotivacĂ pro Ĺ™ešenĂ tĂ©to práce je vyplnÄ›nĂ pomyslnĂ© mezery mezi metodami zkoumanĂ˝mi na vÄ›deckĂ© a akademickĂ© pĹŻdÄ› a metodami pouĹľĂvanĂ˝mi v produkÄŤnĂch komerÄŤnĂch prostĹ™edĂch. Jádro optimalizaÄŤnĂho algoritmu je zaloĹľeno na základÄ› genetickĂ©ho programovánĂ Ĺ™ĂzenĂ©ho bezkontextovou gramatikou. HlavnĂm pĹ™Ănosem tĂ©to práce je a) navrhnout novĂ˝ optimalizaÄŤnĂ algoritmus, kterĂ˝ respektuje následujĂcĂ optimalizaÄŤnĂ podmĂnky: celkovĂ˝ ÄŤas zpracovánĂ, vyuĹľitĂ zdrojĹŻ, a zahlcenĂ skladovĂ˝ch uliÄŤek, kterĂ© mĹŻĹľe nastat bÄ›hem zpracovánĂ ĂşkolĹŻ, b) analyzovat historická data z provozu skladu a vyvinout sadu testovacĂch pĹ™ĂkladĹŻ, kterĂ© mohou slouĹľit jako referenÄŤnĂ vĂ˝sledky pro dalšà vĂ˝zkum, a dále c) pokusit se pĹ™edÄŤit stanovenĂ© referenÄŤnĂ vĂ˝sledky dosaĹľenĂ© kvalifikovanĂ˝m a trĂ©novanĂ˝m operaÄŤnĂm manaĹľerem jednoho z nejvÄ›tšĂch skladĹŻ ve stĹ™ednĂ EvropÄ›.This work is focused on the work-flow optimization in logistic warehouses and distribution centers. The main aim is to optimize process planning, scheduling, and dispatching. The problem is quite accented in recent years. The problem is of NP hard class of problems and where is very computationally demanding to find an optimal solution. The main motivation for solving this problem is to fill the gap between the new optimization methods developed by researchers in academic world and the methods used in business world. The core of the optimization algorithm is built on the genetic programming driven by the context-free grammar. 
The main contribution of the thesis is a) to propose a new optimization algorithm which respects the makespan, the utilization, and the congestions of aisles which may occur, b) to analyze historical operational data from warehouse and to develop the set of benchmarks which could serve as the reference baseline results for further research, and c) to try outperform the baseline results set by the skilled and trained operational manager of the one of the biggest warehouses in the middle Europe.
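The abstract's core technique, genetic programming driven by a context-free grammar, starts by growing candidate programs as random derivations from the grammar. The toy grammar and variable names below are illustrative assumptions, not the thesis' domain-specific scheduling grammar.

```python
# Minimal sketch of grammar-guided program generation: expand non-terminals
# by randomly chosen productions, bounding recursion depth.
import random

GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["*"]],
    "<var>":  [["waiting_time"], ["aisle_load"], ["distance"]],
}

def derive(symbol="<expr>", depth=0, max_depth=3):
    """Expand one symbol; past max_depth, force the shortest production
    so every derivation terminates."""
    if symbol not in GRAMMAR:            # terminal: emit as-is
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rules = [min(rules, key=len)]
    production = random.choice(rules)
    return " ".join(derive(s, depth + 1, max_depth) for s in production)

expr = derive()
print(expr)   # e.g. a random priority expression such as
              # "waiting_time * aisle_load + distance"
```

In a full grammar-guided GP loop, such expressions would be evaluated as dispatching rules against the benchmark instances, then selected and recombined across generations.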
Learning Scheduling Algorithms for Data Processing Clusters
Efficiently scheduling data processing jobs on distributed compute clusters
requires complex algorithms. Current systems, however, use simple generalized
heuristics and ignore workload characteristics, since developing and tuning a
scheduling policy for each workload is infeasible. In this paper, we show that
modern machine learning techniques can generate highly efficient policies
automatically. Decima uses reinforcement learning (RL) and neural networks to
learn workload-specific scheduling algorithms without any human instruction
beyond a high-level objective such as minimizing average job completion time.
Off-the-shelf RL techniques, however, cannot handle the complexity and scale of
the scheduling problem. To build Decima, we had to develop new representations
for jobs' dependency graphs, design scalable RL models, and invent RL training
methods for dealing with continuous stochastic job arrivals. Our prototype
integration with Spark on a 25-node cluster shows that Decima improves the
average job completion time over hand-tuned scheduling heuristics by at least
21%, achieving up to a 2x improvement during periods of high cluster load.
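The scheduling decision Decima learns (which runnable stage of which job DAG to serve next) can be caricatured as a learned scoring policy over candidate stages. The real system uses graph neural networks over the DAGs and policy-gradient training; the linear scorer and the feature set below are illustrative assumptions only.

```python
# Toy sketch of an RL-style scheduling policy: score each runnable stage
# with learned weights and sample from a softmax over the scores.
import math
import random

def features(stage):
    # Assumed per-stage features: remaining work, waiting tasks, bias term.
    return [stage["remaining_work"], stage["waiting_tasks"], 1.0]

def score(stage, w):
    return sum(wi * xi for wi, xi in zip(w, features(stage)))

def choose_stage(runnable, w):
    """Softmax policy over runnable stages (stochastic, as during RL
    training; a deployed policy might take the argmax instead)."""
    logits = [score(s, w) for s in runnable]
    m = max(logits)                      # subtract max for stability
    weights = [math.exp(l - m) for l in logits]
    return random.choices(runnable, weights=weights, k=1)[0]

runnable = [{"id": "jobA.map", "remaining_work": 50.0, "waiting_tasks": 8},
            {"id": "jobB.reduce", "remaining_work": 5.0, "waiting_tasks": 2}]
w = [-0.1, -0.05, 0.0]   # negative weights favor short stages (SJF-like)
print(choose_stage(runnable, w)["id"])
```

Training would adjust `w` (in Decima, network parameters) from observed job completion times, which is how workload-specific behavior emerges without hand-tuning.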
MORPHOSYS: efficient colocation of QoS-constrained workloads in the cloud
In hosting environments such as IaaS clouds, desirable application performance is usually guaranteed through the use of Service Level Agreements (SLAs), which specify minimal fractions of resource capacities that must be allocated for proper operation. Arbitrary colocation of applications with different SLAs on a single host may result in inefficient utilization of the host's resources. In this paper, we propose that periodic resource allocation and consumption models be used for a more granular expression of SLAs. Our proposed SLA model has the salient feature that it exposes flexibilities that enable the IaaS provider to safely transform SLAs from one form to another for the purpose of achieving more efficient colocation. Towards that goal, we present MorphoSys: a framework for a service that allows the manipulation of SLAs to enable efficient colocation of workloads. We present results from extensive trace-driven simulations of colocated Video-on-Demand servers in a cloud setting. The results show that potentially significant reductions in wasted resources (by as much as 60%) are possible using MorphoSys.
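A periodic SLA can be read as "reserve c units of a resource every period of t units," i.e. a utilization of c/t. The notation and the simple utilization check below are assumptions for illustration; the framework's actual transformation rules and schedulability tests are more involved.

```python
# Sketch of the periodic-SLA idea: colocation feasibility as a utilization
# check, plus one utilization-preserving SLA transformation.
from fractions import Fraction

def utilization(sla):
    c, t = sla
    return Fraction(c, t)

def fits_on_host(slas, capacity=1):
    """Necessary condition: total utilization must not exceed capacity."""
    return sum(utilization(s) for s in slas) <= capacity

def scale_period(sla, k):
    """(c, t) -> (k*c, k*t): same utilization, coarser granularity
    (this can change worst-case latency, so it is not always safe)."""
    c, t = sla
    return (k * c, k * t)

workloads = [(1, 4), (1, 2), (2, 10)]      # utilizations 1/4, 1/2, 1/5
print(fits_on_host(workloads))             # 19/20 <= 1 -> True
print(scale_period((1, 4), 3))             # -> (3, 12), still 1/4
```

The point of such transformations is exactly the flexibility the abstract mentions: reshaping SLAs without violating them so that more workloads pack onto fewer hosts.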
A Survey of Prediction and Classification Techniques in Multicore Processor Systems
In multicore processor systems, being able to accurately predict the future provides new optimization opportunities which otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that would guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has transformed from simple forecasting into sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques that span all layers of the computing stack. In this survey paper, we present a discussion of the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
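The survey's DVFS example, where a predictor of future demand drives the choice of frequency mode, can be sketched with one of the simplest predictors it covers, an exponentially weighted moving average. The frequency table and parameters below are made-up assumptions, not values from the paper.

```python
# Toy prediction-driven DVFS governor: EWMA forecasts next-interval CPU
# demand; the governor picks the lowest frequency that covers it.
FREQS_GHZ = [0.8, 1.6, 2.4]   # hypothetical available P-states

def ewma_predict(history, alpha=0.5):
    """EWMA over observed per-interval utilization samples in [0, 1]."""
    pred = history[0]
    for x in history[1:]:
        pred = alpha * x + (1 - alpha) * pred
    return pred

def pick_frequency(predicted_util, freqs=FREQS_GHZ):
    """Lowest frequency whose capacity covers the predicted demand,
    assuming utilization was measured at the highest frequency."""
    demand = predicted_util * freqs[-1]
    for f in freqs:
        if f >= demand:
            return f
    return freqs[-1]

history = [0.9, 0.5, 0.3, 0.2]   # load ramping down
p = ewma_predict(history)
print(round(p, 3), pick_frequency(p))   # -> 0.35 1.6
```

A reactive governor would still be running at the frequency the last interval needed; the predictive one steps down as soon as the forecast says the demand is falling, which is the proactive behavior the survey emphasizes.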
Business Process Elicitation, Modeling, and Reengineering: Teaching and Learning with Simulated Environments
The design of enterprise information systems requires students to master technical skills for eliciting, modeling, and reengineering business processes, as well as soft skills for information gathering and communication. These tacit skills and behaviors cannot be effectively taught to students; rather, they must be experienced and learned by students. This requires a pedagogical shift from teacher-centered teaching approaches towards learner-centered ones that invite students to participate more actively in the learning experience and to acquire and enhance such technical and soft skills. This paper introduces the “simulated environment” – a combination of role-playing activities that simulate organizational activities and several skills-development activities that hone technical and soft skills – as a pedagogical tool in the learner-centered teaching paradigm. It immerses students in a controlled learning environment that enables them to more clearly appreciate various aspects of systems design, business processes, and information sharing, and to acquire and develop the necessary skills.