3,382 research outputs found
Models and heuristics for robust resource allocation in parallel and distributed computing systems
Includes bibliographical references. This is an overview of the robust resource allocation research efforts that have been, and continue to be, conducted by the CSU Robustness in Computer Systems Group. Parallel and distributed computing systems, consisting of a (usually heterogeneous) set of machines and networks, frequently operate in environments where delivered performance degrades due to unpredictable circumstances. Such unpredictability can be the result of sudden machine failures, increases in system load, or errors caused by inaccurate initial estimates. The research into developing models and heuristics that create robust resource allocations for parallel and distributed computing systems is presented. This research was supported by NSF under grant No. CNS-0615170 and by the Colorado State University George T. Abell Endowment
Evaluating the Robustness of Resource Allocations Obtained through Performance Modeling with Stochastic Process Algebra
Recent developments in the field of parallel and distributed computing have led to a proliferation of efforts to solve large and computationally intensive mathematical, scientific, and engineering problems that consist of several parallelizable parts and several non-parallelizable (sequential) parts. In a parallel and distributed computing environment, the performance goal is to optimize the execution of the parallelizable parts of an application on concurrent processors. This requires efficient application scheduling and resource allocation for mapping applications to a set of suitable parallel processors such that the overall performance goal is achieved. However, such computational environments are often prone to unpredictable variations in application (problem and algorithm) and system characteristics. Therefore, a robustness study is required to guarantee a desired level of performance. Given an initial workload, a mapping of applications to resources is considered to be robust if that mapping optimizes execution performance and guarantees a desired level of performance in the presence of unpredictable perturbations at runtime. In this research, a stochastic process algebra, Performance Evaluation Process Algebra (PEPA), is used to obtain resource allocations via a numerical analysis of performance models of the parallel execution of applications on parallel computing resources. The PEPA performance model is translated into an underlying Markov chain model from which performance measures are obtained. Further, a robustness analysis of the allocation techniques is performed to find a robust mapping from a set of initial mapping schemes. The numerical analysis of the performance models has confirmed similarity with the simulation results of earlier research available in the existing literature.
When compared to direct experiments and simulations, numerical models and the corresponding analyses are easier to reproduce, do not incur any setup or installation costs, do not impose any prerequisites for learning a simulation framework, and are not limited by the complexity of the underlying infrastructure or simulation libraries
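As a minimal illustration of the kind of numerical analysis described above (not the authors' actual PEPA model), the sketch below builds a small continuous-time Markov chain generator matrix and solves for its steady-state distribution, from which a performance measure such as processor utilization can be read off. The states and rates are hypothetical.

```python
import numpy as np

# Hypothetical 3-state CTMC for one processor executing an application:
# state 0 = idle, 1 = executing a parallel part, 2 = executing a sequential part.
# Rates are illustrative only, not taken from any PEPA model in the abstract.
arrival, par_rate, seq_rate = 2.0, 3.0, 1.5

# Generator matrix Q: off-diagonal entries are transition rates,
# diagonal entries make each row sum to zero.
Q = np.array([
    [-arrival,  arrival,       0.0],
    [0.0,      -par_rate, par_rate],
    [seq_rate,  0.0,     -seq_rate],
])

# Steady-state distribution pi solves pi @ Q = 0 with sum(pi) = 1.
# Replace one balance equation with the normalization condition.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

utilization = 1.0 - pi[0]  # fraction of time the processor is busy
print(pi, utilization)
```

Tools such as PEPA automate the construction of Q from the process-algebra description; the linear solve above stands in for that numerical back end.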
Dynamic rescheduling heuristics for military village search environment
Statistical Multiplexing and Traffic Shaping Games for Network Slicing
Next generation wireless architectures are expected to enable slices of
shared wireless infrastructure which are customized to specific mobile
operators/services. Given infrastructure costs and the stochastic nature of
mobile services' spatial loads, it is highly desirable to achieve efficient
statistical multiplexing amongst such slices. We study a simple dynamic
resource sharing policy which allocates a 'share' of a pool of (distributed)
resources to each slice: Share Constrained Proportionally Fair (SCPF). We give a
characterization of SCPF's performance gains over static slicing and general
processor sharing. We show that higher gains are obtained when a slice's
spatial load is more 'imbalanced' than, and/or 'orthogonal' to, the aggregate
network load, and that the overall gain across slices is positive. We then
address the associated dimensioning problem. Under SCPF, traditional network
dimensioning translates to a coupled share dimensioning problem, which
characterizes the existence of a feasible share allocation given slices'
expected loads and performance requirements. We provide a solution to robust
share dimensioning for SCPF-based network slicing. Slices may wish to
unilaterally manage their users' performance via admission control which
maximizes their carried loads subject to performance requirements. We show this
can be modeled as a 'traffic shaping' game with an achievable Nash equilibrium.
Under high loads, the equilibrium is explicitly characterized, as are the gains
in the carried load under SCPF vs. static slicing. Detailed simulations of a
wireless infrastructure supporting multiple slices with heterogeneous mobile
loads show the fidelity of our models and range of validity of our high load
equilibrium analysis
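A rough sketch of the share-constrained proportionally fair idea, as we read it from the abstract (the toy topology, shares, and capacities below are assumptions): each user of a slice carries a weight equal to the slice's share divided by the slice's total number of users, and each base station splits its capacity among the users present in proportion to those weights.

```python
# Toy SCPF-style allocation: two slices sharing two base stations.
# Slice shares, user placements, and capacities are hypothetical.
shares = {"A": 0.6, "B": 0.4}
# users_at[bs][slice] = number of that slice's users at that base station
users_at = {"bs1": {"A": 3, "B": 1}, "bs2": {"A": 1, "B": 2}}
capacity = {"bs1": 100.0, "bs2": 100.0}  # e.g. Mbps per base station

# Total users per slice across the network.
totals = {s: sum(bs[s] for bs in users_at.values()) for s in shares}

# Per-user weight: the slice's share spread evenly over its users.
weight = {s: shares[s] / totals[s] for s in shares}

# Each base station splits capacity in proportion to the weights present.
rates = {}
for bs, counts in users_at.items():
    total_w = sum(counts[s] * weight[s] for s in shares)
    rates[bs] = {s: capacity[bs] * weight[s] / total_w for s in shares}

print(rates)  # per-user rate of each slice at each base station
```

This makes the multiplexing effect visible: a slice whose users cluster where the other slice is lightly loaded sees higher per-user rates than under static slicing, which would reserve a fixed fraction of every base station.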
Timely-Throughput Optimal Coded Computing over Cloud Networks
In modern distributed computing systems, unpredictable and unreliable
infrastructures result in high variability of computing resources. Meanwhile,
there is significantly increasing demand for timely and event-driven services
with deadline constraints. Motivated by measurements over Amazon EC2 clusters,
we consider a two-state Markov model for variability of computing speed in
cloud networks. In this model, each worker can be either in a good state or a
bad state in terms of the computation speed, and the transition between these
states is modeled as a Markov chain which is unknown to the scheduler. We then
consider a Coded Computing framework, in which the data is possibly encoded and
stored at the worker nodes in order to provide robustness against nodes that
may be in a bad state. With timely computation requests submitted to the system
with computation deadlines, our goal is to design the optimal computation-load
allocation scheme and the optimal data encoding scheme that maximize the timely
computation throughput (i.e., the average number of computation tasks that are
accomplished before their deadline). Our main result is the development of a
dynamic computation strategy called Lagrange Estimate-and-Allocate (LEA)
strategy, which achieves the optimal timely computation throughput. It is shown
that compared to the static allocation strategy, LEA increases the timely
computation throughput by 1.4X - 17.5X in various scenarios via simulations and
by 1.27X - 6.5X in experiments over Amazon EC2 clusters.
Comment: to appear in MobiHoc 201
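The two-state (good/bad) Markov model of worker speed described above can be sketched as a small simulation. The transition probabilities, speeds, load, and deadline below are made up for illustration; this models only the environment, not the LEA strategy itself.

```python
import random

random.seed(0)

# Hypothetical two-state Markov model of a worker's computing speed.
P = {"good": {"good": 0.9, "bad": 0.1},   # per-step transition probabilities
     "bad":  {"good": 0.3, "bad": 0.7}}
speed = {"good": 10.0, "bad": 2.0}        # work units processed per step

def run_worker(load, deadline, state="good"):
    """Return True if `load` work units finish within `deadline` steps."""
    done = 0.0
    for _ in range(deadline):
        done += speed[state]
        if done >= load:
            return True
        state = "good" if random.random() < P[state]["good"] else "bad"
    return False

# Empirical timely-completion probability for a given computation load.
trials = 10_000
hits = sum(run_worker(load=40.0, deadline=8) for _ in range(trials))
print(hits / trials)
```

A scheduler that does not observe the chain, as in the abstract, must estimate each worker's state from completed work and size the per-worker load accordingly; coding the data then lets any sufficiently large subset of timely workers recover the result.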
- …