Optimal Choice of Threshold in Two Level Processor Sharing
We analyze the Two Level Processor Sharing (TLPS) scheduling discipline with
the hyper-exponential job size distribution and with the Poisson arrival
process. TLPS is a convenient model for studying the benefit of file-size-based
differentiation in TCP/IP networks. In the case of the hyper-exponential job
size distribution with two phases, we find a closed form analytic expression
for the expected sojourn time and an approximation for the optimal value of the
threshold that minimizes the expected sojourn time. In the case of the
hyper-exponential job size distribution with more than two phases, we derive a
tight upper bound for the expected sojourn time conditioned on the job size. We
show that when the variance of the job size distribution increases, the gain in
system performance increases and the sensitivity to the choice of the threshold
near its optimal value decreases.
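As a rough illustration of the job-size model above (not the paper's closed-form analysis), the sketch below samples a two-phase hyper-exponential distribution and checks its exact first two moments; the parameters p, mu1, mu2 are hypothetical, chosen so that short jobs dominate and the variance is high:

```python
import random

def h2_moments(p, mu1, mu2):
    """Exact first two moments of a two-phase hyper-exponential job size:
    with probability p the size is Exp(mu1), otherwise Exp(mu2)."""
    m1 = p / mu1 + (1 - p) / mu2
    m2 = 2 * p / mu1**2 + 2 * (1 - p) / mu2**2
    return m1, m2

def sample_h2(p, mu1, mu2, rng=random):
    """Draw one hyper-exponential job size."""
    rate = mu1 if rng.random() < p else mu2
    return rng.expovariate(rate)

# Hypothetical parameters: 90% short jobs, 10% long jobs.
p, mu1, mu2 = 0.9, 10.0, 0.5
m1, m2 = h2_moments(p, mu1, mu2)
scv = m2 / m1**2 - 1   # squared coefficient of variation; > 1 for any H2
print(m1, scv)
```

The squared coefficient of variation exceeds 1 for any hyper-exponential mix, which is the high-variability regime where the abstract reports the largest gain from threshold-based differentiation.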
Towards Optimality in Parallel Scheduling
To keep pace with Moore's law, chip designers have focused on increasing the
number of cores per chip rather than single core performance. In turn, modern
jobs are often designed to run on any number of cores. However, to effectively
leverage these multi-core chips, one must address the question of how many
cores to assign to each job. Given that jobs receive sublinear speedups from
additional cores, there is an obvious tradeoff: allocating more cores to an
individual job reduces the job's runtime, but in turn decreases the efficiency
of the overall system. We ask how the system should schedule jobs across cores
so as to minimize the mean response time over a stream of incoming jobs.
To answer this question, we develop an analytical model of jobs running on a
multi-core machine. We prove that EQUI, a policy which continuously divides
cores evenly across jobs, is optimal when all jobs follow a single speedup
curve and have exponentially distributed sizes. EQUI requires jobs to change
their level of parallelization while they run. Since this is not possible for
all workloads, we consider a class of "fixed-width" policies, which choose a
single level of parallelization, k, to use for all jobs. We prove that,
surprisingly, it is possible to achieve EQUI's performance without requiring
jobs to change their levels of parallelization by using the optimal fixed level
of parallelization, k*. We also show how to analytically derive the optimal k*
as a function of the system load, the speedup curve, and the job size
distribution.
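A fixed-width policy can be caricatured with a toy queueing model (this is an illustrative sketch, not the paper's derivation of k*): with n cores and width k, the system behaves like an M/M/c queue with c = n/k servers, each serving at rate s(k)·mu. The speedup curve s(k) = k^0.5 and all numeric parameters below are hypothetical assumptions:

```python
from math import factorial

def erlang_c(c, a):
    """Erlang-C waiting probability for an M/M/c queue, offered load a = lam/mu."""
    s = sum(a**n / factorial(n) for n in range(c))
    top = a**c / factorial(c)
    return top / ((1 - a / c) * s + top)

def mean_response(lam, mu, speedup, k, n_cores):
    """Toy fixed-width model: n_cores//k servers, each of rate speedup(k)*mu."""
    c = n_cores // k
    rate = speedup(k) * mu
    a = lam / rate
    if a >= c:                        # offered load exceeds capacity: unstable
        return float("inf")
    wq = erlang_c(c, a) / (c * rate - lam)
    return wq + 1.0 / rate            # waiting time + service time

speedup = lambda k: k ** 0.5          # hypothetical sublinear speedup curve
lam, mu, n = 3.0, 1.0, 16
best = min((1, 2, 4, 8, 16), key=lambda k: mean_response(lam, mu, speedup, k, n))
print(best, mean_response(lam, mu, speedup, best, n))
```

Even in this crude model the tradeoff from the abstract is visible: small k wastes no efficiency but leaves jobs slow, large k speeds individual jobs at the cost of fewer servers, and an interior k minimizes mean response time.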
In the case where jobs may follow different speedup curves, finding a good
scheduling policy is even more challenging. We find that policies like EQUI
which performed well in the case of a single speedup function now perform
poorly. We propose a very simple policy, GREEDY*, which performs near-optimally
when compared to the numerically-derived optimal policy.
Performance analysis of downlink shared channels in a UMTS network
In light of the expected growth in wireless data communications and the commonly anticipated up/downlink asymmetry, we present a performance analysis of downlink data transfer over Downlink Shared Channels (DSCHs), arguably the most efficient UMTS transport channel for medium-to-large data transfers. Our objective is to provide qualitative insight into the different aspects that influence the data Quality of Service (QoS). Most principally, the data traffic load affects the data QoS in two distinct manners: (i) a heavier data traffic load implies greater competition for DSCH resources and thus longer transfer delays; and (ii) since each data call served on a DSCH must maintain an Associated Dedicated Channel (A-DCH) for signalling purposes, a heavier data traffic load implies a higher interference level, a higher frame error rate and thus a lower effective aggregate DSCH throughput: the greater the demand for service, the smaller the aggregate service capacity. The latter effect is further amplified in a multicellular scenario, where a DSCH experiences additional interference from the DSCHs and A-DCHs in surrounding cells, causing a further degradation of its effective throughput. Following an insightful two-stage performance evaluation approach, which segregates the interference aspects from the traffic dynamics, a set of numerical experiments is executed in order to demonstrate these effects and obtain qualitative insight into the impact of various system aspects on the data QoS.
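The feedback described above, where a heavier load raises interference and shrinks the effective aggregate throughput, can be caricatured with a toy fixed-point model (this is not the paper's two-stage evaluation; the nominal capacity c0, interference factor beta, and the processor-sharing delay formula are all illustrative assumptions):

```python
from math import sqrt

def effective_capacity(lam, c0, beta):
    """Solve the toy fixed point C = c0 * (1 - beta * lam / C): the aggregate
    DSCH throughput C shrinks as the carried load (hence interference) grows.
    Rearranged, C^2 - c0*C + beta*c0*lam = 0; take the stable (larger) root."""
    disc = c0 * c0 - 4 * beta * c0 * lam
    if disc < 0:
        raise ValueError("no stable operating point at this load")
    return (c0 + sqrt(disc)) / 2

def mean_transfer_delay(lam, c0, beta):
    """Processor-sharing style mean sojourn 1/(C - lam), with load-dependent C."""
    c = effective_capacity(lam, c0, beta)
    if c <= lam:
        raise ValueError("unstable")
    return 1.0 / (c - lam)

# Hypothetical numbers: nominal capacity 10, interference factor 0.3, load 4.
lam, c0, beta = 4.0, 10.0, 0.3
print(mean_transfer_delay(lam, c0, beta), 1.0 / (c0 - lam))
```

Comparing the two printed delays shows the abstract's point qualitatively: with interference-coupled capacity, delays grow faster in the load than a fixed-capacity model would predict.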
Integrated engineering environments for large complex products
An introduction is given to the Engineering Design Centre at the University of Newcastle upon Tyne, along with a brief explanation of the main focus towards large made-to-order products. Three key areas of research at the Centre, which have evolved as a result of collaboration with industrial partners from various sectors of industry, are identified as (1) decision support and optimisation, (2) design for lifecycle, and (3) design integration and co-ordination. A summary of the unique features of large made-to-order products is then presented, which includes the need for integration and co-ordination technologies. Thus, an overview of the existing integration and co-ordination technologies is presented, followed by a brief explanation of research in these areas at the Engineering Design Centre. A more detailed description is then presented regarding the co-ordination aspect of research being conducted at the Engineering Design Centre, in collaboration with the CAD Centre at the University of Strathclyde. Concurrent Engineering is acknowledged as a strategy for improving the design process; however, design co-ordination is viewed as a principal requirement for its successful implementation. That is, design co-ordination is proposed as the key to a mechanism that is able to maximise and realise any potential opportunity for concurrency. Thus, an agent-oriented approach to co-ordination is presented, which incorporates various types of agents responsible for managing their respective activities. The co-ordinated approach, which is implemented within the Design Co-ordination System, includes features such as resource management and monitoring, dynamic scheduling, activity direction, task enactment, and information management.
An application of the Design Co-ordination System, in conjunction with a robust concept exploration tool, shows that the computational design analysis involved in evaluating many design concepts can be performed more efficiently through a co-ordinated approach.
Empirical Evaluation of the Parallel Distribution Sweeping Framework on Multicore Architectures
In this paper, we perform an empirical evaluation of the Parallel External
Memory (PEM) model in the context of geometric problems. In particular, we
implement the parallel distribution sweeping framework of Ajwani, Sitchinava
and Zeh to solve the batched 1-dimensional stabbing-max problem. While modern
processors consist of sophisticated memory systems (multiple levels of caches,
set associativity, TLB, prefetching), we empirically show that algorithms
designed in simple models that focus on minimizing the I/O transfers between
shared memory and single level cache, can lead to efficient software on current
multicore architectures. Our implementation exhibits significantly fewer
accesses to slow DRAM and, therefore, outperforms traditional approaches based
on plane sweep and two-way divide and conquer.
Comment: Longer version of ESA'13 paper
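The problem being solved can be sketched as a sequential plane sweep (the distribution-sweeping framework of Ajwani, Sitchinava and Zeh partitions such a sweep across processors and cache levels; the sketch below is only the baseline sequential idea, with hypothetical input data): given intervals with priorities and a batch of query points, report for each query the maximum priority among intervals stabbing it.

```python
import heapq

def stabbing_max(intervals, queries):
    """Batched 1-D stabbing max via a sequential plane sweep.
    intervals: list of (lo, hi, priority); queries: list of points.
    Returns, per query, the max priority of any interval containing it."""
    intervals = sorted(intervals)                 # sweep by left endpoint
    order = sorted(range(len(queries)), key=lambda i: queries[i])
    ans = [None] * len(queries)
    heap, i = [], 0                               # max-heap via negated priority
    for qi in order:
        q = queries[qi]
        while i < len(intervals) and intervals[i][0] <= q:
            lo, hi, pr = intervals[i]
            heapq.heappush(heap, (-pr, hi))
            i += 1
        while heap and heap[0][1] < q:            # lazily drop expired intervals
            heapq.heappop(heap)
        ans[qi] = -heap[0][0] if heap else None
    return ans

print(stabbing_max([(0, 5, 1), (2, 3, 7), (4, 9, 2)], [1, 2.5, 6]))
```

Since queries are processed in sorted order, an interval popped as expired can never stab a later query, so the lazy deletion is safe; this is the kind of sweep whose I/O behavior the PEM model is designed to capture.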