PSBS: Practical Size-Based Scheduling
Size-based schedulers have very desirable performance properties: optimal or
near-optimal response time can be coupled with strong fairness guarantees.
Despite this, such schedulers are rarely implemented in practical settings,
because they require a priori knowledge of the amount of work needed to
complete each job, an assumption that is very difficult to satisfy in concrete
systems. It is far more realistic to supply the system with an estimate of job
sizes, but existing studies report pessimistic results when existing scheduling
policies are driven by imprecise size estimates. Our goal is to design
scheduling policies that explicitly cope with inexact job sizes: first, we show
that existing size-based schedulers can perform poorly with inexact job size
information when job sizes are heavily skewed; we then show that this issue,
and the pessimistic results reported in the literature, are due to problematic
behavior when large jobs are underestimated. Once the problem is identified, it is possible to amend
existing size-based schedulers to solve the issue. We generalize FSP -- a fair
and efficient size-based scheduling policy -- in order to solve the problem
highlighted above; in addition, our solution deals with different job weights
(that can be assigned to a job independently from its size). We provide an
efficient implementation of the resulting protocol, which we call Practical
Size-Based Scheduler (PSBS). Through simulations on synthetic and real
workloads, we show that PSBS achieves near-optimal performance in a large
variety of cases with inaccurate size information, that it performs fairly, and
that it handles job weights correctly. We believe this work shows that PSBS is
indeed practical, and we maintain that it could inspire the design of
schedulers in a wide array of real-world use cases.
Comment: arXiv admin note: substantial text overlap with arXiv:1403.599
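The weighted generalization can be sketched with a fair-queueing-flavored ordering rule. This is an illustrative toy, not the PSBS algorithm itself; the job names, sizes, and weights below are invented:

```python
# Illustrative only (NOT the PSBS algorithm): order a batch of jobs by the
# virtual finish tag estimated_size / weight, so that higher-weight jobs are
# served proportionally earlier, independently of their size.

def service_order(jobs):
    """jobs: list of (name, estimated_size, weight), all present at time 0.
    Returns names sorted by tag est/weight (ties broken by name)."""
    return [name for _, name in sorted((est / w, name) for name, est, w in jobs)]

order = service_order([("a", 10, 1), ("b", 10, 5), ("c", 2, 1)])
print(order)  # ['b', 'c', 'a'] -- b's weight 5 shrinks its tag from 10 to 2
```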
OS-Assisted Task Preemption for Hadoop
This work introduces a new task preemption primitive for Hadoop that allows
tasks to be suspended and resumed by exploiting the memory management
mechanisms readily available in modern operating systems. Our technique fills
the gap between the two extreme cases of killing tasks (which wastes work) and
waiting for their completion (which introduces latency): experimental results
indicate superior performance and very small overheads compared to existing
alternatives.
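A rough sketch of the underlying OS mechanism (standard POSIX job-control signals, with the kernel free to page out a stopped process's memory) follows. It assumes a POSIX system with a `sleep` binary as a stand-in task; it is not the Hadoop primitive itself:

```python
# Toy stand-in for suspend/resume preemption (not the Hadoop primitive):
# SIGSTOP freezes a task without discarding its work, SIGCONT resumes it, and
# the OS may page out the stopped process's memory in the meantime.
import os
import signal
import subprocess
import time

task = subprocess.Popen(["sleep", "5"])   # stand-in for a running task
os.kill(task.pid, signal.SIGSTOP)         # suspend: no work is thrown away
time.sleep(0.1)
suspended_alive = task.poll() is None     # process still exists, just stopped
os.kill(task.pid, signal.SIGCONT)         # resume exactly where it left off
task.terminate()                          # clean up the example task
task.wait()
print(suspended_alive)  # True
```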
Revisiting Size-Based Scheduling with Estimated Job Sizes
We study size-based schedulers, and focus on the impact of inaccurate job
size information on response time and fairness. Our intent is to revisit
previous results, which report performance degradation even for small errors
in job size estimates, thus limiting the applicability of size-based
schedulers.
We show that scheduling performance is tightly connected to workload
characteristics: in the absence of large skew in the job size distribution,
even extremely imprecise estimates suffice to outperform size-oblivious
disciplines. Instead, when job sizes are heavily skewed, known size-based
disciplines suffer.
In this context, we show -- for the first time -- the dichotomy of
over-estimation versus under-estimation. The former is, in general, less
problematic than the latter, as its effects are localized to individual jobs.
Instead, under-estimation leads to severe problems that may affect a large
number of jobs.
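This dichotomy can be reproduced with a toy single-server simulation driven by estimated sizes under a plain SRPT rule (illustrative code, not the paper's simulator): an under-estimated large job reaches an estimated remaining size of zero, keeps top priority, and delays every small job behind it.

```python
# Toy single-server SRPT simulation driven by *estimated* sizes (unit-time
# steps; illustrative only). Estimates are never corrected, so a job whose
# size was under-estimated reaches estimated remaining 0 and keeps the
# highest priority until it truly completes.

def mean_response(jobs):
    """jobs: list of (arrival, true_size, estimated_size) tuples."""
    remaining = {i: s for i, (_, s, _) in enumerate(jobs)}
    est_rem = {i: e for i, (_, _, e) in enumerate(jobs)}
    finish, t = {}, 0
    while remaining:
        active = [i for i in remaining if jobs[i][0] <= t]
        if not active:
            t += 1
            continue
        j = min(active, key=lambda i: est_rem[i])  # smallest estimated remaining
        remaining[j] -= 1
        est_rem[j] = max(0, est_rem[j] - 1)
        t += 1
        if remaining[j] == 0:
            finish[j] = t
            del remaining[j]
    return sum(finish[i] - jobs[i][0] for i in finish) / len(jobs)

# One large job followed by 50 small ones; only the large job's estimate varies.
small = [(1, 1, 1)] * 50
accurate = mean_response([(0, 100, 100)] + small)  # large job estimated exactly
under = mean_response([(0, 100, 1)] + small)       # large job under-estimated
print(accurate, under)  # under-estimation severely inflates mean response time
```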
We present an approach to mitigate these problems: our technique requires no
complex modifications to original scheduling policies and performs very well.
To support our claim, we proceed with a simulation-based evaluation that covers
an unprecedented large parameter space, which takes into account a variety of
synthetic and real workloads.
As a consequence, we show that size-based scheduling is practical and
outperforms alternatives in a wide array of use cases, even in the presence of
inaccurate size information.
Comment: To be published in the proceedings of IEEE MASCOTS 201
Measuring Password Strength: An Empirical Analysis
We present an in-depth analysis of the strength of almost 10,000 passwords
from users of an instant messaging server in Italy. We estimate the
strength of those passwords, and compare the effectiveness of state-of-the-art
attack methods such as dictionaries and Markov chain-based techniques.
We show that the strength of passwords chosen by users varies enormously, and
that the cost of attacks grows very quickly as the attacker aims for a higher
success percentage. In accordance with
existing studies we observe that, in the absence of measures for enforcing
password strength, weak passwords are common. On the other hand we discover
that there will always be a subset of users with extremely strong passwords
that are very unlikely to be broken.
The results of our study will help in evaluating the security of
password-based authentication schemes, and they provide important insights for
inspiring new and better proactive password checkers and password recovery
tools.
Comment: 15 pages, 9 figure
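The steep growth of attack cost with the target success rate can be illustrated with a toy Zipf-like password distribution. The distribution and numbers are invented for the example; they are not the paper's dataset or attack models:

```python
# Toy illustration of how attack cost grows with the target success rate:
# guesses proceed in decreasing-probability order over a Zipf-like password
# distribution (invented; not the paper's data).

def guesses_for_success(probs, target):
    """probs: password probabilities, sorted descending; target in (0, 1].
    Returns how many optimal-order guesses cover `target` of the accounts."""
    covered = 0.0
    for i, p in enumerate(probs, start=1):
        covered += p
        if covered >= target:
            return i
    return len(probs)

n = 100_000
weights = [1 / r for r in range(1, n + 1)]  # Zipf(1): rank-r password ~ 1/r
total = sum(weights)
probs = [w / total for w in weights]
print(guesses_for_success(probs, 0.5), guesses_for_success(probs, 0.9))
# covering 90% of accounts costs orders of magnitude more than covering 50%
```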
Adaptive Redundancy Management for Durable P2P Backup
We design and analyze the performance of a redundancy management mechanism
for Peer-to-Peer backup applications. Armed with the realization that a backup
system has peculiar requirements -- namely, data is read over the network only
during restore processes caused by data loss -- redundancy management targets
data durability rather than attempting to make each piece of information
available at all times.
In our approach each peer determines, in an on-line manner, an amount of
redundancy sufficient to counter the effects of peer deaths, while preserving
acceptable data restore times. Our experiments, based on trace-driven
simulations, indicate that our mechanism can reduce the redundancy by a factor
between two and three with respect to redundancy policies aiming for data
availability. These results imply a corresponding increase in storage capacity
and a decrease in the time needed to complete backups, at the expense of longer times required
to restore data. We believe this is a very reasonable price to pay, given the
nature of the application.
We complete our work with a discussion of practical issues, and their
solutions, concerning which encoding technique is best suited to support our
scheme.
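A back-of-the-envelope sketch of durability-driven redundancy sizing, under simplifying assumptions (independent peer deaths within one maintenance period, erasure coding where any k of n fragments suffice to restore; all parameters invented for illustration):

```python
# Illustrative durability sizing: smallest number n of erasure-coded fragments
# (any k suffice to restore) so that the data survives one maintenance period
# with probability >= target, assuming independent peer deaths.
from math import comb

def min_fragments(k, p_death, target):
    """Smallest n >= k with P[at least k of n fragments survive] >= target."""
    p_live = 1 - p_death
    n = k
    while True:
        p_survive = sum(comb(n, i) * p_live**i * p_death**(n - i)
                        for i in range(k, n + 1))
        if p_survive >= target:
            return n
        n += 1

n = min_fragments(k=8, p_death=0.2, target=0.999)
print(n, n / 8)  # fragments needed, and the resulting redundancy factor n/k
```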