Exact algorithms for a task assignment problem
We consider the following task assignment problem. Communicating tasks are to be assigned to heterogeneous processors interconnected with a heterogeneous network. The objective is to minimize the sum of the execution and communication costs. The problem is NP-hard. We present an exact algorithm based on the well-known A* search. We report simulation results over a wide range of parameters; the largest solved instance contains about three hundred tasks assigned to eight processors. © World Scientific Publishing Company
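The abstract does not describe the algorithm itself; as a rough illustration of how A* search applies to task assignment, the sketch below explores partial assignments with an admissible lower bound. The cost model, the names `exec_cost` and `comm_cost`, and the tiny instance are all illustrative assumptions, not the paper's formulation.

```python
import heapq

def a_star_assign(exec_cost, comm_cost):
    """Assign tasks to processors minimizing total execution plus
    communication cost, via A* over partial assignments.
    exec_cost[t][p]: cost of running task t on processor p.
    comm_cost[(t1, t2)]: cost paid when t1 and t2 land on different
    processors (a simplified communication model, assumed here)."""
    n, K = len(exec_cost), len(exec_cost[0])
    # Admissible heuristic: each unassigned task costs at least its
    # cheapest execution cost, ignoring communication entirely.
    h = lambda i: sum(min(exec_cost[t]) for t in range(i, n))
    # Heap entries: (f = g + h, g, partial assignment as a tuple).
    heap = [(h(0), 0, ())]
    while heap:
        f, g, assign = heapq.heappop(heap)
        i = len(assign)
        if i == n:                       # all tasks placed: optimal
            return g, assign
        for p in range(K):
            g2 = g + exec_cost[i][p]
            for j, q in enumerate(assign):
                if q != p:               # split pair pays comm cost
                    g2 += comm_cost.get((j, i), 0)
            heapq.heappush(heap, (g2 + h(i + 1), g2, assign + (p,)))

exec_cost = [[3, 5], [2, 4], [6, 1]]     # 3 tasks, 2 processors
comm_cost = {(0, 1): 4, (1, 2): 3}       # costs if pairs are split
best, assignment = a_star_assign(exec_cost, comm_cost)
```

Because the heuristic never overestimates, the first complete assignment popped from the heap is optimal; here tasks 0 and 1 share a processor to avoid their communication cost.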
Independent task assignment for heterogeneous systems
Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Ph.D.) -- Bilkent University, 2013. Includes bibliographical references (leaves 136-150).
We study the problem of assigning nonuniform tasks onto heterogeneous systems.
We investigate two distinct problems in this context. The first problem is the
one-dimensional partitioning of nonuniform workload arrays with optimal load
balancing. The second problem is the assignment of nonuniform independent
tasks onto heterogeneous systems.
For one-dimensional partitioning of nonuniform workload arrays, we investigate
two cases: chain-on-chain partitioning (CCP), where the order of the processors
is specified, and chain partitioning (CP), where processor permutation
is allowed. We present polynomial time algorithms to solve the CCP problem
optimally, while we prove that the CP problem is NP-complete. Our empirical
studies show that our proposed exact algorithms for the CCP problem produce
substantially better results than the state-of-the-art heuristics while the solution
times remain comparable.
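As an illustration of the CCP setting only (not the thesis's exact algorithms), the probe-plus-binary-search sketch below finds the optimal bottleneck load for a fixed processor order; the workload array and processor speeds are made-up values.

```python
def probe(w, speeds, B):
    """Can workload chain w be split into len(speeds) consecutive
    chunks, in processor order, so that chunk_sum / speed <= B for
    every processor? Greedy maximal packing decides feasibility."""
    i = 0
    for s in speeds:
        cap = B * s                      # weight this processor can absorb
        total = 0.0
        while i < len(w) and total + w[i] <= cap:
            total += w[i]
            i += 1
    return i == len(w)

def ccp_bottleneck(w, speeds, eps=1e-9):
    """Binary-search the optimal bottleneck for chain-on-chain
    partitioning with a fixed processor order (a probe-based sketch)."""
    lo, hi = 0.0, sum(w) / min(speeds)   # hi is trivially feasible
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if probe(w, speeds, mid):
            hi = mid
        else:
            lo = mid
    return hi

w = [4, 1, 3, 2, 5, 2]                   # nonuniform workload chain
speeds = [1.0, 2.0, 1.0]                 # heterogeneous, order fixed
B = ccp_bottleneck(w, speeds)
```

For this instance the optimum is 5 (chunks [4,1], [3,2,5], [2] with loads 5, 10/2, 2). The exact polynomial-time algorithms in the thesis replace the numeric binary search with more refined parametric search.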
For the independent task assignment problem, we investigate improving the
performance of the well-known and widely used constructive heuristics MinMin,
MaxMin, and Sufferage. All three heuristics are known to run in O(KN^2) time in
assigning N tasks to K processors. In this thesis, we present our work on an algorithmic
improvement that asymptotically decreases the running time complexity
of MinMin to O(KN log N) without affecting its solution quality. Furthermore,
we combine the newly proposed MinMin algorithm with MaxMin as well as Sufferage,
obtaining two hybrid algorithms. The motivation behind the former hybrid
algorithm is to address the drawback of MaxMin in solving problem instances
with highly skewed cost distributions while also improving the running time performance
of MaxMin. The latter hybrid algorithm improves the running time
performance of Sufferage without degrading its solution quality. The proposed
algorithms are easy to implement and we illustrate them through detailed pseudocodes.
The experimental results over a large number of real-life datasets show
that the proposed fast MinMin algorithm and the proposed hybrid algorithms
perform significantly better than their traditional counterparts as well as more
recent state-of-the-art assignment heuristics. For the large datasets used in the
experiments, MinMin, MaxMin, and Sufferage, as well as recent state-of-the-art
heuristics, require days, weeks, or even months to produce a solution, whereas all
of the proposed algorithms produce solutions within only two or three minutes.
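For context, the classic O(KN^2) MinMin baseline that the thesis accelerates can be sketched as follows. This is the textbook heuristic, not the proposed O(KN log N) variant, and the cost matrix is illustrative.

```python
def min_min(cost):
    """Classic MinMin heuristic. cost[t][p] is the expected execution
    time of task t on processor p; returns one processor index per
    task and the resulting makespan."""
    N, K = len(cost), len(cost[0])
    ready = [0.0] * K                    # current finish time per processor
    unassigned = set(range(N))
    assign = [None] * N
    while unassigned:
        # For each task find its earliest possible completion time,
        # then commit the task whose earliest completion is smallest.
        best_t, best_p, best_c = None, None, float('inf')
        for t in unassigned:
            for p in range(K):
                c = ready[p] + cost[t][p]
                if c < best_c:
                    best_t, best_p, best_c = t, p, c
        assign[best_t] = best_p
        ready[best_p] = best_c
        unassigned.remove(best_t)
    return assign, max(ready)

cost = [[4, 7], [3, 6], [5, 2]]          # 3 tasks, 2 processors
assign, makespan = min_min(cost)
```

The inner double scan over all unassigned tasks and all processors is exactly what makes each of the N iterations cost O(KN); the thesis's contribution is removing that rescan without changing the chosen assignments.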
For the independent task assignment problem, we also investigate adopting
the multi-level framework which was successfully utilized in several applications
including graph and hypergraph partitioning. For the coarsening phase of the
multi-level framework, we present an efficient matching algorithm which runs in
O(KN) time in most cases. For the uncoarsening phase, we present two refinement
algorithms: an efficient O(KN)-time move-based refinement and an efficient
O(K^2 N log N)-time swap-based refinement. Our results indicate that the multi-level
approach improves the quality of task assignments, while also improving the running
time performance, especially for large datasets.
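The uncoarsening idea can be illustrated with a minimal move-based refinement pass: repeatedly migrate a single task to another processor whenever that strictly lowers the makespan. This is a sketch of the general technique under a simple load-balancing objective, not the thesis's O(KN)-time algorithm.

```python
def refine(cost, assign):
    """Move-based refinement sketch: while any single-task move
    strictly reduces the larger of the two affected loads, apply it.
    cost[t][p] is task t's cost on processor p."""
    K = len(cost[0])
    load = [0.0] * K
    for t, p in enumerate(assign):
        load[p] += cost[t][p]
    moved = True
    while moved:
        moved = False
        for t in range(len(assign)):
            p = assign[t]
            for q in range(K):
                if q == p:
                    continue
                # Loads of p and q if task t migrated from p to q.
                lp, lq = load[p] - cost[t][p], load[q] + cost[t][q]
                if max(lp, lq) < max(load[p], load[q]):
                    load[p], load[q], assign[t] = lp, lq, q
                    p = q
                    moved = True
    return assign, max(load)

cost = [[2, 2], [2, 2], [2, 2], [2, 2]]  # 4 equal tasks, 2 processors
assign, makespan = refine(cost, [0, 0, 0, 0])
```

Each accepted move strictly decreases the sorted load vector, so the loop terminates; starting from the all-on-one-processor assignment it rebalances to two tasks per processor.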
As a realistic distributed application of the independent task assignment problem,
we introduce the site-to-crawler assignment problem, where a large number
of geographically distributed web servers are crawled by a multi-site distributed
crawling system and the objective is to minimize the duration of the crawl. We
show that this problem can be modeled as an independent task assignment problem.
As a solution to the problem, we evaluate a large number of state-of-the-art
task assignment heuristics selected from the literature as well as the improved
versions and the newly developed multi-level task assignment algorithm. We
compare the performance of different approaches through simulations on very
large, real-life web datasets. Our results indicate that multi-site web crawling
efficiency can be considerably improved using the independent task assignment
approach, when compared to relatively easy-to-implement yet naive baselines.
Tabak, E. Kartal (Ph.D.)
Single-machine scheduling with stepwise tardiness costs and release times
We study a scheduling problem that belongs to the yard operations component of railroad planning problems, namely the hump sequencing problem. The scheduling problem is characterized as a single-machine problem with a stepwise tardiness cost objective. This is a new scheduling criterion which is also relevant in the context of traditional machine scheduling problems. We produce complexity results that characterize some cases of the problem as pseudo-polynomially solvable. For the difficult-to-solve cases of the problem, we develop mathematical programming formulations and propose heuristic algorithms. We test the formulations and heuristic algorithms on randomly generated single-machine scheduling problems and real-life datasets for the hump sequencing problem. Our experiments show promising results for both sets of problems.
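To make the stepwise tardiness criterion concrete, the sketch below evaluates a job sequence on one machine with release times. The particular step structure (a list of due-date/penalty pairs per job, with penalties accumulating as due dates are missed) is an illustrative assumption, not necessarily the paper's exact cost model.

```python
def stepwise_cost(seq, proc, release, steps):
    """Total stepwise tardiness of processing jobs in order `seq` on a
    single machine. steps[j] lists (due_date, penalty) pairs: job j
    pays the penalty of every step whose due date its completion time
    exceeds. proc[j] and release[j] are processing and release times."""
    t, total = 0.0, 0.0
    for j in seq:
        t = max(t, release[j]) + proc[j]   # wait for release, then run
        total += sum(pen for due, pen in steps[j] if t > due)
    return total

proc    = [3, 2]
release = [0, 1]
steps   = [[(4, 10)], [(3, 5), (6, 5)]]    # job 1 has two cost steps
cost_a = stepwise_cost([0, 1], proc, release, steps)  # job 0 first
cost_b = stepwise_cost([1, 0], proc, release, steps)  # job 1 first
```

Unlike linear tardiness, the objective only changes when a completion time crosses a step boundary, which is what makes some cases pseudo-polynomially solvable and others hard.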
Optimal Inference in Crowdsourced Classification via Belief Propagation
Crowdsourcing systems are popular for solving large-scale labelling tasks
with low-paid workers. We study the problem of recovering the true labels from
the possibly erroneous crowdsourced labels under the popular Dawid-Skene model.
To address this inference problem, several algorithms have recently been
proposed, but the best known guarantee is still significantly larger than the
fundamental limit. We close this gap by introducing a tighter lower bound on
the fundamental limit and proving that Belief Propagation (BP) exactly matches
this lower bound. The guaranteed optimality of BP is the strongest in the sense
that it is information-theoretically impossible for any other algorithm to
correctly label a larger fraction of the tasks. Experimental results suggest
that BP is close to optimal for all regimes considered and improves upon
competing state-of-the-art algorithms.Comment: This article is partially based on preliminary results published in
the proceeding of the 33rd International Conference on Machine Learning (ICML
2016
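The paper's Belief Propagation algorithm is involved; as a hedged illustration of inference under the Dawid-Skene model, the sketch below runs a simple EM-style baseline on binary (+1/-1) labels, alternating between estimating worker reliabilities and task labels. This is explicitly not the paper's BP algorithm, and the answer matrix is invented.

```python
import math

def em_labels(answers, iters=10):
    """EM-style baseline for binary Dawid-Skene inference.
    answers[(task, worker)] = observed label in {+1, -1}.
    Returns the estimated true label per task."""
    tasks = {t for t, _ in answers}
    workers = {w for _, w in answers}
    rel = {w: 0.7 for w in workers}      # initial reliability guess
    label = {}
    for _ in range(iters):
        # E-step: each task's label from reliability-weighted votes
        # (log-odds weighting, as in weighted majority voting).
        for t in tasks:
            s = 0.0
            for (tt, w), a in answers.items():
                if tt == t:
                    p = min(max(rel[w], 1e-6), 1 - 1e-6)
                    s += a * math.log(p / (1 - p))
            label[t] = 1 if s >= 0 else -1
        # M-step: reliability = smoothed fraction of answers that
        # agree with the current label estimates.
        for w in workers:
            hits = [1 if a == label[t] else 0
                    for (t, ww), a in answers.items() if ww == w]
            rel[w] = (sum(hits) + 1) / (len(hits) + 2)
    return label

answers = {(0, 'a'): 1, (0, 'b'): 1, (0, 'c'): -1,
           (1, 'a'): -1, (1, 'b'): -1, (1, 'c'): -1,
           (2, 'a'): 1, (2, 'c'): 1}
labels = em_labels(answers)
```

The paper's point is that such heuristics lack tight guarantees, whereas BP provably attains the information-theoretic limit.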
Hierarchies of Inefficient Kernelizability
The framework of Bodlaender et al. (ICALP 2008) and Fortnow and Santhanam
(STOC 2008) allows us to exclude the existence of polynomial kernels for a
range of problems under reasonable complexity-theoretical assumptions. However,
there are also some issues that are not addressed by this framework, including
the existence of Turing kernels such as the "kernelization" of Leaf Out
Branching(k) into a disjunction over n instances of size poly(k). Observing
that Turing kernels are preserved by polynomial parametric transformations, we
define a kernelization hardness hierarchy, akin to the M- and W-hierarchy of
ordinary parameterized complexity, by the PPT-closure of problems that seem
likely to be fundamentally hard for efficient Turing kernelization. We find
that several previously considered problems are complete for our fundamental
hardness class, including Min Ones d-SAT(k), Binary NDTM Halting(k), Connected
Vertex Cover(k), and Clique(k log n), the clique problem parameterized by k log n.
MC-Fluid: Fluid Model-Based Mixed-Criticality Scheduling on Multiprocessors
A mixed-criticality system consists of multiple components with different criticalities. While mixed-criticality scheduling has been extensively studied for the uniprocessor case, the problem of efficient scheduling for the multiprocessor case has largely remained open. We design a fluid model-based multiprocessor mixed-criticality scheduling algorithm, called MC-Fluid, in which each task is executed in proportion to its criticality-dependent rate. We propose an exact schedulability condition for MC-Fluid and an optimal assignment algorithm for criticality-dependent execution rates with polynomial-time complexity. Since MC-Fluid cannot be implemented directly on real hardware platforms, we propose another scheduling algorithm, called MC-DP-Fair, which can be implemented while preserving the same schedulability properties as MC-Fluid. We show that MC-Fluid has a speedup factor of (1 + √5)/2 (≈ 1.618), the best known in multiprocessor MC scheduling, and simulation results show that MC-DP-Fair outperforms all existing algorithms.
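A minimal sketch of the fluid-scheduling feasibility idea underlying this line of work: a set of constant per-task execution rates fits on m processors iff no single rate exceeds 1 and the rates sum to at most m, checked per criticality mode. This is a simplification; MC-Fluid's actual criticality-dependent rate conditions and the MC-DP-Fair construction are considerably more involved, and the rate values here are invented.

```python
def fluid_feasible(rates, m):
    """Basic fluid-model feasibility on m identical processors: every
    task's rate must fit on one processor (rate <= 1) and the total
    demand must not exceed the platform capacity (sum <= m)."""
    return max(rates) <= 1.0 and sum(rates) <= m

# Hypothetical task set: each task gets a LO-mode rate, and in a
# criticality switch the HI-criticality tasks are sped up.
lo_rates = [0.5, 0.4, 0.6, 0.3]
hi_rates = [0.9, 0.4, 0.8, 0.3]

# Schedulable in this simplified model if both modes fit their budget.
ok = fluid_feasible(lo_rates, 2) and fluid_feasible(hi_rates, 3)
```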