Single machine scheduling with controllable processing times by submodular optimization
In scheduling with controllable processing times the actual processing time of each job is to be chosen from the interval between the smallest (compressed or fully crashed) value and the largest (decompressed or uncrashed) value. In the problems under consideration, the jobs are processed on a single machine and the quality of a schedule is measured by two functions: the maximum cost (which depends on job completion times) and the total compression cost. Our main model is bicriteria and is related to determining an optimal trade-off between these two objectives. Additionally, we consider a pair of associated single criterion problems, in which one of the objective functions is bounded while the other one is to be minimized. We reduce the bicriteria problem to a series of parametric linear programs defined over the intersection of a submodular polyhedron with a box. We demonstrate that the feasible region is represented by a so-called base polyhedron and the corresponding problem can be solved by a greedy algorithm that runs two orders of magnitude faster than previously known algorithms. For each of the associated single criterion problems, we develop algorithms that deliver the optimum faster than it can be deduced from a solution to the bicriteria problem.
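The greedy optimization over a base polyhedron that the abstract alludes to can be illustrated with Edmonds' classical rule: sort the elements by objective weight and assign each the marginal value of the submodular function. The sketch below is illustrative only (it is not the paper's specialized algorithm); the element names and the example function `phi` are invented:

```python
# Illustrative sketch: Edmonds' greedy rule for optimizing a linear objective
# over the base polyhedron B(phi) of a submodular set function phi.
# The example instance is invented.

def greedy_base_polyhedron(elements, weights, phi):
    """Return the vertex x of the base polyhedron of phi that maximizes
    sum(weights[e] * x[e]); phi takes a frozenset and returns a number."""
    order = sorted(elements, key=lambda e: weights[e], reverse=True)
    x, prefix = {}, frozenset()
    prev = phi(prefix)
    for e in order:
        prefix = prefix | {e}
        cur = phi(prefix)
        x[e] = cur - prev   # marginal gain of e defines the coordinate
        prev = cur
    return x

# Example: phi(S) = 5 * min(len(S), 2) is submodular.
x = greedy_base_polyhedron(
    ["a", "b", "c"], {"a": 3, "b": 2, "c": 1},
    lambda S: 5 * min(len(S), 2),
)
```

By construction the coordinates sum to phi of the ground set, which is the defining equality of the base polyhedron.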
On the Complexity of Local Search for Weighted Standard Set Problems
In this paper, we study the complexity of computing locally optimal solutions
for weighted versions of standard set problems such as SetCover, SetPacking,
and many more. For our investigation, we use the framework of PLS, as defined
in Johnson et al., [JPY88]. We show that for most of these problems, computing
a locally optimal solution is already PLS-complete for a simple neighborhood of
size one. For the local search versions of weighted SetPacking and SetCover, we
derive tight bounds for a simple neighborhood of size two. To the best of our
knowledge, these are among the very few PLS results on local search for
weighted standard set problems.
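As an illustration of the kind of local search the paper analyzes, here is a minimal sketch for weighted SetPacking with a size-one neighborhood (flip a single set in or out of the solution); the instance below is invented:

```python
# Minimal local-search sketch for weighted SetPacking with a size-one
# neighborhood: a solution is locally optimal when no single set can be
# added or dropped to increase the total weight. Instance is invented.

def local_search_set_packing(sets, weights):
    """sets: list of frozensets; weights: parallel list of set weights.
    Returns the indices of a locally optimal packing."""
    chosen = set()

    def feasible(sol):
        picked = [sets[i] for i in sol]
        return all(a.isdisjoint(b) for i, a in enumerate(picked)
                   for b in picked[i + 1:])

    improved = True
    while improved:
        improved = False
        for i in range(len(sets)):
            neighbor = chosen ^ {i}   # flip one set in or out
            if feasible(neighbor) and \
               sum(weights[j] for j in neighbor) > sum(weights[j] for j in chosen):
                chosen, improved = neighbor, True
                break
    return chosen

sol = local_search_set_packing(
    [frozenset({1, 2}), frozenset({2, 3}), frozenset({4})],
    [4, 3, 2],
)
```

Even this trivially simple neighborhood already suffices for the PLS-completeness results the paper establishes.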
Efficient Implementation of a Synchronous Parallel Push-Relabel Algorithm
Motivated by the observation that FIFO-based push-relabel algorithms are able
to outperform highest label-based variants on modern, large maximum flow
problem instances, we introduce an efficient implementation of the algorithm
that uses coarse-grained parallelism to avoid the problems of existing parallel
approaches. We demonstrate good relative and absolute speedups of our algorithm
on a set of large graph instances taken from real-world applications. On a
modern 40-core machine, our parallel implementation outperforms existing
sequential implementations by up to a factor of 12 and other parallel
implementations by factors of up to 3.
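The sequential FIFO push-relabel baseline that such implementations build on can be sketched compactly. The following is a textbook-style, single-threaded version (not the paper's parallel code), with an invented example graph:

```python
from collections import deque

# Sequential FIFO push-relabel sketch: active vertices are discharged in
# FIFO order; graph is a dict of dicts of capacities. Example is invented.

def max_flow_fifo_push_relabel(graph, s, t):
    cap = {u: dict(graph[u]) for u in graph}
    for u in graph:                       # ensure reverse residual arcs exist
        for v in graph[u]:
            cap.setdefault(v, {}).setdefault(u, 0)
    height = {u: 0 for u in cap}
    excess = {u: 0 for u in cap}
    height[s] = len(cap)
    queue = deque()
    for v, c in list(cap[s].items()):     # saturate all source arcs
        cap[s][v] -= c
        cap[v][s] += c
        excess[v] += c
        if v not in (s, t) and c > 0:
            queue.append(v)
    while queue:
        u = queue.popleft()
        while excess[u] > 0:              # discharge u completely
            pushed = False
            for v, c in cap[u].items():
                if c > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], c)
                    cap[u][v] -= d
                    cap[v][u] += d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and excess[v] == d:
                        queue.append(v)   # v just became active
                    pushed = True
                    if excess[u] == 0:
                        break
            if excess[u] == 0:
                break
            if not pushed:                # relabel: lift u above lowest neighbor
                height[u] = 1 + min(height[v]
                                    for v, c in cap[u].items() if c > 0)
    return excess[t]

flow = max_flow_fifo_push_relabel(
    {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}},
    's', 't')
```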
Non-Preemptive Scheduling on Machines with Setup Times
Consider the problem in which n jobs that are classified into k types are to
be scheduled on m identical machines without preemption. A machine requires a
proper setup taking s time units before processing jobs of a given type. The
objective is to minimize the makespan of the resulting schedule. We design and
analyze an approximation algorithm that runs in time polynomial in n, m and k
and computes a solution with an approximation factor that can be made
arbitrarily close to 3/2. A conference version of this paper has been accepted for publication
in the proceedings of the 14th Algorithms and Data Structures Symposium
(WADS).
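A natural baseline for this setting (not the paper's 3/2-approximation) is to treat each job type as a single batch of setup plus processing time and assign batches, largest first, to the least-loaded machine; the instance below is invented:

```python
import heapq

# Simple baseline (not the paper's algorithm): one batch per job type of
# size s + total processing time, assigned greedily (LPT) to the least
# loaded of m machines. Instance values are invented.

def greedy_batch_schedule(jobs_by_type, s, m):
    """jobs_by_type: {type: [processing times]}; returns the makespan."""
    batches = sorted((s + sum(p) for p in jobs_by_type.values()), reverse=True)
    loads = [0] * m
    heapq.heapify(loads)
    for b in batches:
        heapq.heappush(loads, heapq.heappop(loads) + b)
    return max(loads)

makespan = greedy_batch_schedule({"A": [4, 2], "B": [3], "C": [5]}, s=1, m=2)
```

Keeping each type on one machine pays every setup exactly once, but it can be far from optimal when a single type dominates the load, which is why more careful batch splitting is needed for a provable guarantee.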
Scheduling data flow program in xkaapi: A new affinity based Algorithm for Heterogeneous Architectures
Efficient implementations of parallel applications on heterogeneous hybrid
architectures require a careful balance between computations and communications
with accelerator devices. Even if most of the communication time can be
overlapped by computations, it is essential to reduce the total volume of
communicated data. The literature therefore abounds with ad-hoc methods to
reach that balance, but that are architecture and application dependent. We
propose here a generic mechanism to automatically optimize the scheduling
between CPUs and GPUs, and compare two strategies within this mechanism: the
classical Heterogeneous Earliest Finish Time (HEFT) algorithm and our new,
parametrized, Distributed Affinity Dual Approximation algorithm (DADA), which
consists in grouping the tasks by affinity before running a fast dual
approximation. We ran experiments on a heterogeneous parallel machine with six
CPU cores and eight NVIDIA Fermi GPUs. Three standard dense linear algebra
kernels from the PLASMA library have been ported on top of the Xkaapi runtime.
We report their performance. The results show that HEFT and DADA perform well under
various experimental conditions, but that DADA performs better for larger
systems and numbers of GPUs and, in most cases, generates much lower data
transfer than HEFT to achieve the same performance.
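HEFT itself can be sketched in a few lines: compute each task's upward rank, then assign tasks in decreasing rank order to the processor giving the earliest finish time. The version below is a simplified illustration that ignores communication costs (which full HEFT accounts for); the task graph and cost table are invented:

```python
# Simplified HEFT sketch: upward-rank prioritization + earliest-finish-time
# assignment, with communication costs omitted for brevity. Example invented.

def heft(tasks, succ, cost):
    """tasks: task ids; succ: {task: [successor tasks]};
    cost: {task: [time on each processor]}. Returns (makespan, assignment)."""
    rank = {}

    def upward_rank(t):                 # average cost + longest downstream path
        if t not in rank:
            avg = sum(cost[t]) / len(cost[t])
            rank[t] = avg + max((upward_rank(v) for v in succ.get(t, [])),
                                default=0)
        return rank[t]

    order = sorted(tasks, key=upward_rank, reverse=True)
    n_proc = len(next(iter(cost.values())))
    ready = [0.0] * n_proc              # when each processor becomes free
    finish, assign = {}, {}
    for t in order:
        est = max((finish[p] for p in tasks if t in succ.get(p, [])),
                  default=0.0)          # all predecessors must be done
        best = min(range(n_proc),
                   key=lambda q: max(ready[q], est) + cost[t][q])
        start = max(ready[best], est)
        finish[t] = start + cost[t][best]
        ready[best] = finish[t]
        assign[t] = best
    return max(finish.values()), assign

mk, assign = heft(["t1", "t2", "t3"], {"t1": ["t2", "t3"]},
                  {"t1": [2, 4], "t2": [3, 1], "t3": [2, 2]})
```

DADA's affinity grouping replaces this per-task greedy choice with a grouping phase followed by a dual approximation, which is what reduces data movement for larger systems.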
Evaluation of Labeling Strategies for Rotating Maps
We consider the following problem of labeling points in a dynamic map that
allows rotation. We are given a set of points in the plane labeled by a set of
mutually disjoint labels, where each label is an axis-aligned rectangle
attached with one corner to its respective point. We require that each label
remains horizontally aligned during the map rotation and our goal is to find a
set of mutually non-overlapping active labels for every rotation angle so that the number of active labels over a full map rotation of
2π is maximized. We discuss and experimentally evaluate several labeling
models that define additional consistency constraints on label activities in
order to reduce flickering effects during monotone map rotation. We introduce
three heuristic algorithms and compare them experimentally to an existing
approximation algorithm and exact solutions obtained from an integer linear
program. Our results show that on the one hand low flickering can be achieved
at the expense of only a small reduction in the objective value, and that on
the other hand the proposed heuristics achieve a high labeling quality
significantly faster than the other methods. This is an extended version of a SEA 2014 paper (16 pages).
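A basic building block in this setting is testing whether two labels, which stay axis-aligned while their anchor points rotate about the map center, overlap at a given angle. The helper below is an illustrative sketch with invented geometry, not code from the paper:

```python
import math

# Labels stay axis-aligned while their anchor points rotate about the map
# center; this checks whether two labels overlap at a given rotation angle.
# Anchors, sizes, and center below are invented.

def rotated_anchor(p, center, angle):
    """Rotate point p about center by angle (radians)."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (center[0] + c * dx - s * dy, center[1] + s * dx + c * dy)

def labels_overlap(a1, a2, w1, h1, w2, h2, center, angle):
    """Each label is a w x h rectangle with lower-left corner at its rotated
    anchor; returns True if the interiors intersect at this angle."""
    x1, y1 = rotated_anchor(a1, center, angle)
    x2, y2 = rotated_anchor(a2, center, angle)
    return (x1 < x2 + w2 and x2 < x1 + w1 and
            y1 < y2 + h2 and y2 < y1 + h1)
```

Sampling this predicate over the rotation range yields, for each label pair, the angular intervals of conflict on which the activity models in the paper are defined.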
Healthcare Game Design: Behavioral Modeling of Serious Gaming Design for Children with Chronic Diseases
This article introduces the design principles of serious games for chronic patients based on behavioral models. First, key features of the targeted chronic condition (Diabetes) are explained. Then, the role of psychological behavioral models in the management of chronic conditions is covered. After a short review of the existing health-focused games, two recent health games that are developed based on behavioral models are overviewed in more detail. Furthermore, design principles and usability issues regarding the creation of these health games are discussed. Finally, the authors conclude that designing healthcare games based on behavioral models can increase the usability of the game in order to improve the effectiveness of the game's desired healthcare outcomes.
Establishment of a validation and benchmark database for the assessment of ship operation in adverse conditions
The Energy Efficiency Design Index (EEDI), introduced by the IMO [1], is applicable to various types of new-built ships since January 2013. Despite the release of an interim guideline [2], concerns regarding the sufficiency of propulsion power and steering devices to maintain manoeuvrability of ships in adverse conditions were raised. This was the motivation for the EU research project SHOPERA (Energy Efficient Safe SHip OPERAtion, 2013-2016 [3-6]). The aim of the project is the development of suitable methods, tools and guidelines to effectively address these concerns and to enable safe and green shipping. Within the framework of SHOPERA, a comprehensive test program consisting of more than 1,300 different model tests for three ship hulls of different geometry and hydrodynamic characteristics has been conducted by four of the leading European maritime experimental research institutes: MARINTEK, CEHIPAR, Flanders Hydraulics Research and Technische Universität Berlin. The hull types encompass two public domain designs, namely the KVLCC2 tanker (KRISO VLCC, developed by KRISO) and the DTC container ship (Duisburg Test Case, developed by Universität Duisburg-Essen) as well as a RoPax ferry design, which is a proprietary hull design of a member of the SHOPERA consortium. The tests have been distributed among the four research institutes to benefit from the unique possibilities of each facility and to gain added value by establishing data sets for the same hull model and test type at different under keel clearances (ukc). This publication presents the scope of the SHOPERA model test program for the two public domain hull models, the KVLCC2 and the DTC. The main particulars and loading conditions for the two vessels as well as the experimental setup are provided to support the interpretation of the examples of experimental data that are discussed.
The focus lies on added resistance at moderate speed and drift force tests in high and steep regular head, following and oblique waves. These wave climates have been selected to check the applicability of numerical models in adverse wave conditions and to cover possible non-linear effects. The obtained test results with the KVLCC2 model in deep water at CEHIPAR are discussed and compared against the results obtained in shallow water at Flanders Hydraulics Research. The DTC model has been tested at MARINTEK in deep water and at Technische Universität Berlin and Flanders Hydraulics Research in intermediate/shallow water in different set-ups. Added resistance and drift force measurements from these facilities are discussed and compared. Examples of experimental data are also presented for manoeuvring in waves. At MARINTEK, turning circle and zig-zag tests have been performed with the DTC in regular waves. Parameters of variation are the initial heading, the wave period and height.
An EPTAS for Scheduling on Unrelated Machines of Few Different Types
In the classical problem of scheduling on unrelated parallel machines, a set
of jobs has to be assigned to a set of machines. The jobs have a processing
time depending on the machine and the goal is to minimize the makespan, that is
the maximum machine load. It is well known that this problem is NP-hard and
does not allow polynomial time approximation algorithms with approximation
guarantees smaller than 3/2 unless P = NP. We consider the case that there
are only a constant number K of machine types. Two machines have the same
type if all jobs have the same processing time for them. This variant of the
problem is strongly NP-hard already for K = 1. We present an efficient
polynomial time approximation scheme (EPTAS) for the problem, that is, for any
ε > 0 an assignment with makespan of length at most
(1 + ε) times the optimum can be found in time polynomial in the
input length, with an exponent independent of ε. In particular,
we achieve a running time of 2^{O(K log(K) (1/ε) log⁴(1/ε))} + poly(|I|), where |I|
denotes the input length. Furthermore, we study three other problem
variants and present an EPTAS for each of them: The Santa Claus problem, where
the minimum machine load has to be maximized; the case of scheduling on
unrelated parallel machines with a constant number of uniform types, where
machines of the same type behave like uniformly related machines; and the
multidimensional vector scheduling variant of the problem where both the
dimension and the number of machine types are constant. For the Santa Claus
problem we achieve the same running time. The results are achieved using mixed
integer linear programming and rounding techniques.
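To make the setting concrete, the following tiny brute-force solver computes the optimal makespan when each machine has one of K types and a job's processing time depends only on the type. It is purely illustrative (invented instance, exponential time) and is not the paper's EPTAS:

```python
from itertools import product

# Brute-force makespan minimization for scheduling on unrelated machines of
# few types: p[j][k] is the time of job j on a machine of type k.
# Instance is invented and kept tiny on purpose.

def opt_makespan(machines_per_type, p):
    """machines_per_type: list with the machine count of each type;
    p: p[j][k] = time of job j on type k. Returns the optimal makespan."""
    machines = [k for k, cnt in enumerate(machines_per_type)
                for _ in range(cnt)]
    best = float("inf")
    for assignment in product(range(len(machines)), repeat=len(p)):
        loads = [0] * len(machines)
        for j, i in enumerate(assignment):
            loads[i] += p[j][machines[i]]   # job time depends only on the type
        best = min(best, max(loads))
    return best

# Two machines of different types; job 2 is much faster on type 1.
opt = opt_makespan([1, 1], [[2, 3], [2, 3], [4, 1]])
```

The EPTAS exploits exactly this structure: because only K distinct processing-time columns exist, the exponential dependence can be confined to K and 1/ε rather than the number of machines.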