177 research outputs found

    Scheduling Parallel Jobs with Linear Speedup

    We consider a scheduling problem where a set of jobs is distributed over parallel machines. The processing time of any job depends on the usage of a scarce renewable resource, e.g., personnel. An amount of k units of that resource can be allocated to the jobs at any time, and the more of that resource is allocated to a job, the smaller its processing time. The dependence of processing times on the amount of resources is linear for any job. The objective is to find a resource allocation and a schedule that minimizes the makespan. Utilizing an integer quadratic programming relaxation, we show how to obtain a (3+ε)-approximation algorithm for that problem, for any ε > 0. This generalizes and improves previous results. Our approach relies on a fully polynomial time approximation scheme to solve the quadratic programming relaxation. This result is interesting in itself, because the underlying quadratic program is NP-hard to solve in general. We also briefly discuss variants of the problem and derive lower bounds.
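
    To make the model concrete, a hedged formalization of the linear speedup assumption is sketched below; the symbols \bar{p}_j, b_j and x_j (base processing time, speedup rate, and resource units allocated to job j) are illustrative notation, not taken from the paper itself.

        % Sketch of a linear resource-dependent processing time model (assumed notation).
        % Allocating x_j of the k resource units to job j shortens it linearly.
        p_j(x_j) = \bar{p}_j - b_j \, x_j, \qquad x_j \in \{0, 1, \dots, k\}, \quad \bar{p}_j - b_j k > 0,
        \text{and the goal is to choose the } x_j \text{ and a schedule minimizing } C_{\max} = \max_j C_j
        \text{ while at most } k \text{ resource units are in use at any point in time.}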

    An FPTAS for optimizing a class of low-rank functions over a polytope

    We present a fully polynomial time approximation scheme (FPTAS) for optimizing a very general class of non-linear functions of low rank over a polytope. Our approximation scheme relies on constructing an approximate Pareto-optimal front of the linear functions which constitute the given low-rank function. In contrast to existing results in the literature, our approximation scheme does not require the assumption of quasi-concavity on the objective function. For the special case of quasi-concave function minimization, we give an alternative FPTAS, which always returns a solution that is an extreme point of the polytope. Our technique can also be used to obtain an FPTAS for combinatorial optimization problems with non-linear objective functions, for example when the objective is a product of a fixed number of linear functions. We also show that it is not possible to approximate the minimum of a general concave function over the unit hypercube to within any factor, unless P = NP. We prove this by showing a similar hardness of approximation result for supermodular function minimization, a result that may be of independent interest.
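
    As a rough illustration of the Pareto-front idea in the rank-2 case, the sketch below enumerates an approximate Pareto set of two linear functions over a polytope {x : Ax ≤ b, x ≥ 0} with an ε-constraint grid and then scans it for the best value of the low-rank objective g. The function names, and the assumption that both linear functions are positive over a bounded polytope, are ours; this is not the algorithm of the paper, only a simplified variant of the same idea.

        import numpy as np
        from scipy.optimize import linprog

        def approx_pareto_candidates(c1, c2, A_ub, b_ub, eps=0.1):
            """Approximate Pareto front of (c1.x, c2.x) over {x : A_ub x <= b_ub, x >= 0}.

            Sweeps a geometric grid of budgets on c1.x and minimizes c2.x under each
            budget (epsilon-constraint method). Assumes the polytope is bounded and
            that c1.x and c2.x are positive over it.
            """
            c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
            lo = linprog(c1, A_ub=A_ub, b_ub=b_ub, method="highs").fun    # min of c1.x
            hi = -linprog(-c1, A_ub=A_ub, b_ub=b_ub, method="highs").fun  # max of c1.x
            candidates, budget = [], lo
            while budget <= hi * (1 + eps):
                # Minimize c2.x subject to the extra budget constraint c1.x <= budget.
                res = linprog(c2, A_ub=np.vstack([A_ub, c1]),
                              b_ub=np.append(b_ub, budget), method="highs")
                if res.success:
                    candidates.append(res.x)
                budget *= (1 + eps)   # geometric grid: a (1+eps)-cover in the c1 coordinate
            return candidates

        def minimize_low_rank(g, c1, c2, A_ub, b_ub, eps=0.1):
            """Approximately minimize f(x) = g(c1.x, c2.x) by scanning the candidates."""
            pts = approx_pareto_candidates(c1, c2, A_ub, b_ub, eps)
            return min(pts, key=lambda x: g(float(np.dot(c1, x)), float(np.dot(c2, x))))

    In the actual FPTAS the grid lives on the values of all the linear functions making up the low-rank objective and the error in g is controlled explicitly; the sketch only conveys the overall structure.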

    Analysis of FPTASes for the Multi-Objective Shortest Path Problem

    We propose a new FPTAS for the multi-objective shortest path problem. The algorithm uses elements from both an exact labeling algorithm and an FPTAS proposed by Tsaggouris and Zaroliagis (2009). We analyze the running times of these three algorithms both from a theoretical and a computational point of view. Theoretically, we show that there are instances for which the new FPTAS runs arbitrarily many times faster than the other two algorithms. Furthermore, for the bi-objective case, the number of approximate solutions generated by the proposed FPTAS is at most the number of Pareto-optimal solutions multiplied by the number of nodes. By performing a set of computational tests, we show that the new FPTAS performs best in terms of running time.
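
    A minimal sketch of the bucket-based label pruning behind such FPTASes, for the bi-objective case, is given below. The graph representation, the parameter c1_max (an assumed upper bound on the first cost of any path of interest), and the requirement of edge costs at least 1 are ours; the sketch conveys the rounding-and-dominance idea in the style of Tsaggouris and Zaroliagis rather than the exact algorithm analyzed in the paper.

        import math
        from collections import defaultdict, deque

        def biobjective_labels(graph, source, n_nodes, eps, c1_max):
            """Approximate Pareto labels (cost1, cost2) reachable from `source`.

            graph: dict node -> list of (neighbor, cost1, cost2), all costs >= 1.
            c1_max: assumed upper bound on cost1 of any path of interest.
            Labels at a node are bucketed by (1+delta)-geometric rounding of cost1,
            delta = eps / n_nodes, and only the smallest cost2 per bucket is kept.
            """
            delta = eps / n_nodes

            def bucket(c1):
                # Bucket 0 holds cost1 < 1 (only the source); geometric buckets after that.
                return 0 if c1 < 1 else int(math.log(c1, 1 + delta)) + 1

            n_buckets = bucket(c1_max) + 1
            best = defaultdict(lambda: [math.inf] * n_buckets)  # best[v][b] = min cost2 in bucket b
            best[source][0] = 0.0
            queue = deque([(source, 0.0, 0.0)])

            while queue:
                u, c1, c2 = queue.popleft()
                if c2 > best[u][bucket(c1)]:        # label superseded since it was queued
                    continue
                for v, w1, w2 in graph.get(u, []):
                    nc1, nc2 = c1 + w1, c2 + w2
                    b = bucket(nc1)
                    if b >= n_buckets:              # beyond the stated cost1 bound, ignore
                        continue
                    if nc2 < best[v][b]:            # dominance test within the bucket
                        best[v][b] = nc2
                        queue.append((v, nc1, nc2))
            return best

    Each node keeps at most one label per geometric bucket, which is what bounds the number of approximate solutions and the running time in schemes of this kind.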

    Nonlinear Integer Programming

    Research efforts of the past fifty years have led to the development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.
    Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G. Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50 Years of Integer Programming 1958--2008: The Early Years and State-of-the-Art Surveys, Springer-Verlag, 2009, ISBN 354068274

    Approximation algorithms and hardness of approximation for knapsack problems

    We show various hardness of approximation results for knapsack and related problems; in particular, we show that unless the Exponential-Time Hypothesis is false, subset-sum cannot be approximated any better than with an FPTAS. We also give a simple new algorithm for approximating knapsack and subset-sum that can be adapted to work in small space or in small parallel time. Finally, we prove that knapsack cannot be solved in Mulmuley's parallel PRAM model, even when the input is restricted to small bit-length.
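
    For context, the classical list-trimming FPTAS for subset-sum (the textbook algorithm, not the new one referred to in the abstract) looks roughly as follows.

        def approx_subset_sum(items, target, eps):
            """Classical trimming FPTAS for subset-sum (Ibarra-Kim style, as in CLRS).

            Returns a subset sum S <= target with S >= (1 - eps) * OPT, where OPT is
            the largest achievable subset sum not exceeding `target`.
            """
            sums = [0]
            delta = eps / (2 * len(items))   # per-item trimming tolerance
            for a in items:
                # Merge current sums with the sums shifted by the new item.
                merged = sorted(set(sums + [s + a for s in sums if s + a <= target]))
                # Trim: drop values within a (1 + delta) factor of the last kept value.
                trimmed, last = [], -1.0
                for s in merged:
                    if s > last * (1 + delta):
                        trimmed.append(s)
                        last = s
                sums = trimmed
            return max(sums)

    After trimming, each list has at most roughly (n/eps) * log(target) entries, which is what makes the scheme fully polynomial in n and 1/eps.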

    An FPTAS for the Δ-modular multidimensional knapsack problem

    It is known that there is no EPTAS for the m-dimensional knapsack problem unless W[1] = FPT. This is true already for the case m = 2. But an FPTAS can still exist for some other particular cases of the problem. In this note, we show that the m-dimensional knapsack problem with a Δ-modular constraint matrix admits an FPTAS whose complexity bound depends on Δ linearly. More precisely, the proposed algorithm's complexity is O(T_LP · (1/ε)^{m+3} · (2m)^{2m+6} · Δ), where T_LP is the linear programming complexity bound. In particular, for fixed m the arithmetical complexity bound becomes O(n · (1/ε)^{m+3} · Δ). Our algorithm is actually a generalisation of the classical FPTAS for the 1-dimensional case. Strictly speaking, the considered problem can be solved by an exact polynomial-time algorithm when m is fixed and Δ grows as a polynomial in n. This fact can be observed by combining previously known results. In this paper, we give a slightly more accurate analysis to present an exact algorithm with the complexity bound O(n · Δ^{m+1}) for fixed m. Note that the last bound is non-linear in Δ, in contrast to the given FPTAS.
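
    For reference, the classical 1-dimensional FPTAS that the abstract points to as the base case is the standard profit-scaling dynamic program sketched below; the function name and the assumption that every single item fits the capacity are ours, and this is not the Δ-modular algorithm of the paper.

        def knapsack_fptas(profits, weights, capacity, eps):
            """Classical 1-dimensional knapsack FPTAS via profit scaling.

            Assumes every single item fits the capacity (standard preprocessing).
            Returns a profit value that is at least (1 - eps) times the optimum.
            """
            n = len(profits)
            scale = eps * max(profits) / n
            scaled = [int(p / scale) for p in profits]   # profits rounded down after scaling

            # dp[q] = minimum weight needed to reach scaled profit exactly q
            max_q = sum(scaled)
            INF = float("inf")
            dp = [0.0] + [INF] * max_q
            for p, w in zip(scaled, weights):
                for q in range(max_q, p - 1, -1):        # backwards sweep: each item used once
                    if dp[q - p] + w < dp[q]:
                        dp[q] = dp[q - p] + w
            best_q = max(q for q in range(max_q + 1) if dp[q] <= capacity)
            return best_q * scale

    The table has O(n²/eps) profit entries, which is where the polynomial dependence on 1/eps comes from; the Δ-modular FPTAS described above generalises this profit-scaling idea to m linear constraints.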