
    Parameterized complexity of machine scheduling: 15 open problems

    Machine scheduling problems are a long-standing core domain of algorithms and complexity research. Fixed-parameter algorithms are a novel approach to these problems. To stimulate this thriving research direction, we propose 15 open problems in this area whose resolution we expect to lead to the discovery of new approaches and techniques in both scheduling and parameterized complexity theory.

    Comment: Version accepted to Computers & Operations Research
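
    For readers outside parameterized complexity, the standard notion underlying this line of work can be stated as follows; this is textbook material, not quoted from the abstract itself.

    ```latex
    % Standard definition: a parameterized problem with parameter k is
    % fixed-parameter tractable (FPT) if some algorithm solves every
    % instance I in time
    \[
      f(k) \cdot |I|^{O(1)},
    \]
    % where f is any computable function of k alone. In scheduling, natural
    % choices of k include the number of machines, the largest processing
    % time p_max, or the number of distinct job lengths.
    ```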

    FLICK: developing and running application-specific network services

    Data centre networks are increasingly programmable, with application-specific network services proliferating, from custom load-balancers to middleboxes providing caching and aggregation. Developers must currently implement these services using traditional low-level APIs, which neither support natural operations on application data nor provide efficient performance isolation. We describe FLICK, a framework for the programming and execution of application-specific network services on multi-core CPUs. Developers write network services in the FLICK language, which offers high-level processing constructs and application-relevant data types. FLICK programs are translated automatically to efficient, parallel task graphs, implemented in C++ on top of a user-space TCP stack. Task graphs have bounded resource usage at runtime, so the graphs of multiple services can execute concurrently, without interference, under cooperative scheduling. We evaluate FLICK with several services (an HTTP load-balancer, a Memcached router and a Hadoop data aggregator), showing that it achieves good performance while reducing development effort.
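
    The paper describes FLICK's actual compiler and runtime; purely as an illustration of the cooperative-scheduling idea, here is a minimal Python sketch (all names hypothetical, not FLICK's API) in which every task yields after a bounded unit of work, so a round-robin scheduler can interleave several services on one core:

    ```python
    # Minimal sketch of cooperative task scheduling (illustrative only; names
    # and structure are hypothetical, not FLICK's real API). Each task yields
    # after a bounded unit of work, so a round-robin scheduler can interleave
    # the task graphs of several services without any one monopolizing a core.
    from collections import deque

    def service_task(name, items):
        """A toy per-service task: handle one item, then yield control."""
        for item in items:
            print(f"{name}: handled {item}")   # bounded per-step work
            yield                              # cooperative yield point

    def run(tasks):
        """Round-robin cooperative scheduler."""
        ready = deque(tasks)
        while ready:
            t = ready.popleft()
            try:
                next(t)           # execute one bounded step of the task
                ready.append(t)   # re-queue so other services get their turn
            except StopIteration:
                pass              # this task's work is finished

    run([service_task("http-lb", ["req-1", "req-2"]),
         service_task("memcached-router", ["get k1", "set k2"])])
    ```

    The real system compiles FLICK programs to C++ task graphs over a user-space TCP stack; the sketch mirrors only the bounded-step, cooperative execution model that provides performance isolation without preemption.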

    Scheduling Kernels via Configuration LP

    Makespan minimization (on parallel identical or unrelated machines) is arguably the most natural and well-studied scheduling problem. A common approach in practical algorithm design is to reduce the size of a given instance by a fast preprocessing step while retaining the ability to recover key information even after this reduction. This notion is formally studied as kernelization (or simply, a kernel): a polynomial-time procedure which yields an equivalent instance whose size is bounded in terms of some given parameter. It follows from known results that makespan minimization parameterized by the longest job processing time p_max has a kernelization yielding a reduced instance whose size is exponential in p_max. Can this be reduced to polynomial in p_max? We answer this affirmatively not only for makespan minimization, but also for the (more complicated) objective of minimizing the weighted sum of completion times, and also in the setting of unrelated machines when the number of machine kinds is a parameter. Our algorithm first solves the Configuration LP and, based on its solution, constructs a solution of an intermediate problem, called huge N-fold integer programming. This solution is further reduced in size by a series of steps, until its encoding length is polynomial in the parameters. Then, we show that huge N-fold IP is in NP, which implies that there is a polynomial reduction back to our scheduling problem, yielding a kernel. Our technique is highly novel in the context of kernelization, and our structural theorem about the Configuration LP is of independent interest. Moreover, we show a polynomial kernel for huge N-fold IP, conditional on the so-called separation subproblem being solvable in polynomial time. Considering that integer programming does not admit polynomial kernels except in quite restricted cases, our "conditional kernel" provides new insight.
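
    For context, the Configuration LP that the abstract builds on has a standard form for makespan minimization on m identical machines; the notation below is the usual one and is an assumption, not quoted from the paper.

    ```latex
    % A common form of the Configuration LP for makespan minimization on m
    % identical machines with target makespan T. A configuration C is a
    % multiset of job sizes with total load at most T; C(p) is the number of
    % jobs of size p that C uses, and n_p is the number of jobs of size p in
    % the instance.
    \[
      \sum_{C} x_C \le m,
      \qquad
      \sum_{C} C(p)\, x_C \ge n_p \quad \text{for every job size } p,
      \qquad
      x_C \ge 0 .
    \]
    % A vertex solution has at most as many nonzero variables as there are
    % constraints (here, the number of distinct job sizes plus one), so its
    % support stays small even when the number of configurations is huge --
    % one reason the LP is a natural starting point for instance reduction.
    ```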

    An Algorithmic Theory of Integer Programming

    We study the general integer programming problem where the number of variables n is a variable part of the input. We consider two natural parameters of the constraint matrix A: its numeric measure a and its sparsity measure d. We show that integer programming can be solved in time g(a,d)·poly(n,L), where g is some computable function of the parameters a and d, and L is the binary encoding length of the input. In particular, integer programming is fixed-parameter tractable parameterized by a and d, and is solvable in polynomial time for every fixed a and d. Our results also extend to nonlinear separable convex objective functions. Moreover, for linear objectives we derive a strongly polynomial algorithm, that is, one with running time g(a,d)·poly(n), independent of the rest of the input data. We obtain these results by developing an algorithmic framework based on the idea of iterative augmentation: starting from an initial feasible solution, we show how to quickly find augmenting steps which rapidly converge to an optimum. A central notion in this framework is the Graver basis of the matrix A, which constitutes a set of fundamental augmenting steps. The iterative augmentation idea is then enhanced via other techniques, such as new and improved bounds on the Graver basis, rapid solution of integer programs with bounded variables, proximity theorems and a new proximity-scaling algorithm, the notion of a reduced objective function, and others. As a consequence of our work, we advance the state of the art of solving block-structured integer programs. In particular, we develop near-linear time algorithms for n-fold, tree-fold, and 2-stage stochastic integer programs. We also discuss some of the many applications of these classes.

    Comment: Revision 2: strengthened the dual treedepth lower bound; simplified the proximity-scaling algorithm
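
    As a toy illustration of the iterative-augmentation idea, here is a first-improvement sketch in Python (an illustration under simplifying assumptions, not the paper's algorithm; the `moves` argument stands in for Graver basis elements):

    ```python
    # Toy sketch of iterative augmentation (illustration only, not the
    # paper's algorithm). We maximize c.x subject to A x = b, l <= x <= u,
    # starting from a feasible integer point x; `moves` stands in for Graver
    # basis elements, i.e. nonzero integer vectors g with A @ g == 0.
    # Include both g and -g in `moves` to allow steps in either direction.
    import numpy as np

    def max_step(x, g, l, u):
        """Largest integer lam >= 0 with l <= x + lam*g <= u (g nonzero)."""
        lam = min(
            (u[i] - x[i]) // g[i] if g[i] > 0 else (x[i] - l[i]) // -g[i]
            for i in range(len(g)) if g[i] != 0
        )
        return max(int(lam), 0)

    def augment(x, moves, c, l, u):
        """Apply improving steps until no move increases the objective."""
        improved = True
        while improved:
            improved = False
            for g in moves:
                lam = max_step(x, g, l, u)
                if lam >= 1 and c @ g > 0:  # improves c.x by lam*(c.g)
                    x = x + lam * g         # A @ (x + lam*g) == b still holds
                    improved = True
        return x
    ```

    The paper's actual scheme is considerably more refined: it bounds the Graver basis, finds good augmenting steps quickly rather than by enumeration, and uses proximity and scaling arguments to keep the number of iterations small.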