    No Polynomial Kernels for Knapsack

    This paper focuses on kernelization algorithms for the fundamental Knapsack problem. A kernelization algorithm (or kernel) is a polynomial-time reduction from a problem onto itself, where the output size is bounded by a function of some problem-specific parameter. Such algorithms provide a theoretical model for data reduction and preprocessing and are central in the area of parameterized complexity. In this way, a kernel for Knapsack for some parameter k reduces any instance of Knapsack to an equivalent instance of size at most f(k) in polynomial time, for some computable function f(·). When f(k) = k^{O(1)}, we call such a reduction a polynomial kernel. Our study focuses on two natural parameters for Knapsack: the number of different item weights w_#, and the number of different item profits p_#. Our main technical contribution is a proof showing that Knapsack does not admit a polynomial kernel for either of these two parameters under standard complexity-theoretic assumptions. Our proof involves an elaborate application of the standard kernelization lower-bound framework, and develops along the way novel ideas that should be useful for other problems as well. We complement our lower bounds by showing that Knapsack admits a polynomial kernel for the combined parameter w_# + p_#.
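
    To give a concrete, if trivial, flavor of data reduction under the combined parameter: when an instance has only w_# distinct weights and p_# distinct profits, its items fall into at most w_# · p_# classes of identical items. The following Python sketch is our own illustration of that observation, not the kernel constructed in the paper; the instance below is hypothetical.

```python
from collections import Counter

def group_identical_items(items):
    """Toy preprocessing sketch (not the paper's kernel): collapse a
    Knapsack instance into classes of identical (weight, profit) items.
    With w_# distinct weights and p_# distinct profits, the number of
    classes is at most w_# * p_#, regardless of the number of items."""
    return Counter(items)  # maps (weight, profit) -> multiplicity

# Hypothetical instance: 6 items, only 2 distinct weights and 2 distinct profits.
items = [(3, 5), (3, 5), (3, 7), (4, 5), (4, 7), (4, 7)]
classes = group_identical_items(items)
assert len(classes) <= 2 * 2  # at most w_# * p_# classes
```

    Of course, a true kernel must also bound the multiplicities and the numeric values themselves; that is exactly where the paper's lower bounds show the single-parameter versions fail.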

    Scheduling Lower Bounds via AND Subset Sum

    Given N instances (X_1, t_1), …, (X_N, t_N) of Subset Sum, the AND Subset Sum problem asks to determine whether all of these instances are yes-instances; that is, whether each set of integers X_i has a subset that sums up to the target integer t_i. We prove that this problem cannot be solved in time Õ((N · t_max)^{1-ε}), for t_max = max_i t_i and any ε > 0, assuming the ∀∃ Strong Exponential Time Hypothesis (∀∃-SETH). We then use this result to exclude Õ(n + P_max · n^{1-ε})-time algorithms for several scheduling problems on n jobs with maximum processing time P_max, based on ∀∃-SETH. These include classical problems such as 1||∑ w_j U_j, the problem of minimizing the total weight of tardy jobs on a single machine, and P2||∑ U_j, the problem of minimizing the number of tardy jobs on two identical parallel machines.
    Comment: 14 pages, ICALP'2
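
    For contrast with the lower bound above, the straightforward pseudo-polynomial baseline simply solves each instance independently with the standard Subset Sum dynamic program. A minimal Python sketch (function names are our own, not from the paper):

```python
def subset_sum(xs, t):
    # Standard pseudo-polynomial DP over reachable sums, packed into
    # a Python big integer: bit s of `reachable` is set iff some
    # subset of xs sums to s. Runs in O(len(xs) * t) word operations.
    reachable = 1  # only the empty sum 0 is reachable initially
    for x in xs:
        reachable |= reachable << x
    return bool((reachable >> t) & 1)

def and_subset_sum(instances):
    # AND Subset Sum: yes iff every instance (X_i, t_i) is a yes-instance.
    return all(subset_sum(xs, t) for xs, t in instances)
```

    Solving all N instances this way costs roughly Õ(N · t_max) in the worst case; the paper's result says that, under ∀∃-SETH, no algorithm can beat this combined bound by a polynomial factor.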

    Hardness of Interval Scheduling on Unrelated Machines


    Treatment of Tricuspid Regurgitation With the FORMA Repair System

    Background: Tricuspid regurgitation (TR) is common and undertreated, as the risk of surgery is high in this patient population. Transcatheter devices offer treatment with a lower procedural risk. The FORMA Tricuspid Valve Therapy system (Edwards Lifesciences) is reviewed here.
    Device Description: The system combines a spacer placed in the regurgitant orifice and a rail, over which the spacer is delivered, that is anchored to the endocardial surface of the RV. The spacer provides a surface for leaflet coaptation.
    Outcomes: Eighteen compassionate-care patients and 29 patients included in the US EFS trial are reviewed. Patients were elderly (76 years) and high risk (EuroSCORE II was 9.0% and 8.1%, respectively). There were 2 procedural failures in both groups. Mortality at 30 days was 0% in the compassionate group and 7% in the EFS trial. TR was reduced in both groups: 2D/3D EROA from 2.1 ± 1.8 to 1.1 ± 0.9 cm² in the EFS trial, and vena contracta width from 12.1 ± 3.3 to 7.1 ± 2.2 mm. Symptomatic improvement was seen in both groups; the proportion of patients in NYHA class III/IV decreased from 84% to 28% at 30 days in the EFS group, and from 94% to 21% at 1 year in the compassionate group.
    Conclusions: Reduction of TR with the FORMA system is feasible and sustained. Despite residual TR post-procedure, the significant relative reduction in TR severity contributes to substantial clinical improvements in patients with a FORMA device in place.

    Faster Minimization of Tardy Processing Time on a Single Machine

    This paper is concerned with the 1||∑ p_j U_j problem, the problem of minimizing the total processing time of tardy jobs on a single machine. This is not only a fundamental scheduling problem, but also an important problem from a theoretical point of view, as it generalizes the Subset Sum problem and is closely related to the 0/1-Knapsack problem. The problem is well known to be NP-hard, but only in a weak sense, meaning it admits pseudo-polynomial-time algorithms. The fastest known pseudo-polynomial-time algorithm for the problem is the famous Lawler and Moore algorithm, which runs in O(P · n) time, where P is the total processing time of all n jobs in the input. This algorithm was developed in the late 1960s and has yet to be improved to date. In this paper we develop two new algorithms for 1||∑ p_j U_j, each improving on Lawler and Moore's algorithm in a different scenario. Both algorithms rely on basic primitive operations between sets of integers and vectors of integers for the speedup in their running times. The second algorithm relies on fast polynomial multiplication as its main engine, while for the first algorithm we define a new "skewed" version of (max, min)-convolution, which is interesting in its own right.
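
    The classical O(P · n) baseline can be rendered as a short set-based dynamic program: process jobs in earliest-due-date order and track which totals of on-time processing time are achievable. This is our own sketch of the Lawler and Moore approach, not the paper's new algorithms.

```python
def min_tardy_processing_time(jobs):
    """jobs: list of (p_j, d_j) pairs. Returns the minimum total
    processing time of tardy jobs, via a Lawler-and-Moore-style DP:
    sort by due date, then grow the set of achievable on-time
    processing-time totals. O(P * n) states overall."""
    jobs = sorted(jobs, key=lambda jd: jd[1])  # earliest-due-date order
    P = sum(p for p, _ in jobs)
    achievable = {0}  # achievable totals of on-time processing time
    for p, d in jobs:
        # job j can be on time only if it completes by its due date d_j
        achievable |= {s + p for s in achievable if s + p <= d}
    return P - max(achievable)  # tardy time = total minus on-time

# Example: jobs (p, d); the job with p = 2 must end up tardy.
# min_tardy_processing_time([(2, 3), (3, 4), (4, 7)]) -> 2
```

    The paper's algorithms speed this up by replacing the elementwise set updates with batched primitives (fast polynomial multiplication, and a skewed (max, min)-convolution).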

    Fairness in Repetitive Scheduling

    Recent research has found that fairness plays a key role in customer satisfaction. Many manufacturing and service industries have therefore become aware of the need to treat customers fairly. Still, there is a substantial lack of models that enable industries to make operational decisions fairly, such as fairly scheduling customers' jobs. Our main aim in this research is to provide a unified framework that enables schedulers to make fair decisions in repetitive scheduling environments. To do so, we consider a set of repetitive scheduling problems involving a set of n clients. In each of q consecutive operational periods (e.g., days), each customer submits a job for processing by an operational system. The scheduler's aim is to provide a schedule for each of the q periods such that the quality of service (QoS) received by each client meets a certain predefined threshold. The QoS of a client may take several different forms, e.g., the number of days on which the customer receives their job later than a given due date, the number of times the customer receives their preferred time slot for service, or the sum of the customer's waiting times for service. We analyze the single-machine variant of the problem for several different definitions of QoS, and classify the complexity of the corresponding problems using the theories of classical and parameterized complexity. We also study the price of fairness, i.e., the loss in the system's efficiency that results from the need to provide fair solutions.

    Scheduling Two Competing Agents When One Agent Has Significantly Fewer Jobs

    We study a scheduling problem in which two agents (each equipped with a private set of jobs) compete to perform their respective jobs on a common single machine. Each agent wants to keep the weighted sum of completion times of its jobs below a given (agent-dependent) bound. This problem is known to be NP-hard, even for quite restrictive settings of the problem parameters. We consider parameterized versions of the problem in which one of the agents has a small number of jobs (this small number constituting the parameter). The problem becomes much more tractable in this case, and we present three positive algorithmic results for it. Our study is complemented by showing that the general problem is NP-complete even when one agent has only a single job.