
    Minimizing Weighted lp-Norm of Flow-Time in the Rejection Model

    We consider the online scheduling problem of minimizing the weighted ℓ_p-norm of the flow-times of jobs. We study this problem under the rejection model introduced by Choudhury et al. (SODA 2015), in which the online algorithm is allowed to not serve an ε-fraction of the requests. We consider the restricted assignment setting, where each job can go to a specified subset of machines. Our main result is an immediate-dispatch, non-migratory (1/ε)^{O(1)}-competitive algorithm for this problem when one is allowed to reject jobs of total weight at most an ε-fraction of the total weight of arriving jobs. This is in contrast with the speed augmentation model, under which no online algorithm for this problem can achieve a competitive ratio independent of p.

    Rejecting Jobs to Minimize Load and Maximum Flow-time

    Online algorithms are usually analyzed using the notion of competitive ratio, which compares the solution obtained by the algorithm to that obtained by an offline adversary for the worst possible input sequence. This measure often turns out to be too pessimistic, and one popular approach, especially for scheduling problems, has been "resource augmentation", first proposed by Kalyanasundaram and Pruhs. Although resource augmentation has been very successful for a variety of objective functions, there are problems for which even an arbitrarily large constant speedup cannot lead to a constant-competitive algorithm. In this paper we propose a "rejection model" which requires no resource augmentation but permits the online algorithm to not serve an ε-fraction of the requests. The problems considered in this paper are in the restricted assignment setting, where each job can be assigned only to a subset of machines. For the load balancing problem, where the objective is to minimize the maximum load on any machine, we give an O(log^2(1/ε))-competitive algorithm which rejects at most an ε-fraction of the jobs. For the problem of minimizing the maximum weighted flow-time, we give an O(1/ε^4)-competitive algorithm which can reject at most an ε-fraction of the jobs by weight. We also extend this result to a more general setting where the weight of a job for measuring its weighted flow-time and its contribution towards the total allowed rejection weight are different. This is useful, for instance, for the objective of minimizing the maximum stretch, for which we obtain an O(1/ε^6)-competitive algorithm. Our algorithms are immediate dispatch, though they may not be immediate reject. All these problems have very strong lower bounds in the speed augmentation model.
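The bookkeeping the rejection model permits can be made concrete with a minimal sketch. This is only an illustration of the model's weight budget, not the paper's competitive algorithm, and the `RejectionBudget` class is a hypothetical helper:

```python
# Sketch of the rejection model's budget: an online scheduler may reject
# jobs as long as the total rejected weight never exceeds an eps-fraction
# of the total weight of jobs that have arrived so far.

class RejectionBudget:
    def __init__(self, eps):
        self.eps = eps
        self.total_weight = 0.0     # weight of all jobs that have arrived
        self.rejected_weight = 0.0  # weight of jobs rejected so far

    def arrive(self, weight):
        """Record a newly arrived job's weight."""
        self.total_weight += weight

    def can_reject(self, weight):
        """True if rejecting this job keeps us within the eps budget."""
        return self.rejected_weight + weight <= self.eps * self.total_weight

    def reject(self, weight):
        assert self.can_reject(weight)
        self.rejected_weight += weight

budget = RejectionBudget(eps=0.1)
for w in [5.0, 3.0, 2.0, 10.0]:
    budget.arrive(w)
# total weight is 20, so at most weight 2.0 may be rejected
print(budget.can_reject(2.0))   # True
print(budget.can_reject(2.5))   # False
```

Note that the budget grows as more weight arrives, so a rejection that is infeasible now may become feasible later; this is one reason the paper's algorithms need not be immediate reject.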

    Quicksilver: Fast Predictive Image Registration - a Deep Learning Approach

    This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled at test time to quantify uncertainty in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as open-source software.

    From Preemptive to Non-preemptive Scheduling Using Rejections

    We study the classical problem of scheduling a set of independent jobs with release dates on a single machine. There is a huge literature on the preemptive version of the problem, where jobs can be interrupted at any moment. Here, however, we focus on the non-preemptive case, which is harder but more relevant in practice. For instance, jobs submitted to actual high-performance platforms cannot be interrupted or migrated once they start their execution (due to prohibitive management overhead). We target the minimization of the total stretch, where the stretch of a job is the total time it stays in the system (waiting time plus execution time) normalized by its processing time. Stretch captures the quality of service of a job, and minimizing total stretch reflects fairness between jobs. So far, there have been only a few studies of this problem, especially for the non-preemptive case. Our approach is based on using the classical shortest remaining processing time (SRPT) policy, which is efficient in the preemptive case, as a lower bound. We investigate the (offline) transformation of the SRPT schedule into a non-preemptive schedule under a recently introduced resource augmentation model, the rejection model, according to which we are allowed to reject a small fraction of jobs. Specifically, we propose a 2/ε-approximation algorithm for the total stretch minimization problem if we are allowed to reject an ε-fraction of the jobs, for any ε > 0. This result shows that the rejection model is more powerful than the other resource augmentation models studied in the literature, such as speed augmentation or machine augmentation, for which non-polynomial or non-scalable results are known. As a byproduct, we present an O(1)-approximation algorithm for the total flow-time minimization problem which also rejects at most an ε-fraction of the jobs.
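The SRPT lower bound used above is easy to simulate. The sketch below assumes integral release and processing times and runs the schedule in unit time steps; `srpt_total_stretch` is an illustrative name, not code from the paper:

```python
import heapq

def srpt_total_stretch(jobs):
    """Simulate preemptive SRPT on one machine in unit time steps.

    jobs: list of (release_time, processing_time) with integral times.
    Returns the total stretch, where a job's stretch is its flow time
    (completion minus release) divided by its processing time.
    """
    events = sorted(jobs)          # jobs ordered by release time
    heap = []                      # (remaining_work, release, processing)
    t = 0
    i = 0
    done = 0
    total = 0.0
    while done < len(jobs):
        # release every job whose release time has passed
        while i < len(events) and events[i][0] <= t:
            r, p = events[i]
            heapq.heappush(heap, (p, r, p))
            i += 1
        if not heap:               # machine idles until the next release
            t = events[i][0]
            continue
        rem, r, p = heapq.heappop(heap)
        rem -= 1                   # run the shortest-remaining job one unit
        t += 1
        if rem == 0:
            total += (t - r) / p   # job finished: add its stretch
            done += 1
        else:
            heapq.heappush(heap, (rem, r, p))
    return total

print(srpt_total_stretch([(0, 2), (0, 1)]))  # 2.5
```

In the example, SRPT finishes the size-1 job first (stretch 1), then the size-2 job at time 3 (stretch 3/2). The paper's contribution is transforming such a preemptive schedule into a non-preemptive one while rejecting only an ε-fraction of jobs.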

    Static and Dynamic State Estimation Applications in Power Systems Protection and Control Engineering

    The developed methodologies are proposed to serve as support for control centers and fault analysis engineers. These approaches provide a dependable and effective means of pinpointing and resolving faults, which ultimately enhances power grid reliability. The algorithm uses the Least Absolute Value (LAV) method to estimate the augmented states of the PCB, enabling supervisory monitoring of the system. In addition, statistical analysis based on projection statistics of the system Jacobian is applied as a virtual sensor to detect faults on transmission lines. This approach is particularly valuable for detecting anomalies in transmission line data, such as bad data, other outliers, and leverage points. Through the integration of remote PCB status with virtual sensors, it becomes possible to accurately detect faulted transmission lines within the system. This, in turn, saves valuable troubleshooting time for line engineers, resulting in improved overall efficiency and potentially significant cost savings for the company. When there is a temporary or permanent fault, the generator dynamics will be affected by the transmission line reclosing, which could impact the system's stability and reliability. To address this issue, unscented Kalman filter (UKF) and optimal-performance iterated unscented Kalman filter (IUKF) dynamic state estimation techniques are proposed. These techniques provide an estimate of the dynamic states of synchronous generators, which is crucial for monitoring generator states during transmission line reclosing under temporary and permanent fault conditions. Several test systems were employed to evaluate reclosing following faults on transmission lines, including the IEEE 14-bus system, Kundur's two-area model, and the reduced Western Electricity Coordinating Council (WECC) model on the UTK electrical engineering hardware test bed (HTB).
    The developed methods offer a comprehensive solution to the challenges posed by unbalanced faults on transmission lines, such as line-to-line, line-to-line-ground, and line-to-ground faults, which utilities must consider when developing protective settings. The effectiveness of the solution is confirmed by monitoring the reaction of the dynamic state variables following transmission line reclosing after temporary faults and transmission line lockout from permanent faults.
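At the heart of the UKF mentioned above is the unscented transform: deterministic sigma points are propagated through the (nonlinear) model and re-averaged to estimate the predicted mean and variance. The sketch below is a minimal scalar-state illustration of that transform, not the thesis's estimator; the function name and `kappa` default are assumptions:

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a scalar Gaussian (mean, var) through f.

    Generates 2n+1 sigma points (n = 1 here), pushes each through f,
    and recombines them with fixed weights to approximate the mean
    and variance of f(x).
    """
    n = 1                                    # scalar state for illustration
    spread = math.sqrt((n + kappa) * var)
    sigma = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)                 # weight of the central point
    wi = 1.0 / (2 * (n + kappa))             # weight of each spread point
    weights = [w0, wi, wi]
    ys = [f(x) for x in sigma]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# For a linear model the transform is exact: f(x) = 2x doubles the mean
# and quadruples the variance.
print(unscented_transform(1.0, 0.5, lambda x: 2 * x))  # approximately (2.0, 2.0)
```

A full UKF alternates this prediction step with a measurement update; for generator dynamic state estimation, f would be the discretized swing-equation model and the state a vector of rotor angles and speeds.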

    Data based predictive control: Application to water distribution networks

    In this thesis, the main goal is to propose novel data-based predictive controllers to cope with complex industrial infrastructures such as water distribution networks. These systems have several inputs and outputs, complicated nonlinear dynamics, and binary actuators; they are usually perturbed by disturbances and noise and require real-time control implementation. The proposed controllers have to deal successfully with these issues while using the available information, such as past operation data of the process or system properties such as fading dynamics. To this end, the control strategies presented in this work follow a predictive control approach. The control actions computed by the proposed data-driven strategies are obtained as the solution of an optimization problem that is similar in essence to those used in model predictive control (MPC), based on a cost function that determines the performance to be optimized. In the proposed approach, however, the prediction model is replaced by an inference, data-based strategy, used either to identify a model, to learn an unknown control law, or to estimate the future cost of a given decision. As in MPC, the proposed strategies are based on a receding horizon implementation, which implies that the optimization problems considered have to be solved online. In order to obtain problems that can be solved efficiently, most of the strategies proposed in this thesis are based on direct weight optimization, for ease of implementation and for reasons of computational complexity. Linear convex combination is a simple and powerful tool in the continuous domain, and the computational load associated with the constrained optimization problems it generates is relatively low. This makes the proposed data-based predictive approaches suitable for real-time applications.
    The proposed approaches select the most adequate information (similar to the current situation according to output, state, input, disturbances, etc.), in particular data which is close to the current state or situation of the system. Using local data can be interpreted as an implicit local linearisation of the system every time we solve the model-free, data-driven optimization problem. This implies that even though the model-free, data-driven approaches presented in this thesis are based on linear theory, they can successfully deal with nonlinear systems because of the implicit information available in the database. Finally, a learning-based approach for robust predictive control design for multi-input multi-output (MIMO) linear systems is also presented, in which the effect of estimation and measurement errors, and of unknown perturbations in large-scale complex systems, is considered.
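The idea of predicting with a convex combination of locally selected past data can be sketched in a few lines. The sketch below is a simplification of the thesis's approach: a scalar state, and inverse-distance weights standing in for the direct weight optimization; `predict_next` and its signature are assumptions for illustration:

```python
def predict_next(database, state, k=3):
    """Predict the next state as a convex combination of stored transitions.

    database: list of (state, next_state) pairs from past operation.
    Picks the k stored transitions whose state is closest to the current
    one and combines their next-states with normalized inverse-distance
    weights (nonnegative, summing to 1: a convex combination).
    """
    nearest = sorted(database, key=lambda rec: abs(rec[0] - state))[:k]
    raw = [1.0 / (abs(s - state) + 1e-9) for s, _ in nearest]
    total = sum(raw)
    weights = [w / total for w in raw]           # convex combination weights
    return sum(w * nxt for w, (_, nxt) in zip(weights, nearest))

# Data recorded from a simple linear system x_{t+1} = 0.5 * x_t; the local
# convex combination recovers the dynamics near the query point.
data = [(x, 0.5 * x) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]
print(predict_next(data, 2.0))  # close to 1.0
```

Because only data near the current state enters the combination, the prediction behaves like an implicit local linearisation, which is why the approach can track nonlinear systems despite its linear machinery.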

    Online Min-Sum Flow Scheduling with Rejections

    In this paper, we study the problems of preemptive and non-preemptive online scheduling of jobs on unrelated machines in order to minimize the average time a job remains in the system. Both problems are known to be non-approximable within a constant factor. However, the preemptive variant has been extensively studied under different resource augmentation models, while the non-preemptive variant is much less explored. An O(1/ε)-competitive algorithm has been presented in [7] for the non-preemptive average flow-time minimization problem on a set of unrelated machines if both an ε-speed augmentation is used and an ε-fraction of jobs is rejected. We are interested here in exploring the power of the rejection model and, mainly, in eliminating the need for speed augmentation in the latter result. On the road to this, we show how to replace speed augmentation with rejection in the preemptive variant. Our analysis is based on the dual-fitting paradigm.

    Complexity of Scheduling Few Types of Jobs on Related and Unrelated Machines

    The tasks of scheduling jobs to machines while minimizing the total makespan, the sum of weighted completion times, or a norm of the load vector are among the oldest and most fundamental tasks in combinatorial optimization. Since all of these problems are NP-hard in general, much attention has been given to the regime where there is only a small number k of job types, but the number of jobs n is possibly large; this is the few-job-types, high-multiplicity regime. Despite many positive results, the hardness boundary of this regime was not understood until now. We show that makespan minimization on uniformly related machines (Q|HM|C_max) is NP-hard already with 6 job types, and that the related Cutting Stock problem is NP-hard already with 8 item types. For the more general unrelated machines model (R|HM|C_max), we show that if either the largest job size p_max or the number of jobs n is polynomially bounded in the instance size |I|, there are algorithms with complexity |I|^poly(k). Our main result is that this is unlikely to be improved, because Q||C_max is W[1]-hard parameterized by k already when n, p_max, and the numbers describing the speeds are polynomial in |I|; the same holds for R|HM|C_max (without speeds) when the job sizes matrix has rank 2. Our positive and negative results also extend to the objectives ℓ_p-norm minimization of the load vector and, partially, sum of weighted completion times ∑ w_j C_j. Along the way, we answer affirmatively the question whether makespan minimization on identical machines (P||C_max) is fixed-parameter tractable parameterized by k, extending our understanding of this fundamental problem. Together with our hardness results for Q||C_max, this implies that the complexity of P|HM|C_max is the only remaining open case.
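The high-multiplicity encoding can be illustrated with a small sketch: the input lists the k job types as (size, multiplicity) pairs, so its length is polynomial in k and log n even when n is huge. The scheduler below uses the classical LPT greedy heuristic on identical machines purely as a stand-in; it is not the paper's parameterized algorithm, and it still unrolls the multiplicities, so its running time is polynomial in n rather than in the compact encoding:

```python
import heapq

def lpt_makespan(job_types, m):
    """Makespan of the Longest-Processing-Time-first schedule for P||C_max.

    job_types: list of (size, multiplicity) pairs (high-multiplicity input).
    m: number of identical machines.
    """
    loads = [0] * m
    heapq.heapify(loads)                    # min-heap of machine loads
    for size, mult in sorted(job_types, reverse=True):  # largest sizes first
        for _ in range(mult):
            load = heapq.heappop(loads)
            heapq.heappush(loads, load + size)  # assign to least-loaded machine
    return max(loads)

# 6 jobs of size 3 and 3 jobs of size 2 on 3 identical machines
print(lpt_makespan([(3, 6), (2, 3)], 3))  # 8
```

The gap between this heuristic and an exact algorithm whose running time depends only on k and the encoding length is precisely where the paper's positive results and W[1]-hardness boundary live.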