    Elimination Distance to Bounded Degree on Planar Graphs

    We study the graph parameter elimination distance to bounded degree, which was introduced by Bulian and Dawar in their study of the parameterized complexity of the graph isomorphism problem. We prove that the problem is fixed-parameter tractable on planar graphs, that is, there exists an algorithm that, given a planar graph G and integers d and k, decides in time f(k,d)·n^c for a computable function f and constant c whether the elimination distance of G to the class of degree-d graphs is at most k.
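The parameter can be stated recursively: a connected graph has elimination distance 0 to degree d if its maximum degree is at most d, and otherwise 1 plus the minimum over all single-vertex deletions; for a disconnected graph, take the maximum over its components. A brute-force Python sketch of this definition (exponential time, tiny graphs only; the function names are ours, not from the paper):

```python
def components(adj):
    """Connected components of a graph given as {vertex: set_of_neighbors}."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def elim_dist(adj, d):
    """Elimination distance of `adj` to the class of max-degree-d graphs."""
    comps = components(adj)
    if len(comps) > 1:
        # disconnected: maximum over connected components
        return max(elim_dist({v: adj[v] & c for v in c}, d) for c in comps)
    if all(len(nbrs) <= d for nbrs in adj.values()):
        return 0
    # connected with a high-degree vertex: delete one vertex, recurse
    best = float("inf")
    for v in adj:
        rest = {u: adj[u] - {v} for u in adj if u != v}
        best = min(best, 1 + elim_dist(rest, d))
    return best
```

For example, a star with three leaves has elimination distance 1 to degree 1: deleting the center leaves isolated vertices.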

    Double Coverage with Machine-Learned Advice

    We study the fundamental online k-server problem in a learning-augmented setting. While in the traditional online model an algorithm has no information about the request sequence, we assume that some advice (e.g., machine-learned predictions) on the algorithm's decisions is given. There is, however, no guarantee on the quality of the predictions, and they might be far from correct. Our main result is a learning-augmented variation of the well-known Double Coverage algorithm for k-server on the line (Chrobak et al., SIDMA 1991) into which we integrate the predictions as well as our trust in their quality. We give an error-dependent competitive ratio, which is a function of a user-defined confidence parameter and which interpolates smoothly between an optimal consistency, the performance in case all predictions are correct, and the best-possible robustness regardless of the prediction quality. When given good predictions, we improve upon known lower bounds for online algorithms without advice. We further show that, for any k, our algorithm achieves an almost optimal consistency-robustness tradeoff within a class of deterministic algorithms respecting local and memoryless properties. Our algorithm outperforms a previously proposed (more general) learning-augmented algorithm. It is remarkable that the previous algorithm crucially exploits memory, whereas ours is memoryless. Finally, we demonstrate in experiments the practicability and the superior performance of our algorithm on real-world data. Comment: Accepted at ITCS 202
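For reference, a minimal simulation of the classical, prediction-free Double Coverage rule on the line (the paper's algorithm additionally blends in predictions and a confidence parameter, which this sketch omits):

```python
def double_coverage(servers, requests):
    """Classical Double Coverage for k-server on the line: a request between
    two servers moves both adjacent servers toward it at equal speed until
    one arrives; a request outside the convex hull moves only the nearest
    server. Returns the total movement cost."""
    pos = sorted(servers)
    cost = 0.0
    for r in requests:
        left = [p for p in pos if p <= r]
        right = [p for p in pos if p >= r]
        if not left:                      # r lies left of every server
            i = pos.index(min(right))
            cost += pos[i] - r
            pos[i] = r
        elif not right:                   # r lies right of every server
            i = pos.index(max(left))
            cost += r - pos[i]
            pos[i] = r
        else:                             # r lies between two servers
            l, rt = max(left), min(right)
            if l == rt:                   # a server already sits on r
                continue
            i, j = pos.index(l), pos.index(rt)
            d = min(r - l, rt - r)        # both move d; the closer one reaches r
            pos[i] += d
            pos[j] -= d
            cost += 2 * d
    return cost
```

With servers at 0 and 10 and a request at 4, both servers move 4 units, for a cost of 8.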

    Speed-Oblivious Online Scheduling: Knowing (Precise) Speeds is not Necessary

    We consider online scheduling on unrelated (heterogeneous) machines in a speed-oblivious setting, where an algorithm is unaware of the exact job-dependent processing speeds. We show strong impossibility results for clairvoyant and non-clairvoyant algorithms and overcome them in models inspired by practical settings: (i) we provide competitive learning-augmented algorithms, assuming that (possibly erroneous) predictions on the speeds are given, and (ii) we provide competitive algorithms for the speed-ordered model, in which a single global order of machines according to their unknown job-dependent speeds is known. We prove strong theoretical guarantees and evaluate our findings on a representative heterogeneous multi-core processor. These seem to be the first empirical results for scheduling algorithms with predictions that are evaluated in a non-synthetic hardware environment. Comment: To appear at ICML 202
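A small illustration of the speed-ordered model (the helper name is ours): an order of machines is speed-ordered if every machine earlier in the order is at least as fast as every later machine on every single job, even though the speeds themselves remain unknown to the algorithm:

```python
def is_speed_ordered(speeds, order):
    """Check whether `order` (a list of machine indices) is a single global
    speed order: each machine earlier in the order is at least as fast as
    each later machine on *every* job. speeds[i][j] is the speed of machine
    i on job j. Illustrates the model only; not an algorithm from the paper."""
    for a, b in zip(order, order[1:]):
        if any(sa < sb for sa, sb in zip(speeds[a], speeds[b])):
            return False
    return True
```

If machine 0 dominates machine 1 on both jobs, the order [0, 1] is valid; if the machines' advantages differ per job, no global order exists.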

    A Study of Strategic Planning Practices in the Southern German Construction Industry

    This research explores how strategic planning practices are facilitated and constrained by the organizational processes and structures that characterize particular firms. The investigation of different aspects of strategic planning practices in the construction sector has yet to receive sufficient research attention. This gap motivates the emergent perspective on strategic practices not as stand-alone phenomena but as complex organizational processes. Semi-structured interviews provide the primary data for this qualitative investigation of construction enterprises in Southern Germany. Theoretically informed questions about business strategy, the formulation of objectives, and planning practices guide both data collection and analysis. The construction companies selected for the 33 interviews are located near Munich, Augsburg, Ulm, Günzburg, Landsberg, and Ingolstadt in Swabia and Upper Bavaria. The primary qualitative data were generated over four months, between November 2013 and February 2014. The study explores different aspects of strategic and planning practices by investigating encoded answers to the corresponding questionnaire items. In the majority of the micro and small construction companies that took part, an intuitive and ad hoc approach to planning predominates. Forecasting methods are in use owing to their contribution to minimizing short-term business costs and risks. Scenario methods are considered to involve overly long planning horizons and are deemed unsuitable owing to the rapid changes and volatility of the construction industry. Only in some of the medium-sized companies are financial and strategic planning differentiated, which could be because strategic plans are presented mostly to superiors and supervisors, planning is a hierarchical and formalized process, and the companies operate predominantly at regional and national scale.
This study has followed an activity-based framework and extended theoretical research by exploring an emergent perspective that analyses strategic planning practices as tools-in-use. Significantly different patterns across various aspects of planning practices clearly indicate that the characteristics of planning tools strongly affect their selection, utilization, and outcomes. Thus, this study demonstrates that planning practices and their associated tools are intricately intertwined with other organizational processes. It also suggests that the goals and motivations of owners and managers engaged in strategic planning are closely related to planning outcomes in organizations.

    Effect of nitric oxide on gene transcription – S-nitrosylation of nuclear proteins

    Nitric oxide (NO) plays an important role in many different physiological processes in plants. It acts mainly by post-translationally modifying proteins. Modification of cysteine residues, termed S-nitrosylation, is believed to be the most important mechanism for transducing the bioactivity of NO. The first proteins found to be nitrosylated were mainly of cytoplasmic origin or isolated from mitochondria and peroxisomes. Interestingly, it was shown that redox-sensitive transcription factors are also nitrosylated and that NO influences the redox-dependent nuclear transport of some proteins. This implies that NO plays a role in regulating transcription and/or general nuclear metabolism, which is a fascinating new aspect of NO signaling in plants. In this review, we discuss the impact of S-nitrosylation on nuclear plant proteins with a focus on transcriptional regulation, describe the function of this modification, and also draw comparisons to the animal system, in which S-nitrosylation of nuclear proteins is a well-characterized concept.

    A universal error measure for input predictions applied to online graph problems

    We introduce a novel measure for quantifying the error in input predictions. The error is based on a minimum-cost hyperedge cover in a suitably defined hypergraph and provides a general template which we apply to online graph problems. The measure captures errors due to absent predicted requests as well as unpredicted actual requests; hence, predicted and actual inputs can be of arbitrary size. We achieve refined performance guarantees for previously studied network design problems in the online-list model, such as Steiner tree and facility location. Further, we initiate the study of learning-augmented algorithms for online routing problems, such as the online traveling salesperson problem and the online dial-a-ride problem, where (transportation) requests arrive over time (online-time model). We provide a general algorithmic framework and give error-dependent performance bounds that improve upon known worst-case barriers when given accurate predictions, at the cost of slightly increased worst-case bounds when given predictions of arbitrary quality.
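The hypergraph on which the error is measured is constructed per problem in the paper; generically, a minimum-cost hyperedge cover can be computed by brute force on tiny instances, as in this sketch (the problem-specific construction is not reproduced here):

```python
def min_cost_hyperedge_cover(universe, hyperedges):
    """Brute-force minimum-cost hyperedge cover: choose a cheapest subset of
    hyperedges whose union contains `universe`. `hyperedges` is a list of
    (frozenset_of_vertices, cost) pairs. Exponential time; tiny inputs only."""
    best = float("inf")
    n = len(hyperedges)
    for mask in range(1 << n):            # enumerate all edge subsets
        covered, cost = set(), 0.0
        for i in range(n):
            if mask >> i & 1:
                edge, c = hyperedges[i]
                covered |= edge
                cost += c
        if covered >= universe:           # subset covers every vertex
            best = min(best, cost)
    return best
```

For universe {1, 2, 3} with edges {1,2} and {2,3} at cost 1 each and {1,2,3} at cost 3, the optimum picks the two cheap edges for total cost 2.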

    Santa Claus meets Makespan and Matroids: Algorithms and Reductions

    In this paper we study the relation between two fundamental problems in scheduling and fair allocation: makespan minimization on unrelated parallel machines and max-min fair allocation, also known as the Santa Claus problem. For both of these problems the best approximation factor is a notorious open question; more precisely, whether there is a better-than-2 approximation for the former problem and whether there is a constant approximation for the latter. While the two problems are intuitively related and history has shown that techniques can often be transferred between them, no formal reductions are known. We first show that an affirmative answer to the open question for makespan minimization implies the same for the Santa Claus problem, by reducing the latter problem to the former. We also prove that for problem instances with only two input values both questions are equivalent. We then move to a special case called "restricted assignment", which is well studied for both problems. Although our reductions do not maintain the characteristics of this special case, we give a reduction in a slight generalization, where the jobs or resources are assigned to multiple machines or players subject to a matroid constraint and, in addition, we have only two values. This draws a similar picture as before: equivalence for two values, and the general case of the Santa Claus problem can only be easier than makespan minimization. To complete the picture, we give an algorithm for our new matroid variant of the Santa Claus problem using a non-trivial extension of the local search method from restricted assignment. Thereby we unify, generalize, and improve several previous results. We believe that this matroid generalization may be of independent interest and we provide several sample applications.
As corollaries, we obtain a polynomial-time (2 − 1/n^ε)-approximation for two-value makespan minimization for every ε > 0, improving on the previous (2 − 1/m)-approximation, and a polynomial-time (1.75 + ε)-approximation for makespan minimization in the restricted assignment case with two values, improving the previous best rate of 1 + 2/√5 + ε ≈ 1.8945.
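To make the max-min objective concrete, a brute-force solver for the Santa Claus value on tiny instances (this illustrates the problem definition only, not the paper's local search algorithm):

```python
from itertools import product

def santa_claus_opt(values, num_players):
    """Optimal max-min fair allocation by exhaustive search: values[p][r] is
    the value of resource r for player p; each resource goes to exactly one
    player; maximize the minimum total value any player receives."""
    num_resources = len(values[0])
    best = 0
    for assignment in product(range(num_players), repeat=num_resources):
        totals = [0] * num_players
        for r, p in enumerate(assignment):
            totals[p] += values[p][r]
        best = max(best, min(totals))
    return best
```

With two players who each value "their" resource at 3 and the other at 1, the optimum gives each player their preferred resource, for a max-min value of 3.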

    Robustification of Online Graph Exploration Methods

    Exploring unknown environments is a fundamental task in many domains, e.g., robot navigation, network security, and internet search. We initiate the study of a learning-augmented variant of the classical, notoriously hard online graph exploration problem by adding access to machine-learned predictions. We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm and significantly outperforms any known online algorithm if the prediction is of high accuracy, while maintaining good guarantees when the prediction is of poor quality. We provide theoretical worst-case bounds that gracefully degrade with the prediction error, and we complement them by computational experiments that confirm our results. Further, we extend our concept to a general framework to robustify algorithms. By interpolating carefully between a given algorithm and NN, we prove new performance bounds that leverage the individual good performance on particular inputs while establishing robustness to arbitrary inputs.
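A minimal sketch of the underlying Nearest Neighbor rule without predictions: repeatedly walk, via a shortest path, to the closest unvisited vertex. For simplicity this sketch computes shortest paths on the full graph, whereas the online algorithm would only use distances within the explored subgraph:

```python
import heapq

def shortest_dists(adj, src):
    """Dijkstra from src; adj[u] is a list of (neighbor, edge_weight)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def nearest_neighbor_exploration(adj, start):
    """Visit all vertices by always moving to the nearest unvisited one;
    returns the total traversal cost."""
    cur, unvisited, cost = start, set(adj) - {start}, 0.0
    while unvisited:
        dist = shortest_dists(adj, cur)
        cur = min(unvisited, key=lambda v: dist[v])
        cost += dist[cur]
        unvisited.remove(cur)
    return cost
```

On the path 0-1-2 with unit edge weights, starting at 0, NN walks 0→1→2 for a total cost of 2.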