
    Local search performance guarantees for restricted related parallel machine scheduling

    We consider the problem of minimizing the makespan on restricted related parallel machines. In restricted machine scheduling, each job may only be scheduled on a subset of the machines. We study the worst-case behavior of local search algorithms. In particular, we analyze the quality of local optima with respect to the jump, swap, push, and lexicographical jump neighborhoods.
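
    The jump neighborhood mentioned above moves a single job from a critical (most loaded) machine to another machine on which it is allowed to run, whenever that job would then finish strictly below the current makespan. A minimal Python sketch of such a local search, with illustrative job sizes, machine speeds, and eligibility sets that do not come from the paper, could look as follows:

```python
def jump_local_search(sizes, speeds, eligible, assignment):
    """Apply improving jump moves until a jump-local optimum is reached.

    sizes[j]      processing requirement of job j
    speeds[i]     speed of machine i (a job adds sizes[j] / speeds[i] load)
    eligible[j]   set of machines job j is allowed to run on
    assignment[j] machine currently processing job j (modified in place)
    """
    load = [0.0] * len(speeds)
    for j, i in enumerate(assignment):
        load[i] += sizes[j] / speeds[i]

    improved = True
    while improved:
        improved = False
        makespan = max(load)
        for j, i in enumerate(assignment):
            if load[i] < makespan:          # only critical machines matter
                continue
            for k in eligible[j]:
                # Jump move: the job must finish below the current makespan.
                if k != i and load[k] + sizes[j] / speeds[k] < makespan:
                    load[i] -= sizes[j] / speeds[i]
                    load[k] += sizes[j] / speeds[k]
                    assignment[j] = k
                    improved = True
                    break
            if improved:
                break
    return assignment, max(load)

# Illustrative instance: job 3 is restricted to machine 0.
sizes = [4.0, 3.0, 2.0, 2.0]
speeds = [1.0, 2.0]
eligible = [{0, 1}, {0, 1}, {0, 1}, {0}]
print(jump_local_search(sizes, speeds, eligible, [0, 0, 0, 0]))
```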

    Symmetry Exploitation for Online Machine Covering with Bounded Migration

    Online models that allow recourse are highly effective in situations where classical models are too pessimistic. One such problem is the online machine covering problem on identical machines. In this setting, jobs arrive one by one and must be assigned to machines with the objective of maximizing the minimum machine load. When a job arrives, we are allowed to reassign some jobs as long as their total size is (at most) proportional to the processing time of the arriving job. The proportionality constant is called the migration factor of the algorithm. By rounding the processing times, which yields useful structural properties for online packing and covering problems, we first design a simple (1.7 + epsilon)-competitive algorithm with a migration factor of O(1/epsilon) that maintains, at every arrival, a locally optimal solution with respect to the jump neighborhood. As our main contribution, we then present a more involved (4/3 + epsilon)-competitive algorithm with a migration factor of O~(1/epsilon^3). At every arrival, it runs an adaptation of the Largest Processing Time first (LPT) algorithm. Since in both cases the new job can force a complete reassignment of the smaller jobs, a low migration factor is achieved by carefully exploiting the highly symmetric structure obtained by the rounding procedure.
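
    For intuition, the offline Largest Processing Time first rule that the second algorithm adapts fits in a few lines of Python: scan the jobs in non-increasing size order and always place the next job on a currently least-loaded machine. The job sizes below are made up, and the paper's online variant additionally rounds processing times and migrates only a bounded volume of jobs per arrival:

```python
import heapq

def lpt_cover(job_sizes, num_machines):
    """Greedy LPT assignment; returns the resulting machine loads."""
    heap = [(0.0, i) for i in range(num_machines)]   # (load, machine id)
    heapq.heapify(heap)
    for p in sorted(job_sizes, reverse=True):
        load, i = heapq.heappop(heap)                # least-loaded machine
        heapq.heappush(heap, (load + p, i))
    return [load for load, _ in heap]

loads = lpt_cover([9, 7, 6, 5, 5, 4, 2], num_machines=3)
print(min(loads))   # machine covering objective: the minimum load
```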

    Healthy People in a Healthy Economy: A Blueprint for Action in Massachusetts

    Examines the recession's effects on health and the cost of chronic disease. Suggests proven strategies for schools, municipalities, state government, payers, employers, the food industry, physicians, philanthropies, and the media to promote healthy behaviors.

    Community Design for Healthy Eating: How Land Use and Transportation Solutions Can Help

    Examines how the built environment -- land use, a lack of grocery stores, poor transportation systems, and sprawling development -- limits access to healthy foods in low-income, inner-city neighborhoods. Profiles efforts to improve food access.

    Smoothed Analysis of Selected Optimization Problems and Algorithms

    Optimization problems arise in almost every field of economics, engineering, and science. Many of these problems are well understood in theory, and sophisticated algorithms exist to solve them efficiently in practice. Unfortunately, in many cases the theoretically most efficient algorithms perform poorly in practice. On the other hand, some algorithms are much faster than theory predicts. This discrepancy is a consequence of the pessimism inherent in the framework of worst-case analysis, the predominant analysis concept in theoretical computer science. We study selected optimization problems and algorithms in the framework of smoothed analysis in order to narrow the gap between theory and practice. In smoothed analysis, an adversary specifies the input, which is subsequently slightly perturbed at random. As one example we consider the successive shortest path algorithm for the minimum-cost flow problem. While in the worst case the successive shortest path algorithm takes exponentially many steps to compute a minimum-cost flow, we show that its running time is polynomial in the smoothed setting. Another problem studied in this thesis is makespan minimization for scheduling with related machines. Since it seems unlikely that fast algorithms exist to solve this problem exactly, we consider three approximation algorithms: the jump algorithm, the lex-jump algorithm, and the list scheduling algorithm. In the worst case, the approximation guarantees of these algorithms depend on the number of machines; we show that there is no such dependence in smoothed analysis. We also apply smoothed analysis to multicriteria optimization problems. In particular, we consider integer optimization problems with several linear objectives that have to be minimized simultaneously. We derive a polynomial upper bound for the size of the set of Pareto-optimal solutions, contrasting with the exponential worst-case lower bound. As the icing on the cake, we find that the insights gained from our smoothed analysis of the running time of the successive shortest path algorithm lead to the design of a randomized algorithm for finding short paths between two given vertices of a polyhedron. We see this result as an indication that, in the future, smoothed analysis might also lead to the development of fast algorithms.
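
    The smoothed setting described above is straightforward to mimic experimentally: fix an adversarial instance, perturb it slightly at random, and average the measured cost over many perturbations. The relative-noise model and the placeholder run_algorithm callable below are illustrative assumptions, not the thesis's actual perturbation model:

```python
import random

def smoothed_cost(instance, run_algorithm, sigma=0.01, trials=100):
    """Average cost of run_algorithm over sigma-perturbed copies of instance.

    instance       list of nonnegative numbers (edge costs, job sizes, ...)
    run_algorithm  callable returning a cost measure such as a step count
    sigma          relative magnitude of the random perturbation
    """
    total = 0.0
    for _ in range(trials):
        perturbed = [x * (1.0 + random.uniform(-sigma, sigma)) for x in instance]
        total += run_algorithm(perturbed)
    return total / trials
```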

    A stochastic variable size bin packing problem with time constraints

    In this paper, we extend the classical Variable Size Bin Packing Problem (VSBPP) by adding time features to both bins and items. Specifically, the bins act as machines that process the assigned batch of items with a fixed processing time, while the items become available for processing at given times and are penalized for tardiness. Within this extension we also consider a stochastic variant, in which the arrival times of the items follow a discrete probability distribution. To solve these models, we build a Markov Chain Monte Carlo (MCMC) heuristic. We provide numerical tests to show how the decision making changes when time constraints and stochasticity are added to VSBPP instances; the results show that these new models entail safer but more costly solutions. We also compare the performance of the MCMC heuristic against an industrial solver to show the efficiency and the efficacy of our method.
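
    As a rough illustration of the kind of MCMC heuristic mentioned above, the generic Metropolis chain below always accepts improving moves and occasionally accepts cost-increasing ones, which lets it escape local optima. The cost function, neighborhood move, and fixed temperature are illustrative assumptions rather than the paper's actual design:

```python
import math
import random

def mcmc_search(cost, random_neighbor, state, temperature=1.0, steps=10_000):
    """Metropolis chain over solutions: accept a worsening move with
    probability exp(-delta / temperature), and track the best state seen."""
    cur_cost = cost(state)
    best, best_cost = state, cur_cost
    for _ in range(steps):
        candidate = random_neighbor(state)
        delta = cost(candidate) - cur_cost
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            state, cur_cost = candidate, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = state, cur_cost
    return best, best_cost
```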

    Extreme Scale De Novo Metagenome Assembly

    Metagenome assembly is the process of transforming a set of short, overlapping, and potentially erroneous DNA segments from environmental samples into an accurate representation of the underlying microbiomes' genomes. State-of-the-art tools require big shared-memory machines and cannot handle contemporary metagenome datasets that run to terabytes in size. In this paper, we introduce the MetaHipMer pipeline, a high-quality and high-performance metagenome assembler that employs an iterative de Bruijn graph approach. MetaHipMer leverages a specialized scaffolding algorithm that produces long scaffolds and accommodates the idiosyncrasies of metagenomes. MetaHipMer is end-to-end parallelized using the Unified Parallel C language and can therefore run seamlessly on shared- and distributed-memory systems. Experimental results show that MetaHipMer matches or outperforms state-of-the-art tools in terms of accuracy. Moreover, MetaHipMer scales efficiently to large concurrencies and is able to assemble previously intractable grand-challenge metagenomes. We demonstrate the unprecedented capability of MetaHipMer by computing the first full assembly of the Twitchell Wetlands dataset, consisting of 7.5 billion reads with a total size of 2.6 TBytes. Comment: Accepted to SC1
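
    To make the de Bruijn graph approach concrete, the toy sketch below cuts each read into overlapping k-mers and links every k-mer's (k-1)-length prefix to its suffix. The choice of k and the in-memory dictionary are illustrative; MetaHipMer instead distributes this structure across many nodes using Unified Parallel C:

```python
from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Map each (k-1)-mer node to the nodes reachable via one k-mer."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])   # prefix -> suffix edge
    return graph

print(de_bruijn_edges(["ACGTAC", "CGTACG"], k=4))
```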

    Mixed integer programming and adaptive problem solver learned by landscape analysis for clinical laboratory scheduling

    This paper derives a mathematical formulation for real-world clinical laboratory scheduling and presents an adaptive problem solver that leverages landscape structures. After formulating the scheduling of medical tests as a distributed scheduling problem in a heterogeneous, flexible job shop environment, we establish a mixed integer programming model to minimize mean test turnaround time. Preliminary landscape analysis confirms that these clinic-oriented scheduling instances are difficult to solve. This search difficulty motivates the design of an adaptive problem solver that reduces repetitive algorithm-tuning work while retaining guaranteed convergence. For a given search strategy, however, the relationship between exploitation competence and landscape topology is not transparent. Studying strategies that impose perturbations of different magnitudes, we investigate changes in landscape structure and find that disturbance amplitude, connectivity between local and global optima, landscape ruggedness, and plateau size are fair predictors of a strategy's efficacy. Medium-size instances of 100 tasks are easier under smaller-perturbation strategies, which lead to smoother landscapes with smaller plateaus. On large instances of 200-500 tasks, existing strategies, whether with larger or smaller perturbations, face more rugged landscapes with larger plateaus that impede search. The hypothesis that medium perturbations may generate smoother landscapes with smaller plateaus drives the design of a new strategy, which we verify experimentally. Composite neighborhoods managed by meta-Lamarckian learning show above-average performance, implying reliability when no prior knowledge of the landscape is available.
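
    The perturbation strategies compared above fit the iterated-local-search template sketched below, where the perturbation magnitude is the knob that determines how rugged or plateau-ridden the landscape appears to the search. The local_search and perturb callables are illustrative placeholders, not the paper's operators:

```python
def iterated_local_search(cost, local_search, perturb, state,
                          magnitude=0.2, restarts=50):
    """Alternate perturbation and local search, keeping the best solution.

    A magnitude that is too small tends to fall back into the same local
    optimum; one that is too large degrades into random restarts on a
    rugged landscape, which is the trade-off the paper analyzes."""
    best = local_search(state)
    best_cost = cost(best)
    for _ in range(restarts):
        candidate = local_search(perturb(best, magnitude))
        candidate_cost = cost(candidate)
        if candidate_cost < best_cost:       # accept only improvements
            best, best_cost = candidate, candidate_cost
    return best, best_cost
```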