2,584 research outputs found

    Project scheduling under uncertainty – survey and research potentials.

    The vast majority of research efforts in project scheduling assume complete information about the scheduling problem to be solved and a static, deterministic environment within which the pre-computed baseline schedule will be executed. In the real world, however, project activities are subject to considerable uncertainty that is gradually resolved during project execution. In this survey we review the fundamental approaches for scheduling under uncertainty: reactive scheduling, stochastic project scheduling, stochastic GERT network scheduling, fuzzy project scheduling, robust (proactive) scheduling, and sensitivity analysis. We discuss the potential of these approaches for scheduling projects under uncertainty. Keywords: Management; Project management; Robustness; Scheduling; Stability.
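
    The survey is descriptive, but the stochastic project scheduling approach it reviews can be made concrete with a short simulation. The sketch below is a minimal illustration, not taken from the survey: the toy activity network, the triangular duration distributions, and all names are assumptions, and the code simply estimates the makespan distribution of a fixed baseline plan by sampling activity durations.

    import random

    # Toy activity-on-node network: activity -> (predecessors, (low, mode, high) duration)
    ACTIVITIES = {
        "A": ([], (2, 3, 6)),
        "B": (["A"], (1, 2, 4)),
        "C": (["A"], (3, 5, 9)),
        "D": (["B", "C"], (1, 1, 2)),
    }

    def sample_makespan():
        """Sample one duration per activity and propagate earliest finish times
        (the dict above lists activities in a topological order)."""
        finish = {}
        for act, (preds, (low, mode, high)) in ACTIVITIES.items():
            duration = random.triangular(low, high, mode)
            start = max((finish[p] for p in preds), default=0.0)
            finish[act] = start + duration
        return max(finish.values())

    random.seed(0)
    samples = sorted(sample_makespan() for _ in range(10_000))
    print("median makespan:", round(samples[5_000], 2))
    print("90th percentile:", round(samples[9_000], 2))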

    Job Shop Scheduling with Routing Flexibility and Sequence-Dependent Setup Times

    This paper presents a meta-heuristic algorithm for solving a job shop scheduling problem that involves both sequence-dependent setup times and the possibility of selecting alternative routes among the available machines. The proposed strategy is a variant of the Iterative Flattening Search (IFS) schema. This work provides three separate results: (1) a constraint-based solving procedure that extends an existing approach for classical Job Shop Scheduling; (2) a new variable and value ordering heuristic, based on temporal flexibility, that takes into account both sequence-dependent setup times and flexibility in machine selection; (3) an original relaxation strategy based on the idea of randomly breaking the execution orders of the activities on the machines, with an activity selection criterion based on proximity to the solution's critical path. The efficacy of the overall heuristic optimization algorithm is demonstrated on a new benchmark set that extends a well-known and difficult benchmark for the Flexible Job Shop Scheduling Problem.
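
    The relax-and-reoptimize schema behind Iterative Flattening Search can be illustrated with a deliberately small example. The sketch below applies the same loop (randomly retract part of the incumbent solution, then greedily rebuild it) to a toy single-machine sequencing problem with sequence-dependent setup times; the jobs, setup matrix, and greedy reinsertion step are invented stand-ins, not the paper's constraint-based JSSP procedure or its critical-path selection criterion.

    import random

    JOBS = list(range(6))
    DURATION = {j: 3 + j % 4 for j in JOBS}                                   # processing times
    SETUP = {(i, j): 1 + ((i * 7 + j * 3) % 5) for i in JOBS for j in JOBS}   # setup time i -> j

    def cost(sequence):
        """Total completion time: processing plus sequence-dependent setups."""
        total, prev = 0, None
        for j in sequence:
            total += (SETUP[(prev, j)] if prev is not None else 0) + DURATION[j]
            prev = j
        return total

    def greedy_insert(partial, removed):
        """'Flattening' step: reinsert each removed job at its cheapest position."""
        seq = list(partial)
        for j in removed:
            best_pos = min(range(len(seq) + 1),
                           key=lambda p: cost(seq[:p] + [j] + seq[p:]))
            seq.insert(best_pos, j)
        return seq

    def iterative_flattening(seq, iterations=200, relax_fraction=0.3):
        best = list(seq)
        for _ in range(iterations):
            removed = [j for j in best if random.random() < relax_fraction]   # relaxation step
            partial = [j for j in best if j not in removed]
            candidate = greedy_insert(partial, removed)
            if cost(candidate) < cost(best):
                best = candidate
        return best

    random.seed(0)
    start = list(JOBS)
    print("initial cost:", cost(start), "-> optimized cost:", cost(iterative_flattening(start)))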

    An optimization framework for solving capacitated multi-level lot-sizing problems with backlogging

    This paper proposes two new mixed-integer programming models for capacitated multi-level lot-sizing problems with backlogging, whose linear programming relaxations provide good lower bounds on the optimal solution value. We show that both of these strong formulations yield the same lower bounds. In addition to these theoretical results, we propose a new, effective optimization framework that achieves high-quality solutions in reasonable computational time. Computational results show that the proposed optimization framework is superior to other well-known approaches on several important performance dimensions.
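
    To make the decision variables concrete, the sketch below states a standard single-item capacitated lot-sizing model with backlogging in PuLP. It is an assumed, textbook-level illustration only: the paper addresses the multi-level case and proposes stronger reformulations, and the demand, capacity, and cost data here are invented.

    import pulp

    T = 4                                   # planning periods
    demand   = [40, 60, 30, 50]
    capacity = [70, 70, 70, 70]
    setup_cost, prod_cost, hold_cost, backlog_cost = 100, 2, 1, 5

    m = pulp.LpProblem("clsp_backlog", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("produce", range(T), lowBound=0)       # production quantity
    I = pulp.LpVariable.dicts("inventory", range(T), lowBound=0)     # end-of-period inventory
    B = pulp.LpVariable.dicts("backlog", range(T), lowBound=0)       # end-of-period backlog
    y = pulp.LpVariable.dicts("setup", range(T), cat="Binary")       # 1 if we produce in period t

    m += pulp.lpSum(setup_cost * y[t] + prod_cost * x[t]
                    + hold_cost * I[t] + backlog_cost * B[t] for t in range(T))

    for t in range(T):
        prev_I = I[t - 1] if t > 0 else 0
        prev_B = B[t - 1] if t > 0 else 0
        m += prev_I - prev_B + x[t] - demand[t] == I[t] - B[t]       # inventory/backlog balance
        m += x[t] <= capacity[t] * y[t]                              # capacity and setup linking

    m += I[T - 1] == 0                                               # meet all demand by the horizon
    m += B[T - 1] == 0

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    print("total cost:", pulp.value(m.objective))
    print("production plan:", [x[t].varValue for t in range(T)])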

    How to shift bias: Lessons from the Baldwin effect

    An inductive learning algorithm takes a set of data as input and generates a hypothesis as output. A set of data is typically consistent with an infinite number of hypotheses; therefore, there must be factors other than the data that determine the output of the learning algorithm. In machine learning, these other factors are called the bias of the learner. Classical learning algorithms have a fixed bias, implicit in their design. Recently developed learning algorithms dynamically adjust their bias as they search for a hypothesis. Algorithms that shift bias in this manner are not as well understood as classical algorithms. In this paper, we show that the Baldwin effect has implications for the design and analysis of bias-shifting algorithms. The Baldwin effect was proposed in 1896 to explain how phenomena that might appear to require Lamarckian evolution (inheritance of acquired characteristics) can arise from purely Darwinian evolution. Hinton and Nowlan presented a computational model of the Baldwin effect in 1987. We explore a variation on their model, constructed explicitly to illustrate the lessons that the Baldwin effect holds for research on bias-shifting algorithms. The main lesson is that a good strategy for shifting bias in a learning algorithm appears to be to begin with a weak bias and gradually shift to a strong bias.
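
    A compact re-implementation in the spirit of the Hinton and Nowlan model helps make the mechanism concrete. The sketch below is a simplified assumption, not the authors' variation: genomes mix fixed alleles (0/1) with plastic ones ('?') that lifetime learning fills in by random guessing, and genomes that learn the target quickly are rewarded. The population size, trial count, and fitness formula are assumed values.

    import random

    L, POP, TRIALS, GENERATIONS = 20, 100, 1000, 20
    TARGET = [1] * L

    def random_genome():
        # roughly 25% fixed 0s, 25% fixed 1s, 50% plastic alleles
        return [random.choice([0, 1, '?', '?']) for _ in range(L)]

    def fitness(genome):
        """Reward genomes that can *learn* the target quickly: the fewer guessing
        trials needed to fill the '?' positions correctly, the higher the fitness."""
        if any(g not in ('?', t) for g, t in zip(genome, TARGET)):
            return 1.0                                    # a wrong fixed allele can never be repaired
        plastic = genome.count('?')
        for trial in range(TRIALS):
            if all(random.random() < 0.5 for _ in range(plastic)):
                return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
        return 1.0

    def next_generation(pop):
        scored = [(fitness(g), g) for g in pop]
        total = sum(f for f, _ in scored)
        def pick():
            r, acc = random.uniform(0, total), 0.0
            for f, g in scored:
                acc += f
                if acc >= r:
                    return g
            return scored[-1][1]
        children = []
        for _ in range(POP):
            a, b = pick(), pick()
            cut = random.randrange(1, L)
            children.append(a[:cut] + b[cut:])            # one-point crossover, no mutation
        return children

    random.seed(1)
    population = [random_genome() for _ in range(POP)]
    for _ in range(GENERATIONS):
        population = next_generation(population)
    plastic_share = sum(g.count('?') for g in population) / (POP * L)
    print("share of plastic alleles after evolution:", round(plastic_share, 3))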

    Applying Iterative Flattening Search to the Job Shop Scheduling Problem with Alternative Resources and Sequence Dependent Setup Times

    This paper tackles a complex version of the Job Shop Scheduling Problem (JSSP) that involves both the possibility of selecting alternative resources for activities and the presence of sequence-dependent setup times. The proposed solving strategy is a variant of the known Iterative Flattening Search (IFS) metaheuristic. This work presents the following contributions: (1) a new constraint-based solving procedure obtained by enhancing a previous JSSP-solving version of the same metaheuristic; (2) a new version of both the variable and value ordering heuristics, based on temporal flexibility, that captures the relevant features of the extended scheduling problem (i.e., the flexibility in the assignment of resources to activities and the sequence-dependent setup times); (3) a new relaxation strategy based on the random selection of the activities that are closest to the critical path of the solution, as opposed to the original approach based on a fully random relaxation. The performance of the proposed algorithm is tested on a new benchmark set, produced by adding sequence-dependent setup times to each instance of an existing well-known test set for the Flexible Job Shop Scheduling Problem, and the behaviors of the old and new relaxation strategies are compared.
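
    The specific difference from the earlier IFS variant is contribution (3): biasing the relaxation toward activities close to the critical path instead of relaxing fully at random. The toy sketch below contrasts the two selection rules on an invented activity network; it is not the paper's code, and the durations, precedence graph, and bias factor are assumptions.

    import random

    DURATION = {"a": 4, "b": 2, "c": 6, "d": 3, "e": 5}
    PREDS = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"], "e": ["c"]}

    def critical_path():
        """Longest path through the precedence graph (activities are listed in a
        topological order above, so one forward pass suffices)."""
        finish, back = {}, {}
        for act in DURATION:
            preds = PREDS[act]
            best = max(preds, key=lambda p: finish[p], default=None)
            finish[act] = (finish[best] if best else 0) + DURATION[act]
            back[act] = best
        path, act = [], max(finish, key=finish.get)
        while act:
            path.append(act)
            act = back[act]
        return list(reversed(path))

    def random_relaxation(fraction=0.5):
        return [a for a in DURATION if random.random() < fraction]

    def critical_path_relaxation(fraction=0.5):
        critical = set(critical_path())
        return [a for a in DURATION
                if random.random() < (fraction if a in critical else fraction / 4)]

    random.seed(2)
    print("critical path:", critical_path())
    print("random relaxation retracts:", random_relaxation())
    print("biased relaxation retracts:", critical_path_relaxation())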

    AN INVESTIGATION INTO PARTITIONING ALGORITHMS FOR AUTOMATIC HETEROGENEOUS COMPILERS

    Automatic heterogeneous compilers (AHCs) allow blended hardware-software solutions to be explored without the cost of a full-fledged design team, but limited research exists on the partitioning algorithms responsible for separating hardware from software. The purpose of this thesis is to implement various partitioning algorithms on the same automatic heterogeneous compiler platform to create an apples-to-apples comparison of AHC partitioning algorithms. Both the estimated and the actual outcomes of the generated solutions are studied and scored. The platform used to implement the algorithms is Cal Poly’s own Twill compiler, created by Doug Gallatin last year. Twill’s original partitioning algorithm is chosen along with two other partitioning algorithms: Tabu Search + Simulated Annealing (TSSA) and Genetic Search (GS). These algorithms are implemented inside Twill, and test-bench input code from the CHStone HLS benchmark suite is used as stimulus. Along with the algorithms’ cost models, one key attribute of interest is the number of queues generated, because each cut between hardware and software requires a queue to pass data across the partition crossing, and these communication costs can end up damaging the heterogeneous solution’s performance. The Genetic, TSSA, and original Twill partitioning algorithms are also scored against each other’s cost models, combining the fitness and performance cost models with queue counts to evaluate each partitioning algorithm. The solutions generated by TSSA are rated as better by both the TSSA cost model and the Genetic cost model while producing low queue counts.
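
    A generic sketch of cost-driven hardware/software partitioning shows how queue counts enter the evaluation: every dataflow edge that crosses the hardware/software boundary needs a queue, which is penalized alongside execution time and area. The task graph, cost figures, weights, and the plain simulated-annealing move below are invented placeholders, not Twill’s cost model or the thesis’s TSSA implementation.

    import math, random

    TASKS = ["fetch", "decode", "filter", "fft", "pack", "emit"]
    EDGES = [("fetch", "decode"), ("decode", "filter"), ("filter", "fft"),
             ("fft", "pack"), ("pack", "emit")]
    SW_TIME = {"fetch": 2, "decode": 4, "filter": 9, "fft": 14, "pack": 3, "emit": 2}
    HW_TIME = {"fetch": 2, "decode": 3, "filter": 2, "fft": 3, "pack": 2, "emit": 2}
    HW_AREA = {"fetch": 1, "decode": 2, "filter": 4, "fft": 6, "pack": 2, "emit": 1}
    QUEUE_PENALTY, AREA_WEIGHT = 5, 1

    def cost(partition):                      # partition: task -> "hw" or "sw"
        time = sum(HW_TIME[t] if partition[t] == "hw" else SW_TIME[t] for t in TASKS)
        area = sum(HW_AREA[t] for t in TASKS if partition[t] == "hw")
        queues = sum(1 for a, b in EDGES if partition[a] != partition[b])
        return time + AREA_WEIGHT * area + QUEUE_PENALTY * queues

    def anneal(steps=5000, temp=10.0, cooling=0.999):
        part = {t: "sw" for t in TASKS}       # start from an all-software solution
        best = dict(part)
        for _ in range(steps):
            t = random.choice(TASKS)
            trial = dict(part)
            trial[t] = "hw" if part[t] == "sw" else "sw"    # flip one task's side
            delta = cost(trial) - cost(part)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                part = trial
                if cost(part) < cost(best):
                    best = dict(part)
            temp *= cooling
        return best

    random.seed(3)
    solution = anneal()
    print("partition:", solution)
    print("cost:", cost(solution), "queues:",
          sum(1 for a, b in EDGES if solution[a] != solution[b]))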

    Computational Molecular Design Using Tabu Search

    The focus of this project is the use of computational molecular design (CMD) in the design of novel crosslinked polymers. A design example was completed for a dimethacrylate used as part of a comonomer in dental restoration, with the goal of creating a dental adhesive with a longer clinical lifetime than those already on the market. The CMD methodology begins with the calculation of molecular descriptors that describe the crosslinked polymer structure. Connectivity indices are used as the primary set of descriptors and have been used successfully in other CMD projects. Quantitative structure-property relationships (QSPRs) were developed relating the structural descriptors to the experimentally collected property data. Models were chosen using Mallows' Cp with correlation coefficient significance. Desirable target property values were chosen that lead to an improved clinical lifetime. Structural constraints were defined to increase stability and ease of synthesis. The Tabu Search optimization algorithm was used to design polymers with desirable properties. Finally, a prediction interval was calculated for each candidate to represent the possible error in the predicted properties. The described methodology provides a list of candidate monomers with predicted properties near the desired target values, selected such that the adhesives will show improved properties relative to the standard HEMA/BisGMA formulation. The methodology can easily be altered to allow for additional property calculations and structural constraints, and can also be used for molecular design projects beyond crosslinked polymers.
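
    The optimization step can be pictured as a tabu search over a discrete design vector (counts of candidate structural groups), scoring each candidate by how far its QSPR-predicted properties fall from the targets. The sketch below is a schematic assumption only: the groups, the linear QSPR coefficients, the targets, and the structural constraint are invented and are not the project's actual descriptors or models.

    import random

    GROUPS = ["methacrylate", "ether", "hydroxyl", "aromatic"]
    MAX_COUNT = 4                                            # structural constraint per group
    QSPR = {  # property -> (intercept, coefficient per group); illustrative numbers only
        "water_sorption": (3.0, [-0.4, 0.6, 0.9, -0.3]),
        "flexural_modulus": (1.0, [0.5, -0.1, 0.2, 0.7]),
    }
    TARGET = {"water_sorption": 2.0, "flexural_modulus": 3.5}

    def predict(design):
        return {p: b0 + sum(c * n for c, n in zip(coefs, design))
                for p, (b0, coefs) in QSPR.items()}

    def score(design):
        """Squared distance of the predicted properties from the design targets."""
        pred = predict(design)
        return sum((pred[p] - TARGET[p]) ** 2 for p in TARGET)

    def neighbours(design):
        for i in range(len(GROUPS)):
            for step in (-1, 1):
                n = design[i] + step
                if 0 <= n <= MAX_COUNT:
                    yield design[:i] + (n,) + design[i + 1:]

    def tabu_search(iterations=200, tabu_size=10):
        current = best = tuple(random.randint(0, MAX_COUNT) for _ in GROUPS)
        tabu = [current]
        for _ in range(iterations):
            candidates = [d for d in neighbours(current) if d not in tabu]
            if not candidates:
                break
            current = min(candidates, key=score)     # best admissible move, even if worse
            tabu = (tabu + [current])[-tabu_size:]   # fixed-length tabu list
            if score(current) < score(best):
                best = current
        return best

    random.seed(4)
    winner = tabu_search()
    print("group counts:", dict(zip(GROUPS, winner)))
    print("predicted properties:", predict(winner))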