
    Decomposition, Reformulation, and Diving in University Course Timetabling

    In many real-life optimisation problems, there are multiple interacting components in a solution. For example, different components might specify assignments to different kinds of resource. Often, each component is associated with different sets of soft constraints, and so with different measures of soft constraint violation. The goal is then to minimise a linear combination of such measures. This paper studies an approach to such problems, which can be thought of as multiphase exploitation of multiple objective-/value-restricted submodels. In this approach, only one computationally difficult component of a problem and the associated subset of objectives is considered at first. This produces partial solutions, which define interesting neighbourhoods in the search space of the complete problem. Often, it is possible to pick the initial component so that variable aggregation can be performed at the first stage, and the neighbourhoods to be explored next are guaranteed to contain feasible solutions. Using integer programming, it is then easy to implement heuristics producing solutions with bounds on their quality. Our study is performed on a university course timetabling problem used in the 2007 International Timetabling Competition, also known as the Udine Course Timetabling Problem. In the proposed heuristic, an objective-restricted neighbourhood generator produces assignments of periods to events, with decreasing numbers of violations of two period-related soft constraints. Those are relaxed into assignments of events to days, which define neighbourhoods that are easier to search with respect to all four soft constraints. Integer programming formulations for all subproblems are given and evaluated using ILOG CPLEX 11. The wider applicability of this approach is analysed and discussed.
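    The two-phase structure described in this abstract can be illustrated with a small, runnable sketch. The toy data, the penalty functions, and the brute-force enumeration below are illustrative assumptions only; the paper itself works with integer programming formulations solved by ILOG CPLEX.

from itertools import product

events = ["e1", "e2", "e3"]
days, slots = 2, 3                                # a period is a (day, slot) pair
periods = list(product(range(days), range(slots)))

def period_penalty(assign):
    # stands in for the two period-related soft constraints (e.g. prefer early slots)
    return sum(slot for _, slot in assign.values())

def full_penalty(assign):
    # stands in for all four soft constraints: period penalty plus a spread penalty
    same_day = sum(1 for a in events for b in events
                   if a < b and assign[a][0] == assign[b][0])
    return period_penalty(assign) + same_day

def assignments(allowed):
    # enumerate every assignment of events to allowed periods (toy-sized only)
    for combo in product(*(allowed[e] for e in events)):
        yield dict(zip(events, combo))

# Phase 1: optimise the period-related objective only.
phase1 = min(assignments({e: periods for e in events}), key=period_penalty)

# Relax the phase-1 periods to days; the day pattern defines the neighbourhood.
day_of = {e: p[0] for e, p in phase1.items()}
neighbourhood = {e: [p for p in periods if p[0] == day_of[e]] for e in events}

# Phase 2: re-optimise all soft constraints, but only inside that neighbourhood.
phase2 = min(assignments(neighbourhood), key=full_penalty)
print(phase2, full_penalty(phase2))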

    The problems of selecting problems

    We face several teaching problems where a set of exercises has to be selected based on their capability to make students discover typical misconceptions or to evaluate the knowledge of the students. We consider four different optimization problems, developed from two basic decision problems. The first two optimization problems consist of selecting a set of exercises reaching some required level of coverage for each topic. In the first problem we minimize the total time required to present the selected exercises, whereas in the second problem the surplus coverage of topics is maximized. The other two optimization problems consist of composing an exam in such a way that each student misconception reduces the overall mark of the exam to some specific required extent. In particular, we consider the problem of minimizing the size of the exam fulfilling these mark reduction constraints, and the problem of minimizing the differences between the required mark losses due to each misconception and the actual ones in the composed exam. For each optimization problem, we formally identify its approximation hardness and we solve it heuristically using a genetic algorithm. We report experimental results for a case study based on a set of real exercises from Discrete Mathematics, a Computer Science degree subject.
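    A minimal sketch of the first selection problem (reach a required coverage level per topic while keeping the total presentation time low) is given below. The paper solves these problems with a genetic algorithm; the toy exercise data and the greedy coverage-per-minute heuristic here are assumptions made purely for illustration.

exercises = {                      # name: (time in minutes, coverage per topic)
    "ex1": (10, {"logic": 2, "sets": 0}),
    "ex2": (15, {"logic": 1, "sets": 2}),
    "ex3": (20, {"logic": 3, "sets": 1}),
    "ex4": (5,  {"logic": 0, "sets": 1}),
}
required = {"logic": 3, "sets": 2}             # required coverage level per topic

def deficit(covered):
    return {t: max(0, required[t] - covered.get(t, 0)) for t in required}

selected, covered = [], {}
while any(deficit(covered).values()):
    gaps = deficit(covered)

    def score(item):                           # useful coverage gained per minute
        name, (time, cov) = item
        gain = sum(min(cov.get(t, 0), gaps[t]) for t in required)
        return gain / time

    name, (time, cov) = max(
        ((n, e) for n, e in exercises.items() if n not in selected), key=score)
    selected.append(name)
    for t, c in cov.items():
        covered[t] = covered.get(t, 0) + c

print(selected, "total time:", sum(exercises[n][0] for n in selected))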

    Applications of river formation dynamics

    River formation dynamics is a metaheuristic in which solutions are constructed by iteratively modifying the values associated with the nodes of a graph. Its gradient orientation provides interesting features such as the fast reinforcement of new shortcuts, the natural avoidance of cycles, and the focused elimination of blind alleys. Since the method was first proposed in 2007, several research groups have applied it to a wide variety of application domains, such as telecommunications, software testing, industrial manufacturing processes, and navigation. In this paper we review the main works of the last decade in which the river formation dynamics metaheuristic has been applied to solve optimization problems.
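    The gradient-driven mechanics summarised in this abstract can be sketched in a few lines. The toy graph, the erosion rate, and the readout rule below are illustrative assumptions rather than the original 2007 formulation: drops prefer downhill moves towards a destination whose altitude is fixed at zero, and the nodes they traverse are eroded, so frequently used routes become steep "river beds".

import random

graph = {                            # node: {neighbour: edge cost}
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}
source, dest = "A", "D"
alt = {n: 100.0 for n in graph}      # initial altitude of every node
alt[dest] = 0.0                      # the destination acts as the sea

def next_node(node):
    # move probability proportional to the downhill gradient towards each neighbour
    grads = {m: max(alt[node] - alt[m], 0.0) / c for m, c in graph[node].items()}
    total = sum(grads.values())
    if total == 0:                   # flat (or uphill) everywhere: wander randomly
        return random.choice(list(graph[node])) if graph[node] else None
    r, acc = random.uniform(0, total), 0.0
    for m, g in grads.items():
        acc += g
        if r <= acc:
            return m
    return m

random.seed(1)
for _ in range(200):                 # send drops from the source and erode their paths
    node, path = source, [source]
    while node != dest:
        nxt = next_node(node)
        if nxt is None:
            break
        path.append(nxt)
        node = nxt
    if node == dest:
        for a, b in zip(path, path[1:]):
            erosion = 0.1 * max(alt[a] - alt[b], 0.0) / graph[a][b]
            alt[a] = max(alt[a] - erosion, alt[b])   # never erode below the next node

# read off the reinforced route by repeatedly taking the steepest downhill edge
node, route = source, [source]
while node != dest:
    node = max(graph[node], key=lambda m: (alt[node] - alt[m]) / graph[node][m])
    route.append(node)
print(route, {n: round(a, 1) for n, a in alt.items()})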

    Energy-aware scheduling in heterogeneous computing systems

    In the last decade, grid computing systems emerged as useful providers of the computing power required for solving complex problems. The classic formulation of the scheduling problem in heterogeneous computing systems is NP-hard, so approximation techniques are required for solving real-world scenarios of this problem. This thesis tackles the problem of scheduling tasks in a heterogeneous computing environment in reduced execution times, considering the schedule length and the total energy consumption as the optimization objectives. An efficient multithreading local search algorithm for solving the multi-objective scheduling problem in heterogeneous computing systems, named ME-MLS, is presented. The proposed method follows a fully multi-objective approach, applying a Pareto-based dominance search that is executed in parallel by using several threads. The experimental analysis demonstrates that the new multithreading algorithm outperforms a set of fast and accurate two-phase deterministic heuristics based on the traditional MinMin. The new ME-MLS method is able to achieve significant improvements in both the makespan and energy consumption objectives in reduced execution times for a large set of testbed instances, while exhibiting very good scalability. ME-MLS was evaluated solving instances comprising up to 2048 tasks and 64 machines. In order to scale the dimension of the problem instances even further and tackle large-sized problem instances, the Graphics Processing Unit (GPU) architecture is considered. This line of future work has been initially tackled with gPALS: a hybrid CPU/GPU local search algorithm for efficiently tackling a single-objective heterogeneous computing scheduling problem. gPALS shows very promising results, being able to tackle instances of up to 32768 tasks and 1024 machines in reasonable execution times.
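    The Pareto-dominance bookkeeping behind such a multi-objective (makespan, energy) search can be illustrated with a small sketch. The instance data, the linear energy model, and the single-threaded local search loop below are toy assumptions for illustration only; the thesis's ME-MLS is multithreaded and handles far larger instances.

import random

random.seed(0)
tasks, machines = 8, 3
# expected time to compute task t on machine m, and the power draw of each machine
etc = [[random.randint(5, 20) for _ in range(machines)] for _ in range(tasks)]
power = [random.uniform(1.0, 2.0) for _ in range(machines)]

def evaluate(assign):
    load = [0.0] * machines
    for t, m in enumerate(assign):
        load[m] += etc[t][m]
    makespan = max(load)
    energy = sum(load[m] * power[m] for m in range(machines))
    return makespan, energy

def dominates(a, b):
    # a Pareto-dominates b: no worse in either objective and strictly better in one
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def update_archive(archive, sol, obj):
    # insert sol unless it is dominated; drop archived points that sol dominates
    if any(dominates(o, obj) for _, o in archive):
        return archive, False
    archive = [(s, o) for s, o in archive if not dominates(obj, o)]
    return archive + [(sol, obj)], True

archive = []
current = [random.randrange(machines) for _ in range(tasks)]
for _ in range(500):                 # local search: move one task to another machine
    cand = current[:]
    cand[random.randrange(tasks)] = random.randrange(machines)
    archive, improved = update_archive(archive, cand, evaluate(cand))
    if improved:
        current = cand

for sol, (mk, en) in sorted(archive, key=lambda x: x[1]):
    print(sol, "makespan:", round(mk, 1), "energy:", round(en, 1))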

    Quantum algorithms for process parallel flexible job shop scheduling

    Flexible Job Shop Scheduling is one of the most difficult optimization problems known. In addition, modern production planning and control strategies require continuous and process-parallel optimization of machine allocation and processing sequences. Therefore, this paper presents a new method for process-parallel Flexible Job Shop Scheduling using the concept of quantum-computing-based optimization. A scientific benchmark and the application to a realistic use case demonstrate the good performance and practicability of this new approach. A managerial insight shows how the approach for process-parallel flexible job shop scheduling can be integrated into existing production planning and control IT infrastructure.

    Using GRASP and GA to design resilient and cost-effective IP/MPLS networks

    The main objective of this thesis is to find good-quality solutions for representative instances of the problem of designing a resilient and low-cost IP/MPLS network to be deployed over an existing optical transport network. This research is motivated by two complementary real-world application cases, which comprise the most important commercial and academic networks of Uruguay. To achieve this goal, we performed an exhaustive analysis of existing models and technologies, and took from them elements that were contrasted with the particular requirements of our counterparts. Among these requirements, we highlight the need for solutions that are transparently implementable over a heterogeneous network environment, which limits us to using widely standardized features of the related technologies. We decided to create new models better suited to these needs. These models are intrinsically hard to solve (NP-hard), so we developed metaheuristic-based algorithms to find solutions to these real-world instances. Evolutionary Algorithms and Greedy Randomized Adaptive Search Procedures obtained the best results. As usually happens, real-world planning problems are surrounded by uncertainty. Therefore, we worked closely with our counterparts to reduce the uncertainty in the data to a set of representative cases. These cases were combined with different design strategies to obtain scenarios, which were translated into instances of the problems. Finally, the algorithms were fed with this information, and from their outcomes we derived our results and conclusions.

    Offline Learning for Sequence-based Selection Hyper-heuristics

    This thesis is concerned with finding solutions to discrete NP-hard problems. Such problems occur in a wide range of real-world applications, such as bin packing, industrial flow shop problems, determining Boolean satisfiability, the traveling salesman and vehicle routing problems, course timetabling, personnel scheduling, and the optimisation of water distribution networks. They are typically represented as optimisation problems where the goal is to find a "best" solution from a given space of feasible solutions. As no known polynomial-time algorithmic solution exists for NP-hard problems, they are usually solved by applying heuristic methods. Selection hyper-heuristics are algorithms that organise and combine a number of individual low level heuristics into a higher level framework with the objective of improving optimisation performance. Many selection hyper-heuristics employ learning algorithms in order to enhance optimisation performance by improving the selection of single heuristics, and this learning may be classified as either online or offline. This thesis presents a novel statistical framework for the offline learning of subsequences of low level heuristics in order to improve the optimisation performance of sequence-based selection hyper-heuristics. A selection hyper-heuristic is used to optimise the HyFlex set of discrete benchmark problems. The resulting sequences of low level heuristic selections and objective function values are used to generate an offline learning database of heuristic selections. The sequences in the database are broken down into subsequences, and the mathematical concept of a logarithmic return is used to discriminate between "effective" subsequences, which tend to lead to improvements in optimisation performance, and "disruptive" subsequences, which tend to lead to worsening performance. Effective subsequences are used to improve hyper-heuristic performance directly, by embedding them in a simple hyper-heuristic design, and indirectly, as the inputs to an appropriate hyper-heuristic learning algorithm. Furthermore, by comparing effective subsequences across different problem domains it is possible to investigate the potential for cross-domain learning. The results presented here demonstrate that the use of well-chosen subsequences of heuristics can lead to small, but statistically significant, improvements in optimisation performance.
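    The logarithmic-return scoring mentioned in this abstract can be sketched as follows. The logged run, the subsequence length, and the threshold at zero are illustrative assumptions rather than the thesis's exact offline-learning procedure: for a minimisation problem, a subsequence whose mean log return of the objective value is negative tends to improve solutions.

import math
from collections import defaultdict

# one logged run: the heuristic applied and the objective value after applying it,
# preceded by the run's initial objective value (minimisation)
initial = 100.0
run = [("h1", 96.0), ("h3", 97.5), ("h2", 90.0), ("h1", 88.0),
       ("h3", 91.0), ("h2", 85.0), ("h1", 84.0)]

k = 2                                           # subsequence length
objs = [initial] + [v for _, v in run]
names = [h for h, _ in run]

scores = defaultdict(list)
for i in range(len(run) - k + 1):
    sub = tuple(names[i:i + k])
    # log return of the objective across the subsequence (< 0 means improvement)
    scores[sub].append(math.log(objs[i + k] / objs[i]))

for sub, rets in sorted(scores.items(), key=lambda x: sum(x[1]) / len(x[1])):
    mean = sum(rets) / len(rets)
    label = "effective" if mean < 0 else "disruptive"
    print(sub, round(mean, 4), label)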

    Energy aware hybrid flow shop scheduling

    'Only if humanity acts quickly and resolutely can we limit global warming', conclude more than 25,000 academics in the statement of SCIENTISTS FOR FUTURE. Concern about global warming and the extinction of species has steadily increased in recent years.