
    A hybrid shifting bottleneck-tabu search heuristic for the job shop total weighted tardiness problem

    In this paper, we study the job shop scheduling problem with the objective of minimizing the total weighted tardiness. We propose a hybrid shifting bottleneck-tabu search (SB-TS) algorithm that replaces the reoptimization step of the shifting bottleneck (SB) algorithm with a tabu search (TS). From the perspective of the shifting bottleneck heuristic, the proposed tabu search optimizes the total weighted tardiness of partial schedules in which some machines are currently assumed to have infinite capacity. From the perspective of tabu search, the shifting bottleneck heuristic provides a long-term memory that helps diversify the local search. We exploit this synergy to develop a state-of-the-art algorithm for the job shop total weighted tardiness problem (JS-TWT). The computational effectiveness of the algorithm is demonstrated on standard benchmark instances from the literature.
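    As a rough illustration of the tabu search ingredient (a sketch, not the authors' SB-TS implementation), the code below runs a swap-neighborhood tabu search on a simplified single-machine total weighted tardiness subproblem; the instance data and the `tenure` and `iters` parameters are hypothetical.

```python
import random

def total_weighted_tardiness(seq, p, d, w):
    """Total weighted tardiness of a job sequence on a single machine.
    p, d, w: processing times, due dates, weights (hypothetical instance data)."""
    t, cost = 0, 0
    for j in seq:
        t += p[j]                        # job j completes at time t
        cost += w[j] * max(0, t - d[j])  # weighted tardiness of job j
    return cost

def tabu_search(p, d, w, iters=200, tenure=7):
    """Swap-neighborhood tabu search; a toy stand-in for the TS step above."""
    n = len(p)
    cur = list(range(n))
    best, best_cost = cur[:], total_weighted_tardiness(cur, p, d, w)
    tabu = {}  # swap (i, j) -> iteration until which the move is forbidden
    for it in range(iters):
        cand, cand_cost, cand_move = None, float("inf"), None
        for i in range(n - 1):
            for j in range(i + 1, n):
                if tabu.get((i, j), -1) >= it:
                    continue             # skip tabu moves
                nb = cur[:]
                nb[i], nb[j] = nb[j], nb[i]
                c = total_weighted_tardiness(nb, p, d, w)
                if c < cand_cost:
                    cand, cand_cost, cand_move = nb, c, (i, j)
        if cand is None:
            break
        cur = cand                       # accept the best non-tabu neighbor,
        tabu[cand_move] = it + tenure    # even if it worsens the cost
        if cand_cost < best_cost:
            best, best_cost = cand[:], cand_cost
    return best, best_cost

random.seed(0)
p = [random.randint(1, 9) for _ in range(8)]
d = [random.randint(5, 30) for _ in range(8)]
w = [random.randint(1, 5) for _ in range(8)]
print(tabu_search(p, d, w))
```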

    Parallel local search


    Parallel optimization algorithms for high performance computing: application to thermal systems

    The need for optimization is present in every field of engineering, and applications requiring a multidisciplinary approach in order to take a step forward are increasing. This leads to the need to solve complex optimization problems that exceed the capacity of the human brain or intuition. A standard way of proceeding is to use evolutionary algorithms, among which genetic algorithms hold a prominent place; these are characterized by their robustness and versatility, as well as their high computational cost and low convergence speed. Many optimization packages are available under free software licenses and represent the current state of the art in optimization technology. However, the ability of optimization algorithms to adapt to massively parallel computers while reaching satisfactory efficiency levels is still an open issue. Even packages suited for multilevel parallelism encounter difficulties when dealing with objective functions involving long and variable simulation times. This variability is common in Computational Fluid Dynamics and Heat Transfer (CFD & HT), nonlinear mechanics, etc., and is nowadays a dominant concern for large-scale applications. Current research on improving the performance of evolutionary algorithms is mainly focused on developing new search algorithms. Nevertheless, there is a vast body of well-performing sequential algorithms suitable for implementation on parallel computers; the gap to be covered is efficient parallelization. Moreover, advances in new search algorithms and in efficient parallelization are additive, so the enhancement of current state-of-the-art optimization software can be accelerated if both fronts are tackled simultaneously. The motivation of this Doctoral Thesis is to take a step towards the successful integration of Optimization and High Performance Computing capabilities, which has the potential to boost technological development by providing better designs, shortening product development times and minimizing the required resources. After a thorough study of the state of the art in mathematical optimization techniques, a generic mathematical optimization library has been developed, with a special focus on its application to the field of Computational Fluid Dynamics and Heat Transfer (CFD & HT). The main shortcomings of the standard parallelization strategies available for genetic algorithms and similar population-based optimization methods have then been analyzed. Computational load imbalance has been identified as the key cause of the degradation of the optimization algorithm's scalability (i.e. parallel efficiency) whenever the average makespan of a batch of individuals is greater than the average time the optimizer needs for inter-processor communication. This occurs because processors are often unable to finish evaluating their queues of individuals simultaneously and must be synchronized before the next batch of individuals is created; the computational load imbalance is thus translated into idle time on some processors. Several load balancing algorithms have been proposed and exhaustively tested; they extend to any other population-based optimization method that needs to synchronize all processors after the evaluation of each batch of individuals.
    Finally, a real-world engineering application consisting of the optimization of the refrigeration system of a power electronic device is presented as an illustrative example in which the proposed load balancing algorithms reduce the simulation time required by the optimization tool.
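    A minimal sketch of the dynamic load-balancing idea (not the thesis's library): a shared task pool hands individuals to workers one at a time, so fast workers stay busy while long evaluations finish. The `evaluate` function and its variable sleep time are hypothetical stand-ins for a CFD & HT objective function.

```python
import random
import time
from multiprocessing import Pool

def evaluate(individual):
    """Stand-in objective with a long, variable run time per individual."""
    time.sleep(random.uniform(0.01, 0.2))  # mimics variable simulation times
    return sum(x * x for x in individual)  # toy fitness value

def evaluate_indexed(item):
    """Keep the individual's index so unordered results can be matched up."""
    i, individual = item
    return i, evaluate(individual)

if __name__ == "__main__":
    random.seed(0)
    population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(32)]
    with Pool(processes=4) as pool:
        # chunksize=1 hands out individuals on demand, so idle processors keep
        # pulling work; a large fixed chunk per worker would reproduce the
        # static, imbalanced scheme whose idle time is described above.
        results = dict(pool.imap_unordered(evaluate_indexed,
                                           enumerate(population), chunksize=1))
    fitness = [results[i] for i in range(len(population))]
    print(min(fitness))
```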

    Knitting Code: Examining the Relationship between Knitting and Computational Thinking Skills Using the Nexus of Practice

    Due to the rise of careers in STEM-related fields, there is a growing need for schools to prepare people to fill these positions. One growing area of STEM is computer science/coding, so schools need to be intentional about exposing students to it. There are a variety of new tools for introducing students to this field, and one growing belief is that knitting can be used to teach computer science/coding. The goal of this study was to see whether knitting can serve as an introduction for teaching students computational skills. Knitting has historically been used to encode information, and numerous statements have been made that knitting can teach computer coding. The rationale behind this thought is that both fields have similar components, so knitting can serve to make coding more accessible to a broader audience. Suppose students who generally would not identify with computer science/coding due to perceived social norms develop an interest in knitting. In that case, they could use what they learned as a foundation for developing an interest in computer coding. This is based on Scollon's Nexus of Practice (2001), which studies how practices are linked together. This theory posits that combining different practices makes a crossover from one practice to another possible. As a result, what may not have been accessible at first due to biases or identity may become more accessible. This study focused on whether knitting can teach students computational skills and change students' identity towards computer science/coding. There is limited research on the relationship between knitting and coding. This case study attempted to determine whether knitting could teach coding. The research was conducted during two three-week summer enrichment programs. Results revealed that teaching computer coding through knitting was comparable to traditional instruction. While not necessarily better, this shows that knitting can teach computational skills and improve identity. This could be important for encouraging students who would not typically study computer science/coding to enter the field.

    Distributed Approximation Algorithms for Weighted Shortest Paths

    A distributed network is modeled by a graph having $n$ nodes (processors) and diameter $D$. We study the time complexity of approximating weighted (undirected) shortest paths on distributed networks with an $O(\log n)$ bandwidth restriction on edges (the standard synchronous CONGEST model). The question of whether approximation algorithms help speed up shortest paths (more precisely, distance computation) has been raised since at least 2004 by Elkin (SIGACT News 2004). The unweighted case of this problem is well understood, while its weighted counterpart is a fundamental problem in the area of distributed approximation algorithms and remains widely open. We present new algorithms for computing both single-source shortest paths (SSSP) and all-pairs shortest paths (APSP) in the weighted case. Our main result is an algorithm for SSSP. Previous results are the classic $O(n)$-time Bellman-Ford algorithm and an $\tilde{O}(n^{1/2+1/2k}+D)$-time $(8k\lceil \log(k+1) \rceil - 1)$-approximation algorithm, for any integer $k \geq 1$, which follows from the result of Lenzen and Patt-Shamir (STOC 2013). (Note that Lenzen and Patt-Shamir in fact solve a harder problem, and we use $\tilde{O}(\cdot)$ to hide the $\mathrm{poly}\log n$ term.) We present an $\tilde{O}(n^{1/2}D^{1/4}+D)$-time $(1+o(1))$-approximation algorithm for SSSP. This algorithm is sublinear-time as long as $D$ is sublinear, thus yielding a sublinear-time algorithm with an almost optimal solution. When $D$ is small, our running time matches the lower bound of $\tilde{\Omega}(n^{1/2}+D)$ by Das Sarma et al. (SICOMP 2012), which holds even when $D=\Theta(\log n)$, up to a $\mathrm{poly}\log n$ factor.
    Comment: Full version of STOC 201
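    For concreteness, here is a sketch of the classic $O(n)$-round distributed Bellman-Ford baseline mentioned above, simulated as synchronous message rounds; the graph encoding and the toy instance are illustrative, not code from the paper.

```python
import math

def distributed_bellman_ford(adj, source):
    """Simulated synchronous rounds; adj: {node: [(neighbor, weight), ...]}."""
    dist = {v: math.inf for v in adj}
    dist[source] = 0.0
    for _ in range(len(adj) - 1):   # at most n-1 rounds are needed
        inbox = {v: [] for v in adj}
        for u in adj:               # each node sends its current estimate
            for v, w in adj[u]:
                inbox[v].append(dist[u] + w)
        updated = False
        for v in adj:               # each node keeps the best estimate heard
            best = min(inbox[v], default=math.inf)
            if best < dist[v]:
                dist[v], updated = best, True
        if not updated:
            break                   # estimates have converged
    return dist

graph = {
    "a": [("b", 2.0), ("c", 5.0)],
    "b": [("a", 2.0), ("c", 1.0)],
    "c": [("a", 5.0), ("b", 1.0)],
}
print(distributed_bellman_ford(graph, "a"))  # {'a': 0.0, 'b': 2.0, 'c': 3.0}
```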

    Quantum Algorithms for the k-xor Problem

    The k-xor (or generalized birthday) problem is a widely studied question with many applications in cryptography. It asks to find k elements of n bits, drawn at random, such that the xor of all of them is 0. The algorithms proposed by Wagner more than fifteen years ago remain the best known classical algorithms for solving them, when disregarding logarithmic factors. In this paper we study these problems in the quantum setting, considering that the elements are created by querying a random function (or k random functions) $H : \{0,1\}^n \to \{0,1\}^n$. We consider two scenarios: in one we are able to use a limited amount of quantum memory (i.e. $O(n)$ qubits, the same as needed by Grover's search algorithm), and in the other we consider that the algorithm can use an exponential number of qubits. Our newly proposed algorithms are of general interest. In both settings, they provide the best known quantum time complexities. In particular, we are able to considerably improve the 3-xor algorithm: with limited qubits, we reach a complexity significantly better than what is currently possible for quantum collision search. Furthermore, when having access to exponential amounts of quantum memory, we can take this complexity below $O(2^{n/3})$, the well-known lower bound of quantum collision search, clearly improving the best known quantum time complexity in this setting as well. We illustrate the importance of these results with some cryptographic applications.
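    As background on the classical side, here is a sketch of Wagner's list-join idea for 4-xor, the generalized birthday technique the quantum algorithms improve on; the bit length n and the list sizes are illustrative choices, not parameters from the paper.

```python
import random
from collections import defaultdict

def join_on_low_bits(L1, L2, mask):
    """Partial xors x ^ y with (x ^ y) & mask == 0, keeping the witnesses."""
    buckets = defaultdict(list)
    for x in L1:
        buckets[x & mask].append(x)
    return [(x ^ y, (x, y)) for y in L2 for x in buckets[y & mask]]

def four_xor(n=24):
    """Find x1 ^ x2 ^ x3 ^ x4 == 0 among four lists of random n-bit values."""
    size = 1 << (n // 3)           # Wagner's 2^(n/3) list size
    lists = [[random.getrandbits(n) for _ in range(size)] for _ in range(4)]
    mask = (1 << (n // 3)) - 1     # the first join zeroes the low n/3 bits
    A = join_on_low_bits(lists[0], lists[1], mask)
    B = join_on_low_bits(lists[2], lists[3], mask)
    seen = {v: wit for v, wit in A}
    for v, wit in B:
        if v in seen:              # partial xors collide => total xor is 0
            return seen[v] + wit   # the four witnesses (x1, x2, x3, x4)
    return None

random.seed(0)
for _ in range(10):                # roughly one collision expected per trial
    witnesses = four_xor()
    if witnesses:
        x1, x2, x3, x4 = witnesses
        print(x1 ^ x2 ^ x3 ^ x4)   # prints 0
        break
```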