168 research outputs found

    Serial-batch scheduling – the special case of laser-cutting machines

    The dissertation deals with a problem in the field of short-term production planning, namely the scheduling of laser-cutting machines. The decisions to be made are the grouping of production orders into batches (batching) and the sequencing of these order groups on one or more machines (scheduling). This problem is known in the literature as the "batch scheduling problem" and, owing to the interdependencies between the batching and scheduling decisions, belongs to the class of combinatorial optimization problems. The concepts and methods used come mainly from production planning, operations research, and machine learning.
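    To make the batching/scheduling interdependency concrete, here is a minimal sketch (a hypothetical illustration, not taken from the dissertation; the setup time and the cost model are assumptions): on a serial-batch machine, jobs grouped into one batch share a single setup, but every job in the batch completes only when the whole batch finishes, so the batching decision changes which sequences are good.

    ```python
    # Hypothetical serial-batch cost model: each batch incurs one shared setup,
    # and a job completes when its whole batch finishes (batch availability).

    SETUP = 2.0  # assumed setup time per batch

    def total_completion_time(batches):
        """Sum of job completion times for a given batching and batch sequence.

        batches: ordered list of batches, each a list of job processing times.
        """
        t, total = 0.0, 0.0
        for batch in batches:
            t += SETUP + sum(batch)      # machine is busy until the batch ends
            total += t * len(batch)      # every job in the batch completes at t
        return total

    # Merging all jobs saves setups but delays the short jobs:
    print(total_completion_time([[1, 1, 5]]))    # one batch:   3 * 9      = 27.0
    print(total_completion_time([[1, 1], [5]]))  # two batches: 2 * 4 + 11 = 19.0
    ```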

    Advances and Novel Approaches in Discrete Optimization

    Discrete optimization is an important area of applied mathematics with a broad spectrum of applications in many fields. This book results from a Special Issue of the journal Mathematics entitled ‘Advances and Novel Approaches in Discrete Optimization’. It contains 17 articles, selected from 43 submitted papers after a thorough refereeing process, covering a broad range of subjects. Among other topics, it includes seven articles dealing with scheduling problems, e.g., online scheduling, batching, dual and inverse scheduling problems, and scheduling under uncertainty. Other subjects are graphs and applications, evacuation planning, the max-cut problem, capacitated lot-sizing, and packing algorithms.

    Production Scheduling

    Generally speaking, scheduling is the procedure of efficiently mapping a set of tasks or jobs (the studied objects) to a set of target resources. More specifically, as part of a larger planning and scheduling process, production scheduling is essential for the proper functioning of a manufacturing enterprise. This book presents ten chapters divided into five sections. Section 1 discusses rescheduling strategies, policies, and methods for production scheduling. Section 2 presents two chapters on flow shop scheduling. Section 3 describes heuristic and metaheuristic methods for treating the scheduling problem efficiently. Section 4 presents two test cases: the first uses simulation, while the second shows a real implementation of a production scheduling system. Finally, Section 5 presents some modeling strategies for building production scheduling systems. This book will be of interest to those working in the decision-making branches of production, in various operational research areas, and in computational methods design. People from diverse backgrounds, ranging from academia and research to industry, can take advantage of this volume.
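    As a flavor of the heuristic methods such books survey (a generic textbook heuristic, not a method from any particular chapter), longest-processing-time (LPT) list scheduling maps jobs to identical parallel machines by always placing the next-longest job on the currently least-loaded machine:

    ```python
    import heapq

    def lpt_makespan(jobs, m):
        """LPT list scheduling: assign jobs (processing times) to m identical
        machines, longest job first, always onto the least-loaded machine."""
        loads = [0.0] * m                # min-heap of current machine loads
        for p in sorted(jobs, reverse=True):
            heapq.heappush(loads, heapq.heappop(loads) + p)
        return max(loads)                # makespan of the resulting schedule

    print(lpt_makespan([7, 5, 4, 3, 3, 2], m=2))  # 12.0 (optimal here)
    ```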

    Optimization Models and Approximate Algorithms for the Aerial Refueling Scheduling and Rescheduling Problems

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for fighter aircraft (jobs) on multiple tankers (machines) so as to minimize the total weighted tardiness. ARSP can be modeled as parallel machine scheduling with release times and due-date-to-deadline windows: the jobs have different release times, due dates, and windows between the refueling due date and a deadline to return without refueling. The Aerial Refueling Rescheduling Problem (ARRP), on the other hand, can be defined as updating the existing AR schedule after it is disrupted by job-related events, including the arrival of new aircraft, the departure of existing aircraft, and changes in aircraft priorities. ARRP is formulated as a multiobjective optimization problem that minimizes both the total weighted tardiness (schedule quality) and schedule instability. Both ARSP and ARRP are formulated as mixed integer programming models. The objective function in ARSP is a piecewise tardiness cost that takes into account due-date-to-deadline windows and job priorities. Since ARSP is NP-hard, four approximate algorithms are proposed to obtain solutions in reasonable computational times, namely (1) the apparent piecewise tardiness cost with release time rule (APTCR), (2) simulated annealing starting from a random solution (SArandom), (3) SA improving the initial solution constructed by APTCR (SAAPTCR), and (4) the Metaheuristic for Randomized Priority Search (MetaRaPS). Additionally, five regeneration and partial repair algorithms (MetaRE, BestINSERT, SEPRE, LSHIFT, and SHUFFLE) were developed for ARRP to update the current schedule instantly at the disruption time. The proposed heuristic algorithms are tested in terms of solution quality and CPU time through computational experiments with randomly generated data representing AR operations and disruptions. The effectiveness of the scheduling and rescheduling algorithms is compared to optimal solutions for problems with up to 12 jobs, and the algorithms are compared to each other for larger problems with up to 60 jobs. The results show that APTCR is more likely to outperform SArandom as the problem size increases, although it performs significantly worse than SA in terms of deviation from the optimal solution for small problems. Moreover, the CPU time performance of APTCR is significantly better than that of SA in both cases. MetaRaPS is more likely to outperform SAAPTCR in terms of average error from optimal solutions for both small and large problems, and results for small problems show that MetaRaPS is more robust than SAAPTCR. However, the CPU time performance of SA is significantly better than that of MetaRaPS in both cases. ARRP experiments were conducted with various values of the objective weighting factor for extended analysis. In the job arrival case, MetaRE and BestINSERT performed significantly better than SEPRE in terms of average relative error for small problems. In the case of job priority disruptions, there is no significant difference between the MetaRE, BestINSERT, and SHUFFLE algorithms. MetaRE performed significantly better than LSHIFT in repairing job departure disruptions and is significantly superior to the BestINSERT algorithm in terms of both relative error and computational time for large problems.
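    The piecewise tardiness objective can be pictured as follows (a plausible reading of the abstract, not the authors' exact formulation; the penalty constant is an assumption): a job incurs no cost up to its due date, weighted tardiness inside the due-date-to-deadline window, and a fixed high penalty once the deadline to return without refueling is passed.

    ```python
    # Assumed piecewise tardiness cost for one job: zero before the due date,
    # weighted tardiness inside the due-date-to-deadline window, then a large
    # constant penalty once the deadline is violated.

    def piecewise_tardiness(C, d, D, w, big_M=1000.0):
        """Cost of completing a job at time C with due date d, deadline D, weight w."""
        if C <= d:
            return 0.0                   # refueled on time
        if C <= D:
            return w * (C - d)           # late, but within the window
        return w * (D - d) + big_M       # deadline missed: fixed high penalty

    # Example: due at t=10, deadline t=15, weight 2
    for C in (8, 12, 20):
        print(C, piecewise_tardiness(C, d=10, D=15, w=2))  # 0.0, 4.0, 1010.0
    ```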

    PiCo: A Domain-Specific Language for Data Analytics Pipelines

    In the world of Big Data analytics, there is a series of tools aiming at simplifying the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data, and execution models (for which only informal, and often confusing, semantics is generally provided), all share a common underlying model, namely the Dataflow model. Using this model as a starting point, it is possible to categorize and analyze almost all aspects of Big Data analytics tools from a high-level perspective. This analysis can be considered a first step toward a formal model to be exploited in the design of a (new) framework for Big Data analytics. By putting clear separations between all levels of abstraction (i.e., from the runtime to the user API), it becomes easier for a programmer or software designer to avoid mixing low-level with high-level aspects, as often happens in state-of-the-art Big Data analytics frameworks. From the user-level perspective, we think that a clear and simple semantics is preferable, together with a strong separation of concerns. For this reason, we use the Dataflow model as a starting point to build a programming environment with a simplified programming model implemented as a Domain-Specific Language that sits on top of a stack of layers forming a prototypical framework for Big Data analytics. The contribution of this thesis is twofold: first, we show that the proposed model is (at least) as general as existing batch and streaming frameworks (e.g., Spark, Flink, Storm, Google Dataflow), thus making it easier to understand high-level data-processing applications written in such frameworks. As a result of this analysis, we provide a layered model that can represent tools and applications following the Dataflow paradigm, and we show how the analyzed tools fit into each level. Second, we propose a programming environment based on this layered model in the form of a Domain-Specific Language (DSL) for processing data collections, called PiCo (Pipeline Composition). The main entity of this programming model is the Pipeline, basically a DAG-composition of processing elements. The model is intended to give the user a single interface for both stream and batch processing, completely hiding data management and focusing only on operations, which are represented by Pipeline stages. Our DSL will be built on top of the FastFlow library, exploiting both shared-memory and distributed parallelism, and implemented in C++11/14 with the aim of porting C++ into the Big Data world.
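    PiCo itself is designed as a C++11/14 DSL on top of FastFlow; the following hypothetical Python sketch only illustrates the central idea described above, a Pipeline as a composition of processing stages whose interface is the same for batch and stream inputs:

    ```python
    # Hypothetical analogue (not PiCo's actual C++ API) of a Pipeline as a
    # linear DAG-composition of stages applied uniformly to any iterable,
    # whether a finite batch or an unbounded stream.

    class Pipeline:
        def __init__(self, *stages):
            self.stages = stages         # ordered processing elements

        def then(self, stage):
            return Pipeline(*self.stages, stage)   # compose a longer pipeline

        def run(self, items):
            """Apply each stage in order; lazy, so batch and stream look alike."""
            for stage in self.stages:
                items = stage(items)
            return items

    tokenize = lambda lines: (w for line in lines for w in line.split())
    lower    = lambda words: (w.lower() for w in words)

    p = Pipeline(tokenize).then(lower)
    print(list(p.run(["Big Data", "Dataflow MODEL"])))
    # ['big', 'data', 'dataflow', 'model']
    ```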

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Complex Event Processing as a Service in Multi-Cloud Environments

    Advisors: Luiz Fernando Bittencourt, Miriam Akemi Manabe Capretz. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: The rise of mobile technologies and the Internet of Things, combined with advances in Web technologies, has created a new Big Data world in which the volume and velocity of data generation have reached an unprecedented scale. As a technology created to process continuous streams of data, Complex Event Processing (CEP) has often been associated with Big Data and used as a tool to obtain real-time insights. However, despite this recent surge of interest, the CEP market is still dominated by solutions that are either costly and inflexible or too low-level and hard to operate. To address these problems, this research proposes the creation of a CEP system that can be offered as a service and used over the Internet. Such a CEP as a Service (CEPaaS) system would give its users CEP functionalities combined with the advantages of the services model, such as no up-front investment and low maintenance cost. Nevertheless, creating such a service involves challenges that are not addressed by current CEP systems. This research proposes solutions for three open problems that exist in this context. First, to address the problem of understanding and reusing existing CEP management procedures, this research introduces the Attributed Graph Rewriting for Complex Event Processing Management (AGeCEP) formalism as a technology- and language-agnostic representation of queries and their reconfigurations. Second, to address the problem of evaluating CEP query management and processing strategies, this research introduces CEPSim, a simulator of cloud-based CEP systems. Finally, this research also introduces a CEPaaS system based on a multi-cloud architecture, container management systems, and an AGeCEP-based multi-tenant design. To demonstrate its feasibility, AGeCEP was used to design an autonomic manager and a selected set of self-management policies. Moreover, CEPSim was thoroughly evaluated in experiments showing that it can simulate existing systems accurately and with low execution overhead. Finally, additional experiments validated the CEPaaS system and demonstrated that it achieves the goal of offering CEP functionalities as a scalable and fault-tolerant service. Together, these results confirm that this research significantly advances the CEP state of the art and provides novel tools and methodologies that can be applied to CEP research. Doctorate in Computer Science. Grant 140920/2012-9, CNPq.
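    For readers unfamiliar with CEP, the following hypothetical sketch (not the AGeCEP formalism or CEPSim; the event schema and thresholds are assumptions) shows the kind of windowed pattern query a CEP engine evaluates continuously over a stream: flag any user with three or more "failure" events within a 60-second window.

    ```python
    # Toy CEP-style windowed pattern query over an event stream (assumed schema):
    # alert on users with >= threshold "failure" events inside a sliding window.

    from collections import defaultdict

    def detect(events, window=60, threshold=3):
        """events: iterable of (timestamp, user, kind); yields (timestamp, user) alerts."""
        failures = defaultdict(list)     # user -> recent failure timestamps
        for ts, user, kind in events:
            if kind == "failure":
                # keep only failures still inside the sliding window
                failures[user] = [t for t in failures[user] if ts - t <= window]
                failures[user].append(ts)
                if len(failures[user]) >= threshold:
                    yield ts, user

    stream = [(0, "ana", "failure"), (10, "ana", "failure"),
              (65, "ana", "failure"), (70, "ana", "failure"), (75, "ana", "failure")]
    print(list(detect(stream)))          # [(70, 'ana'), (75, 'ana')]
    ```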