161 research outputs found

    Energy-aware coordination of machine scheduling and support device recharging in production systems

    Electricity generation from renewable energy sources is crucial for achieving climate targets, including greenhouse gas neutrality. Germany has made significant progress in increasing renewable energy generation. However, feed-in management actions have led to losses of renewable electricity in recent years, primarily from wind energy. These actions aim to maintain grid stability but result in excess renewable energy that goes unused. The lost electricity could have powered a large number of households and avoided CO2 emissions. Moreover, feed-in management actions incurred compensation claims of around 807 million Euros in 2021. Wind-abundant regions like Schleswig-Holstein are particularly affected by these actions, resulting in substantial losses of renewable electricity production. Expanding the power grid infrastructure is a costly and time-consuming way to avoid feed-in management actions. An alternative approach is to increase local electricity consumption during peak renewable generation periods, which can help balance electricity supply and demand and reduce feed-in management actions. The dissertation focuses on energy-aware manufacturing decision-making, exploring ways to counteract feed-in management actions by increasing local industrial consumption during renewable generation peaks. The research proposes to guide production management decisions, synchronizing a company's energy consumption profile with renewable energy availability for more environmentally friendly production and improved grid stability.
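    The core idea of synchronizing a consumption profile with renewable availability can be illustrated with a toy scheduling rule. The sketch below is not from the dissertation; the greedy assignment, the job energy demands, and the hourly surplus forecast are all illustrative assumptions.

```python
# Hypothetical illustration (not the dissertation's method): a greedy rule that places
# flexible, single-hour jobs into the hours with the highest forecast renewable surplus,
# so the plant's consumption profile follows local wind generation peaks.
def greedy_energy_aware_schedule(jobs_kwh, renewable_forecast_kwh):
    """jobs_kwh: energy demand per flexible job; renewable_forecast_kwh: surplus per hour."""
    hours_by_surplus = sorted(range(len(renewable_forecast_kwh)),
                              key=lambda h: renewable_forecast_kwh[h], reverse=True)
    schedule = {}                                  # hour -> list of job indices started then
    for job, demand in sorted(enumerate(jobs_kwh), key=lambda j: j[1], reverse=True):
        for h in hours_by_surplus:
            if renewable_forecast_kwh[h] >= demand:
                schedule.setdefault(h, []).append(job)
                renewable_forecast_kwh[h] -= demand   # consume part of that hour's surplus
                break
    return schedule

print(greedy_energy_aware_schedule([30, 50, 20], [10, 80, 60, 40]))  # -> {1: [1, 0], 2: [2]}
```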

    Modelling and Optimizing Supply Chain Integrated Production Scheduling Problems

    Globalization and advanced information technologies (e.g., the Internet of Things) have considerably impacted supply chains (SCs) by persistently forcing original equipment manufacturers (OEMs) to switch production strategies from make-to-stock (MTS) to make-to-order (MTO) to survive the competition. Generally, an OEM follows the MTS strategy for products with steady demand. In contrast, the MTO strategy operates under a pull system with irregular demand, in which received customer orders are scheduled and launched into production. Compared to MTS, MTO faces the primary challenges of ensuring timely delivery at the lowest possible cost, satisfying demands for high customization, and guaranteeing the availability of raw materials throughout the production process. These challenges are growing substantially as industrial production becomes more flexible, diversified, and customized. Moreover, making production scheduling decisions independently of the other stages of these SCs often yields sub-optimal results, creating substantial challenges to fulfilling demands on time and cost-effectively. Since adequately managing these challenges asynchronously is difficult, constructing optimization models that integrate SC decisions, such as customer requirements, the supply portfolio (supplier selection and order allocation), delivery batching decisions, and the inventory portfolio (inventory replenishment, consumption, and availability), with shop floor scheduling under deterministic and dynamic environments is essential to fulfilling customer expectations at the least possible cost. These optimization models are computationally intractable. Consequently, designing algorithms that schedule or reschedule promptly is also highly challenging for these time-sensitive, operationally integrated optimization models. Thus, this thesis focuses on modelling and optimizing SC-integrated production scheduling problems, named SC scheduling problems (SCSPs). The objective of optimizing job shop scheduling problems (JSSPs) is to ensure that the requisite resources are available when required and that their utilization is maximally efficient. Although numerous algorithms have been devised, they can become computationally exorbitant and yield sub-optimal outcomes, rendering production systems inefficient. This can stem from a variety of causes, such as an imbalance in population quality over generations, recurrent generation and evaluation of identical schedules, and permitting an under-performing method to conduct the evolutionary process. Consequently, this study designs two methods, a sequential approach (Chapter 2) and a multi-method approach (Chapter 3), to address these issues and obtain competitive optimal or near-optimal solutions for JSSPs in a single-objective setting. The devised algorithms for JSSPs optimize the workflow of each job through accurate mapping between related resources, generating better results than existing algorithms. Production scheduling cannot be accomplished precisely without simultaneously considering supply and delivery decisions and customer requirements. Thus, a few recent studies have operationally integrated SCs to accurately predict process insights for executing, monitoring, and controlling the planned production. However, these studies are limited to simple shop-floor configurations and offer little flexibility to address the MTO-based SC challenges.
Thus, this study formulates a bi-objective optimization model that integrates the supply portfolio into a flexible job shop scheduling environment with a customer-imposed delivery window to cost-effectively meet customized and on-time delivery requirements (Chapter 4). Compared to the job shop, which is limited to sequence flexibility only, the flexible job shop is advantageous due to its capacity to provide increased scheduling flexibility (both process and sequence flexibility). To optimize the model, the performance of the multi-objective particle swarm optimization algorithm has been enhanced, with the results providing decision-makers with an increased degree of flexibility, a larger number of Pareto solutions, more varied and consistent frontiers, and reasonable computation times for MTO-based SCs. Environmental sustainability has come into the spotlight owing to increasing environmental awareness and follow-up regulations. Consequently, the related factors strongly shape the supply portfolio for sustainable development, an aspect that remained unexplored in the SCSP because those criteria are primarily qualitative (e.g., green production, green product design, corporate social responsibility, and waste disposal systems). Their absence may lead to an unacceptable supply portfolio. Thus, this study overcomes the problem by integrating VIKORSORT into the proposed solution methodology of the extended SCSP. In addition, forming delivery batches of heterogeneous customer orders is challenging, as one order can cause another to be delayed. Therefore, the previous optimization model is extended by integrating supply, manufacturing, and delivery batching decisions and optimizing them concurrently in response to heterogeneous customer requirements with time window constraints, considering both economic and environmental sustainability for the supply portfolio (Chapter 5). Since the proposed optimization model is an extension of the flexible job shop, it is NP-hard and cannot be solved by conventional optimization techniques, particularly for larger instances. Therefore, a reinforcement learning-based hyper-heuristic (HH) has been designed, in which four solution-updating heuristics are intelligently guided to deliver the best possible results compared to existing algorithms. The optimization model furnishes a set of comprehensive schedules that integrate the supply portfolio, the production portfolio (work-center/machine assignment and customer order sequencing), and batching decisions. This provides numerous meaningful managerial insights and operational flexibility prior to the execution phase. Recently, SCs have been experiencing unprecedented and massive disruptions caused by abrupt outbreaks, making it difficult for OEMs to restore the disrupted demand-supply equilibrium. Hence, this study proposes a multi-portfolio (supply, production, and inventory portfolios) approach for a proactive-reactive scheme, addressing the SCSP with complex multi-level products while simultaneously including unpredictably dynamic supply, demand, and shop floor disruptions (Chapter 6). This study considers fabrication and assembly in a multi-level product structure. To effectively address this time-sensitive model based on real-time data, a Q-learning-based multi-operator differential evolution algorithm within an HH framework has been designed to handle disruptive events and generate a timely rescheduling plan.
The numerical results and analyses demonstrate the proposed model's capability to effectively address single and multiple disruptions, thus providing significant managerial insights and ensuring SC resilience.
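    Both the reinforcement learning-based hyper-heuristic and the Q-learning-guided operator selection described above follow the same basic pattern: learn which low-level heuristic (or operator) to apply next from the improvement it yields. The following is a minimal single-state sketch of that pattern, not the thesis implementation; the cost function and the list of low-level heuristics are placeholders.

```python
# Illustrative sketch (not the thesis implementation): an epsilon-greedy Q-learning
# hyper-heuristic that picks one of several low-level solution-updating heuristics
# per iteration and rewards it by the improvement it achieves.
import random

def q_learning_hyper_heuristic(initial_solution, cost, heuristics,
                               iterations=1000, alpha=0.1, gamma=0.9, epsilon=0.2):
    """Epsilon-greedy Q-learning over which low-level heuristic to apply next."""
    q = [0.0] * len(heuristics)          # one Q-value per low-level heuristic (single state)
    best = current = initial_solution
    for _ in range(iterations):
        # Explore a random heuristic with probability epsilon, otherwise exploit the best Q-value.
        a = random.randrange(len(heuristics)) if random.random() < epsilon \
            else max(range(len(heuristics)), key=lambda i: q[i])
        candidate = heuristics[a](current)
        reward = cost(current) - cost(candidate)           # positive if the move improved the solution
        q[a] += alpha * (reward + gamma * max(q) - q[a])   # standard Q-learning update
        if reward > 0:                                     # accept only improving moves
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best
```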

    A machine learning enhanced multi-start heuristic to efficiently solve a serial-batch scheduling problem

    Serial-batch scheduling problems are widespread in several industries (e.g., the metal processing industry or industrial 3D printing) and consist of two subproblems that must be solved simultaneously: the grouping of jobs into batches and the sequencing of the created batches. The problem's NP-hard nature prevents optimally solving large-scale instances; therefore, heuristic solution methods are a common choice to tackle the problem effectively. One of the best-performing heuristics in the literature is the ATCS–BATCS(β) heuristic, which has three control parameters. To achieve good solution quality, the most appropriate parameters must be determined a priori or within a multi-start approach. As multi-start approaches performing (full) grid searches on the parameters lack efficiency, we propose a machine learning enhanced grid search. To that end, Artificial Neural Networks are used to predict the performance of the heuristic given a specific problem instance and specific heuristic parameters. Based on these predictions, we perform a grid search on a smaller set of the most promising heuristic parameters. The comparison to the ATCS–BATCS(β) heuristics shows that our approach reaches a very competitive mean solution quality that is only 2.5% lower, while being computationally much more efficient: computation times are reduced by 89.2% on average.
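    A minimal sketch of the surrogate-guided grid search idea appears below. It assumes hypothetical evaluate_heuristic and instance_features functions and an illustrative parameter grid; it is not the authors' implementation, only an outline of how an ANN prediction can prune a full grid search down to a few promising parameter sets.

```python
# Hypothetical sketch: ANN-guided grid search over three heuristic control parameters.
# evaluate_heuristic() and instance_features() are placeholders, not from the paper.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor

# Candidate values for three control parameters (k1, k2, beta) - illustrative only.
GRID = list(itertools.product(np.linspace(0.1, 5.0, 10),   # k1
                              np.linspace(0.1, 5.0, 10),   # k2
                              np.linspace(0.0, 1.0, 10)))  # beta

def train_surrogate(train_instances, evaluate_heuristic, instance_features):
    """Fit an ANN mapping (instance features, parameters) -> heuristic objective value."""
    X, y = [], []
    for inst in train_instances:
        feats = instance_features(inst)
        for params in GRID:
            X.append(np.concatenate([feats, params]))
            y.append(evaluate_heuristic(inst, params))     # expensive offline evaluation
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000)
    model.fit(np.array(X), np.array(y))
    return model

def solve(inst, model, evaluate_heuristic, instance_features, top_k=10):
    """Run the full heuristic only on the top_k parameter sets predicted to perform best."""
    feats = instance_features(inst)
    X = np.array([np.concatenate([feats, p]) for p in GRID])
    ranked = np.argsort(model.predict(X))                  # lower predicted objective = better
    candidates = [GRID[i] for i in ranked[:top_k]]
    return min((evaluate_heuristic(inst, p), p) for p in candidates)
```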

    20. ASIM Fachtagung Simulation in Produktion und Logistik 2023


    Operational Research: Methods and Applications

    Throughout its history, Operational Research has evolved to include a variety of methods, models and algorithms that have been applied to a diverse and wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first aims to summarise the up-to-date knowledge and provide an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion. It should be used as a point of reference or first port of call for a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order. The authors dedicate this paper to the 2023 Turkey/Syria earthquake victims. We sincerely hope that advances in OR will play a role towards minimising the pain and suffering caused by this and future catastrophes.

    Platform-Driven Crowdsourced Manufacturing for Manufacturing as a Service

    Platform-driven crowdsourced manufacturing is an emerging manufacturing paradigm that instantiates the adoption of the open business model in the context of achieving Manufacturing-as-a-Service (MaaS). It has attracted attention from both industry and academia as a powerful way of searching extensively for manufacturing solutions in the smart manufacturing era. In this regard, this work examines the origination and evolution of the open business model and highlights the trend towards platform-driven crowdsourced manufacturing as a solution for MaaS. Platform-driven crowdsourced manufacturing provides a full value capturing, creation, and delivery approach, which is fulfilled by the cooperation among manufacturers, open innovators, and platforms. A platform-driven crowdsourced manufacturing workflow is proposed to organize these three decision agents by specifying their domains and interactions, following a functional, behavioral, and structural mapping model. A MaaS reference model is proposed to outline the critical functions and their inter-relationships. A series of quantitative, qualitative, and computational solutions is developed to fulfill the outlined functions. The case studies demonstrate the proposed methodologies and can pave the way towards a service-oriented product fulfillment process. This dissertation initially proposes a manufacturing theory and decision models for integrating manufacturer crowds through a cyber platform. It presents the elementary conceptual framework based on stakeholder analysis, including a dichotomy analysis of industrial applicability, decision agent identification, the workflow, and a holistic framework of platform-driven crowdsourced manufacturing. The three stakeholders require three essential service fields, and their cooperation requires an information service system as a kernel. These essential functions include contracting evaluation services for open innovators, task execution services for manufacturers, and management services for platforms. This research tackles these challenges to provide a technology implementation roadmap and transition guidebook for industries moving towards crowdsourcing.

    Online learning on the programmable dataplane

    This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, which are currently dominated by the use of hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, and their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control that suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network. To justify this argument, I advance the state of the art in data-driven defence of networks, novel dataplane-friendly online reinforcement learning algorithms, and in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits to state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
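    To illustrate the fixed-point arithmetic idea mentioned above, the sketch below performs a Q-value update entirely in integers, as one might on a dataplane target without floating-point support. The scaling constants and the Q8 format are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of a fixed-point Q-value update: all quantities are integers,
# as on hardware without floating-point support. Constants are illustrative only.
FRAC_BITS = 8                          # Q8 fixed point: value = int / 256
ALPHA = int(0.1 * (1 << FRAC_BITS))    # learning rate 0.1 in fixed point
GAMMA = int(0.9 * (1 << FRAC_BITS))    # discount factor 0.9 in fixed point

def fx_mul(a, b):
    """Multiply two fixed-point numbers and rescale back to FRAC_BITS."""
    return (a * b) >> FRAC_BITS

def q_update(q_sa, reward, max_q_next):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), all in fixed point."""
    td_error = reward + fx_mul(GAMMA, max_q_next) - q_sa
    return q_sa + fx_mul(ALPHA, td_error)

# Example: a reward of 1.0 expressed in fixed point moves Q from 0 towards 1 by ~alpha.
print(q_update(q_sa=0, reward=1 << FRAC_BITS, max_q_next=0))  # -> 25 (~0.1)
```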

    Developing Logic Synthesis Flow for NVDLA IP

    Modern digital devices require high computing performance; thus, markets have a huge demand for SoCs. The most powerful SoCs are implemented as ASIC chips, since this is the most cost-efficient technology when production volumes are high. An important step in the ASIC design process is logic synthesis: using a dedicated software tool, it translates RTL code into a gate-level netlist. The logic synthesis process is executed multiple times alongside RTL code development to meet the desired specifications for the chip. This thesis project used the NVDLA IP as a use case for executing logic synthesis. NVDLA is an open-source deep learning accelerator developed by NVIDIA. The design is able to execute CNNs efficiently. Each component in the NVDLA can be configured independently, which makes it flexible and cost-effective. The NVDLA software ecosystem covers an extensive set of software features. NVDLA is divided into five partitions according to their functionality, and each partition is an individual top-level synthesis hierarchy. The target of this thesis is to develop a logic synthesis flow for NVDLA in the company design environment. This was achieved by using the NVDLA design environment, the company-internal memory wrapper, and the Synopsys Design Compiler and IC Compiler 2 tools to execute logic synthesis for a TSMC 7 nm standard cell technology. All RTL code and scripts used were downloaded from the NVDLA GitHub page. The memory wrapper was created with the company's memory wrapper tool; it connects the NVDLA design to the RAM instances. The Design Compiler tool was used to generate the initial netlist for the NVDLA partitions. The IC Compiler 2 tool was used to create an individual floorplan for each partition. The generated DEF file was then used for a second synthesis pass to obtain the final logic synthesis results. The results demonstrate that the company design environment can be used to run synthesis for open-source IP blocks. Furthermore, the developed flow provides a platform for exploiting different kinds of open-source IP in an industrial development environment, since it can quickly generate synthesis results for a 7 nm standard cell technology.
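    The two-pass flow described above (Design Compiler for the initial netlist, IC Compiler 2 for the floorplan, then a second synthesis pass guided by the exported DEF) could be orchestrated per partition roughly as sketched below. The partition names, script paths, and Tcl script contents are assumptions for illustration only; the dc_shell/icc2_shell batch invocation style reflects common Synopsys usage, not the thesis scripts themselves.

```python
# Hypothetical orchestration of the two-pass synthesis flow; partition names and
# script paths are illustrative, not taken from the thesis or the NVDLA release.
import subprocess

PARTITIONS = ["partition_o", "partition_c", "partition_m", "partition_a", "partition_p"]

def run(cmd):
    """Run one tool invocation and fail loudly on a non-zero exit code."""
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

for part in PARTITIONS:
    # First pass: Design Compiler produces the initial gate-level netlist.
    run(["dc_shell", "-f", f"scripts/{part}_syn.tcl"])
    # IC Compiler 2 creates the partition floorplan and writes out a DEF file.
    run(["icc2_shell", "-f", f"scripts/{part}_floorplan.tcl"])
    # Second pass: synthesis is re-run with the DEF as physical guidance for final results.
    run(["dc_shell", "-f", f"scripts/{part}_syn_def.tcl"])
```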