
    Production Scheduling

    Generally speaking, scheduling is the procedure of efficiently mapping a set of tasks or jobs (the studied objects) to a set of target resources. More specifically, as part of a larger planning and scheduling process, production scheduling is essential for the proper functioning of a manufacturing enterprise. This book presents ten chapters divided into five sections. Section 1 discusses rescheduling strategies, policies, and methods for production scheduling. Section 2 presents two chapters about flow shop scheduling. Section 3 describes heuristic and metaheuristic methods for treating the scheduling problem efficiently. In addition, two test cases are presented in Section 4: the first uses simulation, while the second shows a real implementation of a production scheduling system. Finally, Section 5 presents some modeling strategies for building production scheduling systems. This book will be of interest to those working in the decision-making branches of production, in various operational research areas, and in computational methods design. Readers from diverse backgrounds, ranging from academia and research to industry, can take advantage of this volume.
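
    To make the opening definition concrete, the following minimal Python sketch maps a set of jobs to a set of machines using a simple earliest-available-machine rule. The job durations, the machine count and the greedy rule are illustrative assumptions for this listing only and are not taken from the book.

```python
# Minimal sketch of scheduling as mapping jobs to resources:
# assign each job to the machine that becomes available first.
# Job durations and machine count are illustrative assumptions.

def greedy_schedule(durations, num_machines):
    """Return (job, machine, start, end) tuples under an earliest-available rule."""
    available = [0.0] * num_machines      # next free time per machine
    schedule = []
    for job, dur in enumerate(durations):
        m = min(range(num_machines), key=lambda i: available[i])
        start = available[m]
        available[m] = start + dur
        schedule.append((job, m, start, start + dur))
    return schedule

if __name__ == "__main__":
    for row in greedy_schedule([4, 2, 7, 3, 5], num_machines=2):
        print("job %d -> machine %d: [%g, %g)" % row)
```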

    Models for Improvement Management and Operational Performance

    Achieving high levels of performance in the manufacturing environment requires an increase in the speed, quality and reliability of existing technologies. This is inherently related to the need to develop adequate process monitoring and modelling/simulation approaches, along with innovative optimization and maintenance strategies. The purpose of this dissertation is to provide a framework that can be used as a tool by decision makers when evaluating and controlling the performance of a system. To achieve this, a structured conceptual model should be developed, which should not be tied to any particular model. The idea behind this framework is to assess information obtained during production and use it to generate relevant key performance indicators (KPIs) for monitoring and evaluating the system's performance. This information should then be fed into a decision support system (DSS) that can provide suggestions or even actively influence simulation parameters to apply the identified improvement measures. Finally, the proposed framework is to be applied in a case study to evaluate its relevance to the improvement of manufacturing operations in a real-world context.
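
    As an illustration of the kind of KPI generation the framework describes, the sketch below derives OEE-style indicators from basic production counters. The specific indicators (availability, performance, quality, OEE) and the input figures are common textbook examples, not the dissertation's own KPI set or DSS logic.

```python
# Illustrative KPI calculation from shop-floor counters; the OEE-style
# indicators below are common examples, not the dissertation's own set.

def kpis(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return {
        "availability": availability,
        "performance": performance,
        "quality": quality,
        "oee": availability * performance * quality,
    }

print(kpis(planned_time=480, run_time=430, ideal_cycle_time=1.0,
           total_count=400, good_count=388))
```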

    Adaptive Order Dispatching based on Reinforcement Learning: Application in a Complex Job Shop in the Semiconductor Industry

    Driven by market demands, today's production systems tend toward ever smaller lot sizes, higher product variety, and greater complexity of the material flow systems. These developments call existing production control methods into question. In the course of digitalization, data-based machine learning algorithms offer an alternative approach to optimizing production processes. Current research results show the high performance of reinforcement learning (RL) methods across a broad range of applications. In the field of production control, however, only a few authors have addressed them so far. A comprehensive investigation of different RL approaches, as well as an application in practice, has not yet been carried out. Among the tasks of production planning and control, order dispatching ensures high performance and flexibility of production processes in order to achieve high capacity utilization and short lead times. Motivated by complex job shop systems such as those found in the semiconductor industry, this work closes the research gap and addresses the application of RL for adaptive order dispatching. Incorporating real system data enables a more accurate capture of system behavior than static heuristics or mathematical optimization methods. In addition, manual effort is reduced by drawing on the inference capabilities of RL. The presented methodology focuses on the modeling and implementation of RL agents as the dispatching decision unit. Known challenges of RL modeling with respect to state, action and reward function are investigated. The modeling alternatives are analyzed on the basis of two real production scenarios of a semiconductor manufacturer. The results show that RL agents can learn adaptive control strategies and outperform existing rule-based benchmark heuristics. Extending the state representation clearly improves performance when it is related to the reward objectives. The reward can be designed so that it enables the optimization of multiple objectives. Finally, specific RL agent configurations not only achieve high performance in one scenario but also exhibit robustness to changing system properties. The research thus makes an essential contribution toward self-optimizing and autonomous production systems. Production engineers must assess the potential of data-based learning methods in order to remain competitive in terms of flexibility while keeping the effort for designing, operating and monitoring production control systems in a reasonable balance.
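
    A minimal, hedged sketch of an RL-based dispatching decision unit is given below: a tabular Q-learning agent that chooses among a few dispatching rules. The state encoding, the action set of dispatching rules and the reward are simplified placeholders and do not reproduce the agent design, scenarios or data used in the thesis.

```python
# Minimal tabular Q-learning dispatcher sketch. State, actions and reward
# are simplified placeholders, not the agent design used in the thesis.
import random
from collections import defaultdict

ACTIONS = ["FIFO", "shortest_processing_time", "earliest_due_date"]

class Dispatcher:
    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)        # Q-values for (state, action) pairs
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        """Epsilon-greedy choice of a dispatching rule for the current state."""
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update after observing the reward."""
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Usage: at each dispatching decision the environment would supply a
# discretized state (e.g. a queue-length bucket) and a reward such as the
# negative waiting time of the dispatched lot.
```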

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, which is achieved by replacing more and more dedicated, single-function hardware with software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms comprising one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints, and fulfilling these timing constraints requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool for making the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates), and it can guarantee access to other resources without the need for arbitration by other protocols.

    Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing. To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After being admitted to a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the harmful effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor running at a speed proportional to its reserved processor supply. Three effects disturb this view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues, and our solutions cover the system design from component requirements to run-time allocation.

    Firstly, we present a novel scheduling method that enables us to integrate the component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of whether the supply of processor time is continuous or discontinuous. Using our method, the component executes on a virtual platform and merely experiences a processor speed that differs from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting the deadline constraints of tasks on a uni-processor platform. For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of this component, and we compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside them, regardless of their scheduler or the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol being used in an HSF; components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms to confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for the confinement of temporal faults, by means of experiments within an HSF-enabled real-time operating system.
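
    The "virtual platform" view can be illustrated with the standard periodic resource model from the HSF literature: a component reserved a budget Q every period P is guaranteed at least a pessimistic linear lower bound of processor supply in any time window. The sketch below implements only that textbook bound and a crude schedulability check; it is an illustration of the reasoning, not the analysis developed in this thesis.

```python
# Linear supply bound function of the periodic resource model (budget Q
# every period P): a pessimistic lower bound on the processor time a
# reserved component receives in any interval of length t. Standard model
# from the HSF literature, used here only to illustrate "virtual platform"
# reasoning; the thesis's own analysis is more detailed.

def linear_sbf(t, period, budget):
    """Minimum guaranteed supply in any window of length t (linear bound)."""
    assert 0 < budget <= period
    return max(0.0, (budget / period) * (t - 2 * (period - budget)))

def schedulable(wcet, deadline, period, budget):
    """Crude check: does the guaranteed supply by the deadline cover the WCET?"""
    return linear_sbf(deadline, period, budget) >= wcet

print(linear_sbf(20, period=10, budget=4))   # guaranteed supply in 20 time units
print(schedulable(wcet=5, deadline=20, period=10, budget=4))
```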

    Makespan Minimization in Re-entrant Permutation Flow Shops

    Re-entrant permutation flow shop problems occur in practical applications such as wafer manufacturing, paint shops, mold-and-die processes and the textile industry. A re-entrant material flow means that production jobs need to visit at least one workstation multiple times. A comprehensive review gives an overview of the literature on re-entrant scheduling. The influence of missing operations has received little attention so far, and splitting jobs into sublots had not previously been examined in re-entrant permutation flow shops. The computational complexity of makespan minimization in re-entrant permutation flow shop problems requires heuristic solution approaches for large problem sizes. The problem offers promising structural properties for the application of a variable neighborhood search because of the repeated processing of jobs on several machines. Furthermore, the different characteristics of lot streaming and their impact on the makespan of a schedule are examined in this thesis, and the heuristic solution methods are adjusted to handle the problem's extension.
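
    To show what makespan evaluation looks like for a re-entrant permutation flow shop, the sketch below computes the makespan of a fixed job permutation on a re-entrant machine route under one simple convention: each machine processes its operations pass by pass, and within a pass in the permutation order. The route, the processing times and this sequencing convention are illustrative assumptions and may differ from the model used in the thesis.

```python
# Makespan evaluation for a re-entrant permutation flow shop under one
# simple convention: every machine processes its operations pass by pass,
# and within a pass in the given job permutation. Route and processing
# times are illustrative.

def makespan(perm, route, proc):
    """perm: job order; route: machine index per stage (re-entries allowed);
    proc[j][s]: processing time of job j at stage s."""
    num_stages = len(route)
    machine_free = {}                                  # next free time per machine
    finish = {j: [0.0] * num_stages for j in perm}
    for s, m in enumerate(route):
        for j in perm:
            ready = finish[j][s - 1] if s > 0 else 0.0
            start = max(ready, machine_free.get(m, 0.0))
            finish[j][s] = start + proc[j][s]
            machine_free[m] = finish[j][s]
    return max(finish[j][num_stages - 1] for j in perm)

# Three jobs, route M0 -> M1 -> M0 -> M1 (each machine visited twice).
proc = {0: [2, 3, 2, 1], 1: [4, 1, 3, 2], 2: [3, 2, 2, 2]}
print(makespan(perm=[0, 1, 2], route=[0, 1, 0, 1], proc=proc))
```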

    Secure Virtualization of Latency-Constrained Systems

    Virtualization is a mature technology in server and desktop environments, where multiple systems are consolidated onto a single physical hardware platform, increasing the utilization of today's multi-core systems as well as saving resources such as energy, space and cost compared to multiple single systems. A look at embedded environments reveals that many systems contain multiple separate computing systems, with requirements for real-time behavior and isolation. For example, modern high-comfort cars use up to a hundred embedded computing systems. Consolidating such diverse configurations promises to save resources such as energy and weight. In my work I propose a secure software architecture that allows consolidating multiple embedded software systems with timing constraints. The architecture is built on a microkernel-based operating system that supports a variety of different virtualization approaches through a generic interface, supporting hardware-assisted virtualization and paravirtualization as well as multiple architectures. Studying guest systems with latency constraints with regard to virtualization showed that standard techniques such as high-frequency time-slicing are not a viable approach. Generally, guest systems are a combination of best-effort and real-time work and thus form a mixed-criticality system. Further analysis showed that such systems need to export relevant internal scheduling information to the hypervisor to support multiple guests with latency constraints. I propose a mechanism for exporting these relevant events that is secure, flexible, performant and easy to use. The thesis concludes with an evaluation covering the virtualization approach on the ARM and x86 architectures and two guest operating systems, Linux and FreeRTOS, as well as an evaluation of the export mechanism.
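
    As a purely conceptual illustration of exporting scheduling information to the hypervisor (not the interface proposed in the thesis), the sketch below lets each guest report the deadline of its most urgent runnable task; the host then picks the virtual CPU with the earliest exported deadline and falls back to best-effort guests otherwise. The guest names and the selection rule are invented for the example.

```python
# Conceptual sketch (not the thesis's actual interface): guests export the
# deadline of their most urgent runnable task, and the hypervisor picks the
# virtual CPU with the earliest exported deadline, falling back to
# best-effort guests when no latency-constrained work is pending.

def pick_vcpu(exported):
    """exported: {vcpu_name: deadline or None}; None means only best-effort work."""
    realtime = {v: d for v, d in exported.items() if d is not None}
    if realtime:
        return min(realtime, key=realtime.get)     # earliest exported deadline
    return next(iter(exported), None)              # any best-effort guest, if present

print(pick_vcpu({"linux_guest": None, "freertos_guest": 12.5}))   # -> freertos_guest
print(pick_vcpu({"linux_guest": None}))                           # -> linux_guest
```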

    Automation considerations for a manufacturing system

    This thesis examines the present manufacturing system of Apollo Valve Company, a manufacturer of solenoid control valves. After analyzing the present system, automation considerations and a proposed new system are recommended. Chapter 1 presents background material on automation and manufacturing systems; the development of the automated factory is also included. The plant layout, organization, and department functions of the present system are briefly described in Chapter 2. Analysis of the present manufacturing system by production volume, plant layout, and manufacturing operations is discussed in Chapter 3. Proposed automation considerations and improvements, such as group technology (GT), computer-aided process planning (CAPP), computer-aided manufacturing (CAM), automatic assembly and testing, packing, and flexible manufacturing systems (FMS), are presented in Chapter 4. In the last chapter, the conclusions are discussed and the new manufacturing system is recommended.

    Automated process modelling and continuous improvement.

    This thesis discusses and demonstrates the benefits of simulating and optimising a manufacturing control system in order to improve the flow of production material through a system with high-variety, low-volume output requirements. The need for and factors affecting synchronous flow are also discussed, along with the consequences of poor flow and various solutions for overcoming it. A study into and comparison of various planning and control methodologies designed to promote the flow of material through a manufacturing system was carried out to identify a suitable system to model. The research objectives are to:
    • identify the best system to model that will promote flow;
    • identify the potential failure mechanisms within that system that exist and have not yet been resolved;
    • produce a model that can fully resolve, or reduce the probability of, the identified failure mechanisms having an effect.

    This research led to an investigation into the main elements of a Drum-Buffer-Rope (DBR) environment in order to generate a comprehensive description of the requirements for DBR implementation and operation, and to attempt to address the limitations identified in the research literature. These requirements have been grouped into three areas: (a) plant layout and kanban controls, (b) planning and control, and (c) DBR infrastructure. A DBR model was developed and combined with genetic algorithms with the aim of maximising throughput for an individual product mix. The results of the experiments have identified new knowledge on how DBR processes facilitate and impede material flow synchronisation within high-variety/low-volume manufacturing environments. The research results were limited by the assumptions made and by the constraints of the model. The research has highlighted that, as such a model becomes more complex, it also becomes more volatile and more difficult to control, leading to the conclusion that further research is required, extending the complexity of the model with additional product mixes and system variability and comparing the outcomes with the results of this research. It is then expected that the model will be useful in enabling a quick system response to large variations in product demand within the mixed-model manufacturing industry.
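
    The following sketch illustrates the kind of search described above: a small genetic algorithm selecting a product mix that maximises throughput subject to a single bottleneck ("drum") capacity. The products, processing times, capacity and GA parameters are invented for illustration and do not reproduce the thesis's DBR model or data.

```python
# Hedged sketch of a genetic algorithm choosing a product mix that
# maximises throughput subject to a single bottleneck ("drum") capacity.
# All data below are invented for illustration.
import random

PRODUCTS = [  # (name, throughput per unit, minutes on the bottleneck per unit)
    ("A", 60.0, 10.0),
    ("B", 45.0, 6.0),
    ("C", 30.0, 4.0),
]
DRUM_CAPACITY = 2400.0   # bottleneck minutes available
MAX_UNITS = 400          # upper bound per product in a chromosome

def fitness(mix):
    load = sum(q * t for q, (_, _, t) in zip(mix, PRODUCTS))
    if load > DRUM_CAPACITY:
        return 0.0                       # infeasible mixes score zero
    return sum(q * tp for q, (_, tp, _) in zip(mix, PRODUCTS))

def mutate(mix, rate=0.2):
    return tuple(q if random.random() > rate else random.randint(0, MAX_UNITS)
                 for q in mix)

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def evolve(generations=200, pop_size=40):
    pop = [tuple(random.randint(0, MAX_UNITS) for _ in PRODUCTS)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```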

    Implementation of automated assembly

    Research has shown that about 60 - 80% of wealth-producing activities are related to manufacturing in major industrial countries. Increased competition in industry has resulted in a greater emphasis on using automation to improve productivity and quality and also to reduce cost. Most manufacturing work, such as machining, painting, storage, retrieval, inspection and transportation, has been automated successfully, with the exception of assembly. Manual assembly still predominates over automatic assembly techniques due to inherent assembly problems and the fact that assembly machines lack the innate intelligence of a human operator and lack sufficient flexibility to change over when product designs and market demands change. Flexible manufacturing systems involve very large capital costs and complex interactions; to reduce the investment risk and analyze such systems, simulation is a valuable tool for planning them, analyzing their behavior, and getting the best use out of them. This thesis applies animation techniques to simulate an automatic assembly system. Chapters 1 to 9 cover some of the fundamental concepts and principles of automatic assembly and simulation. Some manufacturers put the subject of part orientation first on their list of priorities, but design for assembly (DFA) techniques have proven extremely valuable in developing better assembly techniques and, ultimately, better products. We discuss DFA in chapter 1 and part feeding and orientation in chapter 2. Chapters 3, 4 and 5 are concerned with the assembly process, machines and control system, respectively. Annual sales of industrial robots have been growing at a rate of about 25 percent per year in major industrial countries; we review robot applications in chapter 6. The cost of material handling is a significant portion of the total cost of production, and material storage uses valuable space and consumes investment; we cover these two topics in chapters 7 and 8. Chapter 9 is concerned with simulation. In chapters 10, 11, 12 and 13, we use the software package IGRIP to build a model of an automatic assembly system and analyze the results.
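
    A plain-Python sketch of the kind of question such a simulation answers is given below: parts flow through assembly stations in series and the model reports makespan and station utilisation. The cycle times and part count are invented, and the thesis itself builds its model in the IGRIP package rather than in code like this.

```python
# Plain-Python sketch of a serial assembly line with unlimited buffers:
# push parts through the stations and report makespan and utilisation.
# Cycle times and part count are invented for illustration.

def simulate(cycle_times, num_parts):
    """Serial line, unlimited buffers: returns (makespan, utilisation per station)."""
    free_at = [0.0] * len(cycle_times)   # when each station next becomes free
    busy = [0.0] * len(cycle_times)
    done = 0.0
    for part in range(num_parts):
        t = 0.0                           # all parts released at time zero
        for s, ct in enumerate(cycle_times):
            start = max(t, free_at[s])    # wait for the part and the station
            t = start + ct
            free_at[s] = t
            busy[s] += ct
        done = max(done, t)
    return done, [b / done for b in busy]

makespan, util = simulate(cycle_times=[1.2, 0.8, 1.5, 1.0], num_parts=500)
print("makespan:", makespan, "utilisation:", [round(u, 2) for u in util])
```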

    Energy-aware evolutionary optimization for cyber-physical systems in Industry 4.0
