
    Petri Net as a Manufacturing System Scheduling Tool


    Developing manufacturing control software: A survey and critique

    The complexity and diversity of manufacturing software and the need to adapt this software to the frequent changes in the production requirements necessitate the use of a systematic approach to developing this software. The software life-cycle model (Royce, 1970) that consists of specifying the requirements of a software system, designing, implementing, testing, and evolving this software can be followed when developing large portions of manufacturing software. However, the presence of hardware devices in these systems and the high costs of acquiring and operating hardware devices further complicate the manufacturing software development process and require that the functionality of this software be extended to incorporate simulation and prototyping.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/45542/1/10696_2005_Article_BF01328739.pd

    Modeling and Analysis of Resource Sharing Approach in Common Platform Strategy Using Petri Net Theory

    One of the most important competitive advantages in business and manufacturing is resource sharing. Producing a group of product families under a common platform strategy requires sharing common resources, and this strategy helps to increase profit and business value. It is therefore necessary to model and analyze resource availability in different situations. In this paper we develop a practical model, based on Petri net theory, for analyzing the behavior of shared resources in a platform setting. Petri nets have been used successfully to model and control the dynamics of flexible manufacturing systems. The paper presents the main concepts of common platforms and Petri net theory, and then gives numerical examples that show how Petri nets can be used for modeling and analysis of a common platform. The model is useful for the common platform strategy and can be used to determine the reliability of common platform systems in an effective way.
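
    To make the resource-sharing idea concrete, the following minimal place/transition Petri net sketch (in Python) shows a single shared machine token enforcing mutual exclusion between two product variants. The places, transitions, and token counts are invented for illustration and are not taken from the paper.

        # Minimal place/transition Petri net used to illustrate resource sharing.
        # All place and transition names below are hypothetical examples.

        class PetriNet:
            def __init__(self, marking):
                self.marking = dict(marking)          # place -> token count
                self.transitions = {}                 # name -> (inputs, outputs)

            def add_transition(self, name, inputs, outputs):
                self.transitions[name] = (inputs, outputs)

            def enabled(self, name):
                inputs, _ = self.transitions[name]
                return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

            def fire(self, name):
                if not self.enabled(name):
                    raise ValueError(f"transition {name!r} is not enabled")
                inputs, outputs = self.transitions[name]
                for p, n in inputs.items():
                    self.marking[p] -= n
                for p, n in outputs.items():
                    self.marking[p] = self.marking.get(p, 0) + n

        # Two product variants share one machine (a single token in 'machine_free').
        net = PetriNet({"order_A": 1, "order_B": 1, "machine_free": 1})
        net.add_transition("start_A", {"order_A": 1, "machine_free": 1}, {"busy_A": 1})
        net.add_transition("finish_A", {"busy_A": 1}, {"done_A": 1, "machine_free": 1})
        net.add_transition("start_B", {"order_B": 1, "machine_free": 1}, {"busy_B": 1})
        net.add_transition("finish_B", {"busy_B": 1}, {"done_B": 1, "machine_free": 1})

        net.fire("start_A")
        print(net.enabled("start_B"))   # False: the shared machine token is taken
        net.fire("finish_A")
        print(net.enabled("start_B"))   # True: the resource has been released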

    Specification and Automatic Generation of Simulation Models with Applications in Semiconductor Manufacturing

    The creation of large-scale simulation models is a difficult and time-consuming task. Yet simulation is one of the techniques most frequently used by practitioners in Operations Research and Industrial Engineering, as it is less limited by modeling assumptions than many analytical methods. The effective generation of simulation models is an important challenge. Due to the rapid increase in computing power, it is possible to simulate significantly larger systems than in the past. However, the verification and validation of these large-scale simulations is typically a very challenging task. This thesis introduces a simulation framework that can generate a large variety of manufacturing simulation models. These models have to be described with a simulation data specification. This specification is then used to generate a simulation model which is described as a Petri net. This approach reduces the effort of model verification. The proposed Petri net data structure has extensions for time and token priorities. Since it builds on existing theory for classical Petri nets, it is possible to make certain assertions about the behavior of the generated simulation model. The elements of the proposed framework and the simulation execution mechanism are described in detail. Measures of complexity for simulation models that are built with the framework are also developed. The applicability of the framework to real-world systems is demonstrated by means of a semiconductor manufacturing system simulation model.
    Ph.D. Committee Chair: Alexopoulos, Christos; Committee Co-Chair: McGinnis, Leon; Committee Member: Egerstedt, Magnus; Committee Member: Fujimoto, Richard; Committee Member: Goldsman, Davi
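
    The following short Python sketch illustrates, under simplifying assumptions, the kind of execution mechanism a timed Petri net implies: a transition consumes its input tokens immediately and releases its output tokens after a firing delay scheduled on an event list. The net, delay, and token counts are hypothetical and do not reproduce the thesis framework or its priority extensions.

        import heapq

        # One machine processes three raw parts; each firing takes 5 time units.
        marking = {"raw": 3, "machine": 1, "done": 0}
        transitions = {
            # name: (input places, output places, firing delay)
            "process": ({"raw": 1, "machine": 1}, {"done": 1, "machine": 1}, 5.0),
        }

        def enabled(name):
            inputs, _, _ = transitions[name]
            return all(marking[p] >= n for p, n in inputs.items())

        clock, events = 0.0, []          # events: (completion time, transition name)

        while True:
            # Start every transition that is currently enabled.
            for name, (inputs, _outputs, delay) in transitions.items():
                while enabled(name):
                    for p, n in inputs.items():
                        marking[p] -= n
                    heapq.heappush(events, (clock + delay, name))
            if not events:
                break
            # Advance the clock to the next completion and deposit its output tokens.
            clock, name = heapq.heappop(events)
            for p, n in transitions[name][1].items():
                marking[p] += n

        print(clock, marking)            # 15.0 {'raw': 0, 'machine': 1, 'done': 3}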

    Analysis of disruptions caused by construction field rework on productivity in residential projects

    Operational performance in residential construction production systems is assessed with measures such as average house-completion time, number of houses under construction, lead time, and customer service. These systems, however, are prone to nonuniformity and interruptions caused by a wide range of variables such as inclement weather, accidents at worksites, fluctuations in demand for houses, and rework. The availability and capacity of resources are therefore not the sole measures for evaluating the capacity of construction production systems, especially when rework is involved. The writers’ aim is to investigate the effects of rework timeframe and frequency/length on tangible performance measures. Different call-back timeframes for rework and their impact on house-completion times are modeled and analyzed. Volume home-building was chosen as the industry sector studied in this research because it is a data-rich environment. The writers designed several experiments to model on-time, late, and early call-back timeframes in the presence of rework of different length and frequency, and both mathematical modeling and discrete-event simulation were then used to compare and contrast outputs. The measurements show that average completion time is shorter in systems interrupted by frequent but short rework; in other words, a smaller downstream buffer between processes is required to avoid work starvation than in systems affected by infrequent but long interruptions. Early call-backs for rework can significantly increase the number of house completions over the long run. This indicates an opportunity for the mass house-building sector to improve work practice and project delivery by effectively managing rework and its related variables. The research reported in this paper builds on the current body of knowledge by applying even-flow production theory to the analysis of rework in the residential construction sector, with the intention of ensuring minimal disruption to the construction production process and improving productivity.
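
    A back-of-envelope Python calculation, with made-up figures, illustrates the buffer argument above: with the same total amount of rework, the downstream buffer needed to avoid starvation scales with the length of each interruption, so frequent short call-backs require a much smaller buffer than infrequent long ones.

        # Back-of-envelope comparison (invented numbers): the downstream buffer
        # needed to ride out an upstream rework interruption is roughly the
        # interruption length times the downstream consumption rate.

        downstream_rate = 1.0            # houses advanced per day by the downstream trade

        patterns = {
            "frequent, short rework":  {"interval": 5,  "length": 2},    # days
            "infrequent, long rework": {"interval": 25, "length": 10},   # days
        }

        for name, p in patterns.items():
            total_rework_per_100 = 100 / p["interval"] * p["length"]
            buffer_needed = p["length"] * downstream_rate
            print(f"{name}: {total_rework_per_100:.0f} days of rework per 100 houses, "
                  f"buffer of {buffer_needed:.0f} houses needed to avoid starvation")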


    Scheduling Algorithms: Challenges Towards Smart Manufacturing

    Collecting, processing, analyzing, and deriving knowledge from large-scale real-time data is now possible with the emergence of Artificial Intelligence (AI) and Deep Learning (DL). The breakthrough of Industry 4.0 lays a foundation for intelligent manufacturing. However, the implementation challenges of scheduling algorithms in the context of smart manufacturing have not yet been comprehensively studied. The purpose of this study is to identify the scheduling issues that need to be considered in the smart manufacturing paradigm. To attain this objective, a literature review was conducted in five stages using the Publish or Perish tool over sources such as Scopus, PubMed, Crossref, and Google Scholar. The first contribution of this study is a critical analysis of the characteristics and limitations of existing production scheduling algorithms from the viewpoint of smart manufacturing. The second contribution is a set of suggested strategies for selecting scheduling algorithms in real-world scenarios.

    Verification of soundness and other properties of business processes

    In this thesis we focus on improving current modeling and verification techniques for complex business processes. The objective of the thesis is to consider several aspects of real-life business processes and give specific solutions to cope with their complexity. In particular, we address verification of a proper termination property for workflows, called generalized soundness. We give a new decision procedure for generalized soundness that improves on the original one: it reports on the decidability status of generalized soundness and returns a counterexample in case the workflow net is not generalized sound. We report on experimental results obtained with our prototype implementation and describe how to verify large workflows compositionally, using reduction rules. Next, we concentrate on modeling and verification of adaptive workflows, i.e., workflows that are able to change their structure at runtime, for instance when exceptional events occur. In order to model exception handling properly and to allow structural changes of the system in a modular way, we introduce a new class of nets, called adaptive workflow nets. Adaptive workflow nets are a special type of Nets in Nets; they allow for creation, deletion, and transformation of net tokens at runtime and for two types of synchronization: synchronization on proper termination and synchronization on exception. We define behavioral properties of adaptive workflow nets, soundness and circumspectness, and employ an abstraction to reduce the verification of these properties to the verification of behavioral properties of a finite-state abstraction. Further, we study how formal methods can help in understanding and designing business processes. We investigate this for extended event-driven process chains (eEPCs), a popular industrial business process language used in the ARIS Toolset. Several semantics have been proposed for EPCs; however, most of them concentrate solely on the control flow. We argue that other aspects of business processes must also be taken into account in order to analyze eEPCs, and we propose a semantics that takes data and time information from eEPCs into account. Moreover, we provide a translation of eEPCs to Timed Colored Petri nets in order to facilitate verification of eEPCs. Finally, we discuss modeling issues for business processes whose behavior may depend on the previous behavior of the process, a history that is recorded by workflow management systems as a log. To increase the precision of models with respect to choices that depend on the process history, we introduce history-dependent guards. The resulting business processes are called history-dependent processes. We introduce a logic, called LogLogics, for the specification of guards based on the log of the currently running process, and we give an evaluation algorithm for such guards. Moreover, we show how these guards can be used in practice and define LogLogics patterns for the properties that occur most commonly in practice.
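
    As a rough illustration of a history-dependent guard, the Python sketch below enables a transition only if the running case's log satisfies a predicate over past events. The guard shown ("ship only after a recorded payment") and all activity names are invented examples and do not follow LogLogics syntax.

        from typing import Callable, Dict, List, Tuple

        Event = Tuple[str, dict]                  # (activity name, attributes)
        Guard = Callable[[List[Event]], bool]

        def occurred(activity: str) -> Guard:
            """Guard that holds iff `activity` appears somewhere in the history."""
            return lambda log: any(name == activity for name, _ in log)

        guards: Dict[str, Guard] = {
            "ship_goods": occurred("receive_payment"),
        }

        def enabled(transition: str, log: List[Event]) -> bool:
            guard = guards.get(transition, lambda _log: True)
            return guard(log)

        case_log: List[Event] = [("register_order", {"amount": 120.0})]
        print(enabled("ship_goods", case_log))    # False: no payment recorded yet
        case_log.append(("receive_payment", {"amount": 120.0}))
        print(enabled("ship_goods", case_log))    # True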

    Optimizing performance of workflow executions under authorization control

    Business processes, or workflows, are often used to model enterprise or scientific applications, and automating workflow executions on computing resources has received considerable attention. However, many workflow scenarios still involve human activities and consist of a mixture of human tasks and computing tasks. Human involvement introduces security and authorization concerns, requiring restrictions on who is allowed to perform which tasks at what time. Role-Based Access Control (RBAC) is a popular authorization mechanism in which concepts such as roles and permissions are defined and various authorization constraints, including separation of duty and temporal constraints, are supported. Under RBAC, users are assigned to certain roles, while the roles are associated with prescribed permissions. When we assess resource capacities or evaluate the performance of workflow executions on supporting platforms, it is often assumed that once a task is allocated to a resource, the resource will accept the task and start the execution as soon as a processor becomes available. When authorization policies are taken into account, however, this assumption may not hold and the situation becomes more complex. For example, when a task arrives, a valid and activated role has to be assigned to it before it can start execution. The deployed authorization constraints may delay workflow execution because of role availability or other restrictions on role assignments, which in turn has a negative impact on application performance. When authorization constraints restrict workflow executions, new research issues arise that have not been studied in conventional workflow management.
    This thesis investigates these new research issues. First, it is important to know whether a feasible authorization solution can be found that enables the execution of all tasks in a workflow, i.e., to check the feasibility of the deployed authorization constraints. This thesis studies this feasibility-checking problem and models it as a constraint satisfaction problem. Second, it is useful to know when the performance of workflow executions will not be affected by the given authorization constraints; this thesis proposes methods to determine the time durations during which the given authorization constraints have no impact. Third, when the authorization constraints do affect performance, how can the impact be quantitatively analyzed and determined? When there are multiple choices for assigning roles to tasks, will different choices lead to different performance impact, and if so, can the task-role assignments be made in an optimal way so that the impact is minimized? This thesis proposes a method to analyze the delay caused by the authorization constraints when the workflow arrives outside the non-impact time duration calculated above. The analysis of this delay shows that the authorization method, i.e., the way roles are selected and assigned to tasks, affects the length of the delay caused by the authorization constraints; based on this finding, we propose an optimal authorization method, called the Global Authorization Aware (GAA) method. Fourth, a key reason why authorization constraints may affect performance is that authorization control directs tasks to particular roles. How, then, can the level of workload directed to each role be determined for a given set of authorization constraints? This thesis presents a theoretical analysis of how the authorization constraints direct workload to the roles, and proposes methods to calculate the arrival rate of the requests directed to each role under role, temporal, and cardinality constraints. Finally, the amount of resources allocated to support each individual role may affect the execution performance of the workflows, so strategies are needed to determine an adequate amount of resources when authorization control is present in the system. This thesis presents methods to allocate appropriate quantities of resources, covering both human resources and computing resources and taking their different features into account: for human resources, the objective is to maximize performance subject to the budget available to hire them, while for computing resources, the strategy aims to allocate an adequate amount of resources to meet the QoS requirements.
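
    As a toy illustration of the feasibility question, the Python sketch below exhaustively searches for a task-to-role assignment that respects role authorizations and a separation-of-duty constraint. The tasks, roles, and constraints are invented, and the brute-force search is not the decision procedure developed in the thesis.

        from itertools import product

        # Exhaustive check of whether any task-to-role assignment satisfies the
        # authorization constraints (illustrative data only).

        tasks = ["prepare", "approve", "archive"]
        authorized = {                            # task -> roles allowed to perform it
            "prepare": ["clerk"],
            "approve": ["manager", "auditor"],
            "archive": ["clerk", "manager"],
        }
        separation_of_duty = [("prepare", "approve")]   # pairs needing different roles

        def feasible_assignment():
            for roles in product(*(authorized[t] for t in tasks)):
                assignment = dict(zip(tasks, roles))
                if all(assignment[a] != assignment[b] for a, b in separation_of_duty):
                    return assignment
            return None                           # no assignment satisfies the constraints

        print(feasible_assignment())
        # {'prepare': 'clerk', 'approve': 'manager', 'archive': 'clerk'}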