    Analytical evaluation of the output variability in production systems with general Markovian structure

    Performance evaluation models are used by companies to design, adapt, manage and control their production systems. In the literature, most of the effort has been dedicated to developing efficient methodologies for estimating first-moment performance measures of production systems, such as the expected production rate, the buffer levels and the mean completion time. However, there is industrial evidence that the variability of the production output may drastically impact the capability of managing the system operations, causing the observed system performance to differ greatly from what was expected. This paper presents a general methodology to analyze the output variability of unreliable single machines and small-scale multi-stage production systems modeled with a general Markovian structure. The generality of the approach allows modeling and studying performance measures such as the variance of the cumulated output and the variance of the inter-departure time under many system configurations within a single framework. The proposed method is based on characterizing the autocorrelation structure of the system output. The impact of different system parameters on the output variability is investigated and characterized. Moreover, managerial actions that reduce the output variability are identified. The computational complexity of the method is studied on an extensive set of computer experiments. Finally, the limits of this approach when studying long multi-stage production lines are highlighted. © 2013 Springer-Verlag Berlin Heidelberg
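
    The abstract describes the method only at a high level. As a minimal illustration of the second-moment measures involved (not the paper's analytical algorithm), the following Python sketch simulates a single unreliable machine as a two-state up/down Markov chain and estimates the variance of the cumulated output and the autocorrelation of the output process; the parameter names p_fail and p_repair are assumptions for illustration.

        import numpy as np

        def simulate_machine(p_fail=0.02, p_repair=0.2, T=200_000, seed=0):
            """Simulate a Bernoulli-output machine with Markovian failures.

            The machine produces one part per slot while 'up'; up/down
            transitions follow a two-state discrete-time Markov chain.
            """
            rng = np.random.default_rng(seed)
            up = True
            out = np.empty(T, dtype=np.int8)
            for t in range(T):
                out[t] = 1 if up else 0
                if up:
                    up = rng.random() >= p_fail    # stay up unless a failure occurs
                else:
                    up = rng.random() < p_repair   # repaired with probability p_repair
            return out

        out = simulate_machine()
        # First-moment measure: expected production rate
        print("throughput:", out.mean())
        # Second-moment measure: variance of the cumulated output over windows
        window = 1000
        cum = out.reshape(-1, window).sum(axis=1)
        print("variance of cumulated output per window:", cum.var())
        # Autocorrelation structure of the output process at a few lags
        x = out - out.mean()
        acf = [(x[:-k] * x[k:]).mean() / x.var() for k in (1, 10, 100)]
        print("autocorrelation at lags 1, 10, 100:", acf)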

    Performance analysis of a decoupling stock in a make-to-order system

    In a make-to-order system, products are only manufactured when orders are placed. As this may lead to overly long delivery times, a stock of semi-finished products can be installed to reduce production time: the so-called decoupling stock. As the performance of the decoupling stock is critical to the overall performance and cost of the production system, we propose and analyse a Markovian model of the decoupling stock. In particular, we focus on a queueing model with two buffers, thereby accounting both for the decoupling stock and for a possible backlog of orders. By means of numerical examples, we then quantify the impact of production inefficiency on delivery times and overall cost.
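
    A rough feel for the two-buffer structure can be had from a toy discrete-time simulation of the semi-finished stock and the order backlog; the model form and the parameters lam, mu and S below are assumptions for illustration, not the paper's Markovian model.

        import random

        def simulate_decoupling(lam=0.6, mu=0.7, S=10, T=500_000, seed=1):
            """Toy two-buffer model of a decoupling stock (illustrative only).

            One buffer holds semi-finished products (capacity S), the other
            holds backlogged orders; each time slot an order arrives with
            probability lam and the upstream stage adds one semi-finished
            item with probability mu.
            """
            random.seed(seed)
            stock, backlog = S, 0
            backlog_area = 0
            for _ in range(T):
                if random.random() < lam:
                    backlog += 1                  # new customer order
                if stock < S and random.random() < mu:
                    stock += 1                    # upstream replenishment
                if stock > 0 and backlog > 0:
                    stock -= 1; backlog -= 1      # finish and deliver one order
                backlog_area += backlog
            mean_backlog = backlog_area / T
            # Little's law: mean order delay = mean backlog / arrival rate
            return mean_backlog, mean_backlog / lam

        print(simulate_decoupling())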

    A smoothing replenishment policy with endogenous lead times.

    We consider a two-echelon supply chain consisting of a single retailer and a single manufacturer. Inventory control policies at the retailer level often transmit customer demand variability to the manufacturer, sometimes even in amplified form (known as the bullwhip effect). When the manufacturer produces in a make-to-order fashion, however, he prefers a smooth order pattern. But dampening the variability in orders inflates the retailer's safety stock due to the increased variance of the retailer's inventory levels. We can turn this issue of conflicting objectives into a win-win situation for both supply chain echelons when we treat the lead time as an endogenous variable. A less variable order pattern generates shorter and less variable (production/replenishment) lead times, introducing a compensating effect on the retailer's safety stock. We show that by including endogenous lead times, the order pattern can be smoothed to a considerable extent without increasing stock levels.
    Keywords: bullwhip effect; demand; endogenous lead times; inventory control; Markov processes; queueing; safety stock; smoothing; supply chain management; variability; variance
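
    The smoothing idea can be sketched with a standard proportional order-up-to rule, in which only a fraction beta of the inventory deficit is recovered each period. This is a generic illustration of order smoothing, not the paper's model (in particular, endogenous lead times are not modeled here), and all parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        T, mu_d, sigma_d = 100_000, 10.0, 2.0
        beta = 0.4    # smoothing parameter; beta = 1 recovers the pure order-up-to rule
        S = 50.0      # target inventory position (illustrative)

        demand = rng.normal(mu_d, sigma_d, T)
        orders = np.empty(T)
        inv_pos = S
        for t in range(T):
            # recover only a fraction beta of the inventory deficit each period
            orders[t] = mu_d + beta * (S - inv_pos)
            inv_pos += orders[t] - demand[t]

        # For this linear rule, var(orders) = beta/(2-beta) * var(demand),
        # i.e. 0.25 * var(demand) for beta = 0.4.
        print("var(demand):", demand.var())
        print("var(orders):", orders.var())

    The smoothing comes at the cost of a more variable inventory position, which is exactly the trade-off the paper resolves by letting lead times react to the smoother order stream.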

    Learning and Designing Stochastic Processes from Logical Constraints

    Stochastic processes offer a flexible mathematical formalism to model and reason about systems. Most analysis tools, however, start from the premise that models are fully specified, so that any parameters controlling the system's dynamics must be known exactly. As this is seldom the case, many methods have been devised over the last decade to infer (learn) such parameters from observations of the state of the system. In this paper, we depart from this approach by assuming that our observations are qualitative properties encoded as the satisfaction of linear temporal logic formulae, as opposed to quantitative observations of the state of the system. An important feature of this approach is that it naturally unifies the system identification and system design problems, where the properties, instead of being observations, represent requirements to be satisfied. We develop a principled statistical estimation procedure based on maximising the likelihood of the system's parameters, using recent ideas from statistical machine learning. We demonstrate the efficacy and broad applicability of our method on a range of simple but non-trivial examples, including rumour spreading in social networks and hybrid models of gene regulation.
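
    The estimation idea can be illustrated with a crude Monte Carlo stand-in (the paper uses more sophisticated statistical machine learning machinery): given Boolean satisfaction outcomes of a property, maximise the Bernoulli likelihood over a parameter grid, with satisfaction probabilities estimated by simulation. The toy SI rumour model, the property and all parameters below are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        def satisfies(theta, n=10, horizon=2.0):
            """Simulate a toy rumour-spreading (SI) model with contact rate
            theta and check the qualitative property 'everyone is informed
            by the time horizon' (a stand-in for an LTL formula)."""
            informed, t = 1, 0.0
            while informed < n:
                rate = theta * informed * (n - informed)   # SI contact rate
                t += rng.exponential(1.0 / rate)
                if t > horizon:
                    return False
                informed += 1
            return True

        # Qualitative data: satisfaction outcomes from the 'true' system
        true_theta, N = 0.3, 200
        data = np.array([satisfies(true_theta) for _ in range(N)])

        # Maximise the Bernoulli likelihood of the outcomes over a grid,
        # with satisfaction probabilities estimated by Monte Carlo
        def loglik(theta, m=500):
            p = np.mean([satisfies(theta) for _ in range(m)])
            p = min(max(p, 1e-6), 1 - 1e-6)                # avoid log(0)
            k = data.sum()
            return k * np.log(p) + (N - k) * np.log(1 - p)

        grid = np.linspace(0.05, 1.0, 20)
        print("estimated theta:", max(grid, key=loglik))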

    Decomposition of discrete-time open tandem queues with Poisson arrivals and general service times

    In the rough-cut planning phase of interconnected logistics and production systems, one is often interested in obtaining a satisfactory approximation of the system's performance measures at low computational cost. Here, discrete-time modeling offers an advantage over continuous-time modeling in that the complete probability distribution of the performance measures can be computed. Since production and logistics systems are typically designed to guarantee performance not on average but with a given probability (e.g., 95%), discrete-time queueing models can provide more detailed information about system performance (such as the waiting time or the flow time). For the analysis of networks of discrete-time queueing systems, decomposition methods are often the only practicable and computationally efficient approach to computing stationary performance measures at the individual stations. The network is decomposed into its individual nodes, which are analyzed separately. The approach rests on the assumption that the point process of the departure stream of upstream stations can be approximated by a renewal process, so that the queueing systems can be analyzed independently of one another. While the independence assumption enables efficient computation, it leads to approximation errors in the computed performance measures that can be substantial.

    This thesis studies open discrete-time tandem networks with Poisson arrivals at the upstream queueing system and generally distributed service times. The network thus consists of an upstream M/G/1 queueing system and a downstream G/G/1 system. The thesis pursues three goals: (1) to expose the deficiencies of the decomposition approach and to quantify its approximation quality by means of statistical estimation methods, (2) to model the autocorrelation of the departure process of the M/G/1 system in order to explain the source of the approximation error, and (3) to develop a decomposition approach that accounts for the dependence in the departure stream and thus allows arbitrarily accurate approximations of the performance measures.

    In the first part of the thesis, the approximation quality of the decomposition method at the downstream G/G/1 queueing system is assessed using linear regression (point estimation) and quantile regression (interval estimation). Both estimation methods are applied to the relative errors of the expected value and the 95% quantile of the waiting time against simulated results. The system's utilization and the variability of the arrival stream are identified as significant factors influencing the approximation quality. The second part of the thesis focuses on computing the autocorrelation in the departure stream of the M/G/1 queueing system. Successive inter-departure times are correlated because the departure time of a customer depends on the system state left behind by the previous customer upon departure. This autocorrelation is the cause of the decomposition error, since the arrival times at the downstream queueing system are not independent and identically distributed. In the third part of the thesis, a new decomposition approach is presented that models the dependence in the departure stream of the M/G/1 system by means of a semi-Markov process. To prevent an explosion of the state space, a procedure is introduced that bounds the state space of the embedded Markov chain. Numerical evaluations show that the results obtained with a severely limited state space already provide a better approximation than the existing decomposition approach. As the size of the state space grows, the performance measures converge with arbitrary accuracy.
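
    The autocorrelation that breaks the renewal assumption is easy to observe numerically. The following sketch (model details such as the uniform service-time distribution and the slot conventions are illustrative, not the thesis's) simulates a discrete-time M/G/1 queue and estimates the lag-1 autocorrelation of the inter-departure times, which classical decomposition takes to be zero.

        import numpy as np

        rng = np.random.default_rng(4)

        def interdeparture_acf(lam=0.25, T=200_000):
            """Discrete-time M/G/1 queue: Poisson(lam) arrivals per slot,
            service times uniform on {1, 2, 3} slots.  Returns the lag-1
            autocorrelation of the inter-departure times."""
            queue = 0
            remaining = 0              # remaining service of the job in progress
            departures = []
            for t in range(T):
                queue += rng.poisson(lam)
                if remaining == 0 and queue > 0:
                    queue -= 1
                    remaining = rng.integers(1, 4)   # draw a new service time
                if remaining > 0:
                    remaining -= 1
                    if remaining == 0:
                        departures.append(t)         # job leaves in this slot
            d = np.diff(departures)
            x = d - d.mean()
            return (x[:-1] * x[1:]).mean() / x.var()

        print("lag-1 autocorrelation of inter-departures:", interdeparture_acf())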

    Serial production line performance under random variation: Dealing with the ‘Law of Variability’

    Many queueing theory and production management studies have investigated specific effects of variability on the performance of serial lines, since variability has a significant impact on that performance. To date, there has been no single summary source for the most relevant research results on variability, particularly as they relate to the need to better understand the ‘Law of Variability’. This paper fills this gap, provides readers with the foundational knowledge needed to develop intuition and insight into the complexities of stochastic simple serial lines, and serves as a guide to understanding and managing the effects of variability and of the design factors related to improving serial production line performance (throughput, inter-departure time and flow time) under random variation.
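
    A standard quantitative form of the ‘Law of Variability’ is Kingman's VUT approximation for the mean queueing delay at a G/G/1 station, sketched below; the numeric inputs are only examples.

        def kingman_wq(rho, ca2, cs2, ts):
            """Kingman (VUT) approximation for the mean queueing delay at a
            G/G/1 station: W_q ~ V * U * T, with V the variability term
            (ca2 + cs2)/2, U the utilisation term rho/(1 - rho), and T the
            mean service time ts."""
            return ((ca2 + cs2) / 2) * (rho / (1 - rho)) * ts

        # Doubling variability roughly doubles the expected wait:
        print(kingman_wq(rho=0.9, ca2=1.0, cs2=1.0, ts=1.0))   # ~9.0
        print(kingman_wq(rho=0.9, ca2=2.0, cs2=2.0, ts=1.0))   # ~18.0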

    DECISION SUPPORT MODEL IN FAILURE-BASED COMPUTERIZED MAINTENANCE MANAGEMENT SYSTEM FOR SMALL AND MEDIUM INDUSTRIES

    Maintenance decision support systems are crucial to ensuring the maintainability and reliability of equipment on production lines. This thesis investigates several decision support models to aid maintenance management activities in small and medium industries. In order to improve the reliability of resources on production lines, the study introduces a conceptual framework to be used in failure-based maintenance. Maintenance strategies are identified using the Decision-Making Grid model, based on two important factors: the machines' downtime and their frequency of failures. The machines are categorized into three levels of downtime and of failure frequency: high, medium and low. The research derives a formula based on maintenance cost to re-position the machines prior to the Decision-Making Grid analysis. Subsequently, the clustering-analysis formula in the Decision-Making Grid model is improved to solve the multiple-criteria problem. The research also introduces a formula to estimate contractors' response and repair times; the estimates are used as input parameters in the Analytical Hierarchy Process model. The decisions are synthesized using models based on the contractors' technical skills, such as experience in maintenance, skill in diagnosing machines, and the ability to take prompt action during troubleshooting. Another important criterion considered in the Analytical Hierarchy Process is the contractors' business principles, which include maintenance quality, tools and equipment, and enthusiasm in problem-solving. Raw data were collected through observation, interviews and surveys in case studies to understand risk factors in small and medium food processing industries; the risk factors are analysed with the Ishikawa fishbone diagram to reveal delay times in machinery maintenance. Experimental studies are conducted using maintenance records from food processing industries. The Decision-Making Grid model can detect the ten worst production machines on the production lines, and the Analytical Hierarchy Process model is used to rank the contractors and their best maintenance practices. This research recommends displaying the results on the production indicator boards and implementing the strategies on the production shop floor. The proposed models can be used by decision makers to identify maintenance strategies and enhance competitiveness among contractors in failure-based maintenance, and they can be programmed as decision support sub-procedures in computerized maintenance management systems.
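
    The Decision-Making Grid step can be sketched as a simple classification. The thresholds below are hypothetical, and the cell-to-strategy labels follow one common layout of Labib's grid rather than the thesis itself.

        # Hypothetical thresholds; the thesis derives machine positions from
        # a maintenance-cost formula that is not reproduced here.
        def dmg_level(value, low, high):
            return "low" if value < low else ("high" if value > high else "medium")

        def dmg_strategy(downtime_h, failures, dt_bounds=(10, 50), f_bounds=(5, 15)):
            """Decision-Making Grid: map a machine's downtime (hours) and
            failure frequency to a maintenance strategy (one common layout
            of the grid's corner cells)."""
            d = dmg_level(downtime_h, *dt_bounds)
            f = dmg_level(failures, *f_bounds)
            corner = {("low", "low"):   "OTF (operate to failure)",
                      ("low", "high"):  "SLU (skill-level upgrade)",
                      ("high", "low"):  "CBM (condition-based maintenance)",
                      ("high", "high"): "DOM (design out maintenance)"}
            return corner.get((d, f), "FTM (fixed-time maintenance)")

        print(dmg_strategy(downtime_h=60, failures=20))   # -> DOM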

    On the automated extraction of regression knowledge from databases

    The advent of inexpensive, powerful computing systems, together with the increasing amount of available data, constitutes one of the greatest challenges for next-century information science. Since it is apparent that much future analysis will be done automatically, a good deal of attention has been paid recently to the implementation of ideas and/or the adaptation of systems originally developed in machine learning and other areas of computer science. This interest seems to stem both from the suspicion that traditional techniques are not well suited to large-scale automation and from the success of new algorithmic concepts on difficult optimization problems. In this paper, I discuss a number of issues concerning the automated extraction of regression knowledge from databases. By regression knowledge is meant quantitative knowledge about the relationship between a vector of predictors or independent variables (x) and a scalar response or dependent variable (y). A number of difficulties found in some well-known tools are pointed out, and a flexible framework avoiding many of these difficulties is described and advocated. Basic features of a new tool pursuing this direction are reviewed.
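
    The elementary building block of such regression knowledge is an estimated relationship between y and x. As a minimal baseline only (the paper advocates a more flexible framework than this), the following sketch fits ordinary least squares on synthetic data.

        import numpy as np

        rng = np.random.default_rng(5)
        # Synthetic database table: predictors X (n rows, p columns), response y
        n, p = 1_000, 3
        X = rng.normal(size=(n, p))
        beta_true = np.array([1.5, -2.0, 0.5])
        y = X @ beta_true + rng.normal(scale=0.1, size=n)

        # Ordinary least squares: quantitative knowledge about how y depends on x
        X1 = np.column_stack([np.ones(n), X])              # add an intercept column
        beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)
        print("estimated coefficients (intercept first):", beta_hat)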