    An empirical learning-based validation procedure for simulation workflow

    A simulation workflow is a top-level model for the design and control of a simulation process. It connects multiple simulation components under timing and interaction constraints to form a complete simulation system. Before the component models are constructed and evaluated, validating the upper-layer simulation workflow is of the utmost importance in a simulation system. However, methods aimed specifically at validating simulation workflows are very limited, and many existing validation techniques are domain-dependent, relying on cumbersome questionnaire design and expert scoring. This paper therefore presents an empirical learning-based validation procedure that implements a semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations to validation indices are proposed. The calculation of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. To make full use of historical data and implement more efficient validation, four learning algorithms, namely the back-propagation neural network (BPNN), the extreme learning machine (ELM), the evolving neo-fuzzy neuron (eNFN), and the fast incremental Gaussian mixture model (FIGMN), are introduced to construct the empirical relation between workflow credibility and its features. A case study on a landing-process simulation workflow is established to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of state-of-the-art learning algorithms for the credibility evaluation of simulation models.
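
    As a minimal sketch of the AHP step mentioned in the abstract, the snippet below derives index weights from a pairwise comparison matrix via its principal eigenvector and checks the consistency ratio; the matrix entries and feature scores are hypothetical illustrations, not values from the paper.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix over validation indices
# (entry A[i, j] says how much more important index i is than index j).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio; CR < 0.1 is the usual acceptance threshold.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # excerpt of Saaty's random-index table
cr = ci / ri
print("weights:", w.round(3), "CR:", round(cr, 3))

# Aggregate (hypothetical) per-index feature scores into one credibility value.
scores = np.array([0.9, 0.7, 0.8])
print("workflow credibility:", round(float(w @ scores), 3))
```

    A learning algorithm such as BPNN or ELM would then be trained on historical pairs of workflow features and credibility values like this one, replacing the manual scoring step.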

    Survey of dynamic scheduling in manufacturing systems

    Short-Term Robustness of Production Management Systems: New Methodology

    This paper investigates the short-term robustness of production planning and control systems. This robustness is defined here as the system's ability to maintain short-term service probabilities (i.e., the probability that the fill rate remains within a prespecified range) in a variety of environments (scenarios). For this investigation, the paper introduces a heuristic, stagewise methodology that combines the techniques of discrete-event simulation, heuristic optimization, risk or uncertainty analysis, and bootstrapping. This methodology compares production control systems subject to a short-term fill-rate constraint while minimizing long-term work-in-process (WIP), providing a new tool for performance analysis in operations management. The methodology is illustrated via the example of a production line with four stations and a single product; it compares Kanban, Conwip, Hybrid, and Generic production control schemes.
    Keywords: manufacturing; inventory; risk analysis; robustness and sensitivity analysis; scenarios
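
    As a minimal sketch of the bootstrapping step, assuming fill rates collected from independent simulation replications of one control scheme in one scenario (the sample, the fill-rate range, and all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fill rates from 100 independent simulation replications.
fill_rates = rng.normal(loc=0.95, scale=0.02, size=100).clip(0.0, 1.0)

LO, HI = 0.92, 1.00  # assumed prespecified fill-rate range

def service_probability(sample):
    """Estimated P(fill rate within [LO, HI]) from one sample."""
    return np.mean((sample >= LO) & (sample <= HI))

# Bootstrap: resample the replications with replacement to quantify
# the uncertainty of the short-term service probability.
B = 5000
boot = np.array([
    service_probability(rng.choice(fill_rates, size=fill_rates.size, replace=True))
    for _ in range(B)
])

point = service_probability(fill_rates)
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"service probability ~ {point:.3f}, 95% bootstrap CI [{lo_ci:.3f}, {hi_ci:.3f}]")
```

    Comparing control schemes then amounts to repeating this estimate per scheme and scenario, keeping only schemes that satisfy the fill-rate constraint and ranking those on long-term WIP.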

    Some advances in importance sampling of reliability models based on zero variance approximation

    We are interested in estimating, through simulation, the probability of entering a rare failure state before a regeneration state. Since this probability is typically small, we apply importance sampling. The method we use is based on finding the most likely paths to failure. We present an algorithm that is guaranteed to produce an estimator meeting the conditions presented in [10] and [9] for vanishing relative error. We furthermore demonstrate how the procedure used to obtain the change of measure can be executed a second time to achieve even further variance reduction, using ideas from [5], and we also apply this technique to the method of failure biasing, with which we compare our results.
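
    The toy example below illustrates only the generic importance-sampling identity that such methods build on, not the paper's zero-variance change of measure for regenerative reliability models: a small Gaussian tail probability is estimated by sampling from a density shifted toward the rare set and reweighting by the likelihood ratio (the threshold and sample size are arbitrary choices for illustration).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, a = 100_000, 4.0  # sample size; rare-event threshold for p = P(Z > a)

# Naive Monte Carlo: almost no samples hit the rare set.
p_naive = np.mean(rng.standard_normal(n) > a)

# Importance sampling: draw from N(a, 1), i.e., shift the sampling density
# toward the "most likely point" of the rare set, then reweight each sample
# by the likelihood ratio f(x) / g(x).
x = rng.normal(loc=a, scale=1.0, size=n)
w = stats.norm.pdf(x) / stats.norm.pdf(x, loc=a, scale=1.0)
est = (x > a) * w
p_is = est.mean()
rel_err = est.std(ddof=1) / (np.sqrt(n) * p_is)

print(f"true       {stats.norm.sf(a):.3e}")
print(f"naive MC   {p_naive:.3e}")
print(f"IS         {p_is:.3e}  (relative error {rel_err:.4f})")
```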

    Combining long memory and level shifts in modeling and forecasting the volatility of asset returns

    We propose a parametric state space model of asset return volatility with an accompanying estimation and forecasting framework that allows for ARFIMA dynamics, random level shifts, and measurement errors. The Kalman filter is used to construct the state-augmented likelihood function and subsequently to generate forecasts, which are mean- and path-corrected. We apply our model to eight daily volatility series constructed from both high-frequency and daily returns. Full-sample parameter estimates reveal that random level shifts are present in all series. Genuine long memory is present in high-frequency measures of volatility, whereas there is little remaining dynamics in the volatility measures constructed from daily returns. From extensive forecast evaluations, we find that our ARFIMA model with random level shifts consistently belongs to the 10% Model Confidence Set across a variety of forecast horizons, asset classes, and volatility measures. The gains in forecast accuracy can be very pronounced, especially at longer horizons.
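
    As a simplified stand-in for the state-space machinery described above, the sketch below evaluates the Kalman-filter likelihood of a local-level model with measurement error on a hypothetical log-volatility series containing one level shift; it omits the ARFIMA dynamics and the random level-shift process of the paper's full specification.

```python
import numpy as np

def kalman_loglik(y, q, r, mu0=0.0, p0=1e6):
    """Log-likelihood of the local-level model
    y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t,
    with state noise variance q and measurement noise variance r."""
    mu, p, ll = mu0, p0, 0.0
    for yt in y:
        p_pred = p + q                 # predict state variance
        v = yt - mu                    # innovation
        f = p_pred + r                 # innovation variance
        ll += -0.5 * (np.log(2.0 * np.pi * f) + v * v / f)
        k = p_pred / f                 # Kalman gain
        mu += k * v                    # update state mean
        p = (1.0 - k) * p_pred         # update state variance
    return ll

# Hypothetical log-volatility series: random walk plus one level shift
# halfway through, observed with measurement error.
rng = np.random.default_rng(2)
t = 500
level = np.cumsum(rng.normal(0.0, 0.05, t)) + (np.arange(t) >= t // 2) * 1.0
y = level + rng.normal(0.0, 0.3, t)
print("log-likelihood:", round(kalman_loglik(y, q=0.05**2, r=0.3**2), 2))
```

    In the paper's framework, maximizing such a likelihood over the parameters, with the state augmented to track the level-shift and long-memory components, yields the estimates used for forecasting.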