
    Dynamic modelling and optimisation of carbon management strategies in gold processing

    This thesis presents the development and application of a dynamic model of gold adsorption onto activated carbon in gold processing. The primary aim of the model is to investigate different carbon management strategies for the Carbon in Pulp (CIP) process. The model is based on simple film-diffusion mass transfer and the Freundlich isotherm to describe the equilibrium between gold in solution and gold adsorbed onto carbon. A major limitation in the development of a dynamic model is the limited availability of accurate plant data that tracks the dynamic behaviour of the plant. This limitation is overcome by using a pilot-scale CIP gold processing plant to obtain such data. All operating parameters of this pilot plant can be manipulated and controlled to a greater degree than those of a full-scale plant, which enables a greater amount of operating data to be obtained and utilised. Two independent experiments were performed to build the model. A series of equilibrium tests were performed to obtain parameter values for the Freundlich isotherm, and results from an experimental run of the CIP pilot plant were used to obtain the other model parameter values. The model was then verified via another independent experiment. The results show that, for a given set of operating conditions, the simulated predictions were in good agreement with the CIP pilot plant experimental data. The model was then used to optimise the operations of the pilot plant. The evaluation of the plant optimisation simulations was based on an objective function developed to quantitatively compare different simulated conditions. This objective function was derived from the revenue and costs of the CIP plant. The objective function costings developed for this work were compared with published data and were found to be within the published range. This objective function can be used to evaluate the performance of any CIP plant, from a small-scale laboratory plant to a full-scale gold plant. The model, along with its objective function, was used to investigate different carbon management strategies and to determine the most cost-effective approach. A total of 17 different carbon management strategies were investigated. An additional two experimental runs were performed on the CIP pilot plant to verify the simulation model and objective function developed. Finally, an application of the simulation model is discussed. The model was used to generate plant data to develop an operational classification model of the CIP process using machine learning algorithms. This application can then be used as part of an online diagnosis tool.
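
    A minimal sketch of the core rate model named in the abstract: film-diffusion mass transfer coupled with a Freundlich isotherm for the solution/carbon equilibrium, applied to a single well-mixed tank. All parameter values (k_f, A, n, carbon concentration) are illustrative placeholders, not the thesis's fitted values.

```python
def freundlich_equilibrium_solution(q, A, n):
    """Solution-phase gold concentration in equilibrium with carbon loading q,
    obtained by inverting the Freundlich isotherm q = A * C**(1/n)."""
    return (q / A) ** n

def simulate_tank(C0=10.0, q0=0.0, A=20.0, n=2.0, k_f=0.05,
                  carbon_conc=25.0, dt=0.1, t_end=48.0):
    """Integrate gold in solution (C, mg/L) and gold on carbon (q, mg/g)
    for a single well-mixed CIP tank using an explicit Euler step."""
    C, q = C0, q0
    history = []
    for step in range(int(t_end / dt) + 1):
        history.append((step * dt, C, q))
        C_star = freundlich_equilibrium_solution(q, A, n)
        # Film-diffusion driving force: bulk solution concentration minus the
        # concentration that would be in equilibrium with the current loading.
        rate = k_f * (C - C_star)        # mg Au / (g carbon * h)
        q += rate * dt                   # carbon gains gold
        C -= rate * carbon_conc * dt     # solution loses the same mass per litre
    return history

if __name__ == "__main__":
    for t, C, q in simulate_tank()[::60]:   # print every 6 hours
        print(f"t = {t:5.1f} h   C = {C:6.3f} mg/L   q = {q:6.3f} mg/g")
```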

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
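
    As a hedged illustration of the general idea reviewed in the paper, the sketch below estimates a rare "system down before return to all-up" probability in a toy birth-death model using simple failure biasing and a likelihood-ratio correction; the transition probabilities and biasing level are invented for the example and are not taken from the paper.

```python
import random

def estimate(p=0.01, p_bias=0.5, N=5, trials=100_000, seed=1):
    """Importance-sampling estimate of P(hit the failure state N before
    returning to the all-up state 0, starting from state 1).

    p       -- true probability that the next event is a failure (rare)
    p_bias  -- biased failure probability used to drive the simulation
    """
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        state, weight = 1, 1.0
        while 0 < state < N:
            if random.random() < p_bias:          # failure step under the biased law
                weight *= p / p_bias              # likelihood ratio for a failure
                state += 1
            else:                                  # repair step
                weight *= (1 - p) / (1 - p_bias)  # likelihood ratio for a repair
                state -= 1
        if state == N:                             # rare event observed
            total += weight                        # unbiased contribution
    return total / trials

if __name__ == "__main__":
    # Ordinary simulation with 100,000 trials would almost surely return 0 here.
    print("importance-sampling estimate:", estimate())
```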

    Nonparametric bootstrapping of the reliability function for multiple copies of a repairable item modeled by a birth process

    Nonparametric bootstrap inference is developed for the reliability function estimated from censored, nonstationary failure time data for multiple copies of repairable items. We assume that each copy has a known, but not necessarily the same, observation period; and upon failure of one copy, design modifications are implemented for all copies operating at that time to prevent further failures arising from the same fault. This implies that, at any point in time, all operating copies will contain the same set of faults. Failures are modeled as a birth process because there is a reduction in the rate of occurrence at each failure. The data structure comprises a mix of deterministic and random censoring mechanisms, corresponding to the known observation period of the copy and the random censoring time of each fault. Hence, bootstrap confidence intervals and regions for the reliability function measure the length of time a fault can remain within the item until realization as failure in one of the copies. Explicit formulae derived for the re-sampling probabilities greatly reduce dependency on Monte Carlo simulation. Investigations show a small bias arising in re-sampling that can be quantified and corrected. The variability generated by the re-sampling approach approximates the variability in the underlying birth process, and so supports appropriate inference. An illustrative example describes an application to a problem, and discusses the validity of the modeling assumptions within industrial practice.
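
    A minimal sketch of the general bootstrap approach, assuming ordinary Monte Carlo resampling of (time, censoring indicator) pairs and a Kaplan-Meier style survival estimator rather than the paper's explicit re-sampling probabilities; the fault data below are invented for illustration.

```python
import random

def kaplan_meier(data, t):
    """Kaplan-Meier survival estimate at time t from (time, event) pairs,
    where event is True for an observed failure and False for censoring."""
    surv = 1.0
    event_times = sorted({time for time, event in data if event and time <= t})
    for time in event_times:
        at_risk = sum(1 for s, _ in data if s >= time)
        deaths = sum(1 for s, e in data if s == time and e)
        if at_risk > 0:
            surv *= 1.0 - deaths / at_risk
    return surv

def bootstrap_reliability(data, t, B=2000, alpha=0.05, seed=2):
    """Percentile bootstrap confidence interval for the reliability R(t)."""
    random.seed(seed)
    estimates = sorted(
        kaplan_meier([random.choice(data) for _ in data], t) for _ in range(B)
    )
    lo = estimates[int((alpha / 2) * B)]
    hi = estimates[int((1 - alpha / 2) * B) - 1]
    return kaplan_meier(data, t), (lo, hi)

if __name__ == "__main__":
    # (time to failure or censoring in hours, True if a failure was observed)
    faults = [(120, True), (250, True), (300, False), (410, True),
              (500, False), (640, True), (800, False), (900, False)]
    point, (low, high) = bootstrap_reliability(faults, t=400)
    print(f"R(400) ~ {point:.3f}, 95% bootstrap CI ({low:.3f}, {high:.3f})")
```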

    Data-driven Localization and Estimation of Disturbance in the Interconnected Power System

    Identifying the location of a disturbance and its magnitude is an important component for stable operation of power systems. We study the problem of localizing and estimating a disturbance in the interconnected power system. We take a model-free approach to this problem by using frequency data from generators. Specifically, we develop a logistic-regression-based method for localization and a linear-regression-based method for estimation of the magnitude of the disturbance. Our model-free approach does not require knowledge of system parameters such as inertia constants and topology, and is shown to achieve highly accurate localization and estimation performance even in the presence of measurement noise and missing data.
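
    A hedged sketch of the two-stage, model-free idea using scikit-learn (assumed available): a logistic-regression classifier maps generator frequency deviations to the disturbance bus, and a linear regressor estimates the disturbance magnitude. The synthetic frequency features below are placeholders, not PMU measurements from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n_samples, n_generators, n_buses = 500, 10, 4

# Synthetic training set: each disturbance has a location (bus index) and a
# magnitude; generator frequency deviations respond according to an assumed
# sensitivity matrix, plus measurement noise.
locations = rng.integers(0, n_buses, n_samples)
magnitudes = rng.uniform(0.1, 1.0, n_samples)
sensitivity = rng.uniform(0.2, 1.0, (n_buses, n_generators))
freq_deviation = (magnitudes[:, None] * sensitivity[locations]
                  + 0.02 * rng.standard_normal((n_samples, n_generators)))

# Stage 1: localize the disturbance. Stage 2: estimate its magnitude.
localizer = LogisticRegression(max_iter=1000).fit(freq_deviation, locations)
estimator = LinearRegression().fit(freq_deviation, magnitudes)

test = magnitudes[:5, None] * sensitivity[locations[:5]]
print("predicted buses:     ", localizer.predict(test))
print("true buses:          ", locations[:5])
print("predicted magnitudes:", np.round(estimator.predict(test), 2))
print("true magnitudes:     ", np.round(magnitudes[:5], 2))
```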

    Estimating the cost of offshore maintenance and the benefit from condition monitoring

    The EU generally, and the UK, Belgium, Netherlands and Germany specifically, have ambitious plans for the large scale installation of offshore wind-power capacity. However, the cost of energy from offshore wind is much higher than that from land-based generation, and a substantial portion of that cost, anything between 15% and 30%, may be due to the cost of O&M alone, largely driven by delays in access and repair caused by adverse weather and sea-state, high vessel costs, higher wage costs, and lost revenue from extended down-time. As part of a condition monitoring project commissioned and funded by the ETI (Energy Technologies Institute), the authors have developed a simple tool to estimate the cost of O&M and associated lost revenue, and also to estimate the potential for condition monitoring to allow operators to reduce those costs and the loss in revenue through better maintenance scheduling. The tool builds on earlier work conducted at Strathclyde and presented at EOW 2009 on estimating offshore access delays and turbine availability using a closed form probabilistic method based on an event tree, but without extensive time-domain or Monte Carlo simulation. It currently uses wind and wave data, reliability data and component cost data mainly available in the public domain. Repairs and replacements of subsystems have been classified into a small range of different repair severities, each having their specific requirements for vessels, plant, personnel and time. Expected delays can be calculated directly for each type of repair and the overall effects are summed. Condition monitoring and other maintenance strategies are assumed to change the allocation of a particular subsystem's faults between repair categories and thereby affect its overall impact on down-time and other costs. Calculations are carried out in a spreadsheet that updates instantly when any parameter is changed. The advantage of the approach developed is that it is possible to explore the impact of changing access thresholds, reliabilities or site parameters quickly and easily without having to run a long series of simulations for each new situation.
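
    A back-of-the-envelope sketch of the closed-form costing idea, assuming a handful of repair severity categories and a single daily access probability in place of the full event-tree and weather model; every figure below is an illustrative placeholder, not ETI or Strathclyde data.

```python
TURBINE_MW = 5.0
CAPACITY_FACTOR = 0.40
ELECTRICITY_PRICE = 120.0        # per MWh, assumed
ACCESS_PROBABILITY = 0.6         # chance that a given day is workable offshore

# (failures per turbine-year, repair days on site, cost per intervention)
REPAIR_CATEGORIES = {
    "minor repair":      (3.0, 1.0,  15_000.0),
    "major repair":      (0.5, 3.0,  80_000.0),
    "major replacement": (0.1, 7.0, 400_000.0),
}

def annual_cost_per_turbine():
    """Expected annual O&M cost plus lost revenue, and approximate availability."""
    total_cost, total_downtime_days = 0.0, 0.0
    for rate, repair_days, intervention_cost in REPAIR_CATEGORIES.values():
        # Expected wait for an access window if each day is workable
        # independently with probability ACCESS_PROBABILITY.
        expected_delay = 1.0 / ACCESS_PROBABILITY - 1.0
        downtime = rate * (expected_delay + repair_days)
        lost_energy_mwh = downtime * 24 * TURBINE_MW * CAPACITY_FACTOR
        total_cost += rate * intervention_cost + lost_energy_mwh * ELECTRICITY_PRICE
        total_downtime_days += downtime
    availability = 1.0 - total_downtime_days / 365.0
    return total_cost, availability

if __name__ == "__main__":
    cost, availability = annual_cost_per_turbine()
    print(f"expected annual O&M + lost revenue per turbine: {cost:,.0f}")
    print(f"approximate availability: {availability:.1%}")
```

    Because every quantity is a closed-form expectation, changing the access probability or a category's failure rate updates the totals immediately, which mirrors the spreadsheet-style exploration described above.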

    Multivariate reliability modelling with empirical Bayes inference

    Recent developments in technology permit detailed descriptions of system performance to be collected and stored. Consequently, more data are available about the occurrence, or non-occurrence, of events across a range of classes through time. Typically this implies that reliability analysis has more information about the exposure history of a system within different classes of events. For highly reliable systems, there may be relatively few failure events. Thus there is a need to develop statistical inference to support reliability estimation when there is a low ratio of failures relative to event classes. In this paper we show how Empirical Bayes methods can be used to estimate a multivariate reliability function for a system by modelling the vector of times to realise each failure root cause.
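
    A minimal sketch of how such shrinkage can look, assuming a gamma-Poisson empirical Bayes model fitted by the method of moments and independent exponential root causes; this illustrates the general idea only and is not the paper's multivariate model. The counts and exposures are invented.

```python
import math

# (observed failures, exposure time in operating hours) per root cause
CLASSES = {
    "cause A": (2, 10_000.0),
    "cause B": (0, 12_000.0),
    "cause C": (1,  8_000.0),
    "cause D": (0, 15_000.0),
}

def fit_gamma_prior(classes):
    """Method-of-moments gamma prior for the per-cause failure rates."""
    raw = [n / t for n, t in classes.values()]
    mean = sum(raw) / len(raw)
    var = max(sum((r - mean) ** 2 for r in raw) / len(raw), 1e-12)
    beta = mean / var            # rate parameter of the gamma prior
    alpha = mean * beta          # shape parameter
    return alpha, beta

def posterior_rates(classes):
    """Empirical Bayes (posterior mean) failure rate for each root cause."""
    alpha, beta = fit_gamma_prior(classes)
    return {name: (alpha + n) / (beta + t) for name, (n, t) in classes.items()}

def system_reliability(t_hours, classes=CLASSES):
    """Product of per-cause exponential survival functions (independence assumed)."""
    return math.prod(math.exp(-lam * t_hours) for lam in posterior_rates(classes).values())

if __name__ == "__main__":
    for name, lam in posterior_rates(CLASSES).items():
        print(f"{name}: shrunk rate {lam:.2e} per hour")
    print("system reliability over 1000 h:", round(system_reliability(1000.0), 4))
```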

    Statistical Reliability Estimation of Microprocessor-Based Systems

    What is the probability that the execution state of a given microprocessor running a given application is correct, in a certain working environment with a given soft-error rate? Trying to answer this question using fault injection can be very expensive and time consuming. This paper proposes the baseline for a new methodology, based on microprocessor error probability profiling, that aims at estimating fault injection results without the need of a typical fault injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static and very fast analysis of the control and data flow of the target software application to compute its probability of success. The presented work goes beyond the dependability evaluation problem; it also has the potential to become the backbone for new tools able to help engineers choose the best hardware and software architecture to structurally maximize the probability of a correct execution of the target software.
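
    A minimal sketch of the composition idea, assuming independence between instructions: per-instruction probabilities of correct execution under a soft error (the one-time fault-injection characterisation) are combined with a static instruction-count profile of the application. The per-instruction figures and instruction mix below are invented placeholders, not characterisation results from the paper.

```python
# Probability that one execution of the instruction still yields a correct
# architectural state, given that a soft error hits during that execution.
P_CORRECT_GIVEN_ERROR = {
    "add": 0.92,
    "mul": 0.88,
    "load": 0.75,
    "store": 0.70,
    "branch": 0.83,
}

P_SOFT_ERROR_PER_INSTRUCTION = 1e-7   # assumed raw soft-error probability

# Static profile of the target application: dynamic count of each opcode class.
APPLICATION_PROFILE = {
    "add": 400_000,
    "mul": 120_000,
    "load": 250_000,
    "store": 180_000,
    "branch": 150_000,
}

def probability_of_success(profile=APPLICATION_PROFILE,
                           p_err=P_SOFT_ERROR_PER_INSTRUCTION):
    """Probability the whole run is correct, assuming independent instructions."""
    p_success = 1.0
    for opcode, count in profile.items():
        # Per instruction: either no soft error occurs, or one occurs but is masked.
        p_instr = (1.0 - p_err) + p_err * P_CORRECT_GIVEN_ERROR[opcode]
        p_success *= p_instr ** count
    return p_success

if __name__ == "__main__":
    print(f"estimated probability of correct execution: {probability_of_success():.6f}")
```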