
    Surgically generated aerosol and mitigation strategies: combined use of irrigation, respirators and suction massively reduces particulate matter aerosol

    Background Aerosol is a health risk to theatre staff. This laboratory study quantifies the reduction in particulate matter aerosol concentrations produced by electrocautery and drilling when mitigation strategies such as irrigation, respirator filtration and suction are used, in preparation for future work under live operating-theatre conditions. Methods We combined one aerosol-generating procedure (monopolar cutting or coagulating diathermy, or high-speed diamond- or steel-tipped drilling of cadaveric porcine tissue) with one or more mitigation strategies (instrument irrigation, plume suction and filtration through an FFP3 respirator filter), using an optical particle counter to measure particulate matter aerosol size and concentration. Results Significant aerosol concentrations were observed during all aerosol-generating procedures, with concentrations exceeding 3 × 10⁶ particles per 100 ml. Considerable reductions in concentrations were observed with mitigation. For drilling, suction, FFP3 filtration and irrigation (wash) alone reduced aerosol by 19.3–31.6%, 65.1–70.8% and 97.2 to >99.9% respectively. The greatest reduction (97.38 to >99.9%) was observed when combining irrigation and filtration. Coagulating diathermy reduced concentrations by 88.0–96.6% relative to cutting diathermy, but produced larger particles. Suction alone, and suction with filtration, reduced aerosol concentrations by 41.0–49.6% and 88.9–97.4% respectively. No tested mitigation strategy returned aerosol concentrations to baseline. Conclusion Aerosol concentrations are significantly reduced through the combined use of filtration, suction and irrigation. Further research is required to characterise aerosol concentrations in the live operating theatre, to establish acceptable exposure limits and, in their absence, to find methods that further reduce exposure for theatre staff.
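    The percentage reductions reported above are relative to the unmitigated particle count. A minimal sketch of that arithmetic, using hypothetical counts chosen only to fall within the reported ranges (none of the numbers below are measurements from the study):

```python
def percent_reduction(unmitigated, mitigated):
    """Percentage reduction in particle count relative to the unmitigated case."""
    return 100.0 * (unmitigated - mitigated) / unmitigated

# Hypothetical particle counts per 100 ml; baseline matches the study's
# order of magnitude (>3 x 10^6 particles per 100 ml).
baseline = 3_000_000
print(percent_reduction(baseline, 2_100_000))  # ~30%, within the suction-alone range
print(percent_reduction(baseline, 900_000))    # ~70%, within the FFP3-filtration range
print(percent_reduction(baseline, 3_000))      # ~99.9%, within the irrigation range
```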

    Dynamic control of a single-server system with abandonments

    In this paper, we discuss dynamic server control in a two-class service system with abandonments. Two models are considered. In the first, rewards are received upon service completion and there are no abandonment costs (other than the lost opportunity to gain rewards). In the second, holding costs per customer per unit time are accrued and each abandonment incurs a fixed cost. Both cases are considered under the discounted and average reward/cost criteria. These are extensions of the classic scheduling question (without abandonments), for which it is well known that simple priority rules are optimal. The contributions of this paper are twofold. First, we show that the classic c-μ rule does not hold in general. An additional condition on the ordering of the abandonment rates is sufficient to recover the priority rule. Counterexamples show that this condition is not necessary, but that when it is violated, significant loss can occur. In the reward case, we show that the decision involves an intuitive tradeoff between earning more rewards and avoiding idling. Second, we note that traditional solution techniques are not directly applicable. Since customers may leave between services, an interchange argument cannot be applied; since the abandonment rates are unbounded, we cannot apply uniformization, and thus cannot use the usual discrete-time Markov decision process techniques. After formulating the problem as a continuous-time Markov decision process (CTMDP), we use sample path arguments in the reward case and a careful truncation in the holding cost case to obtain the results. To the best of our knowledge, this is the first time either technique has been used in conjunction with a CTMDP to show structure in a queueing control problem. The insights from each model are supported by a detailed numerical study. © 2010 Springer Science+Business Media, LLC
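    For reference, the classic c-μ rule (without abandonments) simply serves classes in decreasing order of the index c·μ. A minimal sketch with illustrative parameters (not taken from the paper):

```python
def c_mu_priority(costs, rates):
    """Return class indices sorted by descending c*mu index (serve highest first)."""
    indices = [c * mu for c, mu in zip(costs, rates)]
    return sorted(range(len(costs)), key=lambda i: -indices[i])

costs = [4.0, 3.0]   # holding cost c_i per customer per unit time
rates = [1.0, 2.0]   # service rates mu_i
# Class 1 has index 3*2 = 6 > 4 = 4*1, so it gets priority.
print(c_mu_priority(costs, rates))  # -> [1, 0]
```

The paper's point is that with abandonments this index ordering alone is no longer sufficient in general; an added condition on the ordering of the abandonment rates recovers the priority rule.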

    Finite model approximations in decentralized stochastic control

    In this chapter, we study the approximation of static and dynamic team problems using finite models obtained through uniform discretization, on a finite grid, of the observation and action spaces of the agents. In particular, we are interested in the asymptotic optimality of quantized policies.
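    The uniform discretization described above can be sketched for a one-dimensional compact space as a finite grid plus a nearest-neighbor quantizer; the interval and grid size below are illustrative assumptions, not values from the chapter:

```python
def uniform_grid(low, high, n):
    """n equally spaced points on [low, high]: a uniform finite grid."""
    step = (high - low) / (n - 1)
    return [low + i * step for i in range(n)]

def quantize(x, grid):
    """Nearest-neighbor quantizer: map x to the closest grid point."""
    return min(grid, key=lambda g: abs(g - x))

grid = uniform_grid(0.0, 1.0, 5)   # {0.0, 0.25, 0.5, 0.75, 1.0}
print(quantize(0.6, grid))         # nearest grid point: 0.5
```

A quantized policy then takes values only on such a grid; asymptotic optimality concerns what happens as the grid is refined (n grows).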

    Analysis of a multiserver queue with setup times

    This paper deals with the analysis of an M/M/c queueing system with setup times. This queueing model captures the major characteristics of phenomena occurring in production when the system consists of a set of machines monitored by a single operator. We carry out an extensive analysis of the system, including the limiting distribution of the system state, waiting time analysis, the busy period and the maximum queue length.
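    As a simplified baseline (the plain M/M/c queue, without the setup times analysed in the paper), the limiting distribution follows from the birth-death balance equations; a sketch with illustrative parameters:

```python
def mmc_stationary(lam, mu, c, n_max=200):
    """Limiting distribution of a plain M/M/c queue (no setup times),
    computed from detailed balance on a truncated state space."""
    pi = [1.0]                          # unnormalized pi_0
    for n in range(1, n_max + 1):
        rate = mu * min(n, c)           # at most c servers can be busy
        pi.append(pi[-1] * lam / rate)  # pi_n = pi_{n-1} * lambda / (mu * min(n, c))
    total = sum(pi)
    return [p / total for p in pi]

pi = mmc_stationary(lam=2.0, mu=1.0, c=3)   # stable: rho = 2/3 < 1
print(round(pi[0], 4))                      # empty-system probability, 1/9 here
```

With setup times the state space gains a setup phase and the balance equations change, which is what the paper analyses.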

    Finite-action approximation of Markov decision processes

    In this chapter, we study the finite-action approximation of optimal control policies for discrete-time Markov decision processes (MDPs) with Borel state and action spaces, under discounted and average cost criteria. One main motivation for considering this problem stems from the optimal information transmission problem in networked control systems. In many applications of networked control, perfect transmission of the control actions to an actuator is infeasible when there is a communication channel of finite capacity between the controller and the actuator. Hence, the actions of the controller must be discretized (quantized) to facilitate reliable transmission. Although the problem of optimal information transmission from a plant/sensor to a controller has been studied extensively (see, e.g., [148] and references therein), much less is known about the problem of transmitting actions from a controller to an actuator. Such transmission schemes usually require a simple encoding/decoding rule, since the actuator does not have the computational capability of the controller to run complex algorithms. For this reason, time-invariant scalar quantization is a practically useful encoding method for controller-actuator communication.
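    A time-invariant scalar quantizer of the kind described above can be sketched as a fixed uniform cell partition with midpoint reconstruction; the action range and bit rate below are illustrative assumptions:

```python
def make_quantizer(lo, hi, bits):
    """Uniform scalar quantizer on [lo, hi] with 2**bits reproduction levels."""
    levels = 2 ** bits
    step = (hi - lo) / levels

    def encode(u):
        u = min(max(u, lo), hi - 1e-12)   # clip action into the quantizer range
        return int((u - lo) / step)       # cell index: the only data transmitted

    def decode(idx):
        return lo + (idx + 0.5) * step    # reconstruct at the cell midpoint

    return encode, decode

enc, dec = make_quantizer(-1.0, 1.0, bits=3)  # 3 bits -> 8 levels, step 0.25
print(dec(enc(0.3)))                          # action 0.3 is reconstructed as 0.375
```

Because the same (time-invariant) rule is used at every step, the actuator's decoder is a single table lookup, which matches the low-complexity requirement noted above.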