Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries
Background
Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres.
Methods
This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries.
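The phase-3 co-prioritization step can be sketched as a simple rating-aggregation computation (a hypothetical illustration only: the intervention names, rating values, and equal-weight scoring rule are ours, not the study's data or protocol):

```python
# Hypothetical sketch of co-prioritization: interventions are ranked by
# combining acceptability, feasibility, and safety ratings (1-5 scale).
# All names and numbers below are illustrative, not the study's data.
from statistics import mean

ratings = {
    "introduce recycling":       {"acceptability": [5, 4, 5], "feasibility": [4, 4, 5], "safety": [5, 5, 5]},
    "reduce anaesthetic gases":  {"acceptability": [4, 5, 4], "feasibility": [4, 3, 4], "safety": [5, 4, 5]},
    "reusable surgical devices": {"acceptability": [4, 4, 3], "feasibility": [3, 4, 3], "safety": [4, 4, 4]},
}

def priority_score(item):
    # Equal-weight mean across the three criteria (a simplifying assumption;
    # the study balanced patient and clinician views rather than averaging).
    return mean(mean(scores) for scores in item.values())

ranked = sorted(ratings, key=lambda name: priority_score(ratings[name]), reverse=True)
print(ranked)
```

In the study itself, separate ranked lists were then produced for high-income and low–middle-income settings (phase 4); a weighting per setting could replace the equal-weight mean above.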
Results
In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable for more than 90 per cent of patients except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. In phase 4, the top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia.
Conclusion
This study is a step toward environmentally sustainable operating environments, providing actionable interventions applicable to both high-income and low–middle-income countries.
Linear Quadratic Control Problem For Jump Linear Systems With No Observation Of The Markov Chain States
This paper studies a stochastic control problem for a class of linear systems subject to Markovian jumps among different modes, with a quadratic cost. The system is assumed to be partially observable in the sense that the jumping parameter is not accessible, and the control can depend only on the present value of the linear state variable. The main feature of the approach is that the problem is not recast as one with complete observations; instead, the solution is determined by a set of interconnected Riccati equations, similar to the complete-observation case. Distinctive attributes of the approach are its robustness flavour and the explicit form obtained for the optimal control policy.
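For orientation, coupled Riccati equations of the kind mentioned here take the following form in the standard complete-observation MJLS setting (illustrative notation; the paper's interconnected equations for the no-observation case are a variant of this structure, not reproduced here):

```latex
% Discrete-time coupled algebraic Riccati equations for an MJLS with
% modes i, transition probabilities p_{ij}, and data (A_i, B_i, Q_i, R_i).
\[
P_i \;=\; Q_i \;+\; A_i^{\top}\,\mathcal{E}_i(P)\,A_i
\;-\; A_i^{\top}\,\mathcal{E}_i(P)\,B_i
\bigl(R_i + B_i^{\top}\,\mathcal{E}_i(P)\,B_i\bigr)^{-1}
B_i^{\top}\,\mathcal{E}_i(P)\,A_i,
\qquad
\mathcal{E}_i(P) \;=\; \sum_{j} p_{ij}\,P_j .
\]
```

The operator \(\mathcal{E}_i(\cdot)\) couples the equations across modes through the transition probabilities, which is what distinguishes them from independent per-mode Riccati equations.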
Weak Detectability And The LQ Problem Of Discrete-Time Infinite Markov Jump Linear Systems
The paper deals with a concept of weak detectability for discrete-time infinite Markov jump linear systems, which relates the stochastic convergence of the output with the stochastic convergence of the state and generalizes previous concepts. Certain invariant sets are introduced, which allow us to find a related system that is stochastically detectable if and only if the original system is weakly detectable. This provides the necessary tools to show, via an additional assumption, that the weak detectability concept is invariant with respect to linear state feedback control. As an immediate extension, the result shows that linear state feedback controls are stabilizing whenever the associated cost functional is bounded. In addition, it is shown that the detectability concept ensures that the solution of the jump linear quadratic (JLQ) problem is stabilizing and that the solution of the associated algebraic Riccati equation is unique and stabilizing, thus retrieving the usual role that detectability concepts play for finite-dimensional MJLS and linear deterministic systems. Finally, regarding the assumption, the paper shows that it is not related to the detectability concept, that it always holds for finite-dimensional Markov jump linear systems, and that it holds under a condition of uniform observability on trajectories associated with non-convergent output.
Approximate dynamic programming via direct search in the space of value function approximations
This paper deals with approximate value iteration (AVI) algorithms applied to discounted dynamic programming (DP) problems. For a fixed control policy, the span semi-norm of the so-called Bellman residual is shown to be convex in the Banach space of candidate solutions to the DP problem. This fact motivates the introduction of an AVI algorithm with local search that seeks to minimize the span semi-norm of the Bellman residual in a convex value function approximation space. The novelty here is that the optimality of a point in the approximation architecture is characterized by means of convex optimization concepts and necessary and sufficient conditions to local optimality are derived. The procedure employs the classical AVI algorithm direction (Bellman residual) combined with a set of independent search directions, to improve the convergence rate. It has guaranteed convergence and satisfies, at least, the necessary optimality conditions over a prescribed set of directions. To illustrate the method, examples are presented that deal with a class of problems from the literature and a large state space queueing problem setting.
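The two central objects here, the span semi-norm and the Bellman residual, can be sketched on a small fixed-policy problem (a minimal illustration under our own assumptions: a tabular two-state chain and a crude step-size search along the residual direction; the function names are ours, and this is not the paper's algorithm in full):

```python
import numpy as np

def span(v):
    # Span semi-norm: sp(v) = max(v) - min(v); zero on constant vectors.
    return np.max(v) - np.min(v)

def bellman_operator(v, P, c, gamma):
    # Fixed-policy Bellman operator: (Tv)(s) = c(s) + gamma * sum_s' P[s,s'] v(s').
    return c + gamma * P @ v

def avi_span_search(P, c, gamma, iters=200):
    # Approximate value iteration that moves along the Bellman residual
    # direction and picks the step size minimizing the span of the
    # resulting residual (a direct line search, as a toy stand-in for
    # the paper's multi-direction search).
    v = np.zeros(len(c))
    for _ in range(iters):
        residual = bellman_operator(v, P, c, gamma) - v
        steps = np.linspace(0.1, 2.0, 20)
        best = min(
            steps,
            key=lambda a: span(
                bellman_operator(v + a * residual, P, c, gamma) - (v + a * residual)
            ),
        )
        v = v + best * residual
    return v

# Two-state Markov chain under a fixed policy.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
c = np.array([1.0, 0.0])
gamma = 0.95
v = avi_span_search(P, c, gamma)
print(span(bellman_operator(v, P, c, gamma) - v))  # residual span near zero
```

Since the fixed-policy Bellman operator contracts the span semi-norm, the residual span shrinks geometrically here; the paper's contribution is characterizing local optimality of such searches within a convex approximation architecture.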
An Application Of Convex Optimization Concepts To Approximate Dynamic Programming
This paper deals with approximate value iteration (AVI) algorithms applied to discounted dynamic programming (DP) problems. The so-called Bellman residual is shown to be convex in the Banach space of candidate solutions to the DP problem. This fact motivates the introduction of an AVI algorithm with local search that seeks an approximate solution in a lower-dimensional space called the approximation architecture. The optimality of a point in the approximation architecture is characterized by means of convex optimization concepts, and necessary and sufficient conditions for global optimality are derived. To illustrate the method, two examples are presented which were previously explored in the literature. © 2008 AACC.
A New Approach To Detectability Of Discrete-time Infinite Markov Jump Linear Systems
This paper deals with detectability for the class of discrete-time Markov jump linear systems (MJLS) with the underlying Markov chain having a countably infinite state space. The formulation here relates the convergence of the output with that of the state variables. Our approach introduces invariant subspaces for the autonomous system and exhibits the role that they play. This allows us to show that detectability can be written equivalently in terms of two conditions: stability of the autonomous system in a certain invariant space, and convergence of general state trajectories to this invariant space under convergence of input and output variables. This, in turn, provides the tools to show that detectability here generalizes uniform observability ideas as well as previous detectability notions for MJLS with a finite-state Markov chain, and allows us to solve the jump-linear-quadratic control problem. In addition, it is shown for MJLS with a finite Markov state space that the second condition is redundant and that detectability retrieves previously well-known concepts in their respective scenarios. Keywords: detectability, stochastic systems, Markov jump systems, infinite Markov state space, optimal control. © 2005 IEEE.
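The convergence relation that this detectability notion formalizes (input and output convergence forcing state convergence, in mean square) can be written schematically as follows; this is an illustrative rendering of the idea in standard notation, not the paper's formal definition:

```latex
% Schematic mean-square detectability implication for trajectories
% (x_k, u_k, y_k) of the jump linear system.
\[
\mathbb{E}\,\|u_k\|^2 \xrightarrow[k\to\infty]{} 0
\quad\text{and}\quad
\mathbb{E}\,\|y_k\|^2 \xrightarrow[k\to\infty]{} 0
\;\;\Longrightarrow\;\;
\mathbb{E}\,\|x_k\|^2 \xrightarrow[k\to\infty]{} 0 .
\]
```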
Characterizations Of Radon Spaces
Assuming hypotheses only on the σ-algebra F, we characterize (via Radon spaces) the class of measurable spaces (Ω, F) that admit a regular conditional probability for all probabilities on F. © 1999 Elsevier Science B.V.
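For context, the object whose existence is characterized is the standard one: given a probability space and a sub-σ-algebra, a regular conditional probability is a kernel satisfying the following (textbook definition, not specific to this paper):

```latex
% A map \nu : \Omega \times \mathcal{F} \to [0,1] is a regular conditional
% probability given \mathcal{G} \subseteq \mathcal{F} on (\Omega,\mathcal{F},\mathbb{P}) if:
\begin{itemize}
  \item $\nu(\omega,\cdot)$ is a probability measure on $\mathcal{F}$ for every $\omega \in \Omega$;
  \item $\omega \mapsto \nu(\omega,A)$ is $\mathcal{G}$-measurable for every $A \in \mathcal{F}$;
  \item $\nu(\cdot,A) = \mathbb{P}(A \mid \mathcal{G})$ \;$\mathbb{P}$-a.s. for every $A \in \mathcal{F}$.
\end{itemize}
```

The paper's contribution is identifying, via Radon spaces, exactly which measurable spaces (Ω, F) guarantee such a kernel exists for every probability on F.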