
    Markov Chain-based Cost-Optimal Control Charts for Healthcare Data

    Control charts have traditionally been used in industrial statistics but are constantly finding new areas of application, especially in the age of Industry 4.0. This paper introduces a new method suitable for applications in the healthcare sector, especially for monitoring a health characteristic of a patient. We adapt a Markov chain-based approach and develop a method in which not only the shift size (i.e. the degradation of the patient's health) can be random, but also the effect of the repair (i.e. treatment) and the time between samplings (i.e. visits). This means that we avoid many commonly made assumptions that are usually not applicable to medical treatments. The average cost of the protocol, which is determined by the time between samplings and the control limit, can be estimated using the stationary distribution of the Markov chain. Furthermore, we incorporate the standard deviation of the cost into the optimisation procedure, which is often very important from a process control viewpoint. The sensitivity of the optimal parameters, and of the resulting average cost and cost standard deviation, to different parameter values is investigated. We demonstrate the usefulness of the approach on real-life data of patients treated in Hungary, namely the monitoring of the cholesterol level of patients at risk of cardiovascular events. The results show that the optimal parameters from our approach can differ somewhat from the original medical parameters.
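
    As a minimal sketch of the cost computation described above, assume the patient's health is discretised into a few states with a known per-visit transition matrix P and per-state cost vector c. In the paper's setting both would depend on the chosen time between samplings and control limit; all names and numbers below are illustrative, not the paper's implementation.

        import numpy as np

        def stationary_distribution(P):
            """Stationary distribution of transition matrix P: the left
            eigenvector for eigenvalue 1, normalised to sum to one."""
            vals, vecs = np.linalg.eig(P.T)
            pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
            return pi / pi.sum()

        def cost_moments(P, c):
            """Stationary mean and standard deviation of the per-visit cost c."""
            pi = stationary_distribution(P)
            mean = pi @ c
            sd = np.sqrt(pi @ (c - mean) ** 2)
            return mean, sd

        # Illustrative 3-state chain: in-control, degraded, out-of-control.
        P = np.array([[0.90, 0.08, 0.02],
                      [0.10, 0.80, 0.10],
                      [0.30, 0.20, 0.50]])
        c = np.array([1.0, 4.0, 10.0])   # sampling + treatment cost in each state
        mean, sd = cost_moments(P, c)
        print(f"expected cost per visit: {mean:.3f}, std dev: {sd:.3f}")

    Optimising the protocol would then amount to rebuilding P and c for each candidate (sampling interval, control limit) pair and minimising a combination of the resulting mean and standard deviation.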

    The nonlinear Bernstein-Schrödinger equation in Economics

    In this paper we relate the Equilibrium Assignment Problem (EAP), which underlies several models in economics, to a system of nonlinear equations that we call the "nonlinear Bernstein-Schrödinger system". This system is well known in the linear case, but its nonlinear extension does not seem to have been studied. We apply this connection to derive an existence result for the EAP and an efficient computational method. Comment: 8 pages, submitted to Lecture Notes in Computer Science.
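
    The abstract does not spell out the computational method, but the linear Bernstein-Schrödinger system it calls well known is the matrix-scaling problem solved by iterative proportional fitting (IPFP, also known as Sinkhorn iteration). The sketch below shows that linear base case with illustrative names and data; in a nonlinear extension the closed-form division in each update would be replaced by solving a scalar equation, e.g. by bisection. This is context, not the paper's algorithm.

        import numpy as np

        def ipfp(K, p, q, tol=1e-12, max_iter=10_000):
            """Linear case: find positive scalings u, v such that
            diag(u) @ K @ diag(v) has row sums p and column sums q."""
            u, v = np.ones(len(p)), np.ones(len(q))
            for _ in range(max_iter):
                u = p / (K @ v)          # enforce the row-sum equations
                v_new = q / (K.T @ u)    # enforce the column-sum equations
                if np.max(np.abs(v_new - v)) < tol:
                    return u, v_new
                v = v_new
            return u, v

        rng = np.random.default_rng(0)
        K = rng.uniform(0.5, 1.5, (4, 5))
        p = np.full(4, 1 / 4)   # target row sums
        q = np.full(5, 1 / 5)   # target column sums (same total mass)
        u, v = ipfp(K, p, q)
        M = np.diag(u) @ K @ np.diag(v)
        print(np.allclose(M.sum(axis=1), p), np.allclose(M.sum(axis=0), q))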

    Ultrasonic Nondestructive Evaluation of Cracked Composite Laminates

    The use of guided waves in the ultrasonic nondestructive evaluation of structural components, e.g., bonded plates and composite laminates, has received considerable attention in recent years. Highly accurate and efficient experimental techniques have been developed to generate, record and analyze these waves in laboratory specimens, leading to an improved capability in flaw detection and material characterization in a variety of materials [1–4]. A convenient method to generate guided waves in a plate or laminate is the so-called leaky Lamb wave (LLW) technique. It has been demonstrated in several recent papers [5–7] that the phase velocity and amplitude of guided waves in composite laminates can be determined very accurately over a broad range of frequencies and velocities by the LLW technique.

    lp-Recovery of the Most Significant Subspace among Multiple Subspaces with Outliers

    We assume data sampled from a mixture of d-dimensional linear subspaces with spherically symmetric distributions within each subspace and an additional outlier component with a spherically symmetric distribution within the ambient space (for simplicity we may assume that all distributions are uniform on their corresponding unit spheres). We also assume mixture weights for the different components. We say that one of the underlying subspaces of the model is most significant if its mixture weight is higher than the sum of the mixture weights of all other subspaces. We study the recovery of the most significant subspace by minimizing the lp-averaged distances of data points from d-dimensional subspaces, where p>0. Unlike other lp minimization problems, this minimization is non-convex for all p>0 and thus requires different methods for its analysis. We show that if 0<p<=1, then for any fraction of outliers the most significant subspace can be recovered by lp minimization with overwhelming probability (which depends on the generating distribution and its parameters). We also show that when small noise is added around the underlying subspaces, the most significant subspace can be nearly recovered by lp minimization for any 0<p<=1, with an error proportional to the noise level. On the other hand, if p>1 and there is more than one underlying subspace, then with overwhelming probability the most significant subspace cannot be recovered or nearly recovered. This last result does not require spherically symmetric outliers. Comment: This is a revised version of the part of 1002.1994 that deals with single subspace recovery. V3: Improved estimates (in particular for Lemma 3.1 and for estimates relying on it), asymptotic dependence of probabilities and constants on D and d, and further clarifications; for simplicity it assumes uniform distributions on spheres. V4: minor revision for the published version.
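
    As a rough illustration of the objective being minimized, the sketch below evaluates the lp-averaged distance of data points to a candidate subspace on synthetic data matching the model above (uniform distributions on unit spheres). All names are illustrative; the actual non-convex minimization over subspaces requires the specialized analysis developed in the paper.

        import numpy as np

        def lp_energy(X, B, p):
            """l_p objective: sum of ||x - B @ B.T @ x||^p over rows x of X,
            where B is a D x d matrix with orthonormal columns spanning a
            candidate subspace."""
            residuals = X - (X @ B) @ B.T
            return np.sum(np.linalg.norm(residuals, axis=1) ** p)

        rng = np.random.default_rng(0)
        D, d, n_in, n_out = 10, 3, 200, 200
        B_true, _ = np.linalg.qr(rng.standard_normal((D, d)))       # true subspace basis
        inliers = rng.standard_normal((n_in, d)) @ B_true.T         # points in the subspace
        inliers /= np.linalg.norm(inliers, axis=1, keepdims=True)   # uniform on its unit sphere
        outliers = rng.standard_normal((n_out, D))
        outliers /= np.linalg.norm(outliers, axis=1, keepdims=True) # uniform on ambient sphere
        X = np.vstack([inliers, outliers])

        B_rand, _ = np.linalg.qr(rng.standard_normal((D, d)))
        for p in (0.5, 1.0):
            # The true subspace attains lower energy than a random one.
            print(p, lp_energy(X, B_true, p) < lp_energy(X, B_rand, p))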

    A global optimisation approach to range-restricted survey calibration

    Survey calibration methods minimally modify unit-level sample weights so that they fit domain-level benchmark constraints (BC). This allows the exploitation of auxiliary information, e.g. census totals, to improve the representativeness of sample data (addressing coverage limitations and non-response) and the quality of estimates of population parameters. Calibration methods may fail with samples presenting small or zero counts for some benchmark groups, or when range restrictions (RR), such as positivity, are imposed to avoid unrealistic or extreme weights. User-defined modifications of BC/RR performed after encountering non-convergence allow little control over the solution, and penalisation approaches that model infeasibility may not guarantee convergence. Paradoxically, this has led to the underuse in calibration of highly disaggregated information, when available. We present an always-convergent, flexible, two-step Global Optimisation (GO) survey calibration approach. The feasibility of the calibration problem is assessed, and automatically controlled minimum errors in BC or changes in RR are allowed to guarantee convergence in advance, while preserving the good properties of calibration estimators. Modelling alternatives under different scenarios, using various error/change and distance measures, are formulated and discussed. The GO approach is validated by calibrating the weights of the 2012 Health Survey for England to a fine age-gender-region cross-tabulation (378 counts) from the 2011 Census in England and Wales.
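
    A minimal sketch of the underlying calibration problem, assuming a chi-square distance, 0/1 benchmark-group indicators, and illustrative bound multipliers. It handles only the feasible case; the paper's GO approach additionally relaxes BC/RR just enough to guarantee convergence when no exact solution exists.

        import numpy as np
        from scipy.optimize import minimize

        def calibrate(w0, A, totals, lower=0.3, upper=3.0):
            """Chi-square-distance calibration: find weights w close to the
            design weights w0 with A.T @ w matching the benchmark totals,
            subject to the range restriction lower*w0 <= w <= upper*w0."""
            objective = lambda w: np.sum((w - w0) ** 2 / w0)
            cons = {"type": "eq", "fun": lambda w: A.T @ w - totals}
            bounds = list(zip(lower * w0, upper * w0))
            res = minimize(objective, w0, method="SLSQP",
                           bounds=bounds, constraints=cons)
            return res.x, res.success

        # Toy example: 50 units, 3 benchmark groups given by 0/1 indicators.
        rng = np.random.default_rng(1)
        n = 50
        w0 = rng.uniform(1.0, 3.0, n)                      # design weights
        A = rng.integers(0, 2, size=(n, 3)).astype(float)  # group membership
        totals = (A.T @ w0) * rng.uniform(0.95, 1.05, 3)   # benchmarks near sample totals
        w, ok = calibrate(w0, A, totals)
        print(ok, np.max(np.abs(A.T @ w - totals)))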

    Beyond chance? The persistence of performance in online poker

    A major issue in the widespread controversy about the legality of poker and the appropriate taxation of winnings is whether poker should be considered a game of skill or a game of chance. To inform this debate we present an analysis of the role of skill in the performance of online poker players, using a large database with hundreds of millions of player-hand observations from real-money ring games at three different stakes levels. We find that players whose earlier profitability was in the top (bottom) deciles perform better (worse) and are substantially more likely to end up in the top (bottom) performance deciles of the following time period. Regression analyses of performance on historical performance and other skill-related proxies provide further evidence of persistence and predictability. Simulations indicate that skill dominates chance when performance is measured over 1,500 or more hands of play.
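
    The decile-persistence measure can be illustrated on synthetic data with a simple skill-plus-luck model; all numbers here are invented for illustration (the paper uses real hand histories). Under pure chance, about 10% of period-one top-decile players would land in the period-two top decile; persistent skill pushes that fraction higher.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(2)
        n = 10_000
        skill = rng.normal(0.0, 1.0, n)            # latent per-player skill
        period1 = skill + rng.normal(0.0, 3.0, n)  # observed win rate = skill + luck
        period2 = skill + rng.normal(0.0, 3.0, n)

        deciles = pd.DataFrame({
            "d1": pd.qcut(period1, 10, labels=False),  # 0 = bottom decile, 9 = top
            "d2": pd.qcut(period2, 10, labels=False),
        })
        # Fraction of period-1 top-decile players finishing in the period-2
        # top decile; 0.10 would be expected under pure chance.
        top = deciles[deciles.d1 == 9]
        print((top.d2 == 9).mean())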

    Total quality: its origins and its future

    This article discusses how an efficient organization is characterized by its knowledge and learning capability. It examines the learning ability of the human animal, the logic of continuous, never-ending improvement, the catalysis of learning by the scientific method, and Grosseteste's Inductive-Deductive iteration as related to the Shewhart Cycle. Total Quality is seen as the democratization and comprehensive diffusion of the Scientific Method; it involves extrapolating knowledge from experiment to reality, which is the essence of the idea of robustness. Finally, barriers to progress are discussed and the question of how these can be tackled is considered.