
    Ensemble model output statistics for wind vectors

    A bivariate ensemble model output statistics (EMOS) technique for the postprocessing of ensemble forecasts of two-dimensional wind vectors is proposed, where the postprocessed probabilistic forecast takes the form of a bivariate normal probability density function. The postprocessed means and variances of the wind vector components are linearly bias-corrected versions of the ensemble means and ensemble variances, respectively, and the conditional correlation between the wind components is represented by a trigonometric function of the ensemble mean wind direction. In a case study on 48-hour forecasts of wind vectors over the North American Pacific Northwest with the University of Washington Mesoscale Ensemble, the bivariate EMOS density forecasts were calibrated and sharp, and showed considerable improvement over the raw ensemble and reference forecasts, including ensemble copula coupling.
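    As an illustration of the construction described above, the sketch below assembles a bivariate normal EMOS density from an ensemble of (u, v) wind forecasts. It is a minimal sketch, not the paper's fitted model: the coefficient names, the specific trigonometric form used for the correlation, and the toy values are assumptions; in practice the coefficients would be estimated from training data (e.g. by minimum CRPS or maximum likelihood).

```python
import numpy as np
from scipy.stats import multivariate_normal

def emos_wind_density(ens_u, ens_v, coef):
    """Build a bivariate normal EMOS forecast density for a (u, v) wind vector.

    ens_u, ens_v : 1-D arrays of ensemble member forecasts of the two components.
    coef         : EMOS coefficients; in practice estimated on training data,
                   here purely illustrative (assumed names and values).
    """
    mean_u, mean_v = ens_u.mean(), ens_v.mean()
    var_u, var_v = ens_u.var(ddof=1), ens_v.var(ddof=1)

    # Linearly bias-corrected means and variances of the components
    mu_u = coef["a_u"] + coef["b_u"] * mean_u
    mu_v = coef["a_v"] + coef["b_v"] * mean_v
    sig2_u = coef["c_u"] + coef["d_u"] * var_u
    sig2_v = coef["c_v"] + coef["d_v"] * var_v

    # Conditional correlation as a trigonometric function of the ensemble mean
    # wind direction (one simple choice; the paper fits its own functional form)
    theta = np.arctan2(mean_u, mean_v)
    rho = np.clip(coef["r0"] + coef["r1"] * np.cos(theta), -0.99, 0.99)

    cov_uv = rho * np.sqrt(sig2_u * sig2_v)
    cov = np.array([[sig2_u, cov_uv],
                    [cov_uv, sig2_v]])
    return multivariate_normal(mean=[mu_u, mu_v], cov=cov)

# Toy usage with made-up coefficients and an 8-member ensemble
rng = np.random.default_rng(0)
ens_u, ens_v = rng.normal(5, 2, 8), rng.normal(-1, 2, 8)
coef = dict(a_u=0.1, b_u=1.0, a_v=0.0, b_v=1.0,
            c_u=0.5, d_u=1.0, c_v=0.5, d_v=1.0, r0=0.0, r1=0.3)
forecast = emos_wind_density(ens_u, ens_v, coef)
print(forecast.pdf([5.0, -1.0]))   # forecast density at an observed wind vector
```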

    Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing, parameter, and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled differential evolution adaptive Metropolis (DREAM), that is especially designed to efficiently estimate the posterior probability density function of hydrologic model parameters in complex, high-dimensional sampling problems. This MCMC scheme adaptively updates the scale and orientation of the proposal distribution during sampling and maintains detailed balance and ergodicity. It is then demonstrated how DREAM can be used to analyze forcing data error during watershed model calibration, using a five-parameter rainfall-runoff model with streamflow data from two different catchments. Explicit treatment of precipitation error during hydrologic model calibration not only results in more appropriate prediction uncertainty bounds but also significantly alters the posterior distribution of the watershed model parameters. This has significant implications for regionalization studies. The approach also provides important new ways to estimate areal average watershed precipitation, information that is of utmost importance for testing hydrologic theory, diagnosing structural errors in models, and appropriately benchmarking rainfall measurement devices.
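    To make the proposal mechanism concrete, the sketch below implements a simplified differential-evolution Markov chain (DE-MC) update, the core idea DREAM builds on: each chain proposes a jump along the difference of two randomly chosen other chains and accepts it with the Metropolis rule. This is an illustration only; the full DREAM algorithm additionally uses subspace (crossover) sampling, multiple difference pairs, outlier-chain handling, and adaptive crossover probabilities. The function name and the toy target are assumptions.

```python
import numpy as np

def de_mc_sampler(log_post, n_chains=8, n_iter=5000, dim=5, init_scale=1.0, seed=0):
    """Simplified DE-MC sampler illustrating the proposal idea behind DREAM."""
    rng = np.random.default_rng(seed)
    gamma = 2.38 / np.sqrt(2 * dim)            # standard DE-MC jump rate
    x = rng.normal(scale=init_scale, size=(n_chains, dim))
    logp = np.array([log_post(xi) for xi in x])
    samples = []

    for _ in range(n_iter):
        for i in range(n_chains):
            # Pick two distinct other chains and jump along their difference
            a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
            prop = x[i] + gamma * (x[a] - x[b]) + rng.normal(scale=1e-6, size=dim)
            logp_prop = log_post(prop)
            # Metropolis acceptance preserves detailed balance
            if np.log(rng.uniform()) < logp_prop - logp[i]:
                x[i], logp[i] = prop, logp_prop
        samples.append(x.copy())
    return np.asarray(samples)

# Toy target: a standard normal "posterior" in 5 dimensions
chains = de_mc_sampler(lambda th: -0.5 * np.dot(th, th))
print(chains.shape)   # (n_iter, n_chains, dim)
```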

    A nonmanipulable test

    A test is said to control for type I error if it is unlikely to reject the data-generating process. However, if it is possible to produce stochastic processes at random such that, for all possible future realizations of the data, the selected process is unlikely to be rejected, then the test is said to be manipulable. So, a manipulable test has essentially no capacity to reject a strategic expert. Many tests proposed in the existing literature, including calibration tests, control for type I error but are manipulable. We construct a test that controls for type I error and is nonmanipulable. Comment: Published at http://dx.doi.org/10.1214/08-AOS597 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Contributions to an improved phenytoin monitoring and dosing in hospitalized patients

    Phenytoin (PHT) is one of the most widely used and well-established anticonvulsants for the treatment of epilepsy and a standard for antiepileptic prophylaxis in adults with severe traumatic brain injury before and after neurosurgical intervention. Its therapeutic use is challenging because PHT has a narrow therapeutic range and shows non-linear kinetics. It is extensively metabolized by a variety of CYP enzymes and is 85-95% bound to plasma proteins, mostly albumin, which also makes it an important drug-interaction candidate. Therapeutic drug monitoring (TDM) is therefore often required. Rational timing of sampling and sound interpretation of the laboratory data, translated into optimal individual dosing, are necessary, and therapeutic guidance, especially in teaching hospitals, needs to be implemented.

    Bayesian Forecasting (BF) versus conventional dosing (CD): a retrospective, long-term, single-centre analysis. In the hospital, medication management for effective antiepileptic therapy with PHT often requires rapid IV loading and subsequent dose adjustment according to TDM. To investigate how well PHT reaches the therapeutic target serum concentration, a BF regimen was compared with CD according to the official summary of product characteristics. In a Swiss acute care teaching hospital (Kantonsspital Aarau), a retrospective, single-centre, long-term analysis was performed using all PHT serum tests from the central laboratory from 1997 to 2007. The BF regimen consisted of guided, body-weight-adapted rapid IV PHT loading over five days with pre-defined TDM time points; CD was applied without written guidance. Assuming non-normally distributed data, non-parametric statistical methods were used. A total of 6’120 PHT serum levels (2’819 BF and 3’301 CD) from 2’589 patients (869 BF and 1’720 CD) were evaluated and compared. 63.6% of the PHT serum levels in the BF group were within the therapeutic range versus only 34.0% in the CD group (p<0.0001). The mean BF serum level was 52.0 ± 22.1 ”mol/L (within the target range), whereas the mean CD serum level was 39.8 ± 28.2 ”mol/L (below the target range). In the BF group, men had slightly but significantly lower PHT serum levels than women (p<0.0001); the CD group showed no significant gender difference (p=0.187). A comparative sub-analysis of age groups (children, adolescents, adults, seniors, and elderly) showed significantly lower target levels (p<0.0001) for each group under CD compared with BF. Overall, BF performed significantly better than CD in reaching therapeutic PHT serum levels.

    Free PHT assessment. Total serum levels of difficult-to-dose drugs like PHT are sometimes insufficient; knowledge of the free fraction is necessary for correct dosing. In a subgroup analysis of the BF vs. CD study above, we evaluated the suitability of the Sheiner-Tozer algorithm for calculating the free PHT fraction in hypoalbuminemic patients. Free PHT serum concentrations were calculated from the total PHT concentration in hypoalbuminemic patients and compared with the measured free PHT. The patients were separated into two groups (a low-albumin group, 25 g/L ≀ albumin ≀ 35 g/L, and a very-low-albumin group, albumin < 25 g/L), which were compared and statistically analysed for the calculated and the measured free PHT concentration. The calculated (1.2 mg/L, SD=0.7) and the measured (1.1 mg/L, SD=0.5) free PHT concentrations correlated. The mean difference in the low- and the very-low-albumin group was 0.10 mg/L (SD=1.4, n=11) and 0.13 mg/L (SD=0.24, n=12), respectively. Although the variability of the data could introduce bias, no statistically significant difference between calculated and measured values was found: t-test (p=0.78), Passing-Bablok regression, Spearman’s rank correlation (r=0.907, p=0.00), and the Bland-Altman plot with regression analysis (M=0.11, SD=0.28). We concluded that, in the absence of a measured free PHT serum concentration, the Sheiner-Tozer algorithm is a useful TDM tool, also in hypoalbuminemic patients, for calculating or checking free PHT from the total PHT and albumin concentrations.

    GC-MS analysis of biological PHT samples. To correlate PHT blood serum levels with “brain PHT levels” (the site of action of PHT), extracellular fluid from microdialysates in neurosurgical patients could be analyzed for PHT with an appropriate quantitative analytical method. In this investigation we describe the development and validation of a sensitive gas chromatography–mass spectrometry (GC–MS) method to identify and quantify PHT in brain microdialysate, saliva and blood from human samples. For sample clean-up, solid-phase extraction (SPE) was performed with a nonpolar C8-SCX column. The eluate was evaporated with nitrogen (50°C) and derivatized with trimethylsulfonium hydroxide before GC-MS analysis. 5-(p-methylphenyl)-5-phenylhydantoin was used as the internal standard. The MS was run in scan mode and identification was based on three ion fragment masses; all peaks were identified with MassLib. Spiked PHT samples showed a recovery after SPE of ≄ 94%. The calibration curve (PHT 50 to 1’200 ng/mL, n=6 at six concentration levels) showed good linearity and correlation (rÂČ > 0.998). The limit of detection was 15 ng/mL and the limit of quantification 50 ng/mL. Dried extracted samples were stable within a 15% deviation range for ≄ 4 weeks at room temperature. The method met International Organization for Standardization standards and was able to detect and quantify PHT in different biological matrices and patient samples. The GC-MS method with SPE is specific, sensitive, robust and reproducible and is therefore an appropriate candidate for pharmacokinetic assessment of PHT concentrations in different biological samples from treated patients.
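    The correction evaluated in the free-PHT subgroup analysis can be illustrated with a short sketch. The snippet below is a minimal illustration assuming the commonly cited form of the Sheiner-Tozer equation (total concentration normalized by 0.2 × albumin [g/dL] + 0.1) and the textbook ~10% free fraction at normal protein binding; the coefficient values, the function name, and the example numbers are assumptions for illustration, not the thesis's exact implementation.

```python
def sheiner_tozer_free_phenytoin(total_pht_mg_l, albumin_g_l, free_fraction_normal=0.1):
    """Estimate free phenytoin from total phenytoin and albumin (hypoalbuminemia).

    Assumes the commonly cited Sheiner-Tozer correction: normalize the measured
    total concentration to its expected value at normal albumin, then take ~10%
    of that normalized value as the free concentration (the free fraction at
    normal protein binding). Coefficients (0.2, 0.1) and the 10% free fraction
    are textbook values stated here as assumptions.
    """
    albumin_g_dl = albumin_g_l / 10.0                      # convert g/L -> g/dL
    adjusted_total = total_pht_mg_l / (0.2 * albumin_g_dl + 0.1)
    return free_fraction_normal * adjusted_total

# Example: total PHT 8 mg/L measured in a patient with albumin 22 g/L
print(round(sheiner_tozer_free_phenytoin(8.0, 22.0), 2))   # ~1.48 mg/L free PHT
```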

    Competitive on-line learning with a convex loss function

    We consider the problem of sequential decision making under uncertainty in which the loss caused by a decision depends on the following binary observation. In competitive on-line learning, the goal is to design decision algorithms that are almost as good as the best decision rules in a wide benchmark class, without making any assumptions about the way the observations are generated. However, standard algorithms in this area can only deal with finite-dimensional (often countable) benchmark classes. In this paper we give similar results for decision rules ranging over an arbitrary reproducing kernel Hilbert space. For example, it is shown that for a wide class of loss functions (including the standard square, absolute, and log loss functions) the average loss of the master algorithm, over the first N observations, does not exceed the average loss of the best decision rule with a bounded norm plus O(N^{-1/2}). Our proof technique is very different from the standard ones and is based on recent results about defensive forecasting. Given the probabilities produced by a defensive forecasting algorithm, which are known to be well calibrated and to have good resolution in the long run, we use the expected loss minimization principle to find a suitable decision. Comment: 26 pages.
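    The competitive setup can be made concrete with a toy sketch: an online learner predicts each binary outcome before it is revealed, accumulates square loss, and its average loss is compared with that of the best fixed rule chosen in hindsight. This is not the paper's defensive-forecasting master algorithm (which works over a reproducing kernel Hilbert space); the simple gradient-descent learner, feature set, and data below are assumptions used only to illustrate the loss bookkeeping.

```python
import numpy as np

def online_square_loss(features, outcomes, lr=0.1):
    """Toy illustration of competitive on-line prediction with square loss."""
    n, d = features.shape
    w = np.zeros(d)
    learner_loss = 0.0
    for x, y in zip(features, outcomes):
        pred = float(w @ x)                     # decision made before seeing y
        learner_loss += (pred - y) ** 2         # square loss on this round
        w -= lr * 2 * (pred - y) * x            # online gradient step
    # Best fixed linear rule in hindsight (least squares), for comparison
    w_star, *_ = np.linalg.lstsq(features, outcomes, rcond=None)
    best_loss = float(((features @ w_star - outcomes) ** 2).sum())
    return learner_loss / n, best_loss / n

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
y = (X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.3, size=2000) > 0).astype(float)
print(online_square_loss(X, y))   # learner's average loss vs. best rule in hindsight
```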
