225 research outputs found

    Care of the adult patient after sexual assault


    Advancing current approaches to disease management evaluation:Capitalizing on heterogeneity to understand what works and for whom

    BACKGROUND: Evaluating large-scale disease management interventions implemented in actual health care settings is a complex undertaking for which universally accepted methods do not exist. Fundamental issues, such as a lack of control patients and limited generalizability, hamper the use of the 'gold-standard' randomized controlled trial, while methodological shortcomings restrict the value of observational designs. Advancing methods for disease management evaluation in practice is pivotal to learning more about the impact of population-wide approaches. Methods must account for the presence of heterogeneity in effects, which necessitates a more granular assessment of outcomes.
    METHODS: This paper introduces multilevel regression methods as valuable techniques to evaluate 'real-world' disease management approaches in a manner that produces meaningful findings for everyday practice. In a worked example, these methods are applied to retrospectively gathered routine health care data covering a cohort of 105,056 patients who receive disease management for type 2 diabetes mellitus in the Netherlands. Multivariable, multilevel regression models are fitted to identify trends in clinical outcomes and correct for differences in characteristics of patients (age, disease duration, health status, diabetes complications, smoking status) and the intervention (measurement frequency and range, length of follow-up).
    RESULTS: After a median one-year follow-up, the Dutch disease management approach was associated with small average improvements in systolic blood pressure and low-density lipoprotein, while a slight deterioration occurred in glycated hemoglobin. Differential findings suggest that patients with poorly controlled diabetes tend to benefit most from disease management in terms of improved clinical measures. Additionally, a greater measurement frequency was associated with better outcomes, while a longer length of follow-up was accompanied by less positive results.
    CONCLUSIONS: Despite concerted efforts to adjust for potential sources of confounding and bias, there ultimately are limits to the validity and reliability of findings from uncontrolled research based on routine intervention data. While our findings are supported by previous randomized research in other settings, the trends in outcome measures presented here may have alternative explanations. Further practice-based research, perhaps using historical data to retrospectively construct a control group, is necessary to confirm results and learn more about the impact of population-wide disease management.
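    The multilevel (mixed-effects) modelling the abstract describes can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual model: the variable names (hba1c, followup, group) and effect sizes are assumptions chosen only to show the structure of a random-intercept model with patient- and intervention-level covariates.

```python
# Minimal sketch of a multivariable, multilevel regression of the kind
# described above, fitted on synthetic data with statsmodels.
# All variable names and effect sizes here are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per = 30, 40
group = np.repeat(np.arange(n_groups), n_per)
group_effect = rng.normal(0, 0.3, n_groups)[group]  # care-group-level variation
age = rng.normal(65, 10, n_groups * n_per)
followup = rng.uniform(0.5, 2.0, n_groups * n_per)  # years of follow-up
# Synthetic outcome: baseline plus covariate effects plus group-level noise
hba1c = (7.0 + 0.02 * (age - 65) + 0.1 * followup
         + group_effect + rng.normal(0, 0.5, n_groups * n_per))
df = pd.DataFrame({"hba1c": hba1c, "age": age,
                   "followup": followup, "group": group})

# Random intercept per care group; fixed effects for the covariates
model = smf.mixedlm("hba1c ~ age + followup", df, groups=df["group"])
result = model.fit()
print(result.params["followup"])  # fixed-effect estimate for follow-up length
```

    The grouping structure lets the model separate between-group variation from patient-level effects, which is what allows the "more granular assessment of outcomes" the authors call for.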

    Repeatability and reproducibility of multiparametric magnetic resonance imaging of the liver

    As the burden of liver disease reaches epidemic levels, there is a high unmet medical need to develop robust, accurate and reproducible non-invasive methods to quantify liver tissue characteristics for use in clinical development and, ultimately, in clinical practice. This prospective cross-sectional study systematically examines the repeatability and reproducibility of iron-corrected T1 (cT1), T2*, and hepatic proton density fat fraction (PDFF) quantification with multiparametric MRI across different field strengths, scanner manufacturers and models. Sixty-one adult participants with mixed liver disease aetiology, and those without any history of liver disease, underwent multiparametric MRI on combinations of 5 scanner models from two manufacturers (Siemens and Philips) at different field strengths (1.5T and 3T). We report high repeatability and reproducibility across different field strengths, manufacturers, and scanner models in standardized cT1 (repeatability CoV: 1.7%, bias -7.5 ms, 95% LoA of -53.6 ms to 38.5 ms; reproducibility CoV: 3.3%, bias 6.5 ms, 95% LoA of -76.3 to 89.2 ms) and T2* (repeatability CoV: 5.5%, bias -0.18 ms, 95% LoA -5.41 to 5.05 ms; reproducibility CoV: 6.6%, bias -1.7 ms, 95% LoA -6.61 to 3.15 ms) in human measurements. PDFF repeatability (0.8%) and reproducibility (0.75%) coefficients showed high precision of this metric. Similar precision was observed in phantom measurements. Inspection of the ICC model indicated that most of the variance in cT1 could be accounted for by study participants (ICC = 0.91), with minimal contribution from technical differences. We demonstrate that multiparametric MRI is a non-invasive, repeatable and reproducible method for quantifying liver tissue characteristics across manufacturers (Philips and Siemens) and field strengths (1.5T and 3T).
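    The repeatability statistics quoted above (bias, 95% limits of agreement, coefficient of variation) can be computed from paired scan-rescan measurements as sketched below. The cT1 values are made-up illustrative numbers, not data from the study; the formulas (Bland-Altman bias and LoA, within-subject CoV) are the standard ones these metrics usually refer to.

```python
# Sketch: repeatability metrics from hypothetical paired scan-rescan
# cT1 measurements (ms). The data here are invented for illustration.
import numpy as np

scan1 = np.array([720.0, 810.0, 690.0, 950.0, 780.0, 860.0])
scan2 = np.array([715.0, 820.0, 700.0, 945.0, 770.0, 855.0])

diff = scan2 - scan1
bias = diff.mean()                          # mean scan-rescan difference
loa_low = bias - 1.96 * diff.std(ddof=1)    # lower 95% limit of agreement
loa_high = bias + 1.96 * diff.std(ddof=1)   # upper 95% limit of agreement

# Within-subject SD from paired differences, then CoV relative to grand mean
wsd = np.sqrt(np.mean(diff ** 2) / 2.0)
cov = 100.0 * wsd / np.concatenate([scan1, scan2]).mean()
print(round(bias, 2), round(loa_low, 1), round(loa_high, 1), round(cov, 2))
```

    A small bias with narrow limits of agreement and a low CoV is what qualifies a metric as repeatable across repeat scans.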

    Long-Time Behavior of Macroscopic Quantum Systems: Commentary Accompanying the English Translation of John von Neumann's 1929 Article on the Quantum Ergodic Theorem

    The renewed interest in the foundations of quantum statistical mechanics in recent years has led us to study John von Neumann's 1929 article on the quantum ergodic theorem. We have found this almost forgotten article, which until now has been available only in German, to be a treasure chest, and to be much misunderstood. In it, von Neumann studied the long-time behavior of macroscopic quantum systems. While one of the two theorems announced in his title, the one he calls the "quantum H-theorem", is actually a much weaker statement than Boltzmann's classical H-theorem, the other theorem, which he calls the "quantum ergodic theorem", is a beautiful and very non-trivial result. It expresses a fact we call "normal typicality" and can be summarized as follows: for a "typical" finite family of commuting macroscopic observables, every initial wave function \psi_0 from a micro-canonical energy shell evolves in such a way that, for most times t in the long run, the joint probability distribution of these observables obtained from \psi_t is close to their micro-canonical distribution.
    Comment: 34 pages LaTeX, no figures; v2: minor improvements and additions. The English translation of von Neumann's article is available as arXiv:1003.213
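    The normal typicality statement summarized above can be written schematically as follows. The notation here is an assumption for illustration (it is not copied from von Neumann's article): P_j denotes the projection onto the joint eigenspace H_j of the commuting macroscopic observables inside the micro-canonical shell H.

```latex
% Schematic form of normal typicality: for a typical decomposition
% H = \bigoplus_j H_j induced by commuting macroscopic observables,
% every \psi_0 in the shell satisfies, for most times t,
\[
\bigl\| P_j \psi_t \bigr\|^2 \;\approx\; \frac{\dim \mathcal{H}_j}{\dim \mathcal{H}}
\qquad \text{for all } j ,
\]
% i.e. the joint distribution of the observables in \psi_t is close to
% the micro-canonical distribution, which weights each macro-state by
% the relative dimension of its eigenspace.
```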

    Planck 2015 results. XXVII. The Second Planck Catalogue of Sunyaev-Zeldovich Sources

    We present the all-sky Planck catalogue of Sunyaev-Zeldovich (SZ) sources detected from the 29-month full-mission data. The catalogue (PSZ2) is the largest SZ-selected sample of galaxy clusters yet produced and the deepest all-sky catalogue of galaxy clusters. It contains 1653 detections, of which 1203 are confirmed clusters with identified counterparts in external data-sets, and is the first SZ-selected cluster survey containing > 10^3 confirmed clusters. We present a detailed analysis of the survey selection function in terms of its completeness and statistical reliability, placing a lower limit of 83% on the purity. Using simulations, we find that the Y5R500 estimates are robust to pressure-profile variation and beam systematics, but accurate conversion to Y500 requires the use of prior information on the cluster extent. We describe the multi-wavelength search for counterparts in ancillary data, which makes use of radio, microwave, infra-red, optical and X-ray data-sets, and which places emphasis on the robustness of the counterpart match. We discuss the physical properties of the new sample and identify a population of low-redshift X-ray under-luminous clusters revealed by SZ selection. These objects appear in optical and SZ surveys with consistent properties for their mass, but are almost absent from ROSAT X-ray selected samples.

    Allocation to highly sensitized patients based on acceptable mismatches results in low rejection rates comparable to nonsensitized patients

    Whereas regular allocation avoids unacceptable mismatches on the donor organ, allocation to highly sensitized patients within the Eurotransplant Acceptable Mismatch (AM) program is based on the patient's HLA phenotype plus acceptable antigens. These are HLA antigens to which the patient never made antibodies, as determined by extensive laboratory testing. AM patients have superior long-term graft survival compared with highly sensitized patients in regular allocation. Here, we questioned whether the AM program also results in lower rejection rates. From the PROCARE cohort, consisting of all Dutch kidney transplants in 1995-2005, we selected deceased-donor single transplants with a minimum of 1 HLA mismatch and determined the cumulative 6-month rejection incidence for patients in AM or regular allocation. Additionally, we determined the effect of minimal matching criteria of 1 HLA-B plus 1 HLA-DR, or 2 HLA-DR antigens, on rejection incidence. AM patients showed significantly lower rejection rates than highly immunized patients in regular allocation, comparable to nonsensitized patients, independent of other risk factors for rejection. In contrast to highly sensitized patients in regular allocation, minimal matching criteria did not affect rejection rates in AM patients. Allocation based on acceptable antigens leads to relatively low-risk transplants for highly sensitized patients, with rejection rates similar to those of nonimmunized individuals.

    Planck early results XIV : ERCSC validation and extreme radio sources

    Peer reviewed

    Planck early results XXV : Thermal dust in nearby molecular clouds

    Peer reviewed

    Planck early results XVII : Origin of the submillimetre excess dust emission in the Magellanic Clouds

    Peer reviewed
