
    Risk models and scores for type 2 diabetes: Systematic review

    This article is published under a Creative Commons Attribution Non-Commercial (CC BY-NC 3.0) licence that allows reuse subject only to the use being non-commercial and to the article being fully attributed (http://creativecommons.org/licenses/by-nc/3.0).
    Objective - To evaluate current risk models and scores for type 2 diabetes and inform selection and implementation of these in practice.
    Design - Systematic review using standard (quantitative) and realist (mainly qualitative) methodology.
    Inclusion criteria - Papers in any language describing the development or external validation, or both, of models and scores to predict the risk of an adult developing type 2 diabetes.
    Data sources - Medline, PreMedline, Embase, and Cochrane databases were searched. Included studies were citation tracked in Google Scholar to identify follow-on studies of usability or impact.
    Data extraction - Data were extracted on statistical properties of models, details of internal or external validation, and use of risk scores beyond the studies that developed them. Quantitative data were tabulated to compare model components and statistical properties. Qualitative data were analysed thematically to identify mechanisms by which use of the risk model or score might improve patient outcomes.
    Results - 8864 titles were scanned, 115 full text papers considered, and 43 papers included in the final sample. These described the prospective development or validation, or both, of 145 risk prediction models and scores, 94 of which were studied in detail here. They had been tested on 6.88 million participants followed for up to 28 years. Heterogeneity of primary studies precluded meta-analysis. Some but not all risk models or scores had robust statistical properties (for example, good discrimination and calibration) and had been externally validated on a different population. Genetic markers added nothing to models over clinical and sociodemographic factors. Most authors described their score as "simple" or "easily implemented," although few were specific about the intended users and under what circumstances. Ten mechanisms were identified by which measuring diabetes risk might improve outcomes. Follow-on studies that applied a risk score as part of an intervention aimed at reducing actual risk in people were sparse.
    Conclusion - Much work has been done to develop diabetes risk models and scores, but most are rarely used because they require tests not routinely available or they were developed without a specific user or clear use in mind. Encouragingly, recent research has begun to tackle usability and the impact of diabetes risk scores. Two promising areas for further research are interventions that prompt lay people to check their own diabetes risk and use of risk scores on population datasets to identify high risk "hotspots" for targeted public health interventions.
    Funding - Tower Hamlets, Newham, and City and Hackney primary care trusts and National Institute of Health Research.
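    The review's two headline statistical properties, discrimination and calibration, can be computed directly. The sketch below is a hypothetical illustration on made-up data, not taken from any model in the review: the C-statistic (probability a case outranks a non-case) for discrimination, and calibration-in-the-large (mean predicted risk minus observed event rate) for calibration.

    ```python
    # Illustrative only: toy scores and outcomes, not data from the review.

    def c_statistic(scores, outcomes):
        """Probability a randomly chosen case scores higher than a
        randomly chosen non-case (ties count half)."""
        cases = [s for s, y in zip(scores, outcomes) if y == 1]
        controls = [s for s, y in zip(scores, outcomes) if y == 0]
        concordant = sum(
            1.0 if c > n else 0.5 if c == n else 0.0
            for c in cases for n in controls
        )
        return concordant / (len(cases) * len(controls))

    def calibration_in_the_large(scores, outcomes):
        """Mean predicted risk minus observed event rate (0 = well calibrated)."""
        return sum(scores) / len(scores) - sum(outcomes) / len(outcomes)

    scores   = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
    outcomes = [1,   1,   0,   1,   0,   0,   0,   0]
    print(round(c_statistic(scores, outcomes), 3))            # discrimination
    print(round(calibration_in_the_large(scores, outcomes), 3))  # calibration
    ```

    A C-statistic near 0.5 means the score discriminates no better than chance; a large calibration-in-the-large value means the score systematically over- or under-predicts risk even if it ranks people correctly.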

    CP Violation and Moduli Stabilization in Heterotic Models

    The role of moduli stabilization in predictions for CP violation is examined in the context of four-dimensional effective supergravity models obtained from the weakly coupled heterotic string. We point out that while stabilization of compactification moduli has been studied extensively, the determination of background values for other scalars by dynamical means has not been subjected to the same degree of scrutiny. These other complex scalars are important potential sources of CP violation and we show in a simple model how their background values (including complex phases) may be determined from the minimization of the supergravity scalar potential, subject to the constraint of vanishing cosmological constant.
    Comment: 8 pages. Based on a talk given at the CP Violation Conference, University of Michigan, Ann Arbor, November 4-18, 2001; correction to Eq. (27).

    Effect on smoking quit rate of telling patients their lung age: the Step2quit randomised controlled trial

    Objective - To evaluate the impact of telling patients their estimated spirometric lung age as an incentive to quit smoking.
    Design - Randomised controlled trial.
    Setting - Five general practices in Hertfordshire, England.
    Participants - 561 current smokers aged over 35.
    Intervention - All participants were offered spirometric assessment of lung function. Participants in the intervention group received their results in terms of "lung age" (the age of the average healthy individual who would perform similarly to them on spirometry). Those in the control group received a raw figure for forced expiratory volume in one second (FEV1). Both groups were advised to quit and offered referral to local NHS smoking cessation services.
    Main outcome measures - The primary outcome measure was cessation of smoking, verified by salivary cotinine testing 12 months after recruitment. Secondary outcomes were reported changes in daily consumption of cigarettes and identification of new diagnoses of chronic obstructive lung disease.
    Results - Follow-up was 89%. Independently verified quit rates at 12 months in the intervention and control groups were 13.6% and 6.4% respectively (difference 7.2%, P=0.005, 95% confidence interval 2.2% to 12.1%; number needed to treat 14). People with worse spirometric lung age were no more likely to have quit than those with normal lung age in either group. Cost per successful quitter was estimated at £280 (€365, $556). A new diagnosis of obstructive lung disease was made in 17% of the intervention group and 14% of the control group; a total of 16% (89/561) of participants.
    Conclusion - Telling smokers their lung age significantly improves the likelihood of their quitting smoking, but the mechanism by which this intervention achieves its effect is unclear.
    Trial registration - National Research Register N0096173751.
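    The trial's headline arithmetic can be reproduced from the reported quit rates. A minimal sketch, assuming a roughly even split of the 561 smokers between arms (the abstract does not give per-arm counts) and a normal-approximation confidence interval (an assumption about how the interval was derived):

    ```python
    import math

    n_int, n_ctl = 280, 281   # assumed ~even split of 561; not stated in abstract
    p_int, p_ctl = 0.136, 0.064  # reported 12-month quit rates

    diff = p_int - p_ctl          # absolute risk difference
    nnt = 1 / diff                # number needed to treat
    # Normal-approximation 95% CI for a difference of proportions (assumed method)
    se = math.sqrt(p_int * (1 - p_int) / n_int + p_ctl * (1 - p_ctl) / n_ctl)
    ci = (diff - 1.96 * se, diff + 1.96 * se)

    print(f"difference {diff:.1%}, NNT {nnt:.0f}")
    print(f"95% CI {ci[0]:.1%} to {ci[1]:.1%}")
    ```

    The 7.2% difference and NNT of 14 match the abstract; the interval computed under these assumptions comes out close to, but not exactly, the reported 2.2% to 12.1%, since the true per-arm counts and CI method may differ.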

    Renormalization effects on neutrino masses and mixing in a string-inspired SU(4) X SU(2)_L X SU(2)_R X U(1)_X model

    We discuss renormalization effects on neutrino masses and mixing angles in a supersymmetric string-inspired SU(4) X SU(2)_L X SU(2)_R X U(1)_X model, with matter in fundamental and antisymmetric tensor representations and singlet Higgs fields charged under the anomalous U(1)_X family symmetry. The quark, lepton and neutrino Yukawa matrices are distinguished by different Clebsch-Gordan coefficients. The presence of a second U(1)_X breaking singlet with fractional charge allows a more realistic, hierarchical light neutrino mass spectrum with bi-large mixing. By numerical investigation we find a region in the model parameter space where the neutrino mass-squared differences and mixing angles at low energy are consistent with experimental data.
    Comment: 9 pages, 7 figures; references added.

    Competing bounds on the present-day time variation of fundamental constants

    We compare the sensitivity of a recent bound on time variation of the fine structure constant from optical clocks with bounds on time varying fundamental constants from atomic clocks sensitive to the electron-to-proton mass ratio, from radioactive decay rates in meteorites, and from the Oklo natural reactor. Tests of the Weak Equivalence Principle also lead to comparable bounds on present variations of constants. The "winner in sensitivity" depends on what relations exist between the variations of different couplings in the standard model of particle physics, which may arise from the unification of gauge interactions. WEP tests are currently the most sensitive within unified scenarios. A detection of time variation in atomic clocks would favour dynamical dark energy and put strong constraints on the dynamics of a cosmological scalar field.
    Comment: ~4 Phys Rev pages.

    The blind leading the blind: Mutual refinement of approximate theories

    Mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models that cause them. The approach focuses on using one approximate model to refine another.

    Dynamic Normalization for Compact Binary Coalescence Searches in Non-Stationary Noise

    The output of gravitational-wave interferometers, such as LIGO and Virgo, can be highly non-stationary. Broadband detector noise can affect the detector sensitivity on the order of tens of seconds. Gravitational-wave transient searches, such as those for colliding black holes, estimate this noise in order to identify gravitational-wave events. During times of non-stationarity we see a higher rate of false events being reported. To accurately separate signal from noise, it is imperative to incorporate the changing detector state into gravitational-wave searches. We develop a new statistic which estimates the variation of the interferometric detector noise. We use this statistic to re-rank candidate events identified during LIGO-Virgo's second observing run by the PyCBC search pipeline. This results in a 7% improvement in the sensitivity volume for low mass binaries, particularly binary neutron star mergers.
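    The core idea of such a statistic can be illustrated with a toy model: compare short-window power in (assumed already whitened) detector output against the whole-segment baseline, so that windows overlapping a noisy stretch stand out. This is a hypothetical sketch with illustrative window lengths and synthetic data, not the PyCBC implementation.

    ```python
    import random

    random.seed(0)
    # Toy "whitened" noise: unit-variance Gaussian, with a louder
    # non-stationary patch (stddev 3) in the middle of the segment.
    data = [random.gauss(0, 1) for _ in range(2000)]
    data[900:1100] = [random.gauss(0, 3) for _ in range(200)]

    def variation_statistic(x, short=100):
        """Short-window mean-square power divided by whole-segment power.
        Values well above 1 flag stretches of elevated noise."""
        baseline = sum(v * v for v in x) / len(x)
        stats = []
        for i in range(0, len(x) - short + 1, short):
            win = x[i:i + short]
            stats.append((sum(v * v for v in win) / short) / baseline)
        return stats

    v = variation_statistic(data)
    print(max(v))  # windows covering the noisy patch score highest
    ```

    Down-weighting candidate events that fall in high-statistic windows is the kind of re-ranking the abstract describes, though the actual pipeline statistic is more sophisticated.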