122 research outputs found

    Nurses' views of using computerized decision support software in NHS Direct

    Background. Nurses working in NHS Direct, the 24-hour telephone advice line in England, use computerized decision support software to recommend to callers the most appropriate service to contact, or to advise on self-care. Aims. To explore nurses' views of their roles and of the computerized decision support software in NHS Direct. Methods. Qualitative analysis of semi-structured interviews with 24 NHS Direct nurses in 12 sites. Findings. Nurses described both the software and themselves as essential to the clinical decision-making process. The software acted as a safety net, a provider of consistency, and a provider of script, and was relied upon more when nurses did not have clinical knowledge relevant to the call. The nurse handled problems not covered by the software, probed patients for the appropriate information to enter into the software, and interpreted software recommendations in the light of contextual information that the software was unable to use. Nurses described a dual process of decision-making, with the nurse as active decision maker looking for consensus with the software recommendation and ready to override recommendations made by the software if necessary. However, nurses' accounts of the software as a guide, prompt or support did not fully acknowledge the power of the software, which they are required to use and whose recommendations they are required to follow under some management policies. Over time, the influence of nurse and software merges as nurses internalize the software script as their own knowledge, and navigate the software to produce the recommendations that they feel are most appropriate. Conclusions. The nurse and the software have distinct roles in NHS Direct, although the effect of each on the clinical decision-making process may be difficult to determine in practice.

    APOL1-associated glomerular disease among African-American children: a collaboration of the Chronic Kidney Disease in Children (CKiD) and Nephrotic Syndrome Study Network (NEPTUNE) cohorts

    Background: Individuals of African ancestry harboring two variant alleles within apolipoprotein L1 (APOL1) are classified as having a high-risk (HR) genotype. Adults with an HR genotype have an increased risk of focal segmental glomerulosclerosis and chronic kidney disease compared with those with a low-risk (LR) genotype (0 or 1 variants). The role of APOL1 risk genotypes in children with glomerular disease is less well known. Methods: This study characterized 104 African-American children with a glomerular disease by APOL1 genotype in two cohorts: the Chronic Kidney Disease in Children (CKiD) study and the Nephrotic Syndrome Study Network (NEPTUNE). Results: Among these subjects, 46% had an HR genotype; age at cohort enrollment was similar between genotype groups. For APOL1 HR children, the median age of disease onset was older (CKiD: 4.5 versus 11.5 years for LR versus HR; NEPTUNE: 11 versus 14 years for LR versus HR, respectively) and preterm birth was more common [CKiD: 27 versus 4%; NEPTUNE: 26 versus 12%; combined odds ratio 4.6 (95% confidence interval: 1.4, 15.5)]. Within studies, HR children had a lower initial estimated glomerular filtration rate (eGFR) (CKiD: 53 versus 69 mL/min/1.73 m2; NEPTUNE: 74 versus 94 mL/min/1.73 m2). Longitudinal eGFR decline was faster among HR children versus LR (CKiD: -18 versus -8% per year; NEPTUNE: -13 versus -3% per year). Conclusions: Children with an HR genotype in CKiD and NEPTUNE seem to have a more aggressive form of glomerular disease, in part due to a higher prevalence of focal segmental glomerulosclerosis. These consistent findings across independent cohorts suggest a common natural history for children with APOL1-associated glomerular disease. Further study is needed to determine the generalizability of these findings.
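    The combined odds ratio quoted above is a standard 2×2-table statistic. A minimal sketch of how such an odds ratio and its Wald confidence interval are computed; the counts below are hypothetical illustrations, not the CKiD/NEPTUNE data.

```python
import math

# Hypothetical 2x2 table (rows: preterm birth yes/no; columns: HR/LR genotype).
# These counts are made up for illustration only.
a, b, c, d = 10, 30, 3, 60  # exposed-HR, exposed-LR, unexposed-HR, unexposed-LR

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% confidence interval on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a, b, c, d)
```

As in the abstract, a wide interval (here spanning more than an order of magnitude) reflects the small cell counts typical of pediatric cohorts.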

    Urinary Epidermal Growth Factor as a Marker of Disease Progression in Children With Nephrotic Syndrome

    Introduction: Childhood-onset nephrotic syndrome has a variable clinical course. Improved predictive markers of long-term outcomes in children with nephrotic syndrome are needed. This study tests the association between baseline urinary epidermal growth factor (uEGF) excretion and longitudinal kidney function in children with nephrotic syndrome. Methods: The study evaluated 191 participants younger than 18 years enrolled in the Nephrotic Syndrome Study Network, including 118 with their first clinically indicated kidney biopsy (68 minimal change disease; 50 focal segmental glomerulosclerosis) and 73 with incident nephrotic syndrome without a biopsy. uEGF was measured at baseline for all participants and normalized by the urine creatinine (Cr) concentration. Renal epidermal growth factor (EGF) mRNA was measured in the tubular compartment microdissected from kidney biopsy cores from a subset of patients. Linear mixed models were used to test if baseline uEGF/Cr and EGF mRNA expression were associated with change in estimated glomerular filtration rate (eGFR) over time. Results: Higher uEGF/Cr at baseline was associated with slower eGFR decline during follow-up (median follow-up = 30 months). Halving of uEGF/Cr was associated with a decrease in eGFR slope of 2.0 ml/min per 1.73 m2 per year (P < 0.001), adjusted for age, race, diagnosis, baseline eGFR and proteinuria, and APOL1 genotype. In the biopsied subgroup, uEGF/Cr was correlated with EGF mRNA expression (r = 0.74; P < 0.001), but uEGF/Cr was retained over mRNA expression as the stronger predictor of eGFR slope after multivariable adjustment (decrease in eGFR slope of 1.7 ml/min per 1.73 m2 per year per log2 decrease in uEGF/Cr; P < 0.001). Conclusion: uEGF/Cr may be a useful noninvasive biomarker that can assist in predicting the long-term course of kidney function in children with incident nephrotic syndrome.
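    The reported effect size (each halving of uEGF/Cr associated with an eGFR slope 2.0 ml/min per 1.73 m2 per year steeper) can be illustrated numerically. In this sketch the reference biomarker level, baseline eGFR, and trajectories are hypothetical; slopes are recovered by ordinary least squares per subject, not the study's linear mixed models.

```python
import numpy as np

def egfr_slope(times, egfr):
    """Least-squares eGFR slope (ml/min per 1.73 m^2 per year)."""
    return np.polyfit(times, egfr, 1)[0]

def expected_slope(uegf_cr, ref_slope=-1.0, ref_uegf=10.0, per_halving=2.0):
    # Slope worsens by `per_halving` for each halving of uEGF/Cr relative
    # to a hypothetical reference level (ref_uegf); all values illustrative.
    return ref_slope - per_halving * np.log2(ref_uegf / uegf_cr)

times = np.arange(0.0, 3.0, 0.5)   # years of follow-up
for uegf in (10.0, 5.0, 2.5):      # each step down is one halving
    true_slope = expected_slope(uegf)
    egfr = 90.0 + true_slope * times  # noise-free hypothetical trajectory
    est = egfr_slope(times, egfr)     # recovers true_slope
```

Halving uEGF/Cr from 10.0 to 5.0 moves the illustrative slope from -1.0 to -3.0, mirroring the 2.0-unit-per-halving association in the abstract.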

    Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks, 1990-2015: A systematic analysis for the Global Burden of Disease Study 2015

    Background: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 provides an up-to-date synthesis of the evidence for risk factor exposure and the attributable burden of disease. By providing national and subnational assessments spanning the past 25 years, this study can inform debates on the importance of addressing risks in context. Methods: We used the comparative risk assessment framework developed for previous iterations of the Global Burden of Disease Study to estimate attributable deaths, disability-adjusted life-years (DALYs), and trends in exposure by age group, sex, year, and geography for 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks from 1990 to 2015. This study included 388 risk-outcome pairs that met World Cancer Research Fund-defined criteria for convincing or probable evidence. We extracted relative risk and exposure estimates from randomised controlled trials, cohorts, pooled cohorts, household surveys, census data, satellite data, and other sources. We used statistical models to pool data, adjust for bias, and incorporate covariates. We developed a metric that allows comparisons of exposure across risk factors—the summary exposure value. Using the counterfactual scenario of theoretical minimum risk level, we estimated the portion of deaths and DALYs that could be attributed to a given risk. We decomposed trends in attributable burden into contributions from population growth, population age structure, risk exposure, and risk-deleted cause-specific DALY rates. We characterised risk exposure in relation to a Socio-demographic Index (SDI). Findings: Between 1990 and 2015, global exposure to unsafe sanitation, household air pollution, childhood underweight, childhood stunting, and smoking each decreased by more than 25%. Global exposure for several occupational risks, high body-mass index (BMI), and drug use increased by more than 25% over the same period. 
All risks jointly evaluated in 2015 accounted for 57·8% (95% CI 56·6–58·8) of global deaths and 41·2% (39·8–42·8) of DALYs. In 2015, the ten largest contributors to global DALYs among Level 3 risks were high systolic blood pressure (211·8 million [192·7 million to 231·1 million] global DALYs), smoking (148·6 million [134·2 million to 163·1 million]), high fasting plasma glucose (143·1 million [125·1 million to 163·5 million]), high BMI (120·1 million [83·8 million to 158·4 million]), childhood undernutrition (113·3 million [103·9 million to 123·4 million]), ambient particulate matter (103·1 million [90·8 million to 115·1 million]), high total cholesterol (88·7 million [74·6 million to 105·7 million]), household air pollution (85·6 million [66·7 million to 106·1 million]), alcohol use (85·0 million [77·2 million to 93·0 million]), and diets high in sodium (83·0 million [49·3 million to 127·5 million]). From 1990 to 2015, attributable DALYs declined for micronutrient deficiencies, childhood undernutrition, unsafe sanitation and water, and household air pollution; reductions in risk-deleted DALY rates rather than reductions in exposure drove these declines. Rising exposure contributed to notable increases in attributable DALYs from high BMI, high fasting plasma glucose, occupational carcinogens, and drug use. Environmental risks and childhood undernutrition declined steadily with SDI; low physical activity, high BMI, and high fasting plasma glucose increased with SDI. In 119 countries, metabolic risks, such as high BMI and fasting plasma glucose, contributed the most attributable DALYs in 2015. Regionally, smoking still ranked among the leading five risk factors for attributable DALYs in 109 countries; childhood underweight and unsafe sex remained primary drivers of early death and disability in much of sub-Saharan Africa. Interpretation: Declines in some key environmental risks have contributed to declines in critical infectious diseases. 
Some risks appear to be invariant to SDI. Increasing risks, including high BMI, high fasting plasma glucose, drug use, and some occupational exposures, contribute to rising burden from some conditions, but also provide opportunities for intervention. Some highly preventable risks, such as smoking, remain major causes of attributable DALYs, even as exposure is declining. Public policy makers need to pay attention to the risks that are increasingly major contributors to global burden. Funding: Bill & Melinda Gates Foundation.
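    The counterfactual attribution described in the Methods (comparing observed exposure with a theoretical-minimum-risk scenario) rests on the population attributable fraction. A minimal sketch for a categorical exposure with a no-exposure counterfactual; the exposure shares, relative risks, and death count below are made up, not GBD estimates.

```python
def paf(prevalence, relative_risk):
    """Population attributable fraction for a categorical exposure,
    relative to a counterfactual in which everyone is at baseline risk.

    prevalence[i]    - share of the population at exposure level i
    relative_risk[i] - relative risk at level i (1.0 = baseline)
    """
    mean_rr = sum(p * rr for p, rr in zip(prevalence, relative_risk))
    return (mean_rr - 1.0) / mean_rr

# Hypothetical smoking-like risk: 60% unexposed, 25% moderate, 15% heavy.
prev = [0.60, 0.25, 0.15]
rr = [1.0, 1.8, 3.0]
fraction = paf(prev, rr)                    # share of burden attributable
attributable_deaths = fraction * 10_000     # of 10,000 cause-specific deaths
```

Summing such attributable fractions across risk-outcome pairs (with adjustment for mediation and overlap) is what allows the study to report jointly attributable deaths and DALYs.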

    Loss-of-function ABCC8 mutations in pulmonary arterial hypertension

    Background: In pulmonary arterial hypertension (PAH), pathological changes in pulmonary arterioles progressively raise pulmonary artery pressure and increase pulmonary vascular resistance, leading to right heart failure and high mortality rates. Recently, the first potassium channelopathy in PAH, caused by mutations in KCNK3, was identified as a genetic cause and pharmacological target. Methods: Exome sequencing was performed to identify novel genes in a cohort of 99 pediatric and 134 adult-onset group I PAH patients. Novel rare variants in the same gene were independently identified in a cohort of 680 adult-onset patients. Variants were expressed in COS cells and their function assessed by patch-clamp and rubidium flux analysis. Results: We identified a novel de novo heterozygous predicted deleterious missense variant c.G2873A (p.R958H) in ABCC8 in a child with idiopathic PAH. We then evaluated all individuals in the original and a second cohort for rare or novel variants in ABCC8 and identified 11 additional heterozygous predicted damaging ABCC8 variants. ABCC8 encodes SUR1 (sulfonylurea receptor 1), a regulatory subunit of the ATP-sensitive potassium channel. We observed loss of ATP-sensitive potassium channel function for all ABCC8 variants evaluated and pharmacological rescue of all channel currents in vitro by the SUR1 activator diazoxide. Conclusions: Novel and rare missense variants in ABCC8 are associated with PAH. The identified ABCC8 mutations decreased ATP-sensitive potassium channel function, which was pharmacologically recovered.

    Whole-genome sequencing reveals host factors underlying critical COVID-19

    Critical COVID-19 is caused by immune-mediated inflammatory lung injury. Host genetic variation influences the development of illness requiring critical care [1] or hospitalization [2-4] after infection with SARS-CoV-2. The GenOMICC (Genetics of Mortality in Critical Care) study enables the comparison of genomes from individuals who are critically ill with those of population controls to find underlying disease mechanisms. Here we use whole-genome sequencing in 7,491 critically ill individuals compared with 48,400 controls to discover and replicate 23 independent variants that significantly predispose to critical COVID-19. We identify 16 new independent associations, including variants within genes that are involved in interferon signalling (IL10RB and PLSCR1), leucocyte differentiation (BCL11A) and blood-type antigen secretor status (FUT2). Using transcriptome-wide association and colocalization to infer the effect of gene expression on disease severity, we find evidence that implicates multiple genes, including reduced expression of a membrane flippase (ATP11A) and increased expression of a mucin (MUC1), in critical disease. Mendelian randomization provides evidence in support of causal roles for myeloid cell adhesion molecules (SELE, ICAM5 and CD209) and the coagulation factor F8, all of which are potentially druggable targets. Our results are broadly consistent with a multi-component model of COVID-19 pathophysiology, in which at least two distinct mechanisms can predispose to life-threatening disease: failure to control viral replication, or an enhanced tendency towards pulmonary inflammation and intravascular coagulation. We show that comparison between cases of critical illness and population controls is highly efficient for the detection of therapeutically relevant mechanisms of disease.

    Combination of searches for heavy spin-1 resonances using 139 fb−1 of proton-proton collision data at √s = 13 TeV with the ATLAS detector

    A combination of searches for new heavy spin-1 resonances decaying into different pairings of W, Z, or Higgs bosons, as well as directly into leptons or quarks, is presented. The data sample used corresponds to 139 fb−1 of proton-proton collisions at √s = 13 TeV collected during 2015–2018 with the ATLAS detector at the CERN Large Hadron Collider. Analyses selecting quark pairs (qq, bb, tt̄, and tb) or third-generation leptons (τν and ττ) are included in such a combination for the first time. A simplified model predicting a spin-1 heavy vector-boson triplet is used. Cross-section limits are set at the 95% confidence level and are compared with predictions for the benchmark model. These limits are also expressed in terms of constraints on couplings of the heavy vector-boson triplet to quarks, leptons, and the Higgs boson. The complementarity of the various analyses increases the sensitivity to new physics, and the resulting constraints are stronger than those from any individual analysis considered. The data exclude a heavy vector-boson triplet with mass below 5.8 TeV in a weakly coupled scenario, below 4.4 TeV in a strongly coupled scenario, and up to 1.5 TeV in the case of production via vector-boson fusion.

    Searches for exclusive Higgs boson decays into D⁎γ and Z boson decays into D0γ and Ks0γ in pp collisions at √s = 13 TeV with the ATLAS detector

    Searches for exclusive decays of the Higgs boson into D⁎γ and of the Z boson into D0γ and Ks0γ can probe flavour-violating Higgs boson and Z boson couplings to light quarks. Searches for these decays are performed with a pp collision data sample corresponding to an integrated luminosity of 136.3 fb−1 collected at √s = 13 TeV between 2016 and 2018 with the ATLAS detector at the CERN Large Hadron Collider. In the D⁎γ and D0γ channels, the observed (expected) 95% confidence-level upper limits on the respective branching fractions are B(H→D⁎γ) < 1.0 (1.2) × 10−3 and B(Z→D0γ) < 4.0 (3.4) × 10−6, while the corresponding result in the Ks0γ channel is B(Z→Ks0γ) < 3.1 (3.0) × 10−6.

    Measurement of vector boson production cross sections and their ratios using pp collisions at √s = 13.6 TeV with the ATLAS detector

    Abstract available from publisher's website

    Beam-induced backgrounds measured in the ATLAS detector during local gas injection into the LHC beam vacuum

    Inelastic beam-gas collisions at the Large Hadron Collider (LHC), within a few hundred metres of the ATLAS experiment, are known to give the dominant contribution to beam backgrounds. These are monitored by ATLAS with a dedicated Beam Conditions Monitor (BCM) and with the rate of fake jets in the calorimeters. These two methods are complementary, since the BCM probes backgrounds just around the beam pipe while fake jets are observed at radii of up to several metres. In order to quantify the correlation between the residual gas density in the LHC beam vacuum and the experimental backgrounds recorded by ATLAS, several dedicated tests were performed during LHC Run 2. Local pressure bumps, with a gas density several orders of magnitude higher than during normal operation, were introduced at different locations. The changes of beam-related backgrounds seen in ATLAS are correlated with the local pressure variation. In addition, the rates of beam-gas events are estimated from the pressure measurements and pressure bump profiles obtained from calculations. Using these rates, the efficiency of the ATLAS beam background monitors to detect beam-gas events is derived as a function of distance from the interaction point. These efficiencies and characteristic distributions of fake jets from the beam backgrounds are found to be in good agreement with results of beam-gas simulations performed with the FLUKA Monte Carlo programme.