
    Perinatal mortality associated with induction of labour versus expectant management in nulliparous women aged 35 years or over: An English national cohort study.

    BACKGROUND: A recent randomised controlled trial (RCT) demonstrated that induction of labour at 39 weeks of gestational age has no short-term adverse effect on the mother or infant among nulliparous women aged ≥35 years. However, the trial was underpowered to address the effect of routine induction of labour on the risk of perinatal death. We aimed to determine the association between induction of labour at ≥39 weeks and the risk of perinatal mortality among nulliparous women aged ≥35 years. METHODS AND FINDINGS: We used English Hospital Episode Statistics (HES) data collected between April 2009 and March 2014 to compare perinatal mortality between induction of labour at 39, 40, and 41 weeks of gestation and expectant management (continuation of pregnancy to either spontaneous labour, induction of labour, or caesarean section at a later gestation). Analysis was by multivariable Poisson regression with adjustment for maternal characteristics and pregnancy-related conditions. Among the cohort of 77,327 nulliparous women aged 35 to 50 years delivering a singleton infant, 33.1% had labour induced: these women tended to be older and more likely to have medical complications of pregnancy, and the infants were more likely to be small for gestational age. Induction of labour at 40 weeks (compared with expectant management) was associated with a lower risk of in-hospital perinatal death (0.08% versus 0.26%; adjusted risk ratio [adjRR] 0.33; 95% CI 0.13-0.80, P = 0.015) and meconium aspiration syndrome (0.44% versus 0.86%; adjRR 0.52; 95% CI 0.35-0.78, P = 0.002). Induction at 40 weeks was also associated with a slightly increased risk of instrumental vaginal delivery (adjRR 1.06; 95% CI 1.01-1.11, P = 0.020) and emergency caesarean section (adjRR 1.05; 95% CI 1.01-1.09, P = 0.019). The number needed to treat (NNT) analysis indicated that 562 (95% CI 366-1,210) inductions of labour at 40 weeks would be required to prevent 1 perinatal death. 
Limitations of the study include the reliance on observational data in which gestational age is recorded in weeks rather than days. There is also the potential for unmeasured confounders and under-recording of induction of labour or perinatal death in the dataset. CONCLUSIONS: Bringing forward the routine offer of induction of labour from the current recommendation of 41-42 weeks to 40 weeks of gestation in nulliparous women aged ≥35 years may reduce overall rates of perinatal death.
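The number-needed-to-treat arithmetic in the abstract can be illustrated directly: a crude NNT is the reciprocal of the absolute risk reduction. The sketch below (plain Python, using the unadjusted risks quoted above) is illustrative only; the paper's figure of 562 comes from the adjusted model, so the crude value differs slightly.

```python
def number_needed_to_treat(risk_control: float, risk_treated: float) -> float:
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("treatment shows no risk reduction")
    return 1.0 / arr

# Unadjusted in-hospital perinatal death risks from the abstract:
# 0.26% under expectant management, 0.08% with induction at 40 weeks.
crude_nnt = number_needed_to_treat(0.0026, 0.0008)
print(round(crude_nnt))  # ≈ 556 inductions per death prevented (crude)
```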

    A guide to evaluating linkage quality for the analysis of linked data.

    Linked datasets are an important resource for epidemiological and clinical studies, but linkage error can lead to biased results. For data security reasons, linkage of personal identifiers is often performed by a third party, making it difficult for researchers to assess the quality of the linked dataset in the context of specific research questions. This is compounded by a lack of guidance on how to determine the potential impact of linkage error. We describe how linkage quality can be evaluated and provide widely applicable guidance for both data providers and researchers. Using an illustrative example of a linked dataset of maternal and baby hospital records, we demonstrate three approaches for evaluating linkage quality: applying the linkage algorithm to a subset of gold standard data to quantify linkage error; comparing characteristics of linked and unlinked data to identify potential sources of bias; and evaluating the sensitivity of results to changes in the linkage procedure. These approaches can inform our understanding of the potential impact of linkage error and provide an opportunity to select the most appropriate linkage procedure for a specific analysis. Evaluating linkage quality in this way will improve the quality and transparency of epidemiological and clinical research using linked data.
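The first of the three approaches, comparing algorithm output against a gold-standard subset, reduces to two standard quantities: sensitivity (the complement of the missed-match rate) and positive predictive value (the complement of the false-match rate). A minimal sketch, with hypothetical record-pair identifiers invented for illustration:

```python
def linkage_quality(gold_links, algo_links):
    """Evaluate a linkage algorithm against a gold-standard subset.

    Returns (sensitivity, ppv):
      sensitivity = true links found / all true links   (1 - missed-match rate)
      ppv         = true links found / all links made   (1 - false-match rate)
    """
    true_found = len(gold_links & algo_links)
    return true_found / len(gold_links), true_found / len(algo_links)

# Hypothetical (mother_id, baby_id) pairs, for illustration only:
gold = {(1, 101), (2, 102), (3, 103), (4, 104)}
algo = {(1, 101), (2, 102), (3, 999)}  # one false match, two missed matches
sens, ppv = linkage_quality(gold, algo)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")  # sensitivity=0.50, PPV=0.67
```

In practice the gold-standard subset must itself be representative, otherwise these rates will not generalise to the full linked dataset.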

    Measuring physiological stress in the common marmoset (Callithrix jacchus): Validation of a salivary cortisol collection and assay technique

    Cortisol levels are often used as a physiological measure of the stress response in captive primates, with non-invasive measures of this being an important step in welfare assessment. We report a method of collecting saliva samples voluntarily from unrestrained captive common marmosets (Callithrix jacchus), and validate an enzyme-linked immunosorbent assay (ELISA) technique previously unused in this species. Saliva samples were collected from marmosets housed in pairs in a UK laboratory. The assay showed parallelism, precision, accuracy and sensitivity, meeting the criteria typically used to investigate the effectiveness of new analytical techniques. Use of Salimetrics® Oral Swabs considerably increased the amount of cortisol recovered in comparison with previous studies using cotton buds. However, while use of banana on the swabs can encourage chewing, it may influence results. Although increases in cortisol levels have traditionally been interpreted as an indicator of stress in primates, there are many factors that affect the hypothalamic-pituitary-adrenal axis, with some studies showing decreases in cortisol levels post-stressor. Following a likely stressful event (capture for weighing), we also found cortisol levels significantly decreased, possibly due to social buffering or ‘blunting’ of the HPA axis. Order of weighing also had an effect. The method therefore provided an effective non-invasive means of assessing acute changes in cortisol level that may be more useful than previous methods, improving our ability to study physiological aspects of welfare in primates. We discuss methodological considerations, as well as implications of using cortisol as a measure of stress.
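The "precision" criterion mentioned above is conventionally reported in assay validation as an intra-assay coefficient of variation across replicate readings. A brief sketch (the replicate values below are made up for illustration, not data from this study):

```python
import statistics

def intra_assay_cv(replicates):
    """Intra-assay precision: coefficient of variation (%) across replicate
    readings of one sample. Lower is better; ELISA validation criteria
    commonly require values below roughly 10-15%."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

# Hypothetical replicate cortisol readings (µg/dL) for a single saliva sample:
cv = intra_assay_cv([0.42, 0.44, 0.41, 0.43])
print(f"CV = {cv:.1f}%")
```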

    Short and long-read genome sequencing methodologies for somatic variant detection; genomic analysis of a patient with diffuse large B-cell lymphoma

    Recent advances in throughput and accuracy mean that the Oxford Nanopore Technologies PromethION platform is now a viable solution for genome sequencing. Much of the validation of bioinformatic tools for this long-read data has focussed on calling germline variants (including structural variants). Somatic variants are outnumbered many-fold by germline variants and their detection is further complicated by the effects of tumour purity/subclonality. Here, we evaluate the extent to which Nanopore sequencing enables detection and analysis of somatic variation. We do this through sequencing tumour and germline genomes for a patient with diffuse large B-cell lymphoma and comparing results with 150 bp short-read sequencing of the same samples. Calling germline single nucleotide variants (SNVs) from specific chromosomes of the long-read data achieved good specificity and sensitivity. However, results of somatic SNV calling highlight the need for the development of specialised joint calling algorithms. We find the comparative genome-wide performance of different tools varies significantly between structural variant types, and suggest long reads are especially advantageous for calling large somatic deletions and duplications. Finally, we highlight the utility of long reads for phasing clinically relevant variants, confirming that a somatic 1.6 Mb deletion and a p.(Arg249Met) mutation involving TP53 are oriented in trans.
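The kind of sensitivity/specificity comparison described above is usually computed by intersecting a call set with a truth set of variants keyed by position and alleles. A minimal sketch of the idea (the variant tuples below are hypothetical, not calls from the paper):

```python
def call_set_metrics(truth_set, call_set):
    """Precision and recall of a variant call set against a truth set.
    Variants are keyed as (chrom, pos, ref, alt) tuples."""
    tp = len(truth_set & call_set)
    precision = tp / len(call_set)  # fraction of calls that are real
    recall = tp / len(truth_set)    # fraction of real variants recovered
    return precision, recall

# Hypothetical SNVs for illustration only:
truth = {("chr17", 7675088, "C", "T"), ("chr1", 1000, "A", "G"),
         ("chr2", 2000, "G", "C"), ("chr3", 3000, "T", "A")}
calls = {("chr17", 7675088, "C", "T"), ("chr1", 1000, "A", "G"),
         ("chr9", 9000, "C", "A")}  # one false positive, two false negatives
precision, recall = call_set_metrics(truth, calls)
print(f"precision={precision:.2f}, recall={recall:.2f}")
```

Real benchmarking tools additionally normalise representation (left-alignment, multi-allelic splitting) before intersecting, which this sketch omits.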

    Mode of birth and risk of infection-related hospitalisation in childhood: A population cohort study of 7.17 million births from 4 high-income countries

    BACKGROUND: The proportion of births via cesarean section (CS) varies worldwide and in many countries exceeds WHO-recommended rates. Long-term health outcomes for children born by CS are poorly understood, but limited data suggest that CS is associated with increased infection-related hospitalisation. We investigated the relationship between mode of birth and childhood infection-related hospitalisation in high-income countries with varying CS rates. METHODS AND FINDINGS: We conducted a multicountry population-based cohort study of all recorded singleton live births from January 1, 1996 to December 31, 2015 using record-linked birth and hospitalisation data from Denmark, Scotland, England, and Australia (New South Wales and Western Australia). Birth years within the date range varied by site, but data were available from at least 2001 to 2010 for each site. Mode of birth was categorised as vaginal or CS (emergency/elective). Infection-related hospitalisations (overall and by clinical type) occurring after the birth-related discharge date were identified in children until 5 years of age by primary/secondary International Classification of Diseases, 10th Revision (ICD-10) diagnosis codes. Analysis used Cox regression models, adjusting for maternal factors, birth parameters, and socioeconomic status, with results pooled using meta-analysis. In total, 7,174,787 live recorded births were included. Of these, 1,681,966 (23%, range by jurisdiction 17%-29%) were by CS, of which 727,755 (43%, range 38%-57%) were elective. A total of 1,502,537 offspring (21%) had at least 1 infection-related hospitalisation. Compared to vaginally born children, risk of infection was greater among CS-born children (hazard ratio [HR] from random-effects model 1.10, 95% confidence interval [CI] 1.09-1.12, p < 0.001). The risk was higher following both elective (HR 1.13, 95% CI 1.12-1.13, p < 0.001) and emergency CS (HR 1.09, 95% CI 1.06-1.12, p < 0.001). 
Increased risks persisted to 5 years and were highest for respiratory, gastrointestinal, and viral infections. Findings were comparable in prespecified subanalyses of children born to mothers at low obstetric risk and unchanged in sensitivity analyses. Limitations include site-specific and longitudinal variations in clinical practice and in the definition and availability of some data. Data on postnatal factors were not available. CONCLUSIONS: In this study, we observed a consistent association between birth by CS and infection-related hospitalisation in early childhood. Notwithstanding the limitations of observational data, the associations may reflect differences in early microbial exposure by mode of birth, which should be investigated by mechanistic studies. If our findings are confirmed, they could inform efforts to reduce elective CS rates that are not clinically indicated.
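The abstract pools site-specific Cox hazard ratios with a random-effects meta-analysis. A minimal DerSimonian-Laird sketch of that pooling step follows (plain Python; the site-level hazard ratios and standard errors are invented for illustration, not the paper's per-site estimates):

```python
import math

def dersimonian_laird(log_hrs, ses):
    """Random-effects pooling of log hazard ratios (DerSimonian-Laird).
    Returns (pooled log HR, its standard error)."""
    w = [1.0 / se ** 2 for se in ses]                 # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_hrs) - 1)) / c)     # between-site variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]     # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_hrs)) / sum(w_re)
    return pooled, math.sqrt(1.0 / sum(w_re))

# Hypothetical site-level hazard ratios and standard errors (illustrative only):
site_hrs = [1.12, 1.08, 1.15, 1.09]
ses = [0.02, 0.03, 0.04, 0.02]
pooled_log, se = dersimonian_laird([math.log(h) for h in site_hrs], ses)
print(f"pooled HR = {math.exp(pooled_log):.2f}")
```

The between-site variance τ² widens the pooled confidence interval relative to a fixed-effect model, which is why the random-effects HR in the abstract carries a wider CI than some of the site-level estimates.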

    Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System (JJ-TRIALS): A Cluster Randomized Trial Targeting System-Wide Improvement in Substance Use Services

    Background: The purpose of this paper is to describe the Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System (JJ-TRIALS) study, a cooperative implementation science initiative involving the National Institute on Drug Abuse, six research centers, a coordinating center, and Juvenile Justice Partners representing seven US states. While the pooling of resources across centers enables a robust implementation study design involving 36 juvenile justice agencies and their behavioral health partner agencies, co-producing a study protocol that has potential to advance implementation science, meets the needs of all constituencies (funding agency, researchers, partners, study sites), and can be implemented with fidelity across the cooperative can be challenging. This paper describes (a) the study background and rationale, including the juvenile justice context and best practices for substance use disorders, (b) the selection and use of an implementation science framework to guide study design and inform selection of implementation components, and (c) the specific study design elements, including research questions, implementation interventions, measurement, and analytic plan. Methods/design: The JJ-TRIALS primary study uses a head-to-head cluster randomized trial with a phased rollout to evaluate the differential effectiveness of two conditions (Core and Enhanced) in 36 sites located in seven states. A Core strategy for promoting change is compared to an Enhanced strategy that incorporates all core strategies plus active facilitation. Target outcomes include improvements in evidence-based screening, assessment, and linkage to substance use treatment. Discussion: Contributions to implementation science are discussed as well as challenges associated with designing and deploying a complex, collaborative project. Trial registration: NCT02672150.

    Birth "Out-of-Hours": An Evaluation of Obstetric Practice and Outcome According to the Presence of Senior Obstetricians on the Labour Ward.

    BACKGROUND: Concerns have been raised that a lack of senior obstetricians ("consultants") on the labour ward outside normal hours may lead to worse outcomes among babies born during periods of reduced cover. METHODS AND FINDINGS: We carried out a multicentre cohort study using data from 19 obstetric units in the United Kingdom between 1 April 2012 and 31 March 2013 to examine whether rates of obstetric intervention and outcome change "out-of-hours," i.e., when consultants are not providing dedicated, on-site labour ward cover. At the 19 hospitals, obstetric rotas ranged from 51 to 106 h of on-site labour ward cover per week. There were 87,501 singleton live births during the year, and 55.8% occurred out-of-hours. Women who delivered out-of-hours had slightly lower rates of intrapartum caesarean section (CS) (12.7% versus 13.4%, adjusted odds ratio [OR] 0.94; 95% confidence interval [CI] 0.90 to 0.98) and instrumental delivery (15.6% versus 17.0%, adjusted OR 0.92; 95% CI 0.89 to 0.96) than women who delivered at times of on-site labour ward cover. There was some evidence that the severe perineal tear rate was reduced in out-of-hours vaginal deliveries (3.3% versus 3.6%, adjusted OR 0.92; 95% CI 0.85 to 1.00). There was no evidence of a statistically significant difference between out-of-hours and "in-hours" deliveries in the rate of babies with a low Apgar score at 5 min (1.33% versus 1.25%, adjusted OR 1.07; 95% CI 0.95 to 1.21) or low cord pH (0.94% versus 0.82%; adjusted OR 1.12; 95% CI 0.96 to 1.31). Key study limitations include the potential for bias by indication, the reliance upon an organisational measure of consultant presence, and a non-random sample of maternity units. CONCLUSIONS: There was no difference in the rate of maternal and neonatal morbidity according to the presence of consultants on the labour ward, with the possible exception of a reduced rate of severe perineal tears in out-of-hours vaginal deliveries. 
Fewer women had operative deliveries out-of-hours. Taken together, the available evidence provides some reassurance that the current organisation of maternity care in the UK allows for good planning and risk management. However, there is a need for more robust evidence on the quality of care afforded by different models of labour ward staffing.
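The crude (unadjusted) odds ratio implied by the quoted percentages can be recovered directly from the two proportions; it will not exactly equal the adjusted ORs above, which come from a regression model. A minimal sketch:

```python
def odds_ratio(p_exposed, p_unexposed):
    """Crude odds ratio from two outcome proportions (no covariate adjustment)."""
    odds = lambda p: p / (1.0 - p)
    return odds(p_exposed) / odds(p_unexposed)

# Intrapartum caesarean rates quoted in the abstract:
# 12.7% out-of-hours vs 13.4% during on-site consultant cover.
crude_or = odds_ratio(0.127, 0.134)
print(f"crude OR = {crude_or:.2f}")  # close to the adjusted OR of 0.94
```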

    Symmetric dimethylarginine (SDMA) is a stronger predictor of mortality risk than asymmetric dimethylarginine (ADMA) amongst older people with kidney disease

    Background: Circulating asymmetric (ADMA) and symmetric dimethylarginine (SDMA) are increased in patients with kidney disease. SDMA is considered a good marker of glomerular filtration rate (GFR) whilst ADMA is a marker of cardiovascular risk. However, a link between SDMA and all-cause mortality has been reported. In the present study we evaluated both dimethylarginines as risk and GFR markers in a cohort of elderly white individuals, both with and without chronic kidney disease (CKD). Methods: GFR was measured in 394 individuals aged >74 years using an iohexol clearance method. Plasma ADMA, SDMA and iohexol were measured simultaneously using isotope dilution tandem mass spectrometry. Results: Plasma ADMA concentrations were increased (P<0.05) in those with GFR <60 mL/min/1.73 m² compared with those with GFR >60 mL/min/1.73 m², but did not differ (P>0.05) between those with GFR 30-59 mL/min/1.73 m² and <30 mL/min/1.73 m². Plasma SDMA increased consistently across declining GFR categories (P<0.0001). GFR had an independent effect on plasma ADMA concentration whilst GFR, gender, body mass index and haemoglobin had independent effects on plasma SDMA concentration. Participants were followed for a median of 33 months. There were 65 deaths. High plasma ADMA (P=0.0412) and SDMA (P<0.0001) concentrations were independently associated with reduced survival. Conclusions: Amongst elderly white individuals with a range of kidney function, SDMA was a better marker of GFR and a stronger predictor of outcome than ADMA. Future studies should further evaluate the role of SDMA as a marker of outcome and assess its potential value as a marker of GFR.

    Multi-disciplinary team directed analysis of whole genome sequencing reveals pathogenic non-coding variants in molecularly undiagnosed inherited retinal dystrophies

    PURPOSE: To identify, using genome sequencing (GS), likely pathogenic non-coding variants in inherited retinal dystrophy (IRD) genes. METHODS: Patients with IRD were recruited to the study and underwent comprehensive ophthalmological evaluation and GS. The results of GS were investigated through virtual gene panel analysis, and plausible pathogenic variants and the clinical phenotype were evaluated by multi-disciplinary team (MDT) discussion. For unsolved patients in whom a specific gene was suspected to harbour a missed pathogenic variant, targeted re-analysis of non-coding regions was performed on GS data. Candidate variants were functionally tested, including by mRNA analysis, minigene and luciferase reporter assays. RESULTS: Previously unreported, likely pathogenic non-coding variants in 7 genes (PRPF31, NDP, IFT140, CRB1, USH2A, BBS10, and GUCY2D) were identified in 11 patients. These were shown to lead to mis-splicing (PRPF31, IFT140, CRB1, USH2A) or altered transcription levels (BBS10, GUCY2D). CONCLUSION: MDT-led, phenotype-driven, non-coding variant re-analysis of GS is effective in identifying missing causative alleles.