
    The effect of 12C + 12C rate uncertainties on the evolution and nucleosynthesis of massive stars

    The 12C + 12C fusion reaction has been the subject of considerable experimental efforts to constrain uncertainties at temperatures relevant for stellar nucleosynthesis. In order to investigate the effect of an enhanced carbon burning rate on massive star structure and nucleosynthesis, new stellar evolution models and their yields are presented exploring the impact of three different 12C + 12C reaction rates. Non-rotating stellar models were generated using the Geneva Stellar Evolution Code and were later post-processed with the NuGrid Multi-zone Post-Processing Network tool. The enhanced rate causes core carbon burning to be ignited more promptly and at lower temperature. This reduces the neutrino losses, which increases the core carbon burning lifetime. An increased carbon burning rate also increases the upper initial mass limit for which a star exhibits a convective carbon core. Carbon shell burning is also affected, with fewer convective-shell episodes and convection zones that tend to be larger in mass. Consequently, the chance of an overlap between the ashes of carbon core burning and the subsequent carbon shell convection zones is increased, which can cause a portion of those ashes to be included in the carbon shell. Therefore, during the supernova explosion, the ejecta will be enriched by s-process nuclides synthesized during core carbon burning. The yields were used to estimate the weak s-process component in order to compare with the solar system abundance distribution. The enhanced rate models were found to produce a significant proportion of Kr, Sr, Y, Zr, Mo, Ru, Pd and Cd in the weak component, which is primarily the signature of the carbon-core s process. Consequently, it is shown that the production of isotopes in the Kr-Sr region can be used to constrain the 12C + 12C rate using the current branching ratio for the α- and p-exit channels. Comment: The paper contains 17 figures and 7 tables. Table 7 will be published in full online only.

    Malnutrition Has No Effect on the Timing of Human Tooth Formation

    The effect of nutrition on the timing of human tooth formation is poorly understood: delays, advancements, and no effect on dental maturation have all been reported. We investigated the effect of severe malnutrition on the timing of human tooth formation in a large representative sample of North Sudanese children. The sample (1102 males, 1013 females) consisted of stratified, randomly selected healthy individuals in Khartoum, Sudan, aged 2-22 years, using a cross-sectional design following the STROBE statement. Nutritional status was defined using WHO criteria of height and weight. Body mass index Z-scores and height-for-age Z-scores of ≤-2 (cut-off) were used to identify the malnourished group (N = 474), while the normal group was defined by Z-scores of ≥0 (N = 799). Clinical and radiographic examinations of individuals with known dates of birth were performed, including height and weight measurements. Mandibular left permanent teeth were assessed using eight established crown and seven root tooth formation stages. Mean age at entry and mean age within tooth stages were calculated for each available tooth stage in each group and compared using a t-test. Results show that the mean age at entry and mean age within tooth stages were not significantly different between groups affected by severe malnutrition and normal children (p>0.05). This finding was evident across the span of dental development. We demonstrate that there is little measurable effect of sustained malnutrition on the average timing of tooth formation. This noteworthy finding supports the notion that teeth have substantial biological stability and are insulated from extreme nutritional conditions compared with other maturing body systems
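
    The group definitions and the per-stage comparison summarised above follow a standard pattern: Z-score cut-offs to classify nutritional status, then an independent two-sample t-test per tooth formation stage. A minimal sketch of that pattern in Python is shown below; the column names (bmi_z, haz, tooth, stage, age_at_entry) are hypothetical, and the exact combination of WHO cut-offs is an assumption based on the abstract, not the authors' analysis code.

```python
# Illustrative sketch only: column names and the precise grouping rule are
# assumptions inferred from the abstract, not the study's actual code.
import pandas as pd
from scipy import stats

def classify_nutrition(df: pd.DataFrame) -> pd.DataFrame:
    """Label participants as malnourished (Z-score <= -2) or normal (>= 0)."""
    df = df.copy()
    df["group"] = None
    df.loc[(df["bmi_z"] <= -2) | (df["haz"] <= -2), "group"] = "malnourished"
    df.loc[(df["bmi_z"] >= 0) & (df["haz"] >= 0), "group"] = "normal"
    return df.dropna(subset=["group"])

def compare_age_at_entry(df: pd.DataFrame) -> pd.DataFrame:
    """Welch t-test of mean age at entry into each tooth formation stage."""
    rows = []
    for (tooth, stage), sub in df.groupby(["tooth", "stage"]):
        a = sub.loc[sub["group"] == "malnourished", "age_at_entry"]
        b = sub.loc[sub["group"] == "normal", "age_at_entry"]
        if len(a) > 1 and len(b) > 1:
            t, p = stats.ttest_ind(a, b, equal_var=False)
            rows.append({"tooth": tooth, "stage": stage,
                         "mean_malnourished": a.mean(),
                         "mean_normal": b.mean(), "t": t, "p": p})
    return pd.DataFrame(rows)
```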

    Cardiovascular development: towards biomedical applicability: Epicardium-derived cells in cardiogenesis and cardiac regeneration

    During cardiogenesis, the epicardium grows from the proepicardial organ to form the outermost layer of the early heart. Part of the epicardium undergoes epithelial-mesenchymal transformation, and migrates into the myocardium. These epicardium-derived cells differentiate into interstitial fibroblasts, coronary smooth muscle cells, and perivascular fibroblasts. Moreover, epicardium-derived cells are important regulators of formation of the compact myocardium, the coronary vasculature, and the Purkinje fiber network, thus being essential for proper cardiac development. The fibrous structures of the heart such as the fibrous heart skeleton and the semilunar and atrioventricular valves also depend on a contribution of these cells during development. We hypothesise that the essential properties of epicardium-derived cells can be recapitulated in adult diseased myocardium. These cells can therefore be considered as a novel source of adult stem cells useful in clinical cardiac regeneration therapy

    Antimicrobial resistance among migrants in Europe: a systematic review and meta-analysis

    BACKGROUND: Rates of antimicrobial resistance (AMR) are rising globally and there is concern that increased migration is contributing to the burden of antibiotic resistance in Europe. However, the effect of migration on the burden of AMR in Europe has not yet been comprehensively examined. Therefore, we did a systematic review and meta-analysis to identify and synthesise data for AMR carriage or infection in migrants to Europe to examine differences in patterns of AMR across migrant groups and in different settings. METHODS: For this systematic review and meta-analysis, we searched MEDLINE, Embase, PubMed, and Scopus with no language restrictions from Jan 1, 2000, to Jan 18, 2017, for primary data from observational studies reporting antibacterial resistance in common bacterial pathogens among migrants to 21 European Union-15 and European Economic Area countries. To be eligible for inclusion, studies had to report data on carriage or infection with laboratory-confirmed antibiotic-resistant organisms in migrant populations. We extracted data from eligible studies and assessed quality using piloted, standardised forms. We did not examine drug resistance in tuberculosis and excluded articles solely reporting on this parameter. We also excluded articles in which migrant status was determined by ethnicity, country of birth of participants' parents, or was not defined, and articles in which data were not disaggregated by migrant status. Outcomes were carriage of or infection with antibiotic-resistant organisms. We used random-effects models to calculate the pooled prevalence of each outcome. The study protocol is registered with PROSPERO, number CRD42016043681. FINDINGS: We identified 2274 articles, of which 23 observational studies reporting on antibiotic resistance in 2319 migrants were included. The pooled prevalence of any AMR carriage or AMR infection in migrants was 25·4% (95% CI 19·1-31·8; I2 =98%), including meticillin-resistant Staphylococcus aureus (7·8%, 4·8-10·7; I2 =92%) and antibiotic-resistant Gram-negative bacteria (27·2%, 17·6-36·8; I2 =94%). The pooled prevalence of any AMR carriage or infection was higher in refugees and asylum seekers (33·0%, 18·3-47·6; I2 =98%) than in other migrant groups (6·6%, 1·8-11·3; I2 =92%). The pooled prevalence of antibiotic-resistant organisms was slightly higher in high-migrant community settings (33·1%, 11·1-55·1; I2 =96%) than in migrants in hospitals (24·3%, 16·1-32·6; I2 =98%). We did not find evidence of high rates of transmission of AMR from migrant to host populations. INTERPRETATION: Migrants are exposed to conditions favouring the emergence of drug resistance during transit and in host countries in Europe. Increased antibiotic resistance among refugees and asylum seekers and in high-migrant community settings (such as refugee camps and detention facilities) highlights the need for improved living conditions, access to health care, and initiatives to facilitate detection of and appropriate high-quality treatment for antibiotic-resistant infections during transit and in host countries. Protocols for the prevention and control of infection and for antibiotic surveillance need to be integrated in all aspects of health care, which should be accessible for all migrant groups, and should target determinants of AMR before, during, and after migration. 
    FUNDING: UK National Institute for Health Research Imperial Biomedical Research Centre, Imperial College Healthcare Charity, the Wellcome Trust, and UK National Institute for Health Research Health Protection Research Unit in Healthcare-associated Infections and Antimicrobial Resistance at Imperial College London
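
    The pooled prevalences and I2 values quoted above come from random-effects models. The abstract does not state the estimator or the scale on which proportions were pooled, so the sketch below uses the common DerSimonian-Laird approach on untransformed proportions purely as an illustration of how such pooled estimates and heterogeneity statistics arise; it is not the authors' method.

```python
# Hedged sketch of a DerSimonian-Laird random-effects pooled prevalence.
# Pooling untransformed proportions is an illustrative assumption.
import numpy as np

def pooled_prevalence(events, totals, z=1.96):
    totals = np.asarray(totals, float)
    p = np.asarray(events, float) / totals
    var = p * (1 - p) / totals                      # within-study variance
    w = 1.0 / var                                   # fixed-effect weights

    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)         # between-study variance

    w_re = 1.0 / (var + tau2)                       # random-effects weights
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = 0.0 if q == 0 else max(0.0, (q - (len(p) - 1)) / q) * 100
    return p_re, (p_re - z * se, p_re + z * se), i2
```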

    Global assessment of genomic variation in cattle by genome resequencing and high-throughput genotyping

    Background: Integration of genomic variation with phenotypic information is an effective approach for uncovering genotype-phenotype associations. This requires an accurate identification of the different types of variation in individual genomes. Results: We report the integration of the whole genome sequence of a single Holstein Friesian bull with data from single nucleotide polymorphism (SNP) and comparative genomic hybridization (CGH) array technologies to determine a comprehensive spectrum of genomic variation. The performance of resequencing SNP detection was assessed by combining SNPs that were identified to be either in identity by descent (IBD) or in copy number variation (CNV) with results from SNP array genotyping. Coding insertions and deletions (indels) were found to be enriched for sizes in multiples of 3 and were located near the N- and C-termini of proteins. For larger indels, a combination of split-read and read-pair approaches proved to be complementary in finding different signatures. CNVs were identified on the basis of the depth of sequenced reads, and by using SNP and CGH arrays. Conclusions: Our results provide high-resolution mapping of diverse classes of genomic variation in an individual bovine genome and demonstrate that structural variation surpasses sequence variation as the main component of genomic variability. Better accuracy of SNP detection was achieved with little loss of sensitivity when algorithms that implemented mapping quality were used. IBD regions were found to be instrumental for calculating resequencing SNP accuracy, while SNP detection within CNVs tended to be less reliable. CNV discovery was affected dramatically by platform resolution and coverage biases. The combined data for this study showed that at a moderate level of sequencing coverage, an ensemble of platforms and tools can be applied together to maximize the accurate detection of sequence and structural variants.
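
    The reported enrichment of coding indels for sizes in multiples of 3 reflects selection against frameshifting variants, and checking it in any call set reduces to a length-modulo-3 test. A minimal, illustrative sketch follows; it assumes indel lengths have already been extracted from the variant calls.

```python
# Minimal sketch: classify coding indels as in-frame (length % 3 == 0) or
# frameshift, the distinction behind the "multiples of 3" enrichment above.
from collections import Counter

def indel_frame_summary(indel_lengths):
    """indel_lengths: iterable of signed indel sizes in bp (insertions > 0)."""
    counts = Counter("in-frame" if abs(n) % 3 == 0 else "frameshift"
                     for n in indel_lengths if n != 0)
    total = sum(counts.values())
    return {kind: (n, n / total) for kind, n in counts.items()}

# A strong excess of in-frame events over the ~1/3 expected by chance would
# reproduce the enrichment reported for coding indels.
print(indel_frame_summary([3, -6, 1, -2, 9, 3, -3, 4, 12, -1]))
```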

    Low incidence of SARS-CoV-2, risk factors of mortality and the course of illness in the French national cohort of dialysis patients


    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)


    SARS-CoV-2 susceptibility and COVID-19 disease severity are associated with genetic variants affecting gene expression in a variety of tissues

    Variability in SARS-CoV-2 susceptibility and COVID-19 disease severity between individuals is partly due to genetic factors. Here, we identify 4 genomic loci with suggestive associations for SARS-CoV-2 susceptibility and 19 for COVID-19 disease severity. Four of these 23 loci likely have an ethnicity-specific component. Genome-wide association study (GWAS) signals in 11 loci colocalize with expression quantitative trait loci (eQTLs) associated with the expression of 20 genes in 62 tissues/cell types (range: 1-43 tissues/gene), including lung, brain, heart, muscle, and skin as well as the digestive system and immune system. We perform genetic fine mapping to compute 99% credible SNP sets, which identify 10 GWAS loci that have eight or fewer SNPs in the credible set, including three loci with one single likely causal SNP. Our study suggests that the diverse symptoms and disease severity of COVID-19 observed between individuals are associated with variants across the genome, affecting gene expression levels in a wide variety of tissue types
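
    The 99% credible SNP sets mentioned above are built by ranking the variants at a locus by posterior probability and keeping the smallest set whose cumulative posterior reaches 99%. The fine-mapping method itself is not described in the abstract, so the sketch below illustrates only the credible-set construction from already-computed posterior probabilities; the SNP identifiers are placeholders.

```python
# Generic sketch of building a 99% credible SNP set from per-SNP posterior
# probabilities; it does not reproduce the study's fine-mapping pipeline.
import numpy as np

def credible_set(snp_ids, posterior_probs, coverage=0.99):
    """Smallest set of SNPs whose cumulative posterior reaches `coverage`."""
    probs = np.asarray(posterior_probs, dtype=float)
    probs = probs / probs.sum()                  # normalise within the locus
    order = np.argsort(probs)[::-1]              # most probable SNPs first
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, coverage)) + 1  # number of SNPs needed
    return [snp_ids[i] for i in order[:k]]

# A locus where one SNP carries most of the posterior yields a one-SNP
# credible set, as reported for three of the loci above.
print(credible_set(["rs_a", "rs_b", "rs_c", "rs_d"], [0.995, 0.003, 0.001, 0.001]))
```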

    Whole-genome sequencing reveals host factors underlying critical COVID-19

    Critical COVID-19 is caused by immune-mediated inflammatory lung injury. Host genetic variation influences the development of illness requiring critical care1 or hospitalization2–4 after infection with SARS-CoV-2. The GenOMICC (Genetics of Mortality in Critical Care) study enables the comparison of genomes from individuals who are critically ill with those of population controls to find underlying disease mechanisms. Here we use whole-genome sequencing in 7,491 critically ill individuals compared with 48,400 controls to discover and replicate 23 independent variants that significantly predispose to critical COVID-19. We identify 16 new independent associations, including variants within genes that are involved in interferon signalling (IL10RB and PLSCR1), leucocyte differentiation (BCL11A) and blood-type antigen secretor status (FUT2). Using transcriptome-wide association and colocalization to infer the effect of gene expression on disease severity, we find evidence that implicates multiple genes—including reduced expression of a membrane flippase (ATP11A), and increased expression of a mucin (MUC1)—in critical disease. Mendelian randomization provides evidence in support of causal roles for myeloid cell adhesion molecules (SELE, ICAM5 and CD209) and the coagulation factor F8, all of which are potentially druggable targets. Our results are broadly consistent with a multi-component model of COVID-19 pathophysiology, in which at least two distinct mechanisms can predispose to life-threatening disease: failure to control viral replication; or an enhanced tendency towards pulmonary inflammation and intravascular coagulation. We show that comparison between cases of critical illness and population controls is highly efficient for the detection of therapeutically relevant mechanisms of disease
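
    The Mendelian randomization evidence cited above combines per-variant effects on an exposure (such as gene expression) with effects on critical disease. The abstract does not say which estimator was used; an inverse-variance-weighted (IVW) estimate is a common default and is sketched here under that assumption, not as the study's method.

```python
# Hedged sketch of an inverse-variance-weighted (IVW) Mendelian randomization
# estimate; the estimator actually used by the study is not stated above.
import numpy as np

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Per-variant Wald ratios combined with inverse-variance weights.

    beta_exposure: SNP effects on the exposure (e.g. gene expression level)
    beta_outcome:  SNP effects on the outcome (critical COVID-19)
    se_outcome:    standard errors of the outcome effects
    """
    bx = np.asarray(beta_exposure, float)
    by = np.asarray(beta_outcome, float)
    se = np.asarray(se_outcome, float)
    ratios = by / bx                      # Wald ratio per instrument
    weights = (bx / se) ** 2              # inverse of each ratio's variance
    estimate = np.sum(weights * ratios) / np.sum(weights)
    estimate_se = np.sqrt(1.0 / np.sum(weights))
    return estimate, estimate_se
```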

    Impact of primary kidney disease on the effects of empagliflozin in patients with chronic kidney disease: secondary analyses of the EMPA-KIDNEY trial

    Background: The EMPA-KIDNEY trial showed that empagliflozin reduced the risk of the primary composite outcome of kidney disease progression or cardiovascular death in patients with chronic kidney disease mainly through slowing progression. We aimed to assess how effects of empagliflozin might differ by primary kidney disease across its broad population. Methods: EMPA-KIDNEY, a randomised, controlled, phase 3 trial, was conducted at 241 centres in eight countries (Canada, China, Germany, Italy, Japan, Malaysia, the UK, and the USA). Patients were eligible if their estimated glomerular filtration rate (eGFR) was 20 to less than 45 mL/min per 1·73 m2, or 45 to less than 90 mL/min per 1·73 m2 with a urinary albumin-to-creatinine ratio (uACR) of 200 mg/g or higher at screening. They were randomly assigned (1:1) to 10 mg oral empagliflozin once daily or matching placebo. Effects on kidney disease progression (defined as a sustained ≥40% eGFR decline from randomisation, end-stage kidney disease, a sustained eGFR below 10 mL/min per 1·73 m2, or death from kidney failure) were assessed using prespecified Cox models, and eGFR slope analyses used shared parameter models. Subgroup comparisons were performed by including relevant interaction terms in models. EMPA-KIDNEY is registered with ClinicalTrials.gov, NCT03594110. Findings: Between May 15, 2019, and April 16, 2021, 6609 participants were randomly assigned and followed up for a median of 2·0 years (IQR 1·5–2·4). Prespecified subgroupings by primary kidney disease included 2057 (31·1%) participants with diabetic kidney disease, 1669 (25·3%) with glomerular disease, 1445 (21·9%) with hypertensive or renovascular disease, and 1438 (21·8%) with other or unknown causes. Kidney disease progression occurred in 384 (11·6%) of 3304 patients in the empagliflozin group and 504 (15·2%) of 3305 patients in the placebo group (hazard ratio 0·71 [95% CI 0·62–0·81]), with no evidence that the relative effect size varied significantly by primary kidney disease (p for heterogeneity=0·62). The between-group difference in chronic eGFR slopes (ie, from 2 months to final follow-up) was 1·37 mL/min per 1·73 m2 per year (95% CI 1·16–1·59), representing a 50% (42–58) reduction in the rate of chronic eGFR decline. This relative effect of empagliflozin on chronic eGFR slope was similar in analyses by different primary kidney diseases, including in explorations by type of glomerular disease and diabetes (p values for heterogeneity all >0·1). Interpretation: In a broad range of patients with chronic kidney disease at risk of progression, including a wide range of non-diabetic causes of chronic kidney disease, empagliflozin reduced risk of kidney disease progression. Relative effect sizes were broadly similar irrespective of the cause of primary kidney disease, suggesting that SGLT2 inhibitors should be part of a standard of care to minimise risk of kidney failure in chronic kidney disease. Funding: Boehringer Ingelheim, Eli Lilly, and UK Medical Research Council
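
    The headline numbers in this abstract can be sanity-checked directly: the crude event proportions give a risk ratio close to, but distinct from, the Cox-model hazard ratio, and the reported 50% slope reduction implies the underlying rate of chronic eGFR decline in the placebo group. A small worked check using only the figures quoted above:

```python
# Worked check using only figures quoted in the abstract. The 0.71 hazard
# ratio comes from the trial's prespecified Cox models; the crude risk ratio
# below is a rough cross-check, not a re-analysis.
events_empa, n_empa = 384, 3304
events_placebo, n_placebo = 504, 3305

risk_empa = events_empa / n_empa              # ~0.116 -> 11.6%
risk_placebo = events_placebo / n_placebo     # ~0.152 -> 15.2%
crude_risk_ratio = risk_empa / risk_placebo   # ~0.76, in the same range as HR 0.71

# The 1.37 mL/min per 1.73 m2 per year slope difference, described as a 50%
# reduction, implies a placebo-arm chronic eGFR decline of roughly 2.7 per year.
implied_placebo_slope = 1.37 / 0.50

print(round(risk_empa, 3), round(risk_placebo, 3),
      round(crude_risk_ratio, 2), round(implied_placebo_slope, 2))
```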