
    Causes and consequences of purifying selection on SARS-CoV-2

    Owing to a lag between a deleterious mutation’s appearance and its selective removal, gold-standard methods for mutation rate estimation assume no meaningful loss of mutations between parents and offspring. Indeed, from analysis of closely related SARS-CoV-2 lineages, the Ka/Ks ratio was previously estimated as 1.008, suggesting no within-host selection. By contrast, we find a higher number of observed SNPs at 4-fold degenerate sites than elsewhere and, allowing for the virus’s complex mutational and compositional biases, estimate that the mutation rate is at least 49–67% higher than would be estimated based on the rate of appearance of variants in sampled genomes. Given the high Ka/Ks, one might assume that the majority of such intrahost selection is the purging of nonsense mutations. However, we estimate that selection against nonsense mutations accounts for only ∼10% of all the “missing” mutations. Instead, classical protein-level selective filters (against chemically disparate amino acids and those predicted to disrupt protein functionality) account for many missing mutations. It is less obvious why, for an intracellular parasite, amino acid cost parameters, notably amino acid decay rate, are also significant. Perhaps most surprisingly, we also find evidence for real-time selection against synonymous mutations that move codon usage away from that of humans. We conclude that there is common intrahost selection on SARS-CoV-2 that acts on nonsense, missense, and possibly synonymous mutations. This has implications for methods of mutation rate estimation, for determining times to common ancestry, and for the potential for intrahost evolution, including vaccine escape.
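    The headline estimate rests on comparing per-site variant counts at 4-fold degenerate sites (taken as a near-neutral baseline) with counts at all other sites. A minimal sketch of that comparison, with made-up SNP and site counts standing in for the real genome data, and ignoring the mutational and compositional bias corrections the authors apply, might look like this:

```python
# Minimal sketch (hypothetical numbers): estimate how many mutations are
# "missing" from sampled genomes by treating 4-fold degenerate sites as a
# selectively near-neutral baseline for the per-site mutation rate.

observed_snps_4fold = 1800   # hypothetical SNP count at 4-fold degenerate sites
n_sites_4fold = 6000         # hypothetical number of 4-fold degenerate sites

observed_snps_other = 5500   # hypothetical SNP count at all other sites
n_sites_other = 24000        # hypothetical number of other sites

rate_4fold = observed_snps_4fold / n_sites_4fold   # near-neutral per-site rate
rate_other = observed_snps_other / n_sites_other   # per-site rate after intrahost selection

# If 4-fold degenerate sites reflect the true mutation rate, the shortfall at
# other sites estimates the fraction of mutations purged before sampling.
missing_fraction = 1 - rate_other / rate_4fold
underestimate = rate_4fold / rate_other - 1        # how much higher the true rate is

print(f"per-site rate, 4-fold degenerate sites: {rate_4fold:.4f}")
print(f"per-site rate, other sites:             {rate_other:.4f}")
print(f"fraction of mutations missing:          {missing_fraction:.1%}")
print(f"mutation rate underestimated by:        {underestimate:.1%}")
```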

    Patients' functioning as predictor of nursing workload in acute hospital units providing rehabilitation care: a multi-centre cohort study

    Background: Management decisions regarding the quality and quantity of nurse staffing have important consequences for hospital budgets. Furthermore, these management decisions must address the nursing care requirements of the particular patients within an organizational unit. In order to determine optimal nurse staffing needs, the extent of nursing workload must first be known. Nursing workload is largely a function of the composite of the patients' individual health status, particularly with respect to functioning status, individual need for nursing care, and severity of symptoms. The International Classification of Functioning, Disability and Health (ICF) and its derived subsets, the so-called ICF Core Sets, are a standardized approach to describing patients' functioning status. The objectives of this study were to (1) examine the association between patients' functioning, as encoded by categories of the Acute ICF Core Sets, and nursing workload in patients in the acute care situation, (2) compare the variance in nursing workload explained by the ICF Core Set categories with that explained by the Barthel Index, and (3) validate the Acute ICF Core Sets by their ability to predict nursing workload.
    Methods: Patients' functioning at admission was assessed using the respective Acute ICF Core Set and the Barthel Index, whereas nursing workload data were collected using an established instrument. Associations between dependent and independent variables were modelled using linear regression. Variable selection was carried out using penalized regression.
    Results: In patients with neurological and cardiopulmonary conditions, selected ICF categories and the Barthel Index score explained the same variance in nursing workload (44% in neurological conditions, 35% in cardiopulmonary conditions), whereas the ICF was slightly superior to the Barthel Index score for musculoskeletal conditions (20% versus 16%).
    Conclusions: A substantial fraction of the variance in nursing workload in patients with rehabilitation needs in the acute hospital could be predicted by selected categories of the Acute ICF Core Sets, or by the Barthel Index score. Incorporating ICF Core Set-based data in nursing management decisions, particularly staffing decisions, may be beneficial.
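    As a rough illustration of the Methods (not the authors' actual model or data), penalized regression for selecting ICF Core Set categories that predict nursing workload could be sketched with scikit-learn's Lasso on placeholder values:

```python
# Minimal sketch (hypothetical data): penalized linear regression selecting
# ICF Core Set categories that predict nursing workload, in the spirit of the
# Methods above. Real category codes, coding rules and the workload
# instrument are not reproduced here.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_patients, n_categories = 200, 40   # placeholder sample and feature sizes
X = rng.integers(0, 5, size=(n_patients, n_categories)).astype(float)  # ICF qualifiers 0-4

# Hypothetical "true" relationship: only a few categories drive workload.
true_coef = np.zeros(n_categories)
true_coef[[2, 7, 11]] = [8.0, 5.0, 3.0]
y = X @ true_coef + rng.normal(scale=10.0, size=n_patients)  # workload, e.g. minutes/day

X_std = StandardScaler().fit_transform(X)

# Lasso with cross-validated penalty: coefficients shrunk to zero drop out,
# leaving a sparse set of "selected" ICF categories.
model = LassoCV(cv=5).fit(X_std, y)
selected = np.flatnonzero(model.coef_ != 0)

print("selected ICF category indices:", selected)
print(f"explained variance (R^2): {model.score(X_std, y):.2f}")
```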

    European Atlas of Natural Radiation

    Natural ionizing radiation is considered the largest contributor to the collective effective dose received by the world population. The human population is continuously exposed to ionizing radiation from several natural sources, which can be classified into two broad categories: high-energy cosmic rays incident on the Earth’s atmosphere and releasing secondary radiation (cosmic contribution); and radioactive nuclides generated during the formation of the Earth and still present in the Earth’s crust (terrestrial contribution). Terrestrial radioactivity is mostly produced by the uranium and thorium radioactive families together with potassium. In most circumstances, radon, a noble gas produced in the radioactive decay of uranium, is the most important contributor to the total dose. This Atlas aims to present the current state of knowledge of natural radioactivity by giving general background information and describing its various sources. This reference material is complemented by a collection of maps of Europe displaying the levels of natural radioactivity caused by different sources. It is a compilation of contributions and reviews received from more than 80 experts in their fields, from universities, research centres, national and European authorities, and international organizations. This Atlas provides reference material and makes harmonized datasets available to the scientific community and national competent authorities. In parallel, this Atlas may serve as a tool for the public to:
    • familiarize itself with natural radioactivity;
    • be informed about the levels of natural radioactivity caused by different sources;
    • have a more balanced view of the annual dose received by the world population, to which natural radioactivity is the largest contributor;
    • and make direct comparisons between doses from natural sources of ionizing radiation and those from man-made (artificial) ones, and hence better understand the latter.
    JRC.G.10 - Knowledge for Nuclear Security and Safety

    Multidisciplinary constraints of hydrothermal explosions based on the 2013 Gengissig lake events, Kverkfjöll volcano, Iceland

    Highlights
    • A multidisciplinary approach to unravel the energetics of hydrothermal explosions.
    • Pressure failure caused by a lake drainage triggered the hydrothermal explosions.
    • Bedrock nature controlled the explosion dynamics and the way energy was released.
    • Approx. 30% of the available thermal energy is converted into mechanical energy.
    • Released seismic energy as a proxy to detect past (and future?) hydrothermal explosions.
    Hydrothermal explosions frequently occur in geothermal areas, exhibiting various mechanisms and energies of explosivity. Their deposits, though often hard to recognise or poorly preserved, provide important insights for quantifying the dynamics and energy of these poorly understood explosive events. Furthermore, the host-rock lithology of the geothermal system exerts a control on the efficiency of energy release during an explosion. We present results from a detailed study of recent hydrothermal explosion deposits within an active geothermal area at Kverkfjöll, a central volcano at the northern edge of Vatnajökull. On August 15th, 2013, a small jökulhlaup occurred when the Gengissig ice-dammed lake drained at Kverkfjöll. The lake level dropped by approximately 30 m, decreasing pressure on the lake bed and triggering several hydrothermal explosions on the 16th. Here, a multidisciplinary approach combining detailed field work, laboratory studies, and models of the energetics of explosions with information on the duration and amplitudes of seismic signals has been used to analyse the mechanisms and characteristics of these hydrothermal explosions. Field and laboratory studies were also carried out to help constrain the sedimentary sequence involved in the event. The explosions lasted for 40–50 s and involved the surficial part of an unconsolidated and hydrothermally altered glacio-lacustrine deposit composed of pyroclasts, lavas, scoriaceous fragments, and fine-grained welded or loosely consolidated aggregates, interbedded with clay-rich levels. Several small fans of ejecta were formed, reaching a distance of 1 km north of the lake and covering an area of approximately 0.3 km², with a maximum thickness of 40 cm at the crater walls. The material (a volume of approximately 10⁴ m³) was ejected by the expanding boiling fluid, generated by a pressure failure affecting the surficial geothermal reservoir. The maximum thermal, craterisation and ejection energies calculated for the explosion areas are on the order of 10¹¹, 10¹⁰ and 10⁹ J, respectively. Comparison of these with estimates based on the volume of the ejecta and the crater sizes yields good agreement. We estimate that approximately 30% of the available thermal energy was converted into mechanical energy during this event. The residual energy was largely dissipated as heat, while only a small portion was converted into seismic energy. Estimation of the amount of freshly fragmented clasts in the ejected material, obtained from SEM morphological analyses, reveals that a low but significant amount of energy was consumed by fragmentation. Decompression experiments were performed in the laboratory, mimicking the conditions caused by the drainage of the lake. The experimental results confirm that only a minor amount of energy is consumed by the creation of new surfaces during fragmentation, whereas most of the fresh fragments derive from the disaggregation of aggregates. Furthermore, ejection velocities of the particles (40–50 m/s), measured via high-speed videos, are consistent with those estimated from the field.
    The multidisciplinary approach used here to investigate hydrothermal explosions has proven to be a valuable tool that can provide robust constraints on energy release and partitioning for such small-size yet hazardous steam-explosion events.
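    For orientation, the rounded energy figures quoted above can be related in a short calculation; the values below are order-of-magnitude placeholders taken from the abstract, not the paper's exact results:

```python
# Order-of-magnitude sketch using the rounded energies quoted in the abstract
# (placeholders, not the paper's exact figures).
E_thermal = 1e11   # available thermal energy, J
E_crater = 1e10    # craterisation (excavation) energy, J
E_eject = 1e9      # kinetic energy of the ejected material, J

for name, energy in [("craterisation", E_crater), ("ejection", E_eject)]:
    print(f"{name:>13}: {energy:.0e} J  ({energy / E_thermal:.1%} of thermal)")

# The abstract separately estimates that roughly 30% of the available thermal
# energy was converted into mechanical energy, with the remainder mostly
# dissipated as heat and only a small part released as seismic energy.
```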

    The Science Performance of JWST as Characterized in Commissioning

    This paper characterizes the actual science performance of the James Webb Space Telescope (JWST), as determined from the six-month commissioning period. We summarize the performance of the spacecraft, telescope, science instruments, and ground system, with an emphasis on differences from pre-launch expectations. Commissioning has made clear that JWST is fully capable of achieving the discoveries for which it was built. Moreover, almost across the board, the science performance of JWST is better than expected; in most cases, JWST will go deeper faster than expected. The telescope and instrument suite have demonstrated the sensitivity, stability, image quality, and spectral range that are necessary to transform our understanding of the cosmos through observations spanning from near-Earth asteroids to the most distant galaxies.
    Comment: 5th version as accepted to PASP; 31 pages, 18 figures; https://iopscience.iop.org/article/10.1088/1538-3873/acb29

    The James Webb Space Telescope Mission

    Twenty-six years ago a small committee report, building on earlier studies, expounded a compelling and poetic vision for the future of astronomy, calling for an infrared-optimized space telescope with an aperture of at least 4 m. With the support of their governments in the US, Europe, and Canada, 20,000 people realized that vision as the 6.5 m James Webb Space Telescope. A generation of astronomers will celebrate their accomplishments for the life of the mission, potentially as long as 20 years, and beyond. This report and the scientific discoveries that follow are extended thank-you notes to the 20,000 team members. The telescope is working perfectly, with much better image quality than expected. In this and accompanying papers, we give a brief history, describe the observatory, outline its objectives and current observing program, and discuss the inventions and people who made it possible. We cite detailed reports on the design and the measured performance on orbit.
    Comment: Accepted by PASP for the special issue on The James Webb Space Telescope Overview, 29 pages, 4 figures

    Transgene-design: A web application for the design of mammalian transgenes

    SUMMARY: Transgene-design is a web application to help design transgenes for use in mammalian studies. It is predicated on the recent discovery that human intronless transgenes and native retrogenes can be expressed very effectively if the GC content at exonic synonymous sites is high. In addition, as exonic splice enhancers resident in intron-containing genes may have different utility in intronless genes, their density can be reduced or increased. Input can be a native gene or a commercially ‘optimised’ gene. The option to retain the first intron and to protect or avoid other motifs is also provided. AVAILABILITY AND IMPLEMENTATION: Transgene-design is based on a Ruby on Rails platform. The application is available at https://transgene-design.bath.ac.uk. The code is available under the GNU General Public License from GitHub (https://github.com/smuehlh/transgenes). SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
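    Since the tool's premise is that high GC at exonic synonymous sites boosts expression of intronless transgenes, a quick way to gauge a candidate sequence is its third-codon-position GC content (GC3). The helper below is an illustrative sketch and not part of the Transgene-design code base:

```python
# Illustrative sketch (not from the Transgene-design code base): compute GC
# content at third codon positions (GC3), a common proxy for GC content at
# exonic synonymous sites, for an in-frame coding sequence.
def gc3(cds: str) -> float:
    """Fraction of third codon positions that are G or C."""
    seq = cds.upper().replace("U", "T")
    if len(seq) % 3 != 0:
        raise ValueError("coding sequence length must be a multiple of 3")
    third_positions = seq[2::3]                      # every codon's third base
    gc = sum(base in "GC" for base in third_positions)
    return gc / len(third_positions)

# Toy example: a short in-frame sequence (placeholder, not a real transgene).
example_cds = "ATGGCCGTGAAGCTGTAA"
print(f"GC3 = {gc3(example_cds):.2f}")
```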

    Competing tRNA(CAG) in Ascoidea asiatica

    These files contain supplementary data to the publication "Endogenous stochastic decoding of the CUG codon by competing Ser- and Leu-tRNAs in Ascoidea asiatica" by Stefanie Mühlhausen, Hans Dieter Schmitt, Kuan-Ting Pan, Uwe Plessmann, Henning Urlaub, Laurence D. Hurst and Martin Kollmar.