29 research outputs found

    MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

    Scientific Application Requirements for Leadership Computing at the Exascale

    The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015–2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which call for further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, "the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software." We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware–software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems.
The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease is foreseen in fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity. These choices by application developers reflect the expected needs of applications or the expected reality of available hardware.
One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops [floating point operations per second], while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static, while aggregate requirements will grow with the system. This projection is consistent with a relatively small increase in performance per core with a dramatic increase in the number of cores. Leadership system software must face and overcome issues that will undoubtedly be exacerbated at the exascale. The operating system (OS) must be as unobtrusive as possible and possess more stability, reliability, and fault tolerance during application execution. As applications will be more likely at the exascale to experience loss of resources during an execution, the OS must mitigate such a loss with a range of responses. New fault tolerance paradigms must be developed and integrated into applications. Just as application input and output must not be an afterthought in hardware design, job management, too, must not be an afterthought in system software design. Efficient scheduling of those resources will be a major obstacle faced by leadership computing centers at the exascale.

    Evaluating the Effects of SARS-CoV-2 Spike Mutation D614G on Transmissibility and Pathogenicity.

    Global dispersal and increasing frequency of the SARS-CoV-2 spike protein variant D614G are suggestive of a selective advantage but may also be due to a random founder effect. We investigate the hypothesis for positive selection of spike D614G in the United Kingdom using more than 25,000 whole genome SARS-CoV-2 sequences. Despite the availability of a large dataset, well represented by both spike 614 variants, not all approaches showed a conclusive signal of positive selection. Population genetic analysis indicates that 614G increases in frequency relative to 614D in a manner consistent with a selective advantage. We do not find any indication that patients infected with the spike 614G variant have higher COVID-19 mortality or clinical severity, but 614G is associated with higher viral load and younger age of patients. Significant differences in growth and size of 614G phylogenetic clusters indicate a need for continued study of this variant.

    Hospital admission and emergency care attendance risk for SARS-CoV-2 delta (B.1.617.2) compared with alpha (B.1.1.7) variants of concern: a cohort study

    Background: The SARS-CoV-2 delta (B.1.617.2) variant was first detected in England in March, 2021. It has since rapidly become the predominant lineage, owing to high transmissibility. It is suspected that the delta variant is associated with more severe disease than the previously dominant alpha (B.1.1.7) variant. We aimed to characterise the severity of the delta variant compared with the alpha variant by determining the relative risk of hospital attendance outcomes. Methods: This cohort study was done among all patients with COVID-19 in England between March 29 and May 23, 2021, who were identified as being infected with either the alpha or delta SARS-CoV-2 variant through whole-genome sequencing. Individual-level data on these patients were linked to routine health-care datasets on vaccination, emergency care attendance, hospital admission, and mortality (data from Public Health England's Second Generation Surveillance System and COVID-19-associated deaths dataset; the National Immunisation Management System; and NHS Digital Secondary Uses Services and Emergency Care Data Set). The risks for hospital admission and emergency care attendance were compared between patients with sequencing-confirmed delta and alpha variants for the whole cohort and by vaccination status subgroups. Stratified Cox regression was used to adjust for age, sex, ethnicity, deprivation, recent international travel, area of residence, calendar week, and vaccination status. Findings: Individual-level data on 43 338 COVID-19-positive patients (8682 with the delta variant, 34 656 with the alpha variant; median age 31 years [IQR 17–43]) were included in our analysis. 196 (2·3%) patients with the delta variant versus 764 (2·2%) patients with the alpha variant were admitted to hospital within 14 days after the specimen was taken (adjusted hazard ratio [HR] 2·26 [95% CI 1·32–3·89]).
498 (5·7%) patients with the delta variant versus 1448 (4·2%) patients with the alpha variant were admitted to hospital or attended emergency care within 14 days (adjusted HR 1·45 [1·08–1·95]). Most patients were unvaccinated (32 078 [74·0%] across both groups). The HRs for vaccinated patients with the delta variant versus the alpha variant (adjusted HR for hospital admission 1·94 [95% CI 0·47–8·05] and for hospital admission or emergency care attendance 1·58 [0·69–3·61]) were similar to the HRs for unvaccinated patients (2·32 [1·29–4·16] and 1·43 [1·04–1·97]; p=0·82 for both) but the precision for the vaccinated subgroup was low. Interpretation: This large national study found a higher hospital admission or emergency care attendance risk for patients with COVID-19 infected with the delta variant compared with the alpha variant. Results suggest that outbreaks of the delta variant in unvaccinated populations might lead to a greater burden on health-care services than the alpha variant. Funding: Medical Research Council; UK Research and Innovation; Department of Health and Social Care; and National Institute for Health Research.
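As a quick sanity check on the counts reported in this abstract, the crude (unadjusted) admission proportions can be compared directly. The gap between the crude ratio and the adjusted HR of 2·26 illustrates why the stratified Cox adjustment (age, calendar week, vaccination status, and so on) matters; the counts are taken from the abstract, but the comparison itself is ours:

```python
# Crude comparison of hospital-admission proportions for the delta and alpha
# cohorts, using the counts reported in the abstract. The near-identical raw
# proportions, versus an adjusted HR of 2.26, show how strongly the covariates
# in the stratified Cox model confound the unadjusted comparison.
delta_admitted, delta_total = 196, 8682     # delta: admitted within 14 days
alpha_admitted, alpha_total = 764, 34656    # alpha: admitted within 14 days

crude_ratio = (delta_admitted / delta_total) / (alpha_admitted / alpha_total)
print(round(crude_ratio, 2))  # 1.02 -- far below the adjusted HR of 2.26
```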

    Changes in symptomatology, reinfection, and transmissibility associated with the SARS-CoV-2 variant B.1.1.7: an ecological study

    Background: The SARS-CoV-2 variant B.1.1.7 was first identified in December, 2020, in England. We aimed to investigate whether increases in the proportion of infections with this variant are associated with differences in symptoms or disease course, reinfection rates, or transmissibility. Methods: We did an ecological study to examine the association between the regional proportion of infections with the SARS-CoV-2 B.1.1.7 variant and reported symptoms, disease course, rates of reinfection, and transmissibility. Data on types and duration of symptoms were obtained from longitudinal reports from users of the COVID Symptom Study app who reported a positive test for COVID-19 between Sept 28 and Dec 27, 2020 (during which the prevalence of B.1.1.7 increased most notably in parts of the UK). From this dataset, we also estimated the frequency of possible reinfection, defined as the presence of two reported positive tests separated by more than 90 days with a period of reporting no symptoms for more than 7 days before the second positive test. The proportion of SARS-CoV-2 infections with the B.1.1.7 variant across the UK was estimated with use of genomic data from the COVID-19 Genomics UK Consortium and data from Public Health England on spike-gene target failure (a non-specific indicator of the B.1.1.7 variant) in community cases in England. We used linear regression to examine the association between reported symptoms and proportion of B.1.1.7. We assessed the Spearman correlation between the proportion of B.1.1.7 cases and number of reinfections over time, and between the number of positive tests and reinfections. We estimated incidence for B.1.1.7 and previous variants, and compared the effective reproduction number, Rt, for the two incidence estimates. Findings: From Sept 28 to Dec 27, 2020, positive COVID-19 tests were reported by 36 920 COVID Symptom Study app users whose region was known and who reported as healthy on app sign-up.
We found no changes in reported symptoms or disease duration associated with B.1.1.7. For the same period, possible reinfections were identified in 249 (0·7% [95% CI 0·6–0·8]) of 36 509 app users who reported a positive swab test before Oct 1, 2020, but there was no evidence that the frequency of reinfections was higher for the B.1.1.7 variant than for pre-existing variants. Reinfection occurrences were more positively correlated with the overall regional rise in cases (Spearman correlation 0·56–0·69 for South East, London, and East of England) than with the regional increase in the proportion of infections with the B.1.1.7 variant (Spearman correlation 0·38–0·56 in the same regions), suggesting B.1.1.7 does not substantially alter the risk of reinfection. We found a multiplicative increase in the Rt of B.1.1.7 by a factor of 1·35 (95% CI 1·02–1·69) relative to pre-existing variants. However, Rt fell below 1 during regional and national lockdowns, even in regions with high proportions of infections with the B.1.1.7 variant. Interpretation: The lack of change in symptoms identified in this study indicates that existing testing and surveillance infrastructure do not need to change specifically for the B.1.1.7 variant. In addition, given that there was no apparent increase in the reinfection rate, vaccines are likely to remain effective against the B.1.1.7 variant. Funding: Zoe Global, Department of Health (UK), Wellcome Trust, Engineering and Physical Sciences Research Council (UK), National Institute for Health Research (UK), Medical Research Council (UK), Alzheimer's Society.
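The study's possible-reinfection definition is mechanical enough to state in code. The sketch below is ours, not the paper's (function and variable names are invented, and the symptom-free window is our reading of "more than 7 days", modelled as an 8-day window), assuming per-day symptom reports:

```python
from datetime import date, timedelta

# Sketch of the possible-reinfection rule: two reported positive tests more
# than 90 days apart, with no symptoms reported for more than 7 days before
# the second positive test (here: a symptom-free 8-day window).

def possible_reinfection(positive_tests, symptomatic_days):
    """positive_tests: iterable of dates; symptomatic_days: set of dates."""
    tests = sorted(positive_tests)
    for k, first in enumerate(tests):
        for second in tests[k + 1:]:
            if (second - first).days <= 90:
                continue
            window = {second - timedelta(days=d) for d in range(1, 9)}
            if not (window & symptomatic_days):  # symptom-free before test 2
                return True
    return False

two_tests = [date(2020, 3, 1), date(2020, 7, 15)]
print(possible_reinfection(two_tests, {date(2020, 3, 2)}))   # True
print(possible_reinfection(two_tests, {date(2020, 7, 10)}))  # False
```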

    Genomic epidemiology of SARS-CoV-2 in a UK university identifies dynamics of transmission

    Understanding SARS-CoV-2 transmission in higher education settings is important to limit spread between students and into at-risk populations. In this study, we sequenced 482 SARS-CoV-2 isolates from the University of Cambridge from 5 October to 6 December 2020. We performed a detailed phylogenetic comparison with 972 isolates from the surrounding community, complemented with epidemiological and contact tracing data, to determine transmission dynamics. We observed limited viral introductions into the university; the majority of student cases were linked to a single genetic cluster, likely following social gatherings at a venue outside the university. We identified considerable onward transmission associated with student accommodation and courses; this was effectively contained using local infection control measures and following a national lockdown. Transmission clusters were largely segregated within the university or the community. Our study highlights key determinants of SARS-CoV-2 transmission and effective interventions in a higher education setting that will inform public health policy during pandemics.

    Genomic assessment of quarantine measures to prevent SARS-CoV-2 importation and transmission

    Mitigation of SARS-CoV-2 transmission from international travel is a priority. We evaluated the effectiveness of travellers being required to quarantine for 14 days on return to England in Summer 2020. We identified 4,207 travel-related SARS-CoV-2 cases and their contacts, and identified 827 associated SARS-CoV-2 genomes. Overall, quarantine was associated with a lower rate of contacts, and the impact of quarantine was greatest in the 16–20 age group. 186 SARS-CoV-2 genomes were sufficiently unique to identify travel-related clusters. Fewer genomically linked cases were observed for index cases who returned from countries with a quarantine requirement compared with countries with no quarantine requirement. This difference was explained by fewer importation events per identified genome for these cases, as opposed to fewer onward contacts per case. Overall, our study demonstrates that a 14-day quarantine period reduces, but does not completely eliminate, the onward transmission of imported cases, mainly by dissuading travel to countries with a quarantine requirement.

    Exploration of Shared Genetic Architecture Between Subcortical Brain Volumes and Anorexia Nervosa

    The Diffusion Equation Method for Global Optimization and Its Application to Magnetotelluric Geoprospecting

    In geoprospecting, the conductivity of rock is determined from its effect on electromagnetic waves. From the conductivity, we can deduce the type of rock (or oil or water) and the nature of the geological formation. Unfortunately, electromagnetic inverse problems of this kind are ill-posed. We investigate the use of the Diffusion Equation Method (DEM) for global optimization in implementing an approximate quasisolution method for determining the size, shape, and orientation of an underground deposit. We examine the theoretical underpinnings of the DEM, first discussing the concepts of smoothing and continuation and then establishing the effects of diffusion upon sinusoids and polynomials. From these foundations, we explain the effect of diffusion upon coercive functions and its implications for solving global optimization problems. We examine the robustness of the DEM and its limitations in finding the global optimizer, introduce a discrete DEM using finite differencing, and compare its cost to other global optimization methods. The main computational expense for this application is in repeatedly evaluating the objective function, which fortunately can be done in a highly parallel manner. We discuss our parallel implementation of the magnetotelluric geoprospecting objective function and discrete DEM, and analyze the performance and scalability of our approach.
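The continuation idea behind the DEM (smooth heavily, optimize, then track the minimizer back as the smoothing is relaxed) can be sketched in one dimension. The objective below is a made-up stand-in for the magnetotelluric misfit, chosen so that its diffusion is available in closed form: as the abstract notes, diffusion acts simply on polynomials and sinusoids (x^2 picks up s^2, and sin(kx) is damped by exp(-k^2 s^2 / 2)):

```python
import math

# Toy objective f(x) = x^2 + 10 sin(3x): coercive, with many local minima.
# Its heat-kernel smoothing (std. dev. s) is exact: x^2 -> x^2 + s^2, and
# sin(3x) is damped by exp(-9 s^2 / 2) = exp(-4.5 s^2).

def smoothed(x, s):
    return x * x + s * s + 10.0 * math.exp(-4.5 * s * s) * math.sin(3.0 * x)

xs = [i * 0.001 - 5.0 for i in range(10001)]  # search grid on [-5, 5]

# At heavy smoothing the surface is essentially convex: take its grid minimum.
i = min(range(len(xs)), key=lambda j: smoothed(xs[j], 2.0))

# Continuation: relax the smoothing, re-descending locally at each level.
for s in (1.0, 0.5, 0.0):
    while True:
        nbrs = [j for j in (i - 1, i, i + 1) if 0 <= j < len(xs)]
        best = min(nbrs, key=lambda j: smoothed(xs[j], s))
        if best == i:
            break
        i = best

print(round(xs[i], 2))  # descends into the global basin near x = -0.51
```

A plain local descent from the smoothed minimizer at x = 0 would be trapped by the nearest wiggle of the unsmoothed surface; tracking through decreasing s is what steers it into the global basin.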