49 research outputs found

    Progressive refinement rendering of implicit surfaces

    The visualisation of implicit surfaces can be an inefficient task when such surfaces are complex and highly detailed. Visualising a surface by first converting it to a polygon mesh may lead to an excessive polygon count, while visualising it by direct ray casting is often slow. In this paper we present a progressive refinement renderer for implicit surfaces that are Lipschitz continuous. The renderer first displays a low-resolution estimate of what the final image is going to be and, as the computation progresses, increases the quality of this estimate at an interactive frame rate. This provides a quick previewing facility that significantly reduces the design cycle of a new and complex implicit surface. The renderer is also capable of completing an image faster than a conventional implicit surface rendering algorithm based on ray casting.
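The Lipschitz condition is what makes conservative ray marching ("sphere tracing") possible: if the surface function f has Lipschitz constant L, the ray can always advance by f(p)/L without skipping across the zero set. A minimal illustrative sketch of that idea, not the paper's renderer (the function, step limits and tolerances here are our own assumptions):

```python
import math

def sphere_trace(f, origin, direction, lipschitz=1.0, max_t=100.0, eps=1e-4):
    """March along origin + t*direction. If f has Lipschitz constant L,
    stepping by f(p)/L can never cross the zero set, so each step is safe."""
    t = 0.0
    for _ in range(256):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = f(p)
        if dist < eps:
            return t           # hit: surface within tolerance
        t += dist / lipschitz  # conservative, guaranteed-safe step
        if t > max_t:
            break
    return None                # miss

# Unit sphere: f(p) = |p| - 1 is a signed distance field (Lipschitz constant 1).
unit_sphere = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
hit = sphere_trace(unit_sphere, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0))
print(hit)  # → 2.0 (the ray meets the sphere at z = -1)
```

A progressive renderer can run this per pixel at a coarse grid first, then subdivide, which is what gives the low-resolution preview that sharpens over time.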

    The Product and System Specificities of Measuring Curation Impact

    Using three datasets archived at the National Center for Atmospheric Research (NCAR), we describe the creation of a ‘data usage index’ for curation-specific impact assessments. Our work is focused on quantitatively evaluating climate and weather data used in earth and space science research, but we also discuss the application of this approach to other research data contexts. We conclude with some proposed future directions for metric-based work in data curation.

    The Community Climate System Model version 4

    Author Posting. © American Meteorological Society, 2011. This article is posted here by permission of American Meteorological Society for personal use, not for redistribution. The definitive version was published in Journal of Climate 24 (2011): 4973–4991, doi:10.1175/2011JCLI4083.1.

    The fourth version of the Community Climate System Model (CCSM4) was recently completed and released to the climate community. This paper describes developments to all CCSM components, and documents fully coupled preindustrial control runs compared to the previous version, CCSM3. Using the standard atmosphere and land resolution of 1° results in the sea surface temperature biases in the major upwelling regions being comparable to the 1.4°-resolution CCSM3. Two changes to the deep convection scheme in the atmosphere component result in CCSM4 producing El Niño–Southern Oscillation variability with a much more realistic frequency distribution than in CCSM3, although the amplitude is too large compared to observations. These changes also improve the Madden–Julian oscillation and the frequency distribution of tropical precipitation. A new overflow parameterization in the ocean component leads to an improved simulation of the Gulf Stream path and the North Atlantic Ocean meridional overturning circulation. Changes to the CCSM4 land component lead to a much improved annual cycle of water storage, especially in the tropics. The CCSM4 sea ice component uses much more realistic albedos than CCSM3, and for several reasons the Arctic sea ice concentration is improved in CCSM4. An ensemble of twentieth-century simulations produces a good match to the observed September Arctic sea ice extent from 1979 to 2005. The CCSM4 ensemble mean increase in globally averaged surface temperature between 1850 and 2005 is larger than the observed increase by about 0.4°C. This is consistent with the fact that CCSM4 does not include a representation of the indirect effects of aerosols, although other factors may come into play. The CCSM4 still has significant biases, such as the mean precipitation distribution in the tropical Pacific Ocean, too much low cloud in the Arctic, and the latitudinal distributions of shortwave and longwave cloud forcings.

    This work was supported by the National Science Foundation, which sponsors NCAR and the CCSM Project. The project is also sponsored by the U.S. Department of Energy (DOE). Thanks are also due to the many other software engineers and scientists who worked on developing CCSM4, and to the Computational and Information Systems Laboratory at NCAR, which provided the computing resources through the Climate Simulation Laboratory. Hunke was supported within the Climate, Ocean and Sea Ice Modeling project at Los Alamos National Laboratory, which is funded by the Biological and Environmental Research division of the DOE Office of Science. The Los Alamos National Laboratory is operated by the DOE National Nuclear Security Administration under Contract DE-AC52-06NA25396. Rasch was supported by the DOE Office of Science, Earth System Modeling Program, which is part of the DOE Climate Change Research Program. The Pacific Northwest National Laboratory is operated for DOE by Battelle Memorial Institute under Contract DE-AC06-76RLO 1830. Worley was supported by the Climate Change Research Division of the Office of Biological and Environmental Research and by the Office of Advanced Scientific Computing Research, both in the DOE Office of Science, under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

    The PARTNER trial of neoadjuvant olaparib with chemotherapy in triple-negative breast cancer

    PARTNER is a prospective, phase II–III, randomized controlled clinical trial that recruited patients with triple-negative breast cancer who were germline BRCA1 and BRCA2 wild type. Here we report the results of the trial. Patients (n = 559) were randomized on a 1:1 basis to receive neoadjuvant carboplatin–paclitaxel with or without 150 mg olaparib twice daily, on days 3 to 14 of each of four cycles (gap-schedule olaparib, research arm), followed by three cycles of anthracycline-based chemotherapy before surgery. The primary end point was pathologic complete response (pCR), and secondary end points included event-free survival (EFS) and overall survival (OS). pCR was achieved in 51% of patients in the research arm and 52% in the control arm (P = 0.753). Estimated EFS at 36 months in the research and control arms was 80% and 79% (log-rank P > 0.9), respectively; OS was 90% and 87.2% (log-rank P = 0.8), respectively. In patients with pCR, estimated EFS at 36 months was 90%, and in those with non-pCR it was 70% (log-rank P < 0.001); OS was 96% and 83% (log-rank P < 0.001), respectively. Neoadjuvant olaparib did not improve pCR rates, EFS or OS when added to carboplatin–paclitaxel and anthracycline-based chemotherapy in patients with triple-negative breast cancer who were germline BRCA1 and BRCA2 wild type. ClinicalTrials.gov ID: NCT03150576.

    Assemblathon 2: evaluating de novo methods of genome assembly in three vertebrate species

    Background: The process of generating raw genome sequence data continues to become cheaper, faster, and more accurate. However, assembly of such data into high-quality, finished genome sequences remains challenging. Many genome assembly tools are available, but they differ greatly in terms of their performance (speed, scalability, hardware requirements, acceptance of newer read technologies) and in their final output (composition of assembled sequence). More importantly, it remains largely unclear how to best assess the quality of assembled genome sequences. The Assemblathon competitions are intended to assess current state-of-the-art methods in genome assembly. Results: In Assemblathon 2, we provided a variety of sequence data to be assembled for three vertebrate species (a bird, a fish, and a snake). This resulted in a total of 43 submitted assemblies from 21 participating teams. We evaluated these assemblies using a combination of optical map data, Fosmid sequences, and several statistical methods. From over 100 different metrics, we chose ten key measures by which to assess the overall quality of the assemblies. Conclusions: Many current genome assemblers produced useful assemblies, containing a significant representation of their genes and overall genome structure. However, the high degree of variability between the entries suggests that there is still much room for improvement in the field of genome assembly, and that approaches which work well in assembling the genome of one species may not necessarily work well for another.
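The abstract does not list the ten key measures, but contiguity statistics used in assembly evaluation are typically variants of N50: the contig length at which the largest contigs together cover at least half of the assembly. A hypothetical sketch (the toy lengths are ours):

```python
def n50(contig_lengths):
    """N50: the largest length N such that contigs of length >= N
    together cover at least half of the total assembly."""
    total = sum(contig_lengths)
    covered = 0
    for length in sorted(contig_lengths, reverse=True):
        covered += length
        if 2 * covered >= total:
            return length
    return 0

# Toy assembly, total 400 bp: the two largest contigs (150 + 100) cover >= 200 bp.
print(n50([100, 50, 150, 20, 80]))  # → 100
```

N50 rewards contiguity but says nothing about correctness, which is why evaluations like this one combine it with orthogonal evidence such as optical maps and Fosmid sequences.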

    The wide-field, multiplexed, spectroscopic facility WEAVE : survey design, overview, and simulated implementation

    Funding for the WEAVE facility has been provided by UKRI STFC, the University of Oxford, NOVA, NWO, Instituto de Astrofísica de Canarias (IAC), the Isaac Newton Group partners (STFC, NWO, and Spain, led by the IAC), INAF, CNRS-INSU, the Observatoire de Paris, Région Île-de-France, CONACYT through INAOE, Konkoly Observatory (CSFK), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Lund University, the Leibniz Institute for Astrophysics Potsdam (AIP), the Swedish Research Council, the European Commission, and the University of Pennsylvania.

    WEAVE, the new wide-field, massively multiplexed spectroscopic survey facility for the William Herschel Telescope, will see first light in late 2022. WEAVE comprises a new 2-degree field-of-view prime-focus corrector system, a nearly 1000-multiplex fibre positioner, 20 individually deployable 'mini' integral field units (IFUs), and a single large IFU. These fibre systems feed a dual-beam spectrograph covering the wavelength range 366–959 nm at R ∼ 5000, or two shorter ranges at R ∼ 20,000. After summarising the design and implementation of WEAVE and its data systems, we present the organisation, science drivers and design of a five- to seven-year programme of eight individual surveys to: (i) study our Galaxy's origins by completing Gaia's phase-space information, providing metallicities to its limiting magnitude for ∼3 million stars and detailed abundances for ∼1.5 million brighter field and open-cluster stars; (ii) survey ∼0.4 million Galactic-plane OBA stars, young stellar objects and nearby gas to understand the evolution of young stars and their environments; (iii) perform an extensive spectral survey of white dwarfs; (iv) survey ∼400 neutral-hydrogen-selected galaxies with the IFUs; (v) study properties and kinematics of stellar populations and ionised gas in z < 0.5 cluster galaxies; (vi) survey stellar populations and kinematics in ∼25,000 field galaxies at 0.3 ≲ z ≲ 0.7; (vii) study the cosmic evolution of accretion and star formation using > 1 million spectra of LOFAR-selected radio sources; (viii) trace structures using intergalactic/circumgalactic gas at z > 2. Finally, we describe the WEAVE Operational Rehearsals using the WEAVE Simulator.
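The quoted resolving powers map directly to wavelength resolution through the standard definition R = λ/Δλ. A small illustrative calculation (the 700 nm sample point is our choice, not a WEAVE specification):

```python
def resolution_element_nm(wavelength_nm, resolving_power):
    """Δλ = λ / R, the smallest resolvable wavelength interval."""
    return wavelength_nm / resolving_power

# At 700 nm, the low-resolution mode (R ~ 5000) resolves ~0.14 nm,
# while the high-resolution mode (R ~ 20,000) resolves ~0.035 nm.
print(resolution_element_nm(700.0, 5000))   # → 0.14
print(resolution_element_nm(700.0, 20000))  # → 0.035
```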


    AI is a viable alternative to high throughput screening: a 318-target study

    High throughput screening (HTS) is routinely used to identify bioactive small molecules. This requires physical compounds, which limits coverage of accessible chemical space. Computational approaches combined with vast on-demand chemical libraries can access far greater chemical space, provided that the predictive accuracy is sufficient to identify useful molecules. Through the largest and most diverse virtual HTS campaign reported to date, comprising 318 individual projects, we demonstrate that our AtomNet® convolutional neural network successfully finds novel hits across every major therapeutic area and protein class. We address historical limitations of computational screening by demonstrating success for target proteins without known binders, high-quality X-ray crystal structures, or manual cherry-picking of compounds. We show that the molecules selected by the AtomNet® model are novel drug-like scaffolds rather than minor modifications to known bioactive compounds. Our empirical results suggest that computational methods can substantially replace HTS as the first step of small-molecule drug discovery.
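The workflow described — score a very large on-demand library with a predictive model, then physically test only the top-ranked compounds — can be sketched generically. This is an illustration only: `score_fn` stands in for a trained model such as AtomNet®, whose real interface is not described in the abstract, and the compound IDs and scores are invented.

```python
def virtual_screen(library, score_fn, top_k=3):
    """Score every compound with a predictive model and return the
    top-ranked candidates for physical follow-up testing."""
    ranked = sorted(library, key=score_fn, reverse=True)
    return ranked[:top_k]

# Toy library of compound IDs with a stand-in scoring function.
library = ["cmpd_a", "cmpd_b", "cmpd_c", "cmpd_d"]
toy_scores = {"cmpd_a": 0.21, "cmpd_b": 0.87, "cmpd_c": 0.55, "cmpd_d": 0.08}
print(virtual_screen(library, toy_scores.get, top_k=2))  # → ['cmpd_b', 'cmpd_c']
```

The economics the paper argues for come from this funnel shape: scoring is cheap per compound, so the library can be orders of magnitude larger than any physical screening deck.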