61 research outputs found

    PaCTS 1.0: A Crowdsourced Reporting Standard for Paleoclimate Data

    The progress of science is tied to the standardization of measurements, instruments, and data. This is especially true in the Big Data age, where analyzing large data volumes critically hinges on the data being standardized. Accordingly, the lack of community-sanctioned data standards in paleoclimatology has largely precluded the benefits of Big Data advances in the field. Building upon recent efforts to standardize the format and terminology of paleoclimate data, this article describes the Paleoclimate Community reporTing Standard (PaCTS), a crowdsourced reporting standard for such data. PaCTS captures which information should be included when reporting paleoclimate data, with the goal of maximizing the reuse value of paleoclimate data sets, particularly for synthesis work and comparison to climate model simulations. Initiated by the LinkedEarth project, the process to elicit a reporting standard involved an international workshop in 2016, various forms of digital community engagement over the next few years, and grassroots working groups. Participants in this process identified important properties across paleoclimate archives, in addition to the reporting of uncertainties and chronologies; they also identified archive-specific properties and distinguished reporting standards for new versus legacy data sets. Responses from at least 135 participants show overwhelming support for a drastic increase in the amount of metadata accompanying paleoclimate data sets. Since such goals are at odds with present practices, we discuss a transparent path toward implementing or revising these recommendations in the near future, using both bottom-up and top-down approaches.
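    To make the kind of enriched reporting concrete, below is a hypothetical sketch of a metadata record of the sort such a standard calls for; the field names are illustrative, loosely following LinkedEarth/LiPD conventions, and are not the actual PaCTS vocabulary.

```python
# Hypothetical example of an enriched paleoclimate metadata record of the
# kind PaCTS advocates. Field names loosely follow LinkedEarth/LiPD
# conventions and are illustrative, not the actual PaCTS vocabulary.
record = {
    "archiveType": "MarineSediment",
    "geo": {"latitude": -2.5, "longitude": 145.0, "elevation": -1800},
    "paleoData": {
        "variableName": "sea_surface_temperature",
        "units": "degC",
        "proxy": "Mg/Ca",
        # uncertainties reported explicitly rather than omitted
        "uncertainty": {"type": "1-sigma", "value": 0.5},
    },
    "chronData": {
        "method": "radiocarbon",
        "calibrationCurve": "Marine13",   # illustrative value
        "ageUncertainty": "ensemble",     # chronologies carry uncertainties
    },
}
print(sorted(record))
```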

    International longitudinal registry of patients with atrial fibrillation and treated with rivaroxaban: RIVaroxaban Evaluation in Real life setting (RIVER)

    Background: Real-world data on non-vitamin K oral anticoagulants (NOACs) are essential in determining whether evidence from randomised controlled clinical trials translates into meaningful clinical benefits for patients in everyday practice. RIVER (RIVaroxaban Evaluation in Real life setting) is an ongoing international, prospective registry of patients with newly diagnosed non-valvular atrial fibrillation (NVAF) and at least one investigator-determined risk factor for stroke who received rivaroxaban as initial treatment for the prevention of thromboembolic stroke. The aim of this paper is to describe the design of the RIVER registry and the baseline characteristics of patients with newly diagnosed NVAF who received rivaroxaban as initial treatment. Methods and results: Between January 2014 and June 2017, RIVER investigators recruited 5072 patients at 309 centres in 17 countries. The aim was to enrol consecutive patients at sites where rivaroxaban was already routinely prescribed for stroke prevention. Each patient is being followed up prospectively for a minimum of 2 years. The registry will capture data on the rate and nature of all thromboembolic events (stroke/systemic embolism), bleeding complications, all-cause mortality and other major cardiovascular events as they occur. Data quality is assured through a combination of remote electronic monitoring and onsite monitoring (including source data verification in 10% of cases). Patients were mostly enrolled by cardiologists (n = 3776, 74.6%), followed by internal medicine specialists (n = 718, 14.2%) and primary care/general practice physicians (n = 417, 8.2%). The mean (SD) age of the population was 69.5 (11.0) years; 44.3% were women. The mean (SD) CHADS2 score was 1.9 (1.2) and the mean (SD) CHA2DS2-VASc score was 3.2 (1.6). Almost all patients (98.5%) were prescribed a once-daily dose of rivaroxaban, most commonly 20 mg (76.5%) or 15 mg (20.0%), as their initial treatment; 17.9% of patients received concomitant antiplatelet therapy. Most patients enrolled in RIVER met the recommended threshold for anticoagulation therapy (86.6% according to the 2012 ESC guidelines and 79.8% according to the 2016 ESC guidelines). Conclusions: The RIVER prospective registry will expand our knowledge of how rivaroxaban is prescribed in everyday practice and whether evidence from clinical trials translates to the broader cross-section of patients in the real world.
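    For context, the CHA2DS2-VASc score reported above is a simple additive algorithm over patient risk factors. The sketch below illustrates the standard scoring rules; the function and parameter names are illustrative assumptions and are not part of the registry's tooling.

```python
# Minimal sketch of the standard CHA2DS2-VASc stroke-risk score.
# Function and field names are illustrative, not from the RIVER registry.

def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, prior_stroke_tia: bool,
                 vascular_disease: bool) -> int:
    """Return the CHA2DS2-VASc score (0-9)."""
    score = 0
    score += 1 if chf else 0                               # C: congestive heart failure
    score += 1 if hypertension else 0                      # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)   # A2/A: age bands
    score += 1 if diabetes else 0                          # D: diabetes mellitus
    score += 2 if prior_stroke_tia else 0                  # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0                  # V: vascular disease
    score += 1 if female else 0                            # Sc: sex category (female)
    return score

# Example: a 70-year-old woman with hypertension scores 1 (H) + 1 (A) + 1 (Sc) = 3,
# close to the cohort's mean CHA2DS2-VASc of 3.2.
print(cha2ds2_vasc(age=70, female=True, chf=False, hypertension=True,
                   diabetes=False, prior_stroke_tia=False,
                   vascular_disease=False))
```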

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of powerful new techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
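    For context, Okapi Best Matching 25 (BM25), one of the three baselines, ranks documents by a term-frequency/inverse-document-frequency weighting with document-length normalization. The sketch below shows the standard formulation; the parameter values k1 = 1.5 and b = 0.75 are common defaults and are assumptions, not necessarily the study's settings.

```python
# Minimal sketch of Okapi BM25 scoring over a small in-memory corpus.
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a query."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n      # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)            # document frequency
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)     # smoothed IDF
        f = tf[term]
        # term frequency saturation with document-length normalization
        score += idf * (f * (k1 + 1)) / (
            f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

corpus = [["protein", "folding", "dynamics"],
          ["gene", "expression", "profiling"],
          ["protein", "structure", "prediction"]]
print(bm25_score(["protein", "folding"], corpus[0], corpus))
```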

    Fixed and rotary wing transonic aerodynamic improvement via surface-based trapped vortex generators

    A novel passive flow control concept for transonic flows over airfoils is proposed and examined via computational fluid dynamics. The control concept is based on a local modification of the airfoil's geometry and aims to reduce drag or to increase lift without deteriorating the airfoil's original lift or drag characteristics, respectively. Such a flow control technique could be beneficial for improving the range or endurance of transonic aircraft, or for mitigating the negative effects of transonic flow on the advancing blades of helicopter rotors. To explore the feasibility of the concept, two-dimensional computational fluid dynamics simulations were conducted of a NACA 0012 airfoil exposed to a freestream of Mach 0.7 at Re = 9 × 10⁶, as well as of a NASA SC(3)-0712(B) supercritical airfoil exposed to a freestream of Mach 0.78 at Re = 30 × 10⁶. The baseline airfoil simulations were carefully verified and validated, showing excellent agreement with wind tunnel data. Then, 32 local geometry modifications, all functioning as trapped-vortex generators, were proposed and systematically examined on both the upper and lower surfaces of the airfoils. The upper-surface modifications demonstrated a remarkable ability to reduce the strength of the shockwave on the upper surface of the airfoil with only a small penalty in lift. The lower-surface modifications, on the other hand, could significantly increase the lift-to-drag ratio over the full range of investigated angles of attack, when compared to the baseline airfoil.
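    For reference, the baseline NACA 0012 geometry follows the standard NACA four-digit thickness distribution; a minimal sketch generating it is below. The 32 surface modifications studied in the paper are local perturbations of such a shape and are not reproduced here.

```python
# Sketch of the baseline NACA 0012 geometry via the standard NACA
# four-digit thickness distribution (symmetric airfoil, unit chord).
import numpy as np

def naca4_thickness(x: np.ndarray, t: float = 0.12) -> np.ndarray:
    """Half-thickness of a symmetric NACA four-digit airfoil (chord = 1)."""
    return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                      + 0.2843 * x**3 - 0.1015 * x**4)

x = np.linspace(0.0, 1.0, 101)   # chordwise stations
yt = naca4_thickness(x)          # upper surface; lower surface is -yt
print(f"max thickness = {2 * yt.max():.4f} chord at x = {x[yt.argmax()]:.2f}")
```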

    Assessing millennial-scale variability during the Holocene: A perspective from the western tropical Pacific

    We investigate the relationship between tropical Pacific and Southern Ocean variability during the Holocene using the stable oxygen isotope and magnesium/calcium records of co-occurring planktonic and benthic foraminifera from a marine sediment core collected in the western equatorial Pacific. The planktonic record exhibits millennial-scale sea surface temperature (SST) oscillations of ~0.5°C over the Holocene, while the benthic δ18Oc records document ~0.10‰ millennial-scale changes in Upper Circumpolar Deep Water (UCDW), a water mass which outcrops in the Southern Ocean. Solar forcing as an explanation for the millennial-scale SST variability requires (1) a large climate sensitivity and (2) a long, 400-year delayed response, suggesting that if solar forcing is the cause of the variability, it would need to be considerably amplified by processes within the climate system, at least at the core location. We also explore the possibility that the SST variability arose from volcanic forcing, using a simple red noise model. Our best estimates of volcanic forcing fall short of reproducing the amplitude of the observed SST variations, although they produce low-frequency power similar to that observed in the MD81 record. Although we cannot entirely discount the volcanic and solar forcing hypotheses, we are left to consider that the most plausible source of Holocene millennial-scale variability lies within the climate system itself. In particular, UCDW variability coincided with deep North Atlantic changes, indicating a role for the deep ocean in Holocene millennial-scale variability.
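    For context, a red noise model of the kind mentioned above is typically a first-order autoregressive (AR(1)) process. The sketch below simulates one; the parameter values are illustrative assumptions, not the study's fitted values.

```python
# Minimal sketch of an AR(1) red-noise surrogate of the kind used as a
# null model for forced SST variability. Parameter values are illustrative.
import numpy as np

def ar1_red_noise(n: int, phi: float = 0.8, sigma: float = 0.2,
                  seed: int = 0) -> np.ndarray:
    """Simulate x_t = phi * x_{t-1} + eps_t with Gaussian white noise eps_t."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

# e.g. a Holocene-length (10,000-year) surrogate at decadal resolution
surrogate_sst = ar1_red_noise(n=1000)
print(surrogate_sst.std())
```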