258 research outputs found

    The impact of master scheduling models on student performance as identified by the Academic Excellence Indicator System (AEIS) database in the high schools of the San Antonio Independent School District, San Antonio, Texas

    This study determined the impact of master scheduling models on student performance as reported by the AEIS database in the high schools of the SAISD. General student performance and the Texas Assessment of Knowledge and Skills (TAKS) were the primary measures for comparison. The SAISD transitioned from an A-B block schedule in 2002 to a traditional seven-period model in 2003. Conclusions were drawn as to the degree of influence that traditional and block schedules have on student performance. The population of this study was the eight high schools of the SAISD; all students enrolled on these campuses were included in the data analysis: 14,418 students during the 2002-2003 school year and 13,689 in 2003-2004. Descriptive statistics and analysis of variance (ANOVA) were used for population comparisons and data review. Based on the findings of this study, the recommendations for practice indicate the following:
    1. Attendance ratings did not return statistical significance on a traditional schedule.
    2. Advanced Course participation and AP/IB testing results returned statistical significance on a traditional schedule.
    3. SAT and ACT results did not return statistical significance on a traditional schedule.
    4. TAKS Campus Performance did not return statistical significance on a traditional schedule.
    5. TAKS Reading/ELA, Mathematics, Science and Social Studies scores returned statistical significance on a traditional schedule.
    6. African American, Hispanic and Special Education Performance returned statistical significance in TAKS Science and TAKS Social Studies on a traditional schedule.
    7. White Performance returned statistical significance in TAKS Science on a traditional schedule.
    8. Economically Disadvantaged Performance returned statistical significance in each area of the TAKS assessment on a traditional schedule.
    9. Limited English Proficient Performance returned statistical significance in TAKS Math on a traditional schedule.
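The population comparisons above rest on descriptive statistics and one-way ANOVA. A minimal sketch of that kind of comparison, using made-up campus-level TAKS passing rates rather than the study's data:

```python
# One-way ANOVA comparing hypothetical campus-level passing rates
# from a block-schedule year against a traditional-schedule year.
from scipy import stats

# Made-up TAKS passing rates (%) for eight campuses in each year.
block_2003 = [62, 58, 71, 65, 60, 68, 55, 63]        # A-B block year
traditional_2004 = [66, 61, 75, 70, 64, 72, 59, 68]  # seven-period year

f_stat, p_value = stats.f_oneway(block_2003, traditional_2004)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

With only two groups, one-way ANOVA is equivalent to an independent-samples t-test; the study repeats comparisons of this form across many outcome measures.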

    Treatment guided by fractional exhaled nitric oxide in addition to standard care in 6- to 15-year-olds with asthma : the RAACENO RCT

    Funding: This project was funded by the Efficacy and Mechanism Evaluation (EME) programme, an MRC and National Institute for Health and Care Research (NIHR) partnership. It will be published in full in Efficacy and Mechanism Evaluation, Vol. 9, No. 4. See the NIHR Journals Library website for further project information. Peer reviewed. Publisher PDF.

    Caribbean Corals in Crisis: Record Thermal Stress, Bleaching, and Mortality in 2005

    BACKGROUND: The rising temperature of the world's oceans has become a major threat to coral reefs globally as the severity and frequency of mass coral bleaching and mortality events increase. In 2005, high ocean temperatures in the tropical Atlantic and Caribbean resulted in the most severe bleaching event ever recorded in the basin.
    METHODOLOGY/PRINCIPAL FINDINGS: Satellite-based tools provided warnings for coral reef managers and scientists, guiding both the timing and location of researchers' field observations as anomalously warm conditions developed and spread across the greater Caribbean region from June to October 2005. Field surveys of bleaching and mortality exceeded prior efforts in detail and extent, and provided a new standard for documenting the effects of bleaching and for testing nowcast and forecast products. Collaborators from 22 countries undertook the most comprehensive documentation of basin-scale bleaching to date and found that over 80% of corals bleached and over 40% died at many sites. The most severe bleaching coincided with waters nearest a western Atlantic warm pool that was centered off the northern end of the Lesser Antilles.
    CONCLUSIONS/SIGNIFICANCE: Thermal stress during the 2005 event exceeded any observed from the Caribbean in the prior 20 years, and regionally averaged temperatures were the warmest in over 150 years. Comparison of satellite data against field surveys demonstrated a significant predictive relationship between accumulated heat stress (measured using NOAA Coral Reef Watch's Degree Heating Weeks) and bleaching intensity. This severe, widespread bleaching and mortality will undoubtedly have long-term consequences for reef ecosystems and suggests a troubled future for tropical marine ecosystems under a warming climate.
    This work was partially supported by salaries from the NOAA Coral Reef Conservation Program to the NOAA Coral Reef Conservation Program authors. NOAA provided funding to Caribbean ReefCheck investigators to undertake surveys of bleaching and mortality. Otherwise, no funding from outside the authors' institutions was necessary for the undertaking of this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
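The Degree Heating Weeks (DHW) metric accumulates thermal stress over a rolling 12-week window. A simplified sketch of the idea follows; NOAA Coral Reef Watch's operational product uses twice-weekly satellite SSTs, and the maximum monthly mean (MMM) and SST values here are made up:

```python
# Simplified Degree Heating Weeks: accumulate SST anomalies ("HotSpots")
# of at least 1 deg C above the climatological maximum monthly mean (MMM)
# over a trailing 12-week window, expressed in deg C-weeks.

def degree_heating_weeks(weekly_sst, mmm):
    """weekly_sst: the 12 most recent weekly mean SSTs (deg C);
    mmm: climatological maximum monthly mean for the site."""
    hotspots = [sst - mmm for sst in weekly_sst[-12:]]
    # Only anomalies of at least 1 deg C count toward accumulated stress.
    return sum(h for h in hotspots if h >= 1.0)

# Hypothetical Caribbean site with an MMM of 28.5 deg C and a warm late summer.
sst = [28.0, 28.4, 28.9, 29.3, 29.6, 30.0, 30.2, 30.1, 29.9, 30.0, 29.7, 29.4]
print(degree_heating_weeks(sst, 28.5))
```

Accumulations above about 8 deg C-weeks are commonly associated with severe bleaching, which is why the record 2005 values were so alarming.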

    [Comment] Redefine statistical significance

    The lack of reproducibility of scientific studies has caused growing concern over the credibility of claims of new discoveries based on “statistically significant” findings. There has been much progress toward documenting and addressing several causes of this lack of reproducibility (e.g., multiple testing, P-hacking, publication bias, and under-powered studies). However, we believe that a leading cause of non-reproducibility has not yet been adequately addressed: statistical standards of evidence for claiming discoveries in many fields of science are simply too low. Associating “statistically significant” findings with P < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems. For fields where the threshold for defining statistical significance is P < 0.05, we propose a change to P < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields. Results that would currently be called “significant” but do not meet the new threshold should instead be called “suggestive.” While statisticians have known the relative weakness of using P ≈ 0.05 as a threshold for discovery and the proposal to lower it to 0.005 is not new (1, 2), a critical mass of researchers now endorse this change.
    We restrict our recommendation to claims of discovery of new effects. We do not address the appropriate threshold for confirmatory or contradictory replications of existing claims. We also do not advocate changes to discovery thresholds in fields that have already adopted more stringent standards (e.g., genomics and high-energy physics research; see Potential Objections below). We also restrict our recommendation to studies that conduct null hypothesis significance tests. We have diverse views about how best to improve reproducibility, and many of us believe that other ways of summarizing the data, such as Bayes factors or other posterior summaries based on clearly articulated model assumptions, are preferable to P-values. However, changing the P-value threshold is simple and might quickly achieve broad acceptance.
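The effect of tightening the threshold can be illustrated by simulating many two-sample tests in which the null hypothesis is true; this sketch (my own illustration, not from the comment) counts how often each threshold is crossed by chance alone:

```python
# Simulate t-tests where the null is true and compare how often
# p < 0.05 vs. p < 0.005 yields a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_per_group = 10_000, 30

false_05 = false_005 = 0
for _ in range(n_tests):
    a = rng.normal(size=n_per_group)  # both groups drawn from the
    b = rng.normal(size=n_per_group)  # same distribution: no real effect
    _, p = stats.ttest_ind(a, b)
    false_05 += p < 0.05
    false_005 += p < 0.005

print(false_05 / n_tests, false_005 / n_tests)  # roughly 0.05 and 0.005
```

By construction, the false-positive rate tracks the threshold; the comment's argument is that at realistic prior odds of a true effect, P just under 0.05 often corresponds to weak evidence.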

    Ebola virus epidemiology, transmission, and evolution during seven months in Sierra Leone

    The 2013-2015 Ebola virus disease (EVD) epidemic is caused by the Makona variant of Ebola virus (EBOV). Early in the epidemic, genome sequencing provided insights into virus evolution and transmission and offered important information for outbreak response. Here, we analyze sequences from 232 patients sampled over 7 months in Sierra Leone, along with 86 previously released genomes from earlier in the epidemic. We confirm sustained human-to-human transmission within Sierra Leone and find no evidence for import or export of EBOV across national borders after its initial introduction. Using high-depth replicate sequencing, we observe both host-to-host transmission and recurrent emergence of intrahost genetic variants. We trace the increasing impact of purifying selection in suppressing the accumulation of nonsynonymous mutations over time. Finally, we note changes in the mucin-like domain of EBOV glycoprotein that merit further investigation. These findings clarify the movement of EBOV within the region and describe viral evolution during prolonged human-to-human transmission
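Purifying selection is traced by how nonsynonymous changes, which alter the encoded protein, accumulate relative to synonymous ones. A minimal sketch of the classification step (the partial codon table contains only the entries this example needs; the paper's analysis is far more involved):

```python
# Classify a point mutation as synonymous (same amino acid) or
# nonsynonymous (amino acid changes) by translating both codons.
# Partial standard genetic code table: only the entries used below.
CODON_TABLE = {
    "GGA": "Gly",  # glycine
    "GGG": "Gly",  # glycine (third-position change is silent)
    "GCA": "Ala",  # alanine
}

def classify_substitution(ref_codon: str, alt_codon: str) -> str:
    """Label a codon substitution by its effect on the encoded amino acid."""
    same = CODON_TABLE[ref_codon] == CODON_TABLE[alt_codon]
    return "synonymous" if same else "nonsynonymous"

print(classify_substitution("GGA", "GGG"))  # synonymous
print(classify_substitution("GGA", "GCA"))  # nonsynonymous
```

Under purifying selection, the ratio of nonsynonymous to synonymous substitutions falls over time, which is the trend the authors report.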

    Analysis of shared heritability in common disorders of the brain

    Science, this issue p. eaap8757
    Structured Abstract
    INTRODUCTION: Brain disorders may exhibit shared symptoms and substantial epidemiological comorbidity, inciting debate about their etiologic overlap. However, detailed study of phenotypes with different ages of onset, severity, and presentation poses a considerable challenge. Recently developed heritability methods allow us to accurately measure correlation of genome-wide common variant risk between two phenotypes from pools of different individuals and assess how connected they, or at least their genetic risks, are on the genomic level. We used genome-wide association data for 265,218 patients and 784,643 control participants, as well as 17 phenotypes from a total of 1,191,588 individuals, to quantify the degree of overlap for genetic risk factors of 25 common brain disorders.
    RATIONALE: Over the past century, the classification of brain disorders has evolved to reflect the medical and scientific communities' assessments of the presumed root causes of clinical phenomena such as behavioral change, loss of motor function, or alterations of consciousness. Directly observable phenomena (such as the presence of emboli, protein tangles, or unusual electrical activity patterns) generally define and separate neurological disorders from psychiatric disorders. Understanding the genetic underpinnings and categorical distinctions for brain disorders and related phenotypes may inform the search for their biological mechanisms.
    RESULTS: Common variant risk for psychiatric disorders was shown to correlate significantly, especially among attention deficit hyperactivity disorder (ADHD), bipolar disorder, major depressive disorder (MDD), and schizophrenia. By contrast, neurological disorders appear more distinct from one another and from the psychiatric disorders, except for migraine, which was significantly correlated to ADHD, MDD, and Tourette syndrome. We demonstrate that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine. We also identify significant genetic sharing between disorders and early life cognitive measures (e.g., years of education and college attainment) in the general population, demonstrating positive correlation with several psychiatric disorders (e.g., anorexia nervosa and bipolar disorder) and negative correlation with several neurological phenotypes (e.g., Alzheimer's disease and ischemic stroke), even though the latter are considered to result from specific processes that occur later in life. Extensive simulations were also performed to inform how statistical power, diagnostic misclassification, and phenotypic heterogeneity influence genetic correlations.
    CONCLUSION: The high degree of genetic correlation among many of the psychiatric disorders adds further evidence that their current clinical boundaries do not reflect distinct underlying pathogenic processes, at least on the genetic level. This suggests a deeply interconnected nature for psychiatric disorders, in contrast to neurological disorders, and underscores the need to refine psychiatric diagnostics. Genetically informed analyses may provide important "scaffolding" to support such restructuring of psychiatric nosology, which likely requires incorporating many levels of information. By contrast, we find limited evidence for widespread common genetic risk sharing among neurological disorders or across neurological and psychiatric disorders. We show that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures. Further study is needed to evaluate whether overlapping genetic contributions to psychiatric pathology may influence treatment choices. Ultimately, such developments may pave the way toward reduced heterogeneity and improved diagnosis and treatment of psychiatric disorders.
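The correlations above are estimated with dedicated heritability methods such as LD score regression. As a toy illustration only (ignoring linkage disequilibrium, sample overlap, and estimation noise in the summary statistics), the core idea that shared genetic influences induce correlated per-variant effect sizes can be sketched as:

```python
# Toy illustration of genetic correlation: per-variant effects on two
# traits that share a common genetic component will themselves correlate.
import numpy as np

rng = np.random.default_rng(1)
n_variants = 5_000

shared = rng.normal(size=n_variants)                       # effects common to both traits
beta_a = shared + rng.normal(scale=0.5, size=n_variants)   # trait A effect sizes
beta_b = shared + rng.normal(scale=0.5, size=n_variants)   # trait B effect sizes

r_g = np.corrcoef(beta_a, beta_b)[0, 1]
print(round(r_g, 2))  # close to the true value of 1 / 1.25 = 0.8
```

Real estimators must additionally correct for correlated test statistics between nearby variants, which is exactly what LD score regression is designed to do.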

    Prognostic model to predict postoperative acute kidney injury in patients undergoing major gastrointestinal surgery based on a national prospective observational cohort study.

    Background: Acute illness, existing co-morbidities and the surgical stress response can all contribute to postoperative acute kidney injury (AKI) in patients undergoing major gastrointestinal surgery. The aim of this study was to prospectively develop a pragmatic prognostic model to stratify patients according to risk of developing AKI after major gastrointestinal surgery.
    Methods: This prospective multicentre cohort study included consecutive adults undergoing elective or emergency gastrointestinal resection, liver resection or stoma reversal in 2-week blocks over a continuous 3-month period. The primary outcome was the rate of AKI within 7 days of surgery. Bootstrap stability was used to select clinically plausible risk factors into the model. Internal model validation was carried out by bootstrap validation.
    Results: A total of 4544 patients were included across 173 centres in the UK and Ireland. The overall rate of AKI was 14·2 per cent (646 of 4544) and the 30-day mortality rate was 1·8 per cent (84 of 4544). Stage 1 AKI was significantly associated with 30-day mortality (unadjusted odds ratio 7·61, 95 per cent c.i. 4·49 to 12·90; P < 0·001), with increasing odds of death with each AKI stage. Six variables were selected for inclusion in the prognostic model: age, sex, ASA grade, preoperative estimated glomerular filtration rate, planned open surgery and preoperative use of either an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. Internal validation demonstrated good model discrimination (c-statistic 0·65).
    Discussion: Following major gastrointestinal surgery, AKI occurred in one in seven patients. This preoperative prognostic model identified patients at high risk of postoperative AKI. Validation in an independent data set is required to ensure generalizability.
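Internal bootstrap validation of a risk model's discrimination (the c-statistic, equivalent to the area under the ROC curve for a binary outcome) can be sketched as follows. The data, predictors and coefficients here are simulated placeholders, not the study's cohort or model:

```python
# Bootstrap estimate of a logistic model's c-statistic (ROC AUC)
# on simulated preoperative data with a binary AKI outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2_000
age = rng.uniform(18, 90, n)
egfr = rng.uniform(15, 120, n)
# Simulated risk rises with age and falls with eGFR (made-up coefficients).
logit = -4 + 0.03 * age - 0.02 * egfr
aki = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, egfr])
model = LogisticRegression().fit(X, aki)

aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)  # resample patients with replacement
    aucs.append(roc_auc_score(aki[idx], model.predict_proba(X[idx])[:, 1]))

print(f"c-statistic: {np.mean(aucs):.2f}")
```

Resampling the same data the model was fitted on, as here, estimates apparent discrimination; the study's point that an independent data set is still required for generalizability stands.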

    The role of networks to overcome large-scale challenges in tomography : the non-clinical tomography users research network

    Our ability to visualize and quantify the internal structures of objects via computed tomography (CT) has fundamentally transformed science. As tomographic tools have become more broadly accessible, researchers across diverse disciplines have embraced the ability to investigate the 3D structure-function relationships of an enormous array of items. Whether studying organismal biology, animal models for human health, iterative manufacturing techniques, experimental medical devices, engineering structures, geological and planetary samples, prehistoric artifacts, or fossilized organisms, computed tomography has led to extensive methodological and basic sciences advances and is now a core element in science, technology, engineering, and mathematics (STEM) research and outreach toolkits. Tomorrow's scientific progress is built upon today's innovations. In our data-rich world, this requires access not only to publications but also to supporting data. Reliance on proprietary technologies, combined with the varied objectives of diverse research groups, has resulted in a fragmented tomography-imaging landscape, one that is functional at the individual lab level yet lacks the standardization needed to support efficient and equitable exchange and reuse of data. Developing standards and pipelines for the creation of new and future data, which can also be applied to existing datasets, is a challenge that becomes increasingly difficult as the amount and diversity of legacy data grow. Global networks of CT users have proved an effective approach to addressing this kind of multifaceted challenge across a range of fields. Here we describe ongoing efforts to address barriers to recently proposed FAIR (Findability, Accessibility, Interoperability, Reuse) and open science principles by assembling interested parties from research and education communities, industry, publishers, and data repositories to approach these issues jointly in a focused, efficient, and practical way.
By outlining the benefits of networks, generally, and drawing on examples from efforts by the Non-Clinical Tomography Users Research Network (NoCTURN), specifically, we illustrate how standardization of data and metadata for reuse can foster interdisciplinary collaborations and create new opportunities for future-looking, large-scale data initiatives
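One concrete form such standardization can take is a required-metadata checklist applied to every shared CT dataset. The field names below are hypothetical placeholders for illustration, not NoCTURN's actual recommendations:

```python
# Check a tomography dataset record against a minimal required-metadata
# checklist (hypothetical fields; any real standard would differ).
REQUIRED_FIELDS = {"specimen_id", "scanner_model", "voxel_size_um",
                   "energy_kv", "license", "repository_doi"}

def missing_metadata(record: dict) -> set:
    """Return the required fields that are absent or empty in a record."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

record = {"specimen_id": "UF-herp-12345", "scanner_model": "GE v|tome|x",
          "voxel_size_um": 12.5, "energy_kv": 80}
print(sorted(missing_metadata(record)))  # ['license', 'repository_doi']
```

Automating checks like this at deposit time is one way repositories can make FAIR compliance routine rather than aspirational.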

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses instrument effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
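The recommended calibration reduces to fitting measured OD against known particle counts from a serial dilution, then inverting that fit for new readings. A minimal sketch with made-up microsphere numbers and OD values chosen to lie in the instrument's linear range:

```python
# Fit a linear calibration of optical density (OD) to known silica
# microsphere counts from a serial dilution, then convert a new OD
# reading into an estimated particle (cell-equivalent) count.
import numpy as np

# Two-fold serial dilution: known particles per well and measured OD600
# (made-up values, assumed to be within the linear range).
particles = np.array([3.0e8, 1.5e8, 7.5e7, 3.75e7, 1.875e7])
od600 = np.array([0.80, 0.40, 0.20, 0.10, 0.05])

slope, intercept = np.polyfit(od600, particles, deg=1)

def od_to_count(od):
    """Estimate particle count from an OD reading within the linear range."""
    return slope * od + intercept

print(f"{od_to_count(0.30):.2e}")
```

In practice one would also check residuals across the dilution series to find where the instrument's response stops being linear, which the protocol uses as a quality-control step.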