73 research outputs found

    Marine Boundary Layer Clouds Associated with Coastally Trapped Disturbances: Observations and Model Simulations

    This work has been accepted to Journal of Atmospheric Sciences. The AMS does not guarantee that the copy provided here is an accurate copy of the final published work.
    Modeling marine low clouds and fog in coastal environments remains an outstanding challenge due to the inherently complex ocean–land–atmosphere system. This is especially important in the context of global circulation models due to the profound radiative impact of these clouds. This study utilizes aircraft and satellite measurements, in addition to numerical simulations using the Weather Research and Forecasting (WRF) Model, to examine three well-observed coastally trapped disturbance (CTD) events from June 2006, July 2011, and July 2015. Cloud water-soluble ionic and elemental composition analyses conducted for two of the CTD cases indicate that anthropogenic aerosol sources may impact CTD cloud decks due to synoptic-scale patterns associated with CTD initiation. In general, the dynamics and thermodynamics of the CTD systems are well represented and are relatively insensitive to the choice of physics parameterizations; however, a set of WRF simulations suggests that the treatment of model physics strongly influences CTD cloud field evolution. Specifically, cloud liquid water path (LWP) is highly sensitive to the choice of the planetary boundary layer (PBL) scheme; in many instances, the PBL scheme affects cloud extent and LWP values as much as or more than the microphysics scheme. Results suggest that differences in the treatment of entrainment and vertical mixing in the Yonsei University (nonlocal) and Mellor–Yamada–Janjić (local) PBL schemes may play a significant role. The impact of using different driving models—namely, the North American Mesoscale Forecast System (NAM) 12-km analysis and the NCEP North American Regional Reanalysis (NARR) 32-km products—is also investigated.
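
    As a point of reference for the liquid water path (LWP) diagnostic discussed above, the sketch below integrates a cloud liquid water mixing ratio profile over pressure, LWP ≈ (1/g) ∫ qc dp. It is a minimal, hypothetical illustration: the function name, profile values, and level spacing are assumptions and are not taken from the WRF output analyzed in the study.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def liquid_water_path(qc, p):
    """Approximate liquid water path (g m^-2) from a column of cloud liquid
    water mixing ratio qc (kg kg^-1) on pressure levels p (Pa), using the
    hydrostatic column integral LWP = (1/g) * integral(qc dp)."""
    qc = np.asarray(qc, dtype=float)
    p = np.asarray(p, dtype=float)
    # Trapezoidal integration over pressure; abs() handles levels ordered
    # either surface-to-top or top-to-surface.
    lwp_kg_m2 = abs(np.sum(0.5 * (qc[1:] + qc[:-1]) * np.diff(p))) / G
    return 1000.0 * lwp_kg_m2  # kg m^-2 -> g m^-2

# Hypothetical stratocumulus-like profile with cloud water near the top of a
# shallow marine boundary layer (values are illustrative only).
p_levels = np.array([1000e2, 975e2, 950e2, 925e2, 900e2, 875e2, 850e2])  # Pa
qc_profile = np.array([0.0, 0.0, 0.05, 0.15, 0.30, 0.20, 0.0]) * 1e-3    # kg/kg

print(f"LWP ~ {liquid_water_path(qc_profile, p_levels):.0f} g m^-2")
```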

    Polygenic Prediction of Weight and Obesity Trajectories from Birth to Adulthood

    Severe obesity is a rapidly growing global health threat. Although often attributed to unhealthy lifestyle choices or environmental factors, obesity is known to be heritable and highly polygenic; the majority of inherited susceptibility is related to the cumulative effect of many common DNA variants. Here we derive and validate a new polygenic predictor comprised of 2.1 million common variants to quantify this susceptibility and test this predictor in more than 300,000 individuals ranging from middle age to birth. Among middle-aged adults, we observe a 13-kg gradient in weight and a 25-fold gradient in risk of severe obesity across polygenic score deciles. In a longitudinal birth cohort, we note minimal differences in birthweight across score deciles, but a significant gradient emerged in early childhood and reached 12 kg by 18 years of age. This new approach to quantify inherited susceptibility to obesity affords new opportunities for clinical prevention and mechanistic assessment. © 2019 Author(s)
    Funding: National Human Genome Research Institute (1K08HG0101); Wellcome Trust (202802/Z/16/Z); University of Bristol NIHR Biomedical Research Centre (S-BRC-1215-20011); National Human Genome Research Institute (HG008895); National Heart, Lung, and Blood Institute (NHLBI) (HHSN268201300025C, HHSN268201300026C, HHSN268201300027C, HHSN268201300028C, HHSN268201300029C, HHSN268200900041C); National Institute on Aging (AG0005); NHLBI (AG0005); National Human Genome Research Institute (U01-HG004729, U01-HG04424, U01-HG004446); Wellcome (102215/2/13/2).
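
    For readers unfamiliar with the mechanics behind a polygenic predictor, the sketch below shows the core computation in hypothetical form: a score is a weighted sum of effect-allele dosages, which is then binned into deciles for comparisons such as the weight and obesity-risk gradients reported above. The toy dimensions, simulated dosages, and weights are placeholders, not the 2.1-million-variant score derived in the study (which additionally involves steps such as LD-aware weight estimation, allele matching, and validation in independent cohorts).

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_variants = 1_000, 5_000   # toy sizes, not the study's 2.1M variants

# Simulated effect-allele dosages (0, 1, or 2 copies) and per-variant weights.
dosages = rng.binomial(2, 0.3, size=(n_people, n_variants)).astype(float)
weights = rng.normal(0.0, 0.01, size=n_variants)

# A polygenic score is a weighted sum of dosages across variants.
scores = dosages @ weights

# Bin individuals into score deciles, analogous to comparing weight and
# obesity risk across polygenic score deciles.
decile_edges = np.quantile(scores, np.linspace(0.1, 0.9, 9))
deciles = np.digitize(scores, decile_edges)   # integer decile index 0..9
print(np.bincount(deciles))                   # roughly 100 people per decile
```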

    Performance of ACMG-AMP Variant-Interpretation Guidelines among Nine Laboratories in the Clinical Sequencing Exploratory Research Consortium

    Evaluating the pathogenicity of a variant is challenging given the plethora of types of genetic evidence that laboratories consider. Deciding how to weigh each type of evidence is difficult, and standards have been needed. In 2015, the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) published guidelines for the assessment of variants in genes associated with Mendelian diseases. Nine molecular diagnostic laboratories involved in the Clinical Sequencing Exploratory Research (CSER) consortium piloted these guidelines on 99 variants spanning all categories (pathogenic, likely pathogenic, uncertain significance, likely benign, and benign). Nine variants were distributed to all laboratories, and the remaining 90 were evaluated by three laboratories. The laboratories classified each variant by using both the laboratory's own method and the ACMG-AMP criteria. The agreement between the two methods used within laboratories was high (K-alpha = 0.91) with 79% concordance. However, there was only 34% concordance for either classification system across laboratories. After consensus discussions and detailed review of the ACMG-AMP criteria, concordance increased to 71%. Causes of initial discordance in ACMG-AMP classifications were identified, and recommendations on clarification and increased specification of the ACMG-AMP criteria were made. In summary, although an initial pilot of the ACMG-AMP guidelines did not lead to increased concordance in variant interpretation, comparing variant interpretations to identify differences and having a common framework to facilitate resolution of those differences were beneficial for improving agreement, allowing iterative movement toward increased reporting consistency for variants in genes associated with monogenic disease.
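
    As an illustration of the kind of cross-laboratory concordance summarized above, the sketch below tallies simple pairwise agreement on five-tier classifications. The variant and laboratory names are hypothetical, and this is not the study's exact metric; it is only meant to show how agreement across laboratories can be computed.

```python
from itertools import combinations

# Hypothetical five-tier calls: variant -> {laboratory: classification}.
calls = {
    "var_001": {"lab_A": "Pathogenic", "lab_B": "Pathogenic", "lab_C": "Likely pathogenic"},
    "var_002": {"lab_A": "Uncertain significance", "lab_B": "Uncertain significance",
                "lab_C": "Uncertain significance"},
    "var_003": {"lab_A": "Likely benign", "lab_B": "Benign", "lab_C": "Likely benign"},
}

def pairwise_concordance(calls):
    """Fraction of laboratory pairs, pooled over variants, that assign the
    identical five-tier classification."""
    agree = total = 0
    for lab_calls in calls.values():
        for a, b in combinations(lab_calls.values(), 2):
            total += 1
            agree += (a == b)
    return agree / total if total else float("nan")

print(f"Pairwise concordance: {pairwise_concordance(calls):.0%}")
```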

    A survey of informatics approaches to whole-exome and whole-genome clinical reporting in the electronic health record

    Genome-scale clinical sequencing is being adopted more broadly in medical practice. The National Institutes of Health developed the Clinical Sequencing Exploratory Research (CSER) program to guide implementation and dissemination of best practices for the integration of sequencing into clinical care. This study describes and compares the state of the art of incorporating whole-exome and whole-genome sequencing results into the electronic health record, including approaches to decision support across the six current CSER sites.

    Clinical Sequencing Exploratory Research Consortium: Accelerating Evidence-Based Practice of Genomic Medicine

    Despite rapid technical progress and demonstrable effectiveness for some types of diagnosis and therapy, much remains to be learned about clinical genome and exome sequencing (CGES) and its role within the practice of medicine. The Clinical Sequencing Exploratory Research (CSER) consortium includes 18 extramural research projects, one National Human Genome Research Institute (NHGRI) intramural project, and a coordinating center funded by the NHGRI and National Cancer Institute. The consortium is exploring analytic and clinical validity and utility, as well as the ethical, legal, and social implications of sequencing via multidisciplinary approaches; it has thus far recruited 5,577 participants across a spectrum of symptomatic and healthy children and adults by utilizing both germline and cancer sequencing. The CSER consortium is analyzing data and creating publicly available procedures and tools related to participant preferences and consent, variant classification, disclosure and management of primary and secondary findings, health outcomes, and integration with electronic health records. Future research directions will refine measures of clinical utility of CGES in both germline and somatic testing, evaluate the use of CGES for screening in healthy individuals, explore the penetrance of pathogenic variants through extensive phenotyping, reduce discordances in public databases of genes and variants, examine social and ethnic disparities in the provision of genomics services, explore regulatory issues, and estimate the value and downstream costs of sequencing. The CSER consortium has established a shared community of research sites by using diverse approaches to pursue the evidence-based development of best practices in genomic medicine.

    An international effort towards developing standards for best practices in analysis, interpretation and reporting of clinical genome sequencing results in the CLARITY Challenge

    There is tremendous potential for genome sequencing to improve clinical diagnosis and care once it becomes routinely accessible, but this will require formalizing research methods into clinical best practices in the areas of sequence data generation, analysis, interpretation and reporting. The CLARITY Challenge was designed to spur convergence in methods for diagnosing genetic disease starting from clinical case history and genome sequencing data. DNA samples were obtained from three families with heritable genetic disorders and genomic sequence data were donated by sequencing platform vendors. The challenge was to analyze and interpret these data with the goals of identifying disease-causing variants and reporting the findings in a clinically useful format. Participating contestant groups were solicited broadly, and an independent panel of judges evaluated their performance. RESULTS: A total of 30 international groups were engaged. The entries reveal a general convergence of practices on most elements of the analysis and interpretation process. However, even given this commonality of approach, only two groups identified the consensus candidate variants in all disease cases, demonstrating a need for consistent fine-tuning of the generally accepted methods. There was greater diversity in the final clinical report content and in the patient consenting process, demonstrating that these areas require additional exploration and standardization. CONCLUSIONS: The CLARITY Challenge provides a comprehensive assessment of current practices for using genome sequencing to diagnose and report genetic diseases. There is remarkable convergence in bioinformatic techniques, but medical interpretation and reporting are areas that require further development by many groups.

    The general error correction model in practice

    Enns et al. respond to recent work by Grant and Lebo and by Lebo and Grant that raises a number of concerns with political scientists’ use of the general error correction model (GECM). While agreeing with the particular rules one should apply when using unit root data in the GECM, Enns et al. still advocate procedures that will lead researchers astray. Most especially, they fail to recognize the difficulty in interpreting the GECM’s “error correction coefficient.” Without being certain of the univariate properties of one’s data, it is extremely difficult (or perhaps impossible) to know whether or not cointegration exists and error correction is occurring. We demonstrate the crucial differences for the GECM between having evidence of a unit root (from Dickey–Fuller tests) and actually having a unit root. Looking at simulations and two applied examples, we show how overblown findings of error correction await the uncareful researcher.
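
    For readers unfamiliar with the model, the GECM is commonly written as ΔY_t = α0 + α1·Y_{t-1} + β0·ΔX_t + β1·X_{t-1} + ε_t, where α1 is the “error correction coefficient” discussed above. The sketch below is a hypothetical illustration in the spirit of the simulations described, not a reproduction of them: it fits the GECM to two independent random walks (unit roots with no cointegration) and prints the naive t-statistic on α1, whose distribution is nonstandard in this setting.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T = 200

# Two independent random walks: unit roots, no cointegration by construction.
x = np.cumsum(rng.normal(size=T))
y = np.cumsum(rng.normal(size=T))

# GECM: dY_t = a0 + a1*Y_{t-1} + b0*dX_t + b1*X_{t-1} + e_t
dy, dx = np.diff(y), np.diff(x)
X = sm.add_constant(np.column_stack([y[:-1], dx, x[:-1]]))
fit = sm.OLS(dy, X).fit()

a1, t_a1 = fit.params[1], fit.tvalues[1]
print(f"error correction coefficient a1 = {a1:.3f}, naive t-statistic = {t_a1:.2f}")
# Across repeated draws of this simulation, comparing t_a1 to +/-1.96 rejects
# far more often than the nominal rate: with unit-root data, inference on a1
# requires nonstandard (cointegration-type) critical values, not the usual
# t distribution.
```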