
    Bibliometric analysis of authorship trends and collaboration dynamics over the past three decades of BONE's publication history

    The existence of a gender gap in academia has been a hotly debated topic over the past several decades. It has been argued that the gender gap makes it more difficult for women to obtain higher positions. Manuscripts serve as an important measure of one's accomplishments within a particular field of academia. Here, we analyzed authorship and other trends in manuscripts published over the past three decades in BONE, one of the premier journals in the field of bone and mineral metabolism. For this study, one complete year of manuscripts was evaluated for each decade (1985, 1995, 2005 and 2015), and a bibliometric analysis of authorship trends was then performed on those manuscripts. Analyzed fields included: average number of authors per manuscript, numerical position of the corresponding author, number of institutions collaborating on each manuscript, number of countries involved with each manuscript, number of references, and number of citations per manuscript. Each of these fields increased significantly over the 30-year time frame (p < 10^-6). The gender of both the first and corresponding authors was identified and analyzed over time and by region. There was a significant increase in the percentage of female first authors, from 23.4% in 1985 to 47.8% in 2015 (p = 0.001). The percentage of female corresponding authors also increased, from 21.2% in 1985 to 35.4% in 2015, although this change was not statistically significant (p = 0.07). With such a substantial emphasis placed on publishing in academic medicine, it is crucial to understand how publishing characteristics change over time and by geographical region. These findings highlight authorship trends in BONE over time as well as by region. Importantly, they also highlight where challenges still exist.
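    As a point of illustration only, the comparison of female first-author proportions between two years is the kind of question a contingency-table test answers. The sketch below is not the authors' analysis, and the counts are hypothetical placeholders (the abstract reports percentages, not raw counts).

        # Minimal sketch, not the study's code: chi-square test comparing the
        # proportion of female first authors between two publication years.
        # The counts below are hypothetical, not data from the BONE analysis.
        from scipy.stats import chi2_contingency

        counts_1985 = [22, 72]    # hypothetical [female, male] first authors in 1985
        counts_2015 = [110, 120]  # hypothetical [female, male] first authors in 2015

        chi2, p_value, dof, expected = chi2_contingency([counts_1985, counts_2015])
        print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")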

    A comparison of machine learning algorithms for chemical toxicity classification using a simulated multi-scale data model

    BACKGROUND: Bioactivity profiling using high-throughput in vitro assays can reduce the cost and time required for toxicological screening of environmental chemicals and can also reduce the need for animal testing. Several public efforts are aimed at discovering patterns or classifiers in high-dimensional bioactivity space that predict tissue, organ or whole animal toxicological endpoints. Supervised machine learning is a powerful approach to discover combinatorial relationships in complex in vitro/in vivo datasets. We present a novel model to simulate complex chemical-toxicology data sets and use this model to evaluate the relative performance of different machine learning (ML) methods. RESULTS: The classification performance of Artificial Neural Networks (ANN), K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Naïve Bayes (NB), Recursive Partitioning and Regression Trees (RPART), and Support Vector Machines (SVM) in the presence and absence of filter-based feature selection was analyzed using K-way cross-validation testing and independent validation on simulated in vitro assay data sets with varying levels of model complexity, number of irrelevant features and measurement noise. While the prediction accuracy of all ML methods decreased as non-causal (irrelevant) features were added, some ML methods performed better than others. In the limit of using a large number of features, ANN and SVM were always in the top performing set of methods, while RPART and KNN (k = 5) were always in the poorest performing set. The addition of measurement noise and irrelevant features decreased the classification accuracy of all ML methods, with LDA suffering the greatest performance degradation. LDA performance is especially sensitive to the use of feature selection. Filter-based feature selection generally improved performance, most strikingly for LDA. CONCLUSION: We have developed a novel simulation model to evaluate machine learning methods for the analysis of data sets in which in vitro bioassay data are used to predict in vivo chemical toxicology. From our analysis, we can recommend several ML methods, most notably SVM and ANN, as good candidates for use in real-world applications in this area.
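    The comparison described above, classifiers evaluated by cross-validation with and without filter-based feature selection on simulated data containing irrelevant features, can be sketched in a few lines. The snippet below is an illustrative scikit-learn reconstruction, not the authors' simulation model; the data generator, classifier settings and k=10 filter are assumptions.

        # Minimal sketch (not the paper's model): compare several of the named
        # classifiers on synthetic data with many irrelevant features, with and
        # without a simple univariate filter, scored by 5-fold cross-validation.
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic "bioassay" data: 10 informative features among 100, with label noise.
        X, y = make_classification(n_samples=500, n_features=100, n_informative=10,
                                   n_redundant=0, flip_y=0.05, random_state=0)

        models = {
            "SVM": SVC(),
            "ANN": MLPClassifier(max_iter=2000, random_state=0),
            "KNN (k=5)": KNeighborsClassifier(n_neighbors=5),
            "LDA": LinearDiscriminantAnalysis(),
            "NB": GaussianNB(),
            "Tree": DecisionTreeClassifier(random_state=0),  # stand-in for RPART
        }

        for name, model in models.items():
            plain = make_pipeline(StandardScaler(), model)
            filtered = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), model)
            acc_plain = cross_val_score(plain, X, y, cv=5).mean()
            acc_filtered = cross_val_score(filtered, X, y, cv=5).mean()
            print(f"{name:10s} no filter: {acc_plain:.3f}  filter: {acc_filtered:.3f}")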

    An international effort towards developing standards for best practices in analysis, interpretation and reporting of clinical genome sequencing results in the CLARITY Challenge

    There is tremendous potential for genome sequencing to improve clinical diagnosis and care once it becomes routinely accessible, but this will require formalizing research methods into clinical best practices in the areas of sequence data generation, analysis, interpretation and reporting. The CLARITY Challenge was designed to spur convergence in methods for diagnosing genetic disease starting from clinical case history and genome sequencing data. DNA samples were obtained from three families with heritable genetic disorders, and genomic sequence data were donated by sequencing platform vendors. The challenge was to analyze and interpret these data with the goals of identifying disease-causing variants and reporting the findings in a clinically useful format. Participating contestant groups were solicited broadly, and an independent panel of judges evaluated their performance. RESULTS: A total of 30 international groups were engaged. The entries reveal a general convergence of practices on most elements of the analysis and interpretation process. However, even given this commonality of approach, only two groups identified the consensus candidate variants in all disease cases, demonstrating a need for consistent fine-tuning of the generally accepted methods. There was greater diversity in the final clinical report content and in the patient consenting process, demonstrating that these areas require additional exploration and standardization. CONCLUSIONS: The CLARITY Challenge provides a comprehensive assessment of current practices for using genome sequencing to diagnose and report genetic diseases. There is remarkable convergence in bioinformatic techniques, but medical interpretation and reporting are areas that require further development by many groups.
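    For orientation only, identifying "consensus candidate variants" typically involves coarse filtering of an annotated variant table before expert review. The sketch below is hypothetical and is not any contestant's pipeline; the column names, gene names and thresholds are illustrative assumptions.

        # Minimal sketch (hypothetical): narrow an annotated variant table to rare,
        # protein-altering variants that co-segregate with the family phenotype.
        import pandas as pd

        variants = pd.DataFrame({
            "gene":        ["GENE_A", "GENE_B", "GENE_C", "GENE_D"],
            "consequence": ["missense", "stop_gained", "synonymous", "missense"],
            "pop_af":      [0.002, 0.00001, 0.15, 0.0001],  # population allele frequency
            "segregates":  [False, True, True, True],        # consistent with family data
        })

        candidates = variants[
            (variants["pop_af"] < 0.001)                 # rare in the population
            & (variants["consequence"] != "synonymous")  # likely protein-altering
            & (variants["segregates"])                   # co-segregates with phenotype
        ]
        print(candidates)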

    Whole-genome sequencing reveals host factors underlying critical COVID-19

    Critical COVID-19 is caused by immune-mediated inflammatory lung injury. Host genetic variation influences the development of illness requiring critical care [1] or hospitalization [2–4] after infection with SARS-CoV-2. The GenOMICC (Genetics of Mortality in Critical Care) study enables the comparison of genomes from individuals who are critically ill with those of population controls to find underlying disease mechanisms. Here we use whole-genome sequencing in 7,491 critically ill individuals compared with 48,400 controls to discover and replicate 23 independent variants that significantly predispose to critical COVID-19. We identify 16 new independent associations, including variants within genes that are involved in interferon signalling (IL10RB and PLSCR1), leucocyte differentiation (BCL11A) and blood-type antigen secretor status (FUT2). Using transcriptome-wide association and colocalization to infer the effect of gene expression on disease severity, we find evidence implicating multiple genes in critical disease, including reduced expression of a membrane flippase (ATP11A) and increased expression of a mucin (MUC1). Mendelian randomization provides evidence in support of causal roles for myeloid cell adhesion molecules (SELE, ICAM5 and CD209) and the coagulation factor F8, all of which are potentially druggable targets. Our results are broadly consistent with a multi-component model of COVID-19 pathophysiology, in which at least two distinct mechanisms can predispose to life-threatening disease: failure to control viral replication, or an enhanced tendency towards pulmonary inflammation and intravascular coagulation. We show that comparison between cases of critical illness and population controls is highly efficient for the detection of therapeutically relevant mechanisms of disease.
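    As background to the case-control design described above, each variant in a genome-wide scan is typically tested by regressing case status on genotype dosage. The sketch below is a generic single-variant logistic-regression test on simulated data; it is not the GenOMICC analysis pipeline, and the effect size, allele frequency and sample size are illustrative assumptions.

        # Minimal sketch (not the GenOMICC pipeline): one-variant case-control
        # association test, fit as logistic regression of status on dosage.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 5000
        dosage = rng.binomial(2, 0.3, size=n)                 # 0/1/2 alternate-allele copies
        logit = -2.0 + 0.4 * dosage                           # assumed modest risk effect
        status = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # 1 = critically ill, 0 = control

        X = sm.add_constant(dosage.astype(float))
        fit = sm.Logit(status, X).fit(disp=0)
        print(f"beta = {fit.params[1]:.3f}, p = {fit.pvalues[1]:.2e}")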

    Interpretative and predictive modelling of Joint European Torus collisionality scans

    Transport modelling of Joint European Torus (JET) dimensionless collisionality scaling experiments in various operational scenarios is presented. Interpretative simulations at a fixed radial position are combined with predictive JETTO simulations of temperatures and densities, using the TGLF transport model. The model includes electromagnetic effects and collisions as well as E × B shear in Miller geometry. The focus is on particle transport and the role of the neutral beam injection (NBI) particle source for the density peaking. The experimental 3-point collisionality scans include L-mode and H-mode (D, H, and higher-beta D) plasmas in a total of 12 discharges. Experimental results presented in Tala et al (2017, 44th EPS Conf.) indicate that for the H-mode scans the NBI particle source plays an important role for the density peaking, whereas for the L-mode scan the influence of the particle source is small. In general, both the interpretative and predictive transport simulations support the experimental conclusions on the role of the NBI particle source for the 12 JET discharges.
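    For readers unfamiliar with collisionality scans, one commonly quoted proxy in density-peaking studies is the effective collisionality, approximately nu_eff ≈ 0.1 Zeff n_e R / T_e^2 with n_e in 1e19 m^-3, T_e in keV and R in m. The sketch below uses that approximation as a stated assumption; the coefficient, Zeff and the scan values are illustrative placeholders, not JET data or the paper's method.

        # Minimal sketch (assumed approximation, not from the paper): effective
        # collisionality for a hypothetical 3-point density/temperature scan.
        def nu_eff(n_e_1e19: float, t_e_kev: float, major_radius_m: float, z_eff: float = 1.5) -> float:
            """Approximate dimensionless effective collisionality."""
            return 0.1 * z_eff * n_e_1e19 * major_radius_m / t_e_kev**2

        # Hypothetical 3-point scan at JET-like major radius R ~ 3 m.
        for n_e, t_e in [(2.0, 3.0), (4.0, 2.1), (8.0, 1.5)]:
            print(f"n_e = {n_e:.1f}e19 m^-3, T_e = {t_e:.1f} keV -> nu_eff ~ {nu_eff(n_e, t_e, 3.0):.2f}")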