7 research outputs found

    Semi-Empirical Topological Method for Prediction of the Relative Retention Time of Polychlorinated Biphenyl Congeners on 18 Different HR GC Columns

    High-resolution gas chromatographic relative retention time (HRGC-RRT) models were developed to predict the relative retention times of the 209 individual polychlorinated biphenyl (PCB) congeners. To estimate and predict the HRGC-RRT values of all PCBs on 18 different stationary phases, a multiple linear regression equation of the form RRT = a0 + a1 (no. o-Cl) + a2 (no. m-Cl) + a3 (no. p-Cl) + a4 (VM or SM) was used. The molecular descriptors in the models were the number of ortho-, meta-, and para-chlorine substituents (no. o-Cl, m-Cl, and p-Cl, respectively), the semi-empirically calculated molecular volume (VM), and the molecular surface area (SM). By means of the final variable selection method, four optimal semi-empirical descriptors were selected to develop a QSRR model for the prediction of RRT in PCBs, with a correlation coefficient between 0.9272 and 0.9928 and a leave-one-out cross-validation correlation coefficient between 0.9230 and 0.9924 on each stationary phase. The root mean square errors over the 18 different stationary phases fall within the range 0.0108–0.0335. The accuracy of all the developed models was investigated using leave-one-out (LOO) cross-validation, Y-randomization, and external validation through odd–even numbering and division of the entire data set into training and test sets.
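    The regression form described above can be sketched as an ordinary least-squares fit. This is a minimal illustration only: the congener descriptor values and RRT targets below are invented placeholders, not data from the paper, and the model uses VM as the fourth descriptor.

    ```python
    import numpy as np

    # Hypothetical illustration of the abstract's regression:
    #   RRT = a0 + a1*(no. o-Cl) + a2*(no. m-Cl) + a3*(no. p-Cl) + a4*VM
    # All numbers below are invented for demonstration only.
    descriptors = np.array([
        # no. o-Cl, no. m-Cl, no. p-Cl, VM (molecular volume, arbitrary units)
        [1, 0, 0, 180.0],
        [2, 0, 0, 190.0],
        [1, 1, 0, 192.0],
        [0, 2, 1, 205.0],
        [2, 1, 1, 215.0],
        [1, 2, 2, 228.0],
    ])
    rrt = np.array([0.31, 0.38, 0.41, 0.52, 0.60, 0.71])  # invented RRT values

    # Design matrix with an intercept column for a0.
    X = np.hstack([np.ones((len(descriptors), 1)), descriptors])
    coeffs, _, _, _ = np.linalg.lstsq(X, rrt, rcond=None)
    a0, a1, a2, a3, a4 = coeffs

    # Predict the RRT of a new (hypothetical) congener from its descriptors.
    new_congener = np.array([1.0, 2, 1, 0, 200.0])  # intercept, o-Cl, m-Cl, p-Cl, VM
    predicted_rrt = float(new_congener @ coeffs)
    print(predicted_rrt)
    ```

    In the paper, the same fit is repeated per stationary phase (18 models) and assessed with leave-one-out cross-validation, which here would amount to refitting with each congener held out in turn.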

    Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

    With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.

    Chemotherapy and the pediatric brain

    No full text available