
    An Ensemble Model of QSAR Tools for Regulatory Risk Assessment

    Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model includes a variable cut-off parameter that allows selection of the desired trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the Gold carcinogenic potency database (480 chemicals). 
Leave-one-out cross validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8 % and 80.4 %, and balanced accuracy: 80.6 % and 80.8 %) and the highest inter-rater agreement [kappa (κ): 0.63 and 0.62] for both datasets. The ROC curves demonstrate the utility of the cut-off feature in the predictive ability of the ensemble model. This feature provides regulators with additional control in grading a chemical based on the severity of the toxic endpoint under study.
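
The Bayesian consensus-with-cut-off idea described above can be sketched as a naive-Bayes combination of per-tool verdicts. The tool sensitivities/specificities, the prior, and the cut-off values below are illustrative assumptions, not the paper's fitted parameters.

```python
# Hedged sketch: Bayesian consensus over several QSAR tools, assuming the
# tools are conditionally independent given the true class (naive Bayes).
# All numbers here are illustrative, not the paper's values.

def bayes_consensus(predictions, sens, spec, prior=0.5):
    """Posterior P(carcinogen | tool predictions)."""
    p_pos = prior          # running weight for the "carcinogen" class
    p_neg = 1.0 - prior    # running weight for the "non-carcinogen" class
    for y, se, sp in zip(predictions, sens, spec):
        if y == 1:                 # tool flags the chemical as positive
            p_pos *= se            # P(flag | carcinogen) = sensitivity
            p_neg *= (1.0 - sp)    # P(flag | non-carcinogen) = 1 - specificity
        else:
            p_pos *= (1.0 - se)
            p_neg *= sp
    return p_pos / (p_pos + p_neg)

def classify(predictions, sens, spec, cutoff=0.5):
    """Lowering `cutoff` trades specificity for sensitivity (fewer false
    negatives) -- the regulator-facing knob described in the abstract."""
    return int(bayes_consensus(predictions, sens, spec) >= cutoff)

# Three hypothetical tools; two of the three flag the chemical.
sens = [0.70, 0.65, 0.80]
spec = [0.75, 0.80, 0.60]
print(classify([1, 1, 0], sens, spec, cutoff=0.5))  # → 1
```

Raising the cut-off (e.g. to 0.9) makes the same pattern of votes come back negative, which is the sensitivity/specificity trade-off the ROC analysis explores.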

    X-ray absorption spectroscopy systematics at the tungsten L-edge

    A series of mononuclear six-coordinate tungsten compounds spanning formal oxidation states from 0 to +VI, largely in a ligand environment of inert chloride and/or phosphine, has been interrogated by tungsten L-edge X-ray absorption spectroscopy. The L-edge spectra of this compound set, comprising [W<sup>0</sup>(PMe<sub>3</sub>)<sub>6</sub>], [W<sup>II</sup>Cl<sub>2</sub>(PMePh<sub>2</sub>)<sub>4</sub>], [W<sup>III</sup>Cl<sub>2</sub>(dppe)<sub>2</sub>][PF<sub>6</sub>] (dppe = 1,2-bis(diphenylphosphino)ethane), [W<sup>IV</sup>Cl<sub>4</sub>(PMePh<sub>2</sub>)<sub>2</sub>], [W<sup>V</sup>(NPh)Cl<sub>3</sub>(PMe<sub>3</sub>)<sub>2</sub>], and [W<sup>VI</sup>Cl<sub>6</sub>], correlate with formal oxidation state and are useful as references for the interpretation of the L-edge spectra of tungsten compounds with redox-active ligands and ambiguous electronic structure descriptions. The utility of these spectra arises from the combined correlation of the estimated branching ratio (EBR) of the L<sub>3,2</sub>-edges and the L<sub>1</sub> rising-edge energy with metal Z<sub>eff</sub>, thereby permitting an assessment of effective metal oxidation state. An application of these reference spectra is illustrated by their use as backdrop for the L-edge X-ray absorption spectra of [W<sup>IV</sup>(mdt)<sub>2</sub>(CO)<sub>2</sub>] and [W<sup>IV</sup>(mdt)<sub>2</sub>(CN)<sub>2</sub>]<sup>2–</sup> (mdt<sup>2–</sup> = 1,2-dimethylethene-1,2-dithiolate), which shows that both compounds are effectively W<sup>IV</sup> species. Use of metal L-edge XAS to assess a compound of uncertain formulation requires: 1) Placement of that data within the context of spectra offered by unambiguous calibrant compounds, preferably with the same coordination number and similar metal ligand distances. 
Such spectra assist in defining upper and/or lower limits for metal Z<sub>eff</sub> in the species of interest; 2) Evaluation of that data in conjunction with information from other physical methods, especially ligand K-edge XAS; 3) Increased care in interpretation if strong π-acceptor ligands, particularly CO, or π-donor ligands are present. The electron-withdrawing/donating nature of these ligand types, combined with relatively short metal-ligand distances, exaggerates the difference between formal oxidation state and metal Z<sub>eff</sub> or, as in the case of [W<sup>IV</sup>(mdt)<sub>2</sub>(CO)<sub>2</sub>], adds further subtlety by modulating the redox level of other ligands in the coordination sphere.
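
The estimated branching ratio mentioned above can be sketched from integrated white-line intensities. This assumes the standard definition EBR = A(L<sub>3</sub>)/(A(L<sub>3</sub>) + A(L<sub>2</sub>)); the intensity values below are illustrative numbers, not measured data from the paper.

```python
# Hedged sketch: branching-ratio arithmetic for L-edge XAS.
# Input areas are hypothetical, not experimental values.

def estimated_branching_ratio(a_l3, a_l2):
    """EBR = A(L3) / (A(L3) + A(L2)) from integrated white-line areas.
    The statistical (one-electron) limit is 2/3 (from the 2j+1
    degeneracies of the 2p3/2 and 2p1/2 core levels); deviations from it
    track d-orbital occupancy and hence effective metal oxidation state."""
    return a_l3 / (a_l3 + a_l2)

# At the statistical limit the L3 area is twice the L2 area.
print(round(estimated_branching_ratio(2.0, 1.0), 3))  # → 0.667
```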

    Composition and combination‐based object trust evaluation for knowledge management in virtual organizations

    Purpose – This paper aims to develop a framework for object trust evaluation and related object trust principles to facilitate knowledge management in a virtual organization. It proposes systematic methods to quantify the trust of an object and defines the concept of object trust management. The study aims to expand the domain of subject trust to object trust evaluation in terms of whether an object is correct and accurate in expressing a topic or issue and whether the object is secure and safe to execute (in the case of an executable program). By providing theoretical and empirical insights about object trust composition and combination, this research facilitates better knowledge identification, creation, evaluation, and distribution. Design/methodology/approach – This paper presents two object trust principles – trust composition and trust combination. These principles provide formal methodologies and guidelines to assess whether an object has the required level of quality and security features (hence it is trustworthy). The paper uses a component‐based approach to evaluate the quality and security of an object. Formal approaches and algorithms have been developed to assess the trustworthiness of an object in different cases. Findings – The paper provides qualitative and quantitative analysis about how object trust can be composed and combined. Novel mechanisms have been developed to help users evaluate the quality and security features of available objects. Originality/value – This effort fulfills an identified need to address the challenging issue of evaluating the trustworthiness of an object (e.g. a software program, a file, or other type of knowledge element) in a loosely‐coupled system such as a virtual organization. It is the first of its kind to formally define object trust management and study object trust evaluation.
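
The two principles named above can be illustrated with toy aggregation rules. The paper's formal algorithms are not reproduced here; the weakest-link composition rule and weighted-average combination rule below are assumptions chosen only to make the distinction concrete.

```python
# Hedged sketch of the two object-trust principles, with made-up
# aggregation rules (the paper's own formalisms are not reproduced).

def compose_trust(component_trusts):
    """Trust composition: derive an object's trust from its components.
    Illustrative conservative rule: an object is only as trustworthy as
    its weakest component."""
    return min(component_trusts)

def combine_trust(ratings, weights):
    """Trust combination: merge independent trust ratings for the same
    object from different sources, here as a weighted average."""
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Hypothetical component scores (quality/security in [0, 1]).
obj = compose_trust([0.9, 0.8, 0.95])          # → 0.8 (weakest link)
print(round(combine_trust([obj, 0.7], [2, 1]), 3))  # two raters, one twice as credible
```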

    Estimation with the Nested Logit Model: Specifications and Software Particularities

    The paper discusses the nested logit model for choices between a set of mutually exclusive alternatives (e.g. brand choice, strategy decisions, modes of transportation, etc.). Due to its ability to account for similarities between pairs of alternatives, the nested logit model has become very popular for the empirical analysis of choice decisions. However, the fact that there are two different specifications of the nested logit model (with different outcomes) has not received adequate attention. The utility maximization nested logit (UMNL) model and the non-normalized nested logit (NNNL) model have different properties, influencing the estimation results in different ways. This paper introduces the distinct specifications of the nested logit model and indicates particularities arising in model estimation. The effects of using various software packages on the estimation results of a nested logit model are shown using simulated data sets for an artificial decision situation.
    Keywords: nested logit model, utility maximization nested logit, non-normalized nested logit, simulation study
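
The specification difference at issue can be sketched via the UMNL choice probabilities, where within-nest utilities are scaled by the inverse of the nest's dissimilarity parameter; the NNNL form omits that scaling, which is why the two forms can yield different estimates. The utilities and dissimilarity parameters below are illustrative.

```python
import math

# Hedged sketch: UMNL choice probabilities for a two-level nesting
# structure. Utilities and lambdas are illustrative, not estimated.

def umnl_probs(nests, lambdas):
    """nests: list of lists of systematic utilities V_j per nest.
    lambdas: per-nest dissimilarity parameters (0 < lambda <= 1).
    Returns unconditional choice probabilities P(j), nest by nest.
    P(j) = P(nest n) * P(j | n), with inclusive value
    IV_n = ln sum_j exp(V_j / lambda_n)."""
    ivs = [math.log(sum(math.exp(v / lam) for v in nest))
           for nest, lam in zip(nests, lambdas)]
    denom = sum(math.exp(lam * iv) for lam, iv in zip(lambdas, ivs))
    probs = []
    for nest, lam, iv in zip(nests, lambdas, ivs):
        p_nest = math.exp(lam * iv) / denom          # upper-level choice
        within = sum(math.exp(v / lam) for v in nest)
        probs.append([p_nest * math.exp(v / lam) / within for v in nest])
    return probs

# Two correlated alternatives in one nest, one outside option.
p = umnl_probs([[1.0, 0.5], [0.8]], lambdas=[0.6, 1.0])
print(sum(sum(nest) for nest in p))  # probabilities sum to 1
```

With every lambda equal to 1 the formula collapses to the ordinary multinomial logit, which is a useful sanity check when comparing software output.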

    Complementing Measurements and Real Options Concepts to Support Inter-iteration Decision-Making in Agile Projects

    Agile software projects are characterized by iterative and incremental development, accommodation of changes and active customer participation. The process is driven by creating business value for the client, assuming that the client (i) is aware of it, and (ii) is capable of estimating the business value associated with the separate features of the system to be implemented. This paper focuses on the complementary use of measurement techniques and concepts of real-option analysis to assist clients in assessing and comparing alternative sets of requirements. Our overall objective is to provide systematic support to clients for the decision-making process on what to implement in each iteration. The design of our approach is justified using empirical data published earlier by other authors.
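
The real-options intuition behind such inter-iteration decisions can be sketched with a one-step deferral option: implementing a feature now commits its cost before the business value is known, while deferring one iteration lets the client skip the feature if its value turns out low. All numbers below (cost, value outcomes, probability) are hypothetical, and discounting is ignored for simplicity; this is not the paper's method, only the underlying option logic.

```python
# Hedged sketch: value of deferring a feature one iteration until its
# business value is revealed. All inputs are hypothetical.

def defer_option_value(cost, v_up, v_down, p_up):
    """Compare two policies for one feature:
      now   -- implement immediately: expected value minus cost.
      defer -- wait one iteration, then implement only if worthwhile
               (payoff max(V - cost, 0) in each scenario)."""
    now = p_up * v_up + (1 - p_up) * v_down - cost
    defer = p_up * max(v_up - cost, 0) + (1 - p_up) * max(v_down - cost, 0)
    return now, defer

now, defer = defer_option_value(cost=100, v_up=180, v_down=40, p_up=0.5)
print(now, defer)  # deferring avoids the losing branch entirely
```

The gap between the two numbers is the option value of waiting, which is the kind of quantity a client can weigh against the measured cost of delay when ranking requirements for the next iteration.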

    Verification, Analytical Validation, and Clinical Validation (V3): The Foundation of Determining Fit-for-Purpose for Biometric Monitoring Technologies (BioMeTs)

    Digital medicine is an interdisciplinary field, drawing together stakeholders with expertise in engineering, manufacturing, clinical science, data science, biostatistics, regulatory science, ethics, patient advocacy, and healthcare policy, to name a few. Although this diversity is undoubtedly valuable, it can lead to confusion regarding terminology and best practices. There are many instances, as we detail in this paper, where a single term is used by different groups to mean different things, as well as cases where multiple terms are used to describe essentially the same concept. Our intent is to clarify core terminology and best practices for the evaluation of Biometric Monitoring Technologies (BioMeTs), without unnecessarily introducing new terms. We focus on the evaluation of BioMeTs as fit-for-purpose for use in clinical trials. However, our intent is for this framework to be instructional to all users of digital measurement tools, regardless of setting or intended use. We propose and describe a three-component framework intended to provide a foundational evaluation framework for BioMeTs. This framework includes (1) verification, (2) analytical validation, and (3) clinical validation. We aim for this common vocabulary to enable more effective communication and collaboration, generate a common and meaningful evidence base for BioMeTs, and improve the accessibility of the digital medicine field.