92 research outputs found

    Densification and characterization of an agglomerated corundum abrasive tool

    The compacts are produced as pellets (18 mm in diameter × 5 mm thick) following the process used to manufacture sintered products containing an abundant vitreous phase. The mixture, uniaxially pressed cold, consists of Al2O3 powder and a vitreous binder based on kaolin, plagioclase feldspar and α-quartz. The processing parameters held constant are the compaction force, the firing temperature and the holding time. The characteristics investigated are the porosity, the service life and the proportions of the constituents.

    "MS-Ready" structures for non-targeted high-resolution mass spectrometry screening studies.

    Chemical database searching has become a fixture in many non-targeted identification workflows based on high-resolution mass spectrometry (HRMS). However, the form of a chemical structure observed in HRMS does not always match the form stored in a database (e.g., the neutral form versus a salt; one component of a mixture rather than the mixture form used in a consumer product). Linking the form of a structure observed via HRMS to its related form(s) within a database will enable the return of all relevant variants of a structure, as well as the related metadata, in a single query. A Konstanz Information Miner (KNIME) workflow has been developed to produce structural representations observed using HRMS ("MS-Ready structures") and link them to those stored in a database. These MS-Ready structures, and associated mappings to the full chemical representations, are surfaced via the US EPA's Chemistry Dashboard (https://comptox.epa.gov/dashboard/). This article describes the workflow for the generation and linking of ~700,000 MS-Ready structures (derived from ~760,000 original structures) as well as download, search and export capabilities to serve structure identification using HRMS. The importance of this form of structural representation for HRMS is demonstrated with several examples, including integration with the in silico fragmentation software application MetFrag. The structures, search, download and export functionality are all available through the CompTox Chemistry Dashboard, while the MetFrag implementation can be viewed at https://msbi.ipb-halle.de/MetFragBeta/
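    The core idea of an "MS-Ready" form can be illustrated with a minimal sketch: reduce a multi-component record (e.g., a salt) to its largest component so the database form can be matched to the neutral species observed in HRMS. This is an illustration only, with a simple longest-string heuristic, not the actual KNIME workflow used by the EPA.

    ```python
    # Hypothetical sketch of one "MS-Ready" normalization step: strip counterion
    # components from a dot-separated multi-component SMILES and keep the largest
    # fragment as the parent structure. Real workflows also neutralize charges,
    # remove stereochemistry, and desalt using curated ion lists.

    def ms_ready(smiles: str) -> str:
        """Return the largest component of a dot-separated SMILES string."""
        components = smiles.split(".")
        # Heuristic: the longest component string stands in for "most atoms";
        # small counterions such as [Na+] or Cl are thereby dropped.
        return max(components, key=len)

    # Example: sodium benzoate -> the organic benzoate fragment is kept
    print(ms_ready("O=C([O-])c1ccccc1.[Na+]"))  # O=C([O-])c1ccccc1
    ```

    A production pipeline would map each MS-Ready form back to every original database record it was derived from, which is what enables a single query to return all related variants.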

    A Qualitative Modeling Approach for Whole Genome Prediction Using High-Throughput Toxicogenomics Data and Pathway-Based Validation

    Efficient high-throughput transcriptomics (HTT) tools promise inexpensive, rapid assessment of possible biological consequences of human and environmental exposures to tens of thousands of chemicals in commerce. HTT systems have used relatively small sets of gene expression measurements coupled with mathematical prediction methods to estimate genome-wide gene expression and are often trained and validated using pharmaceutical compounds. It is unclear whether these training sets are suitable for general toxicity testing applications and the more diverse chemical space represented by commercial chemicals and environmental contaminants. In this work, we built predictive computational models that inferred whole genome transcriptional profiles from a smaller sample of surrogate genes. The model was trained and validated using a large-scale toxicogenomics database with gene expression data from exposure to heterogeneous chemicals from a wide range of classes (the Open TG-GATEs database). The method of predictor selection was designed to allow high fidelity gene prediction from any pre-existing gene expression data set, regardless of animal species or data measurement platform. Predictive qualitative models were developed with this TG-GATEs data, which contained gene expression data of human primary hepatocytes with over 941 samples covering 158 compounds. A sequential forward search-based greedy algorithm, combining different fitting approaches and machine learning techniques, was used to find an optimal set of surrogate genes that predicted differential expression changes of the remaining genome. We then used pathway enrichment of up-regulated and down-regulated genes to assess the ability of a limited gene set to determine relevant patterns of tissue response. In addition, we compared prediction performance using the surrogate genes found from our greedy algorithm (referred to as the SV2000) with the landmark genes provided by existing technologies such as L1000 (Genometry) and S1500 (Tox21), finding better predictive performance for the SV2000. The ability of these predictive algorithms to predict pathway level responses is a positive step toward incorporating mode of action (MOA) analysis into the high throughput prioritization and testing of the large number of chemicals in need of safety evaluation.
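    The sequential forward (greedy) search can be sketched abstractly: at each step, add the candidate gene that best improves prediction of the not-yet-covered remainder. The toy below scores candidates by how many additional genes they "cover" (a stand-in for predictive fit); the gene names, coverage sets, and scoring rule are illustrative, not the SV2000 method itself.

    ```python
    # Toy sketch of a sequential forward greedy search for surrogate genes.
    # Assumption: coverage[g] is the set of genes whose expression g predicts
    # well (e.g., via strong correlation). We greedily pick the gene that adds
    # the most newly covered genes, stopping when no candidate helps.

    def greedy_surrogates(coverage: dict, n_surrogates: int) -> list:
        covered = set()
        chosen = []
        for _ in range(n_surrogates):
            # Candidate adding the most genes not yet covered wins this round.
            best = max(coverage, key=lambda g: len(coverage[g] - covered))
            if not coverage[best] - covered:
                break  # no remaining candidate improves coverage
            chosen.append(best)
            covered |= coverage[best]
        return chosen

    cov = {
        "g1": {"g1", "g2", "g3"},
        "g4": {"g4", "g5"},
        "g6": {"g6", "g2"},
    }
    print(greedy_surrogates(cov, 2))  # ['g1', 'g4']
    ```

    In the actual study the per-candidate score comes from fitted prediction models rather than set coverage, but the greedy control flow is the same: evaluate, pick the best marginal improvement, repeat.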

    Integrated Model of Chemical Perturbations of a Biological Pathway Using 18 In Vitro High Throughput Screening Assays for the Estrogen Receptor

    We demonstrate a computational network model that integrates 18 in vitro, high-throughput screening assays measuring estrogen receptor (ER) binding, dimerization, chromatin binding, transcriptional activation and ER-dependent cell proliferation. The network model uses activity patterns across the in vitro assays to predict whether a chemical is an ER agonist or antagonist, or is otherwise influencing the assays in a manner dependent on the physics and chemistry of the technology platform ("assay interference"). The method is applied to a library of 1812 commercial and environmental chemicals, including 45 ER positive and negative reference chemicals. Among the reference chemicals, the network model correctly identified the agonists and antagonists with the exception of very weak compounds whose activity was outside the concentration range tested. The model agonist score also correlated with the expected potency class of the active reference chemicals. Of the 1812 chemicals evaluated, 111 (6.1%) were predicted to be strongly ER active in agonist or antagonist mode. This dataset and model were also used to begin a systematic investigation of assay interference. The most prominent cause of false-positive activity (activity in an assay that is likely not due to interaction of the chemical with ER) is cytotoxicity. The model provides the ability to prioritize a large set of important environmental chemicals with human exposure potential for additional in vivo endocrine testing. Finally, this model is generalizable to any molecular pathway for which there are multiple upstream and downstream assays available.
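    The key distinction the model draws, pathway-consistent activity versus technology-specific interference, can be sketched with a hedged toy classifier: receptor-mediated activity should show up coherently across assays along the ER pathway, while interference (e.g., cytotoxicity) dominates only off-pathway readouts. The assay names, weights, and cutoffs below are hypothetical, not the published network model.

    ```python
    # Minimal sketch of pattern-based integration of multiple in vitro assays.
    # Assumption: each assay reports a normalized activity in [0, 1].

    def classify(assay_hits: dict, pathway_assays: set) -> str:
        on_path = [v for k, v in assay_hits.items() if k in pathway_assays]
        off_path = [v for k, v in assay_hits.items() if k not in pathway_assays]
        path_score = sum(on_path) / max(len(on_path), 1)
        noise_score = sum(off_path) / max(len(off_path), 1)
        # Coherent pathway signal well above background -> receptor-mediated.
        if path_score > 0.5 and path_score > 2 * noise_score:
            return "ER-active"
        # Strong off-pathway signal only -> likely assay interference.
        if noise_score > 0.5:
            return "assay interference"
        return "inactive"

    hits = {"binding": 0.9, "dimerization": 0.8,
            "transactivation": 0.85, "cytotox": 0.1}
    print(classify(hits, {"binding", "dimerization", "transactivation"}))
    # ER-active
    ```

    The published model fits this idea quantitatively over 18 assays and separates agonist and antagonist modes; the sketch only shows why a multi-assay pattern is more robust than any single readout.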

    New insights on the latest Messinian-to-Piacenzian stratigraphic series from the Dahra Massif (Lower Chelif Basin, Algeria): Lago Mare, reflooding and bio-events

    Geological investigations carried out on the Dahra Massif have revealed sedimentary changes and bioevents characterizing the post-gypsum detrital sediments (from Messinian to Piacenzian), which are followed by the Trubi-equivalent Pliocene marls or white marly limestones. Structured into two superimposed steps, the late Messinian deposits yielded two successive ostracod assemblages. They indicate a brackish environment for the first and a fairly open, shallow brackish environment for the second. Based on their ostracod content, assemblage 1 (Cyprideis, Loxoconcha muelleri) corresponds to the Lago Mare biofacies 1 of the Apennine foredeep, which is correlated with the Lago Mare 1 episode dated between 5.64 and 5.60 Ma. Assemblage 2 (Loxocorniculina djafarovi) is referred to the Lago Mare biofacies 2 described in the same region. It is correlated with the Lago Mare 3 episode, dated between 5.46 and 5.33 Ma. Moreover, the stratigraphic succession is marked by a major discontinuity indicated by a hardground, separating step 1 from step 2 and corresponding to the ostracod assemblages 1 and 2, respectively. This discontinuity is considered here to be equivalent to the Messinian Erosional Surface, already evidenced in the region and widely known around the Mediterranean Basin. These late Messinian deposits and their ostracod assemblage 2, notably the detrital sedimentation with Ceratolithus acutus, Globorotalia margaritae and Reticulofenestra cisnerosii, document a marine incursion into the Lower Chelif Basin, corresponding to the latest Messinian marine reflooding of the Mediterranean Basin, which happened before the earliest Zanclean R. cisnerosii occurrence. Finally, the bioevents evidenced in the Dahra Massif reinforce the evidence of the late Messinian Lago Mare 3 episode and support the ante-Zanclean age of the marine reflooding of the Mediterranean. The overlying deposits are marked by coral constructions (cf. Cladocora cf. caespitosa, Dendrophyllia sp.) never described before and covering the entire early Zanclean, testifying to the existence, at that time, of sufficiently warm conditions, which may correspond to marine isotope stage TG5.

    CATMoS: Collaborative Acute Toxicity Modeling Suite.

    BACKGROUND: Humans are exposed to tens of thousands of chemical substances that need to be assessed for their potential toxicity. Acute systemic toxicity testing serves as the basis for regulatory hazard classification, labeling, and risk management. However, it is cost- and time-prohibitive to evaluate all new and existing chemicals using traditional rodent acute toxicity tests. In silico models built using existing data facilitate rapid acute toxicity predictions without using animals. OBJECTIVES: The U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Acute Toxicity Workgroup organized an international collaboration to develop in silico models for predicting acute oral toxicity based on five different end points: the median lethal dose (LD50) value, the four U.S. Environmental Protection Agency hazard categories, the five Globally Harmonized System of Classification and Labelling hazard categories, very toxic chemicals (LD50 ≤ 50 mg/kg), and nontoxic chemicals (LD50 > 2,000 mg/kg). METHODS: An acute oral toxicity data inventory for 11,992 chemicals was compiled, split into training and evaluation sets, and made available to 35 participating international research groups that submitted a total of 139 predictive models. Predictions that fell within the applicability domains of the submitted models were evaluated using external validation sets. These were then combined into consensus models to leverage strengths of individual approaches. RESULTS: The resulting consensus predictions, which leverage the collective strengths of each individual model, form the Collaborative Acute Toxicity Modeling Suite (CATMoS). CATMoS demonstrated high performance in terms of accuracy and robustness when compared with in vivo results. DISCUSSION: CATMoS is being evaluated by regulatory agencies for its utility and applicability as a potential replacement for in vivo rat acute oral toxicity studies. CATMoS predictions for more than 800,000 chemicals have been made available via the National Toxicology Program's Integrated Chemical Environment tools and data sets (ice.ntp.niehs.nih.gov). The models are also implemented in a free, standalone, open-source tool, OPERA, which allows predictions of new and untested chemicals to be made. https://doi.org/10.1289/EHP8495
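    The consensus step, combine many models but only where each is reliable, can be sketched in a few lines. This is a generic illustration of applicability-domain-filtered averaging, not the actual CATMoS weighting scheme.

    ```python
    # Sketch of a consensus prediction restricted to in-domain models.
    # Assumption: each submitted model reports a predicted LD50 (mg/kg) plus a
    # boolean saying whether the query chemical lies inside that model's
    # applicability domain; out-of-domain predictions are discarded.

    def consensus_ld50(predictions):
        """predictions: list of (predicted_ld50_mg_per_kg, in_domain) pairs."""
        in_domain = [p for p, ok in predictions if ok]
        if not in_domain:
            return None  # no model can reliably predict this chemical
        return sum(in_domain) / len(in_domain)

    preds = [(300.0, True), (500.0, True), (50.0, False)]  # 3rd is out of domain
    print(consensus_ld50(preds))  # 400.0
    ```

    Filtering by applicability domain before averaging is what lets a consensus of 139 heterogeneous models outperform any single one without being dragged down by extrapolated predictions.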

    Dynamic semantic-based green bio-inspired approach for optimizing energy and cloud services qualities

    Currently, everybody can access and leverage existing services on the Cloud from a wide variety of mobile devices at any time and from anywhere (at home, at work, in the car, etc.). The massive use of new heterogeneous mobile devices and technologies for discovering and deploying cloud services has led to a trade-off between costs and improved quality of services (e.g., fast response time, low cost, improved security, reduced energy consumption, and lower carbon emissions). This trade-off has led most cloud service providers to call for new intelligent, faster, and energy-saving solutions. This paper proposes a new approach based on Semantic Web technologies and the Ant Colony Optimization algorithm, which intends to reduce the energy consumption of a wide variety of cloud services. Our approach is generic and, therefore, offers customers a flexible infrastructure in which they can easily express their preferences. The effectiveness and energy saving of our proposal have been validated and evaluated through multiple experiments on random and real-world data sets.
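    Ant Colony Optimization for service selection can be sketched compactly: ants repeatedly choose services with probability proportional to pheromone times a heuristic (here, inverse energy cost), and low-energy choices are reinforced. All parameter values and the single-service setting are illustrative; the paper's semantic, multi-QoS algorithm is richer than this.

    ```python
    import random

    # Toy ACO sketch: pick the cloud service with the lowest energy cost.
    # Pheromone accumulates on services that yield low-energy solutions,
    # biasing later ants toward them (positive feedback).

    def aco_select(energy, ants=50, rounds=30, evaporation=0.3, seed=0):
        rng = random.Random(seed)          # seeded for reproducibility
        pheromone = {s: 1.0 for s in energy}
        for _ in range(rounds):
            deposits = {s: 0.0 for s in energy}
            for _ in range(ants):
                # Probabilistic choice ~ pheromone * heuristic (1/energy).
                weights = [pheromone[s] / energy[s] for s in energy]
                choice = rng.choices(list(energy), weights=weights)[0]
                deposits[choice] += 1.0 / energy[choice]  # reward low energy
            for s in energy:
                # Evaporate old pheromone, add this round's deposits.
                pheromone[s] = (1 - evaporation) * pheromone[s] + deposits[s]
        return max(pheromone, key=pheromone.get)

    services = {"svcA": 5.0, "svcB": 2.0, "svcC": 8.0}  # energy per request
    print(aco_select(services))  # converges to svcB, the lowest-energy service
    ```

    Evaporation keeps the colony from locking onto an early suboptimal choice; the balance between evaporation rate and deposit size controls exploration versus exploitation.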

    Selection and Composition of Cloud Smart Services Using Trust Semantic-Based Green Quality Approach

    Nowadays, the massive use of new heterogeneous mobile devices and technologies for discovering, selecting and composing cloud smart services has led to a trade-off between costs and improved quality of services (e.g., fast response time, low cost, improved security, reduced energy consumption, and lower carbon emissions). This trade-off has led most cloud service providers to call for new intelligent, faster, and energy-saving service selection and composition solutions. To make cloud computing more attractive to smart applications, it is essential to provide the best services so that users are satisfied when using them. This paper proposes a new generic green cloud service context-aware ontology to manage a large number of heterogeneous cloud services, grouped semantically according to their service category, functional descriptions, and QoS descriptions. We also propose a dynamic trust semantic-based bio-inspired selection algorithm that fits users' functional needs and QoS preferences, and adapts the composition process to context changes (evolution of users' needs and preferences, energy saving, and the execution environment). Finally, our approach determines an optimal composite service from the relevant cloud smart services returned by the selection phase, in order to respect users' global needs, energy saving, and quality-of-service experience.
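    The composition step, one service per required category, chosen to maximize a global utility over QoS, energy, and trust, can be illustrated with a hedged greedy sketch. The attribute names, weights, and candidate services below are hypothetical; the paper's trust-aware, semantic bio-inspired method is more elaborate.

    ```python
    # Illustrative sketch of composite-service construction: for each required
    # service category, pick the candidate with the highest utility, where the
    # utility combines response time, energy cost (both penalized) and trust
    # (rewarded). Weights encode the user's global preferences.

    def compose(candidates, weights):
        """candidates[category] = list of service dicts with 'name',
        'response_ms', 'energy', 'trust'; returns chosen name per category."""
        def utility(svc):
            return (weights["response"] * -svc["response_ms"]
                    + weights["energy"] * -svc["energy"]
                    + weights["trust"] * svc["trust"])
        return {cat: max(svcs, key=utility)["name"]
                for cat, svcs in candidates.items()}

    cands = {
        "storage": [{"name": "s1", "response_ms": 20, "energy": 5, "trust": 0.9},
                    {"name": "s2", "response_ms": 10, "energy": 9, "trust": 0.7}],
        "compute": [{"name": "c1", "response_ms": 50, "energy": 3, "trust": 0.8}],
    }
    w = {"response": 0.1, "energy": 1.0, "trust": 10.0}
    print(compose(cands, w))  # {'storage': 's1', 'compute': 'c1'}
    ```

    Per-category greedy selection ignores cross-service constraints (e.g., a global energy budget); handling those is precisely where bio-inspired global optimization, as in the paper, earns its keep.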