482 research outputs found

    1st year EFAST annual report

    Get PDF
    The present report provides information about the activities conducted during the first year of the EFAST project. The first chapter describes the inquiries conducted at the beginning of the project and briefly summarises their main results. The second chapter covers the first EFAST workshop, at which some of the leading scientists in the field of earthquake engineering met to discuss the needs and technologies related to earthquake engineering. The third chapter reviews the state of the art and future directions in seismic testing and simulation. The final chapter describes the preliminary design of the web portal for the future testing facility. (JRC.DG.G.5 - European Laboratory for Structural Assessment)

    EURL ECVAM Workshop on New Generation of Physiologically-Based Kinetic Models in Risk Assessment

    Get PDF
    The European Union Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM) Strategy Document on Toxicokinetics (TK) outlines strategies to enable prediction of systemic toxicity by applying new approach methodologies (NAM). The central feature of the strategy is the use of physiologically-based kinetic (PBK) modelling to integrate data generated by in vitro and in silico methods for absorption, distribution, metabolism, and excretion (ADME) in humans, in order to predict whole-body TK behaviour for environmental chemicals, drugs, nano-materials, and mixtures. To facilitate acceptance and use in the regulatory domain of this new generation of PBK models, which do not rely on animal/human in vivo data, experts were invited by EURL ECVAM to (i) identify current challenges in the application of PBK modelling to support regulatory decision making; (ii) discuss challenges in constructing models with no in vivo kinetic data and opportunities for estimating parameter values using in vitro and in silico methods; (iii) present the challenges in assessing model credibility relying on non-animal data and address the strengths, uncertainties, and limitations of such an approach; and (iv) establish a good kinetic modelling practice workflow to serve as the foundation for guidance on the generation and use of in vitro and in silico data to construct PBK models designed to support regulatory decision making. To gauge the current state of PBK applications, the experts were asked to complete a short survey ahead of the workshop. During the workshop, through presentations and discussions, the experts elaborated on the importance of being transparent about model construction, assumptions, and applications to support assessment of model credibility. The experts offered several recommendations to address commonly perceived limitations in the parameterization and evaluation of PBK models developed using non-animal data and in their use in risk assessment: (i) develop a decision tree for model construction; (ii) set up a task force for independent model peer review; (iii) establish a scoring system for model evaluation; (iv) attract additional funding to develop accessible modelling software; (v) improve and facilitate communication between scientists (model developers, data providers) and risk assessors/regulators; and (vi) organise specific training for end users. The experts also acknowledged the critical need for a guidance document on building, characterising, reporting, and documenting PBK models using non-animal data. This document would also need to include guidance on interpreting model analyses for various risk assessment purposes, such as incorporating PBK models in integrated strategy approaches and integrating them with in vitro toxicity testing and adverse outcome pathways. The proposed guidance document will promote the development of PBK models using in vitro and in silico data and facilitate the regulatory acceptance of PBK models for assessing the safety of chemicals.
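
    To make the modelling idea concrete, the following is a minimal one-compartment kinetic sketch, a drastic simplification of a full PBK model; the clearance, volume of distribution, absorption rate, and dose are hypothetical placeholders standing in for the in vitro and in silico derived ADME inputs the workshop discusses.

```python
# Minimal one-compartment kinetic sketch (not a full PBK model).
# All parameter values are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

CL = 5.0      # clearance (L/h), e.g. scaled up from in vitro hepatocyte data
Vd = 40.0     # volume of distribution (L), e.g. predicted in silico
ka = 1.0      # first-order oral absorption rate constant (1/h)
dose = 100.0  # oral dose (mg)

def rhs(t, y):
    gut, central = y
    dgut = -ka * gut                            # absorption from the gut depot
    dcentral = ka * gut - (CL / Vd) * central   # first-order elimination (amounts)
    return [dgut, dcentral]

sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 97)
conc = sol.sol(t)[1] / Vd                       # plasma concentration (mg/L)
print(f"Cmax ~ {conc.max():.2f} mg/L at t ~ {t[conc.argmax()]:.1f} h")
```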

    From Knowledgebases to Toxicity Prediction and Promiscuity Assessment

    Get PDF
    Polypharmacology marked a paradigm shift in drug discovery from the traditional 'one drug, one target' approach to a multi-target perspective, indicating that highly effective drugs favorably modulate multiple biological targets. This ability of drugs to show activity towards many targets is referred to as promiscuity, an essential phenomenon that may also lead to undesired side effects. While activity at therapeutic targets provides the desired biological response, toxicity often results from non-specific modulation of off-targets. Safety, efficacy, and pharmacokinetics have been the primary concerns behind the failure of the majority of candidate drugs. Computer-based (in silico) models that can predict pharmacological and toxicological profiles complement the ongoing efforts to lower these high attrition rates. High-confidence bioactivity data are a prerequisite for the development of robust in silico models. Additionally, data quality has been a key concern when integrating data from publicly accessible bioactivity databases. The majority of bioactivity data originates from high-throughput screening campaigns and the medicinal chemistry literature. However, large numbers of screening hits are considered false positives for a number of reasons. In stark contrast, many compounds do not demonstrate biological activity despite being tested in hundreds of assays. This thesis employs cheminformatics approaches to contribute to these diverse, yet highly related, aspects that are crucial in rationalizing and expediting drug discovery. Knowledgebase resources of approved and withdrawn drugs were established and enriched with information integrated from multiple databases. These resources are useful not only in small-molecule discovery and optimization, but also in the elucidation of mechanisms of action and off-target effects. In silico models were developed to predict the effects of small molecules on nuclear receptor and stress response pathways and on the human Ether-à-go-go-Related Gene (hERG) encoded potassium channel. Chemical-similarity and machine-learning based methods were evaluated while highlighting the challenges involved in developing robust models from public-domain bioactivity data. Furthermore, the true promiscuity of potentially frequent-hitter compounds was identified and their mechanisms of action were explored at the molecular level by investigating target-ligand complexes. Finally, the chemical and biological spaces of extensively tested, yet inactive, compounds were investigated to reconfirm their potential as promising candidates.
    Polypharmacology describes a paradigm shift from "one drug, one target" to "one drug, many targets" and shows that highly effective drugs unfold their full effect only through interaction with several targets. The biological activity of a drug is thereby directly associated with its side effects, which can be explained by interactions with therapeutic targets and off-targets (promiscuity). An imbalance of these interactions often results in insufficient efficacy, toxicity, or unfavorable pharmacokinetics, which accounts for the failure of many potential drugs in their preclinical and clinical development phases. Early prediction of pharmacological and toxicological profiles from chemical structure using computer-based (in silico) models can help improve the drug development process. Reliable bioactivity data are a prerequisite for successful prediction. However, data quality is often a central problem in data integration, caused by the use of different bioassays and readouts, whose data are largely obtained from primary and confirmatory bioassays. While a large fraction of hits from primary assays are classified as false positives, some substances show no biological activity even though they have been extensively tested in both assay types ("extensively assayed compounds"). In this work, various cheminformatics methods were developed and applied to address the aforementioned problems, to demonstrate solution approaches, and ultimately to accelerate drug discovery. To this end, non-redundant, hand-curated knowledgebases of approved and withdrawn drugs were created and enriched with further information in order to advance the discovery and optimization of small organic molecules. A decisive tool here is the elucidation of their mechanisms of action and off-target interactions. For the further characterization of side effects, the main focus was placed on nuclear receptors, stress response pathways, and the hERG channel, which were modelled in silico. These models were built using an integrative approach of state-of-the-art algorithms such as similarity comparisons and machine learning. To ensure a high degree of predictive quality, data quality and chemical diversity were explicitly considered when evaluating the data sets. The in silico models were further extended with substructure filters in order to distinguish true mechanisms of action from non-specific binding behaviour (false-positive substances). Finally, the chemical and biological space of extensively tested, yet inactive, small organic molecules ("extensively assayed compounds") was investigated and compared with currently approved drugs in order to confirm their potential as promising candidates.
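
    As an illustration of the similarity- and machine-learning-based modelling the thesis describes, here is a minimal sketch of a fingerprint-based activity classifier; it assumes RDKit and scikit-learn are available, and the SMILES strings and labels are hypothetical placeholders, not data from the thesis.

```python
# Sketch: Morgan-fingerprint random forest classifier (placeholder data).
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def morgan_fp(smiles, radius=2, n_bits=2048):
    """Morgan (ECFP-like) bit-vector fingerprint as a numpy array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Hypothetical training set: 1 = active (e.g. a channel blocker), 0 = inactive
train = [("CCO", 0), ("c1ccccc1", 0), ("CCN(CC)CC", 1), ("c1ccc2ccccc2c1", 1)]
X = np.array([morgan_fp(s) for s, _ in train])
y = np.array([label for _, label in train])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
query = morgan_fp("CCOC(=O)c1ccccc1")  # placeholder query compound
print("predicted P(active):", model.predict_proba([query])[0, 1])
```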

    Heavy quarks and jets as probes of the QGP

    Full text link
    Quark-Gluon Plasma (QGP), a QCD state of matter created in ultra-relativistic heavy-ion collisions, has remarkable properties, including, for example, a low ratio of shear viscosity to entropy density. By detecting the collection of low-momentum particles that arise from the collision, it is possible to gain quantitative insight into the created matter. However, its fast evolution and thermalization properties remain elusive. Only high-momentum objects, used as probes of the QGP, can unveil its constituents at different wavelengths. In this review, we attempt to provide a comprehensive picture of what it has so far been possible to infer about the QGP, given our current theoretical understanding of jets, heavy flavor, and quarkonia. We bridge the resulting qualitative picture to the experimental observations made at the LHC and RHIC. We focus on the phenomenological description of experimental observations, provide a brief analytical summary of the description of hard probes, and give an outlook on the main difficulties we will need to surpass in the coming years. To benchmark QGP-related effects, we also address nuclear modifications of the initial state and hadronization effects.
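
    For context, the standard observable used to quantify such medium modifications (implicit in the abstract's discussion of benchmarking) is the nuclear modification factor, which compares the yield in nucleus-nucleus collisions to the binary-collision-scaled proton-proton yield:

```latex
R_{AA}(p_T) \;=\; \frac{1}{\langle N_{\mathrm{coll}} \rangle}\,
\frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\mathrm{d}N_{pp}/\mathrm{d}p_T},
\qquad R_{AA} < 1 \ \text{signalling suppression by the medium.}
```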

    Toxicity prediction using multi-disciplinary data integration and novel computational approaches

    Get PDF
    Current predictive tools used for human health assessment of potential chemical hazards rely primarily on either chemical structural information (i.e., cheminformatics) or bioassay data (i.e., bioinformatics). Emerging data sources such as chemical libraries, high-throughput assays, and health databases offer new possibilities for evaluating chemical toxicity as an integrated system and for overcoming the limited predictivity of current fragmented efforts; yet few studies have combined the new data streams. This dissertation tested the hypothesis that integrative computational toxicology approaches drawing upon diverse data sources would improve the prediction and interpretation of chemically induced diseases. First, chemical structures and toxicogenomics data were used to predict hepatotoxicity. Compared with conventional cheminformatics or toxicogenomics models, interpretation was enriched by the chemical and biological insights, even though prediction accuracy did not improve. This motivated the second project, which developed a novel integrative method, chemical-biological read-across (CBRA), that led to predictive and interpretable models amenable to visualization. CBRA was consistently among the most accurate models on four chemical-biological data sets. It highlighted chemical and biological features for interpretation, and the visualizations aided transparency. Third, we developed an integrative workflow that interfaced cheminformatics prediction with pharmacoepidemiology validation using a case study of Stevens-Johnson Syndrome (SJS), an adverse drug reaction (ADR) of major public health concern. Cheminformatics models first predicted potential SJS inducers and non-inducers, prioritizing them for subsequent pharmacoepidemiology evaluation, which then confirmed that predicted non-inducers were statistically associated with fewer SJS occurrences. By combining cheminformatics' ability to predict SJS as soon as drug structures are known with pharmacoepidemiology's statistical rigor, we have provided a universal scheme for more effective study of SJS and other ADRs. Overall, this work demonstrated that integrative approaches can deliver more predictive and interpretable models. These models can then reliably prioritize high-risk chemicals for further testing, allowing optimization of testing resources. A broader implication of this research is the growing role we envision for integrative methods that will take advantage of the various emerging data sources.
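
    As a sketch of the read-across idea behind CBRA (not the authors' exact algorithm), the snippet below predicts a query compound's toxicity as the similarity-weighted vote of its nearest neighbours, using a hypothetical hybrid metric that blends chemical and biological profiles; all arrays are placeholders.

```python
# Read-across style prediction with a hybrid chemical/biological similarity.
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two binary profile vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def hybrid_knn_predict(chem_q, bio_q, chem_db, bio_db, labels, alpha=0.5, k=3):
    """Similarity-weighted k-NN toxicity call on a blended metric."""
    sims = np.array([alpha * tanimoto(chem_q, c) + (1 - alpha) * tanimoto(bio_q, b)
                     for c, b in zip(chem_db, bio_db)])
    nn = np.argsort(sims)[::-1][:k]        # k most similar training compounds
    weights = sims[nn]
    return float(np.dot(weights, labels[nn]) / weights.sum())  # weighted toxic fraction

# Placeholder data: 4 training compounds, 8-bit chemical and biological profiles
rng = np.random.default_rng(0)
chem_db = rng.integers(0, 2, size=(4, 8))
bio_db = rng.integers(0, 2, size=(4, 8))
labels = np.array([1, 0, 1, 0])            # 1 = hepatotoxic (hypothetical)
print(hybrid_knn_predict(chem_db[0], bio_db[0], chem_db, bio_db, labels))
```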

    Reliability assessment approach through geospatial mapping for offshore wind energy

    Get PDF
    To meet increased energy demands, uphold commitments made in the Paris Agreement, and provide energy security to its consumers, the United Kingdom is rapidly expanding its wind energy industry at offshore locations. While harnessing the improved wind resource further offshore, the industry has faced reliability challenges in the dynamic marine environment which contribute to an increase in the cost of energy. This thesis promotes the argument for location-intelligent decisions in the industry by developing a methodology to allocate a combined risk-return performance metric to offshore locations. In the absence of comprehensive spatially distributed field reliability data for offshore wind turbines, the limit state design methodology is employed to model structural damage. Exposed to stochastic loading from wind and wave regimes, offshore wind turbines are fatigue-critical structures. The aero- and hydro-dynamic loads at representative sites across eight sub-regions in the UK continental shelf are quantified by processing modelled metocean data through established aero-hydro-servo-elastic design tools. These simulated loads and the inherent material fatigue properties provide site-specific lifetime accumulated damage. Normalising this damage by the potential energy production at each site provides an improved understanding of the feasibility of each sub-region for offshore wind deployment. Results indicate that although sheltered sub-regions display lower resource potential, they benefit from reduced structural damage compared to more dynamic locations. A similar observation is made when the methodology is employed on a larger scale incorporating the UK continental shelf and its adjoining areas. Furthermore, not only does the energy potential increase with distance to shore, but so does the damage per unit of energy produced. The research outcomes of this project are useful for identifying potential structural reserves for lifetime-extension considerations as more turbines reach their design lifetimes. Additionally, they may be used to inform design parameters, optimise the siting of future installations, and determine suitable maintenance strategies to improve the economic viability of offshore wind.
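
    To illustrate the fatigue limit-state calculation at the core of such a methodology, here is a minimal Miner's-rule sketch with a Basquin-type S-N curve; the curve constants and the stress-range histogram are hypothetical placeholders, not values from the thesis.

```python
# Lifetime fatigue damage via Miner's rule: D = sum(n_i / N_i).
import numpy as np

# Basquin-type S-N curve: N(S) = K * S**(-m), cycles to failure at stress range S (MPa)
K, m = 1.0e12, 3.0   # hypothetical structural-detail constants

# Annual stress-range histogram from simulated aero/hydro loads (placeholder)
stress_ranges = np.array([20.0, 40.0, 60.0, 80.0])   # MPa
cycles_per_year = np.array([2e6, 5e5, 8e4, 1e4])

N_fail = K * stress_ranges**(-m)                     # cycles to failure per bin
annual_damage = np.sum(cycles_per_year / N_fail)     # Miner's linear accumulation
print(f"Annual damage: {annual_damage:.3f}; "
      f"predicted fatigue life ~ {1.0 / annual_damage:.1f} years")
```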

    Event Generators for High-Energy Physics Experiments

    Full text link
    We provide an overview of the status of Monte Carlo event generators for high-energy particle physics. Guided by the experimental needs and requirements, we highlight areas of active development and opportunities for future improvements. Particular emphasis is given to physics models and algorithms that are employed across a variety of experiments. These common themes in event generator development lead to a more comprehensive understanding of physics at the highest energies and intensities, and allow models to be tested against a wealth of data that have been accumulated over the past decades. A cohesive approach to event generator development will allow these models to be further improved and systematic uncertainties to be reduced, directly contributing to future experimental success. Event generators are part of a much larger ecosystem of computational tools. They typically involve a number of unknown model parameters that must be tuned to experimental data while maintaining the integrity of the underlying physics models. Making both these data and the analyses with which they were obtained accessible to future users is an essential aspect of open science and data preservation. It ensures the consistency of physics models across a variety of experiments.
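
    As a toy illustration of the parameter tuning mentioned above (real tunes interpolate full generator responses over many parameters with dedicated tools), the following sketch picks the value of a single hypothetical model parameter that minimises chi-square against measured bin contents; all numbers are placeholders.

```python
# Toy chi-square tune of one hypothetical generator parameter.
import numpy as np

data = np.array([1.00, 0.80, 0.55])       # measured bin contents (placeholder)
errors = np.array([0.05, 0.04, 0.04])     # measurement uncertainties

def generator_prediction(alpha):
    # Hypothetical one-parameter stand-in for a generator's per-bin response
    return alpha * np.array([1.0, 0.9, 0.7])

def chi2(alpha):
    resid = (generator_prediction(alpha) - data) / errors
    return float(np.sum(resid**2))

# Brute-force scan over the parameter; real tunes use smarter optimisation
scan = np.linspace(0.5, 1.5, 101)
best = scan[np.argmin([chi2(a) for a in scan])]
print(f"best-fit parameter value: {best:.2f}")
```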

    Development and application of QSAR models for mechanisms related to endocrine disruption.

    Get PDF

    IN SILICO METHODS FOR DRUG DESIGN AND DISCOVERY

    Get PDF
    Computer-aided drug design (CADD) methodologies, which are critical to the cost-effective identification of promising drug candidates, are playing an ever-increasing role in drug discovery. These computational methods are relevant for limiting the use of animal models in pharmacological research, for aiding the rational design of novel and safe drug candidates, and for repositioning marketed drugs, supporting medicinal chemists and pharmacologists throughout the drug discovery trajectory. Within this field of research, we launched a Research Topic in Frontiers in Chemistry in March 2019 entitled "In silico Methods for Drug Design and Discovery," which involved two sections of the journal: Medicinal and Pharmaceutical Chemistry and Theoretical and Computational Chemistry. For the reasons mentioned, this Research Topic attracted the attention of scientists and received a large number of submitted manuscripts. Among them, 27 Original Research articles, five Review articles, and two Perspective articles have been published within the Research Topic. The Original Research articles cover most of the topics in CADD, reporting advanced in silico methods in drug discovery, while the Review articles offer a point of view on some computer-driven techniques applied to drug research. Finally, the Perspective articles provide a vision of specific computational approaches with an outlook on the modern era of CADD.