Mechanisms of action of structural alerts for mutagenicity and carcinogenicity
Knowing the mutagenic and carcinogenic properties of chemicals is very important for their hazard (and risk) assessment. One of the crucial events that trigger genotoxic, and sometimes carcinogenic, effects is the formation of adducts between chemical compounds and nucleic acids or histones. This review examines the mechanisms associated with specific functional groups (structural alerts or toxicophores) that may trigger genotoxic or epigenetic effects in cells. We present up-to-date information on defined structural alerts, their mechanisms of interaction with genetic material, and the software built on this knowledge (QSAR models and classification schemes) that enables the genotoxicity of untested chemicals to be predicted.
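The screening logic behind such alert-based software can be sketched in a few lines. Production alert systems match SMARTS patterns with a cheminformatics toolkit such as RDKit; the sketch below uses naive SMILES substring checks purely to illustrate the idea, so both the alert names and the patterns are simplified stand-ins, not a validated rulebase.

```python
# Illustrative structural-alert screen. Real alert systems match SMARTS
# patterns with a cheminformatics toolkit (e.g. RDKit); this sketch uses
# naive SMILES substring checks purely to show the screening logic, so
# the patterns below are simplified stand-ins.

ALERTS = {
    "aromatic nitro": "[N+](=O)[O-]",  # classic mutagenicity alert
    "epoxide": "C1OC1",                # DNA-reactive strained ether
}

def screen(smiles: str) -> list:
    """Return names of alerts whose simplified pattern occurs in the SMILES."""
    return [name for name, pattern in ALERTS.items() if pattern in smiles]

# Nitrobenzene, written with an explicit charge-separated nitro group:
hits = screen("c1ccccc1[N+](=O)[O-]")  # -> ["aromatic nitro"]
```

A compound matching one or more alerts would then be routed to the mechanistic reasoning or QSAR stage rather than being declared genotoxic outright.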
Ensuring confidence in predictions: A scheme to assess the scientific validity of in silico models
The use of in silico tools within the drug development process to predict a wide range of properties, including absorption, distribution, metabolism, elimination and toxicity, has become increasingly important due to changes in legislation and both ethical and economic drivers to reduce animal testing. Whilst in silico tools have been used for decades, there remains reluctance to accept predictions based on these methods, particularly in regulatory settings. This apprehension arises in part from a lack of confidence in the reliability, robustness and applicability of the models. To address this issue we propose a scheme for the verification of in silico models that enables end users and modellers to assess the scientific validity of models in accordance with the principles of good computer modelling practice. We report here the implementation of the scheme within the Innovative Medicines Initiative project "eTOX" (electronic toxicity) and its application to the in silico models developed within the frame of this project.
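A verification scheme of this kind can be operated as a documented checklist. The sketch below is hypothetical: the criteria paraphrase the OECD principles for (Q)SAR validation, while the bookkeeping and the all-criteria acceptance rule are invented for illustration and are not the actual eTOX scheme.

```python
# Hypothetical sketch of applying a model-verification checklist. The
# criteria paraphrase the OECD (Q)SAR validation principles; the
# acceptance rule (all criteria documented) is invented for illustration
# and is not the actual eTOX scheme.

CRITERIA = [
    "defined endpoint",
    "unambiguous algorithm",
    "defined applicability domain",
    "measures of goodness-of-fit, robustness and predictivity",
    "mechanistic interpretation, if possible",
]

def assess(answers: dict) -> tuple:
    """Return (acceptable, list of undocumented criteria)."""
    unmet = [c for c in CRITERIA if not answers.get(c, False)]
    return (len(unmet) == 0, unmet)

ok, unmet = assess({c: True for c in CRITERIA})        # fully documented
partial, missing = assess({"defined endpoint": True})  # mostly undocumented
```

The value of making the checklist explicit is that an end user can see exactly which evidence is missing before trusting a prediction.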
Novel in vitro and mathematical models for the prediction of chemical toxicity
The focus of much scientific and medical research is directed towards understanding the disease process and defining therapeutic intervention strategies. The scientific basis of drug safety is very complex and currently remains poorly understood, despite the fact that adverse drug reactions (ADRs) are a major health concern and a serious impediment to the development of new medicines. Toxicity issues account for approximately 21% of drug attrition during drug development, and safety testing strategies require considerable animal use. Mechanistic relationships between drug plasma levels and the molecular/cellular events that culminate in whole-organ toxicity underpin the development of novel safety assessment strategies. Current in vitro test systems are poorly predictive of the toxicity of chemicals entering the systemic circulation, particularly to the liver. Such systems fall short because of (1) the physiological gap between the cells currently used and human hepatocytes in their native state, (2) the lack of physiological integration with other cells/systems within organs, which is required to amplify the initial toxicological lesion into overt toxicity, (3) the inability to assess how low-level cell damage induced by chemicals may develop into overt organ toxicity in a minority of patients, and (4) the lack of consideration of systemic effects. Reproduction of centrilobular and periportal hepatocyte phenotypes in in vitro culture is crucial for sensitive detection of cellular stress. Hepatocyte metabolism/phenotype depends on cell position along the liver lobule, with corresponding differences in exposure to substrate, oxygen and hormone gradients. Application of bioartificial liver (BAL) technology can encompass in vitro predictive toxicity testing with enhanced sensitivity and improved mechanistic understanding.
Combining this technology with mechanistic mathematical models describing intracellular metabolism, fluid flow, and substrate, hormone and nutrient distribution provides the opportunity to design the BAL specifically to mimic the in vivo scenario. Such mathematical models enable theoretical hypothesis testing, will inform the design of in vitro experiments, and will enable both refinement and reduction of in vivo animal trials. In this way, the development of novel mathematical modelling tools will help to focus and direct in vitro and in vivo research, and can be used as a framework for other areas of drug safety science.
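One ingredient of such a mathematical model can be sketched as a one-dimensional transport equation. The minimal example below integrates a steady-state oxygen gradient along a sinusoid, assuming first-order hepatocyte uptake and plug flow; all parameter values are hypothetical and the physiology is deliberately reduced.

```python
# Minimal sketch of one ingredient of such a model: the steady-state
# oxygen gradient along a liver sinusoid, assuming first-order uptake by
# hepatocytes and plug flow (dC/dx = -(k/v) * C). Parameter values are
# hypothetical; real BAL models couple many such equations to metabolism
# and flow.

def oxygen_profile(c_in, k, v, length, n=100):
    """Forward-Euler integration of the inlet-to-outlet oxygen profile."""
    dx = length / n
    c = c_in
    profile = [c]
    for _ in range(n):
        c += -(k / v) * c * dx  # uptake removes oxygen along the flow
        profile.append(c)
    return profile

# Periportal (inlet) vs centrilobular (outlet) oxygen tension:
profile = oxygen_profile(c_in=1.0, k=0.5, v=1.0, length=5.0)
```

The monotone drop from inlet to outlet is what makes zonation possible: cells at different positions see different oxygen levels, which is precisely the gradient a BAL design aims to reproduce.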
Development of New Methods for the (Q)SAR Applicability Domain Assessment: Using Structural Information in a Statistical Study of the Errors in Prediction
The main aim of (Q)SAR is to build models to evaluate and predict properties of molecules, such as biological and environmental effects, and physicochemical properties. These models are built using available experimental data, whose quality and quantity heavily affect the models' capability to give reliable predictions for new chemicals. A dataset can be viewed as a "sampling" of the whole chemical space: if the sample is too small and/or too homogeneous, the model will inevitably be limited in the types of chemicals it can predict.
From the point of view of protecting human health and the environment, it is preferable that a model predicts even a small number of chemicals, but with the highest possible reliability. The "coverage" issue can be overcome by integrating results from different models. In this perspective, clearly defining each model's applicability domain is crucial for identifying which model is most suitable for each chemical to be assessed.
The definition of the applicability domain (AD) of (Q)SAR models is still an open research field. Several approaches have been proposed and implemented over the years, including the use of structural features such as functional groups and atom-centred fragments. These features have also proven useful for an a priori definition of the AD, making it independent of the specific algorithm chosen to develop the model.
Within this study, the definition of (Q)SAR models' applicability domain has been investigated using structural features of different complexity: thresholds for chemical composition and molecular weight, chemical classes related to commonly well- and badly-predicted molecules, and statistically extracted structural fragments to model the error in prediction. In the case studies considered, these approaches improved the AD definition provided by the model developers, supporting their integration within the definition of the models' applicability domain.
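A fragment-based AD check in the spirit described above can be reduced to a set-coverage test: a query molecule is flagged as outside the AD when it contains fragments never seen in the training set. In this sketch fragments are abstract string labels supplied by the caller (e.g. functional groups or atom-centred fragments); the extraction step itself is not shown and the example labels are invented.

```python
# Sketch of a fragment-coverage applicability-domain check: the query is
# inside the AD only when every one of its fragments occurred in the
# training set. Fragment labels here are illustrative placeholders; real
# systems extract them from molecular structures.

def in_domain(query_fragments: set, training_fragments: set):
    """Return (inside_AD, sorted list of unseen fragments)."""
    unseen = query_fragments - training_fragments
    return (not unseen, sorted(unseen))

train = {"aromatic ring", "carboxylic acid", "aromatic chlorine"}
ok, unseen = in_domain({"aromatic ring", "nitro group"}, train)
# "nitro group" was never seen in training -> flag the prediction
```

Reporting the unseen fragments, rather than a bare in/out flag, tells the assessor exactly why a prediction should be treated with caution.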
Integration of Toxicity Data from Experiments and Non-Testing Methods within a Weight of Evidence Procedure
Assessment of human health and environmental risk is based on multiple sources of information, requiring the integration of the lines of evidence in order to reach a conclusion. There is an increasing need for data to fill the gaps and for new methods of data integration. From a regulatory point of view, risk assessors take advantage of all the available data by means of weight-of-evidence (WOE) and expert-judgement approaches to develop conclusions about the risk posed by chemicals and also nanoparticles. The integration of physico-chemical properties and toxicological effects sheds light on relationships between molecular properties and biological effects, leading to non-testing methods. (Quantitative) structure-activity relationship ((Q)SAR) and read-across are examples of non-testing methods. In this dissertation, (i) two new structure-based carcinogenicity models, (ii) ToxDelta, a new read-across model for the mutagenicity endpoint, and (iii) a genotoxicity model for metal oxide nanoparticles are introduced. For the latter, a best-professional-judgement method is employed to select reliable data from scientific publications and develop a database of nanomaterials and their genotoxicity effects. We developed a decision tree model for the classification of these nanomaterials.
The (Q)SAR models used in qualitative WOE approaches often lack transparency, so their risk estimates need quantified uncertainties. Our two structure-based carcinogenicity models provide transparent reasoning for their predictions. Additionally, ToxDelta offers better-supported read-across based on the analysis of differences between molecular structures. We propose a basic qualitative WOE framework that couples in silico model predictions with inspection of similar compounds. We demonstrate the application of this framework in two realistic case studies, and discuss how to handle different, and sometimes conflicting, results from various in silico models in qualitative WOE terms, facilitating structured and transparent development of answers to scientific questions.
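A toy version of such WOE integration can be written as a weighted vote: each line of evidence carries a verdict and a reliability weight, and the signed sum yields the overall call. The weights, labels and zero threshold below are invented for illustration and are not the dissertation's actual procedure.

```python
# Toy weight-of-evidence integration: each line of evidence carries a
# verdict and a reliability weight, and a weighted vote gives the
# overall call. Weights and the zero threshold are illustrative only.

def weigh_evidence(lines):
    """lines: iterable of (verdict 'positive'/'negative', weight)."""
    score = sum(w if verdict == "positive" else -w for verdict, w in lines)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "inconclusive"

evidence = [
    ("positive", 0.8),  # structure-based carcinogenicity model
    ("negative", 0.3),  # read-across from a dissimilar analogue
    ("positive", 0.5),  # structural alert present
]
call = weigh_evidence(evidence)  # -> "positive"
```

Conflicting lines of evidence do not cancel silently: the assessor can inspect the individual verdicts and weights behind any overall call.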
Development and application of distributed computing tools for virtual screening of large compound libraries
In the current drug discovery process, the identification of new target proteins and potential ligands is very tedious, expensive and time-consuming. Thus, the use of in silico techniques is of utmost importance and has proved to be a valuable strategy for detecting complex structural and bioactivity relationships. The increased demand for computational power in scientific fields and the timely analysis of the generated piles of data require innovative strategies for the efficient utilization of distributed computing resources in the form of computational grids. Such grids add a new aspect to the emerging information technology paradigm by providing and coordinating heterogeneous resources such as various organizations, people, computing, storage and networking facilities, as well as data, knowledge, software and workflows.
The aim of this study was to develop a university-wide applicable grid infrastructure, UVieCo (University of Vienna Condor pool), which can be used to implement standard structure- and ligand-based drug discovery applications using freely available academic software. Firewall and security issues were resolved with a virtual private network setup, whereas virtualization of computer hardware was achieved using the CoLinux concept, allowing Linux-executable jobs to run inside Windows machines. The effectiveness of the grid was assessed by performance measurement experiments using sequential and parallel tasks.
Subsequently, the association of expression/sensitivity profiles of ABC transporters with activity profiles of anticancer compounds was analyzed by mining data from the NCI (National Cancer Institute). The datasets generated in this analysis were used with ligand-based computational methods, such as shape similarity and classification algorithms, to identify P-gp substrates and separate them from non-substrates. While developing predictive classification models, the problem of imbalanced class distribution was addressed using the cost-sensitive bagging approach. Applicability domain experiments revealed that our model not only predicts NCI compounds well, but can also be applied to drug-like molecules. The developed models were relatively simple, but precise enough to be applicable to virtual screening of large chemical libraries for the early identification of P-gp substrates, which can be useful for removing compounds with poor ADMET properties in an early phase of drug discovery.
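The cost-sensitive bagging idea mentioned above can be sketched in pure Python: each bootstrap is drawn with the minority class oversampled to the majority size, and the ensemble votes. The one-feature threshold "stump" below stands in for a real base learner, and the data and parameters are invented for illustration; the thesis's actual implementation is not reproduced here.

```python
import random
from statistics import mean

# Sketch of cost-sensitive bagging for an imbalanced substrate (1) /
# non-substrate (0) problem: minority examples are oversampled in every
# bootstrap, and the stump ensemble votes. Toy data, toy base learner.

def fit_stump(data):
    """Pick the threshold t maximising accuracy of 'predict 1 if x > t'."""
    best_t, best_acc = 0.0, -1.0
    for t, _ in data:
        acc = mean(1 if (x > t) == bool(y) else 0 for x, y in data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagged_predict(data, x, n_models=25, seed=0):
    rng = random.Random(seed)
    minority = [d for d in data if d[1] == 1]
    majority = [d for d in data if d[1] == 0]
    votes = 0
    for _ in range(n_models):
        # Cost-sensitive resampling: minority drawn up to majority size.
        boot = (rng.choices(minority, k=len(majority))
                + rng.choices(majority, k=len(majority)))
        votes += 1 if x > fit_stump(boot) else 0
    return 1 if votes > n_models / 2 else 0

# Six non-substrates (label 0) and two substrates (label 1):
train = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 0),
         (0.5, 0), (0.6, 0), (0.9, 1), (1.0, 1)]
```

Without the rebalanced bootstraps, a majority-vote ensemble on such skewed data tends to predict the majority class everywhere, which is precisely the failure mode cost-sensitive bagging avoids.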
Additionally, shape-similarity and self-organizing map techniques were used to screen an in-house database as well as a large vendor database to identify novel selective serotonin reuptake inhibitor (SSRI)-like compounds that can induce apoptosis. The retrieved hits possess novel chemical scaffolds and can be considered starting points for lead-optimization studies.
The work described in this thesis will be useful for creating a distributed computing environment from the resources available within an organization, and can be applied to various tasks such as the efficient handling of imbalanced data classification problems or multistep virtual screening approaches.
Development of in silico models for the prediction of toxicity incorporating ADME information
Drug discovery is a process that requires a significant investment in both time and resources. Although recent developments have reduced the number of drugs failing at the later stages of development due to poor pharmacokinetic and/or toxicokinetic profiles, late stage attrition of drug candidates remains a problem. Additionally, there is a need to reduce animal testing for toxicological risk assessment for ethical and financial reasons. In silico methods offer an alternative that can address these challenges.
A variety of computational approaches have been developed in the last two decades; these must be evaluated to ensure confidence in their use. The research presented in this thesis has assessed a range of existing tools for the prediction of toxicity and absorption, distribution, metabolism and elimination (ADME) parameters, with an emphasis on absorption and xenobiotic metabolism. These two ADME properties largely determine the bioavailability of a drug and, in turn, also influence toxicity. In vitro (Caco-2 cells and the parallel artificial membrane permeation assay) and in silico approaches, such as various drug-likeness filters, can be used to estimate human intestinal absorption; a comparison between different methods was performed to identify relative strengths and weaknesses of the approaches. In terms of xenobiotic metabolism it is not only important to predict metabolites correctly, but also crucial to identify those compounds that can be biotransformed into species that covalently bind to biomolecules. Structural alerts are routinely used to screen for such potential reactive metabolites. The balance between sensitivity and specificity of such reactive metabolite alerts has been discussed in the context of correctly predicting reactive metabolites of pharmaceuticals (using data available from DrugBank). Off-target toxicity, exemplified by human Ether-à-go-go-Related Gene (hERG) channel inhibition, was also explored. A number of novel structural alerts for hERG toxicity were developed based on groups of structurally similar compounds. Finally, the importance of predicting potential ecotoxicological effects of drugs was also considered, and the utility of zebrafish embryos to distinguish between baseline and excess toxicity was investigated. In evaluating this selection of existing tools, improvements to the methods have been proposed where possible.
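The sensitivity/specificity balance discussed above can be made concrete: given alert screening results for compounds with known reactive-metabolite outcomes, both measures fall out of the confusion counts. The counts below are invented for illustration and are not figures from the thesis.

```python
# Sensitivity/specificity of a structural-alert screen from confusion
# counts. Counts are invented for illustration.

def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of true reactives flagged
    specificity = tn / (tn + fp)  # fraction of non-reactives passed
    return sensitivity, specificity

# A permissive alert set flags many compounds: sensitivity rises while
# specificity falls.
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=60, fp=40)
# sens = 0.9, spec = 0.6
```

This is the trade-off behind alert curation: broadening a pattern catches more true reactive metabolites but flags more benign compounds, so the right operating point depends on the cost of each error.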