Development of Computer-aided Concepts for the Optimization of Single-Molecules and their Integration for High-Throughput Screenings
In the field of synthetic biology, highly interdisciplinary approaches to the
design and modelling of functional molecules with computer-assisted methods
have become established in recent decades. These methods are mainly used where
experimental approaches reach their limits: computer models can, for example,
elucidate the temporal behaviour of nucleic acid polymers or proteins through
single-molecule simulations, and illustrate the functional relationships of
amino acid residues or nucleotides to one another. The knowledge gained from
computer modelling can be fed back continuously into the further experimental
process (screening) as well as into the shape or function (rational design) of
the molecule under consideration. Such human-guided optimization of
biomolecules is often necessary because the substrates of interest for
biocatalysts and enzymes are usually synthetic, man-made materials (such as
PET), for which evolution has had no time to provide efficient biocatalysts.
With regard to the computer-aided design of single molecules, two fundamental
paradigms dominate the field of synthetic biology: on the one hand,
probabilistic experimental methods (e.g., evolutionary design processes such as
directed evolution) combined with High-Throughput Screening (HTS); on the other
hand, rational, computer-aided single-molecule design methods.
For both paradigms, computer models and concepts were developed, evaluated and
published.
The first contribution in this thesis describes a computer-aided design
approach for the Fusarium solani cutinase (FsC). The enzyme's loss of activity
during longer incubation with PET was investigated in molecular detail. For
this purpose, Molecular Dynamics (MD) simulations of the spatial structure of
FsC together with a water-soluble degradation product of the synthetic
substrate PET (ethylene glycol) were computed, and the existing model was
extended by combining it with reduced models. This simulation study identified
certain regions of FsC that interact particularly strongly with PET (ethylene
glycol) and thus have a significant influence on the flexibility and structure
of the enzyme.
The subsequent original publication establishes a new method for the selection
of high-throughput assays for use in protein chemistry. The selection is made
via a meta-optimization of the candidate assays: control reactions are carried
out for each assay, the distance between the control distributions is evaluated
with classical statistical methods such as the Kolmogorov-Smirnov test, and a
performance score is then assigned to each assay. These control experiments are
performed before the actual experiment (screening), and the assay with the
highest performance is used for the subsequent screening. Applying this generic
method yields high success rates, which we were able to demonstrate
experimentally using lipases and esterases as examples.
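The assay-selection scheme just described can be sketched in a few lines of Python. This is a minimal illustration, not the published implementation: the performance score is assumed here to be the two-sample Kolmogorov-Smirnov statistic between the positive- and negative-control readings, and the assay names and signal values are invented.

```python
# Sketch: rank candidate assays by how well their control distributions separate.

def ks_statistic(a, b):
    """Two-sample KS statistic: maximum gap between the empirical CDFs."""
    a, b = sorted(a), sorted(b)

    def cdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)

    return max(abs(cdf(a, x) - cdf(b, x)) for x in set(a) | set(b))

def select_assay(assays):
    """assays: dict mapping name -> (positive_controls, negative_controls).
    Returns the best assay name and all performance scores."""
    scores = {name: ks_statistic(pos, neg) for name, (pos, neg) in assays.items()}
    return max(scores, key=scores.get), scores

best, scores = select_assay({
    "assay_A": ([0.9, 1.0, 1.1, 0.95], [0.10, 0.20, 0.15, 0.12]),  # well separated
    "assay_B": ([0.5, 0.6, 0.4, 0.55], [0.45, 0.50, 0.60, 0.52]),  # overlapping
})
print(best)  # assay_A
```

Here assay_A wins because its control distributions do not overlap at all (KS statistic 1.0), mirroring the idea that well-separated controls predict a discriminative screening assay.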
In the area of green chemistry, the processes described above can help to find
enzymes for the degradation of synthetic materials more quickly, or to modify
naturally occurring enzymes so that, after successful optimization, they can
efficiently convert synthetic substrates. The experimental effort (consumption
of materials) is thereby kept to a minimum during the practical implementation.
Especially for large-scale screenings, a prior examination or restriction of
the possible sequence space can contribute significantly to maximizing the
success rate of screenings and minimizing the total time they require.
In addition to classical methods such as MD simulations combined with reduced
models, new graph-based methods for the representation and analysis of MD
simulations were developed. For this purpose, simulations were converted into
distance-dependent dynamic graphs. Based on this reduced representation,
efficient analysis algorithms were developed and tested. In particular, network
motifs were investigated to determine whether this type of semantics is more
suitable than spatial coordinates for describing molecular structures and
interactions within MD simulations. The concept was evaluated on MD simulations
of various molecules, such as water, synthetic pores, proteins, peptides and
RNA structures, and this novel form of semantics proved to be an excellent way
to describe (bio)molecular structures and their dynamics.
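The conversion of a trajectory into distance-dependent dynamic graphs can be illustrated with a small, self-contained sketch; the coordinates and the contact cutoff below are invented toy values, whereas a real MD trajectory would supply thousands of frames and atoms.

```python
# Sketch: one contact graph per MD frame; an edge connects atoms within a cutoff.
import math

def frame_to_graph(coords, cutoff=2.0):
    """coords: list of (x, y, z) positions. Returns the edge set {(i, j)}, i < j."""
    edges = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) <= cutoff:
                edges.add((i, j))
    return edges

def dynamic_graph(trajectory, cutoff=2.0):
    """trajectory: list of frames -> list of edge sets (one graph per frame)."""
    return [frame_to_graph(frame, cutoff) for frame in trajectory]

frames = [
    [(0, 0, 0), (1, 0, 0), (5.0, 0, 0)],  # frame 1: only atoms 0 and 1 in contact
    [(0, 0, 0), (1, 0, 0), (1.5, 0, 0)],  # frame 2: atom 2 has moved into contact
]
for edges in dynamic_graph(frames):
    print(sorted(edges))
```

The sequence of edge sets is exactly the "dynamic graph" on which motif counting and further analyses can then operate, independent of absolute spatial coordinates.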
Furthermore, an algorithm (StreAM-Tg) was developed for the creation of
motif-based Markov models, especially for the analysis of single-molecule
simulations of nucleic acids. This algorithm is used for the design of RNAs:
the insights obtained from the StreAM-Tg analysis (Markov models) provide
useful recommendations for the (re)design of functional RNA.
In this context, a new method was developed to quantify the environment (i.e.,
water; the solvent context) and its influence on biomolecules in MD
simulations. For this purpose, three-vertex motifs were used to describe the
structure of the individual water molecules. This method captures the structure
and dynamics of water accurately: for example, we were able to reproduce the
thermodynamic entropy of water in the liquid and vapor phases along the
vapor-liquid equilibrium curve from the triple point to the critical point.
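A toy version of such a three-vertex motif census shows the counting idea; the contact graph below is an invented example, not actual water geometry or the thesis implementation.

```python
# Sketch: classify every connected 3-vertex subgraph of a contact graph
# as either a 2-edge path or a triangle.
from itertools import combinations

def three_vertex_motifs(n, edges):
    """n: number of vertices; edges: iterable of (u, v) pairs.
    Returns (path_count, triangle_count)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    paths = triangles = 0
    for a, b, c in combinations(range(n), 3):
        k = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if k == 2:
            paths += 1
        elif k == 3:
            triangles += 1
    return paths, triangles

# 4 vertices: a triangle 0-1-2 plus a pendant vertex 3 attached to 2.
print(three_vertex_motifs(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # (2, 1)
```

The ratio of such motif counts per frame is one way a motif-based description can summarize local structure without reference to coordinates.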
Another major field covered in this thesis is the development of new
computer-aided HTS approaches for the design of functional RNA. For the
production of functional RNA (e.g., aptamers and riboswitches), an
experimental, round-based HTS such as SELEX is typically used. By combining
Next Generation Sequencing (NGS) with the SELEX process, this design process
can be studied at the nucleotide and secondary-structure levels for the first
time. A special feature of small RNA molecules compared to proteins is that a
minimum-free-energy secondary structure (topology) can be determined directly
from the nucleotide sequence with a high degree of certainty.
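As an illustration of this principle, a minimal Nussinov-style dynamic program computes an optimal secondary-structure score directly from the sequence. This base-pair-maximizing recursion is a deliberately simplified stand-in for the energy-based Zuker algorithm used in the thesis, and the sequence is a toy example.

```python
# Sketch: maximize the number of nested base pairs (Nussinov recursion),
# a simplified analogue of minimum-free-energy structure prediction.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_pairs(seq, min_loop=3):
    """Return the maximum number of nested base pairs, requiring at least
    `min_loop` unpaired bases inside every hairpin loop."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]  # case 1: position j stays unpaired
            for k in range(i, j - min_loop):  # case 2: j pairs with some k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(max_pairs("GGGAAAUCCC"))  # 3: the stem G-C/G-C/G-C closing an AAAU loop
```

A full MFE predictor replaces the pair count with stacking and loop energies, but the key point survives even in this sketch: the structure score is obtained from the sequence alone by dynamic programming.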
Using the combination of M. Zuker's algorithm, NGS and the SELEX method, it was
possible to quantify the structural diversity of individual RNA molecules while
taking the genetic context into account. This combination of methods allowed
the prediction of the rounds in which the first ciprofloxacin riboswitch
emerged. In this example, only a simple structural comparison (Levenshtein
distance) was used to quantify the diversity of each round. To improve on this,
a new representation of the RNA structure as a directed graph was modeled,
which was then compared with a probabilistic subgraph isomorphism.
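The simple Levenshtein-based diversity measure mentioned above can be sketched as follows; the dot-bracket structures are invented examples, not structures from the SELEX dataset.

```python
# Sketch: edit distance between two RNA secondary structures in
# dot-bracket notation, as a per-round structural diversity measure.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete ca
                           cur[j - 1] + 1,       # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

s1 = "((((....))))"  # 4-bp stem
s2 = "(((......)))"  # 3-bp stem with a larger loop
print(levenshtein(s1, s2))  # 2
```

Averaging such pairwise distances over the sequences of one SELEX round gives a scalar diversity per round, which is exactly the kind of signal that can reveal the round in which a dominant structure first emerges.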
Finally, the NGS dataset (ciprofloxacin riboswitch) was modeled as a dynamic
graph and analyzed for the occurrence of defined seven-vertex motifs; in this
way, motif-based semantics were integrated into HTS for RNA molecules for the
first time. The identified motifs could be assigned to secondary-structure
elements that had been identified experimentally in the ciprofloxacin aptamer
R10k6.
All the algorithms presented were integrated into an R library, published, and
made available to scientists from all over the world.
Gray codes and symmetric chains
We consider the problem of constructing a cyclic listing of all bitstrings of length 2n+1 with Hamming weights in the interval [n+1−ℓ, n+ℓ], where 1 ≤ ℓ ≤ n+1, by flipping a single bit in each step.
This is a far-ranging generalization of the well-known middle two levels problem (the case ℓ = 1).
We provide a solution for the case ℓ = 2 and solve a relaxed version of the problem for general values of ℓ, by constructing cycle factors for those instances.
Our proof uses symmetric chain decompositions of the hypercube, a concept known from the theory of posets, and we present several new constructions of such decompositions.
In particular, we construct four pairwise edge-disjoint symmetric chain decompositions of the n-dimensional hypercube for any n ≥ 12.
Tightly-Secure Authenticated Key Exchange, Revisited
We introduce new tightly-secure authenticated key exchange (AKE) protocols that are extremely efficient, yet have only a constant security loss and can be instantiated in the random oracle model both from the standard DDH assumption and a subgroup assumption over RSA groups. These protocols can be deployed with optimal parameters, independent of the number of users or sessions, without the need to compensate for a security loss with increased parameters and thus decreased computational efficiency.
We use the standard “Single-Bit-Guess” AKE security (with forward secrecy and state corruption), requiring all challenge keys to be simultaneously pseudo-random. In contrast, most previous papers on tightly secure AKE protocols (Bader et al., TCC 2015; Gjøsteen and Jager, CRYPTO 2018; Liu et al., ASIACRYPT 2020) concentrated on a non-standard “Multi-Bit-Guess” AKE security, which is known not to compose tightly with symmetric primitives to build a secure communication channel.
Our key technical contribution is a new generic approach to constructing tightly-secure AKE protocols based on non-committing key encapsulation mechanisms. The resulting DDH-based protocols are considerably more efficient than all previous constructions.
On the Impossibility of Tight Cryptographic Reductions
The existence of tight reductions in cryptographic security proofs is an important question, motivated by the theoretical search for cryptosystems whose security guarantees are truly independent of adversarial behavior and the practical necessity of concrete security bounds for the theoretically-sound selection of cryptographic parameters.
At Eurocrypt 2002, Coron described a meta-reduction technique that allows one to prove the impossibility of tight reductions for certain digital signature schemes.
This seminal result has found many further interesting applications.
However, due to a technical subtlety in the argument, the applicability of this technique beyond digital signatures in the single-user setting has turned out to be rather limited.
We describe a new meta-reduction technique for proving such impossibility results, which improves on known ones in several ways.
First, it enables interesting novel applications. This includes a formal proof that for certain cryptographic primitives (including public-key encryption/key encapsulation mechanisms and digital signatures), the security loss incurred when the primitive is transferred from an idealized single-user setting to the more realistic multi-user setting is impossible to avoid, and a lower tightness bound for non-interactive key exchange protocols. Second, the technique allows one to rule out tight reductions from a very general class of non-interactive complexity assumptions. Third, the provided bounds are quantitatively and qualitatively better, yet simpler, than the bounds derived from Coron's technique and its extensions.
Authenticated Key Exchange and Signatures with Tight Security in the Standard Model
We construct the first authenticated key exchange protocols that achieve tight security in the standard model. Previous works either relied on techniques that seem to inherently require a random oracle, or achieved only “Multi-Bit-Guess” security, which is not known to compose tightly, for instance, to build a secure channel.
Our constructions are generic, based on digital signatures and key encapsulation mechanisms (KEMs). The main technical challenge we resolve is to determine suitable KEM security notions which on the one hand are strong enough to yield tight security, but at the same time weak enough to be efficiently instantiable in the standard model, based on standard techniques such as universal hash proof systems.
Digital signature schemes with tight multi-user security in the presence of adaptive corruptions are a central building block, which is used in all known constructions of tightly-secure AKE with full forward security. We identify a subtle gap in the security proof of the only previously known efficient standard model scheme by Bader et al. (TCC 2015). We develop a new variant, which yields the currently most efficient signature scheme that achieves this strong security notion without random oracles and based on standard hardness assumptions.
High-throughput miRNA profiling of human melanoma blood samples
Background: MicroRNA (miRNA) signatures are not only found in cancer tissue but also in the blood of cancer patients. Specifically, miRNA detection in blood offers the prospect of a non-invasive analysis tool.
Methods: Using a microarray-based approach we screened almost 900 human miRNAs to detect miRNAs that are deregulated in their expression in blood cells of melanoma patients. We analyzed 55 blood samples, including 20 samples of healthy individuals, 24 samples of melanoma patients as test set, and 11 samples of melanoma patients as independent validation set.
Results: A hypothesis-test-based approach detected 51 differentially regulated miRNAs, including 21 miRNAs that were downregulated and 30 miRNAs that were upregulated in blood cells of melanoma patients as compared to blood cells of healthy controls. The test set and the independent validation set of the melanoma samples showed a high correlation of fold changes (0.81). Applying hierarchical clustering and principal component analysis, we found that blood samples of melanoma patients and healthy individuals can be well differentiated from each other based on miRNA expression analysis. Using a subset of 16 significantly deregulated miRNAs, we were able to reach a classification accuracy of 97.4%, a specificity of 95% and a sensitivity of 98.9% by supervised analysis. MiRNA microarray data were validated by qRT-PCR.
Conclusions: Our study provides strong evidence for miRNA expression signatures of blood cells as useful biomarkers for melanoma.
The human brainome: network analysis identifies HSPA2 as a novel Alzheimer’s disease target
Our hypothesis is that changes in gene and protein expression are crucial to the development of late-onset Alzheimer’s disease. Previously we examined how DNA alleles control downstream expression of RNA transcripts and how those relationships are changed in late-onset Alzheimer’s disease. We have now examined how proteins are incorporated into networks in two separate series and evaluated our outputs in two different cell lines. Our pipeline included the following steps: (i) predicting expression quantitative trait loci; (ii) determining differential expression; (iii) analysing networks of transcript and peptide relationships; and (iv) validating effects in two separate cell lines. We performed all our analysis in two separate brain series to validate effects. Our two series included 345 samples in the first set (177 controls, 168 cases; age range 65–105; 58% female; KRONOSII cohort) and 409 samples in the replicate set (153 controls, 141 cases, 115 mild cognitive impairment; age range 66–107; 63% female; RUSH cohort). Our top target is heat shock protein family A member 2 (HSPA2), which was identified as a key driver in our two datasets. HSPA2 was validated in two cell lines, with overexpression driving further elevation of amyloid-β40 and amyloid-β42 levels in APP mutant cells, as well as significant elevation of microtubule-associated protein tau and phosphorylated tau in a modified neuroglioma line. This work further demonstrates that studying changes in gene and protein expression is crucial to understanding late-onset disease and further nominates HSPA2 as a specific key regulator of late-onset Alzheimer’s disease processes.
Genome-wide association study of 23,500 individuals identifies 7 loci associated with brain ventricular volume
The volume of the lateral ventricles (LV) increases with age, and their abnormal enlargement is a key feature of several neurological and psychiatric diseases. Although lateral ventricular volume is heritable, a comprehensive investigation of its genetic determinants is lacking. In this meta-analysis of genome-wide association studies of 23,533 healthy middle-aged to elderly individuals from 26 population-based cohorts, we identify 7 genetic loci associated with LV volume. These loci map to chromosomes 3q28, 7p22.3, 10p12.31, 11q23.1, 12q23.3, 16q24.2, and 22q13.1 and implicate pathways related to tau pathology, S1P signaling, and cytoskeleton organization. We also report a significant genetic overlap between thalamus and LV volumes (ρgenetic = −0.59, p-value = 3.14 × 10⁻⁶), suggesting that these brain structures may share a common biology. These genetic associations of LV volume provide insights into brain morphology.
An Analysis of Two Genome-wide Association Meta-analyses Identifies a New Locus for Broad Depression Phenotype
Background: The genetics of depression has been explored in genome-wide association studies that focused on either major depressive disorder or depressive symptoms, with mostly negative findings. A broad depression phenotype including both phenotypes has not been tested previously using a genome-wide association approach. We aimed to identify genetic polymorphisms significantly associated with a broad phenotype ranging from depressive symptoms to major depressive disorder.
Methods: We analyzed two prior studies of 70,017 participants of European ancestry from general and clinical populations in the discovery stage. We performed a replication meta-analysis of 28,328 participants. Single nucleotide polymorphism (SNP)-based heritability and genetic correlations were calculated using linkage disequilibrium score regression. Discovery and replication analyses were performed using a p-value-based meta-analysis. Lifetime major depressive disorder and depressive symptom scores were used as the outcome measures.
Results: The SNP-based heritability of major depressive disorder was 0.21 (SE = 0.02), the SNP-based heritability of depressive symptoms was 0.04 (SE = 0.01), and their genetic correlation was 1.001 (SE = 0.2). We found one genome-wide significant locus related to the broad depression phenotype (rs9825823, chromosome 3: 61,082,153, p = 8.2 × 10–9) located in an intron of the FHIT gene. We replicated this SNP in independent samples (p = .02) and in the overall meta-analysis of the discovery and replication cohorts (1.0 × 10–9).
Conclusions: This large study identified a new locus for depression. Our results support a continuum between depressive symptoms and major depressive disorder. A phenotypically more inclusive approach may help to achieve the large sample sizes needed to detect susceptibility loci for depression.