Characterization and Bioanalysis of Protein-Based Biopharmaceuticals, Peptides and Amino Acids by Liquid Chromatography and Mass Spectrometry
Biopharmaceuticals have become an essential class of therapeutics and are used for different medical indications such as diabetes, cancer, inflammatory diseases, and infectious diseases. Monoclonal antibodies (mAbs) account for the largest share of biopharmaceutical drug approvals. However, their benefits in terms of high specificity and efficacy come with the drawbacks of higher cost and higher complexity. This complexity arises from the high molecular weight on the one hand and the high structural heterogeneity on the other, making the analytical characterization and quality control of mAbs and other biopharmaceuticals a significant challenge. In addition to these protein-based biopharmaceuticals, the elucidation of the absolute configuration of therapeutic peptides and natural (lipo)peptides is also of particular interest for drug discovery.
To address these challenges, liquid chromatography (LC) and mass spectrometry (MS) methods were employed for comprehensive analysis in the presented work. The first publication of this dissertation was dedicated to the analysis of charge variants of mAbs, which are important quality attributes that can affect the safety and efficacy of the drug product. To characterize the charge variants, the mAbs were analysed at the intact protein level and at the subunit level after limited digestion and disulphide reduction using strong cation-exchange chromatography (SCX). The SCX method was systematically optimized using a design of experiments (DoE) approach to separate the maximum number of charge variants. The optimized SCX mobile phase, however, contains a high concentration of non-volatile salts, which is incompatible with MS detection, while MS analysis is essential for identifying the charge variants. To overcome this limitation, an online two-dimensional liquid chromatography (2D-LC) method was successfully developed that uses SCX in the first separation dimension and reversed-phase (RP) LC in the second dimension for de-salting prior to MS analysis. An ultra-short analysis time (≤1 min) in the second-dimension RP method was essential to establish a fully comprehensive 2D-LC analysis. For this purpose, a column comparison study was performed using a set of monolithic and superficially porous particle (SPP) columns, and the separation efficiency and analysis speed were investigated.
An even more comprehensive column comparison study focusing on kinetic performance was carried out in the second work of this dissertation. A set of 13 RP protein separation columns, including monolithic, SPP, and fully porous particle (FPP) columns, was investigated regarding their capability to separate peaks in the shortest possible time. It could be demonstrated that SPP columns with a pore size of about 400 Å and a thin porous shell provided the best performance, especially for large proteins such as mAbs.
Proteins themselves can also be potential drug targets, such as the tumour suppressor protein p53 studied in publication III. Intact protein LC-MS was successfully used to investigate the binding efficiency and specificity of covalent inhibitors.
Amino acids are the building blocks of proteins and peptides, and most of these amino acids are chiral. As the biological activity usually depends on the absolute configuration of the amino acids, enantioselective analysis is of utmost importance for structure elucidation and quality control. Therefore, one goal of the presented work was to develop a fast and comprehensive method to separate amino acids, their enantiomers, diastereomers, and constitutional isomers. This was achieved by derivatization using 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC) and subsequent analysis by enantioselective liquid chromatography ion mobility-mass spectrometry (LC-IM-MS). A very fast three-minute analysis method was developed and applied to the successful structure elucidation of a therapeutic peptide and a natural lipopeptide.
The absolute configuration of a tetrapeptide originating from the natural antimicrobial peptide-polyene epifadin could be determined using chiral LC-MS, which was crucial for the structure elucidation. In this work, all eight enantiomer peak pairs were successfully separated, and the separation of the diastereomers was optimized.
Backpropagation Beyond the Gradient
Automatic differentiation is a key enabler of deep learning: previously, practitioners were limited to models
for which they could manually compute derivatives. Now, they can create sophisticated models with almost
no restrictions and train them using first-order, i.e., gradient, information. Popular libraries like PyTorch
and TensorFlow compute this gradient efficiently, automatically, and conveniently with a single line of
code. Under the hood, reverse-mode automatic differentiation, or gradient backpropagation, powers the
gradient computation in these libraries. Their entire design centers around gradient backpropagation.
These frameworks are specialized around one specific task: computing the average gradient in a mini-batch.
This specialization often complicates the extraction of other information like higher-order statistical moments
of the gradient, or higher-order derivatives like the Hessian. It limits practitioners and researchers to methods
that rely on the gradient. Arguably, this hampers the field from exploring the potential of higher-order
information, and there is evidence that focusing solely on the gradient has not led to significant recent
advances in deep learning optimization.
To advance algorithmic research and inspire novel ideas, information beyond the batch-averaged gradient
must be made available at the same level of computational efficiency, automation, and convenience.
This thesis presents approaches to simplify experimentation with rich information beyond the gradient
by making it more readily accessible. We present an implementation of these ideas as an extension to the
backpropagation procedure in PyTorch. Using this newly accessible information, we demonstrate possible use
cases by (i) building a diagnostic tool that informs our understanding of neural network training, and
(ii) enabling novel methods to efficiently compute and approximate curvature information.
First, we extend gradient backpropagation for sequential feedforward models to Hessian backpropagation,
which enables computing approximate per-layer curvature. This perspective unifies recently proposed
block-diagonal curvature approximations. Like gradient backpropagation, the computation of these second-order
derivatives is modular, and therefore simple to automate and extend to new operations.
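The modular scheme can be sketched in a few lines of NumPy. This is our own toy illustration, not the thesis code: we assume a single linear layer followed by a ReLU under a squared-error loss, so each module backpropagates the Hessian via H_in = J^T H_out J (the ReLU's second-derivative term vanishes almost everywhere).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sequential model: x -> W x -> ReLU -> loss (all names illustrative).
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
z = W @ x                      # linear layer output
a = np.maximum(z, 0.0)         # ReLU activation

# Loss Hessian w.r.t. the ReLU output, e.g. identity for 0.5 * ||a - y||^2.
H_a = np.eye(3)

# Backpropagate the Hessian module by module: H_in = J^T H_out J.
J_relu = np.diag((z > 0).astype(float))   # ReLU Jacobian (a.e.)
H_z = J_relu.T @ H_a @ J_relu             # Hessian w.r.t. the linear output
J_lin = W                                 # Jacobian of z = W x w.r.t. x
H_x = J_lin.T @ H_z @ J_lin               # Hessian w.r.t. the layer input

# The backpropagated block stays symmetric positive semi-definite.
assert np.allclose(H_x, H_x.T)
assert np.all(np.linalg.eigvalsh(H_x) >= -1e-10)
```

Each module only needs its own Jacobian, which is what makes the procedure easy to automate and extend to new operations.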
Based on the insight that rich information beyond the gradient can be computed efficiently alongside the
gradient, we extend backpropagation in PyTorch with the BackPACK library. It provides efficient and
convenient access to statistical moments of the gradient and approximate curvature information, often at a
small overhead compared to computing just the gradient.
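What "statistical moments of the gradient" means can be illustrated without BackPACK's actual API. The NumPy sketch below is our own toy example with a scalar linear model: it forms the per-sample gradients that standard backpropagation averages away, and from them the batch mean and variance that such an extension exposes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mini-batch for a linear model f(x) = w . x with squared loss (toy data).
N, D = 8, 3
X = rng.standard_normal((N, D))
y = rng.standard_normal(N)
w = rng.standard_normal(D)

# Per-sample gradient of l_n = 0.5 * (w . x_n - y_n)^2 is (w . x_n - y_n) x_n.
residual = X @ w - y
per_sample_grads = residual[:, None] * X      # shape (N, D)

# Standard backprop only yields the batch mean; the per-sample view also
# gives higher moments such as the elementwise gradient variance.
grad_mean = per_sample_grads.mean(axis=0)
grad_var = per_sample_grads.var(axis=0)
```

The point of computing these quantities inside the backward pass is that the per-sample information is already present there and is normally discarded by the averaging step.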
Next, we showcase the utility of such information to better understand neural network training. We build
the Cockpit library that visualizes what is happening inside the model during training through various
instruments that rely on BackPACK's statistics. We show how Cockpit provides a meaningful statistical
summary report to the deep learning engineer to identify bugs in their machine learning pipeline, guide
hyperparameter tuning, and study deep learning phenomena.
Finally, we use BackPACK's extended automatic differentiation functionality to develop ViViT, an approach
to efficiently compute curvature information, in particular curvature noise. It uses the low-rank structure
of the generalized Gauss-Newton approximation to the Hessian and addresses shortcomings in existing
curvature approximations. Through monitoring curvature noise, we demonstrate how ViViT's information
helps in understanding the challenges of making second-order optimization methods work in practice.
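The low-rank structure mentioned above can be sketched as follows; this is our own toy NumPy illustration, not ViViT's code. If the generalized Gauss-Newton matrix factors as G = V V^T with a thin matrix V (columns built from per-sample, per-class Jacobian vectors), its nonzero eigenvalues can be read off the small Gram matrix V^T V instead of the full parameter-space matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

P, NC = 50, 6          # parameter count, rank budget (N samples x C classes)
V = rng.standard_normal((P, NC))   # stand-in for scaled Jacobian columns

G = V @ V.T            # GGN approximation of the Hessian, P x P, rank <= NC
gram = V.T @ V         # NC x NC Gram matrix, cheap to eigendecompose

evals_G = np.linalg.eigvalsh(G)[-NC:]    # top (nonzero) eigenvalues of G
evals_gram = np.linalg.eigvalsh(gram)    # all eigenvalues of the Gram matrix

# The nonzero spectra coincide, so curvature eigenvalues come at the cost
# of an NC x NC problem rather than a P x P one.
assert np.allclose(np.sort(evals_G), np.sort(evals_gram))
```

The same identity underlies per-sample decompositions of curvature, which is what makes curvature noise accessible.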
This work develops new tools to experiment more easily with higher-order information in complex deep
learning models. These tools have impacted works on Bayesian applications with Laplace approximations,
out-of-distribution generalization, differential privacy, and the design of automatic differentiation
systems. They constitute one important step towards developing and establishing more efficient deep
learning algorithms.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Machine learning applications in search algorithms for gravitational waves from compact binary mergers
Gravitational waves from compact binary mergers are now routinely observed by Earth-bound detectors. These observations enable exciting new science, as they have opened a new window to the Universe.
However, extracting gravitational-wave signals from the noisy detector data is a challenging problem. The most sensitive search algorithms for compact binary mergers use matched filtering, an algorithm that compares the data with a set of expected template signals. As detectors are upgraded and more sophisticated signal models become available, the number of required templates will increase, which can make some sources computationally prohibitive to search for. The computational cost is of particular concern when low-latency alerts should be issued to maximize the time for electromagnetic follow-up observations. One potential solution to reduce computational requirements that has started to be explored in the last decade is machine learning. However, different proposed deep learning searches target varying parameter spaces and use metrics that are not always comparable to existing literature. Consequently, a clear picture of the capabilities of machine learning searches has been sorely missing.
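The matched-filtering idea described above can be sketched in a few lines; this is our own toy illustration in white noise, with all waveforms and numbers invented for the example (real pipelines whiten the data, use large template banks, and normalize to a proper signal-to-noise ratio).

```python
import numpy as np

rng = np.random.default_rng(3)

# Unit-norm chirp template, windowed so it tapers to zero at the edges.
n = 4096
t = np.linspace(0.0, 1.0, n)
template = np.sin(2 * np.pi * (20.0 * t + 90.0 * t**2)) * np.hanning(n)
template /= np.linalg.norm(template)

# Hide a strong copy of the template in unit-variance noise.
shift = 1000
data = 10.0 * np.roll(template, shift) + rng.standard_normal(n)

# Matched filter = cross-correlation with the template, done in the
# frequency domain as in real searches; the peak estimates arrival time.
corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template)), n)
k_hat = int(np.argmax(np.abs(corr)))

assert abs(k_hat - shift) <= 25   # peak lands near the true offset
```

The cost of a real search scales with the number of such templates, which is why the template-bank growth discussed above drives the computational burden.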
In this thesis, we closely examine the sensitivity of various deep learning gravitational-wave search algorithms and introduce new methods to detect signals from binary black hole and binary neutron star mergers at previously untested statistical confidence levels. By using the sensitive distance as our core metric, we allow for a direct comparison of our algorithms to state-of-the-art search pipelines. As part of this thesis, we organized a global mock data challenge to create a benchmark for machine learning search algorithms targeting compact binaries. The tools developed in this thesis are made available to the wider community as open-source software.
Our studies show that, depending on the parameter space, deep learning gravitational-wave search algorithms are already competitive with current production search pipelines. We also find that strategies developed for traditional searches can be effectively adapted to their machine learning counterparts. In regions where matched filtering becomes computationally expensive, available deep learning algorithms are also limited in their capability. We find reduced sensitivity to long-duration signals compared to the excellent results for short-duration binary black hole signals.
When Deep Learning Meets Polyhedral Theory: A Survey
In the past decade, deep learning became the prevalent methodology for
predictive modeling thanks to the remarkable accuracy of deep neural networks
in tasks such as computer vision and natural language processing. Meanwhile,
the structure of neural networks converged back to simpler representations
based on piecewise constant and piecewise linear functions such as the
Rectified Linear Unit (ReLU), which became the most commonly used type of
activation function in neural networks. That made certain types of network
structure, such as the typical fully-connected feedforward neural network,
amenable to analysis through polyhedral theory
and to the application of methodologies such as Linear Programming (LP) and
Mixed-Integer Linear Programming (MILP) for a variety of purposes. In this
paper, we survey the main topics emerging from this fast-paced area of work,
which bring a fresh perspective to understanding neural networks in more detail
as well as to applying linear optimization techniques to train, verify, and
reduce the size of such networks.
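The polyhedral view can be made concrete with a toy example; this is our own sketch, not code from the survey. Within one ReLU activation region (one polyhedral piece of the input space), the network coincides with a single affine map determined by which hidden units are active.

```python
import numpy as np

rng = np.random.default_rng(4)

# Tiny ReLU network f(x) = W2 @ relu(W1 @ x + b1) + b2 (random toy weights).
W1, b1 = rng.standard_normal((5, 2)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((1, 5)), rng.standard_normal(1)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def pattern(x):
    # The set of active ReLUs identifies the polyhedral region containing x.
    return tuple((W1 @ x + b1 > 0).astype(int))

x = rng.standard_normal(2)
D = np.diag(pattern(x))      # active-unit mask as a diagonal matrix
A = W2 @ D @ W1              # linear part of the local affine map
c = W2 @ D @ b1 + b2         # offset of the local affine map

# Points sharing the activation pattern are mapped by the same affine map.
for _ in range(5):
    x_near = x + 1e-6 * rng.standard_normal(2)
    if pattern(x_near) == pattern(x):
        assert np.allclose(forward(x_near), A @ x_near + c)
```

This exact local linearity is what makes ReLU networks amenable to LP and MILP formulations: each activation pattern can be encoded with binary variables and linear constraints.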
Development, Implementation, and Optimization of a Modern, Subsonic/Supersonic Panel Method
In the early stages of aircraft design, engineers consider many different design concepts, examining the trade-offs between different component arrangements and sizes, thrust and power requirements, etc. Because so many different designs are considered, it is best in the early stages of design to use simulation tools that are fast; accuracy is secondary. A common simulation tool for early design and analysis is the panel method. Panel methods were first developed in the 1950s and 1960s with the advent of modern computers. Despite being reasonably accurate and very fast, their development was abandoned in the late 1980s in favor of more complex and accurate simulation methods. The panel methods developed in the 1980s are still in use by aircraft designers today because of their accuracy and speed. However, they are cumbersome to use and limited in applicability. The purpose of this work is to reexamine panel methods in a modern context. In particular, this work focuses on the application of panel methods to supersonic aircraft (a supersonic aircraft is one that flies faster than the speed of sound). Various aspects of the panel method, including the distributions of the unknown flow variables on the surface of the aircraft and efficiently solving for these unknowns, are discussed. Trade-offs between alternative formulations are examined and recommendations given. This work also serves to bring together, clarify, and condense much of the literature previously published regarding panel methods so as to assist future developers of panel methods
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged in the years since the appearance of the fourth book of DSmT in 2015, the second part of this volume is about selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
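As a minimal illustration of the PCR5 rule named above, the sketch below is our own toy implementation for a dichotomous frame {A, B} with masses on A, B, and their union AB; the general rule covers arbitrary frames and more than two sources.

```python
def pcr5_dichotomous(m1, m2):
    """Combine two mass functions over {A, B, AB} (AB = A union B) via PCR5.

    Toy sketch for a dichotomous frame only; not a general DSmT library.
    """
    # Conjunctive (non-conflicting) combination.
    mA = m1["A"] * m2["A"] + m1["A"] * m2["AB"] + m1["AB"] * m2["A"]
    mB = m1["B"] * m2["B"] + m1["B"] * m2["AB"] + m1["AB"] * m2["B"]
    mAB = m1["AB"] * m2["AB"]

    # PCR5's defining step: redistribute each partial conflict x*y back to
    # the two elements that produced it, proportionally to x and y.
    for x, y in ((m1["A"], m2["B"]), (m2["A"], m1["B"])):
        if x + y > 0:
            mA += x * x * y / (x + y)
            mB += y * y * x / (x + y)

    return {"A": mA, "B": mB, "AB": mAB}

m1 = {"A": 0.6, "B": 0.3, "AB": 0.1}
m2 = {"A": 0.2, "B": 0.5, "AB": 0.3}
out = pcr5_dichotomous(m1, m2)

# The redistributed conflict is conserved, so the result is again a valid
# mass function summing to one.
assert abs(sum(out.values()) - 1.0) < 1e-12
```

Unlike Dempster's rule, which renormalizes the conflict away globally, PCR5 sends each elementary conflict back only to the hypotheses involved in it.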