Splenic nerve bundle stimulation in acute and chronic inflammation
Splenic neurovascular bundle stimulation holds potential to treat acute and chronic inflammatory conditions. In the first part of the thesis, the available literature on the interactions between the immune system and the nervous system in the intestine is summarized. It is then shown that a specialized T cell, which can produce the neurotransmitter acetylcholine, resides in the gut and plays a dual role in the development of experimental colitis in mice. Furthermore, electrical splenic neurovascular bundle stimulation ameliorated the outcomes of colitis in mice and reversed colitis-induced transcriptomic changes in the gut. The second part of the thesis focuses on translating splenic neurovascular bundle stimulation to humans. It is shown that there are significant differences between murine and human innervation of the spleen. Using computed tomography (CT) images, the course and characteristics of the splenic artery were described. These data were used to develop a cuff electrode suitable for electrical stimulation of the splenic neurovascular bundle in humans. Finally, a pilot study in patients who underwent esophagectomy demonstrated that splenic neurovascular bundle stimulation in humans is safe and feasible.
Machine learning in solar physics
The application of machine learning in solar physics has the potential to greatly enhance our understanding of the complex processes that take place in the atmosphere of the Sun. Using techniques such as deep learning, we are now in a position to analyze large amounts of data from solar observations and identify patterns and trends that may not have been apparent with traditional methods. This can improve our understanding of explosive events like solar flares, which can strongly affect the Earth's environment; predicting such hazardous events is crucial for our technological society. Machine learning can also improve our understanding of the inner workings of the Sun itself by allowing us to go deeper into the data and to propose more complex models to explain them. Additionally, machine learning can help automate the analysis of solar data, reducing the need for manual labor and increasing the efficiency of research in this field.
Comment: 100 pages, 13 figures, 286 references; accepted for publication as a Living Review in Solar Physics (LRSP).
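As a hedged illustration of the kind of deep-learning pipeline such a review surveys, the sketch below runs one training step of a small convolutional classifier on stand-in "magnetogram patches"; the network name, input shape, and flare/quiet labels are illustrative assumptions, not taken from the review.

```python
# Minimal sketch (illustrative, not from the review): a small CNN that
# classifies solar image patches (e.g. magnetogram cutouts) as
# flare-producing vs. quiet. Input shape and class count are assumptions.
import torch
import torch.nn as nn

class SolarPatchNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One training step on a dummy batch of 8 single-channel 64x64 patches.
model = SolarPatchNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 64, 64)           # stand-in for magnetogram patches
y = torch.randint(0, 2, (8,))           # stand-in flare/quiet labels
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```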
Using machine learning to predict pathogenicity of genomic variants throughout the human genome
More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. All of these processes need to be investigated in order to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity.
Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation and ultimately deployment of a selected model via genome-wide scoring of genomic variants.
The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency.
In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
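As a hedged sketch only (CADD's actual features, training labels, and model differ), the steps named above, from training and hyperparameter optimization through validation to genome-wide scoring, map onto generic scikit-learn primitives like this:

```python
# Hedged sketch of the train/validate/score loop described above, using
# scikit-learn stand-ins; CADD's real pipeline and features differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))      # placeholder annotation matrix
y = rng.integers(0, 2, size=5000)    # placeholder proxy labels

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Hyperparameter optimization over the regularization strength.
search = GridSearchCV(
    make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    param_grid={"logisticregression__C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X_train, y_train)
print("validation accuracy:", search.score(X_val, y_val))

# "Deployment": score every candidate variant genome-wide (here: the
# placeholder matrix again) with the selected model.
scores = search.best_estimator_.predict_proba(X)[:, 1]
```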
Resilience and food security in a food systems context
This open access book compiles a series of chapters written by internationally recognized experts known for their in-depth but critical views on questions of resilience and food security. The book assesses rigorously and critically the contribution of the concept of resilience in advancing our understanding and ability to design and implement development interventions in relation to food security and humanitarian crises. For this, the book departs from the narrow beaten tracks of agriculture and trade, which have influenced the mainstream debate on food security for nearly 60 years, and adopts instead a wider, more holistic perspective, framed around food systems. The foundation for this new approach is the recognition that in the current post-globalization era, the food and nutritional security of the world's population no longer depends just on the performance of agriculture and policies on trade, but rather on the capacity of the entire (food) system to produce, process, transport and distribute safe, affordable and nutritious food for all, in ways that remain environmentally sustainable. In that context, adopting a food system perspective provides a more appropriate frame, as it encourages us to broaden conventional thinking and to acknowledge the systemic nature of the different processes and actors involved. This book is written for a broad audience, from academics to policymakers, students to practitioners.
Generalizable deep learning based medical image segmentation
Deep learning is revolutionizing medical image analysis and interpretation. However, its real-world deployment is often hindered by poor generalization to unseen domains (new imaging modalities and protocols). This lack of generalization ability is further exacerbated by the scarcity of labeled datasets for training: data collection and annotation can be prohibitively expensive in terms of labor and costs because label quality heavily depends on the expertise of radiologists. Additionally, unreliable predictions caused by poor model generalization pose safety risks to clinical downstream applications.
To mitigate labeling requirements, we investigate and develop a series of techniques to strengthen the generalization ability and the data efficiency of deep medical image computing models. We further improve model accountability and identify unreliable predictions made on out-of-domain data, by designing probability calibration techniques.
In the first and second parts of the thesis, we discuss two types of problems for handling unexpected domains: unsupervised domain adaptation and single-source domain generalization. For domain adaptation, we present a data-efficient technique that adapts a segmentation model trained on a labeled source domain (e.g., MRI) to an unlabeled target domain (e.g., CT), using a small number of unlabeled training images from the target domain.
For domain generalization, we focus on both image reconstruction and segmentation. For image reconstruction, we design a simple and effective domain generalization technique for cross-domain MRI reconstruction, by reusing image representations learned from natural image datasets. For image segmentation, we perform causal analysis of the challenging cross-domain image segmentation problem. Guided by this causal analysis we propose an effective data-augmentation-based generalization technique for single-source domains. The proposed method outperforms existing approaches on a large variety of cross-domain image segmentation scenarios.
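The abstract does not spell out the augmentation itself; purely as an illustrative stand-in, a single-source intensity augmentation of the kind commonly used to simulate cross-scanner appearance shifts might look like the sketch below (the transforms and parameter ranges are assumptions, not the thesis's method):

```python
# Illustrative single-source domain-generalization augmentation: random
# gamma and global intensity perturbations of a normalized image. This is
# a generic stand-in, not the specific method proposed in the thesis.
import numpy as np

def augment_intensity(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """img: float array scaled to [0, 1]."""
    gamma = rng.uniform(0.7, 1.5)     # simulate contrast changes
    shift = rng.uniform(-0.1, 0.1)    # simulate brightness offsets
    scale = rng.uniform(0.9, 1.1)
    out = np.clip(img, 0.0, 1.0) ** gamma
    return np.clip(scale * out + shift, 0.0, 1.0)

rng = np.random.default_rng(42)
image = rng.random((128, 128))        # stand-in for a normalized MRI slice
augmented = augment_intensity(image, rng)
```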
In the third part of the thesis, we present a novel self-supervised method for learning generic image representations that can be used to analyze unexpected objects of interest. The proposed method is designed together with a novel few-shot image segmentation framework that can segment unseen objects of interest by taking only a few labeled examples as references. Superior flexibility over conventional fully-supervised models is demonstrated by our few-shot framework: it does not require any fine-tuning on novel objects of interest. We further build a publicly available comprehensive evaluation environment for few-shot medical image segmentation.
In the fourth part of the thesis, we present a novel probability calibration model. To ensure safety in clinical settings, a deep model is expected to be able to alert human radiologists if it has low confidence, especially when confronted with out-of-domain data. To this end, we present a plug-and-play model to calibrate prediction probabilities on out-of-domain data. It brings the prediction probability in line with the actual accuracy on the test data. We evaluate our method on both artifact-corrupted images and images from an unforeseen MRI scanning protocol. Our method demonstrates improved calibration accuracy compared with the state-of-the-art method.
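The thesis's plug-and-play model is not reproduced here; for reference, the sketch below shows temperature scaling, the classic post-hoc calibration baseline that such methods are typically compared against: a single scalar is fitted on held-out logits so that confidence tracks accuracy.

```python
# Temperature scaling, a standard post-hoc calibration baseline (shown for
# reference; the thesis proposes its own plug-and-play calibration model).
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T: float, logits: np.ndarray, labels: np.ndarray) -> float:
    """Negative log-likelihood of labels under temperature-scaled softmax."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits: np.ndarray, labels: np.ndarray) -> float:
    res = minimize_scalar(nll, bounds=(0.05, 10.0), args=(logits, labels),
                          method="bounded")
    return res.x

# Usage on placeholder validation logits/labels.
rng = np.random.default_rng(0)
logits = rng.normal(size=(256, 4)) * 3.0      # overconfident stand-in logits
labels = rng.integers(0, 4, size=256)
T = fit_temperature(logits, labels)           # T > 1 softens the probabilities
```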
Finally, we summarize the major contributions and limitations of our work, and suggest future research directions that will benefit from the work in this thesis.
AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges
Artificial Intelligence for IT Operations (AIOps) aims to combine the power of AI with the big data generated by IT operations processes, particularly in cloud infrastructures, to provide actionable insights with the primary goal of maximizing availability. There is a wide variety of problems to address, and multiple use cases where AI capabilities can be leveraged to enhance operational efficiency. Here we provide a review of the AIOps vision, trends, challenges and opportunities, specifically focusing on the underlying AI techniques. We discuss in depth the key types of data emitted by IT operations activities, the scale of and challenges in analyzing them, and where they can be helpful. We categorize the key AIOps tasks as incident detection, failure prediction, root cause analysis and automated actions. We discuss the problem formulation for each task and then present a taxonomy of techniques to solve these problems. We also identify relatively underexplored topics, especially those that could significantly benefit from advances in the AI literature, and provide insights into the trends in this field and the key investment opportunities.
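As a hedged, minimal illustration of the incident detection task in this taxonomy (not a method from the review), a rolling z-score detector flags metric samples that deviate strongly from recent behavior; the window size and threshold below are arbitrary assumptions.

```python
# Minimal illustrative incident detector for a metric stream: flag samples
# whose rolling z-score exceeds a threshold. Real AIOps detectors are far
# more elaborate; window and threshold here are arbitrary assumptions.
import numpy as np

def detect_incidents(series: np.ndarray, window: int = 60,
                     threshold: float = 4.0) -> np.ndarray:
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        recent = series[t - window:t]
        std = recent.std()
        if std > 0 and abs(series[t] - recent.mean()) / std > threshold:
            flags[t] = True
    return flags

# Usage: a synthetic latency trace with an injected spike at t=500.
rng = np.random.default_rng(1)
latency = rng.normal(100.0, 5.0, size=1000)
latency[500] += 60.0                          # simulated incident
print(np.flatnonzero(detect_incidents(latency)))   # expected: [500]
```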
Large-Scale surveys for continuous gravitational waves: from data preparation to multi-stage hierarchical follow-ups
The gravitational wave event GW150914 was the first direct detection of gravitational waves, roughly 100 years after their prediction by Albert Einstein. The detection was a breakthrough, opening another channel through which to observe the Universe. Since then, over 90 detections of merging compact objects have been made, most of them coalescences of binary black holes of different masses; there have also been two black hole-neutron star mergers and two binary neutron star mergers. Another breakthrough was the first binary neutron star merger, GW170817, associated with a slew of electromagnetic observations, including a gamma-ray burst 1.7 s after the merger.
Compact binary coalescences are cataclysmic events in which the energy equivalent of multiple solar masses is emitted in gravitational waves within seconds. Still, their detection requires sophisticated measuring devices: kilometer-scale laser interferometers.
Another, not yet detected, form of gravitational radiation is continuous gravitational waves, emitted for example by fast-spinning neutron stars that are nonaxisymmetric with respect to their rotation axis. The gravitational wave amplitude at Earth is orders of magnitude weaker than that of compact binary coalescences, but, in the case of a nonaxisymmetric neutron star, the signal is emitted for as long as the star keeps spinning and sustains its deformation, which may be months to years.
The gravitational wave is mostly emitted at twice the rotational frequency, with a possible frequency evolution (spin-down) due to the energy carried away by gravitational waves as well as other braking mechanisms. This nearly monochromatic continuous wave is received by observers on Earth Doppler-modulated by Earth's orbital motion and spin.
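For concreteness, the standard frequency model implied by this description can be written as follows, where ν is the star's rotation frequency, v(t) the detector velocity due to Earth's orbit and spin, and n̂ the unit vector toward the source:

```latex
% Standard continuous-wave frequency model, stated for concreteness.
f(t) = f_0 + \dot{f}\,(t - t_0), \qquad f_0 \approx 2\nu, \qquad
f_{\mathrm{obs}}(t) = f(t)\left(1 + \frac{\vec{v}(t)\cdot\hat{n}}{c}\right)
```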
Although the waveform is seemingly simple, the detection problem for signals from unknown sources is very challenging. The all-sky search for unknown neutron stars in our Galaxy detailed in this work ran for several months on the volunteer distributed computing project Einstein@Home and the ATLAS supercomputer, taking tens of thousands of CPU-years in total to complete. In this work I describe the full-scale data analysis procedure, including data preparation, search set-up optimization, and post-processing of search results, whose design and implementation form the core of my doctoral research. I also present a number of observational results that demonstrate the real-world application of the methodologies that I designed.
Applications of Molecular Dynamics simulations for biomolecular systems and improvements to density-based clustering in the analysis
Molecular Dynamics simulations provide a powerful tool to study biomolecular systems in atomistic detail. The key to a better understanding of the function and behaviour of these molecules can often be found in their structural variability, and simulations can help expose this information, which is otherwise experimentally hard or impossible to attain. This work covers two application examples in which sampling and characterising the conformational ensemble revealed the structural basis needed to answer a topical research question. For the fungal toxin phalloidin, a small bicyclic peptide, the product ratios observed in different cyclisation reactions could be rationalised by assessing the conformational pre-organisation of precursor fragments. For the C-type lectin receptor langerin, conformational changes induced by different side-chain protonations could explain the pH dependency of the protein's calcium binding. The investigations were accompanied by the continued development of a density-based clustering protocol into a corresponding software package, which is broadly applicable to extracting conformational states from Molecular Dynamics data.
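The thesis develops its own dedicated clustering protocol and package, which is not reproduced here; purely as a generic illustration of density-based clustering on MD-style features, the sketch below applies scikit-learn's DBSCAN to (sin, cos)-embedded dihedral angles of two synthetic conformational states.

```python
# Generic illustration of density-based clustering on MD-style features
# (the thesis develops its own dedicated protocol and package; DBSCAN is
# used here only as a familiar stand-in). Each dihedral is embedded as
# (sin, cos) so that the periodicity of the angles is respected.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Stand-in for two conformational states: dihedrals near -60 and +120 deg.
angles = np.concatenate([
    rng.normal(-60.0, 10.0, size=(500, 2)),
    rng.normal(120.0, 10.0, size=(500, 2)),
])
features = np.concatenate(
    [np.sin(np.radians(angles)), np.cos(np.radians(angles))], axis=1
)

labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(features)
print("states found:", sorted(set(labels) - {-1}))   # noise is labeled -1
```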