4,952 research outputs found
Using machine learning to predict pathogenicity of genomic variants throughout the human genome
More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many possible ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. It is necessary to investigate all of these processes in order to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity.
Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation and ultimately deployment of a selected model via genome-wide scoring of genomic variants.
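As a rough illustration, the steps above can be sketched as a toy pipeline. Everything here — the function names, the two features, and the threshold "model" — is an illustrative assumption, not the actual implementation:

```python
# Minimal sketch of a variant effect scoring workflow:
# annotate -> build features -> train -> score.
# All names and the toy threshold model are illustrative assumptions.

def annotate(variant):
    # In practice this step queries annotation resources
    # (conservation tracks, regulatory regions, ...); here it is faked.
    return {"conservation": variant["cons"], "in_coding": variant["coding"]}

def featurize(annotations, config):
    # Feature selection is driven by configuration, so new
    # annotations can be tested without changing the code.
    return [float(annotations[name]) for name in config["features"]]

def train(X, y):
    # Toy "model": a single threshold on the mean feature value,
    # standing in for a real classifier.
    scores = [sum(x) / len(x) for x in X]
    pos = [s for s, label in zip(scores, y) if label == 1]
    neg = [s for s, label in zip(scores, y) if label == 0]
    return (min(pos) + max(neg)) / 2  # midpoint threshold

def score(variant, config, threshold):
    x = featurize(annotate(variant), config)
    s = sum(x) / len(x)
    return s, s >= threshold

config = {"features": ["conservation", "in_coding"]}
training = [
    ({"cons": 0.9, "coding": 1}, 1),  # pathogenic-like proxy example
    ({"cons": 0.1, "coding": 0}, 0),  # benign-like proxy example
]
X = [featurize(annotate(v), config) for v, _ in training]
y = [label for _, label in training]
threshold = train(X, y)
s, is_pathogenic = score({"cons": 0.8, "coding": 1}, config, threshold)
print(round(s, 2), is_pathogenic)  # → 0.9 True
```

A real workflow would swap the threshold for a trained classifier and add hyperparameter optimization and validation between training and scoring, but the data flow is the same.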
The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the reference genome GRCh38. Further, I demonstrate the integration of deep neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency.
In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
Knowledge Distillation and Continual Learning for Optimized Deep Neural Networks
Over the past few years, deep learning (DL) has been achieving state-of-the-art performance on various human tasks such as speech generation, language translation, image segmentation, and object detection. While traditional machine learning models require hand-crafted features, deep learning algorithms can automatically extract discriminative features and learn complex knowledge from large datasets. This powerful learning ability makes deep learning models attractive to both academia and big corporations.
Despite their popularity, deep learning methods still have two main limitations: large memory consumption and catastrophic knowledge forgetting. First, DL algorithms use very deep neural networks (DNNs) with many billions of parameters, which have a large model size and slow inference speed. This restricts the application of DNNs in resource-constrained devices such as mobile phones and autonomous vehicles. Second, DNNs are known to suffer from catastrophic forgetting: when incrementally learning new tasks, the model's performance on old tasks drops significantly. The ability to accommodate new knowledge while retaining previously learned knowledge is called continual learning. Since the real-world environments in which a model operates are always evolving, a robust neural network needs this continual learning ability to adapt to new changes.
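Knowledge distillation, named in the title, is one response to the first limitation: a compact student network is trained to match the temperature-softened output distribution of a large teacher. A minimal sketch of the distillation loss (the logit values are made up for illustration):

```python
import numpy as np

# Sketch of a knowledge-distillation loss: KL divergence between the
# teacher's and student's temperature-softened output distributions.

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on softened distributions; the T**2
    # factor keeps gradient magnitudes comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T**2 * np.sum(p * np.log(p / q)))

teacher = np.array([3.0, 1.0, 0.2])   # large model's logits (assumed)
student = np.array([2.5, 1.2, 0.3])   # compact model's logits (assumed)
print(distillation_loss(teacher, student) >= 0.0)  # KL is non-negative
```

In training, this term is typically combined with the ordinary cross-entropy on hard labels, weighted by a mixing coefficient.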
Colour technologies for content production and distribution of broadcast content
Accurate colour reproduction has long been a priority driving the development of new colour imaging systems that maximise human perceptual plausibility. This thesis explores machine learning algorithms for colour processing to assist both content production and distribution. First, this research studies colourisation technologies with practical use cases in the restoration and processing of archived content. The research targets practical, deployable solutions, developing a cost-effective pipeline which integrates the activity of the producer into the processing workflow. In particular, a fully automatic image colourisation paradigm using Conditional GANs is proposed to improve the content generalisation and colourfulness of existing baselines. Moreover, a more conservative solution is considered by providing references to guide the system towards more accurate colour predictions. A fast end-to-end architecture is proposed to improve existing exemplar-based image colourisation methods while decreasing complexity and runtime. Finally, the proposed image-based methods are integrated into a video colourisation pipeline. A general framework is proposed to reduce temporal flickering or the propagation of errors when such methods are applied frame-to-frame. The proposed model is jointly trained to stabilise the input video and to cluster its frames with the aim of learning scene-specific modes. Second, this research explores colour processing technologies for content distribution, with the aim of effectively delivering the processed content to a broad audience. In particular, video compression is tackled by introducing a novel methodology for chroma intra prediction based on attention models. Although the proposed architecture helped to gain control over the reference samples and better understand the prediction process, the complexity of the underlying neural network significantly increased the encoding and decoding time.
Therefore, aiming at efficient deployment within the latest video coding standards, this work also focuses on simplifying the proposed architecture to obtain a more compact and explainable model.
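As a toy illustration of the attention idea — not the architecture proposed in the thesis — each target sample can attend over neighbouring reference samples, predicting its chroma as a similarity-weighted average of the reference chroma:

```python
import numpy as np

# Toy sketch of attention-based chroma intra prediction: the target's
# chroma is an attention-weighted average of reference chroma values,
# with weights derived from luma similarity. Values are invented for
# illustration; real codecs use learned attention over block contexts.

def attend_chroma(target_luma, ref_luma, ref_chroma, scale=0.1):
    # Similarity: negative squared luma distance, softmax-normalised.
    sim = -scale * (ref_luma - target_luma) ** 2
    w = np.exp(sim - sim.max())
    w = w / w.sum()
    return float(np.dot(w, ref_chroma))

ref_luma = np.array([50.0, 52.0, 80.0])      # reconstructed neighbours
ref_chroma = np.array([100.0, 102.0, 140.0])
pred = attend_chroma(51.0, ref_luma, ref_chroma)
print(round(pred, 1))  # → 101.0 (dominated by the similar-luma refs)
```

The attention weights make the contribution of each reference sample explicit, which is what gives such models a handle for interpretability.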
Ab Initio Language Teaching in British Higher Education
Drawing extensively on the expertise of teachers of German in universities across the UK, this volume offers an overview of recent trends, new pedagogical approaches and practical guidance for teaching at beginners level in the higher education classroom. At a time when entries for UK school exams in modern foreign languages are decreasing, this book serves the urgent need for research and guidance on ab initio learning and teaching in HE. Using the example of teaching German, it offers theoretical reflections on teaching ab initio and practice-oriented approaches that will be useful for teachers of both German and other languages in higher education.
The first chapters assess the role of ab initio provision within the wider context of modern languages departments and language centres. They are followed by sections on teaching methods and innovative approaches in the ab initio classroom, including chapters on the use of music, textbook evaluation, the effective use of a flipped classroom, and the contribution of language apps. Finally, the book focuses on the learner in the ab initio context and explores issues around autonomy and learner strengths. The whole builds into a theoretically grounded guide that sketches out perspectives for teaching and learning ab initio languages that will benefit current and future generations of students.
The role of exports in manufacturing pollution in sub–Saharan Africa and South Asia: towards a better trade-environment governance
Based on the gaps and challenges identified through case studies, the report proposes recommendations for Kenya, the United Republic of Tanzania, Bangladesh, and Pakistan along three main areas of research: (i) Environmental Law and Public Governance, (ii) Private Sector Governance, and (iii) Life Cycle Assessment. Even though they are at different stages, the four countries are building diversified economies by developing their industrial sectors. As exports play a significant role in their economic growth, these countries stand to gain from more sustainable manufacturing practices.
Traffic Prediction using Artificial Intelligence: Review of Recent Advances and Emerging Opportunities
Traffic prediction plays a crucial role in alleviating traffic congestion, which represents a critical problem globally, resulting in negative consequences such as lost hours of additional travel time and increased fuel consumption. Integrating emerging technologies into transportation systems provides opportunities for improving traffic prediction significantly and brings about new research problems. In order to lay the foundation for understanding the open research challenges in traffic prediction, this survey aims to provide a comprehensive overview of traffic prediction methodologies. Specifically, we focus on the recent advances and emerging research opportunities in Artificial Intelligence (AI)-based traffic prediction methods, due to their recent success and potential in traffic prediction, with an emphasis on multivariate traffic time series modeling. We first provide a list and explanation of the various data types and resources used in the literature. Next, the essential data preprocessing methods within the traffic prediction context are categorized, and the prediction methods and applications are subsequently summarized. Lastly, we present primary research challenges in traffic prediction and discuss some directions for future research.
Comment: Published in Transportation Research Part C: Emerging Technologies (TR_C), Volume 145, 202
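As a minimal illustration of multivariate traffic time-series modeling — far simpler than the AI methods the survey covers — a first-order vector autoregression (VAR(1)) can be fitted by least squares to synthetic readings from two coupled sensors:

```python
import numpy as np

# Toy sketch of multivariate traffic time-series prediction:
# a VAR(1) model x[t] = A @ x[t-1] + noise, fitted by ordinary
# least squares. The coupling matrix and data are synthetic.

rng = np.random.default_rng(0)
A_true = np.array([[0.8, 0.1],
                   [0.0, 0.9]])        # assumed sensor coupling
x = np.zeros((200, 2))
x[0] = [1.0, 1.0]
for t in range(1, 200):
    x[t] = A_true @ x[t - 1] + 0.01 * rng.standard_normal(2)

# Fit x[t] ~ A_hat @ x[t-1]: solve X_prev @ B = X_next, A_hat = B.T.
X_prev, X_next = x[:-1], x[1:]
B, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)
A_hat = B.T

forecast = A_hat @ x[-1]               # one-step-ahead prediction
print(np.round(A_hat, 2))
```

Real traffic predictors replace the linear map with recurrent, convolutional, or graph neural networks, but the one-step-ahead forecasting setup is the same.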
Using video games to study the acquisition and performance of psychomotor skills
Understanding how humans learn complex skills is a fundamental aim of cognitive science. Digital games offer promising opportunities to study cognitive factors associated with skill acquisition and performance, as they motivate longitudinal engagement and produce rich, multivariate data sets. By applying multivariate analysis techniques to data arising from gameplay, this thesis extends the literature on cognition as it pertains to psychomotor skill. We describe three studies that were conducted in this regard. In the first study, we analyzed the relationship between the temporal distribution of play instances and performance in a commercial digital game (League of Legends). Using clustering techniques and big data, we demonstrated that players who cram gameplay into short time frames ultimately perform worse than those who space the same number of games over longer periods. In the second study, we examined an experimental data set of participants who played Meta-T, a laboratory version of Tetris. Using Principal Components Analysis and regression techniques, we identified cognitive-behavioural markers of performance, such as action latency and motor coordination. We also applied Hidden Markov models (HMMs) to time series of these markers, showing that moment-to-moment dynamics in performance can be segmented into behavioural states related to latent psychological states. In the third study, we investigated the neural correlates of behavioural states during performance. Using simultaneous MEG and behavioural recordings of participants playing Tetris, we segmented time series of neural activity based on time stamps of behavioural epochs derived from HMMs. We compared behavioural epochs based on neural markers, showing that cognitive states derived from multivariate behavioural data correlate with neural activity in the alpha band. Taken together, this thesis advances our understanding of how digital game data can be used to study cognition and learning. It demonstrates the feasibility of recording high-density neuroimaging data during complex behavioural tasks and obtaining reliable measures of internal neuronal states during complex behaviour.
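As a toy illustration of the PCA step in the second study — with synthetic data and assumed variable names, not the Meta-T dataset — correlated behavioural markers can be collapsed into one composite performance component:

```python
import numpy as np

# Toy sketch: three behavioural markers (action latency, errors,
# coordination) all driven by one latent "skill" factor. PCA on the
# standardised markers recovers a single composite component.
# Data and loadings are synthetic assumptions.

rng = np.random.default_rng(1)
skill = rng.standard_normal(300)                 # latent performance
latency = -0.9 * skill + 0.2 * rng.standard_normal(300)
errors = -0.7 * skill + 0.3 * rng.standard_normal(300)
coordination = 0.8 * skill + 0.2 * rng.standard_normal(300)

X = np.column_stack([latency, errors, coordination])
X = (X - X.mean(axis=0)) / X.std(axis=0)         # standardise

# First principal component via SVD of the centred data matrix.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]                                  # component scores

explained = S[0] ** 2 / np.sum(S ** 2)
print(explained > 0.8)  # one component captures most shared variance
```

Segmenting such component time series into discrete states would then be the job of an HMM, as in the thesis.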