On information captured by neural networks: connections with memorization and generalization
Despite the popularity and success of deep learning, there is limited
understanding of when, how, and why neural networks generalize to unseen
examples. Since learning can be seen as extracting information from data, we
formally study information captured by neural networks during training.
Specifically, we start by viewing learning in the presence of noisy labels from
an information-theoretic perspective and derive a learning algorithm that
limits label noise information in weights. We then define a notion of unique
information that an individual sample provides to the training of a deep
network, shedding some light on the behavior of neural networks on examples
that are atypical, ambiguous, or belong to underrepresented subpopulations. We
relate example informativeness to generalization by deriving nonvacuous
generalization gap bounds. Finally, by studying knowledge distillation, we
highlight the important role of data and label complexity in generalization.
Overall, our findings contribute to a deeper understanding of the mechanisms
underlying neural network generalization. Comment: PhD thesis.
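As a rough illustration of per-example informativeness, the sketch below uses a leave-one-out proxy (how much dropping one example shifts the model's predictions); this is an illustrative stand-in with made-up data, not the thesis's formal information-theoretic definition.

    # Toy leave-one-out proxy for how much a single training example shifts the
    # model's predictions -- an illustrative stand-in, not the thesis's formal
    # information-theoretic definition of unique sample information.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    full = LogisticRegression(max_iter=1000).fit(X, y)

    def prediction_shift(i):
        """Mean absolute change in predicted probabilities when example i is dropped."""
        mask = np.arange(len(X)) != i
        loo = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
        return np.mean(np.abs(full.predict_proba(X)[:, 1] - loo.predict_proba(X)[:, 1]))

    # Examples with a large shift play the role of the atypical or ambiguous ones.
    shifts = [prediction_shift(i) for i in range(20)]
    print(np.argsort(shifts)[::-1][:5])  # indices of the 5 most "informative" examples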
vONTSS: vMF based semi-supervised neural topic modeling with optimal transport
Recently, Neural Topic Models (NTM), inspired by variational autoencoders,
have attracted a lot of research interest; however, these methods have limited
applications in the real world due to the challenge of incorporating human
knowledge. This work presents a semi-supervised neural topic modeling method,
vONTSS, which uses von Mises-Fisher (vMF) based variational autoencoders and
optimal transport. When a few keywords per topic are provided, vONTSS in the
semi-supervised setting generates potential topics and optimizes topic-keyword
quality and topic classification. Experiments show that vONTSS outperforms
existing semi-supervised topic modeling methods in classification accuracy and
diversity. vONTSS also supports unsupervised topic modeling. Quantitative and
qualitative experiments show that vONTSS in the unsupervised setting
outperforms recent NTMs on multiple aspects: vONTSS discovers highly clustered
and coherent topics on benchmark datasets. It is also much faster than the
state-of-the-art weakly supervised text classification method while achieving
similar classification performance. We further prove the equivalence of optimal
transport loss and cross-entropy loss at the global minimum. Comment: 24 pages, 12 figures, ACL findings 2023.
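To make the optimal-transport ingredient concrete, here is a minimal entropy-regularized (Sinkhorn) sketch; the toy embeddings, cost matrix, and epsilon are illustrative assumptions, not the vONTSS loss itself.

    # Minimal entropy-regularized optimal transport (Sinkhorn) between topic and
    # keyword distributions. Embeddings, cost matrix, and epsilon are illustrative
    # assumptions, not the vONTSS loss itself.
    import numpy as np

    def sinkhorn_cost(a, b, C, eps=0.1, n_iters=200):
        """Transport cost between histograms a and b under cost matrix C."""
        K = np.exp(-C / eps)
        u = np.ones_like(a)
        for _ in range(n_iters):
            v = b / (K.T @ u)
            u = a / (K @ v)
        P = np.diag(u) @ K @ np.diag(v)   # transport plan
        return np.sum(P * C)

    rng = np.random.default_rng(0)
    topic_emb, kw_emb = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))
    # cost = 1 - cosine similarity between topic and keyword-group embeddings
    C = 1 - (topic_emb @ kw_emb.T) / (
        np.linalg.norm(topic_emb, axis=1, keepdims=True) * np.linalg.norm(kw_emb, axis=1))
    print(sinkhorn_cost(np.full(3, 1 / 3), np.full(3, 1 / 3), C))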
Improving Heterogeneous Graph Learning with Weighted Mixed-Curvature Product Manifold
In graph representation learning, it is important that the complex geometric
structure of the input graph, e.g. hidden relations among nodes, is well
captured in embedding space. However, standard Euclidean embedding spaces have
a limited capacity in representing graphs of varying structures. A promising
candidate for the faithful embedding of data with varying structure is a product
manifold of component spaces of different geometries (spherical, hyperbolic,
or Euclidean). In this paper, we take a closer look at the structure of product
manifold embedding spaces and argue that each component space in a product
contributes differently to expressing structures in the input graph, hence
should be weighted accordingly. This differs from previous works, which
consider the roles of different components equally. We then propose
WEIGHTED-PM, a data-driven method for learning embeddings of heterogeneous
graphs in weighted product manifolds. Our method utilizes the topological
information of the input graph to automatically determine the weight of each
component in product spaces. Extensive experiments on synthetic and real-world
graph datasets demonstrate that WEIGHTED-PM is capable of learning better graph
representations with lower geometric distortion from input data, and performs
better on multiple downstream tasks, such as word similarity learning, top-k
recommendation, and knowledge graph embedding.
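A minimal sketch of a weighted product-manifold distance, assuming one Euclidean, one spherical, and one Poincare-ball component with fixed illustrative weights; this is a simplified illustration, not the exact WEIGHTED-PM construction.

    # Weighted product-manifold distance: squared component distances (Euclidean,
    # spherical, Poincare ball) combined with per-component weights. The weighting
    # scheme and points are illustrative, not the exact WEIGHTED-PM formulation.
    import numpy as np

    def euclidean_dist(x, y):
        return np.linalg.norm(x - y)

    def spherical_dist(x, y):
        # great-circle distance between unit vectors
        return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

    def poincare_dist(x, y):
        # hyperbolic distance in the Poincare ball (requires ||x||, ||y|| < 1)
        num = 2 * np.sum((x - y) ** 2)
        den = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))
        return np.arccosh(1 + num / den)

    def product_dist(xs, ys, weights):
        parts = [euclidean_dist(xs[0], ys[0]),
                 spherical_dist(xs[1], ys[1]),
                 poincare_dist(xs[2], ys[2])]
        return np.sqrt(sum(w * d ** 2 for w, d in zip(weights, parts)))

    x = [np.array([1.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.1, 0.2])]
    y = [np.array([0.0, 1.0]), np.array([0.0, 1.0, 0.0]), np.array([-0.3, 0.1])]
    print(product_dist(x, y, weights=[0.5, 0.3, 0.2]))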
Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames
Automatically discovering composable abstractions from raw perceptual data is
a long-standing challenge in machine learning. Recent slot-based neural
networks that learn about objects in a self-supervised manner have made
exciting progress in this direction. However, they typically fall short at
adequately capturing spatial symmetries present in the visual world, which
leads to sample inefficiency, such as when entangling object appearance and
pose. In this paper, we present a simple yet highly effective method for
incorporating spatial symmetries via slot-centric reference frames. We
incorporate equivariance to per-object pose transformations into the attention
and generation mechanism of Slot Attention by translating, scaling, and
rotating position encodings. These changes incur little computational
overhead, are easy to implement, and can yield large gains in data
efficiency and overall improvements to object discovery. We evaluate our method
on a wide range of synthetic object discovery benchmarks, namely CLEVR,
Tetrominoes, CLEVRTex, Objects Room and MultiShapeNet, and show promising
improvements on the challenging real-world Waymo Open dataset. Comment: Accepted at ICML 2023. Project page: https://invariantsa.github.io
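A small sketch of the slot-centric reference-frame idea: absolute position encodings are translated and scaled into each slot's frame before entering attention. Grid size and the slot pose values below are illustrative assumptions, not the published module.

    # Slot-centric position encodings: the absolute position grid is translated and
    # scaled into each slot's reference frame before attention uses it. Grid size
    # and the slot pose values are illustrative assumptions.
    import numpy as np

    H = W = 8
    grid = np.stack(np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H)), axis=-1)  # (H, W, 2)

    def relative_grid(slot_pos, slot_scale):
        """Map absolute positions into a slot's reference frame."""
        return (grid - slot_pos) / slot_scale   # per-slot translation and scaling

    # two hypothetical slots with different positions and scales
    slots = [(np.array([0.5, -0.2]), np.array([0.4, 0.4])),
             (np.array([-0.3, 0.1]), np.array([0.8, 0.8]))]
    rel = np.stack([relative_grid(p, s) for p, s in slots])   # (num_slots, H, W, 2)
    print(rel.shape)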
Is attention all you need in medical image analysis? A review
Medical imaging is a key component in clinical diagnosis, treatment planning
and clinical trial design, accounting for almost 90% of all healthcare data.
CNNs have achieved performance gains in medical image analysis (MIA) in recent
years. CNNs can efficiently model local pixel interactions and be trained on
small-scale MI data. The main disadvantage of typical CNN models is that they
ignore global pixel relationships within images, which limits their
generalisation ability to understand out-of-distribution data with different
'global' information. The recent progress of Artificial Intelligence gave rise
to Transformers, which can learn global relationships from data. However, full
Transformer models need to be trained on large-scale data and involve
tremendous computational complexity. Attention and Transformer components
(Transf/Attention), which retain the ability to model global
relationships, have been proposed as lighter alternatives to full Transformers.
Recently, there has been an increasing trend to cross-pollinate complementary
local-global properties from CNN and Transf/Attention architectures, which led
to a new era of hybrid models. The past years have witnessed substantial growth
in hybrid CNN-Transf/Attention models across diverse MIA problems. In this
systematic review, we survey existing hybrid CNN-Transf/Attention models,
review and unravel key architectural designs, analyse breakthroughs, and
evaluate current and future opportunities as well as challenges. We also
introduce a comprehensive framework for analysing generalisation opportunities
of scientific and clinical impact, based on which new data-driven domain
generalisation and adaptation methods can be stimulated.
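For intuition only, a minimal hybrid block combining a convolution (local pixel interactions) with self-attention over the flattened feature map (global relationships); the channel and head counts are arbitrary, and the block is not taken from any model surveyed in the review.

    # Illustrative hybrid block: a convolution models local pixel interactions and
    # self-attention over the flattened feature map models global relationships.
    # Channel and head counts are arbitrary; this is not a model from the review.
    import torch
    import torch.nn as nn

    class HybridBlock(nn.Module):
        def __init__(self, channels=32, heads=4):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)   # local
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)  # global
            self.norm = nn.LayerNorm(channels)

        def forward(self, x):                          # x: (B, C, H, W)
            x = torch.relu(self.conv(x))
            b, c, h, w = x.shape
            tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C)
            attn_out, _ = self.attn(tokens, tokens, tokens)
            tokens = self.norm(tokens + attn_out)      # residual + layer norm
            return tokens.transpose(1, 2).reshape(b, c, h, w)

    print(HybridBlock()(torch.randn(1, 32, 16, 16)).shape)   # torch.Size([1, 32, 16, 16])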
Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives
Deep learning has demonstrated remarkable performance across various tasks in
medical imaging. However, these approaches primarily focus on supervised
learning, assuming that the training and testing data are drawn from the same
distribution. Unfortunately, this assumption may not always hold true in
practice. To address these issues, unsupervised domain adaptation (UDA)
techniques have been developed to transfer knowledge from a labeled domain to a
related but unlabeled domain. In recent years, significant advancements have
been made in UDA, resulting in a wide range of methodologies, including feature
alignment, image translation, self-supervision, and disentangled representation
methods, among others. In this paper, we provide a comprehensive literature
review of recent deep UDA approaches in medical imaging from a technical
perspective. Specifically, we categorize current UDA research in medical
imaging into six groups and further divide them into finer subcategories based
on the different tasks they perform. We also discuss the respective datasets
used in the studies to assess the divergence between the different domains.
Finally, we discuss emerging areas and provide insights and discussions on
future research directions to conclude this survey. Comment: Under Review.
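As a concrete example of one surveyed family, feature alignment, the sketch below computes a maximum mean discrepancy (MMD) penalty between source and target features; the kernel bandwidth and the toy features are illustrative assumptions.

    # Feature alignment with a maximum mean discrepancy (MMD) penalty between
    # source and target features. Kernel bandwidth and toy features are
    # illustrative assumptions.
    import numpy as np

    def rbf_mmd2(X, Y, sigma=1.0):
        """Squared MMD between sample sets X and Y under an RBF kernel."""
        def k(A, B):
            d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return np.exp(-d2 / (2 * sigma**2))
        return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

    rng = np.random.default_rng(0)
    source = rng.normal(0.0, 1.0, size=(100, 16))   # e.g. features from a labeled scanner
    target = rng.normal(0.5, 1.0, size=(100, 16))   # e.g. features from an unlabeled scanner
    print(rbf_mmd2(source, target))   # added to the task loss to pull the domains together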
Automatic Feature Engineering for Time Series Classification: Evaluation and Discussion
Time Series Classification (TSC) has received much attention in the past two
decades and is still a crucial and challenging problem in data science and
knowledge engineering. Indeed, along with the increasing availability of time
series data, many TSC algorithms have been suggested by the research community
in the literature. Besides state-of-the-art methods based on similarity
measures, intervals, shapelets, dictionaries, deep learning methods or hybrid
ensemble methods, several tools for the unsupervised extraction of informative
summary statistics, aka features, from time series have been designed in recent
years. Although these tools were originally designed for descriptive analysis and
visualization of time series with informative and interpretable features, very few
of them have been benchmarked for TSC problems and compared with
state-of-the-art TSC algorithms in terms of predictive performance. In this
article, we aim to fill this gap and propose a simple TSC process to
evaluate the potential predictive performance of the feature sets obtained with
existing feature engineering tools. Thus, we present an empirical study of 11
feature engineering tools combined with 9 supervised classifiers over 112 time
series data sets. The analysis of the results of more than 10,000 learning
experiments indicates that feature-based methods perform as accurately as
current state-of-the-art TSC algorithms, and thus should rightfully be
considered further in the TSC literature.
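A stripped-down version of such a feature-based TSC pipeline: hand-picked summary statistics per series fed to an off-the-shelf classifier. The feature set and classifier are stand-ins for the 11 tools and 9 classifiers actually benchmarked, and the data are synthetic.

    # Stripped-down feature-based TSC pipeline: unsupervised summary statistics per
    # series, then an off-the-shelf classifier. The feature set and classifier are
    # stand-ins for the 11 tools and 9 classifiers benchmarked in the article.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def summary_features(series):
        diffs = np.diff(series)
        return [series.mean(), series.std(), series.min(), series.max(),
                diffs.mean(), diffs.std(), np.argmax(series) / len(series)]

    rng = np.random.default_rng(0)
    X_raw = rng.normal(size=(120, 150))             # 120 toy series of length 150
    X_raw[60:] += np.sin(np.linspace(0, 6, 150))    # second class carries a trend
    y = np.repeat([0, 1], 60)

    X = np.array([summary_features(s) for s in X_raw])
    print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean())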
Using machine learning to predict pathogenicity of genomic variants throughout the human genome
More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many possible ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. It is necessary to investigate all of these processes in order to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity.
Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation and ultimately deployment of a selected model via genome-wide scoring of genomic variants.
The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data that is based on variants selected by allele frequency.
In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
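A toy sketch of the workflow's final steps: fit a classifier on annotated variants with a small hyperparameter search, then score new variants. The feature columns, labels, and model choice are illustrative assumptions, not CADD's actual annotations or training data.

    # Skeleton of the final workflow steps: fit a classifier on annotated variants
    # with a small hyperparameter search, then score new variants. Features, toy
    # labels, and the model are illustrative, not CADD's annotations or data.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 5))   # e.g. conservation, splice, regulatory annotations
    y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(size=1000) > 0).astype(int)

    # hyperparameter optimization over a small grid
    search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                          {"n_estimators": [50, 100], "max_depth": [2, 3]}, cv=3)
    search.fit(X_train, y_train)

    X_new = rng.normal(size=(10, 5))                          # annotations of variants to score
    print(np.round(search.predict_proba(X_new)[:, 1], 3))     # higher = more pathogenic-looking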
Knowledge Distillation and Continual Learning for Optimized Deep Neural Networks
Over the past few years, deep learning (DL) has achieved state-of-the-art performance on various human tasks such as speech generation, language translation, image segmentation, and object detection. While traditional machine learning models require hand-crafted features, deep learning algorithms can automatically extract discriminative features and learn complex knowledge from large datasets. This powerful learning ability makes deep learning models attractive to both academia and big corporations.
Despite their popularity, deep learning methods still have two main limitations: large memory consumption and catastrophic knowledge forgetting. First, DL algorithms use very deep neural networks (DNNs) with billions of parameters, which have large model sizes and slow inference speeds. This restricts the application of DNNs in resource-constrained devices such as mobile phones and autonomous vehicles. Second, DNNs are known to suffer from catastrophic forgetting: when incrementally learning new tasks, the model's performance on old tasks drops significantly. The ability to accommodate new knowledge while retaining previously learned knowledge is called continual learning. Since the real-world environments in which the model operates are always evolving, a robust neural network needs this continual learning ability to adapt to new changes.
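For reference, a minimal knowledge-distillation loss in the usual softened-logits form; the temperature and mixing weight are illustrative choices, not values from the thesis.

    # Standard softened-logits knowledge-distillation loss; temperature and mixing
    # weight are illustrative choices, not values from the thesis.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * (T * T)   # match softened teacher outputs
        hard = F.cross_entropy(student_logits, labels)     # keep fitting the true labels
        return alpha * soft + (1 - alpha) * hard

    student = torch.randn(8, 10, requires_grad=True)
    teacher = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    print(distillation_loss(student, teacher, labels))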
Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
From the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially available software, were developed and refined to support the design of MPDSMs under fracture conditions. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are not experts in advanced solid mechanics or process-tailored materials was developed from the results of this project.
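As a toy illustration of the element-layout idea (not the dissertation's refined generator), the following snippet emits G-code for one raster-filled square layer, one deposition trace per row; all numeric parameters are made up.

    # Toy raster-fill layer emitted as G-code, one back-and-forth deposition trace
    # per row. Feed rate, extrusion factor, and layer size are made-up values, not
    # the dissertation's refined generator.
    def raster_layer(size=20.0, spacing=1.0, z=0.2, feed=1200, e_per_mm=0.05):
        lines, e, y = [f"G1 Z{z:.2f} F{feed}"], 0.0, 0.0
        left_to_right = True
        while y <= size:
            x0, x1 = (0.0, size) if left_to_right else (size, 0.0)
            lines.append(f"G0 X{x0:.2f} Y{y:.2f}")                    # travel to row start
            e += size * e_per_mm                                      # extrusion for one trace
            lines.append(f"G1 X{x1:.2f} Y{y:.2f} E{e:.3f} F{feed}")   # deposit one element
            y += spacing
            left_to_right = not left_to_right
        return "\n".join(lines)

    print(raster_layer()[:200])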