Robust Inference of Manifold Density and Geometry by Doubly Stochastic Scaling
The Gaussian kernel and its traditional normalizations (e.g., row-stochastic) are popular approaches for assessing similarities between data points, commonly used for manifold learning and clustering, as well as supervised and semi-supervised learning on graphs. In many practical situations, the data can be corrupted by noise that prevents traditional affinity matrices from correctly assessing similarities, especially if the noise magnitudes vary considerably across the data, e.g., under heteroskedasticity or outliers. An alternative approach that provides more stable behavior under noise is the doubly stochastic normalization of the Gaussian kernel. In this work, we investigate this normalization in a setting where points are sampled from an unknown density on a low-dimensional manifold embedded in high-dimensional space and corrupted by possibly strong, non-identically distributed, sub-Gaussian noise. We establish the pointwise concentration of the doubly stochastic affinity matrix and its scaling factors around certain population forms. We then utilize these results to develop several tools for robust inference. First, we derive a robust density estimator that can substantially outperform the standard kernel density estimator under high-dimensional noise. Second, we provide estimators for the pointwise noise magnitudes, the pointwise signal magnitudes, and the pairwise Euclidean distances between clean data points. Lastly, we derive robust graph Laplacian normalizations that approximate popular manifold Laplacians, including the Laplace-Beltrami operator, showing that the local geometry of the manifold can be recovered under high-dimensional noise. We exemplify our results in simulations and on real single-cell RNA-sequencing data; in the latter, we show that our proposed normalizations are robust to technical variability associated with different cell types.
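For intuition, the doubly stochastic normalization discussed above can be computed by symmetric Sinkhorn scaling of the Gaussian kernel matrix. The NumPy sketch below is illustrative only; the function names and the damped fixed-point update are our own choices, not the paper's implementation.

```python
import numpy as np

def gaussian_kernel(X, bandwidth):
    """Dense Gaussian affinity K[i, j] = exp(-||x_i - x_j||^2 / (2 * bandwidth^2))."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def symmetric_sinkhorn(K, n_iters=1000, tol=1e-10):
    """Find positive scaling factors d so that W = diag(d) K diag(d) has unit
    row and column sums (doubly stochastic). Returns W and d."""
    d = np.ones(K.shape[0])
    for _ in range(n_iters):
        d_prev, d = d, np.sqrt(d / (K @ d))  # damped fixed point for d_i * (K d)_i = 1
        if np.max(np.abs(d - d_prev)) < tol:
            break
    return K * np.outer(d, d), d
```

Under heteroskedastic noise, the scaling factors d absorb per-point noise effects, which is roughly the intuition behind the pointwise noise- and signal-magnitude estimators described in the abstract.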
Learning to See with Minimal Human Supervision
Deep learning has significantly advanced computer vision in the past decade, paving the way for practical applications such as facial recognition and autonomous driving. However, current techniques depend heavily on human supervision, limiting their broader deployment. This dissertation tackles this problem by introducing algorithms and theories to minimize human supervision in three key areas: data, annotations, and neural network architectures, in the context of various visual understanding tasks such as object detection, image restoration, and 3D generation.
First, we present self-supervised learning algorithms to handle in-the-wild images and videos that traditionally require time-consuming manual curation and labeling. We demonstrate that when a deep network is trained to be invariant to geometric and photometric transformations, representations from its intermediate layers are highly predictive of object semantic parts such as eyes and noses. This insight offers a simple unsupervised learning framework that significantly improves the efficiency and accuracy of few-shot landmark prediction and matching. We then present a technique for learning single-view 3D object pose estimation models by utilizing in-the-wild videos where objects turn (e.g., cars in roundabouts). This technique achieves performance competitive with the existing state of the art without requiring any manual labels during training. We also contribute an Accidental Turntables Dataset, containing a challenging set of 41,212 images of cars with cluttered backgrounds, motion blur, and illumination changes, which serves as a benchmark for 3D pose estimation.
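As a toy illustration of the invariance idea (not the dissertation's actual objective), one can penalize changes in intermediate features under a photometric transformation and, after undoing the warp, under a geometric one. In this PyTorch sketch, `model` is assumed to be any convolutional network returning a spatial feature map:

```python
import torch
import torch.nn.functional as F

def invariance_loss(model, images):
    """Toy self-supervised objective: features should be unchanged by photometric
    jitter and should flip along with a horizontally flipped input."""
    # Photometric transform: random brightness scaling per image.
    jitter = torch.empty(images.size(0), 1, 1, 1).uniform_(0.8, 1.2)
    jittered = (images * jitter).clamp(0.0, 1.0)
    # Geometric transform: horizontal flip along the width dimension.
    flipped = torch.flip(images, dims=[-1])

    feats = model(images)  # (B, C, H, W) intermediate feature map
    loss_photometric = F.mse_loss(model(jittered), feats)
    loss_geometric = F.mse_loss(torch.flip(model(flipped), dims=[-1]), feats)
    return loss_photometric + loss_geometric
```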
Second, we address variations in labeling styles across different annotators, which lead to a type of noisy label referred to as heterogeneous labels. This variability in human annotation can cause subpar performance during both training and testing. To mitigate this, we develop a framework that models the labeling styles of individual annotators, reducing the impact of annotation variability and improving the performance of standard object detection models. We also apply this framework to analyze ecological data, which are often collected opportunistically across different case studies without consistent annotation guidelines. Through this application, we obtain several insights into large-scale bird migration behaviors and their relationship to climate change.
Our next study explores the challenges of designing neural networks, an area that still lacks a comprehensive theoretical understanding. By linking deep neural networks with Gaussian processes, we propose a novel Bayesian interpretation of the deep image prior, which parameterizes a natural image as the output of a convolutional network with random parameters and random input. This approach offers valuable insights for optimizing the design of neural networks for various image restoration tasks.
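The deep image prior itself is compact enough to state in code. Below is a minimal PyTorch sketch of the generic method (not the dissertation's Gaussian-process analysis); `net` is an assumed U-Net-like module mapping a 32-channel noise code to an image the size of `noisy_image`:

```python
import torch

def deep_image_prior_restore(net, noisy_image, n_steps=2000, lr=1e-2):
    """Fit a randomly initialized conv net, driven by a fixed random input, to a
    single corrupted image; early-stopped outputs favor natural image structure."""
    z = torch.randn(1, 32, noisy_image.shape[-2], noisy_image.shape[-1])  # fixed input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = torch.mean((net(z) - noisy_image) ** 2)  # fit the corrupted observation
        loss.backward()
        opt.step()
    return net(z).detach()  # in practice, stop before the net starts fitting the noise
```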
Lastly, we introduce several machine-learning techniques to reconstruct and edit 3D shapes from 2D images with minimal human effort. We first present a generic multi-modal generative model that bridges 2D images and 3D shapes via a shared latent space, and demonstrate its application to versatile 3D shape generation and manipulation tasks. Additionally, we develop a framework for the joint estimation of a 3D neural scene representation and camera poses. This approach outperforms prior work and, unlike the baselines, operates in the general SE(3) camera pose setting. The results also indicate that this method can complement classical structure-from-motion (SfM) pipelines, as it compares favorably to SfM on low-texture and low-resolution images.
Towards Robust Representation Learning and Beyond
Deep networks have reshaped computer vision research in recent years. Fueled by powerful computational resources and massive amounts of data, deep networks now dominate a wide range of visual benchmarks. Nonetheless, these success stories come with bitterness: an increasing number of studies has shown the limitations of deep networks under certain testing conditions, such as small input changes or occlusion. These failures not only raise safety and reliability concerns about deploying deep networks in the real world, but also demonstrate that the computations performed by current deep networks differ dramatically from those of human brains.
In this dissertation, we focus on investigating and tackling a particular yet challenging weakness of deep networks: their vulnerability to adversarial examples. The first part of this thesis argues that this vulnerability is a much more severe issue than previously thought: the threats from adversarial examples are ubiquitous and catastrophic. We then discuss how to equip deep networks with robust representations for defending against adversarial examples. We approach the solution from the perspective of neural architecture design, and show that incorporating architectural elements such as feature-level denoisers or smooth activation functions can effectively boost model robustness. The last part of this thesis rethinks the value of adversarial examples: rather than treating them as a threat to deep networks, we show that adversarial examples can help deep networks improve their generalization ability if feature representations are properly disentangled during learning.
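To make this vulnerability concrete, the classic fast gradient sign method (FGSM) of Goodfellow et al. crafts an adversarial example in a single gradient step. The PyTorch sketch below is illustrative only; the thesis studies stronger attacks and architectural defenses:

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=8 / 255):
    """One-step adversarial perturbation: move each pixel by +/- eps in the
    direction that increases the loss for the true label y."""
    x = x.detach().clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # sign of the input gradient
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```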
Review: Deep learning in electron microscopy
Deep learning is transforming most areas of science and technology, including electron microscopy. This review offers a practical perspective aimed at developers with limited familiarity with deep learning. For context, we review popular applications of deep learning in electron microscopy. We then discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes. Next, we review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.
Toward more scalable structured models
While deep learning has achieved huge success across disciplines, from computer vision and natural language processing to computational biology and the physical sciences, training such models is known to require significant amounts of data. One possible reason is that the structural properties of the data and the problem are not modeled explicitly. Effectively exploiting this structure can help build more efficient and better-performing models. The complexity of the structure requires models with sufficient representational capacity. However, increased structured-model complexity usually leads to increased inference complexity and trickier learning procedures. Moreover, making progress on real-world applications requires learning paradigms that circumvent the need to evaluate the partition function and that scale to high-dimensional datasets.
In this dissertation, we develop more scalable structured models, i.e., models with inference procedures that can efficiently handle complex dependencies between variables, and learning algorithms that operate in high-dimensional spaces. First, we extend Gaussian conditional random fields, traditionally unimodal and capturing only pairwise variable interactions, to model multi-modal distributions with high-order dependencies between output-space variables, while enabling exact inference and incorporating external constraints at runtime. We show compelling results on the task of diverse gray-image colorization. Then, we introduce a reinforcement-learning-based method for solving inference in models with general higher-order potentials, which are intractable with traditional techniques, and show promising results on semantic segmentation. Finally, we propose a new loss, max-sliced score matching (MSSM), for learning structured models at scale. We assess our method on estimating densities and scores for implicit distributions in variational and Wasserstein auto-encoders.
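For reference, the generic sliced score matching objective (Song et al.) that MSSM builds on can be written in a few lines; the max-sliced variant proposed in the dissertation instead optimizes over the projection direction rather than averaging random ones. In this PyTorch sketch, `score_net(x)` is assumed to return the model score with the same shape as `x`:

```python
import torch

def sliced_score_matching_loss(score_net, x, n_projections=1):
    """Generic sliced score matching: project the score and its Jacobian onto
    random directions so the trace term needs only one extra backward pass."""
    x = x.detach().clone().requires_grad_(True)
    s = score_net(x)  # (B, D) model score
    total = 0.0
    for _ in range(n_projections):
        v = torch.randn_like(s)  # random slicing direction
        sv = (s * v).sum()       # batch sum: one grad call yields per-sample grads of v^T s
        grad_sv = torch.autograd.grad(sv, x, create_graph=True)[0]
        total = total + (grad_sv * v).sum(dim=1) + 0.5 * (s * v).sum(dim=1) ** 2
    return (total / n_projections).mean()
```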