DeepSphere: Efficient spherical Convolutional Neural Network with HEALPix sampling for cosmological applications
Convolutional Neural Networks (CNNs) are a cornerstone of the Deep Learning
toolbox and have led to many breakthroughs in Artificial Intelligence. These
networks have mostly been developed for regular Euclidean domains such as those
supporting images, audio, or video. Because of their success, CNN-based methods
are becoming increasingly popular in Cosmology. Cosmological data often comes
as spherical maps, which makes the use of traditional CNNs more complicated.
The commonly used pixelization scheme for spherical maps is the Hierarchical
Equal Area isoLatitude Pixelisation (HEALPix). We present a spherical CNN for
analysis of full and partial HEALPix maps, which we call DeepSphere. The
spherical CNN is constructed by representing the sphere as a graph. Graphs are
versatile data structures that can act as a discrete representation of a
continuous manifold. Using the graph-based representation, we define many of
the standard CNN operations, such as convolution and pooling. With filters
restricted to being radial, our convolutions are equivariant to rotation on the
sphere, and DeepSphere can be made invariant or equivariant to rotation. This
way, DeepSphere is a special case of a graph CNN, tailored to the HEALPix
sampling of the sphere. This approach is computationally more efficient than
using spherical harmonics to perform convolutions. We demonstrate the method on
a classification problem of weak lensing mass maps from two cosmological models
and compare the performance of the CNN with that of two baseline classifiers.
The results show that the performance of DeepSphere is always superior or equal
to both of these baselines. For high noise levels and for data covering only a
small fraction of the sphere, DeepSphere typically achieves 10% better
classification accuracy than those baselines. Finally, we show how learned
filters can be visualized to introspect the neural network.
Comment: arXiv admin note: text overlap with arXiv:astro-ph/0409513 by other
authors
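The graph construction and radial filtering described above can be sketched in a few lines. The snippet below is a minimal illustration, not DeepSphere itself: it uses a ring of pixels as a toy stand-in for the HEALPix neighbor graph (a real DeepSphere graph connects each HEALPix pixel to its neighbors on the sphere), and applies a radial filter as a Chebyshev polynomial of the graph Laplacian, which is the standard way such graph convolutions avoid spherical-harmonic transforms.

```python
import numpy as np

# Toy stand-in for the HEALPix neighbor graph: a ring of n "pixels".
# (Assumption for the sketch; DeepSphere builds the graph from HEALPix
# pixel adjacency on the sphere.)
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A               # combinatorial graph Laplacian
lmax = np.linalg.eigvalsh(L).max()
L_hat = 2.0 * L / lmax - np.eye(n)           # rescale spectrum into [-1, 1]

def chebyshev_conv(L_hat, x, theta):
    """Radial graph convolution: apply a degree-(K-1) Chebyshev polynomial
    of the rescaled Laplacian to the signal x. theta holds the K filter
    coefficients; since the filter is a function of the Laplacian alone,
    it is isotropic (radial)."""
    Tx = [x, L_hat @ x]                      # T_0 x and T_1 x
    for _ in range(2, len(theta)):
        # Chebyshev recurrence: T_k = 2 L T_{k-1} - T_{k-2}
        Tx.append(2.0 * (L_hat @ Tx[-1]) - Tx[-2])
    return sum(t * T for t, T in zip(theta, Tx))

x = np.random.default_rng(0).standard_normal(n)   # a toy map
theta = np.array([0.5, -0.3, 0.1])                # K = 3 filter taps
y = chebyshev_conv(L_hat, x, theta)
```

On this circulant ring graph the filter commutes with cyclic shifts of the signal, which mirrors the rotation equivariance that radial filters give on the sphere.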
Cross Pixel Optical Flow Similarity for Self-Supervised Learning
We propose a novel method for learning convolutional neural image
representations without manual supervision. We use motion cues in the form of
optical flow to supervise representations of static images. The obvious
approach of training a network to predict flow from a single image can be
needlessly difficult due to intrinsic ambiguities in this prediction task. We
instead propose a much simpler learning goal: embed pixels such that the
similarity between their embeddings matches that between their optical flow
vectors. At test time, the learned deep network can be used without access to
video or flow information and transferred to tasks such as image
classification, detection, and segmentation. Our method, which significantly
simplifies previous attempts at using motion for self-supervision, achieves
state-of-the-art results in self-supervision using motion cues, competitive
results for self-supervision in general, and is overall state of the art in
self-supervised pretraining for semantic image segmentation, as demonstrated on
standard benchmarks.
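The learning goal stated above can be sketched as a per-pixel distribution-matching loss. The abstract does not specify the kernels or normalization, so the dot-product embedding kernel, negative-squared-distance flow kernel, temperatures, and row-wise cross-entropy below are all assumptions made for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_pixel_loss(emb, flow, temp_e=1.0, temp_f=1.0):
    """Match pairwise embedding similarities to pairwise optical-flow
    similarities, row by row (one distribution per pixel).
    emb:  (n, d) pixel embeddings from the network.
    flow: (n, 2) optical-flow vectors (the supervisory signal).
    Kernel and temperature choices here are assumptions, not the
    paper's exact formulation."""
    S_e = emb @ emb.T / temp_e                              # embedding kernel
    d2 = ((flow[:, None, :] - flow[None, :, :]) ** 2).sum(-1)
    S_f = -d2 / temp_f                                      # flow kernel
    P = softmax(S_f)   # target: pixels moving alike should embed alike
    Q = softmax(S_e)   # prediction from the embeddings
    return -(P * np.log(Q + 1e-12)).sum(axis=1).mean()      # cross-entropy

# Four toy pixels: 0,1 move together; 2,3 move together.
flow = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
emb_good = 2.0 * np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
emb_bad  = 2.0 * np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
```

Embeddings that group pixels the way the flow does incur a lower loss than embeddings that mix the groups, which is the signal that makes the objective usable without flow at test time.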