Deep Fluids: A Generative Network for Parameterized Fluid Simulations
This paper presents a novel generative model to synthesize fluid simulations
from a set of reduced parameters. A convolutional neural network is trained on
a collection of discrete, parameterizable fluid simulation velocity fields. Due
to the capability of deep learning architectures to learn representative
features of the data, our generative model is able to accurately approximate
the training data set, while providing plausible interpolated in-betweens. The
proposed generative model is optimized for fluids by a novel loss function that
guarantees divergence-free velocity fields at all times. In addition, we
demonstrate that we can handle complex parameterizations in reduced spaces, and
advance simulations in time by integrating in the latent space with a second
network. Our method models a wide variety of fluid behaviors, thus enabling
applications such as fast construction of simulations, interpolation of fluids
with different parameters, time re-sampling, latent space simulations, and
compression of fluid simulation data. Reconstructed velocity fields are
generated up to 700x faster than re-simulating the data with the underlying CPU
solver, while achieving compression rates of up to 1300x.
Comment: Computer Graphics Forum (Proceedings of EUROGRAPHICS 2019),
additional materials: http://www.byungsoo.me/project/deep-fluids
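The divergence-free guarantee rests on a classical identity: a velocity field obtained as the curl of a stream function has zero divergence. A minimal numpy sketch of that construction on a 2D periodic grid (the function names and the random stand-in for the network's output are ours, not from the paper):

```python
import numpy as np

def curl_2d(psi):
    """Velocity as the curl of a scalar stream function psi:
    u = d(psi)/dy, v = -d(psi)/dx (central differences, periodic grid)."""
    u = (np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / 2.0   # d/dy
    v = -(np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / 2.0  # -d/dx
    return u, v

def divergence_2d(u, v):
    """Discrete divergence du/dx + dv/dy with the same central differences."""
    du_dx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
    dv_dy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / 2.0
    return du_dx + dv_dy

rng = np.random.default_rng(0)
psi = rng.standard_normal((64, 64))  # stands in for a predicted stream function
u, v = curl_2d(psi)
print(np.abs(divergence_2d(u, v)).max())  # effectively zero (round-off only)
```

Because the central differences commute, the discrete divergence cancels identically, which is what allows the constraint to hold "at all times" rather than merely being penalized.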
Diffusion Models for Probabilistic Deconvolution of Galaxy Images
Telescopes capture images with a particular point spread function (PSF).
Inferring what an image would have looked like with a much sharper PSF, a
problem known as PSF deconvolution, is ill-posed because PSF convolution is not
an invertible transformation. Deep generative models are appealing for PSF
deconvolution because they can infer a posterior distribution over candidate
images that, if convolved with the PSF, could have generated the observation.
However, classical deep generative models such as VAEs and GANs often provide
inadequate sample diversity. As an alternative, we propose a classifier-free
conditional diffusion model for PSF deconvolution of galaxy images. We
demonstrate that this diffusion model captures a greater diversity of possible
deconvolutions compared to a conditional VAE.
Comment: Accepted to the ICML 2023 Workshop on Machine Learning for
Astrophysics
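The ill-posedness can be seen directly from the forward model: convolution with a PSF suppresses high spatial frequencies, so very different images can produce essentially the same observation. A toy numpy illustration (our own construction, not from the paper):

```python
import numpy as np

def convolve_psf(image, psf):
    """Forward model: periodic convolution with a centered PSF via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(image)
                                * np.fft.fft2(np.fft.ifftshift(psf))))

n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))  # toy Gaussian PSF
psf /= psf.sum()

galaxy = np.exp(-(xx**2 + yy**2) / (2 * 8.0**2))  # smooth toy "galaxy"
ripple = 0.5 * np.cos(np.pi * xx)                 # Nyquist-frequency pattern
a = convolve_psf(galaxy, psf)
b = convolve_psf(galaxy + ripple, psf)
print(np.abs(a - b).max())  # tiny: both images explain the same observation
```

Since the PSF annihilates the high-frequency ripple, no deterministic inverse can distinguish the two inputs; a posterior over candidate images, as the diffusion model provides, is the natural answer.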
CMU DeepLens: Deep Learning For Automatic Image-based Galaxy-Galaxy Strong Lens Finding
Galaxy-scale strong gravitational lensing is not only a valuable probe of the
dark matter distribution of massive galaxies, but can also provide valuable
cosmological constraints, either by studying the population of strong lenses or
by measuring time delays in lensed quasars. Due to the rarity of galaxy-scale
strongly lensed systems, fast and reliable automated lens finding methods will
be essential in the era of large surveys such as LSST, Euclid, and WFIRST. To
tackle this challenge, we introduce CMU DeepLens, a new fully automated
galaxy-galaxy lens finding method based on Deep Learning. This supervised
machine learning approach does not require any tuning after the training step
which only requires realistic image simulations of strongly lensed systems. We
train and validate our model on a set of 20,000 LSST-like mock observations
including a range of lensed systems of various sizes and signal-to-noise ratios
(S/N). We find on our simulated data set that for a rejection rate of
non-lenses of 99%, a completeness of 90% can be achieved for lenses with
Einstein radii larger than 1.4" and S/N larger than 20 on individual i-band
LSST exposures. Finally, we emphasize the importance of realistically complex
simulations for training such machine learning methods by demonstrating that
the performance of models of significantly different complexities cannot be
distinguished on simpler simulations. We make our code publicly available at
https://github.com/McWilliamsCenter/CMUDeepLens.
Comment: 12 pages, 9 figures, submitted to MNRAS
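The quoted operating point (99% rejection of non-lenses, 90% completeness) corresponds to choosing a threshold on the classifier's score. A sketch of how such a point is read off score distributions, using made-up toy scores (nothing here comes from the paper's data):

```python
import numpy as np

def completeness_at_rejection(scores_nonlens, scores_lens, rejection=0.99):
    """Score threshold giving the requested non-lens rejection rate,
    and the fraction of true lenses kept above it (completeness)."""
    thresh = np.quantile(scores_nonlens, rejection)  # 99% of non-lenses below
    completeness = float(np.mean(scores_lens > thresh))
    return thresh, completeness

rng = np.random.default_rng(2)
nonlens = rng.beta(2, 8, size=10_000)  # toy scores clustered near 0
lens = rng.beta(8, 2, size=1_000)      # toy scores clustered near 1
thresh, comp = completeness_at_rejection(nonlens, lens)
print(f"threshold={thresh:.3f}, completeness={comp:.3f}")
```

Sweeping the rejection rate traces out the full ROC curve, from which any survey-specific operating point can be selected.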
Toward Deep Learning Emulators for Modeling the Large-Scale Structure of the Universe
Multi-billion dollar cosmological surveys are conducted almost every decade in today's era of precision cosmology. These surveys scan vast swaths of sky and generate enormous volumes of observational data. To extract meaningful information from this data and test the observations against theory, rigorous theoretical predictions are needed. In the absence of analytic methods, cosmological simulations are the most widely used tool for providing these predictions. They can be used to study covariance matrices, generate mock galaxy catalogs, and provide ready-to-use snapshots for detailed redshift analyses. But cosmological simulations of matter formation are among the most computationally intensive tasks in the field, so faster but equally reliable tools that approximate them are urgently needed. Recently, deep learning has emerged as a novel tool that can generate cosmological simulations orders of magnitude faster than traditional methods. Deep learning models of structure formation and evolution retain most of the accuracy of conventional simulations, providing a fast, reliable, and efficient way to study the evolution of the universe while reducing the computational burden of current simulation methods.
In this dissertation, we focus on deep learning-based models that mimic the process of structure formation in the universe. In particular, we develop deep convolutional neural network models that learn the present-day 3D distribution of cold dark matter and generate 2D dark matter cosmic mass maps. We quantify the robustness of our models using summary statistics common in cosmology and computer vision.
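One summary statistic routinely used to compare simulated and emulated mass maps is the azimuthally averaged power spectrum. A compact numpy sketch (the binning choices are ours; the dissertation does not specify this exact implementation):

```python
import numpy as np

def power_spectrum_2d(field, nbins=20):
    """Azimuthally averaged power spectrum of a square 2D map."""
    n = field.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2 / n**2
    ky, kx = np.indices((n, n)) - n // 2
    k = np.hypot(kx, ky).ravel()
    bins = np.linspace(0.5, n // 2, nbins + 1)
    idx = np.digitize(k, bins)
    pk = np.array([power.ravel()[idx == i].mean()
                   for i in range(1, nbins + 1)])
    return 0.5 * (bins[:-1] + bins[1:]), pk  # bin centers, binned power

rng = np.random.default_rng(3)
white = rng.standard_normal((64, 64))  # stand-in for a mass map
kc, pk = power_spectrum_2d(white)
print(pk.mean())  # white noise has a flat spectrum with mean power ~1
```

Comparing the binned spectra of emulated and simulated maps, mode by mode, gives a scale-resolved accuracy check that a pixel-wise loss alone would miss.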
Astronomia ex machina: a history, primer, and outlook on neural networks in astronomy
In recent years, deep learning has infiltrated every field it has touched,
reducing the need for specialist knowledge and automating the process of
knowledge discovery from data. This review argues that astronomy is no
different, and that we are currently in the midst of a deep learning revolution
that is transforming the way we do astronomy. We trace the history of
astronomical connectionism from the early days of multilayer perceptrons,
through the second wave of convolutional and recurrent neural networks, to the
current third wave of self-supervised and unsupervised deep learning. We then
predict that we will soon enter a fourth wave of astronomical connectionism, in
which finetuned versions of an all-encompassing 'foundation' model will replace
expertly crafted deep learning models. We argue that such a model can only be
brought about through a symbiotic relationship between astronomy and
connectionism, whereby astronomy provides high quality multimodal data to train
the foundation model, and in turn the foundation model is used to advance
astronomical research.
Comment: 60 pages, 269 references, 29 figures. Review submitted to Royal
Society Open Science. Comments and feedback welcome
Fast Point Spread Function Modeling with Deep Learning
Modeling the Point Spread Function (PSF) of wide-field surveys is vital for
many astrophysical applications and cosmological probes including weak
gravitational lensing. The PSF smears the image of any recorded object and
therefore needs to be taken into account when inferring properties of galaxies
from astronomical images. In the case of cosmic shear, the PSF is one of the
dominant sources of systematic errors and must be treated carefully to avoid
biases in cosmological parameters. Recently, forward modeling approaches to
calibrate shear measurements within the Monte-Carlo Control Loops (MCCL)
framework have been developed. These methods typically require simulating a
large amount of wide-field images, thus, the simulations need to be very fast
yet have realistic properties in key features such as the PSF pattern. Hence,
such forward modeling approaches require a very flexible PSF model, which is
quick to evaluate and whose parameters can be estimated reliably from survey
data. We present a PSF model that meets these requirements based on a fast
deep-learning method to estimate its free parameters. We demonstrate our
approach on publicly available SDSS data. We extract the most important
features of the SDSS sample via principal component analysis. Next, we
construct our model based on perturbations of a fixed base profile, ensuring
that it captures these features. We then train a Convolutional Neural Network
to estimate the free parameters of the model from noisy images of the PSF. This
allows us to render a model image of each star, which we compare to the SDSS
stars to evaluate the performance of our method. We find that our approach is
able to accurately reproduce the SDSS PSF at the pixel level, which, due to the
speed of both the model evaluation and the parameter estimation, offers good
prospects for incorporating our method into the MCCL framework.
Comment: 25 pages, 8 figures, 1 table
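The PCA step described above can be realized with a plain SVD on a stack of star cutouts. A self-contained numpy sketch on synthetic Gaussian "stars" (the toy data and function names are ours, not the SDSS pipeline):

```python
import numpy as np

def pca_features(stars, n_components=4):
    """Leading principal components of a stack of star cutouts (n, h, w)."""
    flat = stars.reshape(stars.shape[0], -1)
    mean = flat.mean(axis=0)
    _, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
    components = vt[:n_components]                # modes of PSF variation
    explained = s[:n_components] ** 2 / (s**2).sum()
    return mean, components, explained

rng = np.random.default_rng(4)
yy, xx = np.mgrid[:15, :15] - 7
widths = rng.uniform(1.5, 3.0, size=200)  # one hidden parameter: PSF width
stars = np.stack([np.exp(-(xx**2 + yy**2) / (2 * w**2)) for w in widths])
mean, comps, frac = pca_features(stars)
print(frac[0])  # the first mode captures most of the width variation
```

The leading components then define the perturbations of the fixed base profile, and a CNN regresses their coefficients from noisy star images.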