Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks
We study the problem of synthesizing a number of likely future frames from a
single input image. In contrast to traditional methods, which have tackled this
problem in a deterministic or non-parametric way, we propose a novel approach
that models future frames in a probabilistic manner. Our probabilistic model
makes it possible for us to sample and synthesize many possible future frames
from a single input image. Future frame synthesis is challenging, as it
involves low- and high-level image and motion understanding. We propose a novel
network structure, namely a Cross Convolutional Network to aid in synthesizing
future frames; this network structure encodes image and motion information as
feature maps and convolutional kernels, respectively. In experiments, our model
performs well on synthetic data, such as 2D shapes and animated game sprites,
as well as on real-world videos. We also show that our model can be applied to
tasks such as visual analogy-making, and present an analysis of the learned
network representations.
Comment: The first two authors contributed equally to this work.
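The core operation named in the abstract above, a cross convolutional layer, combines an image encoded as per-channel feature maps with motion encoded as per-channel kernels. The following is a minimal numpy sketch of that idea, not the authors' implementation; shapes, names, and the plain channel-wise sum are illustrative assumptions.

```python
import numpy as np

def conv2d_same(x, k):
    """Zero-padded 'same' 2D correlation of a map x (H, W) with a kernel k (kh, kw)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def cross_convolution(feature_maps, kernels):
    """Convolve each image feature channel with its own motion kernel, then sum.

    feature_maps: (C, H, W) image encoding; kernels: (C, kh, kw) motion encoding.
    """
    assert feature_maps.shape[0] == kernels.shape[0]
    return sum(conv2d_same(f, k) for f, k in zip(feature_maps, kernels))
```

Because the kernels are predicted per input (here they would come from a sampled motion code), the same image encoding can be "moved" in many different ways, which is what lets the model synthesize multiple plausible futures.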
Visual Dynamics: Stochastic Future Generation via Layered Cross Convolutional Networks
We study the problem of synthesizing a number of likely future frames from a
single input image. In contrast to traditional methods that have tackled this
problem in a deterministic or non-parametric way, we propose to model future
frames in a probabilistic manner. Our probabilistic model makes it possible for
us to sample and synthesize many possible future frames from a single input
image. To synthesize realistic movement of objects, we propose a novel network
structure, namely a Cross Convolutional Network; this network encodes image and
motion information as feature maps and convolutional kernels, respectively. In
experiments, our model performs well on synthetic data, such as 2D shapes and
animated game sprites, and on real-world video frames. We present analyses of
the learned network representations, showing it is implicitly learning a
compact encoding of object appearance and motion. We also demonstrate a few of
its applications, including visual analogy-making and video extrapolation.
Comment: Journal preprint of arXiv:1607.02586 (IEEE TPAMI, 2019). The first two authors contributed equally to this work. Project page: http://visualdynamics.csail.mit.ed
First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole
We present measurements of the properties of the central radio source in M87 using Event Horizon Telescope data obtained during the 2017 campaign. We develop and fit geometric crescent models (asymmetric rings with interior brightness depressions) using two independent sampling algorithms that consider distinct representations of the visibility data. We show that the crescent family of models is statistically preferred over other comparably complex geometric models that we explore. We calibrate the geometric model parameters using general relativistic magnetohydrodynamic (GRMHD) models of the emission region and estimate physical properties of the source. We further fit images generated from GRMHD models directly to the data. We compare the derived emission region and black hole parameters from these analyses with those recovered from reconstructed images. There is a remarkable consistency among all methods and data sets. We find that >50% of the total flux at arcsecond scales comes from near the horizon, and that the emission is dramatically suppressed interior to this region by a factor >10, providing direct evidence of the predicted shadow of a black hole. Across all methods, we measure a crescent diameter of 42 ± 3 μas and constrain its fractional width to be <0.5. Associating the crescent feature with the emission surrounding the black hole shadow, we infer an angular gravitational radius of GM/Dc^2 = 3.8 ± 0.4 μas. Folding in a distance measurement of 16.8^(+0.8)_(-0.7) Mpc gives a black hole mass of M = 6.5 ± 0.2|_(stat) ± 0.7|_(sys) x 10^9 M⊙. This measurement from lensed emission near the event horizon is consistent with the presence of a central Kerr black hole, as predicted by the general theory of relativity.
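The final mass quoted in the abstract above follows from the measured angular gravitational radius and the distance by a direct unit conversion, M = θ_g D c²/G. A short check with rounded constants (a sketch, not the collaboration's analysis pipeline):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m s^-1
M_sun = 1.989e30     # solar mass, kg
Mpc = 3.0857e22      # megaparsec, m
rad_per_muas = math.radians(1.0 / 3600.0) * 1e-6   # radians per microarcsecond

theta_g = 3.8 * rad_per_muas    # angular gravitational radius GM / (D c^2)
D = 16.8 * Mpc                  # distance to M87
M = theta_g * D * c**2 / G      # solve for the black hole mass
print(M / M_sun)                # ~6.5e9 solar masses, matching the abstract
```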
Deep Probabilistic Imaging: Uncertainty Quantification and Multi-modal Solution Characterization for Computational Imaging
Computational image reconstruction algorithms generally produce a single
image without any measure of uncertainty or confidence. Regularized Maximum
Likelihood (RML) and feed-forward deep learning approaches for inverse problems
typically focus on recovering a point estimate. This is a serious limitation
when working with underdetermined imaging systems, where it is conceivable that
multiple image modes would be consistent with the measured data. Characterizing
the space of probable images that explain the observational data is therefore
crucial. In this paper, we propose a variational deep probabilistic imaging
approach to quantify reconstruction uncertainty. Deep Probabilistic Imaging
(DPI) employs an untrained deep generative model to estimate a posterior
distribution of an unobserved image. This approach does not require any
training data; instead, it optimizes the weights of a neural network to
generate image samples that fit a particular measurement dataset. Once the
network weights have been learned, the posterior distribution can be
efficiently sampled. We demonstrate this approach in the context of
interferometric radio imaging, which is used for black hole imaging with the
Event Horizon Telescope, and compressed sensing Magnetic Resonance Imaging
(MRI).
Comment: This paper has been accepted to AAAI 2021. Keywords: Computational Imaging, Normalizing Flow, Uncertainty Quantification, Interferometry, MR
Efficient Bayesian Computational Imaging with a Surrogate Score-Based Prior
We propose a surrogate function for efficient use of score-based priors for
Bayesian inverse imaging. Recent work turned score-based diffusion models into
probabilistic priors for solving ill-posed imaging problems by appealing to an
ODE-based log-probability function. However, evaluating this function is
computationally inefficient and inhibits posterior estimation of
high-dimensional images. Our proposed surrogate prior is based on the evidence
lower-bound of a score-based diffusion model. We demonstrate the surrogate
prior on variational inference for efficient approximate posterior sampling of
large images. Compared to the exact prior in previous work, our surrogate prior
accelerates optimization of the variational image distribution by at least two
orders of magnitude. We also find that our principled approach achieves
higher-fidelity images than non-Bayesian baselines that involve
hyperparameter-tuning at inference. Our work establishes a practical path
forward for using score-based diffusion models as general-purpose priors for
imaging.
First M87 Event Horizon Telescope Results. II. Array and Instrumentation
The Event Horizon Telescope (EHT) is a very long baseline interferometry (VLBI) array that comprises millimeter- and submillimeter-wavelength telescopes separated by distances comparable to the diameter of the Earth. At a nominal operating wavelength of ~1.3 mm, EHT angular resolution (λ/D) is ~25 μas, which is sufficient to resolve nearby supermassive black hole candidates on spatial and temporal scales that correspond to their event horizons. With this capability, the EHT scientific goals are to probe general relativistic effects in the strong-field regime and to study accretion and relativistic jet formation near the black hole boundary. In this Letter we describe the system design of the EHT, detail the technology and instrumentation that enable observations, and provide measures of its performance. Meeting the EHT science objectives has required several key developments that have facilitated the robust extension of the VLBI technique to EHT observing wavelengths and the production of instrumentation that can be deployed on a heterogeneous array of existing telescopes and facilities. To meet sensitivity requirements, high-bandwidth digital systems were developed that process data at rates of 64 gigabit s^(−1), exceeding those of currently operating cm-wavelength VLBI arrays by more than an order of magnitude. Associated improvements include the development of phasing systems at array facilities, new receiver installation at several sites, and the deployment of hydrogen maser frequency standards to ensure coherent data capture across the array. These efforts led to the coordination and execution of the first Global EHT observations in 2017 April, and to event-horizon-scale imaging of the supermassive black hole candidate in M87.
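The ~25 μas resolution quoted above is the standard diffraction limit λ/D applied to an Earth-sized aperture. A quick check, where the ~10,700 km longest-baseline value is an assumption chosen to represent an Earth-diameter-scale separation:

```python
import math

lam = 1.3e-3                      # observing wavelength, m (~1.3 mm)
baseline = 1.07e7                 # assumed longest baseline, m (~10,700 km)
theta = lam / baseline            # diffraction limit lambda/D, in radians
rad_per_muas = math.radians(1.0 / 3600.0) * 1e-6
print(theta / rad_per_muas)       # ~25 microarcseconds
```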
First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole
When surrounded by a transparent emission region, black holes are expected to reveal a dark shadow caused by gravitational light bending and photon capture at the event horizon. To image and study this phenomenon, we have assembled the Event Horizon Telescope, a global very long baseline interferometry array observing at a wavelength of 1.3 mm. This allows us to reconstruct event-horizon-scale images of the supermassive black hole candidate in the center of the giant elliptical galaxy M87. We have resolved the central compact radio source as an asymmetric bright emission ring with a diameter of 42 ± 3 μas, which is circular and encompasses a central depression in brightness with a flux ratio ≳10:1. The emission ring is recovered using different calibration and imaging schemes, with its diameter and width remaining stable over four different observations carried out on different days. Overall, the observed image is consistent with expectations for the shadow of a Kerr black hole as predicted by general relativity. The asymmetry in brightness in the ring can be explained in terms of relativistic beaming of the emission from a plasma rotating close to the speed of light around a black hole. We compare our images to an extensive library of ray-traced general-relativistic magnetohydrodynamic simulations of black holes and derive a central mass of M = (6.5 ± 0.7) × 10^9 M⊙. Our radio-wave observations thus provide powerful evidence for the presence of supermassive black holes in the centers of galaxies and as the central engines of active galactic nuclei. They also present a new tool to explore gravity in its most extreme limit and on a mass scale that was so far not accessible.
Estimating the material properties of fabric through the observation of motion
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 49-51). We present a framework for predicting the physical properties of moving deformable objects observed in video. We apply our framework to analyze videos of fabrics moving under various unknown wind forces, and recover two key material properties of the fabric: stiffness and mass. We extend features previously developed to compactly represent static image textures to describe video textures such as fabric motion. A discriminatively trained regression model is then used to predict the physical properties of fabric from these features. The success of our model is demonstrated on a new database of fabric videos with corresponding measured ground truth material properties that we have collected. We show that our predictions are well correlated with both measured material properties and human perception of material properties. Our contributions include: (a) a method for predicting the material properties of fabric from a video, (b) a database that can be used for training and testing algorithms for predicting fabric properties containing RGB and RGBD videos of real videos with associated material properties and rendered videos of simulated fabric with associated model parameters, and (c) a perceptual study of humans' ability to estimate the material properties of fabric from videos and images. by Katherine L. Bouman. S.M.
Deep Radio Interferometric Imaging with POLISH: DSA-2000 and weak lensing
Radio interferometry allows astronomers to probe small spatial scales that
are often inaccessible with single-dish instruments. However, recovering the
radio sky from an interferometer is an ill-posed deconvolution problem that
astronomers have worked on for half a century. More challenging still is
achieving resolution below the array's diffraction limit, known as
super-resolution imaging. To this end, we have developed a new learning-based
approach for radio interferometric imaging, leveraging recent advances in the
classical computer vision problems of single-image super-resolution (SISR) and
deconvolution. We have developed and trained a high dynamic range residual
neural network to learn the mapping between the dirty image and the true radio
sky. We call this procedure POLISH, in contrast to the traditional CLEAN
algorithm. The feed-forward nature of learning-based approaches like POLISH is
critical for analyzing data from the upcoming Deep Synoptic Array (DSA-2000).
We show that POLISH achieves super-resolution, and we demonstrate its ability
to deconvolve real observations from the Very Large Array (VLA).
Super-resolution on DSA-2000 will allow us to measure the shapes and
orientations of several hundred million star-forming radio galaxies (SFGs),
making it a powerful cosmological weak lensing survey and probe of dark energy.
We forecast its ability to constrain the lensing power spectrum, finding that
it will be complementary to next-generation optical surveys such as Euclid.
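The deconvolution problem the abstract above describes arises because an interferometer samples the Fourier plane incompletely: the observed "dirty image" is the true sky convolved with the array's dirty beam. A minimal numpy sketch of that forward model, where the random (u, v) mask is purely illustrative, not the DSA-2000 or VLA coverage:

```python
import numpy as np

rng = np.random.default_rng(0)
sky = np.zeros((64, 64))
sky[20, 30], sky[40, 25] = 1.0, 0.5      # two toy point sources

# Incomplete Fourier sampling: keep a random ~20% of (u, v) cells,
# symmetrized so that the resulting dirty image comes out real.
mask = rng.random(sky.shape) < 0.2
mask |= np.roll(mask[::-1, ::-1], 1, axis=(0, 1))   # Hermitian symmetry

vis = np.fft.fft2(sky) * mask                       # measured visibilities
dirty = np.fft.ifft2(vis).real                      # the "dirty image"
dirty_beam = np.fft.ifft2(mask.astype(float)).real  # the array's PSF

# By the convolution theorem, dirty is sky (circularly) convolved with
# dirty_beam; CLEAN and POLISH are two different ways to undo this.
```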
High Resolution Linear Polarimetric Imaging for the Event Horizon Telescope
Images of the linear polarization of synchrotron radiation around Active
Galactic Nuclei (AGN) identify their projected magnetic field lines and provide
key data for understanding the physics of accretion and outflow from
supermassive black holes. The highest resolution polarimetric images of AGN are
produced with Very Long Baseline Interferometry (VLBI). Because VLBI
incompletely samples the Fourier transform of the source image, any image
reconstruction that fills in unmeasured spatial frequencies will not be unique
and reconstruction algorithms are required. In this paper, we explore
extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI
imaging. In contrast to previous work, our polarimetric MEM algorithm combines
a Stokes I imager that uses only bispectrum measurements that are immune to
atmospheric phase corruption with a joint Stokes Q and U imager that operates
on robust polarimetric ratios. We demonstrate the effectiveness of our
technique on 7- and 3-mm wavelength quasar observations from the VLBA and
simulated 1.3-mm Event Horizon Telescope observations of Sgr A* and M87.
Consistent with past studies, we find that polarimetric MEM can produce
superior resolution compared to the standard CLEAN algorithm when imaging
smooth and compact source distributions. As an imaging framework, MEM is highly
adaptable, allowing a range of constraints on polarization structure.
Polarimetric MEM is thus an attractive choice for image reconstruction with the
EHT.
Comment: 19 pages, 9 figures. Accepted for publication in ApJ. Imaging code available at https://github.com/achael/eht-imaging