Two-Dimensional Core-Collapse Supernova Models with Multi-Dimensional Transport
We present new two-dimensional (2D) axisymmetric neutrino
radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use
the CASTRO code, which incorporates truly multi-dimensional, multi-group,
flux-limited diffusion (MGFLD) neutrino transport, including all relevant
O(v/c) terms. Our main motivation for carrying out this study is to
compare with recent 2D models produced by other groups who have obtained
explosions for some progenitor stars and with recent 2D VULCAN results that did
not incorporate O(v/c) terms. We follow the evolution of 12, 15,
20, and 25 solar-mass progenitors to approximately 600 milliseconds after
bounce and do not obtain an explosion in any of these models. Though the reason
for the qualitative disagreement among the groups engaged in CCSN modeling
remains unclear, we speculate that the simplifying "ray-by-ray" approach
employed by all other groups may be compromising their results. We show that
"ray-by-ray" calculations greatly exaggerate the angular and temporal
variations of the neutrino fluxes, which we argue are better captured by our
multi-dimensional MGFLD approach. On the other hand, our 2D models also make
approximations, making it difficult to draw definitive conclusions concerning
the root of the differences between groups. We discuss some of the diagnostics
often employed in the analyses of CCSN simulations and highlight the intimate
relationship between the various explosion conditions that have been proposed.
Finally, we explore the ingredients that may be missing in current calculations
that may be important in reproducing the properties of the average CCSNe,
should the delayed neutrino-heating mechanism be the correct mechanism of
explosion.
Comment: ApJ accepted version. Minor changes from original
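For context, multi-group flux-limited diffusion closes the transport problem by interpolating between the diffusion and free-streaming limits in each neutrino energy group. A generic form of the closure (the rational limiter shown is one common choice, not necessarily the exact one used in CASTRO) is

\[
  \mathbf{F} = -\frac{c\,\lambda(R)}{\kappa}\,\nabla E , \qquad
  R = \frac{|\nabla E|}{\kappa E} , \qquad
  \lambda(R) \approx \frac{2+R}{6+3R+R^{2}} ,
\]

so that \(\lambda \to 1/3\) (ordinary diffusion) where the material is optically thick and \(\lambda \to 1/R\), hence \(|\mathbf{F}| \to cE\), in free streaming.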
A Comparative Evaluation of Approximate Probabilistic Simulation and Deep Neural Networks as Accounts of Human Physical Scene Understanding
Humans demonstrate remarkable abilities to predict physical events in complex
scenes. Two classes of models for physical scene understanding have recently
been proposed: "Intuitive Physics Engines", or IPEs, which posit that people
make predictions by running approximate probabilistic simulations in causal
mental models similar in nature to video-game physics engines, and memory-based
models, which make judgments based on analogies to stored experiences of
previously encountered scenes and physical outcomes. Versions of the latter
have recently been instantiated in convolutional neural network (CNN)
architectures. Here we report four experiments that, to our knowledge, are the
first rigorous comparisons of simulation-based and CNN-based models, where both
approaches are concretely instantiated in algorithms that can run on raw image
inputs and produce as outputs physical judgments such as whether a stack of
blocks will fall. Both approaches can achieve super-human accuracy levels and
can quantitatively predict human judgments to a similar degree, but only the
simulation-based models generalize to novel situations in ways that people do,
and are qualitatively consistent with systematic perceptual illusions and
judgment asymmetries that people show.
Comment: Accepted to CogSci 2016 as an oral presentation
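As a rough illustration of the simulation-based ("IPE") idea, the sketch below runs many noisy rollouts of a scene and reports the fraction that end in a fall. It is a minimal Python sketch, not the paper's model: the toy 2D stability check and the Gaussian jitter standing in for perceptual uncertainty are hypothetical stand-ins for a full physics engine.

```python
import random

def tower_falls(block_centers, block_width, noise_sd, n_samples=100):
    """Monte-Carlo estimate of P(tower falls) under perceptual noise.

    Toy 2D check: a stack of equal-width blocks (centers listed bottom to
    top) falls if, for any block, the combined center of mass of the blocks
    above it overhangs that block's horizontal extent. Perceptual
    uncertainty is modeled by jittering the observed positions per sample.
    """
    def falls(centers):
        for i in range(len(centers) - 1):
            above = centers[i + 1:]
            com = sum(above) / len(above)
            if abs(com - centers[i]) > block_width / 2:
                return True
        return False

    hits = sum(
        falls([c + random.gauss(0.0, noise_sd) for c in block_centers])
        for _ in range(n_samples)
    )
    return hits / n_samples

# A slightly offset three-block tower; more noise pushes the estimate toward 0.5.
print(tower_falls([0.0, 0.3, 0.55], block_width=1.0, noise_sd=0.2))
```

Averaging over noisy rollouts yields a graded probability judgment rather than a single deterministic prediction.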
Learning to Reconstruct Shapes from Unseen Classes
From a single image, humans are able to perceive the full 3D shape of an
object by exploiting learned shape priors from everyday life. Contemporary
single-image 3D reconstruction algorithms aim to solve this task in a similar
fashion, but often end up with priors that are highly biased by training
classes. Here we present an algorithm, Generalizable Reconstruction (GenRe),
designed to capture more generic, class-agnostic shape priors. We achieve this
with an inference network and training procedure that combine 2.5D
representations of visible surfaces (depth and silhouette), spherical shape
representations of both visible and non-visible surfaces, and 3D voxel-based
representations, in a principled manner that exploits the causal structure of
how 3D shapes give rise to 2D images. Experiments demonstrate that GenRe
performs well on single-view shape reconstruction, and generalizes to diverse
novel objects from categories not seen during training.
Comment: NeurIPS 2018 (Oral). The first two authors contributed equally to
this paper. Project page: http://genre.csail.mit.edu
Quantum limits on post-selected, probabilistic quantum metrology
Probabilistic metrology attempts to improve parameter estimation by
occasionally reporting an excellent estimate and the rest of the time either
guessing or doing nothing at all. Here we show that probabilistic metrology can
never improve quantum limits on estimation of a single parameter, both on
average and asymptotically in number of trials, if performance is judged
relative to mean-square estimation error. We extend the result by showing that
for a finite number of trials, the probability of obtaining better estimates
using probabilistic metrology, as measured by mean-square error, decreases
exponentially with the number of trials. To be tight, the performance bounds we
derive require that likelihood functions be approximately normal, which in turn
depends on how rapidly specific distributions converge to a normal distribution
with number of trials.
Comment: V1: 8 pages, 1 figure. V2: 9 pages, 1 figure, revised text. V3: 11 pages, 1 figure, revised text. V4: published version, revised title ;-)
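For reference, the single-parameter benchmark such claims are judged against is the quantum Cramér-Rao bound on mean-square error (notation here is generic, not taken from the paper):

\[
  \overline{\delta\theta^{2}} \;\ge\; \frac{1}{\nu\, F_Q(\theta)} ,
\]

where \(\nu\) is the number of trials and \(F_Q\) is the quantum Fisher information of the probe state. The result summarized above says that post-selection, once successes and failures are properly accounted for, cannot beat this bound on average or asymptotically.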
Quantum limits on phase-preserving linear amplifiers
The purpose of a phase-preserving linear amplifier is to make a small signal
larger, regardless of its phase, so that it can be perceived by instruments
incapable of resolving the original signal, while sacrificing as little as
possible in signal-to-noise. Quantum mechanics limits how well this can be
done: a high-gain linear amplifier must degrade the signal-to-noise; the noise
added by the amplifier, when referred to the input, must be at least half a
quantum at the operating frequency. This well-known quantum limit only
constrains the second moments of the added noise. Here we derive the quantum
constraints on the entire distribution of added noise: we show that any
phase-preserving linear amplifier is equivalent to a parametric amplifier with
a physical state for the ancillary mode; the noise added to the amplified field
mode is distributed according to the Wigner function of the ancilla state.
Comment: 37 pages, 6 figures
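The second-moment limit referred to here can be stated compactly: a phase-preserving linear amplifier of gain G can be modeled by coupling the signal mode \(\hat a\) to an ancillary mode \(\hat c\), and the added noise referred to the input is bounded below,

\[
  \hat b = \sqrt{G}\,\hat a + \sqrt{G-1}\,\hat c^{\dagger} , \qquad
  A \;\ge\; \tfrac{1}{2}\Bigl|1 - \tfrac{1}{G}\Bigr| \;\to\; \tfrac{1}{2}
  \quad (G \gg 1) ,
\]

where A is the added noise number referred to the input. The result above goes beyond this second-moment bound: the full distribution of the added noise is given by the Wigner function of the ancilla state, which must be a physical state.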
Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling
We study 3D shape modeling from a single image and make contributions to it
in three aspects. First, we present Pix3D, a large-scale benchmark of diverse
image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications
in shape-related tasks including reconstruction, retrieval, viewpoint
estimation, etc. Building such a large-scale dataset, however, is highly
challenging; existing datasets either contain only synthetic data, or lack
precise alignment between 2D images and 3D shapes, or only have a small number
of images. Second, we calibrate the evaluation criteria for 3D shape
reconstruction through behavioral studies, and use them to objectively and
systematically benchmark cutting-edge reconstruction algorithms on Pix3D.
Third, we design a novel model that simultaneously performs 3D reconstruction
and pose estimation; our multi-task learning approach achieves state-of-the-art
performance on both tasks.
Comment: CVPR 2018. The first two authors contributed equally to this work. Project page: http://pix3d.csail.mit.edu
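One criterion commonly reported on such benchmarks is voxel intersection-over-union; the sketch below is a generic implementation, not the paper's exact evaluation protocol.

```python
import numpy as np

def voxel_iou(pred, gt, threshold=0.5):
    """Intersection-over-union between two occupancy grids of equal shape.

    pred, gt: arrays with values in [0, 1]; both are binarized at
    `threshold` before comparison.
    """
    p = pred >= threshold
    g = gt >= threshold
    union = np.logical_or(p, g).sum()
    if union == 0:
        return 1.0
    return np.logical_and(p, g).sum() / union

# Toy example on random 32^3 grids (a real evaluation would compare the
# reconstruction and ground truth voxelized at the same resolution and pose).
rng = np.random.default_rng(0)
print(voxel_iou(rng.random((32, 32, 32)), rng.random((32, 32, 32))))
```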
Visual Object Networks: Image Generation with Disentangled 3D Representation
Recent progress in deep generative models has led to tremendous breakthroughs
in image generation. However, while existing models can synthesize
photorealistic images, they lack an understanding of our underlying 3D world.
We present a new generative model, Visual Object Networks (VON), synthesizing
natural images of objects with a disentangled 3D representation. Inspired by
classic graphics rendering pipelines, we unravel our image formation process
into three conditionally independent factors---shape, viewpoint, and
texture---and present an end-to-end adversarial learning framework that jointly
models 3D shapes and 2D images. Our model first learns to synthesize 3D shapes
that are indistinguishable from real shapes. It then renders the object's 2.5D
sketches (i.e., silhouette and depth map) from its shape under a sampled
viewpoint. Finally, it learns to add realistic texture to these 2.5D sketches
to generate natural images. The VON not only generates images that are more
realistic than state-of-the-art 2D image synthesis methods, but also enables
many 3D operations such as changing the viewpoint of a generated image, editing
of shape and texture, linear interpolation in texture and shape space, and
transferring appearance across different objects and viewpoints.
Comment: NeurIPS 2018. Code: https://github.com/junyanz/VON Website:
http://von.csail.mit.edu
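To make the three-factor decomposition concrete, here is a toy Python sketch of the shape, viewpoint, and texture pipeline. All three stages are trivial stand-ins (a random sphere, an axis-aligned projection, and a shading step), not VON's learned networks or its differentiable renderer.

```python
import numpy as np

def sample_shape(rng, res=32):
    # Stand-in for the learned shape generator: a sphere of random radius
    # voxelized on a res^3 grid.
    r = rng.uniform(0.25, 0.45) * res
    z, y, x = np.indices((res, res, res)) - (res - 1) / 2.0
    return (x**2 + y**2 + z**2 <= r**2).astype(float)

def project_25d(voxels):
    # Stand-in for the 2.5D sketch: a silhouette and a crude depth map from
    # projecting occupancy along one axis (a fixed "viewpoint").
    silhouette = voxels.max(axis=0)
    depth = np.argmax(voxels > 0, axis=0) * silhouette
    return silhouette, depth

def add_texture(silhouette, depth, rng):
    # Stand-in for the texture network: shade the depth map and add noise.
    img = depth / max(depth.max(), 1.0)
    return np.clip(img + 0.05 * rng.standard_normal(img.shape), 0.0, 1.0) * silhouette

rng = np.random.default_rng(0)
voxels = sample_shape(rng)
sil, dep = project_25d(voxels)
image = add_texture(sil, dep, rng)
print(image.shape)  # a (32, 32) grayscale stand-in for the generated image
```

Because the three factors are sampled independently and then composed, each can be varied in isolation, which is what enables the viewpoint, shape, and texture edits listed above.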
Fabrication and Electric Field Dependent Transport Measurements of Mesoscopic Graphite Devices
We have developed a unique micromechanical method to extract extremely thin
graphite samples. Graphite crystallites with thicknesses ranging from 10-100
nm and lateral sizes of about 2 μm are extracted from bulk. Mesoscopic
graphite devices are fabricated from these samples for electric field dependent
conductance measurements. Strong conductance modulation as a function of gate
voltage is observed in the thinner crystallite devices. The temperature
dependent resistivity measurements show more boundary scattering contribution
in the thinner graphite samples.
Comment: 3 pages, 3 figures included, submitted to Appl. Phys. Lett.