Inferring the emission regions for different kinds of gamma-ray bursts
Using a theoretical model describing pulse shapes, we have clarified the
relations between the observed pulses and their corresponding timescales, such
as the angular spreading time, the dynamic time, and the cooling time. We
find that the angular spreading timescale caused by the curvature effect of
the fireball surface contributes only to the falling part of the observed
pulses, while the dynamic timescale in the co-moving frame of the shell
contributes merely to the rising portion of the pulses, provided the radiative
time is negligible. In addition, pulses resulting from the pure radiative
cooling time of relativistic electrons exhibit a fast-rise, slow-decay
(quasi-FRED) profile together with smooth peaks. We interpret the tendency of
wider pulses to be more asymmetric as a consequence of differences in their
emission regions. Meanwhile, we find that the intrinsic emission time is
determined by the ratios of the Lorentz factors and radii of the shells
between short and long bursts. Based on the analysis of asymmetry, our results
suggest that long GRB pulses may occur in regions with larger radii, while
short bursts could be located at smaller distances from the central engine.
Comment: 6 pages; 6 figures; accepted for publication in AN with minor changes
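The fast-rise, slow-decay (quasi-FRED) shape and the asymmetry measure discussed above can be illustrated with the standard Norris pulse parameterization. This is a generic sketch, not the paper's specific pulse model; the function name, the asymmetry definition (decay time over rise time at half maximum), and all parameter values are illustrative assumptions.

```python
import math

def norris_pulse(t, amp=1.0, tau1=2.0, tau2=8.0):
    """Norris-style pulse: fast rise, exponential decay.

    tau1 controls the rise, tau2 the decay; the normalization makes the
    peak value equal to amp. Illustrative values, not fitted parameters.
    """
    if t <= 0.0:
        return 0.0
    return amp * math.exp(2.0 * math.sqrt(tau1 / tau2)) * math.exp(-tau1 / t - t / tau2)

def asymmetry(tau1, tau2, dt=0.01, t_max=200.0):
    """Crude asymmetry proxy: decay time over rise time at half maximum."""
    ts = [i * dt for i in range(1, int(t_max / dt))]
    vals = [norris_pulse(t, 1.0, tau1, tau2) for t in ts]
    peak = max(vals)
    t_peak = ts[vals.index(peak)]
    half = [t for t, v in zip(ts, vals) if v >= 0.5 * peak]
    rise = t_peak - half[0]
    decay = half[-1] - t_peak
    return decay / rise
```

With tau2 > tau1 the decay side is wider than the rise side, so the asymmetry exceeds one, matching the quasi-FRED behavior the abstract describes.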
SurfNet: Generating 3D shape surfaces using deep residual networks
3D shape models are naturally parameterized using vertices and faces, i.e.,
composed of polygons forming a surface. However, current 3D learning paradigms
for predictive and generative tasks using convolutional neural networks focus
on a voxelized representation of the object. Lifting convolution operators from
the traditional 2D to 3D results in high computational overhead with little
additional benefit as most of the geometry information is contained on the
surface boundary. Here we study the problem of directly generating the 3D shape
surface of rigid and non-rigid shapes using deep convolutional neural networks.
We develop a procedure to create consistent `geometry images' representing the
shape surface of a category of 3D objects. We then use this consistent
representation for category-specific shape surface generation from a parametric
representation or an image by developing novel extensions of deep residual
networks for the task of geometry image generation. Our experiments indicate
that our network learns a meaningful representation of shape surfaces allowing
it to interpolate between shape orientations and poses, invent new shape
surfaces and reconstruct 3D shape surfaces from previously unseen images.
Comment: CVPR 2017 paper
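The geometry-image idea, flattening a 3D surface onto a regular 2D grid so that ordinary 2D convolutions apply, can be sketched for the simplest case of a sphere. The naive latitude/longitude sampling below is purely for illustration; the paper instead constructs consistent (authalic) parameterizations shared across a shape category, which this sketch does not attempt.

```python
import math

def sphere_geometry_image(n=16):
    """Sample a unit sphere into an n x n 'geometry image': each grid
    cell stores the (x, y, z) coordinates of a surface point, so the
    surface becomes a 3-channel image a 2D CNN could consume.

    Naive latitude/longitude parameterization for illustration only.
    """
    img = []
    for i in range(n):
        theta = math.pi * (i + 0.5) / n        # polar angle, poles avoided
        row = []
        for j in range(n):
            phi = 2.0 * math.pi * j / n        # azimuth
            row.append((math.sin(theta) * math.cos(phi),
                        math.sin(theta) * math.sin(phi),
                        math.cos(theta)))
        img.append(row)
    return img
```

Because every cell holds a surface point, generating a shape reduces to generating such an image, which is what lets standard 2D residual networks produce 3D surfaces.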
Sparse Codes for Speech Predict Spectrotemporal Receptive Fields in the Inferior Colliculus
We have developed a sparse mathematical representation of speech that
minimizes the number of active model neurons needed to represent typical speech
sounds. The model learns several well-known acoustic features of speech such as
harmonic stacks, formants, onsets and terminations, but we also find more
exotic structures in the spectrogram representation of sound such as localized
checkerboard patterns and frequency-modulated excitatory subregions flanked by
suppressive sidebands. Moreover, several of these novel features resemble
neuronal receptive fields reported in the Inferior Colliculus (IC), as well as
auditory thalamus and cortex, and our model neurons exhibit the same tradeoff
in spectrotemporal resolution as has been observed in IC. To our knowledge,
this is the first demonstration that receptive fields of neurons in the
ascending mammalian auditory pathway beyond the auditory nerve can be predicted
based on coding principles and the statistical properties of recorded sounds.
Comment: For Supporting Information, see the PLoS website:
http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.100259
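A sparse representation of the kind described above, minimizing the number of active model neurons needed to encode a signal, is typically obtained by solving an L1-penalized reconstruction problem. The iterative soft-thresholding (ISTA) sketch below is a standard substitute for whatever inference rule the paper actually uses; the dictionary, signal, and all parameters are illustrative.

```python
def soft_threshold(x, lam):
    """Shrink x toward zero by lam; exactly zero inside [-lam, lam]."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def ista(D, x, lam=0.1, step=0.1, iters=500):
    """Sparse coding by iterative soft-thresholding (ISTA):
    approximately minimize ||x - D a||^2 + lam * ||a||_1 over codes a.

    D is a list of atoms (each a length-n list); x is the signal.
    A generic solver, not the paper's learning procedure.
    """
    m, n = len(D), len(x)
    a = [0.0] * m
    for _ in range(iters):
        # residual r = x - D a
        r = [x[i] - sum(D[k][i] * a[k] for k in range(m)) for i in range(n)]
        # gradient step on each coefficient, then shrinkage
        for k in range(m):
            g = sum(D[k][i] * r[i] for i in range(n))
            a[k] = soft_threshold(a[k] + step * g, step * lam)
    return a
```

The shrinkage step drives most coefficients exactly to zero, which is what makes the resulting code sparse: only a few "model neurons" stay active per sound.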
OctNetFusion: Learning Depth Fusion from Data
In this paper, we present a learning based approach to depth fusion, i.e.,
dense 3D reconstruction from multiple depth images. The most common approach to
depth fusion is based on averaging truncated signed distance functions, which
was originally proposed by Curless and Levoy in 1996. While this method is
simple and provides great results, it is not able to reconstruct (partially)
occluded surfaces and requires a large number of frames to filter out sensor
noise and outliers. Motivated by the availability of large 3D model
repositories and
recent advances in deep learning, we present a novel 3D CNN architecture that
learns to predict an implicit surface representation from the input depth maps.
Our learning based method significantly outperforms the traditional volumetric
fusion approach in terms of noise reduction and outlier suppression. By
learning the structure of real world 3D objects and scenes, our approach is
further able to reconstruct occluded regions and to fill in gaps in the
reconstruction. We demonstrate that our learning based approach outperforms
both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric
fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
Comment: 3DV 2017, https://github.com/griegler/octnetfusio
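The traditional baseline the paper improves on, Curless-Levoy TSDF fusion, keeps a running weighted average of truncated signed distances per voxel. The sketch below shows that averaging rule in isolation; a real system would project each voxel into the depth image to obtain its signed distance, which is abstracted away here, and the voxel/observation representation is an assumption of this sketch.

```python
def fuse_tsdf(voxels, observations, trunc=0.1):
    """Curless-Levoy style fusion step: each voxel stores a running
    weighted average of truncated signed distances.

    voxels: dict voxel_id -> (tsdf, weight)
    observations: list of (voxel_id, signed_distance) from one frame,
    each fused with unit weight for simplicity.
    """
    for vid, d in observations:
        d = max(-trunc, min(trunc, d))          # truncate the distance
        tsdf, w = voxels.get(vid, (0.0, 0.0))
        voxels[vid] = ((w * tsdf + d) / (w + 1.0), w + 1.0)
    return voxels
```

Averaging suppresses per-frame noise but needs many frames to do so, and it can only refine voxels that were actually observed; the paper's learned 3D CNN replaces this fixed rule so that occluded regions can be completed from shape priors.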
Space Exploration via Proximity Search
We investigate what computational tasks can be performed on a point set in
$\mathbb{R}^d$, if we are only given black-box access to it via nearest-neighbor
search. This is a reasonable assumption if the underlying point set is either
provided implicitly, or it is stored in a data structure that can answer such
queries. In particular, we show the following: (A) One can compute an
approximate bi-criteria $k$-center clustering of the point set, and more
generally compute a greedy permutation of the point set. (B) One can decide if
a query point is (approximately) inside the convex hull of the point set.
We also investigate the problem of clustering the given point set, such that
meaningful proximity queries can be carried out on the centers of the clusters,
instead of the whole point set.
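The greedy permutation mentioned in result (A) is the farthest-point ordering: repeatedly pick the point farthest from everything chosen so far, so that every length-$k$ prefix is a 2-approximate $k$-center solution. The sketch below is the classic version with direct access to the points; the paper's contribution is computing an approximate greedy permutation using only nearest-neighbor queries, which this sketch does not replicate.

```python
def greedy_permutation(points):
    """Farthest-point (greedy) permutation of a list of points
    (tuples of coordinates), starting arbitrarily from index 0.

    Classic O(n^2) version with full point access, shown for
    illustration; not the paper's nearest-neighbor-only algorithm.
    """
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    order = [0]                                  # start anywhere
    d = [dist2(p, points[0]) for p in points]    # dist^2 to chosen set
    for _ in range(len(points) - 1):
        nxt = max(range(len(points)), key=lambda i: d[i])
        order.append(nxt)
        for i, p in enumerate(points):
            d[i] = min(d[i], dist2(p, points[nxt]))
    return order
```

Already-chosen points have distance zero to the chosen set, so they are never re-selected, and each prefix of the returned order covers the set about as well as any $k$ centers can.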