264 research outputs found
Vibrating quantum billiards on Riemannian manifolds
Quantum billiards provide an excellent forum for the analysis of quantum
chaos. Toward this end, we consider quantum billiards with time-varying
surfaces, which provide an important example of quantum chaos that does not
require the semiclassical ($\hbar \to 0$) or high quantum-number
limits. We analyze vibrating quantum billiards using the framework of
Riemannian geometry. First, we derive a theorem detailing necessary conditions
for the existence of chaos in vibrating quantum billiards on Riemannian
manifolds. Numerical observations suggest that these conditions are also
sufficient. We prove the aforementioned theorem in full generality for one
degree-of-freedom boundary vibrations and briefly discuss a generalization to
billiards with two or more degrees of vibration. The requisite conditions are
direct consequences of the separability of the Helmholtz equation in a given
orthogonal coordinate frame, and they arise from orthogonality relations
satisfied by solutions of the Helmholtz equation. We then state and prove a
second theorem that provides a general form for the coupled ordinary
differential equations that describe quantum billiards with one
degree-of-vibration boundaries. This set of equations may be used to illustrate
KAM theory and also provides a simple example of semiquantum chaos. Moreover,
vibrating quantum billiards may be used as models for quantum-well
nanostructures, so this study has both theoretical and practical applications.
Comment: 23 pages, 6 figures, a few typos corrected. To appear in the International Journal of Bifurcation and Chaos (9/01).
Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding
We present a deep learning framework for probabilistic pixel-wise semantic
segmentation, which we term Bayesian SegNet. Semantic segmentation is an
important tool for visual scene understanding and a meaningful measure of
uncertainty is essential for decision making. Our contribution is a practical
system which is able to predict pixel-wise class labels with a measure of model
uncertainty. We achieve this by Monte Carlo sampling with dropout at test time
to generate a posterior distribution of pixel class labels. In addition, we
show that modelling uncertainty improves segmentation performance by 2-3%
across a number of state-of-the-art architectures such as SegNet, FCN and
Dilation Network, with no additional parametrisation. We also observe a
significant improvement in performance for smaller datasets where modelling
uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN
Scene Understanding and outdoor CamVid driving scenes datasets.
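The test-time dropout idea behind Bayesian SegNet can be sketched in a few lines. The toy one-layer "network" below, and all names in it, are illustrative assumptions, not the paper's actual encoder-decoder; the point is only the mechanism: keep dropout active at test time and treat the sample variance as model uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w, drop_p=0.5):
    # toy one-layer network; dropout stays ACTIVE at test time
    mask = rng.random(w.shape) > drop_p
    return x @ (w * mask) / (1.0 - drop_p)  # inverted-dropout rescaling

def mc_dropout_predict(x, w, n_samples=50):
    # Monte Carlo sampling: repeated stochastic forward passes approximate
    # a posterior over outputs; the per-output variance is the uncertainty
    samples = np.stack([forward(x, w) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)
```

In a real segmentation network the same loop would run over per-pixel class scores, and the per-pixel variance gives the uncertainty map.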
Knowledge Distillation for Multi-task Learning
Multi-task learning (MTL) aims to learn a single model that performs multiple
tasks, achieving good performance on all of them at a lower computational
cost. Learning such a model requires jointly optimizing the losses of a set
of tasks with different difficulty levels, magnitudes, and characteristics
(e.g. cross-entropy, Euclidean loss), leading to the imbalance problem in
multi-task learning. To address the imbalance problem, we propose a knowledge
distillation based method in this work. We first learn a task-specific model
for each task. We then learn the multi-task model by minimizing the
task-specific losses and by producing the same features as the task-specific
models. As each task-specific network encodes different features, we introduce small
task-specific adaptors to project multi-task features to the task-specific
features. In this way, the adaptors align the task-specific feature and the
multi-task feature, which enables balanced parameter sharing across tasks.
Extensive experimental results demonstrate that our method can optimize a
multi-task learning model in a more balanced way and achieve better overall
performance.
Comment: We propose a knowledge distillation method for addressing the imbalance problem in multi-task learning.
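A minimal sketch of the adaptor idea, with made-up shapes, a plain linear adaptor, and a simple L2 distillation term; the paper's exact adaptor architecture and loss weighting may differ.

```python
import numpy as np

def adapt(shared_feat, w_adaptor):
    # small task-specific adaptor: a linear projection from the shared
    # multi-task feature space into one task's feature space
    return shared_feat @ w_adaptor

def distillation_loss(shared_feat, teacher_feats, adaptor_ws):
    # L2 distance between the adapted shared features and each frozen
    # task-specific teacher's features, summed over tasks
    return sum(float(np.sum((adapt(shared_feat, w) - t) ** 2))
               for t, w in zip(teacher_feats, adaptor_ws))
```

Because each task has its own adaptor, the shared trunk is never forced to match any single teacher exactly, which is what makes the sharing balanced.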
Geometry meets semantics for semi-supervised monocular depth estimation
Depth estimation from a single image represents a very exciting challenge in
computer vision. While other image-based depth sensing techniques leverage on
the geometry between different viewpoints (e.g., stereo or structure from
motion), the lack of these cues within a single image renders the monocular
depth estimation task ill-posed. For inference, state-of-the-art
encoder-decoder architectures for monocular depth estimation rely on effective
feature representations learned at training time. For unsupervised training of
these models, geometry has been effectively exploited through suitable
image-warping losses computed from views acquired by a stereo rig or a moving camera.
In this paper, we take a further step forward, showing that learning semantic
information from images also effectively improves monocular depth
estimation. In particular, by leveraging semantically labeled images
together with unsupervised signals gained by geometry through an image warping
loss, we propose a deep learning approach aimed at joint semantic segmentation
and depth estimation. Our overall learning framework is semi-supervised, as we
deploy groundtruth data only in the semantic domain. At training time, our
network learns a common feature representation for both tasks and a novel
cross-task loss function is proposed. The experimental findings show that
jointly tackling depth prediction and semantic segmentation improves
depth estimation accuracy. In particular, on the KITTI dataset our network
outperforms state-of-the-art methods for monocular depth estimation.
Comment: 16 pages. Accepted to ACCV 2018.
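The unsupervised geometric signal these methods share can be illustrated with a toy horizontal warping loss. The sketch below uses 2-D grayscale arrays and nearest-neighbour sampling purely for clarity; real implementations use differentiable bilinear sampling over full images.

```python
import numpy as np

def warp_right_to_left(right, disparity):
    # reconstruct the left view by sampling the right image at x - d
    h, w = right.shape
    xs = np.tile(np.arange(w), (h, 1))
    src = np.clip(xs - np.round(disparity).astype(int), 0, w - 1)
    return np.take_along_axis(right, src, axis=1)

def photometric_loss(left, right, disparity):
    # image-warping loss: penalise mismatch between the left image and
    # the right image warped by the predicted disparity
    return float(np.abs(left - warp_right_to_left(right, disparity)).mean())
```

Minimizing this loss over predicted disparities is what lets geometry supervise depth without ground-truth depth labels.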
Estimating Depth from RGB and Sparse Sensing
We present a deep model that can accurately produce dense depth maps given an
RGB image with known depth at a very sparse set of pixels. The model works
simultaneously for both indoor/outdoor scenes and produces state-of-the-art
dense depth maps at nearly real-time speeds on both the NYUv2 and KITTI
datasets. We surpass the state-of-the-art for monocular depth estimation even
with depth values for only 1 out of every ~10000 image pixels, and we
outperform other sparse-to-dense depth methods at all sparsity levels. With
depth values for 1/256 of the image pixels, we achieve a mean absolute error of
less than 1% of actual depth on indoor scenes, comparable to the performance of
consumer-grade depth sensor hardware. Our experiments demonstrate that it would
indeed be possible to efficiently transform sparse depth measurements obtained
using e.g. lower-power depth sensors or SLAM systems into high-quality dense
depth maps.
Comment: European Conference on Computer Vision (ECCV) 2018. Updated to the camera-ready version with additional experiments.
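The sparse input side of such a system can be sketched as follows. The two-channel layout (sparse depth plus a validity mask) is a common convention assumed here for illustration, not necessarily this paper's exact input format.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_depth_input(depth, keep_ratio=1 / 256):
    # simulate a sparse sensor: keep ground-truth depth at a random
    # subset of pixels, zero elsewhere
    mask = rng.random(depth.shape) < keep_ratio
    sparse = np.where(mask, depth, 0.0)
    # stack sparse depth with its validity mask as network input channels
    return np.stack([sparse, mask.astype(depth.dtype)])
```

The mask channel lets the network distinguish "depth is zero" from "depth is unknown", which matters at the sparsity levels the abstract describes.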
Classical and Quantum Chaos in a quantum dot in time-periodic magnetic fields
We investigate the classical and quantum dynamics of an electron confined to
a circular quantum dot in the presence of homogeneous time-periodic magnetic
fields. The classical motion shows a transition to chaotic behavior depending
on the ratio of the ac to dc field magnitudes and on the cyclotron
frequency measured in units of the drive frequency. We determine a
phase boundary between regular and chaotic classical behavior in the
plane spanned by these two parameters. In the quantum regime we evaluate the quasi-energy
spectrum of the time-evolution operator. We show that the nearest neighbor
quasi-energy eigenvalues show a transition from level clustering to level
repulsion as one moves from the regular to the chaotic regime in this
parameter plane. The spectral statistics confirm this
transition. In the chaotic regime, the eigenfunction statistics coincide with
the Porter-Thomas prediction. Finally, we explicitly establish the phase space
correspondence between the classical and quantum solutions via the Husimi phase
space distributions of the model. Possible experimentally feasible conditions
to see these effects are discussed.
Comment: 26 pages and 17 PostScript figures; two large ones can be obtained from the author.
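The level-clustering versus level-repulsion diagnostic used above can be illustrated numerically. This toy comparison pits an uncorrelated (Poisson-like) spectrum against the eigenvalues of a random symmetric (GOE) matrix; it is a generic random-matrix illustration, not the paper's quasi-energy spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_mean_spacings(levels):
    # nearest-neighbour spacings, normalised to unit mean
    s = np.diff(np.sort(levels))
    return s / s.mean()

# Poisson-like spectrum: independent uniform levels (level clustering)
poisson_s = unit_mean_spacings(rng.random(4000))

# GOE-like spectrum: eigenvalues of a random symmetric matrix (level repulsion)
a = rng.normal(size=(1000, 1000))
goe_levels = np.linalg.eigvalsh((a + a.T) / 2)
goe_s = unit_mean_spacings(goe_levels[250:750])  # bulk of the spectrum

# level repulsion suppresses small spacings
frac = lambda s: float(np.mean(s < 0.1))
```

For a Poisson spectrum P(s) = e^{-s}, so roughly 10% of spacings fall below 0.1; for the GOE the Wigner surmise P(s) = (pi/2) s e^{-pi s^2/4} strongly suppresses small spacings, which is the signature of the regular-to-chaotic transition.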
Deep Depth From Focus
Depth from focus (DFF) is one of the classical ill-posed inverse problems in
computer vision. Most approaches recover the depth at each pixel based on the
focal setting which exhibits maximal sharpness. Yet, it is not obvious how to
reliably estimate the sharpness level, particularly in low-textured areas. In
this paper, we propose `Deep Depth From Focus (DDFF)' as the first end-to-end
learning approach to this problem. One of the main challenges we face is the
data hunger of deep neural networks. In order to obtain a significant
amount of focal stacks with corresponding groundtruth depth, we propose to
leverage a light-field camera with a co-calibrated RGB-D sensor. This allows us
to digitally create focal stacks of varying sizes. Compared to existing
benchmarks our dataset is 25 times larger, enabling the use of machine learning
for this inverse problem. We compare our results with state-of-the-art DFF
methods and we also analyze the effect of several key deep architectural
components. These experiments show that our proposed method `DDFFNet' achieves
state-of-the-art performance in all scenes, reducing depth error by more than
75% compared to the classical DFF methods.
Comment: Accepted to the Asian Conference on Computer Vision (ACCV) 2018.
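The classical per-pixel baseline that DDFF is compared against can be sketched as below. Using the Laplacian magnitude as the sharpness measure is one common choice, assumed here for illustration; the classical literature uses several such focus measures.

```python
import numpy as np

def sharpness(img):
    # 4-neighbour Laplacian magnitude as a simple per-pixel sharpness cue
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def classical_dff(stack, focus_depths):
    # classical depth from focus: per pixel, pick the focal slice
    # exhibiting maximal sharpness and return its focus distance
    sharp = np.stack([sharpness(s) for s in stack])
    return np.asarray(focus_depths)[np.argmax(sharp, axis=0)]
```

The failure mode the abstract mentions is visible here: in low-textured areas every slice has near-zero sharpness, so the argmax is essentially arbitrary.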
Pauli principle and chaos in a magnetized disk
We present results of a detailed quantum mechanical study of a gas of
noninteracting electrons confined to a circular boundary and subject to
homogeneous dc plus ac magnetic fields. We earlier found a one-particle
{\it classical} phase diagram, in the plane of the (scaled) Larmor frequency
versus the relative ac field strength, that
separates regular from chaotic regimes.
spectrum statistics changed from Poisson to Gaussian orthogonal ensembles in
the transition from classically integrable to chaotic dynamics. Here we find
that, as a function of the field parameters, there are clear
quantum signatures in the magnetic response, when going from the
single-particle classically regular to chaotic regimes. In the quasi-integrable
regime the magnetization non-monotonically oscillates between diamagnetic and
paramagnetic as the field parameters vary. We quantitatively understand this behavior
from a perturbation theory analysis. In the chaotic regime, however, we find
that the magnetization also oscillates but is {\it always}
diamagnetic. Equivalent results are also presented for the orbital currents. We
also find that the time-averaged energy grows sub-linearly in time in the
quasi-integrable regime but changes to a linear time dependence in the chaotic
regime. In contrast, the results with Bose statistics are akin to the
single-particle case and thus different from the fermionic case. We also give
an estimate of possible experimental parameters where our results may be seen in
semiconductor quantum dot billiards.
Comment: 22 pages, 7 GIF figures. Phys. Rev. E (1999).
Self-supervised Depth Estimation to Regularise Semantic Segmentation in Knee Arthroscopy
Intra-operative automatic semantic segmentation of knee joint structures can
assist surgeons during knee arthroscopy in terms of situational awareness.
However, due to poor imaging conditions (e.g., low texture, overexposure,
etc.), automatic semantic segmentation is a challenging task, which
explains the scarce literature on this topic. In this paper, we propose a
novel self-supervised monocular depth estimation method to regularise the training of
the semantic segmentation in knee arthroscopy. To further regularise the depth
estimation, we propose the use of clean training images captured by the stereo
arthroscope of routine objects (presenting none of the poor imaging conditions
and with rich texture information) to pre-train the model. We fine-tune this
model to produce both the semantic segmentation and self-supervised monocular
depth using stereo arthroscopic images taken from inside the knee. Using a data
set containing 3868 arthroscopic images captured during cadaveric knee
arthroscopy with semantic segmentation annotations, 2000 stereo image pairs of
cadaveric knee arthroscopy, and 2150 stereo image pairs of routine objects, we
show that our semantic segmentation regularised by self-supervised depth
estimation produces a more accurate segmentation than a state-of-the-art
semantic segmentation approach modeled exclusively with semantic segmentation
annotation.
Comment: 10 pages, 6 figures.
Understanding Real World Indoor Scenes With Synthetic Data
Scene understanding is a prerequisite to many high-level tasks for any automated intelligent machine operating in real-world environments. Recent attempts with supervised learning have shown promise in this direction but also highlighted the need for an enormous quantity of supervised data: performance increases in proportion to the amount of data used. However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth-based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models, we show comparable performance to state-of-the-art RGBD systems on the NYUv2 dataset despite using only depth data as input, and set a benchmark for depth-based segmentation on the SUN RGB-D dataset.
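The "appropriate noise models" step can be sketched as below. This depth-dependent Gaussian noise plus random missing pixels is a generic depth-camera corruption model assumed for illustration, not the paper's exact simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_synthetic_depth(clean_depth, sigma_rel=0.01, missing_p=0.05):
    # depth-dependent Gaussian noise: error grows with distance, as in
    # real structured-light and time-of-flight sensors
    noisy = clean_depth + rng.normal(0.0, sigma_rel, clean_depth.shape) * clean_depth
    # random invalid measurements, encoded as 0 like many depth cameras
    missing = rng.random(clean_depth.shape) < missing_p
    return np.where(missing, 0.0, noisy)
```

Training on corrupted renders rather than clean ones is what lets a model trained purely on synthetic scenes transfer to real sensor data.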