Experiments on the large-scale structure of turbulence in the near-jet region
The near region of an axisymmetric, turbulent jet was investigated. Turbulence quantities, as well as mean velocities, were measured between 3 and 23 diameters downstream of the nozzle. The mean velocity profiles were similar over most of this distance, whereas the turbulence quantities were far from equilibrium conditions. Across the jet, the rate of large-scale turbulence varied considerably; however, a Strouhal number based on the local velocity, the diameter of the jet, and the frequency of the large-scale turbulent oscillation remained relatively constant. The formation of the initial instability waves and the pairing of the vortices were examined. Turbulent fluctuations were observed only downstream of the pairing process.
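As a minimal worked example (the function name and input values are illustrative, not taken from the paper), the Strouhal number described above combines the oscillation frequency f, the jet diameter D and the local mean velocity U as St = f D / U:

    # Strouhal number of the large-scale oscillation: St = f * D / U.
    # The inputs below are illustrative; the paper reports only that St
    # stays roughly constant across the near-jet region.
    def strouhal_number(frequency_hz: float, diameter_m: float,
                        local_velocity_m_s: float) -> float:
        return frequency_hz * diameter_m / local_velocity_m_s

    # e.g. a 100 Hz oscillation of a 0.05 m jet at a local velocity of 20 m/s
    print(strouhal_number(100.0, 0.05, 20.0))  # -> 0.25
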
Vibrating quantum billiards on Riemannian manifolds
Quantum billiards provide an excellent forum for the analysis of quantum
chaos. Toward this end, we consider quantum billiards with time-varying
surfaces, which provide an important example of quantum chaos that does not
require the semiclassical ($\hbar \to 0$) or high quantum-number
limits. We analyze vibrating quantum billiards using the framework of
Riemannian geometry. First, we derive a theorem detailing necessary conditions
for the existence of chaos in vibrating quantum billiards on Riemannian
manifolds. Numerical observations suggest that these conditions are also
sufficient. We prove the aforementioned theorem in full generality for one
degree-of-freedom boundary vibrations and briefly discuss a generalization to
billiards with two or more degrees of vibration. The requisite conditions are
direct consequences of the separability of the Helmholtz equation in a given
orthogonal coordinate frame, and they arise from orthogonality relations
satisfied by solutions of the Helmholtz equation. We then state and prove a
second theorem that provides a general form for the coupled ordinary
differential equations that describe quantum billiards with one
degree-of-vibration boundaries. This set of equations may be used to illustrate
KAM theory and also provides a simple example of semiquantum chaos. Moreover,
vibrating quantum billiards may be used as models for quantum-well
nanostructures, so this study has both theoretical and practical applications.Comment: 23 pages, 6 figures, a few typos corrected. To appear in
International Journal of Bifurcation and Chaos (9/01
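As a hedged illustration of the coupled ordinary differential equations mentioned in the second theorem, the sketch below integrates a schematic two-mode semiquantum model: two billiard eigenmode amplitudes coupled to a single vibrating boundary coordinate. The 1/a^2 energy scaling, the antisymmetric coupling form and all constants are assumptions chosen for readability, not the paper's derived equations.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Schematic two-mode semiquantum model: complex amplitudes c1, c2 of two
    # billiard eigenstates coupled to a vibrating boundary of size a with
    # conjugate momentum p.  Hard-wall mode energies scale as e_n / a**2; the
    # (adot/a) term is a generic, norm-preserving boundary coupling.  All
    # constants are illustrative.
    hbar, mass, k_spring, a0, mu = 1.0, 10.0, 5.0, 1.0, 0.5
    e = np.array([1.0, 4.0])  # bare mode energies at a = 1

    def rhs(t, y):
        c = y[0:2] + 1j * y[2:4]          # unpack real/imag amplitude parts
        a, p = y[4], y[5]
        adot = p / mass
        dc = -1j / hbar * (e / a**2) * c  # free evolution of each mode
        dc = dc + (adot / a) * mu * np.array([c[1], -c[0]])  # boundary coupling
        # classical boundary: harmonic restoring force plus quantum pressure
        force = -k_spring * (a - a0) + 2.0 * np.sum(e * np.abs(c) ** 2) / a**3
        return [dc[0].real, dc[1].real, dc[0].imag, dc[1].imag, adot, force]

    y0 = [1.0, 0.0, 0.0, 0.0, 1.2, 0.0]   # start in mode 1, boundary displaced
    sol = solve_ivp(rhs, (0.0, 50.0), y0, max_step=0.01)
    print(sol.y[4, -1])                    # boundary coordinate at final time
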
Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding
We present a deep learning framework for probabilistic pixel-wise semantic
segmentation, which we term Bayesian SegNet. Semantic segmentation is an
important tool for visual scene understanding and a meaningful measure of
uncertainty is essential for decision making. Our contribution is a practical
system which is able to predict pixel-wise class labels with a measure of model
uncertainty. We achieve this by Monte Carlo sampling with dropout at test time
to generate a posterior distribution of pixel class labels. In addition, we
show that modelling uncertainty improves segmentation performance by 2-3%
across a number of state-of-the-art architectures such as SegNet, FCN and
Dilation Network, with no additional parametrisation. We also observe a
significant improvement in performance for smaller datasets where modelling
uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN
Scene Understanding and outdoor CamVid driving scenes datasets.
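The Monte Carlo dropout procedure described above is straightforward to sketch. The snippet below is a generic PyTorch rendering, not the authors' implementation; the model, layer choices and sample count are stand-ins. Dropout layers are kept stochastic at test time, the softmax outputs of several forward passes are averaged, and the per-pixel variance serves as the uncertainty map.

    import torch

    def mc_dropout_predict(model, image, n_samples=20):
        # Keep dropout stochastic at test time: switch the whole model to
        # eval mode, then flip only the Dropout layers back to train mode.
        model.eval()
        for m in model.modules():
            if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
                m.train()
        # Average softmax outputs over n_samples stochastic forward passes.
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(image), dim=1)
                                 for _ in range(n_samples)])  # (S, B, C, H, W)
        mean = probs.mean(dim=0)                   # averaged class probabilities
        uncertainty = probs.var(dim=0).sum(dim=1)  # per-pixel variance map
        return mean.argmax(dim=1), uncertainty
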
Geometry meets semantics for semi-supervised monocular depth estimation
Depth estimation from a single image represents a very exciting challenge in
computer vision. While other image-based depth sensing techniques leverage
the geometry between different viewpoints (e.g., stereo or structure from
motion), the lack of such cues within a single image renders the monocular
depth estimation task ill-posed. For inference, state-of-the-art
encoder-decoder architectures for monocular depth estimation rely on effective
feature representations learned at training time. For unsupervised training of
these models, geometry has been effectively exploited through suitable image
warping losses computed from views acquired by a stereo rig or a moving camera.
In this paper, we take a further step forward, showing that learning semantic
information from images also effectively improves monocular depth
estimation. In particular, by leveraging semantically labeled images
together with unsupervised signals gained by geometry through an image warping
loss, we propose a deep learning approach aimed at joint semantic segmentation
and depth estimation. Our overall learning framework is semi-supervised, as we
deploy groundtruth data only in the semantic domain. At training time, our
network learns a common feature representation for both tasks and a novel
cross-task loss function is proposed. The experimental findings show how
jointly tackling depth prediction and semantic segmentation improves depth
estimation accuracy. In particular, on the KITTI dataset our network
outperforms state-of-the-art methods for monocular depth estimation.
Comment: 16 pages, accepted to ACCV 2018.
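A minimal sketch of the image-warping supervision invoked above, assuming a horizontal-disparity stereo setup; the disparity normalisation and plain L1 penalty are illustrative simplifications, not the paper's exact loss:

    import torch
    import torch.nn.functional as F

    def photometric_warp_loss(left, right, disparity):
        # Reconstruct the left view by sampling the right image at positions
        # shifted horizontally by the predicted disparity (in pixels), then
        # penalise the L1 photometric error.  Shapes: images (B, 3, H, W),
        # disparity (B, 1, H, W).
        b, _, h, w = left.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=left.device),
            torch.linspace(-1, 1, w, device=left.device), indexing="ij")
        xs = xs.unsqueeze(0).expand(b, -1, -1) \
            - 2.0 * disparity.squeeze(1) / w      # shift in normalised coords
        grid = torch.stack((xs, ys.unsqueeze(0).expand(b, -1, -1)), dim=-1)
        warped = F.grid_sample(right, grid, align_corners=True)
        return (left - warped).abs().mean()
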
Can ground truth label propagation from video help semantic segmentation?
For state-of-the-art semantic segmentation task, training convolutional
neural networks (CNNs) requires dense pixelwise ground truth (GT) labeling,
which is expensive and involves extensive human effort. In this work, we study
the possibility of using auxiliary ground truth, so-called pseudo ground
truth (PGT), to improve the performance. The PGT is obtained by
propagating the labels of a GT frame to its subsequent frames in the video
using a simple CRF-based cue integration framework. Our main contribution is
to demonstrate the use of noisy PGT along with GT to improve the performance of
a CNN. We perform a systematic analysis to find the right kind of PGT that
needs to be added along with the GT for training a CNN. In this regard, we
explore three aspects of PGT which influence the learning of a CNN: i) the PGT
labeling has to be of good quality; ii) the PGT images have to be different
compared to the GT images; iii) the PGT has to be trusted differently than GT.
We conclude that PGT which is diverse from GT images and has good quality of
labeling can indeed help improve the performance of a CNN. Also, when PGT is
multiple folds larger than GT, weighing down the trust on PGT helps in
improving the accuracy. Finally, we show that using PGT along with GT
increases the IoU accuracy of a Fully Convolutional Network (FCN) on the
CamVid data. We believe such an approach can be used to train CNNs
for semantic video segmentation where sequentially labeled image frames are
needed. To this end, we provide recommendations for using PGT strategically for
semantic segmentation and hence bypass the need for extensive human efforts in
labeling.
Comment: To appear at the ECCV 2016 Workshop on Video Segmentation.
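A minimal sketch of the third aspect above, trusting PGT less than GT during training; the per-sample weighting scheme and the weight value are illustrative assumptions, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def gt_pgt_loss(logits, labels, is_pgt, pgt_weight=0.5):
        # Cross-entropy that down-weights pseudo-ground-truth samples.
        # logits: (B, C, H, W), labels: (B, H, W) long,
        # is_pgt: (B,) bool flagging PGT frames in the batch.
        per_pixel = F.cross_entropy(logits, labels, reduction="none")
        weights = torch.where(is_pgt.view(-1, 1, 1),
                              torch.full_like(per_pixel, pgt_weight),
                              torch.ones_like(per_pixel))
        return (weights * per_pixel).mean()
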
Joint Learning of Intrinsic Images and Semantic Segmentation
Semantic segmentation of outdoor scenes is problematic when there are
variations in imaging conditions. It is known that albedo (reflectance) is
invariant to all kinds of illumination effects. Thus, using reflectance images
for the semantic segmentation task can be favorable. Additionally, not only
may segmentation benefit from reflectance, but segmentation may in turn be
useful for reflectance computation. Therefore, in this paper, the tasks of semantic
segmentation and intrinsic image decomposition are considered as a combined
process by exploring their mutual relationship in a joint fashion. To that end,
we propose a supervised end-to-end CNN architecture to jointly learn intrinsic
image decomposition and semantic segmentation. We analyze the gains of
addressing those two problems jointly. Moreover, new cascade CNN architectures
for intrinsic-for-segmentation and segmentation-for-intrinsic are proposed as
single tasks. Furthermore, a dataset of 35K synthetic images of natural
environments is created with corresponding albedo and shading (intrinsics), as
well as semantic labels (segmentation) assigned to each object/scene. The
experiments show that joint learning of intrinsic image decomposition and
semantic segmentation is beneficial for both tasks for natural scenes. Dataset
and models are available at: https://ivi.fnwi.uva.nl/cv/intrinseg
Comment: ECCV 2018.
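One plausible rendering of the supervised joint objective, built around the standard intrinsic image-formation constraint image = albedo * shading (element-wise); the loss terms and unit weights are a sketch under assumptions, not the paper's exact objective:

    import torch

    def intrinsic_losses(image, albedo_pred, shading_pred,
                         albedo_gt, shading_gt):
        # Physical reconstruction constraint: the input image should equal
        # the element-wise product of predicted albedo and shading.
        recon = (albedo_pred * shading_pred - image).abs().mean()
        # Supervised terms against the synthetic ground-truth intrinsics.
        l_albedo = (albedo_pred - albedo_gt).pow(2).mean()
        l_shading = (shading_pred - shading_gt).pow(2).mean()
        return recon + l_albedo + l_shading
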
Estimating Depth from RGB and Sparse Sensing
We present a deep model that can accurately produce dense depth maps given an
RGB image with known depth at a very sparse set of pixels. The model works
simultaneously for both indoor/outdoor scenes and produces state-of-the-art
dense depth maps at nearly real-time speeds on both the NYUv2 and KITTI
datasets. We surpass the state-of-the-art for monocular depth estimation even
with depth values for only 1 out of every ~10000 image pixels, and we
outperform other sparse-to-dense depth methods at all sparsity levels. With
depth values for 1/256 of the image pixels, we achieve a mean absolute error of
less than 1% of actual depth on indoor scenes, comparable to the performance of
consumer-grade depth sensor hardware. Our experiments demonstrate that it would
indeed be possible to efficiently transform sparse depth measurements obtained
using e.g. lower-power depth sensors or SLAM systems into high-quality dense
depth maps.
Comment: European Conference on Computer Vision (ECCV) 2018. Updated to the camera-ready version with additional experiments.
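A sketch of how such a sparse input and the quoted error figure might be simulated; the uniform random sampling and zero-filling are assumptions, not necessarily the authors' protocol:

    import numpy as np

    def sample_sparse_depth(depth, keep_fraction=1.0 / 256, seed=0):
        # Simulate a sparse depth input by keeping a random subset of pixels
        # (e.g. 1/256 of them); unsampled locations are set to zero.
        rng = np.random.default_rng(seed)
        mask = rng.random(depth.shape) < keep_fraction
        return depth * mask, mask

    def mean_absolute_relative_error(pred, gt):
        # MAE as a fraction of the true depth, i.e. the "<1% of actual
        # depth" style of figure quoted above.
        valid = gt > 0
        return np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid])
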
Deep Depth From Focus
Depth from focus (DFF) is one of the classical ill-posed inverse problems in
computer vision. Most approaches recover the depth at each pixel based on the
focal setting which exhibits maximal sharpness. Yet, it is not obvious how to
reliably estimate the sharpness level, particularly in low-textured areas. In
this paper, we propose 'Deep Depth From Focus (DDFF)' as the first end-to-end
learning approach to this problem. One of the main challenges we face is the
data hunger of deep neural networks. In order to obtain a significant
amount of focal stacks with corresponding groundtruth depth, we propose to
leverage a light-field camera with a co-calibrated RGB-D sensor. This allows us
to digitally create focal stacks of varying sizes. Compared to existing
benchmarks our dataset is 25 times larger, enabling the use of machine learning
for this inverse problem. We compare our results with state-of-the-art DFF
methods and we also analyze the effect of several key deep architectural
components. These experiments show that our proposed method 'DDFFNet' achieves
state-of-the-art performance in all scenes, reducing depth error by more than
75% compared to the classical DFF methods.
Comment: accepted to the Asian Conference on Computer Vision (ACCV) 2018.
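For contrast with DDFFNet, a compact version of the classical DFF baseline described above: per pixel, pick the focal slice of maximal local sharpness. The Laplacian-energy focus measure and window size are common choices, not necessarily those of the compared methods:

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def classical_dff(focal_stack, focus_depths, window=9):
        # focal_stack: (N, H, W) grayscale slices; focus_depths: N focus
        # distances.  Compute a local sharpness map per slice, then take the
        # depth of the sharpest slice at every pixel.
        sharpness = np.stack([
            uniform_filter(laplace(img.astype(np.float64)) ** 2, size=window)
            for img in focal_stack
        ])                                    # (N, H, W) focus measure
        best = sharpness.argmax(axis=0)       # index of sharpest slice
        return np.asarray(focus_depths)[best]
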
The consequence of excess configurational entropy on fragility: the case of a polymer/oligomer blend
By taking advantage of the molecular weight dependence of the glass
transition of polymers and their ability to form perfectly miscible blends, we
propose a way to modify the fragility of a system, from fragile to strong,
keeping the same glass properties, i.e. vibrational density of states,
mean-square displacement and local structure. Both slow and fast dynamics are
investigated by calorimetry and neutron scattering in an athermal
polystyrene/oligomer blend, and compared to those of a pure 17-mer
polystyrene of the same Tg, taken as a reference. Whereas the blend and the pure 17-mer
have the same heat capacity in the glass and in the liquid, their fragilities
differ strongly. This difference in fragility is related to an extra
configurational entropy created by the mixing process and acting at a scale
much larger than the interchain distance, without affecting the fast dynamics
and the structure of the glass.
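The link between configurational entropy and fragility invoked here is conventionally framed through the Adam-Gibbs relation; the standard textbook form below is supplied for orientation and is not taken from the paper:

    % Adam-Gibbs: structural relaxation time \tau versus configurational
    % entropy S_c(T); \tau_0 and C are material constants.  Excess S_c from
    % mixing weakens the temperature dependence of \tau near Tg, i.e. makes
    % the blend a stronger (less fragile) glass-former.
    \tau(T) = \tau_0 \exp\!\left(\frac{C}{T\, S_c(T)}\right)
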
Composition of Tantalum Nitride Thin Films Grown by Low-Energy Nitrogen Implantation: A Factor Analysis Study of the Ta 4f XPS Core Level
Tantalum nitride thin films have been grown by in situ nitrogen implantation
of metallic tantalum at room temperature over the energy range of 0.5-5 keV.
X-ray photoelectron spectroscopy (XPS) and Factor Analysis (FA) have been used
to characterise the chemical composition of the films. The number of the
different Ta-N phases formed during nitrogen implantation, as well as their
spectral shape and concentrations, have been obtained using principal component
analysis (PCA) and iterative target transformation factor analysis (ITTFA),
without any prior assumptions. According to FA results, the composition of the
tantalum nitride films depends on both the ion dose and ion energy, and is
mainly formed by a mixture of metallic tantalum, beta-TaN0.05, gamma-Ta2N and
cubic/hexagonal TaN phases.
Comment: 24 pages, 5 figures, submitted to Applied Physics
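The PCA step of the factor analysis can be sketched as follows; the cumulative-variance threshold used to count significant components is an illustrative criterion (the ITTFA rotation itself is not shown):

    import numpy as np
    from sklearn.decomposition import PCA

    def estimate_n_phases(spectra, variance_threshold=0.999):
        # spectra: (n_spectra, n_energy_channels) matrix of Ta 4f XPS scans.
        # The number of significant principal components estimates how many
        # distinct Ta-N phases contribute to the measured spectra.
        pca = PCA().fit(spectra)
        cumulative = np.cumsum(pca.explained_variance_ratio_)
        return int(np.searchsorted(cumulative, variance_threshold) + 1)
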