Morphological evolution of a 3D CME cloud reconstructed from three viewpoints
The propagation properties of coronal mass ejections (CMEs) are crucial for
predicting their geomagnetic effects. A newly developed three-dimensional (3D)
mask-fitting reconstruction method using coronagraph images from three
viewpoints is described and applied to the CME ejected on August 7, 2010. The
CME's 3D localisation, real shape and morphological evolution are presented.
Due to its interaction with the ambient solar wind, the morphology of this CME
changed significantly in the early phase of its evolution. Two hours after its
initiation, it was expanding almost self-similarly. The CME's 3D localisation
helps link remote sensing observations to in situ measurements. The
investigated CME propagated towards Venus, with its flank just touching
STEREO B. Its corresponding ICME in interplanetary space shows a possible
signature of a magnetic cloud with a preceding shock in VEX observations, while
from STEREO B only a shock is observed. We have calculated three principal axes
for the reconstructed 3D CME cloud. The orientation of the major axis is in
general consistent with the orientation of a filament (polarity inversion line)
observed by SDO/AIA and SDO/HMI. The flux-rope axis derived by MVA analysis of
VEX data indicates a radially directed axis orientation; it might be that
locally only the leg of the flux rope passed through VEX. The height and speed
profiles from the Sun to Venus are obtained. We find that the CME speed had
possibly adjusted to the speed of the ambient solar wind flow after leaving the
COR2 field of view and before arriving at Venus. A southward deflection of the
CME from the source region is found from the trajectory of the CME's geometric
center. We attribute it to the influence of the coronal hole from which the
fast solar wind emanated. Comment: ApJ, accepted
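The principal-axis calculation described for the reconstructed 3D CME cloud can be sketched as an eigendecomposition of the point cloud's covariance matrix (standard PCA). This is a generic illustration, not the paper's actual pipeline; the function name and the synthetic cloud are assumptions:

```python
import numpy as np

def principal_axes(points):
    """Principal axes of a 3D point cloud via eigendecomposition
    of its covariance matrix (PCA). Returns the variances and the
    unit axes, sorted from major to minor (rows of a 3x3 matrix)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # major axis first
    return eigvals[order], eigvecs[:, order].T

# synthetic elongated cloud, deliberately stretched along x
rng = np.random.default_rng(0)
cloud = rng.normal(size=(10000, 3)) * np.array([5.0, 2.0, 1.0])
lengths, axes = principal_axes(cloud)
print(np.abs(axes[0]))  # the major axis should align with x
```

The orientation of the first returned axis is what would be compared against the filament (polarity inversion line) orientation.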
Multi-Scale 3D Scene Flow from Binocular Stereo Sequences
Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization, two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach. National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108)
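The geometry underlying two-camera scene flow can be sketched as follows, assuming a rectified stereo pair with known focal length and baseline (the camera parameters below are made-up values). Only the deterministic back-projection step is shown, not the paper's probabilistic fusion of flow and disparity distributions:

```python
import numpy as np

def backproject(x, y, d, f, B, cx, cy):
    """Pixel (x, y) with disparity d -> 3D point in the left camera
    frame, for a rectified stereo pair with focal length f (pixels)
    and baseline B (metres)."""
    Z = f * B / d
    X = (x - cx) * Z / f
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

def scene_flow(x, y, d0, u, v, d1, f=700.0, B=0.12, cx=320.0, cy=240.0):
    """3D motion of one point: back-project at time t, then again at
    t+1 using the optical flow (u, v) and the new disparity d1."""
    p0 = backproject(x, y, d0, f, B, cx, cy)
    p1 = backproject(x + u, y + v, d1, f, B, cx, cy)
    return p1 - p0

# a point that moves 5 px right in the image while its disparity
# shrinks from 40 to 35 px, i.e. it recedes from the camera
motion = scene_flow(300.0, 200.0, 40.0, 5.0, 0.0, 35.0)
print(motion)  # dZ > 0: the point moved away
```

Estimating flow and disparity jointly with their uncertainties, as the paper proposes, makes this back-projection step far less noise-sensitive than chaining independent estimates.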
Multi-frame scene-flow estimation using a patch model and smooth motion prior
This paper addresses the problem of estimating the dense 3D motion of a scene over several frames using a set of calibrated cameras. Most current 3D motion estimation techniques are limited to estimating the motion over a single frame, unless a strong prior model of the scene (such as a skeleton) is introduced. Estimating the 3D motion of a general scene is difficult due to untextured surfaces, complex movements and occlusions. In this paper, we show that it is possible to track the surfaces of a scene over several frames, by introducing an effective prior on the scene motion. Experimental results show that the proposed method estimates the dense scene-flow over multiple frames, without the need for multiple-view reconstructions at every frame. Furthermore, the accuracy of the proposed method is demonstrated by comparing the estimated motion against a ground truth.
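A minimal instance of a smooth motion prior, assuming a quadratic penalty on acceleration (the paper's actual patch model and prior are not reproduced here), is closed-form trajectory smoothing:

```python
import numpy as np

def smooth_trajectory(y, lam=10.0):
    """Smooth a noisy trajectory y (T x 3) by penalising acceleration:
    minimise ||x - y||^2 + lam * ||D2 x||^2, where D2 is the
    second-difference operator. Closed form: (I + lam*D2'D2) x = y."""
    T = len(y)
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    A = np.eye(T) + lam * D2.T @ D2
    return np.linalg.solve(A, y)

# noisy observations of a surface patch moving on a straight line
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 30)[:, None]
truth = t * np.array([1.0, 2.0, 0.5])
noisy = truth + 0.05 * rng.normal(size=truth.shape)
smoothed = smooth_trajectory(noisy)
err_raw = np.abs(noisy - truth).mean()
err_smooth = np.abs(smoothed - truth).mean()
print(err_smooth < err_raw)  # the prior reduces the error
```

Because the second-difference operator annihilates linear motion, constant-velocity trajectories pass through unchanged and only the noise is attenuated, which is the qualitative behaviour a smooth motion prior is meant to provide.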
Calibration and Sensitivity Analysis of a Stereo Vision-Based Driver Assistance System
On http://intechweb.org/, under the "Books" tab, search for the title "Stereo Vision" and see Chapter 1.
Single-image Tomography: 3D Volumes from 2D Cranial X-Rays
As many different 3D volumes can produce the same 2D x-ray image, inverting
this process is challenging. We show that recent deep-learning-based
convolutional neural networks can solve this task. As the main challenge in
learning is the sheer amount of data created when extending the 2D image into a
3D volume, we suggest first learning a coarse, fixed-resolution volume, which
is then fused in a second step with the input x-ray into a high-resolution
volume. To train and validate our approach we introduce a new dataset that
comprises close to half a million computer-simulated 2D x-ray images of 3D
volumes scanned from 175 mammalian species. Applications of our approach
include stereoscopic rendering of legacy x-ray images and re-rendering of
x-rays with changes of illumination, view pose or geometry. Our evaluation
includes comparisons to previous tomography work and to previous learning
methods using our data, a user study, and an application to a set of real x-rays.
Streamers in air splitting into three branches
We investigate the branching of positive streamers in air and present the
first systematic investigation of splitting into more than two branches. We
study discharges in 100 mbar artificial air exposed to voltage pulses of 10 kV
applied to a needle electrode 160 mm above a grounded plate. By imaging the
discharge with two cameras from three angles, we establish that about every
200th branching event is a branching into three. Branching into three occurs
more frequently for the relatively thicker streamers. In fact, we find that
the total streamer cross-sectional area before and after a branching event is
roughly the same. Comment: 6 pages, 7 figures
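The reported near-conservation of total cross-sectional area across a branching event can be expressed as a small check. The radii below are hypothetical illustrative values, not measurements from the paper:

```python
import numpy as np

def cross_section_ratio(parent_radius, child_radii):
    """Ratio of the total child cross-sectional area to the parent's.
    Values near 1 correspond to the reported approximate conservation
    of total streamer cross-section across a branching event."""
    parent = np.pi * parent_radius**2
    children = np.pi * np.sum(np.array(child_radii)**2)
    return children / parent

# hypothetical radii (arbitrary units) for a parent streamer
# splitting into three branches
print(cross_section_ratio(1.0, [0.62, 0.55, 0.56]))  # close to 1.0
```

Equivalently, conservation means the child radii satisfy r1^2 + r2^2 + r3^2 ≈ r_parent^2, so each of three roughly equal branches would have a radius near r_parent / sqrt(3).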