Reconstructing CMEs with Coordinated Imaging and In Situ Observations: Global Structure, Kinematics, and Implications for Space Weather Forecasting
See the PDF for details. Comment: 45 pages, 16 figures, ApJ, in press
Depth mapping of integral images through viewpoint image extraction with a hybrid disparity analysis algorithm
Integral imaging is a technique capable of displaying 3-D images with continuous parallax in full natural color, and it is one of the most promising methods for producing smooth 3-D images. Extracting depth information from an integral image has applications ranging from remote inspection, robotic vision, medical imaging, and virtual reality to content-based image coding and manipulation for integral-imaging-based 3-D TV. This paper presents a method of generating a depth map from unidirectional integral images through viewpoint image extraction, using a hybrid disparity analysis algorithm that combines multi-baseline, neighbourhood-constraint, and relaxation strategies. It is shown that a depth map with few areas of uncertainty can be obtained from both computer-generated and photographically captured integral images using this approach. Acceptable depth maps can be achieved even from photographically captured integral images containing complicated object scenes.
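As a rough illustration of the multi-baseline disparity stage described above, the following Python sketch scores candidate disparities across several extracted viewpoint images; the function name, window size, and SciPy-based cost aggregation are assumptions for illustration, and the neighbourhood-constraint and relaxation refinement stages are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multibaseline_disparity(ref, views, baselines, max_disp=16, win=5):
    """Winner-take-all multi-baseline disparity sketch (hypothetical helper).

    ref and each entry of views are 2-D grayscale viewpoint images;
    baselines[i] is the baseline of views[i] relative to ref, in units of the
    smallest baseline. Because the true depth predicts a disparity
    proportional to each baseline, summing matching costs across baselines
    suppresses ambiguous minima that a single image pair would leave.
    """
    h, w = ref.shape
    ref = ref.astype(np.float64)
    cost = np.empty((max_disp, h, w))
    for d in range(max_disp):
        total = np.zeros((h, w))
        for img, b in zip(views, baselines):
            # Disparity scales with baseline; border wrap is a sketch shortcut.
            shifted = np.roll(img.astype(np.float64), int(round(d * b)), axis=1)
            total += uniform_filter((ref - shifted) ** 2, size=win)  # windowed SSD
        cost[d] = total
    return np.argmin(cost, axis=0)  # disparity with the lowest summed cost
```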
Propagation of an Earth-directed coronal mass ejection in three dimensions
Solar coronal mass ejections (CMEs) are the most significant drivers of
adverse space weather at Earth, but the physics governing their propagation
through the heliosphere is not well understood. While stereoscopic imaging of
CMEs with the Solar Terrestrial Relations Observatory (STEREO) has provided
some insight into their three-dimensional (3D) propagation, the mechanisms
governing their evolution remain unclear due to difficulties in reconstructing
their true 3D structure. Here we use a new elliptical tie-pointing technique to
reconstruct a full CME front in 3D, enabling us to quantify its deflected
trajectory from high latitudes toward the ecliptic, and measure its increasing
angular width and propagation from 2-46 solar radii (approximately 0.2 AU).
Beyond 7 solar radii, we show that its motion is determined by an aerodynamic
drag in the solar wind and, using our reconstruction as input for a 3D
magnetohydrodynamic simulation, we determine an accurate arrival time at the
Lagrangian L1 point near Earth. Comment: 5 figures, 2 supplementary movies
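The aerodynamic-drag regime noted above is commonly written as dv/dt = -γ(v - w)|v - w|, where w is the solar-wind speed and γ a drag parameter. The brief Python sketch below integrates that equation with illustrative values of γ and w; it is a toy kinematic model, not the authors' 3D magnetohydrodynamic simulation.

```python
R_SUN = 6.957e8   # solar radius [m]
AU = 1.496e11     # astronomical unit [m]

def drag_travel_time(r0_rsun, v0, w=400e3, gamma=1e-10, dt=60.0):
    """Forward-Euler integration of dv/dt = -gamma*(v - w)*|v - w| from
    r0 (in solar radii) out to 1 AU; returns the travel time in hours.
    gamma [1/m] and w [m/s] are illustrative values, not fitted ones."""
    r, v, t = r0_rsun * R_SUN, v0, 0.0
    while r < AU:
        v += -gamma * (v - w) * abs(v - w) * dt  # drag pulls v toward w
        r += v * dt
        t += dt
    return t / 3600.0

# Example: a 1000 km/s CME released at 7 solar radii into a 400 km/s wind.
print(f"arrival at 1 AU after ~{drag_travel_time(7, 1000e3):.0f} h")
```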
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
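As a concrete illustration of the event representation described above, the short Python sketch below accumulates a stream of (time, location, polarity) events into a signed brightness-change frame over a time window; the field names and the synthetic events are assumptions, not any specific camera's output format.

```python
import numpy as np

def events_to_frame(events, shape, t0, t1):
    """Sum the signed polarities of all events with t0 <= t < t1 into one
    frame. events is a structured array with fields t, x, y, p (p in {-1, +1}),
    mirroring the per-event time, location, and sign described above."""
    frame = np.zeros(shape, dtype=np.int32)
    window = events[(events["t"] >= t0) & (events["t"] < t1)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])  # handles repeats
    return frame

# Example: three synthetic events on a 4x4 sensor, two at the same pixel.
dtype = [("t", "f8"), ("x", "i4"), ("y", "i4"), ("p", "i4")]
evts = np.array([(0.001, 1, 2, 1), (0.002, 1, 2, 1), (0.003, 3, 0, -1)],
                dtype=dtype)
print(events_to_frame(evts, (4, 4), 0.0, 0.01))
```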
Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging
A variety of techniques such as light field, structured illumination, and
time-of-flight (TOF) are commonly used for depth acquisition in consumer
imaging, robotics and many other applications. Unfortunately, each technique
suffers from its individual limitations preventing robust depth sensing. In
this paper, we explore the strengths and weaknesses of combining light field
and time-of-flight imaging, particularly the feasibility of an on-chip
implementation as a single hybrid depth sensor. We refer to this combination as
depth field imaging. Depth fields combine light field advantages such as
synthetic aperture refocusing with TOF imaging advantages such as high depth
resolution and coded signal processing to resolve multipath interference. We
show applications including synthesizing virtual apertures for TOF imaging,
improved depth mapping through partial and scattering occluders, and single
frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding,
depth fields can improve depth sensing in the wild and generate new insights
into the dimensions of light's plenoptic function. Comment: 9 pages, 8 figures, Accepted to 3DV 2015
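To make the single-frequency phase-unwrapping problem mentioned above concrete, the sketch below converts a wrapped continuous-wave TOF phase to depth via d = c·φ/(4π·f_mod); the 50 MHz modulation frequency and the helper name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def tof_depth(phase, f_mod=50e6):
    """Depth from a wrapped TOF phase in [0, 2*pi): d = c*phase/(4*pi*f_mod).
    A single frequency is unambiguous only out to c/(2*f_mod), ~3 m at 50 MHz."""
    return C * phase / (4.0 * np.pi * f_mod)

# A target at 4 m wraps past the ~3 m unambiguous range and aliases to ~1 m;
# the depth-field combination supplies extra cues to unwrap such cases.
true_depth = 4.0
wrapped = (4.0 * np.pi * 50e6 * true_depth / C) % (2.0 * np.pi)
print(f"{tof_depth(wrapped):.2f} m")  # prints ~1.00 m, the wrapped alias
```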
- …