Learning-based Spatial and Angular Information Separation for Light Field Compression
Light fields are a type of image data that capture both spatial and angular
scene information by recording light rays emitted by a scene from different
orientations. In this context, spatial information is defined as features that
remain static regardless of perspectives, while angular information refers to
features that vary between viewpoints. We propose a novel neural network that,
by design, can separate angular and spatial information of a light field. The
network represents spatial information using spatial kernels shared among all
Sub-Aperture Images (SAIs), and angular information using sets of angular
kernels for each SAI. To further improve the representation capability of the
network without increasing the parameter count, we also introduce angular kernel
allocation and kernel tensor decomposition mechanisms. Extensive experiments
demonstrate the benefits of information separation: when applied to the
compression task, our network outperforms other state-of-the-art methods by a
large margin. Moreover, angular information can be transferred to other scenes
to render dense views, demonstrating successful separation and a potential use
case for the view synthesis task. We plan to release the code upon acceptance
of the paper to encourage further research on this topic.
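The core idea of sharing spatial kernels across all Sub-Aperture Images while keeping per-SAI angular kernels can be sketched as follows. This is a minimal illustrative sketch, not the authors' architecture: the kernel shapes, the 1x1 "spatial" transform, and the per-view channel modulation are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, C_in, C_out, H, W = 9, 4, 8, 16, 16  # hypothetical sizes

# Shared "spatial" kernel: one transform (C_in -> C_out) used by every SAI,
# capturing features that stay constant across viewpoints.
W_spatial = rng.standard_normal((C_out, C_in))

# Per-view "angular" kernels: one small set of weights per SAI, capturing
# features that vary between viewpoints.
W_angular = rng.standard_normal((n_views, C_out))

views = rng.standard_normal((n_views, C_in, H, W))  # the SAI stack

# Apply the shared spatial transform to all views, then modulate each view
# with its own angular kernel.
feats = np.einsum("oc,vchw->vohw", W_spatial, views)  # shared across views
out = feats * W_angular[:, :, None, None]             # view-specific

print(out.shape)  # (9, 8, 16, 16)
```

Because `W_spatial` is stored once while `W_angular` costs only `C_out` weights per view, most of the capacity is shared, which is what makes the separated representation compact for compression.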
Neural View-Interpolation for Sparse Light Field Video
We suggest representing light field (LF) videos as "one-off" neural networks (NNs), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views. Initially, this sounds like a bad idea for three main reasons: First, an NN LF will likely have lower quality than a same-sized pixel-basis representation. Second, only a few training examples, e.g., nine views per frame, are available for sparse LF videos. Third, there is no generalization across LFs, but across view and time instead. Consequently, a network needs to be trained for each LF video. Surprisingly, these problems can turn into substantial advantages: Unlike the linear pixel basis, an NN has to come up with a compact, non-linear, i.e., more intelligent, explanation of color, conditioned on the sparse view and time coordinates. As observed for many NNs, however, this representation is now interpolatable: if the image output for sparse view coordinates is plausible, it is for all intermediate, continuous coordinates as well. Our specific network architecture involves a differentiable occlusion-aware warping step, which leads to a compact set of trainable parameters and consequently fast learning and fast execution.
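The coordinate-to-color mapping described above can be sketched as a tiny network queried at continuous coordinates. This is a hedged toy sketch, not the paper's network (which additionally involves occlusion-aware warping); the layer sizes and the (u, v, t, x, y) input layout are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "one-off" LF-video network: a learned mapping from
# (view_u, view_v, time, pixel_x, pixel_y) coordinates to an RGB color,
# trained per video on the sparse input views.
W1 = rng.standard_normal((5, 64)) * 0.1
W2 = rng.standard_normal((64, 3)) * 0.1

def lf_color(coords):
    """coords: (N, 5) array of (view_u, view_v, time, pixel_x, pixel_y)."""
    h = np.tanh(coords @ W1)             # compact non-linear representation
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # RGB values squashed into [0, 1]

# Query an *intermediate* view/time never seen in training: because the input
# domain is continuous, such interpolation queries are well-defined.
novel = np.array([[0.5, 0.5, 0.25, 0.1, 0.9]])
rgb = lf_color(novel)
print(rgb.shape)  # (1, 3)
```

The point of the sketch is that nothing in the network distinguishes training coordinates from intermediate ones, which is what makes view and time interpolation fall out of the representation.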
Quantifying and containing the curse of high resolution coronal imaging
Future missions such as Solar Orbiter (SO), InterHelioprobe, or Solar Probe
aim at approaching the Sun closer than ever before, carrying on board
high-resolution imagers (HRI) with a subsecond cadence and a pixel area of
about … at the Sun during perihelion. In order to guarantee their scientific
success, it is necessary to evaluate whether the photon counts available at
this resolution and cadence will provide a sufficient signal-to-noise ratio (SNR).
We perform a first step in this direction by analyzing and characterizing the
spatial intermittency of Quiet Sun images using a multifractal analysis.
We identify the parameters that specify the scale-invariance behavior. This
identification then allows us to select a family of multifractal processes,
namely the Compound Poisson Cascades, that can synthesize artificial images
having some of the scale-invariance properties observed in the recorded images.
The prevalence of self-similarity in Quiet Sun coronal images makes it
relevant to study the ratio between the SNR present in SoHO/EIT images and in
coarsened images. SoHO/EIT images thus play the role of 'high resolution'
images, whereas the 'low-resolution' coarsened images are rebinned so as to
simulate a smaller angular resolution and/or a larger distance to the Sun. For
a fixed difference in angular resolution and in Spacecraft-Sun distance, we
determine the proportion of pixels having a SNR preserved at high resolution
given a particular increase in effective area. If scale-invariance continues to
prevail at smaller scales, the conclusion reached with SoHO/EIT images can be
transposed to the situation where the resolution is increased from SoHO/EIT to
SO/HRI resolution at perihelion.
Comment: 25 pages, 1 table, 7 figures
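The rebinning experiment described above can be sketched numerically. This is a simplified illustration under assumed Poisson photon statistics (per-pixel SNR ≈ √N), not the paper's multifractal pipeline: the count level, image size, rebinning factor, and SNR threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical photon-count image: for Poisson noise, per-pixel SNR = sqrt(N).
counts = rng.poisson(lam=25.0, size=(64, 64)).astype(float)

def rebin(img, k):
    """Sum k x k pixel blocks, simulating a k-times coarser angular
    resolution (or, equivalently, a larger distance to the Sun)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

snr_hi = np.sqrt(counts)            # 'high resolution' SNR
snr_lo = np.sqrt(rebin(counts, 2))  # coarsened: ~4x the photons, ~2x the SNR

# Proportion of pixels whose SNR clears an (assumed) threshold at each scale.
threshold = 10.0
frac_hi = float(np.mean(snr_hi >= threshold))
frac_lo = float(np.mean(snr_lo >= threshold))
print(frac_hi <= frac_lo)  # coarsening can only raise the per-pixel SNR here
```

For a real Quiet Sun image the counts are intermittent rather than uniform, which is why the paper characterizes their multifractal statistics before drawing conclusions about the fraction of pixels whose SNR survives at high resolution.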
The SWAP EUV Imaging Telescope Part I: Instrument Overview and Pre-Flight Testing
The Sun Watcher with Active Pixels and Image Processing (SWAP) is an EUV
solar telescope on board ESA's Project for Onboard Autonomy 2 (PROBA2) mission
launched on 2 November 2009. SWAP has a spectral bandpass centered on 17.4 nm
and provides images of the low solar corona over a 54x54 arcmin field-of-view
with 3.2 arcsec pixels and an imaging cadence of about two minutes. SWAP is
designed to monitor all space-weather-relevant events and features in the low
solar corona. Given the limited resources of the PROBA2 microsatellite, the
SWAP telescope is designed with various innovative technologies, including an
off-axis optical design and a CMOS-APS detector. This article provides
reference documentation for users of the SWAP image data.
Comment: 26 pages, 9 figures, 1 movie