Sharpening up Galactic all-sky maps with complementary data - A machine learning approach
Galactic all-sky maps at very disparate frequencies, such as the radio and
gamma-ray regimes, show similar morphological structures. This mutual
information reflects the imprint of the various physical components of the
interstellar medium. We want to use multifrequency all-sky observations to test
resolution improvement and restoration of unobserved areas for maps in certain
frequency ranges. For this we aim to reconstruct or predict from sets of other
maps all-sky maps that, in their original form, lack a high resolution compared
to other available all-sky surveys or are incomplete in their spatial coverage.
Additionally, we want to investigate the commonalities and differences that the
ISM components exhibit over the electromagnetic spectrum. We build an
n-dimensional representation of the joint pixel-brightness distribution of n
maps using a Gaussian mixture model and see how predictive it is: how well
can one map be reproduced based on subsets of other maps? Tests with mock data
show that reconstructing the map of a certain frequency from other frequency
regimes works astonishingly well, reliably predicting small-scale details well
below the spatial resolution of the initially learned map. Applied to the
observed multifrequency data sets of the Milky Way this technique is able to
improve the resolution of, e.g., the low-resolution Fermi LAT maps as well as
to recover the sky from artifact-contaminated data like the ROSAT 0.855 keV
map. The predicted maps generally show fewer imaging artifacts than the
original ones. A comparison of predicted and original maps highlights
surprising structures, imaging artifacts (fortunately not reproduced in the
prediction), and features genuine to the respective frequency range that are
not present at other frequency bands. We discuss limitations of this machine
learning approach and ideas on how to overcome them.
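The conditional-prediction idea behind this abstract can be sketched with a toy two-map example. This is a minimal illustration, not the paper's pipeline: the mock "maps", the number of mixture components, and all parameter values below are invented. We fit a Gaussian mixture to the joint brightness of two correlated pixel samples, then predict one map from the other via the mixture's conditional mean.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Synthetic "two-frequency" sky: brightness b is a noisy nonlinear
# function of brightness a, standing in for correlated all-sky surveys.
a = rng.normal(size=5000)
b = 0.8 * a + 0.3 * np.tanh(2 * a) + 0.1 * rng.normal(size=5000)
X = np.column_stack([a, b])

# Learn the joint pixel-brightness distribution as a Gaussian mixture.
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(X)

def predict_b_from_a(a_vals, gmm):
    """Conditional mean E[b | a] under the fitted Gaussian mixture."""
    preds = np.zeros_like(a_vals)
    for i, x1 in enumerate(a_vals):
        # Responsibility of each component given the observed brightness a.
        w = np.array([
            gmm.weights_[k] * multivariate_normal.pdf(
                x1, gmm.means_[k, 0], gmm.covariances_[k, 0, 0])
            for k in range(gmm.n_components)
        ])
        w /= w.sum()
        # Per-component conditional mean of b given a (Gaussian conditioning).
        cond = np.array([
            gmm.means_[k, 1] + gmm.covariances_[k, 1, 0]
            / gmm.covariances_[k, 0, 0] * (x1 - gmm.means_[k, 0])
            for k in range(gmm.n_components)
        ])
        preds[i] = w @ cond
    return preds

b_hat = predict_b_from_a(a[:200], gmm)
corr = np.corrcoef(b_hat, b[:200])[0, 1]
```

Because the mock relation between the two "maps" is nearly deterministic, the conditional mean recovers it closely; the same conditioning machinery extends to predicting one map from several others by conditioning on a higher-dimensional observed vector.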
Efficient completeness inspection using real-time 3D color reconstruction with a dual-laser triangulation system
In this chapter, we present the final system resulting from the European Project "3DComplete", aimed at creating a low-cost and flexible quality-inspection system capable of capturing 2.5D color data for completeness inspection. The system uses a single color camera to capture 3D data via laser triangulation and, at the same time, color texture via a special projector casting a narrow line of white light; these are combined into a color 2.5D model in real time. We report many examples of completeness-inspection tasks that are extremely difficult to analyze with state-of-the-art 2D-based methods. Our system has been integrated into a real production environment, showing that completeness inspection incorporating 3D technology can be readily achieved in a short time and at low cost.
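The geometry behind laser triangulation can be sketched in a few lines. This is a simplified single-beam pinhole model with the laser parallel to the optical axis, not the dual-laser line setup of the actual system, and every parameter value below is invented for illustration.

```python
def depth_from_laser_spot(x_px, px_size_mm, focal_mm, baseline_mm):
    """Depth of a laser spot under a simple pinhole model.

    Assumes the laser beam runs parallel to the camera's optical axis at
    lateral offset `baseline_mm`; a spot at depth Z then projects at image
    coordinate x = f * b / Z, so Z = f * b / x.
    """
    x_mm = x_px * px_size_mm          # convert pixel coordinate to sensor mm
    return focal_mm * baseline_mm / x_mm

# Hypothetical numbers: 5 um pixels, 16 mm lens, 50 mm laser-camera baseline.
z = depth_from_laser_spot(x_px=100, px_size_mm=0.005,
                          focal_mm=16, baseline_mm=50)
```

The closer the surface, the larger the spot displacement on the sensor, which is why triangulation accuracy degrades with distance.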
The Denoised, Deconvolved, and Decomposed Fermi gamma-ray sky - An application of the D3PO algorithm
We analyze the 6.5 yr all-sky data from the Fermi LAT, restricted to gamma-ray
photons with energies between 0.6 and 307.2 GeV. Raw count maps show a superposition
of diffuse and point-like emission structures and are subject to shot noise and
instrumental artifacts. Using the D3PO inference algorithm, we model the
observed photon counts as the sum of a diffuse and a point-like photon flux,
convolved with the instrumental beam and subject to Poissonian shot noise. D3PO
performs a Bayesian inference in this setting without the use of spatial or
spectral templates; i.e., it removes the shot noise, deconvolves the
instrumental response, and yields estimates for the two flux components
separately. The non-parametric reconstruction uncovers the morphology of the
diffuse photon flux up to several hundred GeV. We present an all-sky spectral
index map for the diffuse component. We show that the diffuse gamma-ray flux
can be described phenomenologically by only two distinct components: a soft
component, presumably dominated by hadronic processes, tracing the dense, cold
interstellar medium and a hard component, presumably dominated by leptonic
interactions, following the hot and dilute medium and outflows such as the
Fermi bubbles. A comparison of the soft component with the Galactic dust
emission indicates that the dust-to-soft-gamma ratio in the interstellar medium
decreases with latitude. The spectrally hard component exists in a thick
Galactic disk and tends to flow out of the Galaxy at some locations.
Furthermore, we find the angular power spectrum of the diffuse flux to roughly
follow a power law with an index of 2.47 on large scales, independent of
energy. Our first catalog of source candidates includes 3106 candidates of
which we associate 1381 (1897) with known sources from the 2nd (3rd) Fermi
catalog. We observe gamma-ray emission in the direction of a few galaxy
clusters hosting radio halos.
Comment: re-submission after referee report (A&A); 17 pages, many colorful
figures, 4 tables; bug fixed, flux scale now consistent with Fermi, even
lower residual level, pDF -> 1DF source catalog, tentative detection of a few
clusters of galaxies, online material at
http://www.mpa-garching.mpg.de/ift/fermi
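The forward model that D3PO inverts — diffuse plus point-like flux, convolved with the instrumental beam and subject to Poissonian shot noise — can be mocked up in a few lines. The grid size, beam width, and source count below are arbitrary illustration choices, not the Fermi LAT instrument response.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Smooth, strictly positive diffuse flux plus a few bright point sources.
diffuse = np.exp(gaussian_filter(rng.normal(size=(64, 64)), sigma=8))
points = np.zeros((64, 64))
points[rng.integers(0, 64, 5), rng.integers(0, 64, 5)] = 50.0

# Forward model: total flux, convolved with a Gaussian "instrumental beam",
# then observed through Poissonian shot noise.
lam = gaussian_filter(diffuse + points, sigma=1.5)
counts = rng.poisson(lam)
```

The inference task is the reverse direction: given only `counts`, recover `diffuse` and `points` separately, which is what makes a Bayesian treatment without templates necessary.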
Non-iterative RGB-D-inertial Odometry
This paper presents a non-iterative solution to RGB-D-inertial odometry
system. Traditional odometry methods resort to iterative algorithms which are
usually computationally expensive or require well-designed initialization. To
overcome this problem, this paper proposes to combine a non-iterative front-end
(odometry) with an iterative back-end (loop closure) for the RGB-D-inertial
SLAM system. The main contribution lies in the novel non-iterative front-end,
which leverages on inertial fusion and kernel cross-correlators (KCC) to match
point clouds in frequency domain. Dominated by the fast Fourier transform
(FFT), our method is only of complexity O(n log n), where n is
the number of points. Map fusion is conducted by element-wise operations, so
that both time and space complexity are further reduced. Extensive experiments
show that, due to the lightweight design of the proposed front-end, the
framework runs at a much faster speed while achieving accuracy comparable to
the state of the art.
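The frequency-domain matching that keeps such a front-end at O(n log n) can be illustrated with plain FFT cross-correlation on a 1-D rasterized "point cloud". This is a generic cross-correlation sketch under invented data, not the paper's actual kernel cross-correlator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rasterize a sparse "point cloud" onto a 1-D grid, and make a shifted copy.
grid = np.zeros(256)
grid[rng.integers(20, 200, 30)] = 1.0
shift_true = 17
moved = np.roll(grid, shift_true)

# Circular cross-correlation via FFT: O(n log n) instead of the O(n^2)
# cost of sliding one signal over the other directly.
corr = np.fft.ifft(np.fft.fft(grid).conj() * np.fft.fft(moved)).real
shift_est = int(np.argmax(corr))
```

The peak of the correlation sits at the true displacement; in 2-D or 3-D the same trick recovers translations between voxelized point clouds, which is what makes an FFT-dominated, non-iterative front-end possible.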
Shape Generation using Spatially Partitioned Point Clouds
We propose a method to generate 3D shapes using point clouds. Given a
point-cloud representation of a 3D shape, our method builds a kd-tree to
spatially partition the points. This orders them consistently across all
shapes, resulting in reasonably good correspondences across all shapes. We then
use principal component analysis (PCA) to derive a linear shape basis across the spatially
partitioned points, and optimize the point ordering by iteratively minimizing
the PCA reconstruction error. Even with the spatial sorting, the point clouds
are inherently noisy and the resulting distribution over the shape coefficients
can be highly multi-modal. We propose to use the expressive power of neural
networks to learn a distribution over the shape coefficients in a
generative-adversarial framework. Compared to 3D shape generative models
trained on voxel-representations, our point-based method is considerably more
light-weight and scalable, with little loss of quality. It also outperforms
simpler linear factor models such as Probabilistic PCA, both qualitatively and
quantitatively, on a number of categories from the ShapeNet dataset.
Furthermore, our method can easily incorporate other point attributes such as
normal and color information, an additional advantage over voxel-based
representations.
Comment: To appear at BMVC 201
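The ordering-then-PCA pipeline can be sketched as follows. The recursive median split, the cloud sizes, and the 5-component basis are illustrative simplifications under random data, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(3)

def kdtree_order(points, depth=0):
    """Recursively median-split along alternating axes; the leaf order
    gives a spatially consistent ordering of the points."""
    if len(points) <= 1:
        return points
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis], kind="stable")]
    mid = len(points) // 2
    left = kdtree_order(points[:mid], depth + 1)
    right = kdtree_order(points[mid:], depth + 1)
    return np.vstack([left, right])

# Order several random 3-D clouds, flatten each into one vector, and derive
# a linear shape basis from the top principal components (via SVD).
clouds = [kdtree_order(rng.normal(size=(64, 3))) for _ in range(20)]
X = np.stack([c.ravel() for c in clouds])   # 20 shapes x (64*3) coordinates
X -= X.mean(axis=0)                          # center before PCA
U, s, Vt = np.linalg.svd(X, full_matrices=False)
basis = Vt[:5]                               # 5-component linear shape basis
coeffs = X @ basis.T                         # per-shape coefficients
```

A generative model is then trained over `coeffs` rather than raw voxels; because each shape is summarized by a handful of coefficients, the representation stays lightweight even as resolution grows.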