Deep Generative Modeling of LiDAR Data
Building models capable of generating structured output is a key challenge
for AI and robotics. While generative models have been explored on many types
of data, little work has been done on synthesizing lidar scans, which play a
key role in robot mapping and localization. In this work, we show that one can
adapt deep generative models for this task by unravelling lidar scans into a 2D
point map. Our approach can generate high quality samples, while simultaneously
learning a meaningful latent representation of the data. We demonstrate
significant improvements against state-of-the-art point cloud generation
methods. Furthermore, we propose a novel data representation that augments the
2D signal with absolute positional information. We show that this improves
robustness to noisy and imputed input; the learned model can recover the
underlying lidar scan from seemingly uninformative data.
Comment: Presented at IROS 201
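The "unravelling" of a lidar scan into a 2D point map can be sketched as a spherical projection onto a range image, with rows indexing elevation and columns indexing azimuth. The beam and column counts below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def scan_to_range_image(points, n_beams=64, n_cols=512):
    """Project an (N, 3) lidar point cloud onto a 2D range image.

    Rows index elevation, columns index azimuth; each cell keeps the
    range of the closest return. Empty cells are set to zero.
    """
    rng = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(points[:, 1], points[:, 0])          # [-pi, pi]
    elevation = np.arcsin(np.clip(points[:, 2] / np.maximum(rng, 1e-9), -1.0, 1.0))

    # Map angles to integer pixel coordinates.
    col = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    e_min, e_max = elevation.min(), elevation.max()
    row = ((elevation - e_min) / (e_max - e_min + 1e-9) * (n_beams - 1)).astype(int)

    # Keep the closest return per cell; unfilled cells become 0.
    img = np.full((n_beams, n_cols), np.inf)
    np.minimum.at(img, (row, col), rng)
    img[np.isinf(img)] = 0.0
    return img
```

Once in this 2D form, a scan can be fed to standard convolutional generative architectures like any other image.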
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven as an extremely
powerful tool in many fields. Shall we embrace deep learning as the key to all?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we advocate that remote
sensing scientists bring their expertise into deep learning, and use it as an
implicit general model to tackle unprecedented large-scale influential
challenges, such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine
MR image reconstruction using deep density priors
Algorithms for Magnetic Resonance (MR) image reconstruction from undersampled
measurements exploit prior information to compensate for missing k-space data.
Deep learning (DL) provides a powerful framework for extracting such
information from existing image datasets, through learning, and then using it
for reconstruction. Leveraging this, recent methods employed DL to learn
mappings from undersampled to fully sampled images using paired datasets,
including undersampled and corresponding fully sampled images, integrating
prior knowledge implicitly. In this article, we propose an alternative approach
that learns the probability distribution of fully sampled MR images using
unsupervised DL, specifically Variational Autoencoders (VAE), and use this as
an explicit prior term in reconstruction, completely decoupling the encoding
operation from the prior. The resulting reconstruction algorithm enjoys a
powerful image prior to compensate for missing k-space data without requiring
paired datasets for training, and without the associated sensitivities, such as
deviations in undersampling patterns between training and test time, or in coil
settings. We evaluated the proposed method with T1-weighted images from a
publicly available dataset, multi-coil complex images acquired from healthy
volunteers (N=8) and images with white matter lesions. The proposed algorithm,
using the VAE prior, produced visually high quality reconstructions and
achieved low RMSE values, outperforming most of the alternative methods on the
same dataset. On multi-coil complex data, the algorithm yielded accurate
magnitude and phase reconstruction results. In the experiments on images with
white matter lesions, the method faithfully reconstructed the lesions.
Keywords: Reconstruction, MRI, prior probability, machine learning, deep
learning, unsupervised learning, density estimation.
Comment: Published in IEEE TMI. Main text and supplementary material, 19 pages
total
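The decoupling the abstract describes can be illustrated with a toy 1D example: reconstruct a signal from undersampled Fourier measurements by gradient descent on a data-consistency term plus an explicit prior term. Here a simple smoothness penalty stands in for the learned VAE log-prior, and the step size and weight `lam` are arbitrary choices for illustration:

```python
import numpy as np

def reconstruct(y, mask, lam=0.05, step=0.4, n_iter=300):
    """Toy MAP reconstruction: min_x ||M F x - y||^2 + lam * R(x).

    M is an undersampling mask in k-space and F the (orthonormal) FFT.
    R(x) penalizes neighbor differences, a simple stand-in for -log p(x)
    under a learned density model such as a VAE.
    """
    x = np.real(np.fft.ifft(y, norm="ortho"))  # zero-filled initialization
    for _ in range(n_iter):
        # Gradient of the data-consistency term: 2 F^H M^H (M F x - y).
        resid = mask * np.fft.fft(x, norm="ortho") - y
        g_data = 2.0 * np.real(np.fft.ifft(mask * resid, norm="ortho"))
        # Gradient of the smoothness prior (discrete Laplacian).
        g_prior = 2.0 * (2 * x - np.roll(x, 1) - np.roll(x, -1))
        x = x - step * (g_data + lam * g_prior)
    return x
```

Swapping the quadratic penalty for the gradient of a learned log-density is what keeps the prior fully decoupled from the encoding operation: the same trained model serves any mask or coil configuration.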
TT-SDF2PC: Registration of Point Cloud and Compressed SDF Directly in the Memory-Efficient Tensor Train Domain
This paper addresses the following research question: ``can one compress a
detailed 3D representation and use it directly for point cloud registration?''.
Map compression of the scene can be achieved by a tensor train (TT)
decomposition of the signed distance function (SDF) representation; the degree
of data reduction is controlled by the so-called TT-ranks.
Using this representation, we propose TT-SDF2PC, an algorithm that registers a
point cloud (PC) directly against the compressed SDF by efficiently evaluating
its derivatives in the TT domain, saving both computation and memory. We
compare TT-SDF2PC with SOTA local and global registration methods on a
synthetic dataset and a real dataset, and show on-par performance while
requiring significantly fewer resources.
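The compression step can be sketched with a minimal TT-SVD: a 3D SDF grid is factored into a chain of low-rank cores by sequential truncated SVDs, with the TT-ranks controlling the accuracy/size trade-off. This is a generic tensor-train decomposition for illustration, not the paper's implementation:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a 3D tensor into tensor-train cores (G1, G2, G3)
    via sequential truncated SVDs. max_rank caps the TT-ranks."""
    n1, n2, n3 = tensor.shape
    # First unfolding: n1 x (n2*n3).
    u, s, vt = np.linalg.svd(tensor.reshape(n1, n2 * n3), full_matrices=False)
    r1 = min(max_rank, len(s))
    g1 = u[:, :r1]                                   # core 1: n1 x r1
    rest = (s[:r1, None] * vt[:r1]).reshape(r1 * n2, n3)
    # Second unfolding: (r1*n2) x n3.
    u, s, vt = np.linalg.svd(rest, full_matrices=False)
    r2 = min(max_rank, len(s))
    g2 = u[:, :r2].reshape(r1, n2, r2)               # core 2: r1 x n2 x r2
    g3 = s[:r2, None] * vt[:r2]                      # core 3: r2 x n3
    return g1, g2, g3

def tt_reconstruct(g1, g2, g3):
    """Contract the cores back into a full 3D tensor."""
    return np.einsum("ia,ajb,bk->ijk", g1, g2, g3)

# Example: the SDF of a sphere on a 32^3 grid compresses well at low rank.
xs = np.linspace(-1, 1, 32)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - 0.5
cores = tt_svd(sdf, max_rank=8)
approx = tt_reconstruct(*cores)
```

The cores store far fewer values than the full 32^3 grid, which is what makes evaluating the SDF and its derivatives directly in the TT domain attractive for registration.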
Unsupervised learning for cross-domain medical image synthesis using deformation invariant cycle consistency networks
Recently, the cycle-consistent generative adversarial network (CycleGAN) has
been widely used for synthesis of multi-domain medical images. The
domain-specific nonlinear deformations captured by CycleGAN make the
synthesized images difficult to use in some applications, for example,
generating pseudo-CT for PET-MR attenuation correction. This paper presents a
deformation-invariant CycleGAN (DicycleGAN) method using deformable
convolutional layers and new cycle-consistency losses. Its robustness to data
that suffer from domain-specific nonlinear deformations has been evaluated
through comparison experiments on a multi-sequence brain MR dataset and a
multi-modality abdominal dataset. Our method generates synthesized data that
remain aligned with the source while maintaining signal quality comparable to
CycleGAN-generated data. The proposed model also achieves performance
comparable to CycleGAN when data from the source and target domains are
alignable through simple affine transformations.
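The cycle-consistency idea underlying both CycleGAN and the proposed variant can be sketched with toy generators: translating A→B→A should recover the original input, and the L1 deviation from that round trip is the loss. The scaling "generators" below are stand-ins for networks; the deformable layers and modified losses of DicycleGAN are not reproduced here:

```python
import numpy as np

def cycle_consistency_loss(g_ab, g_ba, batch_a, batch_b):
    """L1 cycle-consistency loss for two generator callables:
    g_ba(g_ab(x)) should recover x for domain A, and vice versa."""
    loss_a = np.mean(np.abs(g_ba(g_ab(batch_a)) - batch_a))
    loss_b = np.mean(np.abs(g_ab(g_ba(batch_b)) - batch_b))
    return loss_a + loss_b

# Toy "generators": scaling by 2 and by 0.5 are exact inverses, so the
# cycle loss vanishes; an imperfect pair yields a positive loss.
g_ab = lambda x: 2.0 * x
g_ba = lambda x: 0.5 * x
```

During training this term is minimized jointly with the adversarial losses, which is what constrains the translation to be (approximately) invertible.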