4,898 research outputs found
A Deep Learning Reconstruction Framework for Differential Phase-Contrast Computed Tomography with Incomplete Data
Differential phase-contrast computed tomography (DPC-CT) is a powerful
analysis tool for soft-tissue and low-atomic-number samples. Owing to practical
implementation constraints, DPC-CT with incomplete projections occurs quite
often. Conventional reconstruction algorithms do not handle incomplete data
easily: they usually involve complicated parameter selection, are sensitive to
noise, and are time-consuming. In this paper, we report a new deep learning
reconstruction framework for incomplete-data DPC-CT. It tightly couples a deep
neural network with a DPC-CT reconstruction algorithm in the phase-contrast
projection sinogram domain. The network estimates the complete phase-contrast
projection sinogram rather than the artifacts caused by the incomplete data.
Once trained, the framework is fixed and can reconstruct the final DPC-CT
images from a given incomplete phase-contrast projection sinogram. Taking
sparse-view DPC-CT as an example, this framework has been validated and
demonstrated with synthetic and experimental data sets. Because it embeds
DPC-CT reconstruction, the framework naturally encapsulates the physical
imaging model of DPC-CT systems and can easily be extended to other
challenges. This work helps push the application of state-of-the-art deep
learning methods into the field of DPC-CT.
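The abstract gives no code; as a rough illustration of the sinogram-domain completion idea, the toy sketch below fills in the missing projection angles of a synthetic sparse-view sinogram. Plain interpolation stands in for the paper's trained network, and all names and shapes here are illustrative assumptions, not the authors' model:

```python
import numpy as np

def make_sinogram(n_angles=180, n_dets=64):
    # Toy smooth sinogram as a stand-in for phase-contrast projections.
    theta = np.linspace(0, np.pi, n_angles, endpoint=False)[:, None]
    s = np.linspace(-1, 1, n_dets)[None, :]
    return np.sin(theta) * np.exp(-4 * s**2)

def sparse_view(sino, keep_every=4):
    # Keep only every k-th projection angle (sparse-view acquisition).
    mask = np.zeros(sino.shape[0], dtype=bool)
    mask[::keep_every] = True
    return sino * mask[:, None], mask

def complete_sinogram(sparse, mask):
    # Stand-in for the learned estimator: interpolate over missing angles.
    angles = np.arange(sparse.shape[0])
    known = angles[mask]
    out = np.empty_like(sparse)
    for j in range(sparse.shape[1]):
        out[:, j] = np.interp(angles, known, sparse[known, j])
    return out

full = make_sinogram()
sparse, mask = sparse_view(full, keep_every=4)
est = complete_sinogram(sparse, mask)
# The completed sinogram is far closer to the full one than zero-filling.
err_est = np.abs(est - full).mean()
err_sparse = np.abs(sparse - full).mean()
assert err_est < err_sparse
```

In the paper's framework, the interpolation step would be replaced by a trained network, after which a standard DPC-CT reconstruction is applied to the completed sinogram.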
Efficient Optimal Reconstruction of Linear Fields and Band-powers from Cosmological Data
We present an efficient implementation of Wiener filtering of a real-space
linear field and of an optimal quadratic estimator of its power spectrum
band-powers.
We first recast the field reconstruction into an optimization problem, which we
solve using quasi-Newton optimization. We then recast the power spectrum
estimation into the field marginalization problem, from which we obtain an
expression that depends on the field reconstruction solution and a determinant
term. We develop a novel simulation-based method for the latter. We extend the
simulations formalism to provide the covariance matrix for the power spectrum.
We develop a flexible framework that can be used on a variety of cosmological
fields and present results for a variety of test cases, using simulated
examples of projected density fields, projected shear maps from galaxy lensing,
and observed Cosmic Microwave Background (CMB) temperature anisotropies, with a
wide range of map incompleteness and variable noise. For smaller cases where
direct numerical inversion is possible, we show that our solution matches that
created by direct Wiener Filtering at a fraction of the overall computation
cost. An even more significant reduction in computational cost is achieved by
this implementation of the optimal quadratic estimator, thanks to the fast
evaluation of the Hessian matrix. This technique allows for accurate map and
power spectrum reconstruction with complex masks and nontrivial noise
properties.
Comment: 23 pages, 14 figures
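The recasting of Wiener filtering as an optimization problem can be sketched in a toy setting: the Wiener solution minimizes the chi-squared functional J(s) = (d-s)ᵀN⁻¹(d-s) + sᵀS⁻¹s, which a quasi-Newton solver such as L-BFGS handles directly. The diagonal covariances below are illustrative assumptions, not the paper's cosmological setup:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 50
S = np.full(n, 4.0)                # diagonal signal covariance (toy prior)
N = np.full(n, 1.0)                # diagonal noise covariance
d = rng.normal(0, np.sqrt(S + N))  # simulated data = signal + noise

def chi2(s):
    # J(s) = (d-s)^T N^{-1} (d-s) + s^T S^{-1} s
    return np.sum((d - s) ** 2 / N) + np.sum(s**2 / S)

def grad(s):
    return -2 * (d - s) / N + 2 * s / S

# Quasi-Newton (L-BFGS) minimization of the Wiener-filter functional.
res = minimize(chi2, np.zeros(n), jac=grad, method="L-BFGS-B")
s_opt = res.x
# For diagonal covariances the closed-form Wiener filter is available,
# so we can check the optimizer against it.
s_exact = S / (S + N) * d
assert np.allclose(s_opt, s_exact, atol=1e-2)
```

The point of the optimization view is that it never requires forming or inverting (S + N) explicitly, which is what makes large masked maps tractable.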
Enhanced Quadratic Video Interpolation
With the prosperity of the digital video industry, video frame interpolation
has attracted continuous attention in the computer vision community and become
a new focus in industry. Many learning-based methods have been proposed and
have achieved promising results. Among them, a recent algorithm named quadratic
video interpolation (QVI) achieves appealing performance. It exploits
higher-order motion information (e.g. acceleration) and successfully models the
estimation of interpolated flow. However, its produced intermediate frames
still contain some unsatisfactory ghosting, artifacts and inaccurate motion,
especially when large and complex motion occurs. In this work, we further
improve the performance of QVI from three facets and propose an enhanced
quadratic video interpolation (EQVI) model. In particular, we adopt a rectified
quadratic flow prediction (RQFP) formulation with least squares method to
estimate the motion more accurately. Complementary to pixel-level image
blending, we introduce a residual contextual synthesis network (RCSN) to employ
contextual information in high-dimensional feature space, which could help the
model handle more complicated scenes and motion patterns. Moreover, to further
boost the performance, we devise a novel multi-scale fusion network (MS-Fusion)
which can be regarded as a learnable augmentation process. The proposed EQVI
model won first place in the AIM2020 Video Temporal Super-Resolution
Challenge.
Comment: Winning solution of the AIM2020 VTSR Challenge (in conjunction with
ECCV 2020)
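The quadratic motion model at the core of QVI can be shown numerically: from the two observed flows f(0→1) and f(0→-1), a per-pixel velocity and acceleration are recovered, giving the flow to any intermediate time t. The sketch below covers only this quadratic model, not the least-squares RQFP refinement or the synthesis networks:

```python
import numpy as np

def quadratic_flow(f_0_to_1, f_0_to_m1, t):
    """Flow from frame 0 to time t under constant per-pixel acceleration.

    Position model x(t) = x0 + v*t + 0.5*a*t**2, with v and a recovered
    from the two observed flows: f(1) = v + a/2 and f(-1) = -v + a/2.
    """
    v = (f_0_to_1 - f_0_to_m1) / 2.0
    a = f_0_to_1 + f_0_to_m1
    return v * t + 0.5 * a * t**2

# A pixel moving at 2 px/frame with acceleration 1 px/frame^2:
v_true, a_true = 2.0, 1.0
f1 = np.array([v_true + 0.5 * a_true])     # flow to frame +1
fm1 = np.array([-v_true + 0.5 * a_true])   # flow to frame -1
f_half = quadratic_flow(f1, fm1, 0.5)
# x(0.5) = 2*0.5 + 0.5*1*0.25 = 1.125
assert np.isclose(f_half[0], 1.125)
```

When acceleration is zero the formula reduces to the linear flow t·v assumed by earlier interpolation methods, which is what makes the quadratic model a strict generalization.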
Helioseismology of Sunspots: A Case Study of NOAA Region 9787
Various methods of helioseismology are used to study the subsurface
properties of the sunspot in NOAA Active Region 9787. This sunspot was chosen
because it is axisymmetric, shows little evolution during 20-28 January 2002,
and was observed continuously by the MDI/SOHO instrument. (...) Wave travel
times and mode frequencies are affected by the sunspot. In most cases, wave
packets that propagate through the sunspot have reduced travel times. At short
travel distances, however, the sign of the travel-time shifts appears to depend
sensitively on how the data are processed and, in particular, on filtering in
frequency-wavenumber space. We carry out two linear inversions for wave speed:
one using travel times and phase-speed filters and the other using mode
frequencies from ring analysis. These two inversions give subsurface wave-speed
profiles with opposite signs and different amplitudes. (...) From this study of
AR9787, we conclude that we are currently unable to provide a unified
description of the subsurface structure and dynamics of the sunspot.
Comment: 28 pages, 18 figures
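The linear inversions mentioned above generically solve a regularized linear system relating observed travel-time shifts to a subsurface wave-speed perturbation through sensitivity kernels. The toy sketch below uses entirely synthetic kernels and data (nothing here comes from the MDI observations) to show the form such an inversion takes:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy linear inversion: travel-time shifts dt relate to a wave-speed
# perturbation dc via sensitivity kernels K:  dt = K @ dc + noise.
n_meas, n_depth = 40, 10
K = rng.uniform(0, 1, size=(n_meas, n_depth))
dc_true = np.zeros(n_depth)
dc_true[2:5] = -0.05               # slower wave speed at mid depths
dt = K @ dc_true + rng.normal(0, 1e-3, n_meas)

# Tikhonov-regularized least squares: (K^T K + lam I) dc = K^T dt.
lam = 1e-2
dc_est = np.linalg.solve(K.T @ K + lam * np.eye(n_depth), K.T @ dt)
# With low noise the estimate recovers the sign of the perturbation.
assert np.sign(dc_est[3]) == np.sign(dc_true[3])
```

The study's central difficulty is that the recovered dc depends on the choice of kernels and filtering, which is why the two inversions can disagree even in sign.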
The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation
Denoising diffusion probabilistic models have transformed image generation
with their impressive fidelity and diversity. We show that they also excel in
estimating optical flow and monocular depth, surprisingly, without
task-specific architectures and loss functions that are predominant for these
tasks. Compared to the point estimates of conventional regression-based
methods, diffusion models also enable Monte Carlo inference, e.g., capturing
uncertainty and ambiguity in flow and depth. With self-supervised
pre-training, the combined use of synthetic and real data for supervised
training, technical innovations (infilling and step-unrolled denoising
diffusion training) to handle noisy, incomplete training data, and a simple
form of coarse-to-fine refinement, one can train state-of-the-art diffusion
models for
depth and optical flow estimation. Extensive experiments focus on quantitative
performance against benchmarks, ablations, and the model's ability to capture
uncertainty and multimodality, and impute missing values. Our model, DDVM
(Denoising Diffusion Vision Model), obtains a state-of-the-art relative depth
error of 0.074 on the indoor NYU benchmark and an Fl-all outlier rate of 3.26%
on the KITTI optical flow benchmark, about 25% better than the best published
method. For an overview see https://diffusion-vision.github.io
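The diffusion machinery the abstract builds on is the standard DDPM forward process: the clean target map (here a toy "depth map") is progressively scaled down and mixed with Gaussian noise, and the model learns to invert this. The sketch below shows only that forward process with a common cosine schedule; it is not the DDVM architecture or its infilling/step-unrolled training:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_alpha_bar(t, T):
    # Cumulative noise schedule (cosine schedule, a common choice).
    return np.cos((t / T) * np.pi / 2) ** 2

def forward_diffuse(x0, t, T):
    # q(x_t | x_0): sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise.
    ab = cosine_alpha_bar(t, T)
    eps = rng.normal(size=x0.shape)
    return np.sqrt(ab) * x0 + np.sqrt(1 - ab) * eps, eps

depth = rng.uniform(0, 1, size=(8, 8))   # toy "depth map" target
x_t, eps = forward_diffuse(depth, t=500, T=1000)
# The denoiser is trained to predict eps from (x_t, image, t); sampling
# runs this in reverse. At t=0 the "noised" sample is the clean map:
x_0_again, _ = forward_diffuse(depth, t=0, T=1000)
assert np.allclose(x_0_again, depth)
```

Because sampling is stochastic, repeated reverse runs yield different plausible maps, which is what enables the Monte Carlo uncertainty estimates mentioned above.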
Drilling data quality improvement and information extraction with case studies
Data analytics is a process of acquiring, transforming, interpreting,
modelling, displaying and storing data with the aim of extracting useful
information, so that decision-making, action execution, event detection and
incident management can be handled in an efficient and reliable manner.
However, data analytics also faces challenges, for instance, data corruption
due to noise, time delays, missing data and external disturbances. This paper
focuses on data quality improvement to cleanse, improve and interpret
post-well or real-time data so as to preserve and enhance data features such
as accuracy, consistency, reliability and validity. In this study, laboratory
data and field data are used to illustrate data issues and to show the data
quality improvements achieved with different data processing methods. The case
studies clearly demonstrate that proper data quality management processes and
information extraction methods are essential for intelligent digitalization in
the oil and gas industry.
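One common cleansing step for noisy sensor streams of the kind described here is despiking with a rolling median. The sketch below is a generic illustration of that technique on a made-up rate-of-penetration series; it is not taken from the paper's case studies:

```python
import numpy as np

def despike(series, window=5, k=3.0):
    """Replace outlier samples with the local rolling median.

    A sample is flagged as a spike when it deviates from the local median
    by more than k times the local median absolute deviation (MAD).
    """
    x = np.asarray(series, dtype=float)
    out = x.copy()
    half = window // 2
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        med = np.median(x[lo:hi])
        mad = np.median(np.abs(x[lo:hi] - med)) + 1e-12
        if abs(x[i] - med) > k * mad:
            out[i] = med
    return out

# Toy rate-of-penetration series with one sensor spike at index 3.
rop = np.array([10.0, 10.2, 9.9, 55.0, 10.1, 10.0, 9.8])
clean = despike(rop)
assert clean[3] != 55.0 and abs(clean[3] - 10.0) < 1.0
```

The MAD threshold is robust to the spike itself, unlike a mean/standard-deviation rule, which a single large outlier would inflate.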