Quick X-ray microtomography using a laser-driven betatron source
Laser-driven X-ray sources are an emerging alternative to conventional X-ray
tubes and synchrotron sources. We present results on microtomographic X-ray
imaging of a cancellous human bone sample using synchrotron-like betatron
radiation. The source is driven by a 100-TW-class titanium-sapphire laser
system and delivers over X-ray photons per second. Compared to earlier
studies, the acquisition time for an entire tomographic dataset has been
reduced by more than an order of magnitude. Additionally, the reconstruction
quality benefits from the use of statistical iterative reconstruction
techniques. Depending on the desired resolution, tomographies are thereby
acquired within minutes, which is an important milestone towards real-life
applications of laser-plasma X-ray sources.
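The statistical iterative reconstruction credited here for the quality gain belongs to a broader family of iterative tomography algorithms. As an illustrative sketch only (the paper's statistical method is more sophisticated), a simple algebraic member of that family, SIRT, can be written in a few lines of NumPy, assuming a hypothetical precomputed system matrix `A` mapping image pixels to ray measurements:

```python
import numpy as np

def sirt(A, b, n_iter=2000):
    """Simultaneous Iterative Reconstruction Technique (SIRT).

    A : (n_rays, n_pixels) system matrix (ray/pixel intersection weights)
    b : (n_rays,) measured projection data
    """
    # Row and column sums normalise each update step.
    row_sum = np.maximum(A.sum(axis=1), 1e-12)
    col_sum = np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sum   # per-ray data mismatch
        x += (A.T @ residual) / col_sum    # back-project and update
    return x

# Toy demo: recover a 4-pixel "image" from 16 noiseless ray sums.
rng = np.random.default_rng(0)
A = rng.random((16, 4))                   # hypothetical system matrix
x_true = np.array([1.0, 0.5, 0.0, 2.0])
x_rec = sirt(A, A @ x_true)
print(np.round(x_rec, 3))
```

In real CT the system matrix is huge and sparse, and statistical methods additionally model photon-count noise; this toy only shows the iterative update structure.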
Dense 3D Object Reconstruction from a Single Depth View
In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs
the complete 3D structure of a given object from a single arbitrary depth view
using generative adversarial networks. Unlike existing work which typically
requires multiple views of the same object or class labels to recover the full
3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation
of a depth view of the object as input, and is able to generate the complete 3D
occupancy grid with a high resolution of 256^3 by recovering the
occluded/missing regions. The key idea is to combine the generative
capabilities of autoencoders and the conditional Generative Adversarial
Networks (GAN) framework, to infer accurate and fine-grained 3D structures of
objects in high-dimensional voxel space. Extensive experiments on large
synthetic datasets and real-world Kinect datasets show that the proposed
3D-RecGAN++ significantly outperforms the state of the art in single view 3D
object reconstruction, and is able to reconstruct unseen types of objects.
Comment: TPAMI 2018. Code and data are available at:
https://github.com/Yang7879/3D-RecGAN-extended. This article extends from
arXiv:1708.0796
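The input representation described above, a voxel grid built from a single depth view, can be illustrated with a deliberately simplified voxelisation. The sketch below assumes an orthographic, uncalibrated mapping from pixels to voxels; the real 3D-RecGAN++ pipeline uses calibrated camera geometry, so treat every detail as an assumption:

```python
import numpy as np

def depth_to_occupancy(depth, grid=32, d_min=0.0, d_max=1.0):
    """Voxelise a single depth map into a binary occupancy grid.

    Simplified orthographic model: pixel (i, j) with depth d maps to
    voxel (i', j', k), where k discretises the depth range [d_min, d_max).
    """
    h, w = depth.shape
    occ = np.zeros((grid, grid, grid), dtype=bool)
    ii, jj = np.nonzero(np.isfinite(depth))
    d = depth[ii, jj]
    valid = (d >= d_min) & (d < d_max)
    ii, jj, d = ii[valid], jj[valid], d[valid]
    # Scale image coordinates and depth values into voxel indices.
    vi = ii * grid // h
    vj = jj * grid // w
    vk = ((d - d_min) / (d_max - d_min) * grid).astype(int)
    occ[vi, vj, np.clip(vk, 0, grid - 1)] = True
    return occ

# Toy depth map: a flat plane at depth 0.5 seen over an 8x8 image.
depth = np.full((8, 8), 0.5)
occ = depth_to_occupancy(depth, grid=32)
print(occ.sum())  # → 64 occupied voxels, one per pixel
```

The network then takes such a grid as input and fills in the occluded/missing voxels at 256^3 resolution.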
Single-breath-hold photoacoustic computed tomography of the breast
We have developed a single-breath-hold photoacoustic computed tomography
(SBH-PACT) system to reveal detailed angiographic structures in human breasts.
SBH-PACT features a deep penetration depth (4 cm in vivo) with high spatial and
temporal resolutions (255 µm in-plane resolution and a 10 Hz 2D frame rate). By
scanning the entire breast within a single breath hold (~15 s), a volumetric
image can be acquired and subsequently reconstructed utilizing 3D
back-projection with negligible breathing-induced motion artifacts. SBH-PACT
clearly reveals tumors through the higher blood vessel densities associated
with them at high spatial resolution, showing early promise for high
sensitivity in radiographically dense breasts. In addition to blood vessel
imaging, the high imaging speed enables dynamic studies, such as photoacoustic
elastography, which identifies tumors by their lower compliance. We imaged
breast cancer patients with breast sizes ranging from B cup to DD cup and skin
pigmentations ranging from light to dark. SBH-PACT identified all the tumors
without resorting to ionizing radiation or exogenous contrast, posing no health
risks.
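The 3D back-projection step can be illustrated with a naive 2D delay-and-sum sketch under idealised assumptions (point detectors, constant speed of sound, delta excitation); the system's actual reconstruction includes weighting terms omitted here:

```python
import numpy as np

def backproject(signals, sensors, pixels, c, fs):
    """Naive 2D delay-and-sum back-projection.

    signals : (n_sensors, n_samples) recorded pressure traces
    sensors : (n_sensors, 2) transducer positions in metres
    pixels  : (n_pixels, 2) reconstruction grid positions in metres
    c, fs   : speed of sound [m/s] and sampling rate [Hz]
    """
    n_samples = signals.shape[1]
    image = np.zeros(len(pixels))
    for s in range(len(sensors)):
        # Time of flight from every pixel to this sensor -> sample index.
        dist = np.linalg.norm(pixels - sensors[s], axis=1)
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samples - 1)
        image += signals[s, idx]
    return image

# Toy demo: a point absorber at the origin, ring of 64 sensors at 5 cm.
c, fs = 1500.0, 40e6
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
sensors = 0.05 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
signals = np.zeros((64, 4096))
delays = np.round(np.linalg.norm(sensors, axis=1) / c * fs).astype(int)
signals[np.arange(64), delays] = 1.0   # idealised delta pulses

xs = np.linspace(-0.01, 0.01, 21)
pixels = np.array([(x, y) for x in xs for y in xs])
img = backproject(signals, sensors, pixels, c, fs)
print(pixels[np.argmax(img)])          # brightest pixel sits near (0, 0)
```

Each sensor's trace is read out at the sample corresponding to the pixel-to-sensor time of flight and accumulated, so signals from a true absorber add coherently while background contributions do not.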
Video Frame Interpolation via Adaptive Separable Convolution
Standard video frame interpolation methods first estimate optical flow
between input frames and then synthesize an intermediate frame guided by
motion. Recent approaches merge these two steps into a single convolution
process by convolving input frames with spatially adaptive kernels that account
for motion and re-sampling simultaneously. These methods require large kernels
to handle large motion, which limits the number of pixels whose kernels can be
estimated at once due to the large memory demand. To address this problem, this
paper formulates frame interpolation as local separable convolution over input
frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D
kernels require significantly fewer parameters to be estimated. Our method
develops a deep fully convolutional neural network that takes two input frames
and estimates pairs of 1D kernels for all pixels simultaneously. Since our
method is able to estimate kernels and synthesize the whole video frame at
once, it allows for the incorporation of perceptual loss to train the neural
network to produce visually pleasing frames. This deep neural network is
trained end-to-end using widely available video data without any human
annotation. Both qualitative and quantitative experiments show that our method
provides a practical solution to high-quality video frame interpolation.
Comment: ICCV 2017, http://graphics.cs.pdx.edu/project/sepconv
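The pairs-of-1D-kernels idea can be made concrete with a toy NumPy sketch: at each output pixel, the effective 2D kernel is the outer product of a vertical and a horizontal 1D kernel, applied to patches of both input frames. In the paper these kernels are predicted per pixel by a CNN; here they are simply given, so this only illustrates the synthesis step:

```python
import numpy as np

def sepconv_interpolate(f1, f2, kv1, kh1, kv2, kh2):
    """Blend two frames with per-pixel separable (1D x 1D) kernels.

    f1, f2   : (H, W) input frames
    kv*, kh* : (H, W, K) vertical / horizontal 1D kernels per pixel
    The 2D kernel at pixel (y, x) is the outer product kv[y, x] ⊗ kh[y, x],
    so only 2K parameters per frame are needed instead of K*K.
    """
    H, W = f1.shape
    K = kv1.shape[-1]
    r = K // 2
    p1 = np.pad(f1, r, mode="edge")
    p2 = np.pad(f2, r, mode="edge")
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            patch1 = p1[y:y + K, x:x + K]
            patch2 = p2[y:y + K, x:x + K]
            # kv @ patch @ kh == sum of (kv kh^T) * patch, the separable trick
            out[y, x] = kv1[y, x] @ patch1 @ kh1[y, x] \
                      + kv2[y, x] @ patch2 @ kh2[y, x]
    return out

# Toy check: centre-tap kernels scaled by 0.5 just average the two frames.
H, W, K = 4, 5, 3
f1, f2 = np.zeros((H, W)), np.ones((H, W))
delta = np.zeros(K); delta[K // 2] = 1.0
kv = np.tile(delta, (H, W, 1))
out = sepconv_interpolate(f1, f2, 0.5 * kv, kv, 0.5 * kv, kv)
print(np.allclose(out, 0.5))  # → True
```

With K taps per direction, each pixel needs 4K estimated values instead of the 2K² a pair of full 2D kernels would require, which is what makes predicting kernels for all pixels at once feasible.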
Multi-view passive 3D face acquisition device
Approaches to acquisition of 3D facial data include laser scanners, structured
light devices and (passive) stereo vision. The laser scanner and structured light
methods allow accurate reconstruction of the 3D surface but strong light is projected
on the faces of subjects. Passive stereo vision based approaches do not require strong
light to be projected; however, it is hard to obtain comparable accuracy and robustness
of the surface reconstruction. In this paper, a passive multiple-view approach using
5 cameras in a '+' configuration is proposed that significantly increases robustness
and accuracy relative to traditional stereo vision approaches. The normalised cross
correlations of all 5 views are combined using direct projection of points instead of
the traditionally used rectified images. Also, errors caused by different perspective
deformation of the surface in the different views are reduced by using an iterative
reconstruction technique in which the depth estimate from the previous iteration is
used to warp the normalised cross-correlation windows for the different views.
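The normalised cross-correlation score at the heart of this matching can be sketched as follows; this shows only the pairwise window score, not the paper's combination of all 5 views via direct point projection:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized windows.

    Returns a value in [-1, 1]; 1 means the windows match up to an
    affine brightness change (gain and offset), which is why NCC is a
    standard matching score in stereo reconstruction.
    """
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0     # flat, textureless windows carry no match evidence
    return float(a @ b / denom)

patch = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(round(ncc(patch, 2.0 * patch + 5.0), 6))  # → 1.0 (gain/offset invariant)
```

Because the score is invariant to per-view gain and offset, it tolerates the illumination differences between cameras; the iterative scheme described above then warps these windows using the previous depth estimate before re-scoring.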