40,757 research outputs found
Accelerated gradient methods for total-variation-based CT image reconstruction
Total-variation (TV)-based Computed Tomography (CT) image reconstruction has
been shown experimentally to produce accurate reconstructions from
sparse-view data. In particular, TV-based reconstruction is well suited for
images with piecewise nearly constant regions. Computationally, however,
TV-based reconstruction is much more demanding, especially for 3D imaging, and
reconstruction from clinical data sets is far from real-time. This is
undesirable from a clinical perspective, and thus there is
an incentive to accelerate the solution of the underlying optimization problem.
The TV reconstruction can in principle be found by any optimization method, but
in practice the large-scale systems arising in CT image reconstruction preclude
the use of memory-demanding methods such as Newton's method. The simple
gradient method has much lower memory requirements, but exhibits slow
convergence. In the present work we consider the use of two accelerated
gradient-based methods, GPBB and UPN, for reducing the number of gradient
method iterations needed to achieve a high-accuracy TV solution in CT image
reconstruction. The former incorporates several heuristics from the
optimization literature such as Barzilai-Borwein (BB) step size selection and
nonmonotone line search. The latter uses a cleverly chosen sequence of
auxiliary points to achieve a better convergence rate. The methods are memory
efficient and equipped with a stopping criterion to ensure that the TV
reconstruction has indeed been found. An implementation of the methods (in C
with interface to Matlab) is available for download from
http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the
standard gradient method, applied to a 3D test problem with synthetic few-view
data. We find experimentally that for realistic parameters the proposed methods
significantly outperform the gradient method.
Comment: 4 pages, 2 figures
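The Barzilai-Borwein step-size rule mentioned in this abstract can be sketched in a few lines. The following toy example is not the authors' TVReg code: a small quadratic data-fit term stands in for the CT objective, and the initial step size and iteration count are illustrative choices.

```python
import numpy as np

def bb_gradient(grad, x0, iters=100):
    """Gradient descent with Barzilai-Borwein (BB) step-size selection.

    The BB step alpha = (s^T s) / (s^T y), with s = x_k - x_{k-1} and
    y = g_k - g_{k-1}, acts as a scalar approximation of the inverse Hessian.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1e-3  # small initial step; an illustrative choice
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = s @ y
        if abs(denom) < 1e-15:  # converged (or BB step undefined)
            return x_new
        alpha = (s @ s) / denom
        x, g = x_new, g_new
    return x

# Stand-in for the CT data-fit term: f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = bb_gradient(lambda x: A.T @ (A @ x - b), np.zeros(2))
```

Because the BB step is non-monotone, practical variants (such as GPBB) pair it with a nonmonotone line search; the sketch above omits that safeguard.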
Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images
Automated sample preparation and electron microscopy enables acquisition of
very large image data sets. These technical advances are of special importance
to the field of neuroanatomy, as 3D reconstructions of neuronal processes at
the nm scale can provide new insight into the fine-grained structure of the
brain. Segmentation of large-scale electron microscopy data is the main
bottleneck in the analysis of these data sets. In this paper we present a
pipeline that provides state-of-the-art reconstruction performance while
scaling to data sets in the GB-TB range. First, we train a random forest
classifier on interactive sparse user annotations. The classifier output is
combined with an anisotropic smoothing prior in a Conditional Random Field
framework to generate multiple segmentation hypotheses per image. These
segmentations are then combined into geometrically consistent 3D objects by
segmentation fusion. We provide qualitative and quantitative evaluation of the
automatic segmentation and demonstrate large-scale 3D reconstructions of
neuronal processes from a volume of brain
tissue over a cube of in each dimension corresponding to
1000 consecutive image sections. We also introduce Mojo, a proofreading tool
including semi-automated correction of merge errors based on sparse user
scribbles
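The hypotheses-and-fusion idea can be illustrated roughly as follows. This is not the authors' pipeline: a box-filtered probability map stands in for the random forest output and the CRF smoothing prior, multiple thresholds generate the segmentation hypotheses, and fusion is reduced to a simple majority vote.

```python
import numpy as np

def smooth2d(p, radius=1):
    """Box-filter a 2D probability map; a crude stand-in for the
    anisotropic smoothing prior of the Conditional Random Field."""
    out = np.zeros_like(p)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(p, dy, axis=0), dx, axis=1)
    return out / (2 * radius + 1) ** 2

def hypotheses(prob, thresholds=(0.4, 0.5, 0.6)):
    """Multiple binary segmentation hypotheses from one probability map."""
    sm = smooth2d(prob)
    return [sm > t for t in thresholds]

def fuse(masks):
    """Majority-vote fusion of the hypotheses (the paper's segmentation
    fusion additionally enforces geometric consistency across sections)."""
    return np.mean(masks, axis=0) > 0.5

prob = np.zeros((8, 8))
prob[2:6, 2:6] = 0.9        # toy classifier output: one bright object
seg = fuse(hypotheses(prob))
```

In the real pipeline the per-image hypotheses are fused into geometrically consistent 3D objects rather than voted per pixel; the sketch only conveys the hypothesize-then-fuse structure.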
Probabilistic 3D surface reconstruction from sparse MRI information
Surface reconstruction from magnetic resonance (MR) imaging data is
indispensable in medical image analysis and clinical research. A reliable and
effective reconstruction tool should (i) quickly predict accurate,
well-localised, high-resolution models, (ii) evaluate prediction uncertainty,
and (iii) work with as little input data as possible. Current
state-of-the-art (SOTA) deep learning 3D reconstruction methods, however,
often produce shapes of limited variability in a canonical pose, or lack
uncertainty evaluation.
In this paper, we present a novel probabilistic deep learning approach for
concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric
uncertainty prediction. Our method is capable of reconstructing large surface
meshes from three quasi-orthogonal MR imaging slices from limited training sets
whilst modelling the location of each mesh vertex through a Gaussian
distribution. Prior shape information is encoded using a built-in linear
principal component analysis (PCA) model. Extensive experiments on cardiac MR
data show that our probabilistic approach successfully assesses prediction
uncertainty while qualitatively and quantitatively outperforming SOTA methods
in shape prediction. Unlike prior SOTA methods, ours properly localises and
orients the prediction via a spatially aware neural network.
Comment: MICCAI 202
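The built-in linear PCA shape prior described above can be illustrated on synthetic data. Everything below is made up for the example: the "training set" is a noisy unit-square mesh rather than cardiac meshes, the number of retained modes is arbitrary, and the residual variance is only a crude stand-in for the per-vertex Gaussian uncertainty the paper predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: 20 meshes of 4 vertices, flattened to 8 coords,
# generated as noisy copies of a unit square.
base = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
shapes = base + 0.1 * rng.standard_normal((20, 8))

mean = shapes.mean(axis=0)
X = shapes - mean
# Linear PCA: the principal shape modes are the top right-singular vectors
# of the centred training matrix.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
modes = Vt[:3]                 # retain 3 shape modes (illustrative choice)
coeffs = X @ modes.T           # low-dimensional code per training shape
recon = mean + coeffs @ modes  # shapes reconstructed through the prior
# Residual variance per vertex coordinate: a crude proxy for the per-vertex
# Gaussian uncertainty modelled by the network.
vertex_var = ((shapes - recon) ** 2).mean(axis=0)
```

Projecting onto a few modes and reconstructing always reduces the residual relative to the mean shape, which is what makes such a prior useful when only three quasi-orthogonal slices constrain the surface.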
Scalable Dense Monocular Surface Reconstruction
This paper reports on a novel template-free monocular non-rigid surface
reconstruction approach. Existing techniques using motion and deformation cues
rely on multiple prior assumptions, are often computationally expensive and do
not perform equally well across different data sets. In contrast, the
proposed Scalable Monocular Surface Reconstruction (SMSR) combines strengths of
several algorithms, i.e., it is scalable with the number of points, can handle
sparse and dense settings as well as different types of motions and
deformations. We estimate camera pose by singular value thresholding and
proximal gradient descent. Our formulation adopts the alternating direction
method of multipliers (ADMM), which converges in linear time for large
point-track matrices. In
the proposed SMSR, trajectory space constraints are integrated by smoothing of
the measurement matrix. In the extensive experiments, SMSR is demonstrated to
consistently achieve state-of-the-art accuracy on a wide variety of data sets.
Comment: International Conference on 3D Vision (3DV), Qingdao, China, October
201
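Singular value thresholding, used in this abstract for camera-pose estimation, is the proximal operator of the nuclear norm. A minimal sketch on a synthetic low-rank-plus-noise matrix follows; the matrix sizes, noise level, and threshold are all illustrative, not values from the paper.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*.
    Shrink each singular value of M by tau and drop those that fall below it.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Synthetic rank-1 matrix plus small noise: thresholding suppresses the
# noise singular values and returns a low-rank estimate.
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(6), rng.standard_normal(4))
M_noisy = M + 0.01 * rng.standard_normal((6, 4))
L = svt(M_noisy, tau=0.1)
```

Because the noise only perturbs the trailing singular values by a small amount, any threshold between the noise level and the dominant singular value recovers a rank-1 estimate.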
DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image
3D reconstruction from a single image is a key problem in multiple
applications ranging from robotic manipulation to augmented reality. Prior
methods have tackled this problem through generative models which predict 3D
reconstructions as voxels or point clouds. However, these methods can be
computationally expensive and miss fine details. We introduce a new
differentiable layer for 3D data deformation and use it in DeformNet to learn a
model for 3D reconstruction-through-deformation. DeformNet takes an image
input, searches the nearest shape template from a database, and deforms the
template to match the query image. We evaluate our approach on the ShapeNet
dataset and show that: (a) the Free-Form Deformation (FFD) layer is a powerful
new building block for deep learning models that manipulate 3D data; (b)
DeformNet uses this FFD layer combined with shape retrieval for smooth,
detail-preserving 3D reconstruction of qualitatively plausible point clouds
with respect to a single query image; and (c) compared to other
state-of-the-art 3D reconstruction methods, DeformNet quantitatively matches
or outperforms their benchmarks by significant margins. For more information,
visit: https://deformnet-site.github.io/DeformNet-website/
Comment: 11 pages, 9 figures, NIP
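A free-form deformation layer of the kind described can be sketched with a bilinear control lattice (the usual FFD formulation, including DeformNet's, uses B-spline basis functions; bilinear weights keep the example short). The lattice size, query points, and displacements below are purely illustrative.

```python
import numpy as np

def ffd_2d(points, lattice_disp):
    """Free-form deformation in 2D: each point is displaced by bilinear
    interpolation of displacement vectors stored on a regular control
    lattice over the unit square. lattice_disp has shape (ny, nx, 2)."""
    ny, nx, _ = lattice_disp.shape
    out = points.copy()
    for i, (x, y) in enumerate(points):
        gx, gy = x * (nx - 1), y * (ny - 1)      # lattice coordinates
        x0 = min(int(np.floor(gx)), nx - 2)      # lower-left cell corner
        y0 = min(int(np.floor(gy)), ny - 2)
        tx, ty = gx - x0, gy - y0                # fractions inside the cell
        d = ((1 - tx) * (1 - ty) * lattice_disp[y0, x0]
             + tx * (1 - ty) * lattice_disp[y0, x0 + 1]
             + (1 - tx) * ty * lattice_disp[y0 + 1, x0]
             + tx * ty * lattice_disp[y0 + 1, x0 + 1])
        out[i] = (x + d[0], y + d[1])
    return out

# Setting every control displacement to (0.1, 0) translates the template.
disp = np.zeros((3, 3, 2))
disp[..., 0] = 0.1
pts = np.array([[0.5, 0.5], [0.25, 0.75]])
moved = ffd_2d(pts, disp)
```

Because the interpolation weights are smooth in the control displacements, such a layer is differentiable with respect to the lattice, which is what lets a network learn to deform a retrieved template toward the query image.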