Decomposition of heterogeneous organic matter and its long-term stabilization in soils
This is the publisher's final pdf. The published article is copyrighted by the Ecological Society of America and can be found at: www.esa.org/.

Soil organic matter is a complex mixture of material with heterogeneous biological, physical, and chemical properties. Decomposition models represent this heterogeneity either as a set of discrete pools with different residence times or as a continuum of qualities. It is unclear, though, whether these two approaches yield comparable predictions of organic matter dynamics. Here, we compare predictions from the two approaches and propose an intermediate approach to studying organic matter decomposition, based on concepts from continuous models implemented numerically. We found that the disagreement between discrete and continuous approaches can be considerable, depending on the degree of nonlinearity of the model and the simulation time. The two approaches can diverge substantially when predicting long-term processes in soils. Based on our alternative approach, which is a modification of the continuous quality theory, we explored the temporal patterns that emerge by treating substrate heterogeneity explicitly. The analysis suggests that the pattern of carbon mineralization over time is highly dependent on the degree and form of nonlinearity in the model, mostly expressed as differences in microbial growth and efficiency for different substrates. Moreover, short-term stabilization and destabilization mechanisms operating simultaneously result in long-term accumulation of carbon characterized by low decomposition rates, independent of the characteristics of the incoming litter. We show that representing heterogeneity in the decomposition process can lead to substantial improvements in our understanding of carbon mineralization and its long-term stability in soils.
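The discrete-pool side of the comparison is easy to make concrete. Below is a minimal sketch of a two-pool first-order decay model, the simplest discrete representation of substrate heterogeneity; the pool sizes and rate constants are illustrative values, not the paper's:

```python
import numpy as np

# Illustrative two-pool decay model, C_i' = -k_i * C_i.
# Rates and pool sizes are made-up values, not taken from the paper.
k = np.array([0.5, 0.01])    # fast and slow decay rates (1/yr)
C0 = np.array([60.0, 40.0])  # initial carbon stock in each pool

def carbon_remaining(t):
    """Total carbon left after t years under independent exponential decay."""
    return float(np.sum(C0 * np.exp(-k * t)))

print(carbon_remaining(0))              # 100.0
print(round(carbon_remaining(100), 2))  # the slow pool dominates at long times
```

A continuous-quality model would replace the two discrete rates with a distribution over `k`; the paper's point is that the two representations can disagree once decay becomes nonlinear.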
RevealNet: Seeing Behind Objects in RGB-D Scans
During 3D reconstruction, it is often the case that people cannot scan each
individual object from all views, resulting in missing geometry in the captured
scan. This missing geometry can be fundamentally limiting for many
applications, e.g., a robot needs to know the unseen geometry to perform a
precise grasp on an object. Thus, we introduce the task of semantic instance
completion: from an incomplete RGB-D scan of a scene, we aim to detect the
individual object instances and infer their complete object geometry. This will
open up new possibilities for interactions with objects in a scene, for
instance for virtual or robotic agents. We tackle this problem by introducing
RevealNet, a new data-driven approach that jointly detects object instances and
predicts their complete geometry. This enables a semantically meaningful
decomposition of a scanned scene into individual, complete 3D objects,
including hidden and unobserved object parts. RevealNet is an end-to-end 3D
neural network architecture that leverages joint color and geometry feature
learning. The fully-convolutional nature of our 3D network enables efficient
inference of semantic instance completion for 3D scans at scale of large indoor
environments in a single forward pass. We show that predicting complete object
geometry improves both 3D detection and instance segmentation performance. We
evaluate on both real and synthetic scan benchmark data for the new task, where
we outperform state-of-the-art approaches by over 15 mAP@0.5 on ScanNet, and
over 18 mAP@0.5 on SUNCG.

Comment: CVPR 2020
Tensorized Embedding Layers for Efficient Model Compression
The embedding layers transforming input words into real vectors are the key
components of deep neural networks used in natural language processing.
However, when the vocabulary is large, the corresponding weight matrices can be
enormous, which precludes their deployment in a limited resource setting. We
introduce a novel way of parametrizing embedding layers based on the Tensor
Train (TT) decomposition, which allows compressing the model significantly at
the cost of a negligible drop or even a slight gain in performance. We evaluate
our method on a wide range of benchmarks in natural language processing and
analyze the trade-off between performance and compression ratios for a wide
range of architectures, from MLPs to LSTMs and Transformers.
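The parameter saving is easy to see in a stripped-down, two-core version of a TT-matrix factorization of the embedding table. This is a sketch with made-up sizes and random cores, not the authors' implementation:

```python
import numpy as np

# Sketch of a TT-matrix embedding: vocab V = v1*v2, dim d = d1*d2, TT-rank r.
# All sizes below are illustrative assumptions.
v1, v2, d1, d2, r = 100, 100, 8, 8, 4   # 10,000-word vocab, 64-dim embeddings
rng = np.random.default_rng(0)
G1 = rng.normal(size=(v1, d1, r))       # first TT-core
G2 = rng.normal(size=(r, v2, d2))       # second TT-core

def embed(word_id):
    """Reconstruct one embedding row from the TT-cores."""
    i1, i2 = divmod(word_id, v2)        # split the flat word index in two
    # contract over the TT-rank: result has shape (d1, d2), flatten to d
    row = np.einsum('ar,rb->ab', G1[i1], G2[:, i2])
    return row.reshape(d1 * d2)

full_params = v1 * v2 * d1 * d2   # dense table: 640,000 parameters
tt_params = G1.size + G2.size     # cores: 3,200 + 3,200 = 6,400 parameters
print(embed(1234).shape, full_params // tt_params)   # (64,) 100
```

Longer TT chains (more cores) push the compression further; the trade-off studied in the paper is between the TT-rank `r` and downstream task performance.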
A Survey on Non-rigid 3D Shape Analysis
Shape is an important physical property of natural and manmade 3D objects
that characterizes their external appearances. Understanding differences
between shapes and modeling the variability within and across shape classes,
hereinafter referred to as \emph{shape analysis}, are fundamental problems to
many applications, ranging from computer vision and computer graphics to
biology and medicine. This chapter provides an overview of some of the recent
techniques that studied the shape of 3D objects that undergo non-rigid
deformations including bending and stretching. Recent surveys that covered some
aspects, such as classification, retrieval, recognition, and rigid or non-rigid
registration, focused on methods that use shape descriptors. Descriptors,
however, provide abstract representations that do not enable the exploration of
shape variability. In this chapter, we focus on recent techniques that treated
the shape of 3D objects as points in some high dimensional space where paths
describe deformations. Equipping the space with a suitable metric enables the
quantification of the range of deformations of a given shape, which in turn
enables (1) comparing and classifying 3D objects based on their shape, (2)
computing smooth deformations, i.e. geodesics, between pairs of objects, and
(3) modeling and exploring continuous shape variability in a collection of 3D
models. This article surveys and classifies recent developments in this field,
outlines fundamental issues, discusses their potential applications in computer
vision and graphics, and highlights opportunities for future research. Our
primary goal is to bridge the gap between various techniques that have been
often independently proposed by different communities including mathematics and
statistics, computer vision and graphics, and medical image analysis.
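The "shapes as points in a space" view can be made concrete with the weakest possible choice of metric. In the toy sketch below (illustrative only, not any specific method from the survey), matched and centered point sets live in a flat Euclidean space, where the geodesic between two shapes degenerates to linear interpolation:

```python
import numpy as np

# Toy shape space: shapes are centered, point-matched arrays; the metric is
# plain Euclidean distance, so geodesics are straight lines. Real methods in
# the survey use far richer (e.g. elastic, invariant) metrics.
def center(X):
    return X - X.mean(axis=0)

def shape_distance(X, Y):
    """Euclidean distance between centered, point-matched shapes."""
    return float(np.linalg.norm(center(X) - center(Y)))

def geodesic_point(X, Y, t):
    """Shape at parameter t in [0, 1] along the straight-line path."""
    return (1 - t) * center(X) + t * center(Y)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
diamond = np.array([[0.5, -0.2], [1.2, 0.5], [0.5, 1.2], [-0.2, 0.5]])
mid = geodesic_point(square, diamond, 0.5)
# the midpoint is equidistant from both endpoints under this metric
print(shape_distance(square, mid), shape_distance(mid, diamond))
```

The surveyed techniques differ precisely in replacing this naive metric with one that is invariant to reparametrization and measures bending and stretching meaningfully.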
Canonical and Compact Point Cloud Representation for Shape Classification
We present a novel compact point cloud representation that is inherently
invariant to scale, coordinate change and point permutation. The key idea is to
parametrize a distance field around an individual shape into a unique,
canonical, and compact vector in an unsupervised manner. We firstly project a
distance field to a 3D canonical space using singular value decomposition. We
then train a neural network for each instance to non-linearly embed its
distance field into network parameters. We employ a bias-free Extreme Learning
Machine (ELM) with ReLU activation units, which has scale-factor commutative
property between layers. We demonstrate the descriptiveness of the
instance-wise, shape-embedded network parameters by using them to classify
shapes in 3D datasets. Our learning-based representation requires minimal
augmentation and simple neural networks, where previous approaches demand
numerous representations to handle coordinate change and point permutation.

Comment: 16 pages, 5 figures
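A simplified version of the SVD canonicalization step can be sketched directly on point coordinates. The paper operates on a distance field; this toy version only illustrates the claimed invariances to translation, rotation, and uniform scale:

```python
import numpy as np

# Toy SVD canonicalization of a point cloud: center, rotate to principal
# axes, normalize scale. Not the paper's pipeline, just the invariance idea.
def canonicalize(points):
    X = points - points.mean(axis=0)           # translation invariance
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    X = X @ Vt.T                               # rotate to principal axes
    return X / np.linalg.norm(X)               # uniform-scale invariance

rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
Q = 3.0 * P @ R.T + np.array([5.0, -2.0, 1.0])  # rotated, scaled, shifted copy
A, B = canonicalize(P), canonicalize(Q)
# identical up to per-axis sign flips (the usual SVD sign ambiguity)
print(np.allclose(np.abs(A), np.abs(B), atol=1e-6))
```

The residual per-axis sign ambiguity is why such pipelines typically need an extra convention (or learning) on top of raw SVD to make the output fully unique.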
Global Gravity Inversion of Bodies with Arbitrary Shape
Gravity inversion allows us to constrain the interior mass distribution of a
planetary body using the observed shape, rotation, and gravity. Traditionally,
techniques developed for gravity inversion can be divided into Monte Carlo
methods, matrix inversion methods, and spectral methods. Here we employ both
matrix inversion and Monte Carlo in order to explore the space of exact
solutions, in a method which is particularly suited for arbitrary shape bodies.
We expand the mass density function using orthogonal polynomials, and map the
contribution of each term to the global gravitational field generated. This map
is linear in the density terms, and can be pseudo-inverted in the
under-determined regime using QR decomposition, to obtain a basis of the affine
space of exact interior structure solutions. As the interior structure
solutions are degenerate, assumptions have to be made in order to control their
properties, and these assumptions can be transformed into scalar functions and
used to explore the solutions space using Monte Carlo techniques. Sample
applications show that the range of solutions tend to converge towards the
nominal one as long as the generic assumptions made are correct, even in the
presence of moderate noise. We present the underlying mathematical formalism
and an analysis of how to impose specific features on the global solution,
including uniform solutions, gradients, and layered models. Analytical formulas
for the computation of the relevant quantities when the shape is represented
using several common methods are included in the Appendix.

Comment: 23 pages, 9 figures, 4 tables. Accepted for publication in Geophysical Journal International
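The structure of the inversion, a linear forward map pseudo-inverted in the under-determined regime plus a null-space basis whose coefficients can be sampled, can be sketched in a few lines. The matrices below are random toys, not a physical gravity kernel, and the null space is taken from an SVD where the paper uses QR:

```python
import numpy as np

# Toy under-determined linear inversion: observations g = A @ x with fewer
# observations than density-basis coefficients. One exact solution comes from
# the pseudo-inverse; adding any null-space vector yields another exact
# solution, which is the degeneracy a Monte Carlo step can then explore.
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 12))       # 5 observations, 12 density coefficients
x_true = rng.normal(size=12)
g = A @ x_true

x0 = np.linalg.pinv(A) @ g         # minimum-norm exact solution
# rows 5.. of Vt span the null space of A (SVD here; QR also works)
_, _, Vt = np.linalg.svd(A)
N = Vt[5:].T                       # 12 x 7 null-space basis

sample = x0 + N @ rng.normal(size=7)   # a random alternative exact solution
print(np.allclose(A @ x0, g), np.allclose(A @ sample, g))
```

Constraints on the interior (uniformity, gradients, layering) then amount to scoring or restricting the null-space coefficients rather than changing the data fit.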
PartNet: A Recursive Part Decomposition Network for Fine-grained and Hierarchical Shape Segmentation
Deep learning approaches to 3D shape segmentation are typically formulated as
a multi-class labeling problem. Existing models are trained for a fixed set of
labels, which greatly limits their flexibility and adaptivity. We opt for
top-down recursive decomposition and develop the first deep learning model for
hierarchical segmentation of 3D shapes, based on recursive neural networks.
Starting from a full shape represented as a point cloud, our model performs
recursive binary decomposition, where the decomposition network at all nodes in
the hierarchy share weights. At each node, a node classifier is trained to
determine the type (adjacency or symmetry) and stopping criteria of its
decomposition. The features extracted in higher level nodes are recursively
propagated to lower level ones. Thus, the meaningful decompositions in higher
levels provide strong contextual cues constraining the segmentations in lower
levels. Meanwhile, to increase the segmentation accuracy at each node, we
enhance the recursive contextual feature with the shape feature extracted for
the corresponding part. Our method segments a 3D shape in point cloud into an
unfixed number of parts, depending on the shape complexity, showing strong
generality and flexibility. It achieves the state-of-the-art performance, both
for fine-grained and semantic segmentation, on the public benchmark and a new
benchmark of fine-grained segmentation proposed in this work. We also
demonstrate its application for fine-grained part refinements in image-to-shape
reconstruction.

Comment: CVPR 2019; Corresponding author: Kai Xu ([email protected]);
Project page: www.kevinkaixu.net/projects/partnet.htm
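The control flow of the recursive decomposition can be sketched as follows, with the learned node classifier and split prediction replaced by trivial stubs. This shows only the tree-shaped recursion and the unfixed leaf count, not the networks:

```python
from dataclasses import dataclass, field

# Schematic of top-down recursive binary decomposition; the learned parts
# (node classifier, split network, stopping criterion) are stand-in stubs.
@dataclass
class Node:
    points: list                     # point indices covered by this node
    node_type: str = "leaf"          # "leaf", "adjacency", or "symmetry"
    children: list = field(default_factory=list)

def decompose(points, depth=0, max_depth=2):
    node = Node(points)
    # stand-in for the learned stopping criterion
    if depth >= max_depth or len(points) < 2:
        return node
    node.node_type = "adjacency"     # the node classifier would predict this
    mid = len(points) // 2           # the split network would predict this
    node.children = [decompose(points[:mid], depth + 1, max_depth),
                     decompose(points[mid:], depth + 1, max_depth)]
    return node

def leaves(node):
    """Collect leaf nodes: the final, unfixed-count part segmentation."""
    if not node.children:
        return [node]
    return [l for c in node.children for l in leaves(c)]

tree = decompose(list(range(8)))
print([l.points for l in leaves(tree)])   # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

Because recursion stops per node, simple shapes yield shallow trees with few parts while complex shapes decompose further, which is the flexibility the abstract highlights.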
Composite Shape Modeling via Latent Space Factorization
We present a novel neural network architecture, termed Decomposer-Composer,
for semantic structure-aware 3D shape modeling. Our method utilizes an
auto-encoder-based pipeline, and produces a novel factorized shape embedding
space, where the semantic structure of the shape collection translates into a
data-dependent sub-space factorization, and where shape composition and
decomposition become simple linear operations on the embedding coordinates. We
further propose to model shape assembly using an explicit learned part
deformation module, which utilizes a 3D spatial transformer network to perform
an in-network volumetric grid deformation, and which allows us to train the
whole system end-to-end. The resulting network allows us to perform part-level
shape manipulation, unattainable by existing approaches. Our extensive ablation
study, comparison to baseline methods and qualitative analysis demonstrate the
improved performance of the proposed method
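The claim that composition and decomposition become simple linear operations can be illustrated with hand-built subspace projectors. This is a toy stand-in for the learned, data-dependent factorization; the part names and dimensions are made up:

```python
import numpy as np

# Toy factorized embedding: each semantic part type owns a fixed subspace,
# so decomposition is projection and composition is addition. In the paper
# the factorization is learned; here it is hand-built for illustration.
dim = 6
P_seat = np.diag([1., 1., 1., 0., 0., 0.])   # projector onto "seat" subspace
P_legs = np.diag([0., 0., 0., 1., 1., 1.])   # projector onto "legs" subspace

chair_a = np.array([1., 2., 3., 4., 5., 6.])
chair_b = np.array([9., 8., 7., 6., 5., 4.])

# decomposition: project a whole-shape code onto a part subspace
seat_a = P_seat @ chair_a
legs_b = P_legs @ chair_b
# composition: mix-and-match parts by simple addition of projections
hybrid = seat_a + legs_b
print(hybrid)   # [1. 2. 3. 6. 5. 4.]
```

The projectors sum to the identity, so every embedding splits exactly into its part components; the learned version adds the deformation module on top to make the recomposed parts fit together geometrically.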
Autocomplete Textures for 3D Printing
Texture is an essential property of physical objects that affects aesthetics,
usability, and functionality. However, designing and applying textures to 3D
objects with existing tools remains difficult and time-consuming; it requires
proficient 3D modeling skills. To address this, we investigated an
auto-completion approach for efficient texture creation that automates the
tedious, repetitive process of applying texture while allowing flexible
customization. We developed techniques for users to select a target surface,
sketch and manipulate a texture with 2D drawings, and then generate 3D
printable textures onto an arbitrary curved surface. In a controlled experiment
our tool sped up texture creation by 80% over conventional tools, a performance
gain that grew with the complexity of the target surface. This result confirms
that auto-completion is a powerful approach to creating 3D textures.
Tidal alignment of galaxies
We develop an analytic model for galaxy intrinsic alignments (IA) based on
the theory of tidal alignment. We calculate all relevant nonlinear corrections
at one-loop order, including effects from nonlinear density evolution, galaxy
biasing, and source density weighting. Contributions from density weighting are
found to be particularly important and lead to bias dependence of the IA
amplitude, even on large scales. This effect may be responsible for much of the
luminosity dependence in IA observations. The increase in IA amplitude for more
highly biased galaxies reflects their locations in regions with large tidal
fields. We also consider the impact of smoothing the tidal field on halo
scales. We compare the performance of this consistent nonlinear model in
describing the observed alignment of luminous red galaxies with the linear
model as well as the frequently used "nonlinear alignment model," finding a
significant improvement on small and intermediate scales. We also show that the
cross-correlation between density and IA (the "GI" term) can be effectively
separated into source alignment and source clustering, and we accurately model
the observed alignment down to the one-halo regime using the tidal field from
the fully nonlinear halo-matter cross correlation. Inside the one-halo regime,
the average alignment of galaxies with density tracers no longer follows the
tidal alignment prediction, likely reflecting nonlinear processes that must be
considered when modeling IA on these scales. Finally, we discuss tidal
alignment in the context of cosmic shear measurements.

Comment: 31 pages, 5 figures, appendix. JCAP style. Submitted to JCAP
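For background, the tidal alignment theory referenced above starts from the linear relation between a galaxy's intrinsic shear and the tidal field of the (primordial) gravitational potential $\Phi_p$, with a free amplitude $C_1$; the one-loop corrections described in the abstract are built on top of this relation (symbols follow the standard tidal alignment literature, not notation quoted from the abstract):

```latex
\gamma^{I}_{(+,\times)} = -\frac{C_1}{4\pi G}
  \left(\partial_x^2 - \partial_y^2,\; 2\,\partial_x \partial_y\right)\Phi_p
```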