Generic Regular Decompositions for Parametric Polynomial Systems
This paper generalizes our earlier work in [19]. The two concepts introduced
there for generic zero-dimensional systems, generic regular decomposition (GRD)
and regular-decomposition-unstable (RDU) variety, are extended to parametric
systems that are not necessarily zero-dimensional. An algorithm is provided to compute GRDs
and the associated RDU varieties of parametric systems simultaneously on the
basis of the algorithm for generic zero-dimensional systems proposed in [19].
Then the solutions of any parametric system can be represented by the solutions
of finitely many regular systems and the decomposition is stable at any
parameter value in the complement of the associated RDU variety of the
parameter space. The related definitions and the results presented in [19] are
also generalized and a further discussion on RDU varieties is given from an
experimental point of view. The new algorithm has been implemented on the basis
of DISCOVERER with Maple 16 and experimented with a number of benchmarks from
the literature.
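To illustrate the phenomenon the paper studies (not its GRD algorithm), here is a minimal sketch in sympy of how the solution structure of a parametric system changes at special parameter values; the system `a*x - 1 = 0` is a hypothetical toy example chosen for this note:

```python
# Illustration only (not the paper's GRD algorithm): the solution structure
# of a parametric polynomial system can change on a "bad" parameter variety,
# which is the instability the RDU variety is designed to capture.
import sympy as sp

a, x = sp.symbols('a x')

generic = sp.solve(a*x - 1, x)               # generic case: a != 0
special = sp.solve((a*x - 1).subs(a, 0), x)  # degenerate case: a = 0

print(generic)  # [1/a] -- one solution for every parameter value off a = 0
print(special)  # []    -- no solution on the variety a = 0
```

A GRD-style decomposition represents the solutions by regular systems that remain valid for every parameter value outside such a bad variety.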
High-Dimensional Bayesian Geostatistics
With the growing capabilities of Geographic Information Systems (GIS) and
user-friendly software, statisticians today routinely encounter geographically
referenced data containing observations from a large number of spatial
locations and time points. Over the last decade, hierarchical spatiotemporal
process models have become widely deployed statistical tools for researchers to
better understand the complex nature of spatial and temporal variability.
However, fitting hierarchical spatiotemporal models often involves expensive
matrix computations whose cost grows cubically in the number of spatial
locations and time points. This renders such models infeasible for large data
sets. This article offers a focused review of two methods for
constructing well-defined highly scalable spatiotemporal stochastic processes.
Both these processes can be used as "priors" for spatiotemporal random fields.
The first approach constructs a low-rank process operating on a
lower-dimensional subspace. The second approach constructs a Nearest-Neighbor
Gaussian Process (NNGP) that ensures sparse precision matrices for its finite
realizations. Both processes can be exploited as a scalable prior embedded
within a rich hierarchical modeling framework to deliver full Bayesian
inference. These approaches can be described as model-based solutions for big
spatiotemporal datasets. The models ensure that the algorithmic complexity
requires approximately n floating point operations (flops) per iteration,
where n is the number of spatial locations. We compare these methods and
provide some insight into their methodological underpinnings.
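A minimal numerical sketch of the NNGP idea described above (assumed setup, not the review's implementation): conditioning each ordered location on at most m previously-ordered neighbors yields a sparse Cholesky-type factorization of the precision matrix, so each likelihood evaluation costs O(n m^3) rather than O(n^3). The 1-D locations and exponential covariance here are illustrative choices:

```python
import numpy as np

def exp_cov(s1, s2, phi=1.0):
    # Exponential covariance kernel between two sets of 1-D locations.
    return np.exp(-phi * np.abs(s1[:, None] - s2[None, :]))

def nngp_factors(locs, m=3, phi=1.0):
    # Returns a sparse coefficient matrix A and conditional variances d such
    # that the NNGP precision matrix is (I - A)^T diag(1/d) (I - A).
    n = len(locs)
    A = np.zeros((n, n))
    d = np.empty(n)
    d[0] = 1.0  # marginal variance of the first (ordered) location
    for i in range(1, n):
        nb = np.arange(max(0, i - m), i)          # previous-m neighbor set
        C_nn = exp_cov(locs[nb], locs[nb], phi)   # m x m neighbor covariance
        c_in = exp_cov(locs[i:i+1], locs[nb], phi).ravel()
        w = np.linalg.solve(C_nn, c_in)           # kriging weights
        A[i, nb] = w
        d[i] = 1.0 - w @ c_in                     # conditional variance
    return A, d

locs = np.linspace(0.0, 1.0, 200)
A, d = nngp_factors(locs, m=3)
# Each row of A has at most m nonzeros, so the precision matrix is sparse.
print(int((A != 0).sum(axis=1).max()))  # at most 3
```

Only m x m systems are solved, one per location, which is the source of the scalability the article reviews.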
Preparing sparse solvers for exascale computing.
Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing Project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices, where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current successes and upcoming challenges. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
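To make the discussion concrete, here is a minimal sketch of the compressed sparse row (CSR) matrix-vector product, the kernel at the heart of the iterative sparse solvers the paper targets. This is illustrative only; production exascale solvers use highly tuned, node-parallel variants of this loop:

```python
def csr_matvec(indptr, indices, data, x):
    # Computes y = A @ x for a matrix A stored in CSR form:
    # row i's nonzeros are data[indptr[i]:indptr[i+1]] at the column
    # positions indices[indptr[i]:indptr[i+1]].
    y = [0.0] * (len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# The 3x3 sparse matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]] in CSR form:
indptr  = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data    = [2.0, 1.0, 3.0, 4.0, 5.0]
print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

The irregular, indirect memory accesses in the inner loop are precisely what makes exposing concurrency and hiding latency difficult on many-device nodes.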
Decomposition of Optical Flow on the Sphere
We propose a number of variational regularisation methods for the estimation
and decomposition of motion fields on the 2-sphere. While motion estimation
is based on the optical flow equation, the presented decomposition models are
motivated by recent trends in image analysis. In particular we treat u + v
decomposition as well as hierarchical decomposition. Helmholtz decomposition of
motion fields is obtained as a natural by-product of the chosen numerical
method based on vector spherical harmonics. All models are tested on time-lapse
microscopy data depicting fluorescently labelled endodermal cells of a
zebrafish embryo.
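As a hedged illustration of the Helmholtz decomposition mentioned above: the sketch below performs the planar, periodic analogue via the FFT (not the vector-spherical-harmonics construction used on the sphere), splitting a vector field into a curl-free gradient part and a divergence-free remainder:

```python
import numpy as np

def helmholtz_2d(u, v):
    # FFT-based Helmholtz projection on a periodic grid: the curl-free part
    # has Fourier coefficients k (k . w_hat) / |k|^2; the divergence-free
    # part is the remainder.
    n = u.shape[0]
    kx = np.fft.fftfreq(n) * n
    ky = np.fft.fftfreq(n) * n
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0  # avoid 0/0; the mean mode stays in the remainder
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    dot = (KX * uh + KY * vh) / k2
    uc = np.real(np.fft.ifft2(KX * dot))
    vc = np.real(np.fft.ifft2(KY * dot))
    return (uc, vc), (u - uc, v - vc)

# A pure gradient field (cos x, 0) should land entirely in the curl-free part.
n = 32
t = 2 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(t, t, indexing='ij')
grad, rot = helmholtz_2d(np.cos(X), np.zeros((n, n)))
print(np.allclose(grad[0], np.cos(X)))  # True
```

On the sphere, the same projection is what falls out "for free" from expanding the field in vector spherical harmonics.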
Random Surfing Without Teleportation
In the standard Random Surfer Model, the teleportation matrix is necessary to
ensure that the final PageRank vector is well-defined. The introduction of this
matrix, however, results in serious problems and imposes fundamental
limitations to the quality of the ranking vectors. In this work, building on
the recently proposed NCDawareRank framework, we exploit the decomposition of
the underlying space into blocks, and we derive easy-to-check necessary and
sufficient conditions for random surfing without teleportation. Published in
Algorithms, Probability, Networks and Games, Springer-Verlag, 2015.
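A minimal sketch of the standard random-surfer iteration the abstract refers to: the teleportation term (damping factor alpha < 1) mixes in the uniform distribution and guarantees a well-defined PageRank vector for any link graph. This is the baseline model only; the paper's NCDawareRank block conditions are not implemented here:

```python
import numpy as np

def pagerank(P, alpha=0.85, tol=1e-12, max_iter=1000):
    # Power iteration for pi = alpha * pi @ P + (1 - alpha) * uniform,
    # where P is a row-stochastic transition matrix.
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = alpha * (pi @ P) + (1 - alpha) / n
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi

# Two-node cycle: with teleportation the ranking is well-defined and uniform.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(pagerank(P))  # [0.5 0.5]
```

Dropping teleportation (alpha = 1) makes this iteration oscillate on periodic chains such as the cycle above, which is exactly why conditions guaranteeing convergence without teleportation are of interest.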