Micro-Macro Analysis of Complex Networks
Complex systems have attracted considerable interest because of their wide range of applications, and are often studied via a "classic" approach: study a specific system, find a complex network behind it, and analyze the corresponding properties. This simple methodology has produced a great deal of interesting results, but relies on an often implicit underlying assumption: the level of detail at which the system is observed. However, in many situations, physical or abstract, the level of detail can be one out of many, and might also depend on intrinsic limitations in viewing the data at a different level of abstraction or precision. So a fundamental question arises: do the properties of a network depend on its level of observability, or are they invariant? If there is a dependence, then an apparently correct network model could in fact be just a bad approximation of the true behavior of a complex system. To answer this question, we propose a novel micro-macro analysis of complex systems that quantitatively describes how the structure of complex networks varies as a function of the detail level. To this end, we have developed a new telescopic algorithm that abstracts from the local properties of a system and reconstructs the original structure according to a fuzziness level. This way we can study what happens when passing from a fine level of detail ("micro") to a coarser scale ("macro"), analyze the corresponding behavior in this transition, and obtain a deeper analysis across the whole spectrum of scales. The results show that many important properties are not universally invariant with respect to the level of detail, but instead strongly depend on the specific level at which a network is observed. Therefore, caution should be taken in every situation where a complex network is considered, if its context allows for different levels of observability.
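The telescopic algorithm itself is not spelled out in this abstract; as a rough stand-in, the sketch below coarse-grains a network level by level (contracting modularity communities into super-nodes is an invented merge criterion, not the paper's method) and tracks how a few standard properties drift across detail levels:

import networkx as nx
from networkx.algorithms import community

# Toy "micro" network; the merge criterion below is illustrative only.
G = nx.barabasi_albert_graph(600, 3, seed=0)

level = 0
while G.number_of_nodes() > 10:
    print(f"level {level}: n={G.number_of_nodes()} "
          f"density={nx.density(G):.3f} "
          f"clustering={nx.average_clustering(G):.3f}")
    blocks = [set(c) for c in community.greedy_modularity_communities(G)]
    if len(blocks) == G.number_of_nodes():
        break  # no further coarsening possible
    # Contract each community into a super-node ("macro" view).
    G = nx.quotient_graph(G, blocks, relabel=True)
    level += 1

If the abstract's claim holds, the printed density and clustering should vary markedly from level to level rather than remain flat.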
Hierarchical Implicit Models and Likelihood-Free Variational Inference
Implicit probabilistic models are a flexible class of models defined by a
simulation process for data. They form the basis for theories which encompass
our understanding of the physical world. Despite this fundamental nature, the
use of implicit models remains limited due to challenges in specifying complex
latent structure in them, and in performing inference in such models with
large data sets. In this paper, we first introduce hierarchical implicit models
(HIMs). HIMs combine the idea of implicit densities with hierarchical Bayesian
modeling, thereby defining models via simulators of data with rich hidden
structure. Next, we develop likelihood-free variational inference (LFVI), a
scalable variational inference algorithm for HIMs. Key to LFVI is specifying a
variational family that is also implicit. This matches the model's flexibility
and allows for accurate approximation of the posterior. We demonstrate diverse
applications: a large-scale physical simulator for predator-prey populations in
ecology; a Bayesian generative adversarial network for discrete data; and a
deep implicit model for text generation.
Comment: Appears in Neural Information Processing Systems, 2017.
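A hierarchical implicit model is, in code, nothing more than a simulator with shared global parameters; a minimal sketch (the ecology-flavored names, rates, and dynamics are invented for illustration, not taken from the paper):

import numpy as np

rng = np.random.default_rng(0)

def simulate_him(num_groups=10, steps=50):
    """Sample from a hierarchical implicit model: global parameters are drawn
    from a prior, then each group runs a black-box simulator whose density is
    intractable but which is cheap to sample."""
    beta = rng.lognormal(mean=0.0, sigma=0.5, size=2)   # global (birth, death) rates
    data = []
    for _ in range(num_groups):
        z = rng.lognormal(0.0, 0.2, size=2)             # local latent rate multipliers
        prey, pred = 50.0, 10.0
        for _ in range(steps):                          # crude stochastic predator-prey step
            prey += 0.01 * (beta[0] * z[0] * prey - 0.1 * prey * pred) + rng.normal(0, 0.5)
            pred += 0.01 * (0.05 * prey * pred - beta[1] * z[1] * pred) + rng.normal(0, 0.5)
            prey, pred = max(prey, 0.0), max(pred, 0.0)
        data.append((prey, pred))
    return beta, np.array(data)

beta, x = simulate_him()
# (beta, x) can be sampled freely, but p(x | beta) has no tractable form: exactly
# the setting where a likelihood-free method such as LFVI, with an implicit
# variational family, is needed.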
Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis
Database theory and database practice are typically the domain of computer
scientists who adopt what may be termed an algorithmic perspective on their
data. This perspective is very different from the more statistical perspective
adopted by statisticians, scientific computing researchers, machine learners, and others who
work on what may be broadly termed statistical data analysis. In this article,
I will address fundamental aspects of this algorithmic-statistical disconnect,
with an eye to bridging the gap between these two very different approaches. A
concept that lies at the heart of this disconnect is that of statistical
regularization, a notion that has to do with how robust the output of an
algorithm is to the noise properties of the input data. Although it is nearly
completely absent from computer science, which historically has taken the input
data as given and modeled algorithms discretely, regularization in one form or
another is central to nearly every application domain that applies algorithms
to noisy data. By using several case studies, I will illustrate, both
theoretically and empirically, the nonobvious fact that approximate
computation, in and of itself, can implicitly lead to statistical
regularization. This and other recent work suggests that, by exploiting in a
more principled way the statistical properties implicit in worst-case
algorithms, one can in many cases satisfy the bicriteria of having algorithms
that are scalable to very large-scale databases and that also have good
inferential or predictive properties.
Comment: To appear in the Proceedings of the 2012 ACM Symposium on Principles of Database Systems (PODS 2012).
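One classical instance of the phenomenon the abstract describes (a textbook example, not one of the paper's case studies): stopping an iterative least-squares solver early behaves like ridge regularization, even though no penalty term is ever written down.

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(scale=2.0, size=n)

# Gradient descent on the *unregularized* least-squares objective.
step = 1.0 / np.linalg.norm(X, 2) ** 2        # safe step size: 1 / sigma_max(X)^2
lams = np.logspace(-3, 3, 200)
w = np.zeros(d)
for it in range(1, 201):
    w -= step * X.T @ (X @ w - y)
    if it in (5, 20, 200):
        # Which explicit ridge solution does the early-stopped iterate resemble?
        gaps = [np.linalg.norm(w - np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y))
                for lam in lams]
        print(f"iter {it:3d}: closest ridge lambda ~ {lams[np.argmin(gaps)]:.3g}, "
              f"distance {min(gaps):.3f}")
# Fewer iterations correspond to a larger effective ridge penalty: the
# approximate computation itself implicitly regularizes.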
Implicit ODE solvers with good local error control for the transient analysis of Markov models
Obtaining the transient probability distribution vector of a continuous-time Markov chain (CTMC) using an implicit ordinary differential equation (ODE) solver tends to be advantageous in terms of run-time computational cost when the product of the maximum output rate of the CTMC and the largest time of interest is large. In this paper, we show that when applied to the transient analysis of CTMCs, many implicit ODE solvers are such that the linear systems involved in their steps can be solved by using iterative methods with strict control of the 1-norm of the error. This allows the development of implementations of those ODE solvers for the transient analysis of CTMCs that can be more efficient and more accurate than more standard implementations.
Peer reviewed. Postprint (published version).
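A minimal sketch of the setting (the birth-death chain is an invented example, and a plain Jacobi sweep stands in for the strictly 1-norm-controlled iterative solvers the paper develops):

import numpy as np
import scipy.sparse as sp

# Toy birth-death CTMC generator Q (rows sum to zero).
n, lam, mu = 60, 0.8, 0.5
main = np.full(n, -(lam + mu)); main[0], main[-1] = -lam, -mu
Q = sp.diags([np.full(n - 1, mu), main, np.full(n - 1, lam)], [-1, 0, 1], format="csr")

h, T = 0.5, 20.0                       # large implicit steps: no stiffness restriction
p = np.zeros(n); p[0] = 1.0            # start in state 0
A = (sp.identity(n, format="csr") - h * Q.T).tocsr()   # backward Euler: A p_{k+1} = p_k
d = A.diagonal()
R = A - sp.diags(d)                    # off-diagonal part of A

for _ in range(int(T / h)):
    b, x = p, p.copy()
    while True:                        # Jacobi iteration; A is an M-matrix, so this converges
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x, 1) < 1e-12:   # stop on a 1-norm criterion
            x = x_new
            break
        x = x_new
    p = x

print("total probability:", p.sum())   # backward Euler conserves probability mass
print("head of p(T):", np.round(p[:5], 4))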
Matrix representations for toric parametrizations
In this paper we show that a surface in P^3 parametrized over a 2-dimensional
toric variety T can be represented by a matrix of linear syzygies if the base
points are finite in number and form locally a complete intersection. This
constitutes a direct generalization of the corresponding result over P^2
established in [BJ03] and [BC05]. Exploiting the sparse structure of the
parametrization, we obtain significantly smaller matrices than in the
homogeneous case and the method becomes applicable to parametrizations for
which it previously failed. We also treat the important case T = P^1 x P^1 in
detail and give numerous examples.
Comment: 20 pages.
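To fix ideas, a toy illustration of the kind of object involved (a standard textbook-style example, not reproduced from the paper): for the base-point-free parametrization of the smooth quadric over T = P^1 x P^1,
\[
\phi : \mathbb{P}^1 \times \mathbb{P}^1 \to \mathbb{P}^3, \qquad
(s,t) \mapsto (f_0 : f_1 : f_2 : f_3) = (1 : s : t : st),
\]
the linear syzygies $s f_0 - f_1 = t f_0 - f_2 = t f_1 - f_3 = s f_2 - f_3 = 0$, expanded in the monomial basis $\{1, s, t\}$, assemble into a matrix of linear forms
\[
M(x) =
\begin{pmatrix}
-x_1 & -x_2 & -x_3 & -x_3\\
 x_0 & 0    & 0    &  x_2\\
 0   & x_0  & x_1  &  0
\end{pmatrix},
\]
whose rank drops below 3 exactly on the image surface: every $3 \times 3$ minor of $M(x)$ is a multiple of the implicit equation $x_0 x_3 - x_1 x_2 = 0$.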
Implicit sampling for path integral control, Monte Carlo localization, and SLAM
The applicability and usefulness of implicit sampling, a recently developed
variationally enhanced sampling method, are explored in stochastic optimal
control, stochastic localization, and simultaneous localization and mapping
(SLAM). The theory is illustrated with examples, and it is found that implicit
sampling is significantly more efficient than current Monte Carlo methods in
test problems for all three applications.
High-order implicit palindromic discontinuous Galerkin method for kinetic-relaxation approximation
We construct a high order discontinuous Galerkin method for solving general
hyperbolic systems of conservation laws. The method is CFL-less, matrix-free,
has the complexity of an explicit scheme and can be of arbitrary order in space
and time. The construction is based on: (a) the representation of the system of
conservation laws by a kinetic vectorial representation with a stiff relaxation
term; (b) a matrix-free, CFL-less implicit discontinuous Galerkin transport
solver; and (c) a stiffly accurate composition method for time integration. The
method is validated on several one-dimensional test cases. It is then applied
to two-dimensional and three-dimensional test cases: flow past a cylinder,
magnetohydrodynamics, and multifluid sedimentation.
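Ingredient (a) can be illustrated with the simplest kinetic vectorial representation of this type, the classical two-velocity (Jin-Xin/BGK) relaxation; the paper's construction is more general:
\[
\partial_t f^{\pm} \pm \lambda\, \partial_x f^{\pm}
= \frac{1}{\varepsilon}\bigl(f^{\pm,\mathrm{eq}}(u) - f^{\pm}\bigr),
\qquad
f^{\pm,\mathrm{eq}}(u) = \frac{u}{2} \pm \frac{F(u)}{2\lambda},
\qquad
u = f^{+} + f^{-}.
\]
Summing the two equations gives $\partial_t u + \partial_x\bigl(\lambda(f^{+} - f^{-})\bigr) = 0$, and in the stiff limit $\varepsilon \to 0$ the flux $\lambda(f^{+} - f^{-})$ relaxes to $F(u)$, recovering $\partial_t u + \partial_x F(u) = 0$ under the subcharacteristic condition $\lambda \ge \max |F'(u)|$. The transport operators on the left are linear with constant velocities $\pm\lambda$, which is what makes a matrix-free, CFL-less implicit discontinuous Galerkin transport solver possible.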
Reconstruction of freeform surfaces for metrology
The application of freeform surfaces has increased, since their complex shapes closely express a product's functional specifications and they can now be machined with high accuracy. In particular, optical surfaces exhibit enhanced performance, especially when they take aspheric forms or more complex forms with multiple undulations. This study focuses on the reconstruction of complex shapes such as freeform optical surfaces, and on the characterization of their form. The computer graphics community has proposed various algorithms for constructing a mesh from a cloud of sample points. The mesh is a piecewise linear approximation of the surface and an interpolation of the point set. The mesh can further be processed for fitting parametric surfaces (Polyworks® or Geomagic®). The metrology community investigates direct fitting approaches. If the surface's mathematical model is given, fitting is a straightforward task. Nonetheless, if the surface model is unknown, fitting is only possible through the association of polynomial spline parametric surfaces. In this paper, a comparative study of methods proposed by the computer graphics community is presented to elucidate the advantages of these approaches. We stress the importance of the pre-processing phase as well as the significance of initial conditions. We further emphasize the importance of the meshing phase by noting that a proper mesh has two major advantages. First, it organizes the initially unstructured point set, provides insight into orientation, neighbourhood and curvature, and infers information on both the geometry and the topology of the surface. Second, it conveys a better segmentation of the space, leading to a correct patching and association of parametric surfaces.
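A minimal sketch of the final fitting step in the "unknown model" case (a generic smoothing B-spline surface fit to scattered points; the synthetic freeform shape, noise level, and smoothing factor are invented for illustration):

import numpy as np
from scipy.interpolate import bisplev, bisplrep

rng = np.random.default_rng(1)
# Synthetic "measured" point cloud on an invented freeform (cubic saddle + noise).
x = rng.uniform(-1.0, 1.0, 500)
y = rng.uniform(-1.0, 1.0, 500)
z_true = 0.05 * (x**3 - 3.0 * x * y**2) + 0.1 * x * y
z = z_true + rng.normal(scale=1e-3, size=x.size)       # measurement noise

# Associate a bicubic smoothing spline surface with the unstructured points.
tck = bisplrep(x, y, z, kx=3, ky=3, s=x.size * (1e-3) ** 2)

# Form error = residuals between the measurements and the fitted surface.
z_fit = np.array([bisplev(xi, yi, tck) for xi, yi in zip(x, y)])
resid = z - z_fit
print(f"RMS form error: {np.sqrt(np.mean(resid**2)):.2e}")
print(f"PV  form error: {resid.max() - resid.min():.2e}")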
Implicit particle methods and their connection with variational data assimilation
The implicit particle filter is a sequential Monte Carlo method for data
assimilation that guides the particles to the high-probability regions via a
sequence of steps that includes minimizations. We present a new and more
general derivation of this approach and extend the method to particle smoothing
as well as to data assimilation for perfect models. We show that the
minimizations required by implicit particle methods are similar to the ones one
encounters in variational data assimilation and explore the connection of
implicit particle methods with variational data assimilation. In particular, we
argue that existing variational codes can be converted into implicit particle
methods at a low cost, often yielding better estimates that are also equipped
with quantitative measures of the uncertainty. A detailed example is presented …
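Schematically, the connection the abstract refers to can be written as follows (a sketch of the standard implicit-sampling construction, not the paper's full derivation). The cost function minimized in strong-constraint variational data assimilation,
\[
F(x) = -\log p(x) - \log p(y \mid x),
\]
is exactly the negative log posterior of the states $x$ given the observations $y$. Implicit particle methods first compute $\phi = \min_x F(x)$, precisely the minimization a 4D-Var code performs, and then generate each particle $x_j$ by solving
\[
F(x_j) - \phi = \tfrac{1}{2}\,\xi_j^{\top}\xi_j,
\qquad \xi_j \sim \mathcal{N}(0, I),
\]
with importance weight $w_j \propto \bigl|\det(\partial x_j / \partial \xi_j)\bigr|$. The variational solver supplies the minimization, and the particles turn its point estimate into a weighted ensemble that quantifies uncertainty.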