Quantization as histogram segmentation: globally optimal scalar quantizer design in network systems
We propose a polynomial-time algorithm for optimal scalar quantizer design on discrete-alphabet sources. Special cases of the proposed approach yield optimal design algorithms for fixed-rate and entropy-constrained scalar quantizers, multi-resolution scalar quantizers, multiple description scalar quantizers, and Wyner-Ziv scalar quantizers. The algorithm guarantees globally optimal solutions for fixed-rate and entropy-constrained scalar quantizers and constrained optima for the other coding scenarios. We derive the algorithm by demonstrating the connection between scalar quantization, histogram segmentation, and the shortest path problem in a certain directed acyclic graph
Optimal multiple description and multiresolution scalar quantizer design
The author presents new algorithms for fixed-rate multiple description and multiresolution scalar quantizer design. The algorithms both run in time polynomial in the size of the source alphabet and guarantee globally optimal solutions. To the author's knowledge, these are the first globally optimal design algorithms for multiple description and multiresolution quantizers
Quantization as Histogram Segmentation: Optimal Scalar Quantizer Design in Network Systems
An algorithm for scalar quantizer design on discrete-alphabet sources is proposed. The proposed algorithm can be used to design fixed-rate and entropy-constrained conventional scalar quantizers, multiresolution scalar quantizers, multiple description scalar quantizers, and Wyner–Ziv scalar quantizers. The algorithm guarantees globally optimal solutions for conventional fixed-rate scalar quantizers and entropy-constrained scalar quantizers. For the other coding scenarios, the algorithm yields the best code among all codes that meet a given convexity constraint. In all cases, the algorithm run-time is polynomial in the size of the source alphabet. The algorithm derivation arises from a demonstration of the connection between scalar quantization, histogram segmentation, and the shortest path problem in a certain directed acyclic graph
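The segmentation view described in the two abstracts above admits a compact dynamic-programming sketch: cells of an optimal quantizer are contiguous segments of the sorted support, so the design reduces to a shortest-path computation over segment boundaries. The following is a minimal illustration for the fixed-rate, squared-error case only; the function name and interface are invented for this sketch and are not from the papers.

```python
import numpy as np

def optimal_fixed_rate_quantizer(x, p, K):
    """Globally optimal K-cell scalar quantizer for a discrete source
    under squared-error distortion, via dynamic programming over
    contiguous histogram segments (a shortest path in a DAG whose
    nodes are segment boundaries).

    x : sorted support points, p : their probabilities, K : cell count.
    Returns (cells, distortion), where cells are index ranges [i, j).
    """
    n = len(x)
    # cost[i][j] = distortion of a single cell covering x[i:j]
    cost = np.full((n + 1, n + 1), np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            w = p[i:j]
            c = np.dot(w, x[i:j]) / w.sum()        # centroid = MSE-optimal codeword
            cost[i][j] = np.dot(w, (x[i:j] - c) ** 2)
    # D[k][j] = best distortion covering x[:j] with exactly k cells
    D = np.full((K + 1, n + 1), np.inf)
    D[0][0] = 0.0
    back = np.zeros((K + 1, n + 1), dtype=int)
    for k in range(1, K + 1):
        for j in range(1, n + 1):
            for i in range(j):
                v = D[k - 1][i] + cost[i][j]
                if v < D[k][j]:
                    D[k][j], back[k][j] = v, i
    # backtrack the shortest path to recover cell boundaries
    cells, j = [], n
    for k in range(K, 0, -1):
        i = back[k][j]
        cells.append((i, j))
        j = i
    return cells[::-1], D[K][n]
```

For example, four equiprobable points clustered in two groups are split between the groups, as expected. The cubic-time inner loop is for clarity; the papers obtain better run-times by exploiting structure in the cost matrix.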
Multilevel Preconditioning of Discontinuous-Galerkin Spectral Element Methods, Part I: Geometrically Conforming Meshes
This paper is concerned with the design, analysis and implementation of
preconditioning concepts for spectral Discontinuous Galerkin discretizations of
elliptic boundary value problems. While presently known techniques realize a
growth of the condition numbers that is logarithmic in the polynomial degrees
when all degrees are equal and quadratic otherwise, our main objective is to
realize full robustness with respect to arbitrarily large locally varying
polynomial degrees, i.e., under mild grading constraints, condition
numbers stay uniformly bounded with respect to the mesh size and variable
degrees. The conceptual foundation of the envisaged preconditioners is the
auxiliary space method. The main conceptual ingredients that will be shown in
this framework to yield "optimal" preconditioners in the above sense are
Legendre-Gauss-Lobatto grids in connection with certain associated anisotropic
nested dyadic grids as well as specially adapted wavelet preconditioners for
the resulting low order auxiliary problems. Moreover, the preconditioners have
a modular form that facilitates somewhat simplified partial realizations. One
of the components can, for instance, be conveniently combined with domain
decomposition, at the expense though of a logarithmic growth of condition
numbers. Our analysis is complemented by quantitative experimental studies of
the main components.
Comment: 41 pages, 11 figures; Major revision: rearrangement of the contents for better readability, part on wavelet preconditioner added
A trivariate interpolation algorithm using a cube-partition searching procedure
In this paper we propose a fast algorithm for trivariate interpolation, which
is based on the partition of unity method for constructing a global interpolant
by blending local radial basis function interpolants and using locally
supported weight functions. The partition of unity algorithm is efficiently
implemented and optimized by connecting the method with an effective
cube-partition searching procedure. More precisely, we construct a cube
structure, which partitions the domain and strictly depends on the size of its
subdomains, so that the new searching procedure and, accordingly, the resulting
algorithm enable us to efficiently deal with a large number of nodes.
Complexity analysis and numerical experiments show high efficiency and accuracy
of the proposed interpolation algorithm
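The cube-partition searching procedure described above can be sketched as a spatial hash: points are binned into cubes whose side is tied to the subdomain size, so the local RBF interpolants only ever query the 27 cubes around a point instead of scanning all nodes. The function names and the 27-cube neighborhood radius below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from collections import defaultdict

def build_cube_index(pts, h):
    """Hash 3-D points into cubes of side h. Each point lands in the cube
    indexed by the floor of its coordinates divided by h."""
    cubes = defaultdict(list)
    for i, p in enumerate(pts):
        cubes[tuple(np.floor(p / h).astype(int))].append(i)
    return cubes

def points_near(cubes, q, h):
    """Indices of all points in the 27 cubes surrounding query point q,
    i.e. every candidate within distance ~h, found without a full scan."""
    cx, cy, cz = np.floor(np.asarray(q) / h).astype(int)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out.extend(cubes.get((cx + dx, cy + dy, cz + dz), []))
    return out
```

Building the index is O(N) and each local query touches a bounded number of cubes, which is what lets the partition-of-unity blend scale to large node sets.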
Community detection in temporal multilayer networks, with an application to correlation networks
Networks are a convenient way to represent complex systems of interacting
entities. Many networks contain "communities" of nodes that are more densely
connected to each other than to nodes in the rest of the network. In this
paper, we investigate the detection of communities in temporal networks
represented as multilayer networks. As a focal example, we study time-dependent
financial-asset correlation networks. We first argue that the use of the
"modularity" quality function---which is defined by comparing edge weights in
an observed network to expected edge weights in a "null network"---is
application-dependent. We differentiate between "null networks" and "null
models" in our discussion of modularity maximization, and we highlight that the
same null network can correspond to different null models. We then investigate
a multilayer modularity-maximization problem to identify communities in
temporal networks. Our multilayer analysis only depends on the form of the
maximization problem and not on the specific quality function that one chooses.
We introduce a diagnostic to measure \emph{persistence} of community structure
in a multilayer network partition. We prove several results that describe how
the multilayer maximization problem measures a trade-off between static
community structure within layers and larger values of persistence across
layers. We also discuss some computational issues that the popular "Louvain"
heuristic faces with temporal multilayer networks and suggest ways to mitigate
them.
Comment: 42 pages, many figures, final accepted version before typesetting
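The persistence diagnostic mentioned above can be sketched in a few lines: given a multilayer partition, count the (node, layer) pairs whose community assignment is unchanged in the next layer. The raw-count normalization here is an assumption for illustration; the paper defines the diagnostic precisely.

```python
def persistence(partition):
    """Persistence of a multilayer partition: number of (node, layer)
    pairs whose community label carries over to the next layer.

    partition : list of layers, each a list of per-node community labels.
    """
    return sum(
        a == b
        for prev, nxt in zip(partition, partition[1:])
        for a, b in zip(prev, nxt)
    )
```

A partition that keeps every node's label across all layers attains the maximum value N * (T - 1), which is the static end of the trade-off the paper analyzes.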
A Sparse Multi-Scale Algorithm for Dense Optimal Transport
Discrete optimal transport solvers do not scale well on dense large problems
since they do not explicitly exploit the geometric structure of the cost
function. In analogy to continuous optimal transport we provide a framework to
verify global optimality of a discrete transport plan locally. This allows
construction of an algorithm to solve large dense problems by considering a
sequence of sparse problems instead. The algorithm lends itself to being
combined with a hierarchical multi-scale scheme. Any existing discrete solver
can be used as an internal black box. Several cost functions, including the noisy
squared Euclidean distance, are explicitly detailed. We observe a significant
reduction of run-time and memory requirements.
Comment: Published "online first" in Journal of Mathematical Imaging and Vision, see DOI
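The idea of verifying global optimality of a transport plan locally can be illustrated, for the squared Euclidean cost, by the 2-cycle instance of c-cyclical monotonicity: no swap of two support pairs may reduce total cost. This is only a necessary condition and a simplification of the paper's criterion; the function name is invented for this sketch.

```python
import numpy as np

def two_cycle_check(X, Y, pairs, tol=1e-12):
    """Necessary optimality check for a discrete transport plan with
    squared Euclidean cost: for every two pairs (i1, j1), (i2, j2) in the
    plan's support, swapping the targets must not decrease total cost.

    pairs : list of (i, j) index pairs in the support of the plan.
    """
    def c(a, b):
        return float(np.sum((a - b) ** 2))
    for (i1, j1) in pairs:
        for (i2, j2) in pairs:
            if c(X[i1], Y[j1]) + c(X[i2], Y[j2]) > \
               c(X[i1], Y[j2]) + c(X[i2], Y[j1]) + tol:
                return False
    return True
```

Checking only pairs in the (sparse) support, rather than all couplings, is the kind of local certificate that makes the sparse multi-scale scheme viable.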
Post-processing partitions to identify domains of modularity optimization
We introduce the Convex Hull of Admissible Modularity Partitions (CHAMP)
algorithm to prune and prioritize different network community structures
identified across multiple runs of possibly various computational heuristics.
Given a set of partitions, CHAMP identifies the domain of modularity
optimization for each partition---i.e., the parameter-space domain where it
has the largest modularity relative to the input set---discarding partitions
with empty domains to obtain the subset of partitions that are "admissible"
candidate community structures that remain potentially optimal over indicated
parameter domains. Importantly, CHAMP can be used for multi-dimensional
parameter spaces, such as those for multilayer networks where one includes a
resolution parameter and interlayer coupling. Using the results from CHAMP, a
user can more appropriately select robust community structures by observing the
sizes of domains of optimization and the pairwise comparisons between
partitions in the admissible subset. We demonstrate the utility of CHAMP with
several example networks. In these examples, CHAMP focuses attention onto
pruned subsets of admissible partitions that are 20-to-1785 times smaller than
the sets of unique partitions obtained by community detection heuristics that
were input into CHAMP.
Comment: http://www.mdpi.com/1999-4893/10/3/9
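The core of CHAMP can be sketched in one dimension: for a single resolution parameter gamma, each partition's modularity is a linear function Q_i(gamma) = A_i - gamma * P_i, and the admissible partitions are exactly those on the upper envelope of these lines, each with an interval where it dominates. This simplified 1-D sweep is an illustration only; the paper handles multi-dimensional parameter spaces via convex hulls, and the function name is invented here.

```python
def champ_1d(lines, g_lo, g_hi):
    """Domains of modularity optimization for a 1-D resolution parameter.

    lines : list of (A, P) pairs, one per partition, with Q(g) = A - g * P.
    Returns [(partition_index, g_start, g_end), ...] covering [g_lo, g_hi];
    partitions with no interval (empty domain) are discarded implicitly.
    """
    domains, g = [], g_lo
    while g < g_hi:
        # current optimum: largest Q at g; ties broken by the flattest
        # line, i.e. the partition that stays optimal just to the right
        best = max(range(len(lines)),
                   key=lambda i: (lines[i][0] - g * lines[i][1], -lines[i][1]))
        A, P = lines[best]
        nxt = g_hi
        for A2, P2 in lines:
            if P2 < P:                      # decays slower, may overtake
                x = (A - A2) / (P - P2)     # crossing point of the lines
                if g < x < nxt:
                    nxt = x
        domains.append((best, g, nxt))
        g = nxt
    return domains
```

With two partitions, Q_0(g) = 1 and Q_1(g) = 2 - g, the second dominates on [0, 1) and the first on (1, 2], so both are admissible with adjacent domains.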