Improving Table Compression with Combinatorial Optimization
We study the problem of compressing massive tables within the
partition-training paradigm introduced by Buchsbaum et al. [SODA'00], in which
a table is partitioned by an off-line training procedure into disjoint
intervals of columns, each of which is compressed separately by a standard,
on-line compressor like gzip. We provide a new theory that unifies previous
experimental observations on partitioning and heuristic observations on column
permutation, all of which are used to improve compression rates. Based on the
theory, we devise the first on-line training algorithms for table compression,
which can be applied to individual files, not just continuously operating
sources; and also a new, off-line training algorithm, based on a link to the
asymmetric traveling salesman problem, which improves on prior work by
rearranging columns prior to partitioning. We demonstrate these results
experimentally. On various test files, the on-line algorithms provide 35-55%
improvement over gzip with negligible slowdown; the off-line reordering
provides up to 20% further improvement over partitioning alone. We also show
that a variation of the table compression problem is MAX-SNP hard.
Comment: 22 pages, 2 figures, 5 tables, 23 references. Extended abstract
appears in Proc. 13th ACM-SIAM SODA, pp. 213-222, 2002
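As a concrete illustration of the partition-training paradigm, the off-line
step can be phrased as a dynamic program over contiguous column intervals,
with zlib standing in for gzip. This is a minimal sketch, not the authors'
implementation; the quadratic number of compression calls ignores the paper's
optimizations, and the table encoding (tab/newline joined bytes) is an
assumption for the example:

```python
import zlib

def interval_size(rows, i, j):
    """Compressed size (zlib as a stand-in for gzip) of the table
    restricted to the contiguous column interval [i, j)."""
    blob = b"\n".join(b"\t".join(r[i:j]) for r in rows)
    return len(zlib.compress(blob))

def train_partition(rows):
    """Off-line training: dynamic program over contiguous column
    intervals minimizing total compressed size. Returns the optimal
    interval boundaries and the resulting total size."""
    m = len(rows[0])
    best = [0] + [None] * m      # best[j]: optimal size of columns [0, j)
    cut = [0] * (m + 1)          # cut[j]: start of the last interval
    for j in range(1, m + 1):
        for i in range(j):
            c = best[i] + interval_size(rows, i, j)
            if best[j] is None or c < best[j]:
                best[j], cut[j] = c, i
    intervals, j = [], m
    while j > 0:
        intervals.append((cut[j], j))
        j = cut[j]
    return intervals[::-1], best[m]

# Example: a tiny table with two naturally separable column groups.
table = [[b"2001", b"NY", b"red"], [b"2001", b"NJ", b"red"],
         [b"2002", b"NY", b"blue"], [b"2002", b"NJ", b"blue"]]
print(train_partition(table))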
Generalized Strong Preservation by Abstract Interpretation
Standard abstract model checking relies on abstract Kripke structures which
approximate concrete models by gluing together indistinguishable states, namely
by a partition of the concrete state space. Strong preservation for a
specification language L encodes the equivalence of concrete and abstract model
checking of formulas in L. We show how abstract interpretation can be used to
design abstract models that are more general than abstract Kripke structures.
Accordingly, strong preservation is generalized to abstract
interpretation-based models and precisely related to the concept of
completeness in abstract interpretation. The problem of minimally refining an
abstract model in order to make it strongly preserving for some language L can
be formulated as a minimal domain refinement in abstract interpretation in
order to get completeness w.r.t. the logical/temporal operators of L. It turns
out that this refined strongly preserving abstract model always exists and can
be characterized as a greatest fixed point. As a consequence, some well-known
behavioural equivalences, like bisimulation, simulation and stuttering
equivalence, and their corresponding partition refinement algorithms can be
elegantly
characterized in abstract interpretation as completeness properties and
refinements.
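Since the refined strongly preserving model is characterized as a greatest
fixed point, its construction can be pictured as plain Kleene iteration from
the top of a lattice of finite height. The sketch below is a generic
illustration under that assumption (a monotone refinement step on a finite
domain), not the paper's construction:

```python
def gfp(step, top):
    """Kleene iteration for a monotone operator on a lattice of finite
    height: start at the top element and apply `step` until the value
    stabilizes, which yields the greatest fixed point."""
    x = top
    while True:
        y = step(x)
        if y == x:
            return x
        x = y
```

Instantiated with a splitting step on the lattice of state partitions, this
iteration specializes to the naive refinement loop sketched after the next
abstract.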
Generalizing the Paige-Tarjan Algorithm by Abstract Interpretation
The Paige and Tarjan algorithm (PT) for computing the coarsest refinement of
a state partition which is a bisimulation on some Kripke structure is well
known. It is also well known in model checking that bisimulation is equivalent
to strong preservation of CTL, or, equivalently, of Hennessy-Milner logic.
Drawing on these observations, we analyze the basic steps of the PT algorithm
from an abstract interpretation perspective, which allows us to reason on
strong preservation in the context of generic inductively defined (temporal)
languages and of possibly non-partitioning abstract models specified by
abstract interpretation. This leads us to design a generalized Paige-Tarjan
algorithm, called GPT, for computing the minimal refinement of an abstract
interpretation-based model that strongly preserves some given language. It
turns out that PT is a straight instance of GPT on the domain of state
partitions for the case of strong preservation of Hennessy-Milner logic. We
provide a number of examples showing that GPT is of general use. We first show
how a well-known efficient algorithm for computing stuttering equivalence can
be viewed as a simple instance of GPT. We then instantiate GPT in order to
design a new efficient algorithm for computing simulation equivalence that is
competitive with the best available algorithms. Finally, we show how GPT
makes it possible to compute new strongly preserving abstract models by
providing an efficient
algorithm that computes the coarsest refinement of a given partition that
strongly preserves the language generated by the reachability operator.
Comment: Keywords: Abstract interpretation, abstract model checking, strong
preservation, Paige-Tarjan algorithm, refinement algorithm
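For a concrete reference point, the following is the textbook split-based
specification that PT implements efficiently (in O(|E| log |V|) time); it is
a naive quadratic sketch of coarsest-bisimulation refinement, not GPT itself:

```python
def coarsest_bisimulation(succ, init):
    """Naive split-based refinement: repeatedly split blocks that are
    unstable w.r.t. some block of the current partition. `succ[s]` is
    the successor set of state s; `init` is the initial partition
    given as an iterable of state sets (e.g. by atomic propositions)."""
    partition = [frozenset(b) for b in init]
    changed = True
    while changed:
        changed = False
        for splitter in list(partition):
            pre = {s for s in succ if succ[s] & splitter}
            refined = []
            for block in partition:
                inside, outside = block & pre, block - pre
                if inside and outside:      # block is unstable: split it
                    refined += [inside, outside]
                    changed = True
                else:
                    refined.append(block)
            partition = refined
    return partition

# States 0 and 1 both step to the sink 2, so they end up bisimilar:
succ = {0: {2}, 1: {2}, 2: set()}
print(coarsest_bisimulation(succ, [{0, 1, 2}]))  # blocks {0, 1} and {2}
```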
A jigsaw puzzle framework for homogenization of high porosity foams
An approach to homogenization of high porosity metallic foams is explored.
The emphasis is on the Alporas foam and its representation by means of
two-dimensional wire-frame models. Guaranteed upper and lower bounds on the
effective properties are derived by first-order homogenization, built on
uniform and minimal kinematic boundary conditions. This is combined
with the method of Wang tilings to generate sufficiently large material samples
along with their finite element discretization. The obtained results are
compared to experimental and numerical data available in literature and the
suitability of the two-dimensional setting itself is discussed.
Comment: 11 pages, 7 figures, 3 tables
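For orientation, the classical scalar analogues of such bounds are the Voigt
(uniform-field, upper) and Reuss (minimal-constraint, lower) averages. The
sketch below shows only these elementary bounds; it is a hypothetical
stand-in, not the paper's FE-based first-order homogenization:

```python
import numpy as np

def voigt_reuss(volume_fractions, moduli):
    """Elementary upper/lower bounds on an effective scalar modulus:
    Voigt = arithmetic (volume-weighted) mean, Reuss = harmonic mean.
    For a high-porosity foam one phase modulus is near zero, so the
    Reuss bound degenerates, which is one reason finer homogenization
    (as in the paper) is needed."""
    f = np.asarray(volume_fractions, dtype=float)
    k = np.asarray(moduli, dtype=float)
    upper = float(f @ k)                    # Voigt bound
    lower = float(1.0 / (f @ (1.0 / k)))    # Reuss bound
    return lower, upper

# E.g. 90% pores (negligible stiffness) in an aluminium-like matrix:
print(voigt_reuss([0.9, 0.1], [1e-6, 70.0]))
```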
Multiscale Markov Decision Problems: Compression, Solution, and Transfer Learning
Many problems in sequential decision making and stochastic control have
natural multiscale structure: sub-tasks are assembled together to accomplish
complex goals. Systematically inferring and leveraging hierarchical structure,
particularly beyond a single level of abstraction, has remained a longstanding
challenge. We describe a fast multiscale procedure for repeatedly compressing,
or homogenizing, Markov decision processes (MDPs), wherein a hierarchy of
sub-problems at different scales is automatically determined. Coarsened MDPs
are themselves independent, deterministic MDPs, and may be solved using
existing algorithms. The multiscale representation delivered by this procedure
decouples sub-tasks from each other and can lead to substantial improvements in
convergence rates both locally within sub-problems and globally across
sub-problems, yielding significant computational savings. A second fundamental
aspect of this work is that these multiscale decompositions yield new transfer
opportunities across different problems, where solutions of sub-tasks at
different levels of the hierarchy may be amenable to transfer to new problems.
Localized transfer of policies and potential operators at arbitrary scales is
emphasized. Finally, we demonstrate compression and transfer in a collection of
illustrative domains, including examples involving discrete and continuous
state spaces.
Comment: 86 pages, 15 figures
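A heavily simplified picture of one compression step: aggregate states into
clusters and average transitions and rewards within them. This is an
illustrative sketch under assumed inputs (a given clustering; tabular
transitions), not the paper's homogenization procedure:

```python
import numpy as np

def coarsen_mdp(P, R, clusters):
    """One illustrative aggregation step. P has shape (A, S, S) with
    P[a, s, t] a transition probability, R has shape (S,), and
    `clusters` is a list of index arrays partitioning the S states.
    Rows of the coarse transition tensor still sum to one."""
    A = P.shape[0]
    C = len(clusters)
    Pc = np.zeros((A, C, C))
    Rc = np.zeros(C)
    for c, idx in enumerate(clusters):
        Rc[c] = R[idx].mean()
        for d, jdx in enumerate(clusters):
            # probability of jumping from cluster c into cluster d,
            # averaged over the member states of c
            Pc[:, c, d] = P[:, idx][:, :, jdx].sum(axis=2).mean(axis=1)
    return Pc, Rc
```

A full multiscale scheme would recurse on the coarse MDP and solve the
induced sub-problems independently, which is where the decoupling and
transfer opportunities described above arise.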
Medical image computing and computer-aided medical interventions applied to soft tissues. Work in progress in urology
Until recently, Computer-Aided Medical Interventions (CAMI) and Medical
Robotics have focused on rigid and non-deformable anatomical structures.
Nowadays, special attention is paid to soft tissues, raising complex issues due
to their mobility and deformation. Minimally invasive digestive surgery was
probably
one of the first fields where soft tissues were handled through the development
of simulators, tracking of anatomical structures and specific assistance
robots. However, other clinical domains, urology for instance, are also
affected.
Indeed, laparoscopic surgery, new tumour destruction techniques (e.g. HIFU,
radiofrequency, or cryoablation), increasingly early detection of cancer, and
use of interventional and diagnostic imaging modalities, recently opened new
challenges to urologists and to scientists involved in CAMI. Over the last
five years, this has led to a very significant increase in research on, and
development of, computer-aided urology systems. In this paper, we describe
the main problems related to computer-aided diagnosis and therapy of soft
tissues and give a survey of the different types of assistance offered to the
urologist: robotization, image fusion, surgical navigation. Both research
projects and operational industrial systems are discussed.
High-frequency asymptotic compression of dense BEM matrices for general geometries without ray tracing
Wave propagation and scattering problems in acoustics are often solved with
boundary element methods. They lead to a discretization matrix that is
typically dense and large: its size and condition number grow with increasing
frequency. Yet, high frequency scattering problems are intrinsically local in
nature, which is well represented by highly localized rays bouncing around.
Asymptotic methods can be used to reduce the size of the linear system, even
making it frequency independent, by explicitly extracting the oscillatory
properties from the solution using ray tracing or analogous techniques.
However, ray tracing becomes expensive or even intractable in the presence of
(multiple) scattering obstacles with complicated geometries. In this paper, we
start from the same discretization that constructs the fully resolved large and
dense matrix, and achieve asymptotic compression by explicitly localizing the
Green's function instead. This results in a large but sparse matrix, with a
faster associated matrix-vector product and, as numerical experiments indicate,
a much improved condition number. Though an appropriate localization of the
Green's function also depends on asymptotic information unavailable for general
geometries, we can construct it adaptively in a frequency sweep from small to
large frequencies in a way which automatically takes into account a general
incident wave. We show that the approach is robust with respect to non-convex,
multiple and even near-trapping domains, though the compression rate is clearly
lower in the latter case. Furthermore, in spite of its asymptotic nature, the
method is robust with respect to low-order discretizations such as piecewise
constants, linears or cubics, commonly used in applications. On the other hand,
we do not decrease the total number of degrees of freedom compared to a
conventional discretization. The combination of the ...
Comment: 24 pages, 13 figures
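To illustrate the localization idea in its simplest form: multiply the 2D
Helmholtz kernel by a smooth radial cutoff, so that far-apart interactions
vanish and the discretization matrix becomes sparse. The window radii r0 and
r1 below are fixed hypothetical parameters, whereas the paper chooses the
localization adaptively in a frequency sweep:

```python
import numpy as np
from scipy.special import hankel1

def windowed_kernel(k, x, y, r0, r1):
    """2D Helmholtz Green's function (i/4) H0^(1)(k|x - y|) times a
    C^1 smoothstep cutoff: 1 for |x - y| <= r0, 0 for |x - y| >= r1.
    Matrix entries for points farther apart than r1 are exactly zero,
    so the assembled BEM matrix is sparse. x, y: (..., 2) arrays."""
    r = np.linalg.norm(x - y, axis=-1)
    g = 0.25j * hankel1(0, k * np.maximum(r, 1e-12))  # avoid r = 0
    t = np.clip((r1 - r) / (r1 - r0), 0.0, 1.0)
    return g * (t * t * (3.0 - 2.0 * t))              # smooth cutoff
```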
Exploring multimodal data fusion through joint decompositions with flexible couplings
A Bayesian framework is proposed to define flexible coupling models for joint
tensor decompositions of multiple data sets. Under this framework, a natural
formulation of the data fusion problem is to cast it in terms of a joint
maximum a posteriori (MAP) estimator. Data driven scenarios of joint posterior
distributions are provided, including general Gaussian priors and non-Gaussian
coupling priors. We present and discuss implementation issues of algorithms
used to obtain the joint MAP estimator. We also show how this framework can be
adapted to tackle the problem of joint decompositions of large datasets. In the
case of a conditional Gaussian coupling with a linear transformation, we give
theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao
bound. Simulations are reported for hybrid coupling models ranging from simple
additive Gaussian models, to Gamma-type models with positive variables and to
the coupling of data sets which are inherently of different size due to
different resolutions of the measurement devices.
Comment: 15 pages, 7 figures, revised version
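One way to picture the joint MAP formulation in its simplest matrix case: two
factorizations whose shared-mode factors are tied by a Gaussian coupling
prior, solved by alternating ridge-regularized least squares. This assumes X
and Y share their second dimension (e.g. two modalities observing the same
samples); it is a sketch of the idea, not the paper's tensor algorithms:

```python
import numpy as np

def coupled_map(X, Y, rank, lam=1.0, iters=200, seed=0):
    """Joint MAP estimate for X ~ A @ S, Y ~ B @ T with a Gaussian
    coupling prior S ~ N(T, lam**-1 * I). Alternating updates; each
    S/T solve is a ridge system pulling the factors together."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((rank, X.shape[1]))
    T = rng.standard_normal((rank, Y.shape[1]))
    I = np.eye(rank)
    for _ in range(iters):
        A = X @ np.linalg.pinv(S)        # least-squares update of A
        B = Y @ np.linalg.pinv(T)        # least-squares update of B
        # data fit plus penalty pulling S toward T (and vice versa)
        S = np.linalg.solve(A.T @ A + lam * I, A.T @ X + lam * T)
        T = np.linalg.solve(B.T @ B + lam * I, B.T @ Y + lam * S)
    return A, S, B, T
```

As lam grows the coupling approaches a hard equality constraint S = T, while
small lam recovers two nearly independent factorizations; this is the
"flexible coupling" knob in miniature.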