100 research outputs found
On the Complexity of Spill Everywhere under SSA Form
Compilation for embedded processors can be either aggressive (time-consuming
cross-compilation) or just in time (embedded and usually dynamic). The
heuristics used in dynamic compilation are highly constrained by limited
resources, time and memory in particular. Recent results on the SSA form open
promising directions for the design of new register allocation heuristics for
embedded systems and especially for embedded compilation. In particular,
heuristics based on tree scan with two separate phases -- one for spilling,
then one for coloring/coalescing -- seem good candidates for designing
memory-friendly, fast, and competitive register allocators. Still, also because of its side effect on power consumption, minimizing the overhead of loads and stores (the spilling problem) is an important issue. This paper provides an
exhaustive study of the complexity of the ``spill everywhere'' problem in the
context of the SSA form. Unfortunately, contrary to our initial hopes, many of the questions we raised lead to NP-completeness results. We identify some polynomial cases, but they are impractical in a JIT context. Nevertheless, they
can give hints to simplify formulations for the design of aggressive
allocators.

Comment: 10 pages
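As a toy illustration of the spilling problem discussed above (this is not the paper's formulation), the following sketch makes the "spill everywhere" decision on a straight-line program: a spilled variable lives in memory over its entire range, so spilling removes its whole interval from the register-pressure profile. The interval representation of live ranges and the greedy victim choice are both simplifying assumptions.

```python
# Illustrative sketch (not the paper's formulation): the "spill everywhere"
# decision on a straight-line program with live ranges as intervals.
# A spilled variable is kept in memory over its whole live range, so
# spilling removes its entire interval from the register-pressure profile.

def spill_everywhere(live_ranges, k):
    """live_ranges: dict var -> (start, end), inclusive program points.
    Greedily spills variables until at most k live ranges overlap at any
    point; the victim is the longest range crossing the hottest point.
    Returns the set of spilled variables."""
    spilled = set()
    while True:
        # Rebuild the pressure profile over program points.
        points = {}
        for v, (s, e) in live_ranges.items():
            if v not in spilled:
                for p in range(s, e + 1):
                    points.setdefault(p, []).append(v)
        hot = max(points, key=lambda p: len(points[p]), default=None)
        if hot is None or len(points[hot]) <= k:
            return spilled
        spilled.add(max(points[hot],
                        key=lambda v: live_ranges[v][1] - live_ranges[v][0]))
```

The NP-completeness results of the paper concern exact formulations of this decision; the greedy rule above is merely a cheap heuristic in the spirit of JIT-friendly allocators.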
On the effectiveness of the incremental approach to minimal chordal edge modification
Because edge modification problems are computationally difficult for most target graph classes, considerable attention has been devoted to inclusion-minimal edge modifications, which are usually polynomial-time computable and which can serve as an approximation of minimum cardinality edge modifications, albeit with no guarantee on the cardinality of the resulting modification set. Over the past fifteen years, the primary design approach used for inclusion-minimal edge modification algorithms has been based on a specific incremental scheme. Unfortunately, nothing guarantees that the set E of edge modifications of a graph G that can be obtained in this specific way spans all the inclusion-minimal edge modifications of G. Here, we focus on edge modification problems into the class of chordal graphs, and we show that in this case the set E may not contain any solution of minimum size, nor even a solution close to the minimum; in fact, we show that it may not contain a solution better than within an Ω(n) factor of the minimum. These results show strong limitations on the use of the currently favored algorithmic approach to inclusion-minimal edge modification in heuristics for computing a minimum cardinality edge modification. They suggest that further developments might be better served by other approaches.
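For context (an illustration, not the incremental scheme analyzed in the paper): any elimination ordering of a graph induces a chordal supergraph via its fill edges, and the resulting modification set depends heavily on the ordering and need not be inclusion-minimal, which is part of why minimal chordal edge modification requires dedicated algorithms. A minimal sketch:

```python
from itertools import combinations

def fill_in(adj, order):
    """adj: dict vertex -> set of neighbours; order: an elimination order.
    Eliminating each vertex and completing its remaining neighbourhood
    into a clique yields a chordal supergraph; the returned fill edges
    are the modification set induced by this ordering."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    fill = set()
    remaining = set(order)
    for v in order:
        remaining.discard(v)
        nbrs = adj[v] & remaining
        for a, b in combinations(nbrs, 2):
            if b not in adj[a]:
                adj[a].add(b)
                adj[b].add(a)
                fill.add(frozenset((a, b)))
    return fill
```

On a 4-cycle, for instance, any ordering adds exactly one chord, which happens to be minimum; on larger graphs a poorly chosen ordering can add far more edges than necessary.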
Two methods for the generation of chordal graphs
In this paper, two methods for the automatic generation of connected chordal graphs are proposed: the first is based on results concerning the dynamic maintenance of chordality under edge insertions; the second is based on expansion/merging of maximal cliques. In both methods, chordality is preserved throughout the whole generation process.
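A minimal sketch in the flavor of the first method, with a naive MCS-based chordality test standing in for the paper's dynamic maintenance structure (the rejection-sampling loop and all names are illustrative assumptions):

```python
import random
from itertools import combinations

def is_chordal(adj):
    """Naive test: the reverse of a maximum cardinality search (MCS)
    order is a perfect elimination ordering iff the graph is chordal
    (Tarjan & Yannakakis)."""
    weight = {v: 0 for v in adj}
    order, unnumbered = [], set(adj)
    while unnumbered:
        v = max(unnumbered, key=weight.get)
        unnumbered.discard(v)
        order.append(v)
        for u in adj[v]:
            if u in unnumbered:
                weight[u] += 1
    peo = order[::-1]
    pos = {v: i for i, v in enumerate(peo)}
    for v in peo:
        later = [u for u in adj[v] if pos[u] > pos[v]]
        if later:
            u = min(later, key=pos.get)  # first neighbour eliminated after v
            if not all(w == u or w in adj[u] for w in later):
                return False
    return True

def random_connected_chordal(n, extra_edges, seed=0):
    """Grow a connected chordal graph: start from a random spanning tree
    (chordal), then insert random non-edges, keeping each insertion only
    if chordality is preserved."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(1, n):
        u = rng.randrange(v)
        adj[u].add(v)
        adj[v].add(u)
    non_edges = [(a, b) for a, b in combinations(range(n), 2)
                 if b not in adj[a]]
    rng.shuffle(non_edges)
    added = 0
    for a, b in non_edges:
        if added == extra_edges:
            break
        adj[a].add(b)
        adj[b].add(a)
        if is_chordal(adj):
            added += 1
        else:
            adj[a].discard(b)
            adj[b].discard(a)
    return adj
```

Retesting chordality from scratch after every insertion is what the dynamic-maintenance results are designed to avoid; the sketch trades that efficiency for brevity.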
Network Filtering for Big Data: Triangulated Maximally Filtered Graph
We propose a network-filtering method, the Triangulated Maximally Filtered Graph (TMFG), that provides an approximate solution to the WEIGHTED MAXIMAL PLANAR GRAPH problem. The underlying idea of TMFG consists in building a triangulation that maximizes a score function associated with the amount of information retained by the network. TMFG uses as weights any arbitrary similarity measure to arrange data into a meaningful network structure that can be used for clustering, community detection and modelling. The method is fast, adaptable and scalable to very large datasets; it allows online updating and learning as new data can be inserted and deleted with combinations of local and non-local moves. Further, TMFG permits readjustments of the network in response to changes in the strength of the similarity measure. The method is based on local topological moves and can therefore take advantage of parallel and GPU computing. We discuss how this network-filtering method can be used intuitively and efficiently for big data studies and its significance from an information-theoretic perspective.
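A compact sketch of the TMFG construction described above: seed with a 4-clique, then repeatedly insert the vertex/triangular-face pair with the best score gain, splitting that face into three. The seed-selection heuristic and data layout here are assumptions; implementations differ in such details.

```python
import itertools

def tmfg(W):
    """W: symmetric n x n similarity matrix (list of lists, n >= 4).
    Returns the edge set (as frozensets) of a planar triangulation with
    3n - 6 edges, built greedily with vertex-into-face insertions."""
    n = len(W)
    # Seed clique: the 4 vertices of largest total similarity -- a common
    # heuristic; the choice of seed varies across implementations.
    seed = sorted(range(n), key=lambda v: -sum(W[v]))[:4]
    edges = {frozenset(p) for p in itertools.combinations(seed, 2)}
    faces = [frozenset(f) for f in itertools.combinations(seed, 3)]
    remaining = [v for v in range(n) if v not in seed]
    while remaining:
        # Best (vertex, face) pair by the score gained on insertion.
        _, v, face = max((sum(W[u][t] for t in f), u, f)
                         for u in remaining for f in faces)
        remaining.remove(v)
        faces.remove(face)
        edges.update(frozenset((v, t)) for t in face)
        faces.extend(frozenset((v, a, b))
                     for a, b in itertools.combinations(face, 2))
    return edges
```

Each insertion replaces one triangular face with three, so the result always has exactly 3n - 6 edges, the maximum for a planar graph.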
Split decomposition and graph-labelled trees: characterizations and fully-dynamic algorithms for totally decomposable graphs
In this paper, we revisit the split decomposition of graphs and give new
combinatorial and algorithmic results for the class of totally decomposable
graphs, also known as the distance hereditary graphs, and for two non-trivial
subclasses, namely the cographs and the 3-leaf power graphs. Precisely, we give
structural and incremental characterizations, leading to optimal fully-dynamic
recognition algorithms for vertex and edge modifications, for each of these
classes. These results rely on a new framework to represent the split
decomposition, namely the graph-labelled trees, which also captures the modular
decomposition of graphs and thereby unifies these two decomposition techniques.
The point of the paper is to use bijections between these graph classes and
trees whose nodes are labelled by cliques and stars. Doing so, we are also able
to derive an intersection model for distance hereditary graphs, which answers
an open problem.

Comment: extended abstract appeared in ISAAC 2007: Dynamic distance hereditary graphs using split decomposition. In International Symposium on Algorithms and Computation - ISAAC. Number 4835 in Lecture Notes, pages 41-51, 2007
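As a side note on the distance-hereditary graphs mentioned above (an illustration, not the paper's split-decomposition machinery): a connected graph is distance-hereditary exactly when it can be reduced to a single vertex by repeatedly removing a pendant vertex or one vertex of a twin pair (Bandelt & Mulder), which yields a simple polynomial-time recognition sketch:

```python
def is_distance_hereditary(adj):
    """Pruning characterization for connected graphs: reducible to one
    vertex by removing pendant vertices and twins iff distance-hereditary.
    Brute-force polynomial sketch; adj: dict vertex -> set of neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    while len(adj) > 1:
        removable = None
        for v in adj:
            if len(adj[v]) == 1:  # pendant vertex
                removable = v
                break
            # v is a (true or false) twin of some u: same neighbours
            # outside the pair itself.
            if any(u != v and adj[u] - {v} == adj[v] - {u} for u in adj):
                removable = v
                break
        if removable is None:
            return False
        for u in adj.pop(removable):
            adj[u].discard(removable)
    return True
```

The graph-labelled-tree framework of the paper turns this global pruning view into incremental characterizations supporting fully-dynamic updates.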
Conservative Sparsification for Efficient Approximate Estimation
Linear Gaussian systems often exhibit sparse structures. For systems which grow as a function of time, marginalisation of past states will eventually introduce extra non-zero elements into the information matrix of the Gaussian distribution. These extra non-zeros can lead to dense problems as these systems progress through time. This thesis proposes a method that can delete elements of the information matrix while maintaining guarantees about the conservativeness of the resulting estimate, with a computational complexity that is a function of the connectivity of the graph rather than the problem dimension. This sparsification can be performed iteratively and minimises the Kullback-Leibler Divergence (KLD) between the original and approximate distributions. This new technique is called Conservative Sparsification (CS). For large sparse graphs employing a Junction Tree (JT) for estimation, efficiency is related to the size of the largest clique. Conservative Sparsification can be applied to clique splitting in JTs, enabling approximate and efficient estimation in JTs with the same conservative guarantees as CS for information matrices. In distributed estimation scenarios which use JTs, CS can be performed in parallel and asynchronously on JT cliques. This approach usually results in a larger KLD compared with the optimal CS approach, but an upper bound on this increased divergence can be calculated with information locally available to each clique. This work has applications in large scale distributed linear estimation problems where the size of the problem or communication overheads make optimal linear estimation difficult.
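To illustrate only the conservativeness constraint involved above (this is not the thesis's KLD-minimizing Conservative Sparsification, just a cruder scaling construction): zero the unwanted off-diagonal entries of the information matrix, then shrink the result by the largest factor that keeps it below the original in the positive-semidefinite order, so the approximate covariance over-estimates the true one. The sparsity pattern and the scaling rule are assumptions of the sketch.

```python
import numpy as np

def scale_to_conservative(Lam, pattern):
    """Zero the entries of the information matrix Lam excluded by the
    boolean mask `pattern`, then shrink by the largest gamma <= 1 with
    gamma * Lam_s <= Lam in the PSD order, so the approximate covariance
    over-estimates (is conservative with respect to) the true one.
    NOT the KLD-optimal construction of the thesis -- a crude stand-in."""
    Lam_s = np.where(pattern, Lam, 0.0)
    # The sketch assumes the masked matrix is still positive semidefinite.
    assert np.linalg.eigvalsh(Lam_s).min() >= -1e-12
    # Largest admissible gamma: 1 / lambda_max(Lam^{-1} Lam_s).
    lam_max = np.linalg.eigvals(np.linalg.solve(Lam, Lam_s)).real.max()
    return min(1.0, 1.0 / lam_max) * Lam_s
```

Uniform shrinking inflates every marginal, which is why the thesis instead solves for the KLD-closest conservative approximation under the target sparsity pattern.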
Proximity Search for Maximal Subgraph Enumeration
Fully dynamic recognition of proper circular-arc graphs
We present a fully dynamic algorithm for the recognition of proper
circular-arc (PCA) graphs. The allowed operations on the graph involve the
insertion and removal of vertices (together with their incident edges) or edges.
Edge operations cost O(log n) time, where n is the number of vertices of the
graph, while vertex operations cost O(log n + d) time, where d is the degree of
the modified vertex. We also show incremental and decremental algorithms that
work in O(1) time per inserted or removed edge. As part of our algorithm, fully
dynamic connectivity and co-connectivity algorithms that work in O(log n) time
per operation are obtained. Also, an O(\Delta) time algorithm for determining
if a PCA representation corresponds to a co-bipartite graph is provided, where
\Delta is the maximum vertex degree. When the graph is
co-bipartite, a co-bipartition of each of its co-components is obtained within
the same amount of time.

Comment: 60 pages, 15 figures
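As a small illustration of the "proper" condition on circular-arc representations (no arc properly contains another), unrelated to the paper's O(log n) machinery; integer endpoints and brute-force comparison are simplifying assumptions:

```python
def arc_points(s, e, m):
    """Clockwise points of the m-point circle covered by the arc (s, e),
    wrapping past m - 1 back to 0 when s > e."""
    if s <= e:
        return set(range(s, e + 1))
    return set(range(s, m)) | set(range(0, e + 1))

def is_proper(arcs, m):
    """True iff no arc is properly contained in another arc."""
    pts = [arc_points(s, e, m) for s, e in arcs]
    return not any(a < b for a in pts for b in pts)
```

Maintaining this invariant under vertex and edge updates, rather than rechecking it wholesale, is what the fully dynamic algorithm achieves.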
- …