12 research outputs found
Efficient Computation of the Kauffman Bracket
This paper bounds the computational cost of computing the Kauffman bracket of
a link in terms of the crossing number of that link. Specifically, it is shown
that the image in the Kauffman bracket skein module of a tangle with a given
number of boundary points and crossings is a linear combination of basis
elements, with each coefficient a polynomial with integer coefficients and a
bounded number of nonzero terms, and that a link with $n$ crossings can be
built one crossing at a time as a sequence of tangles whose maximum number of
boundary points is bounded by $C\sqrt{n}$ for some constant $C$. From this it
follows that the computation of the Kauffman bracket of the link takes time
and memory a polynomial in $n$ times $2^{C\sqrt{n}}$.
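As an illustration of what is being computed, here is a minimal brute-force state-sum sketch of the Kauffman bracket (the naive $2^n$ algorithm, not the paper's $2^{C\sqrt{n}}$ one). It assumes crossings given in PD-code style as 4-tuples of edge labels, and one fixed smoothing convention (A joins the first pair of edges, B the other); the function names are illustrative.

```python
from itertools import product

# d = -A^2 - A^-2 as a Laurent polynomial in A: {exponent: coefficient}
D = {2: -1, -2: -1}

def poly_mul(p, q):
    """Multiply two Laurent polynomials represented as exponent->coeff dicts."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return r

def bracket(crossings):
    """Kauffman bracket of a link diagram given as a list of PD-style
    crossings (a, b, c, d), by summing over all 2^n smoothing states."""
    total = {}
    for state in product("AB", repeat=len(crossings)):
        parent = {}  # union-find over edge labels to count resulting loops
        def find(x):
            while parent.get(x, x) != x:
                x = parent.get(x, x)
            return x
        def union(x, y):
            parent[find(x)] = find(y)
        for (a, b, c, d), s in zip(crossings, state):
            if s == "A":                 # A-smoothing joins a-b and c-d
                union(a, b); union(c, d)
            else:                        # B-smoothing joins a-d and b-c
                union(a, d); union(b, c)
        edges = {e for cr in crossings for e in cr}
        loops = len({find(e) for e in edges})
        # contribution of this state: A^(#A - #B) * d^(loops - 1)
        term = {state.count("A") - state.count("B"): 1}
        for _ in range(loops - 1):
            term = poly_mul(term, D)
        for e, c in term.items():
            total[e] = total.get(e, 0) + c
    return {e: c for e, c in total.items() if c}
```

With this convention a one-crossing kink evaluates to the Reidemeister I factor $-A^3$, and a standard three-crossing trefoil diagram to a three-term Laurent polynomial, as expected.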
What Makes a Good Plan? An Efficient Planning Approach to Control Diffusion Processes in Networks
In this paper, we analyze the quality of a large class of simple dynamic
resource allocation (DRA) strategies which we name priority planning. Their aim
is to control an undesired diffusion process by distributing resources to the
contagious nodes of the network according to a predefined priority-order. In
our analysis, we reduce the DRA problem to the linear arrangement of the nodes
of the network. Under this perspective, we shed light on the role of a
fundamental characteristic of this arrangement, the maximum cutwidth, for
assessing the quality of any priority planning strategy. Our theoretical
analysis validates the role of the maximum cutwidth by deriving bounds for the
extinction time of the diffusion process. Finally, using the results of our
analysis, we propose a novel and efficient DRA strategy, called Maximum
Cutwidth Minimization, that outperforms other competing strategies in our
simulations.
Comment: 18 pages, 3 figures
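The maximum cutwidth of a linear arrangement, central to the analysis above, is easy to state in code. A minimal sketch (function and variable names are illustrative): for each gap between consecutive positions in the ordering, count the edges crossing it, and take the maximum.

```python
def cutwidth(order, edges):
    """Maximum cut of a linear arrangement: for each gap between
    consecutive positions in `order`, count the edges crossing it,
    and return the largest such count."""
    pos = {v: i for i, v in enumerate(order)}
    cuts = [0] * (len(order) - 1)
    for u, v in edges:
        lo, hi = sorted((pos[u], pos[v]))
        for gap in range(lo, hi):   # the edge crosses gaps lo .. hi-1
            cuts[gap] += 1
    return max(cuts)
```

For example, a path ordered naturally has cutwidth 1, while a star has cutwidth 3 with its center placed first but only 2 with the center in the middle, which is why the arrangement matters.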
Graph and String Parameters: Connections Between Pathwidth, Cutwidth and the Locality Number
We investigate the locality number, a recently introduced structural parameter for strings (with applications in pattern matching with variables), and its connection to two important graph parameters, cutwidth and pathwidth. These connections allow us to show that computing the locality number is NP-hard but fixed-parameter tractable (when the locality number or the alphabet size is treated as a parameter), and can be approximated with ratio O(sqrt(log opt) * log n). As a by-product, we also relate cutwidth via the locality number to pathwidth, which is of independent interest, since it improves the best currently known approximation algorithm for cutwidth. In addition to these main results, we also consider the possibility of greedy-based approximation algorithms for the locality number.
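For small alphabets the locality number can be computed by brute force, which makes the definition concrete: mark the letters of the word one at a time in some order, track how many maximal blocks of marked positions exist after each step, and minimize the peak over all marking orders. This exponential sketch (names are illustrative) is consistent with the NP-hardness and FPT results above.

```python
from itertools import permutations

def locality(word):
    """Locality number of a string, by brute force over all marking
    orders of the alphabet (exponential in the alphabet size)."""
    def blocks(marked):
        # number of maximal runs of consecutive marked positions
        ps = sorted(marked)
        return sum(1 for i, p in enumerate(ps) if i == 0 or p != ps[i - 1] + 1)
    best = None
    for order in permutations(set(word)):
        marked, worst = set(), 0
        for letter in order:
            marked |= {i for i, ch in enumerate(word) if ch == letter}
            worst = max(worst, blocks(marked))
        best = worst if best is None else min(best, worst)
    return best
```

For instance, "aba" is 1-local (mark b first, then a), while "abab" is 2-local under either marking order.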
Synchronous Context-Free Grammars and Optimal Linear Parsing Strategies
Synchronous Context-Free Grammars (SCFGs), also known as syntax-directed
translation schemata, are unlike context-free grammars in that they do not have
a binary normal form. In general, parsing with SCFGs takes space and time
polynomial in the length of the input strings, but with the degree of the
polynomial depending on the permutations of the SCFG rules. We consider linear
parsing strategies, which add one nonterminal at a time. We show that for a
given input permutation, the problems of finding the linear parsing strategy
with the minimum space and time complexity are both NP-hard.
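To make the search space concrete, here is a hedged brute-force sketch for tiny permutations. It models the cost of a linear strategy as the peak number of maximal contiguous intervals maintained on the source and target sides while symbols are added one at a time; this interval count is a stand-in for the fanout that drives the parsing complexity, and the exact cost measures in the paper differ in detail. All names are illustrative.

```python
from itertools import permutations

def intervals(positions):
    """Number of maximal runs of consecutive integers in a set."""
    ps = sorted(positions)
    return sum(1 for i, p in enumerate(ps) if i == 0 or p != ps[i - 1] + 1)

def best_linear_strategy(perm):
    """perm[i] is the target position of source symbol i. Try every order
    of adding symbols one at a time, and return the minimum over orders of
    the peak number of source- plus target-side intervals."""
    n = len(perm)
    best = None
    for order in permutations(range(n)):
        src, tgt, peak = set(), set(), 0
        for i in order:
            src.add(i)
            tgt.add(perm[i])
            peak = max(peak, intervals(src) + intervals(tgt))
        best = peak if best is None else min(best, peak)
    return best
```

The identity permutation admits a strategy that keeps one interval per side throughout (peak 2), whereas the classic 2413-type permutation [1, 3, 0, 2] forces a peak of 3: no pair of symbols is contiguous on both sides simultaneously.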
Tight Approximations for Graphical House Allocation
The Graphical House Allocation (GHA) problem asks: how can houses (each
with a fixed non-negative value) be assigned to the vertices of an undirected
graph $G$, so as to minimize the sum of absolute differences along the edges
of $G$? This problem generalizes the classical Minimum Linear Arrangement problem,
as well as the well-known House Allocation Problem from Economics. Recent work
has studied the computational aspects of GHA and observed that the problem is
NP-hard and inapproximable even on particularly simple classes of graphs, such
as vertex disjoint unions of paths. However, the dependence of any
approximations on the structural properties of the underlying graph had not
been studied.
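The objective is simple to state in code. A minimal brute-force sketch (function names are illustrative; feasible only for tiny instances, consistent with the NP-hardness above):

```python
from itertools import permutations

def min_envy(edges, values):
    """Graphical house allocation by brute force: assign the house values
    to the graph's vertices so that the sum of |value difference| over
    the edges is minimized."""
    vertices = sorted({v for e in edges for v in e})
    best = None
    for perm in permutations(values):
        assign = dict(zip(vertices, perm))
        cost = sum(abs(assign[u] - assign[v]) for u, v in edges)
        best = cost if best is None else min(best, cost)
    return best
```

On a three-vertex path with house values 1, 2, 3, placing the values in sorted order along the path is optimal (total envy 2); on a star, the single outlier value is best placed on a leaf.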
In this work, we give a nearly complete characterization of the
approximability of GHA. We present algorithms to approximate the optimal envy
on general graphs, trees, planar graphs, bounded-degree graphs, and
bounded-degree planar graphs. For each of these graph classes, we then prove
matching lower bounds, showing that in each case, no significant improvement
can be attained unless P = NP. We also present general approximation ratios as
a function of structural parameters of the underlying graph, such as treewidth;
these match the tight upper bounds in general, and are significantly better
approximations for many natural subclasses of graphs. Finally, we investigate
the special case of bounded-degree trees in some detail. We first refute a
conjecture by Hosseini et al. [2023] about the structural properties of exact
optimal allocations on binary trees by means of a counterexample on a
complete binary tree of sufficient depth. This refutation, together with our hardness results on
trees, might suggest that approximating the optimal envy even on complete
binary trees is infeasible. Nevertheless, we present a linear-time algorithm
that attains an approximation guarantee on complete binary trees.
A satisfiability procedure for quantified Boolean formulae
We present a satisfiability tester QSAT for quantified Boolean formulae, together with a restriction of QSAT to unquantified conjunctive normal form formulae. QSAT makes use of procedures which replace subformulae of a formula by equivalent formulae. By a sequence of such replacements, the original formula can be simplified to true or false. It may also be necessary to transform the original formula to generate a subformula to replace. The restricted procedure eliminates collections of variables from an unquantified clause-form formula until all variables have been eliminated. Both procedures can be applied to hardware verification and symbolic model checking. Results of an implementation of the restricted procedure are described, as well as some complexity results for both. QSAT runs in linear time on a class of quantified Boolean formulae related to symbolic model checking. We present the class of “long and thin” unquantified formulae and give evidence that this class is common in applications. We also give theoretical and empirical evidence that the restricted procedure is often faster than Davis–Putnam-type satisfiability checkers and ordered binary decision diagrams (OBDDs) on this class of formulae, including an example where it is exponentially faster than BDDs.
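The variable-elimination idea described for the unquantified CNF case resembles classical Davis-Putnam resolution: repeatedly pick a variable, replace the clauses containing it by all their resolvents, and declare the formula unsatisfiable if the empty clause ever appears. A minimal sketch of that classical procedure, not of the paper's actual implementation:

```python
def dp_satisfiable(clauses):
    """Davis-Putnam variable elimination. Clauses are collections of
    nonzero ints (positive literal = variable, negative = negation).
    Returns True iff the CNF formula is satisfiable."""
    clauses = {frozenset(c) for c in clauses}
    while clauses:
        if frozenset() in clauses:       # empty clause derived: UNSAT
            return False
        var = abs(next(iter(next(iter(clauses)))))
        pos = {c for c in clauses if var in c}
        neg = {c for c in clauses if -var in c}
        rest = clauses - pos - neg
        resolvents = set()
        for p in pos:
            for q in neg:
                r = (p - {var}) | (q - {-var})
                if not any(-lit in r for lit in r):   # drop tautologies
                    resolvents.add(frozenset(r))
        clauses = rest | resolvents      # `var` is now fully eliminated
    return True                          # all clauses satisfied away: SAT
```

Each pass removes one variable entirely, so the loop terminates; the clause set can blow up exponentially in the worst case, which is why the "long and thin" structure discussed above matters in practice.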
Related Orderings of AT-Free Graphs
An ordering of a graph G is a bijection of V(G) to {1, . . . , |V(G)|}. In this thesis, we consider the complexity of two types of ordering problems. The first type of problem we consider aims at minimizing objective functions related to an ordering of the graph. We consider the problems Cutwidth, Imbalance, and Optimal Linear Arrangement. We also consider a problem of another type: S-End-Vertex, where S is one of the following search algorithms: breadth-first search (BFS), lexicographic breadth-first search (LBFS), depth-first search (DFS), and maximal neighbourhood search (MNS). This problem asks if a specified vertex can be the last vertex in an ordering generated by S. We show that, for each type of problem, orderings for one problem may be related to orderings for another problem of that type.
We show that, for any graph, there is always a cutwidth-minimal ordering in which equivalence classes of true twins are grouped, where true twins are vertices with the same closed neighbourhood. This enables a fixed-parameter tractable (FPT) algorithm for Cutwidth parameterized by the edge clique cover number of the graph and by a new parameter, the restricted twin cover number of the graph. The restricted twin cover number generalizes the vertex cover number of a graph, and is the smallest value k ≥ 0 such that there is a twin cover T of the graph and k − |T| non-trivial components of G − T.
We show that, for any graph, there is also always an imbalance-minimal ordering in which equivalence classes of true twins are grouped. We give a polynomial-time algorithm for this problem on superfragile graphs and on subsets of proper interval graphs, both subclasses of AT-free graphs. An asteroidal triple (AT) is a triple of independent vertices x, y, z such that between every pair of vertices in the triple, there is a path that does not intersect the closed neighbourhood of the third. A graph without an asteroidal triple is said to be AT-free. We also provide closed formulas for Imbalance on some small graph classes.
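The imbalance objective itself is straightforward to evaluate; a minimal sketch (names illustrative), with a brute-force minimizer usable only on tiny graphs:

```python
from itertools import permutations

def imbalance(order, edges):
    """Imbalance of a vertex ordering: for each vertex, the absolute
    difference between its numbers of left and right neighbours,
    summed over all vertices."""
    pos = {v: i for i, v in enumerate(order)}
    total = 0
    for v in order:
        deg = sum(1 for u, w in edges if v in (u, w))
        left = sum(1 for u, w in edges if v in (u, w)
                   and pos[u if w == v else w] < pos[v])
        total += abs(deg - 2 * left)   # |left - right| = |deg - 2*left|
    return total

def min_imbalance(vertices, edges):
    """Minimum imbalance over all orderings (brute force)."""
    return min(imbalance(o, edges) for o in permutations(vertices))
```

On a three-vertex path the minimum imbalance is 2: each degree-1 endpoint contributes at least 1 in any ordering.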
In the FPT setting, we improve algorithms for Imbalance parameterized by the vertex cover number of the input graph and show that the problem does not have a polynomially sized kernel for the same parameter unless NP ⊆ coNP/poly.
We show that Optimal Linear Arrangement also has a polynomial-time algorithm for superfragile graphs and an FPT algorithm with respect to the restricted twin cover number.
Finally, we consider S-End-Vertex for BFS, LBFS, DFS, and MNS. We perform the first systematic study of the problem on bipartite permutation graphs, a subset of AT-free graphs. We show that for BFS and MNS, the problem has a polynomial time solution. We improve previous results for LBFS, obtaining a linear time algorithm. For DFS, we establish a linear time algorithm. All the results follow from the linear structure of bipartite permutation graphs.
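The BFS end-vertex question can be checked directly on tiny graphs by enumerating every BFS run, branching over the order in which newly discovered neighbours are queued. A brute-force sketch (exponential; the thesis's point is that structured graph classes admit far better algorithms), with illustrative names:

```python
from itertools import permutations

def bfs_orders(adj, start):
    """Generate every BFS visit order of a connected graph from `start`,
    branching over the enqueue order of newly seen neighbours."""
    def rec(queue, seen, order):
        if not queue:
            yield order
            return
        u, rest = queue[0], queue[1:]
        new = [v for v in adj[u] if v not in seen]
        for perm in permutations(new):
            yield from rec(rest + list(perm), seen | set(perm),
                           order + list(perm))
    yield from rec([start], {start}, [start])

def is_bfs_end_vertex(adj, v):
    """Can v be the last vertex of some BFS ordering, from any start?"""
    return any(order[-1] == v
               for start in adj
               for order in bfs_orders(adj, start))
```

On the path a-b-c, the endpoints can end a BFS ordering but the middle vertex b cannot: every BFS from a or c ends at the far endpoint, and a BFS from b visits b first.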
Structural issues and energy efficiency in data centers
Mención Internacional en el título de doctor (International Mention in the doctoral degree).
With the rise of cloud computing, data centers have been called to play a central role in today's Internet. Despite this relevance, they are probably still far from their zenith, due to the ever-increasing demand for content to be stored in and distributed by the cloud, the need for computing power, and the larger and larger amounts of data being analyzed by top companies such as Google, Microsoft or Amazon.
However, everything is not always a bed of roses. Having a data center entails two major issues: they are terribly expensive to build, and they consume huge amounts of power and are therefore also terribly expensive to maintain. For this reason, cutting down the cost of building data centers and increasing their energy efficiency (and hence reducing their carbon footprint) has been one of the hottest research topics in recent years. In this thesis we propose different techniques that can have an impact on both the building and the maintenance costs of data centers of any size, from small scale to large flagship data centers.
The first part of the thesis is devoted to structural issues. We start by analyzing the bisection (band)width of a topology, of product graphs in particular, a useful parameter to compare and choose among different data center topologies. In that same part we describe the problem of deploying the servers in a data center as a Multidimensional Arrangement Problem (MAP) and propose a heuristic to reduce the deployment and wiring costs.
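The bisection (band)width mentioned above can be computed exactly by brute force on small topologies, which makes it a convenient yardstick for comparing candidate networks before resorting to the product-graph analysis. A minimal sketch (names illustrative):

```python
from itertools import combinations

def bisection_width(vertices, edges):
    """Minimum number of edges crossing a balanced vertex partition,
    by brute force over all half-sized vertex subsets (small graphs)."""
    vs = list(vertices)
    half = len(vs) // 2
    best = None
    for side in combinations(vs, half):
        s = set(side)
        cut = sum(1 for u, v in edges if (u in s) != (v in s))
        best = cut if best is None else min(best, cut)
    return best
```

A 4-cycle has bisection width 2 (any balanced cut severs two edges), while a 4-vertex path has bisection width 1.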
We target energy efficiency in data centers in the second part of the thesis. We first propose a method to reduce the energy consumption in the data center network: rate adaptation. Rate adaptation is based on the idea of energy proportionality and aims to consume power on network devices proportionally to the load on their links. Our analysis proves that just using rate adaptation we may achieve average energy savings in the order of a 30-40% and up to a 60% depending on the network topology.
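The rate-adaptation idea can be sketched in a few lines: run each link at the lowest available rate that still carries its load, and compare the resulting power draw with always running at the top rate. The rate/power figures below are purely hypothetical, for illustration; the thesis's 30-60% figures come from its own topologies and measurements.

```python
def link_power(load, rates):
    """Rate adaptation: run a link at the lowest available rate that
    still carries its load. `rates` maps capacity -> power draw
    (all numbers hypothetical)."""
    feasible = [(cap, p) for cap, p in sorted(rates.items()) if cap >= load]
    if not feasible:
        raise ValueError("load exceeds the highest available rate")
    return feasible[0][1]

def average_savings(loads, rates):
    """Fraction of energy saved versus always running at the top rate."""
    top = max(rates.values())
    adapted = sum(link_power(l, rates) for l in loads)
    return 1 - adapted / (top * len(loads))
```

With mostly lightly loaded links, the savings are dominated by how little power the low rates draw, which is the energy-proportionality argument in a nutshell.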
We continue by characterizing the power requirements of a data center server given that, in order to properly increase the energy efficiency of a data center, we first need to understand how energy is being consumed. We present an exhaustive empirical characterization of the power requirements of multiple components of data center servers, namely the CPU, the disks, and the network card. To do so, we devise different experiments to stress these components, taking into account the multiple available frequencies as well as the fact that we are working with multicore servers. In these experiments, we measure their energy consumption and identify their optimal operational points. Our study proves that the curve that defines the minimal power consumption of the CPU, as a function of the load in Active Cycles Per Second (ACPS), is neither concave nor purely convex; moreover, it clearly has a superlinear dependence on the load. We also validate the accuracy of the model derived from our characterization by running different Hadoop applications in diverse scenarios, obtaining an error below 4.1% on average.
The last topic we study is the Virtual Machine Assignment problem (VMA), i.e., optimizing how virtual machines (VMs) are assigned to physical machines (PMs) in data centers. Our optimization target is to minimize the power consumed by all the PMs when considering that power consumption depends superlinearly on the load. We study four different VMA problems, depending on whether the number of PMs and their capacity are bounded or not. We study their complexity and perform an offline and online analysis of these problems. The online analysis is complemented with simulations that show that the online algorithms we propose consume substantially less power than other state-of-the-art assignment algorithms.
Programa Oficial de Doctorado en Ingeniería Telemática. Presidente: Joerg Widmer; Secretario: José Manuel Moya Fernández; Vocal: Shmuel Zak.
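The VMA tradeoff under superlinear power can be sketched directly: an idle cost per powered-on machine favors consolidation, while a convex load-dependent term favors spreading. A brute-force sketch for tiny instances, with a hypothetical power model (all constants illustrative, not the thesis's measured curve):

```python
from itertools import product

def pm_power(load, idle=10.0, coeff=1.0, alpha=1.5):
    """Hypothetical superlinear server power model: an idle cost when
    the machine is on, plus coeff * load^alpha."""
    return 0.0 if load == 0 else idle + coeff * load ** alpha

def min_power_assignment(vm_loads, n_pms):
    """Brute-force VMA: try every mapping of VMs to PMs and return the
    assignment minimizing total power (tiny instances only)."""
    best, best_assign = float("inf"), None
    for assign in product(range(n_pms), repeat=len(vm_loads)):
        loads = [0.0] * n_pms
        for vm, pm in zip(vm_loads, assign):
            loads[pm] += vm
        power = sum(pm_power(l) for l in loads)
        if power < best:
            best, best_assign = power, assign
    return best_assign, best
```

With two VMs of load 4 and these constants, consolidating onto one PM (one idle cost, cost 10 + 8^1.5 ≈ 32.6) beats splitting (two idle costs, cost 36); with a larger alpha or smaller idle cost the balance tips the other way, which is exactly what the offline/online analysis weighs.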