Graph Complexity and Slice Functions
Abstract. A graph-theoretic approach to study the complexity of Boolean functions was initiated by Pudlák, Rödl, and Savický [PRS] by defining models of computation on graphs. These models generalize well-known models of Boolean complexity such as circuits, branching programs, and two-party communication complexity.
Reconciling Synthesis and Decomposition: A Composite Approach to Capability Identification
Stakeholders' expectations and technology constantly evolve during the
lengthy development cycles of a large-scale computer-based system.
Consequently, the traditional approach of baselining requirements results in an
unsatisfactory system because it is ill-equipped to accommodate such change. In
contrast, systems constructed on the basis of Capabilities are more
change-tolerant; Capabilities are functional abstractions that are neither as
amorphous as user needs nor as rigid as system requirements. Rather,
Capabilities are aggregates that capture desired functionality from the users'
needs, and are designed to exhibit desirable software engineering
characteristics of high cohesion, low coupling and optimum abstraction levels.
To formulate these functional abstractions we develop and investigate two
algorithms for Capability identification: Synthesis and Decomposition. The
synthesis algorithm aggregates detailed rudimentary elements of the system to
form Capabilities. In contrast, the decomposition algorithm determines
Capabilities by recursively partitioning the overall mission of the system into
more detailed entities. Empirical analysis of a small computer-based library
system reveals that neither approach is sufficient by itself. However, a
composite algorithm based on a complementary approach reconciling the two polar
perspectives results in a more feasible set of Capabilities. In particular, the
composite algorithm formulates Capabilities using the cohesion and coupling
measures as defined by the decomposition algorithm and the abstraction level as
determined by the synthesis algorithm.
Comment: This paper appears in the 14th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS); 10 pages, 9 figures.
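The cohesion and coupling measures that drive the composite algorithm can be made concrete with a toy sketch. This is not the paper's algorithm; the functional elements and their dependency graph below are invented for illustration, loosely echoing the library-system case study.

```python
# Toy sketch: score a candidate grouping of rudimentary functional elements
# into Capabilities by cohesion (internal connectedness) and coupling
# (boundary-crossing dependencies). All names and edges are hypothetical.

# Directed dependencies between rudimentary functional elements.
deps = {
    ("search_catalog", "display_results"),
    ("check_out", "update_inventory"),
    ("check_in", "update_inventory"),
    ("search_catalog", "check_out"),
}

def internal_edges(group):
    return sum(1 for a, b in deps if a in group and b in group)

def external_edges(group):
    return sum(1 for a, b in deps if (a in group) != (b in group))

def cohesion(group):
    # Fraction of possible ordered internal pairs that are actually linked.
    pairs = len(group) * (len(group) - 1)
    return internal_edges(group) / pairs if pairs else 0.0

def coupling(group):
    # Edges crossing the group boundary, normalized by group size.
    return external_edges(group) / len(group)

capabilities = [
    {"search_catalog", "display_results"},
    {"check_out", "check_in", "update_inventory"},
]
for cap in capabilities:
    print(sorted(cap), round(cohesion(cap), 2), round(coupling(cap), 2))
```

A composite identification step would then prefer groupings with high cohesion and low coupling at a given abstraction level.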
One-way permutations, computational asymmetry and distortion
Computational asymmetry, i.e., the discrepancy between the complexity of
transformations and the complexity of their inverses, is at the core of one-way
transformations. We introduce a computational asymmetry function that measures
the amount of one-wayness of permutations. We also introduce the word-length
asymmetry function for groups, which is an algebraic analogue of computational
asymmetry. We relate Boolean circuits to words in a Thompson monoid, over a
fixed generating set, in such a way that circuit size is equal to word-length.
Moreover, Boolean circuits have a representation in terms of elements of a
Thompson group, in such a way that circuit size is polynomially equivalent to
word-length. We show that circuits built with gates that are not constrained to
have fixed-length inputs and outputs are at most quadratically more compact
than circuits built from traditional gates (with fixed-length inputs and
outputs). Finally, we show that the computational asymmetry function is closely
related to certain distortion functions: The computational asymmetry function
is polynomially equivalent to the distortion of the path length in Schreier
graphs of certain Thompson groups, compared to the path length in Cayley graphs
of certain Thompson monoids. We also show that the results of Razborov and
others on monotone circuit complexity lead to exponential lower bounds on
certain distortions.
Comment: 33 pages.
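The asymmetry between a transformation and its inverse can be illustrated with a classical toy example (not the paper's group-theoretic construction): modular exponentiation by a primitive root permutes the nonzero residues and is cheap to compute, while the only obvious way to invert it is a brute-force discrete-logarithm search. The tiny parameters below are for illustration only.

```python
# Toy computational asymmetry: x -> g^x mod p is fast forward, slow to invert.
# 2 is a primitive root mod 101, so forward() permutes {1, ..., 100}.

p, g = 101, 2

def forward(x):
    # Square-and-multiply: O(log x) modular multiplications.
    return pow(g, x, p)

def inverse(y):
    # Brute-force discrete log: O(p) modular multiplications.
    acc = 1
    for x in range(1, p):
        acc = (acc * g) % p
        if acc == y:
            return x
    raise ValueError("not in the image")

# Sanity check: inverse really undoes forward on the whole domain.
assert all(inverse(forward(x)) == x for x in range(1, p))
```

A computational asymmetry function, in this spirit, compares the cost of the cheapest circuits for a permutation against the cheapest circuits for its inverse.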
Attribute Value Reordering For Efficient Hybrid OLAP
The normalization of a data cube is the ordering of the attribute values. For
large multidimensional arrays where dense and sparse chunks are stored
differently, proper normalization can lead to improved storage efficiency. We
show that it is NP-hard to compute an optimal normalization even for 1x3
chunks, although we find an exact algorithm for 1x2 chunks. When dimensions are
nearly statistically independent, we show that dimension-wise attribute
frequency sorting is an optimal normalization and takes time O(d n log(n)) for
data cubes of size n^d. When dimensions are not independent, we propose and
evaluate several heuristics. The hybrid OLAP (HOLAP) storage mechanism is
already 19%-30% more efficient than ROLAP, but normalization can improve it
further by 9%-13%, for a total gain of 29%-44% over ROLAP.
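The dimension-wise attribute frequency sorting heuristic can be sketched directly: for each dimension, reorder attribute values by descending frequency among nonzero cells, so dense cells cluster toward one corner of the array. The sparse-cube data below is invented for illustration.

```python
# Minimal sketch of dimension-wise attribute-frequency sorting for cube
# normalization. A sparse cube is a list of nonzero-cell coordinates.

from collections import Counter

cells = [(0, 2), (0, 1), (3, 2), (1, 2), (0, 0)]
dims = 2

def frequency_sort(cells, dims):
    remaps = []
    for d in range(dims):
        # Count how often each attribute value appears among nonzero cells,
        # then assign new indices in descending-frequency order
        # (O(n log n) per dimension, O(d n log n) overall).
        freq = Counter(c[d] for c in cells)
        order = sorted(freq, key=lambda v: -freq[v])
        remaps.append({v: i for i, v in enumerate(order)})
    return [tuple(remaps[d][c[d]] for d in range(dims)) for c in cells]

print(sorted(frequency_sort(cells, dims)))
```

After remapping, the frequent values of each dimension occupy the low indices, which concentrates the nonzero cells into a dense chunk.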
The model checking problem for intuitionistic propositional logic with one variable is AC1-complete
We show that the model checking problem for intuitionistic propositional
logic with one variable is complete for logspace-uniform AC1. As a basic tool we
use the connection between intuitionistic logic and Heyting algebra, and
investigate its complexity-theoretic aspects. For superintuitionistic logics
with one variable, we obtain NC1-completeness for the model checking problem.
Comment: A preliminary version of this work was presented at STACS 2011. 19 pages, 3 figures.
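The problem being classified here is, semantically, the textbook Kripke forcing relation for intuitionistic logic. The sketch below is not the paper's algebraic method, just the standard semantics over a small invented frame with one variable p; formulas are nested tuples.

```python
# Kripke semantics of intuitionistic propositional logic with one variable.
# le[w] is the set of worlds >= w in the partial order; the valuation of p
# must be upward closed (persistence). Frame and formula are invented.

le = {0: {0, 1, 2}, 1: {1}, 2: {2}}
val_p = {1}  # worlds forcing p (upward closed here, since le[1] == {1})

def forces(w, phi):
    if phi == "p":
        return w in val_p
    if phi == "bot":
        return False
    op, a, b = phi
    if op == "and":
        return forces(w, a) and forces(w, b)
    if op == "or":
        return forces(w, a) or forces(w, b)
    if op == "imp":
        # Intuitionistic implication quantifies over all later worlds.
        return all((not forces(v, a)) or forces(v, b) for v in le[w])
    raise ValueError(op)

# Excluded middle p \/ ~p fails at the root of this frame.
neg_p = ("imp", "p", "bot")
print(forces(0, ("or", "p", neg_p)))
```

The naive recursion above can take time exponential in formula depth; the point of the paper is pinning down the parallel complexity (AC1) of this problem in the one-variable case.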
The model checking fingerprints of CTL operators
The aim of this study is to understand the inherent expressive power of CTL
operators. We investigate the complexity of model checking for all CTL
fragments with one CTL operator and arbitrary Boolean operators. This gives us
a fingerprint of each CTL operator. The comparison between the fingerprints
yields a hierarchy of the operators that mirrors their strength with respect to
model checking.
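Model checking a single CTL operator over a Kripke structure is usually done by set operations and fixpoints; the sketch below (not from the paper) shows the standard labelling treatment of EX and EF over an invented transition system.

```python
# Labelling-style model checking for the CTL operators EX and EF.
# succ maps each state to its successor set; states and edges are invented.

succ = {0: {1}, 1: {2}, 2: {2}, 3: {0}}
states = set(succ)

def ex(phi_states):
    # EX phi: states with at least one successor satisfying phi.
    return {s for s in states if succ[s] & phi_states}

def ef(phi_states):
    # EF phi: least fixpoint of Z = phi ∪ EX Z.
    z = set(phi_states)
    while True:
        nxt = z | ex(z)
        if nxt == z:
            return z
        z = nxt

goal = {2}
print(sorted(ex(goal)), sorted(ef(goal)))
```

Comparing the complexity of such procedures, one operator at a time with arbitrary Boolean connectives, is what yields the "fingerprint" of each operator.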
Identifiability and transportability in dynamic causal networks
In this paper we propose a causal analog to the purely observational Dynamic Bayesian Networks, which we call Dynamic Causal Networks.
We provide a sound and complete algorithm for identification of Dynamic Causal Networks, namely, for computing the effect of an intervention or experiment, based on passive observations only, whenever possible. We note the existence of two types of confounder variables that affect the identification
procedures in substantially different ways, a distinction with no analog in either Dynamic Bayesian Networks or standard causal graphs. We further propose a procedure
for the transportability of causal effects in Dynamic Causal Network settings, where the result of causal experiments in a source domain may be used for the identification of causal effects in a target domain.
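One standard ingredient of identification in causal graphs is graph surgery for an intervention do(X): delete every edge pointing into X. The sketch below applies this to an invented unrolled dynamic network (variables indexed by time step); it is an illustration of the surgery step, not the paper's identification algorithm.

```python
# Graph surgery for do(X) on an unrolled dynamic causal network.
# Edges are (parent, child) pairs; all variable names are invented.

edges = {
    ("X_0", "X_1"), ("X_1", "X_2"),   # dynamics of X across time steps
    ("U", "X_1"), ("U", "Y_2"),       # confounder U acting across time
    ("X_1", "Y_2"),
}

def do(edges, target):
    # Mutilated graph for do(target): remove every edge into target,
    # cutting target off from its natural causes (including confounders).
    return {(p, c) for (p, c) in edges if c != target}

print(sorted(do(edges, "X_1")))
```

In the mutilated graph, X_1 retains its outgoing influence on X_2 and Y_2 but is no longer driven by X_0 or by the confounder U.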
Time and Parallelizability Results for Parity Games with Bounded Tree and DAG Width
Parity games are a much-researched class of games in NP ∩ coNP that
are not known to be in P. Consequently, researchers have considered specialised
algorithms for the case where certain graph parameters are small. In this
paper, we study parity games on graphs with bounded treewidth, and graphs with
bounded DAG width. We show that parity games with bounded DAG width can be
solved in O(n^(k+3) k^(k + 2) (d + 1)^(3k + 2)) time, where n, k, and d are the
size, treewidth, and number of priorities in the parity game. This is an
improvement over the previous best algorithm, given by Berwanger et al., which
runs in n^O(k^2) time. We also show that, if a tree decomposition is provided,
then parity games with bounded treewidth can be solved in O(n k^(k + 5) (d +
1)^(3k + 5)) time. This improves over the previous best algorithm, given by
Obdrzalek, which runs in O(n d^(2(k+1)^2)) time. Our techniques can also be
adapted to show that the problem of solving parity games with bounded treewidth
lies in the complexity class NC^2, which is the class of problems that can be
efficiently parallelized. This is in stark contrast to the general parity game
problem, which is known to be P-hard, and thus unlikely to be contained in NC.
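Most parity-game solvers, including the width-parameterized ones discussed here, are built on attractor computation: the set of vertices from which a player can force the play into a target set. The game graph below is invented, and this is a generic building block, not the paper's algorithm.

```python
# Attractor computation on a small two-player game graph.
# succ maps each vertex to its successors; owner says which player moves there.

succ = {0: {1, 2}, 1: {3}, 2: {3}, 3: {3}}
owner = {0: 0, 1: 1, 2: 0, 3: 0}

def attractor(target, player):
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v, ss in succ.items():
            if v in attr:
                continue
            if owner[v] == player:
                ok = bool(ss & attr)   # player picks one successor in attr
            else:
                ok = ss <= attr        # opponent cannot escape attr
            if ok:
                attr.add(v)
                changed = True
    return attr

print(sorted(attractor({3}, 0)))
```

The naive loop above is O(n^2) in the number of vertices; the parameterized algorithms in the paper get their running times by exploiting tree or DAG decompositions on top of such primitives.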