Automated sequence and motion planning for robotic spatial extrusion of 3D trusses
While robotic spatial extrusion has demonstrated a new and efficient means to
fabricate 3D truss structures at architectural scale, a major challenge remains
in automatically planning the extrusion sequence and robot motion for trusses
with unconstrained topologies. This paper presents the first attempt in the
field to rigorously formulate the extrusion sequence and motion planning (SAMP)
problem, using a constraint satisfaction problem (CSP) encoding. Furthermore, this research proposes a new
hierarchical planning framework to solve the extrusion SAMP problems that
usually have a long planning horizon and 3D configuration complexity. By
decoupling sequence and motion planning, the planning framework is able to
efficiently solve the extrusion sequence, end-effector poses, joint
configurations, and transition trajectories for spatial trusses with
nonstandard topologies. This paper also presents the first detailed computation
data revealing the runtime bottleneck in solving SAMP problems, which provides
insight and a comparison baseline for future algorithmic development. Together
with the algorithmic results, this paper also presents an open-source and
modularized software implementation called Choreo that is machine-agnostic. To
demonstrate the power of this algorithmic framework, three case studies,
including real fabrication and simulation results, are presented.

Comment: 24 pages, 16 figures
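The decoupling idea in the abstract can be made concrete with a toy sketch. The code below is not Choreo's actual implementation; the names, the single connectivity constraint, and the greedy strategy are all simplifying assumptions. It only illustrates the first stage of the hierarchy: choosing an extrusion order in which every element attaches to the ground or to already-printed material, with motion planning deferred to a later, per-element stage.

```python
# Toy sketch of the sequence stage in a decoupled SAMP pipeline.
# Assumption: an element (an edge between two nodes) is printable once at
# least one endpoint touches the ground or an already-printed element.

def plan_sequence(elements, grounded):
    """Greedily order truss elements subject to a connectivity constraint.

    elements: list of (node_a, node_b) tuples; grounded: set of ground nodes.
    Returns a feasible extrusion order, or None if the greedy search gets
    stuck (a real planner would backtrack or re-encode as a CSP here).
    """
    printed_nodes = set(grounded)
    sequence, remaining = [], list(elements)
    while remaining:
        for e in remaining:
            a, b = e
            if a in printed_nodes or b in printed_nodes:
                sequence.append(e)
                printed_nodes.update(e)   # both endpoints are now printed
                remaining.remove(e)
                break
        else:
            return None  # no feasible extension from this partial sequence
    return sequence

# A tiny truss: nodes 0 and 1 on the ground, a bar (2,3) raised above them.
elements = [(2, 3), (0, 2), (1, 3), (2, 1)]
print(plan_sequence(elements, grounded={0, 1}))
```

Note how the floating bar (2, 3) is deferred until a supporting element reaches node 2; in the full problem each accepted element would additionally be checked for collision-free end-effector poses and transition trajectories.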
Faster Algorithms for Weighted Recursive State Machines
Pushdown systems (PDSs) and recursive state machines (RSMs), which are
linearly equivalent, are standard models for interprocedural analysis. Yet RSMs
are more convenient as they (a) explicitly model function calls and returns,
and (b) specify many natural parameters for algorithmic analysis, e.g., the
number of entries and exits. We consider a general framework where RSM
transitions are labeled from a semiring and path properties are algebraic with
semiring operations, which can model, e.g., interprocedural reachability and
dataflow analysis problems.
Our main contributions are new algorithms for several fundamental problems.
As compared to a direct translation of RSMs to PDSs and the best-known existing
bounds of PDSs, our analysis algorithm improves the complexity for
finite-height semirings (that subsumes reachability and standard dataflow
properties). We further consider the problem of extracting distance values from
the representation structures computed by our algorithm, and give efficient
algorithms that distinguish the complexity of a one-time preprocessing from the
complexity of each individual query. Another advantage of our algorithm is that
our improvements carry over to the concurrent setting, where we improve the
best-known complexity for the context-bounded analysis of concurrent RSMs.
Finally, we provide a prototype implementation that gives a significant
speed-up on several benchmarks from the SLAM/SDV project.
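The algebraic-path idea the framework builds on can be shown on a flat graph (real RSMs add call/return structure on top of this, which this sketch omits entirely): edge labels are combined with a semiring's "times" along a path and with "plus" across paths, so one fixpoint routine covers both shortest distances and reachability.

```python
# Semiring path values by naive fixpoint iteration. This is only the
# flat-graph special case, not the paper's RSM algorithm; n relaxation
# rounds suffice for the idempotent, finite-height semirings used below.

def semiring_distances(edges, source, n, plus, times, zero, one):
    d = [zero] * n
    d[source] = one
    for _ in range(n):
        for u, v, w in edges:
            d[v] = plus(d[v], times(d[u], w))
    return d

edges = [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 10.0)]

# Tropical semiring (min, +): shortest distances from vertex 0.
print(semiring_distances(edges, 0, 3, min, lambda a, b: a + b,
                         float("inf"), 0.0))          # -> [0.0, 2.0, 5.0]

# Boolean semiring (or, and): plain reachability from vertex 0.
bool_edges = [(u, v, True) for u, v, _ in edges]
print(semiring_distances(bool_edges, 0, 3, lambda a, b: a or b,
                         lambda a, b: a and b, False, True))  # -> [True, True, True]
```

Swapping the two lambdas is all it takes to move between analyses, which is exactly why stating the algorithms once over an abstract semiring pays off.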
The Fine-Grained Complexity of Multi-Dimensional Ordering Properties
We define a class of problems whose input is an n-sized set of d-dimensional vectors, and where the problem is first-order definable using comparisons between coordinates. This class captures a wide variety of tasks, such as complex types of orthogonal range search, model-checking first-order properties on geometric intersection graphs, and elementary questions on multidimensional data like verifying Pareto optimality of a choice of data points.
Focusing on constant dimension d, we show that any k-quantifier, d-dimensional such problem is solvable in O(n^{k-1} log^{d-1} n) time. Furthermore, this algorithm is conditionally tight up to subpolynomial factors: we show that assuming the 3-uniform hyperclique hypothesis, there is a k-quantifier, (3k-3)-dimensional problem in this class that requires time Ω(n^{k-1-o(1)}).
Towards identifying a single representative problem for this class, we study the existence of complete problems for the 3-quantifier setting (since 2-quantifier problems can already be solved in near-linear time O(n log^{d-1} n), and k-quantifier problems with k > 3 reduce to the 3-quantifier case). We define a problem Vector Concatenated Non-Domination VCND_d (Given three sets of vectors X, Y and Z of dimension d, d and 2d, respectively, is there an x ∈ X and a y ∈ Y so that their concatenation x∘y is not dominated by any z ∈ Z, where vector u is dominated by vector v if u_i ≤ v_i for each coordinate 1 ≤ i ≤ d), and determine it as the "unique" candidate to be complete for this class (under fine-grained assumptions).
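To pin down the VCND_d definition, here is a naive brute-force checker. It deliberately ignores efficiency (it is cubic in the set sizes, and the whole point of the paper is the problem's fine-grained complexity, not this algorithm); it only makes the quantifier structure "exists x, exists y, for all z" executable.

```python
# Brute-force decision procedure for VCND_d, directly transcribing the
# definition above; purely illustrative, not an efficient algorithm.

def dominated(u, v):
    """u is dominated by v iff u_i <= v_i in every coordinate."""
    return all(ui <= vi for ui, vi in zip(u, v))

def vcnd(X, Y, Z):
    """Is there x in X and y in Y whose concatenation x∘y (here: tuple
    concatenation x + y) is not dominated by any z in Z?"""
    return any(not any(dominated(x + y, z) for z in Z)
               for x in X for y in Y)

# d = 2: the only concatenation (1,5,2,2) is dominated by (9,9,9,9).
X, Y = [(1, 5)], [(2, 2)]
Z = [(9, 9, 9, 9), (0, 9, 9, 9)]
print(vcnd(X, Y, Z))  # -> False
```

Flipping one coordinate, e.g. X = [(10, 0)] against Z = [(5, 5, 5, 5)], makes the instance a yes-instance, since 10 > 5 breaks domination in the first coordinate.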
Algorithmic linear dimension reduction in the l_1 norm for sparse vectors
This paper develops a new method for recovering m-sparse signals that is
simultaneously uniform and quick. We present a reconstruction algorithm whose
run time, O(m log^2(m) log^2(d)), is sublinear in the length d of the signal.
The reconstruction error is within a logarithmic factor (in m) of the optimal
m-term approximation error in l_1. In particular, the algorithm recovers
m-sparse signals perfectly and noisy signals are recovered with polylogarithmic
distortion. Our algorithm makes O(m log^2 (d)) measurements, which is within a
logarithmic factor of optimal. We also present a small-space implementation of
the algorithm. These sketching techniques and the corresponding reconstruction
algorithms provide an algorithmic dimension reduction in the l_1 norm. In
particular, vectors of support m in dimension d can be linearly embedded into
O(m log^2 d) dimensions with polylogarithmic distortion. We can reconstruct a
vector from its low-dimensional sketch in time O(m log^2(m) log^2(d)).
Furthermore, this reconstruction is stable and robust under small
perturbations
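A toy experiment makes the information-theoretic side of this concrete: a handful of random linear measurements already determine a sparse vector. The recovery below is deliberately naive, exhaustive search over supports plus least squares, and has nothing to do with the paper's sublinear-time algorithm; all dimensions and the measurement matrix are arbitrary choices for illustration.

```python
import itertools
import numpy as np

# Toy check that a few random measurements determine a 1-sparse signal
# in dimension d = 16. Recovery here is brute force over supports; the
# paper's contribution is doing this uniformly and in sublinear time.

rng = np.random.default_rng(0)
d, m, n_meas = 16, 1, 4
Phi = rng.standard_normal((n_meas, d))   # random measurement matrix
x = np.zeros(d)
x[5] = 3.0                               # the unknown m-sparse signal
y = Phi @ x                              # the low-dimensional sketch

best = None
for support in itertools.combinations(range(d), m):
    sub = Phi[:, support]                # columns on this candidate support
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    residual = np.linalg.norm(sub @ coef - y)
    if best is None or residual < best[0]:
        best = (residual, support, coef)

print(best[1], best[2])                  # recovered support and value
```

With 4 Gaussian measurements the true support (index 5, value 3.0) fits the sketch exactly, while any wrong column leaves a nonzero residual, so the search recovers the signal.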
Hamilton decompositions of regular expanders: applications
In a recent paper, we showed that every sufficiently large regular digraph G
on n vertices whose degree is linear in n and which is a robust outexpander has
a decomposition into edge-disjoint Hamilton cycles. The main consequence of
this theorem is that every regular tournament on n vertices can be decomposed
into (n-1)/2 edge-disjoint Hamilton cycles, whenever n is sufficiently large.
This verified a conjecture of Kelly from 1968. In this paper, we derive a
number of further consequences of our result on robust outexpanders; the main
ones are the following: (i) an undirected analogue of our result on robust
outexpanders; (ii) best possible bounds on the size of an optimal packing of
edge-disjoint Hamilton cycles in a graph of minimum degree d, for a large range
of values of d; (iii) a similar result for digraphs of given minimum
semidegree; (iv) an approximate version of a conjecture of Nash-Williams on
Hamilton decompositions of dense regular graphs; (v) the observation that dense
quasi-random graphs are robust outexpanders; (vi) a verification of the 'very
dense' case of a conjecture of Frieze and Krivelevich on packing edge-disjoint
Hamilton cycles in random graphs; (vii) a proof of a conjecture of Erdős on the
size of an optimal packing of edge-disjoint Hamilton cycles in a random
tournament.

Comment: final version, to appear in J. Combinatorial Theory
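The statement of Kelly's conjecture is easy to verify by hand on the smallest interesting case, and the check below does exactly that (the theorem itself, of course, concerns large n): the rotational tournament on 5 vertices, where i beats i+1 and i+2 (mod 5), decomposes into (n-1)/2 = 2 edge-disjoint Hamilton cycles.

```python
# Worked n = 5 instance of Kelly's conjecture: the rotational tournament
# (i beats i+1 and i+2 mod 5) splits into two edge-disjoint Hamilton cycles.

n = 5
edges = {(i, (i + k) % n) for i in range(n) for k in (1, 2)}  # all 10 arcs

def cycle_edges(order):
    """Directed edge set of the Hamilton cycle visiting `order` in turn."""
    return {(order[i], order[(i + 1) % len(order)]) for i in range(len(order))}

c1 = cycle_edges([0, 1, 2, 3, 4])   # uses every i -> i+1 arc
c2 = cycle_edges([0, 2, 4, 1, 3])   # uses every i -> i+2 arc

print(c1.isdisjoint(c2), c1 | c2 == edges)  # -> True True
```

Each cycle visits all 5 vertices, the two are edge-disjoint, and together they cover all 10 arcs of the tournament, which is precisely a Hamilton decomposition.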
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated structured sparsity models, which describe the
interdependency between the nonzero components of a signal, allowing us to
increase the interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.

Comment: 30 pages, 18 figures
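The simplest discrete structured-sparsity operation is a group analogue of hard thresholding, sketched below for non-overlapping groups. This is a generic illustration of the group sparse model, not code from the chapter; the function name and the energy criterion (group l2 norm) are standard but chosen here for illustration.

```python
import numpy as np

# Discrete projection onto the group k-sparse model (non-overlapping
# groups): keep the k groups with the largest l2 energy, zero the rest.

def group_hard_threshold(x, groups, k):
    energies = [(np.linalg.norm(x[g]), g) for g in groups]
    energies.sort(key=lambda t: t[0], reverse=True)  # strongest groups first
    out = np.zeros_like(x)
    for _, g in energies[:k]:
        out[g] = x[g]          # copy the surviving groups unchanged
    return out

x = np.array([0.1, 0.2, 5.0, 4.0, 0.0, 0.3])
groups = [[0, 1], [2, 3], [4, 5]]
print(group_hard_threshold(x, groups, 1))  # keeps only the [2, 3] group
```

Plain m-sparse hard thresholding is the special case where every group is a single coordinate; convex relaxations replace this combinatorial selection with group-norm penalties such as the l1/l2 mixed norm.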
Sparse multivariate polynomial interpolation in the basis of Schubert polynomials
Schubert polynomials were discovered by A. Lascoux and M. Schützenberger in
the study of cohomology rings of flag manifolds in the 1980s. These polynomials
generalize Schur polynomials, and form a linear basis of multivariate
polynomials. In 2003, Lenart and Sottile introduced skew Schubert polynomials,
which generalize skew Schur polynomials, and expand in the Schubert basis with
the generalized Littlewood-Richardson coefficients.
In this paper we initiate the study of these two families of polynomials from
the perspective of computational complexity theory. We first observe that skew
Schubert polynomials, and therefore Schubert polynomials, are in #P
(when evaluating on non-negative integral inputs) and in VNP.
Our main result is a deterministic algorithm that computes the expansion of a
given polynomial in the basis of Schubert polynomials, assuming an oracle
computing Schubert polynomials. This algorithm runs in time polynomial in the
number of variables, the degree, and the bit size of the expansion. This
generalizes, and derandomizes, the sparse interpolation algorithm of symmetric
polynomials in the Schur basis by Barvinok and Fomin (Advances in Applied
Mathematics, 18(3):271--285). In fact, our interpolation algorithm is general
enough to accommodate any linear basis satisfying certain natural properties.
Applications of the above results include a new algorithm that computes the
generalized Littlewood-Richardson coefficients.

Comment: 20 pages; some typos corrected
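The "any linear basis satisfying certain natural properties" remark suggests the underlying mechanism: if the basis is triangular with respect to some term order, one can repeatedly read off the leading coefficient, subtract that multiple of the basis element, and recurse. The sketch below runs this peeling loop on a made-up one-variable triangular basis b_e = x^e + x^{e-1} (a stand-in assumption, not Schubert polynomials), with polynomials stored as degree-to-coefficient dicts.

```python
# Generic "peel off the leading term" interpolation in a triangular basis.
# The basis here is hypothetical and one-dimensional, chosen only so the
# triangularity property (b_e = x^e + lower-order terms) is obvious.

def basis(e):
    """Toy triangular basis element b_e = x^e + x^(e-1) (b_0 = 1)."""
    return {e: 1, e - 1: 1} if e > 0 else {0: 1}

def sub_scaled(p, q, c):
    """Return p - c*q, dropping zero coefficients."""
    r = dict(p)
    for e, a in q.items():
        r[e] = r.get(e, 0) - c * a
        if r[e] == 0:
            del r[e]
    return r

def expand(p):
    """Coefficients of p in the basis {b_e}, found by peeling leading terms."""
    p, coeffs = dict(p), {}
    while p:
        e = max(p)                       # leading degree; triangularity makes
        coeffs[e] = p[e]                 # its coefficient the b_e coefficient
        p = sub_scaled(p, basis(e), coeffs[e])
    return coeffs

# p = 2x^2 + 2x + 3 = 2*b_2 + 3*b_0
print(expand({2: 2, 1: 2, 0: 3}))  # -> {2: 2, 0: 3}
```

The run time is governed by the number of nonzero coefficients in the expansion rather than the ambient dimension, which is the "sparse interpolation" flavor of the abstract; the real algorithm replaces the toy basis oracle with one computing Schubert polynomials.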
Conditionally Optimal Algorithms for Generalized Büchi Games
Games on graphs provide the appropriate framework to study several central
problems in computer science, such as the verification and synthesis of
reactive systems. One of the most basic objectives for games on graphs is the
liveness (or Büchi) objective that, given a target set of vertices, requires
that some vertex in the target set is visited infinitely often. We study
generalized Büchi objectives (i.e., conjunctions of liveness objectives), and
implications between two generalized B\"uchi objectives (known as GR(1)
objectives), that arise in numerous applications in computer-aided
verification. We present improved algorithms and conditional super-linear lower
bounds based on widely believed assumptions about the complexity of (A1)
combinatorial Boolean matrix multiplication and (A2) CNF-SAT. We consider graph
games with n vertices, m edges, and generalized Büchi objectives with k
conjunctions. First, we present an algorithm whose running time improves the previously known worst-case bounds. Our algorithm is optimal for dense graphs under (A1).
Second, we show that the basic algorithm for the problem is optimal for sparse
graphs when the target sets have constant size under (A2). Finally, we consider
GR(1) objectives, with conjunctions both in the antecedent and in the
consequent, and present an algorithm improving the previously known time bound.
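To make the generalized Büchi objective concrete, the code below decides its one-player special case (a plain graph with no adversary, so this is not the two-player game algorithm studied in the paper): some infinite run from the start vertex visits every target set infinitely often iff a reachable strongly connected subgraph with at least one edge meets all target sets.

```python
# One-player generalized Büchi check via SCC decomposition: a witnessing
# lasso exists iff some reachable SCC containing a cycle intersects every
# target set. Illustrative only; the paper treats two-player games.

def generalized_buchi_nonempty(n, edges, start, targets):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)

    # Tarjan's SCC algorithm (recursive; fine for small graphs).
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []
    def dfs(v):
        index[v] = low[v] = len(index)
        stack.append(v)
        on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    # Restrict attention to vertices reachable from start.
    seen, todo = {start}, [start]
    while todo:
        for w in adj[todo.pop()]:
            if w not in seen:
                seen.add(w)
                todo.append(w)
    for v in seen:
        if v not in index:
            dfs(v)

    for scc in sccs:
        has_cycle = len(scc) > 1 or any(v in adj[v] for v in scc)
        if has_cycle and all(scc & t for t in targets):
            return True
    return False

# The cycle 1 -> 2 -> 1, reachable from 0, meets both target sets {1} and {2}.
print(generalized_buchi_nonempty(4, [(0, 1), (1, 2), (2, 1), (2, 3)],
                                 0, [{1}, {2}]))  # -> True
```

Replacing the target list with [{1}, {3}] makes the answer False: vertex 3 lies in a sink SCC with no cycle, so no single lasso can visit both sets infinitely often. The game version, where an adversary resolves some choices, is exactly where the paper's conditional lower bounds bite.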