Minimum d-dimensional arrangement with fixed points
In the Minimum $d$-Dimensional Arrangement Problem (d-dimAP) we are given a
graph with edge weights, and the goal is to find a 1-1 map of the vertices into
$\mathbb{Z}^d$ (for some fixed dimension $d$) minimizing the total
weighted stretch of the edges. This problem arises in VLSI placement and chip
design.
Motivated by these applications, we consider a generalization of d-dimAP,
where the positions of some $k$ of the vertices (pins) are fixed and specified
as part of the input. We are asked to extend this partial map to a map of all the
part of the input. We are asked to extend this partial map to a map of all the
vertices, again minimizing the weighted stretch of edges. This generalization,
which we refer to as d-dimAP+, arises naturally in these application domains
(since it can capture blocked-off parts of the board, or the requirement of
power-carrying pins to be in certain locations, etc.). Perhaps surprisingly,
very little is known about this problem from an approximation viewpoint.
For dimension $d=2$, we obtain an approximation algorithm for 2-dimAP+, based
on a strengthening of the spreading-metric LP for 2-dimAP, and we prove a
lower bound on the integrality gap of this LP. We also show that it is NP-hard
to approximate 2-dimAP+ within a factor better than $\Omega(k^{1/4-\epsilon})$.
We also consider a (conceptually harder, but practically even more interesting)
variant of 2-dimAP+, where the target space is a finite grid instead of the
entire integer lattice $\mathbb{Z}^2$. For this problem, we obtain an
approximation algorithm using the same LP relaxation. We complement this upper
bound by showing an integrality gap lower bound, and an
$\Omega(k^{1/2-\epsilon})$-inapproximability result.
Our results naturally extend to the case of arbitrary fixed target dimension $d$.
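To make the objective concrete, here is a minimal Python sketch (ours, not
from the paper) that evaluates the weighted stretch of a placement and
brute-forces tiny 2-dimAP+ instances. The l1 edge stretch and all identifiers
are assumptions chosen for illustration; the paper's norm may differ.

```python
from itertools import permutations

def weighted_stretch(edges, placement):
    """Total weighted stretch of a map placement: vertex -> (x, y) in Z^2.

    edges is an iterable of (u, v, w) triples.  The stretch of an edge is
    taken here to be the l1 distance between the endpoint images (an
    assumption; the paper may use a different norm)."""
    return sum(
        w * (abs(placement[u][0] - placement[v][0])
             + abs(placement[u][1] - placement[v][1]))
        for u, v, w in edges
    )

def brute_force_2dimap_plus(vertices, edges, pins, cells):
    """Extend the partial map `pins` (the fixed pin positions) to all
    vertices by trying every injective assignment of the free vertices to
    the free cells.  Exponential time; for sanity-checking tiny instances."""
    free_vs = [v for v in vertices if v not in pins]
    free_cells = [c for c in cells if c not in pins.values()]
    best_cost, best_map = float("inf"), None
    for assignment in permutations(free_cells, len(free_vs)):
        f = dict(pins)
        f.update(zip(free_vs, assignment))
        cost = weighted_stretch(edges, f)
        if cost < best_cost:
            best_cost, best_map = cost, f
    return best_cost, best_map

# Tiny instance: a weighted triangle with one pinned vertex on a 2x2 grid.
vertices = ["a", "b", "c"]
edges = [("a", "b", 2.0), ("b", "c", 1.0), ("a", "c", 1.0)]
pins = {"a": (0, 0)}
cells = [(x, y) for x in range(2) for y in range(2)]
print(brute_force_2dimap_plus(vertices, edges, pins, cells))
```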
Convex Relaxations for Permutation Problems
Seriation seeks to reconstruct a linear order between variables using
unsorted, pairwise similarity information. It has direct applications in
archeology and shotgun gene sequencing, for example. We write seriation as an
optimization problem by proving the equivalence between the seriation and
combinatorial 2-SUM problems on similarity matrices (2-SUM is a quadratic
minimization problem over permutations). The seriation problem can be solved
exactly by a spectral algorithm in the noiseless case and we derive several
convex relaxations for 2-SUM to improve the robustness of seriation solutions
in noisy settings. These convex relaxations also allow us to impose structural
constraints on the solution, hence solve semi-supervised seriation problems. We
derive new approximation bounds for some of these relaxations and present
numerical experiments on archeological data, Markov chains and DNA assembly
from shotgun gene sequencing data.
Comment: Final journal version, a few typos and references fixed.
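As a concrete illustration of the noiseless case mentioned above, the
following Python sketch (our illustration, not the authors' code) evaluates
the 2-SUM objective and recovers an order by sorting the Fiedler vector of
the similarity matrix's Laplacian; the decaying toy matrix is an assumption.

```python
import numpy as np

def two_sum(A, order):
    """2-SUM objective for similarity matrix A and an ordering `order`
    (order[p] = index of the variable placed at position p):
        sum_{i,j} A[i, j] * (pi(i) - pi(j))^2."""
    n = len(order)
    pos = np.empty(n, dtype=int)
    pos[order] = np.arange(n)          # pos[i] = position pi(i) of variable i
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return float(np.sum(A * (pos[i] - pos[j]) ** 2))

def spectral_seriation(A):
    """Sort variables by the Fiedler vector (eigenvector of the second
    smallest eigenvalue) of the Laplacian of A; this recovers the exact
    order on noiseless similarity matrices."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return np.argsort(vecs[:, 1])

# Noiseless example: similarity decays with distance in the true order.
n = 8
idx = np.arange(n)
A = (n - np.abs(idx[:, None] - idx[None, :])).astype(float)
shuffle = np.random.permutation(n)
recovered = spectral_seriation(A[np.ix_(shuffle, shuffle)])
print(shuffle[recovered])              # the true order, up to reversal
```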
Vertex Sparsifiers: New Results from Old Techniques
Given a capacitated graph $G$ and a set of terminals $K$,
how should we produce a graph $H$ only on the terminals so that every
(multicommodity) flow between the terminals in $G$ could be supported in $H$
with low congestion, and vice versa? (Such a graph $H$ is called a
flow-sparsifier for $G$.) What if we want $H$ to be a "simple" graph? What if
we allow $H$ to be a convex combination of simple graphs?
Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC
2010], we give efficient algorithms for constructing: (a) a flow-sparsifier
that maintains congestion up to a factor of $O(\log k/\log\log k)$, where
$k = |K|$, (b) a convex combination of trees over the terminals that maintains
congestion up to a factor of $O(\log k)$, and (c) for a planar graph $G$, a
convex combination of planar graphs that maintains congestion up to a constant
factor. This requires us to give a new algorithm for the 0-extension problem,
the first one in which the preimages of each terminal are connected in $G$.
Moreover, this result extends to minor-closed families of graphs.
Our improved bounds immediately imply improved approximation guarantees for
several terminal-based cut and ordering problems.
Comment: An extended abstract appears in the 13th International Workshop on
Approximation Algorithms for Combinatorial Optimization Problems (APPROX),
2010. Final version to appear in SIAM J. Computing.
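The 0-extension step can be made concrete: a 0-extension assigns every vertex
to a terminal (each terminal to itself), and contracting the preimages yields
a graph on the terminals alone. Below is a hedged Python sketch (our
illustration, with invented identifiers): it evaluates a 0-extension's cost
against a terminal metric `dist` (assumed to be a dict-of-dicts with zero
diagonal), performs the contraction, and checks the connectivity property the
abstract highlights.

```python
from collections import defaultdict, deque

def zero_extension_cost(edges, f, dist):
    """Cost of a 0-extension f: vertex -> terminal (f(t) = t for each
    terminal t) against a terminal metric `dist`:
        sum over edges (u, v, c) of c * dist[f(u)][f(v)]."""
    return sum(c * dist[f[u]][f[v]] for u, v, c in edges)

def contract_preimages(edges, f):
    """Graph on the terminals induced by contracting each preimage
    f^{-1}(t) to the terminal t, summing parallel capacities."""
    H = defaultdict(float)
    for u, v, c in edges:
        a, b = f[u], f[v]
        if a != b:
            H[frozenset((a, b))] += c
    return dict(H)

def preimages_connected(adj, f):
    """Check that every terminal's preimage induces a connected subgraph
    of G (the structural property the abstract emphasizes)."""
    classes = defaultdict(set)
    for v, t in f.items():
        classes[t].add(v)
    for t, S in classes.items():
        start = next(iter(S))
        seen, queue = {start}, deque([start])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y in S and y not in seen:
                    seen.add(y)
                    queue.append(y)
        if seen != S:
            return False
    return True

# Path graph a - x - b with terminals {a, b}; map x to a.
edges = [("a", "x", 1.0), ("x", "b", 1.0)]
f = {"a": "a", "b": "b", "x": "a"}
dist = {"a": {"a": 0.0, "b": 1.0}, "b": {"a": 1.0, "b": 0.0}}
adj = {"a": ["x"], "x": ["a", "b"], "b": ["x"]}
print(zero_extension_cost(edges, f, dist))   # 1.0
print(contract_preimages(edges, f))          # {frozenset({'a','b'}): 1.0}
print(preimages_connected(adj, f))           # True
```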
Multi Layer Peeling for Linear Arrangement and Hierarchical Clustering
We present a new multi-layer peeling technique to cluster points in a metric space. A well-known non-parametric objective is to embed the metric space into a simpler structured metric space such as a line (i.e., Linear Arrangement) or a binary tree (i.e., Hierarchical Clustering). Points which are close in the metric space should be mapped to close points/leaves in the line/tree; similarly, points which are far in the metric space should be far in the line or on the tree. In particular we consider the Maximum Linear Arrangement problem [Refael Hassin and Shlomi Rubinstein, 2001] and the Maximum Hierarchical Clustering problem [Vincent Cohen-Addad et al., 2018] applied to metrics.
We design approximation schemes (1-ε approximation for any constant ε > 0) for these objectives. In particular this shows that by considering metrics one may significantly improve former approximations (0.5 for Max Linear Arrangement and 0.74 for Max Hierarchical Clustering). Our main technique, which is called multi-layer peeling, consists of recursively peeling off points which are far from the "core" of the metric space. The recursion ends once the core becomes a sufficiently densely weighted metric space (i.e., the average distance is at least a constant times the diameter) or once it becomes negligible with respect to its inner contribution to the objective. Interestingly, the algorithm in the Linear Arrangement case is much more involved than that in the Hierarchical Clustering case, and uses a significantly more delicate peeling.
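The following Python skeleton is an illustrative reading of the peeling
recursion sketched in the abstract, not the paper's algorithm: it repeatedly
splits off the points farthest from the current core until the core is
densely weighted. The `density` and `peel_fraction` knobs are our inventions,
not constants from the paper.

```python
from itertools import combinations

def _avg_dist(points, d):
    pairs = list(combinations(points, 2))
    return sum(d(u, v) for u, v in pairs) / len(pairs)

def _diameter(points, d):
    return max(d(u, v) for u, v in combinations(points, 2))

def multilayer_peel(points, d, density=0.5, peel_fraction=0.25):
    """Skeleton of multi-layer peeling: peel the far tail of the core
    until the core is dense, i.e. its average distance is at least
    `density` times its diameter.  Returns the peeled layers
    (outermost first) followed by the final core."""
    layers, core = [], list(points)
    while len(core) >= 2:
        if _avg_dist(core, d) >= density * _diameter(core, d):
            break                      # core is densely weighted: stop
        # order by total distance to the rest; peel the farthest points
        core = sorted(core, key=lambda u: sum(d(u, v) for v in core))
        k = max(1, int(peel_fraction * len(core)))
        layers.append(core[-k:])
        core = core[:-k]
    layers.append(core)
    return layers

# Example with a 1-D metric: a tight cluster plus two far-away outliers.
pts = [0.0, 0.1, 0.2, 0.3, 50.0, 100.0]
print(multilayer_peel(pts, lambda u, v: abs(u - v)))
# [[100.0], [50.0], [0.0, 0.1, 0.2, 0.3]]
```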
Doctor of Philosophy dissertation
Deep Neural Networks (DNNs) are the state-of-the-art solution in a growing number of tasks including computer vision, speech recognition, and genomics. However, DNNs are computationally expensive as they are carefully trained to extract and abstract features from raw data using multiple layers of neurons with millions of parameters. In this dissertation, we primarily focus on inference, e.g., using a DNN to classify an input image. This is an operation that will be repeatedly performed on billions of devices in the datacenter, in self-driving cars, in drones, etc. We observe that DNNs spend the vast majority of their runtime performing matrix-by-vector multiplications (MVMs). MVMs have two major bottlenecks: fetching the matrix and performing sum-of-product operations. To address these bottlenecks, we use in-situ computing, where the matrix is stored in programmable resistor arrays, called crossbars, and sum-of-product operations are performed using analog computing. In this dissertation, we propose two hardware units, ISAAC and Newton.
In ISAAC, we show that in-situ computing designs can outperform DNN digital accelerators if they leverage pipelining, smart encodings, and can distribute a computation in time and space, within crossbars and across crossbars. In the ISAAC design, roughly half the chip area/power can be attributed to the analog-to-digital conversion (ADC), i.e., it remains the key design challenge in mixed-signal accelerators for deep networks. In spite of the ADC bottleneck, ISAAC is able to outperform the computational efficiency of the state-of-the-art design (DaDianNao) by 8x. In Newton, we take advantage of a number of techniques to address ADC inefficiency. These techniques exploit matrix transformations, heterogeneity, and smart mapping of computation to the analog substrate. We show that Newton can increase the efficiency of in-situ computing by an additional 2x. Finally, we show that in-situ computing, unfortunately, cannot be easily adapted to handle training of deep networks, i.e., it is only suitable for inference of already-trained networks. By improving the efficiency of DNN inference with ISAAC and Newton, we move closer to low-cost deep learning that in turn will have societal impact through self-driving cars, assistive systems for the disabled, and precision medicine.
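To illustrate the in-situ MVM idea described above, here is a hedged Python
toy model, not the ISAAC or Newton datapath: weights become conductances
split across a positive and a negative column, column currents realize the
analog sum-of-products, and a uniform ADC quantizes the result. The encoding
and the ADC model are our simplifications.

```python
import numpy as np

def crossbar_mvm(W, x, adc_bits=8):
    """Toy model of one in-situ MVM: the matrix is held as crossbar
    conductances, the input vector is applied as voltages, and each
    column current is the analog sum-of-products, then quantized."""
    W = np.asarray(W, dtype=float)
    # Real crossbars hold non-negative conductances, so split each weight
    # across a positive and a negative column (an assumed encoding).
    g_pos, g_neg = np.clip(W, 0, None), np.clip(-W, 0, None)
    currents = g_pos @ x - g_neg @ x
    # ADC: quantize the analog currents to 2^adc_bits uniform levels.
    lo, hi = currents.min(), currents.max()
    if hi == lo:
        return currents
    levels = 2 ** adc_bits - 1
    codes = np.round((currents - lo) / (hi - lo) * levels)
    return lo + codes * (hi - lo) / levels

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
print(np.max(np.abs(W @ x - crossbar_mvm(W, x, adc_bits=6))))
# small quantization error; shrinks as adc_bits grows, which is why the
# ADC dominates area/power in such designs
```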