Optimal Orthogonal Graph Drawing with Convex Bend Costs
Traditionally, the quality of orthogonal planar drawings is quantified by
either the total number of bends, or the maximum number of bends per edge.
However, this neglects that in typical applications, edges have varying
importance. Moreover, as bend minimization over all planar embeddings is
NP-hard, most approaches focus on a fixed planar embedding.
We consider the problem OptimalFlexDraw that is defined as follows. Given a
planar graph G on n vertices with maximum degree 4 and for each edge e a cost
function cost_e : N_0 --> R defining costs depending on the number of bends on
e, compute an orthogonal drawing of G of minimum cost. Note that this optimizes
over all planar embeddings of the input graph, and the cost functions allow
fine-grained control over the bends of individual edges.
In this generality OptimalFlexDraw is NP-hard. We show that it can be solved
efficiently if 1) the cost function of each edge is convex and 2) the first
bend on each edge does not cause any cost (which is a condition similar to the
positive flexibility for the decision problem FlexDraw). Moreover, we show the
existence of an optimal solution with at most three bends per edge except for a
single edge per block (maximal biconnected component) with up to four bends.
For biconnected graphs we obtain a running time of O(n T_flow(n)), where
T_flow(n) denotes the time necessary to compute a minimum-cost flow in a planar
flow network with multiple sources and sinks. For connected graphs that are not
biconnected we need an additional factor of O(n).
Comment: 31 pages, 14 figures
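The two tractability conditions above can be checked mechanically on a finite prefix of a bend-cost function. A minimal sketch in Python; the function name and the list encoding of cost_e are ours, not the paper's:

```python
def is_valid_flexdraw_cost(costs):
    """Check the two tractability conditions from the abstract on a
    finite prefix of a bend-cost function cost_e : N_0 -> R, given as
    a list where costs[b] is the cost of b bends:
    1) the first bend causes no cost: cost(1) == cost(0);
    2) convexity: successive differences are non-decreasing."""
    if len(costs) >= 2 and costs[1] != costs[0]:
        return False
    diffs = [b - a for a, b in zip(costs, costs[1:])]
    return all(d1 <= d2 for d1, d2 in zip(diffs, diffs[1:]))

# cost 0 for up to one bend, then +1 per extra bend: both conditions hold
print(is_valid_flexdraw_cost([0, 0, 1, 2, 3]))  # True
# charging the first bend violates condition 1
print(is_valid_flexdraw_cost([0, 1, 2, 3]))     # False
```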
The Stochastic Firefighter Problem
The dynamics of infectious disease spread are crucial in determining the risk
of an outbreak and offering ways to contain it. We study sequential vaccination of
individuals in networks. In the original (deterministic) version of the
Firefighter problem, a fire breaks out at some node of a given graph. At each
time step, b nodes can be protected by a firefighter and then the fire spreads
to all unprotected neighbors of the nodes on fire. The process ends when the
fire can no longer spread. We extend the Firefighter problem to a probabilistic
setting, where the infection is stochastic. We devise a simple policy that only
vaccinates neighbors of infected nodes and is optimal on regular trees and on
general graphs for a sufficiently large budget. We derive methods for
calculating upper and lower bounds of the expected number of infected
individuals, as well as provide estimates on the budget needed for containment
in expectation. We calculate these explicitly on trees, d-dimensional grids,
and Erd\H{o}s R\'{e}nyi graphs. Finally, we construct a state-dependent budget
allocation strategy and demonstrate its superiority over constant budget
allocation on real networks following a first order acquaintance vaccination
policy.
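The stochastic process described above is easy to simulate. A toy sketch follows, assuming a simple transmission model in which each infected-to-susceptible edge transmits independently with probability p per step; all names and the tie-breaking rule for vaccination are ours:

```python
import random

def simulate(adj, source, b, p, rng=None):
    """One run of a stochastic Firefighter process on the graph `adj`
    (a dict mapping each node to its list of neighbours). Each step,
    up to b susceptible neighbours of infected nodes are vaccinated
    (the neighbour-only policy from the abstract), then the infection
    crosses each edge from an infected node to an unvaccinated
    neighbour independently with probability p. Returns the final
    number of infected nodes."""
    rng = rng or random.Random(0)
    infected, vaccinated = {source}, set()
    while True:
        frontier = {v for u in infected for v in adj[u]
                    if v not in infected and v not in vaccinated}
        vaccinated |= set(sorted(frontier)[:b])   # arbitrary tie-breaking
        newly = {v for u in infected for v in adj[u]
                 if v not in infected and v not in vaccinated
                 and rng.random() < p}
        if not newly:
            return len(infected)
        infected |= newly

# a path 0-1-2-3-4, fire at the middle node, deterministic spread (p = 1)
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(simulate(path, 2, b=2, p=1.0))  # 1: both neighbours vaccinated at once
print(simulate(path, 2, b=1, p=1.0))  # 2: the fire escapes on one side once
```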
Link Failure Recovery over Very Large Arbitrary Networks: The Case of Coding
Network coding-based link failure recovery techniques provide near-hitless
recovery and offer high capacity efficiency. Diversity coding is the first
technique to incorporate coding in this field and is easy to implement over
small arbitrary networks. However, its capacity efficiency is restricted by its
systematic coding, and its design complexity, though lower than that of other
coding-based recovery techniques, remains high. Alternative
techniques mitigate some of these limitations, but they are difficult to
implement over arbitrary networks. In this paper, we propose a simple column
generation-based design algorithm and a novel advanced diversity coding
technique to achieve near-hitless recovery over arbitrary networks. The design
framework consists of two parts: a main problem and a subproblem. The main
problem is realized with Linear Programming (LP) and Integer Linear Programming (ILP),
whereas the subproblem can be realized with different methods. The simulation
results suggest that both the novel coding structure and the novel design
algorithm lead to higher capacity efficiency for near-hitless recovery. The
novel design algorithm simplifies the capacity placement problem which enables
implementing diversity coding-based techniques on very large arbitrary
networks.
Comment: To be submitted to IEEE Transactions on Communications.
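At its core, systematic diversity coding sends the data streams unmodified over disjoint paths plus one protection stream carrying their XOR, so any single link failure is recoverable. The sketch below illustrates only this core idea with two streams; the names are ours, and the paper's actual contribution is the network-wide capacity placement around it:

```python
def encode(d1, d2):
    # systematic diversity coding: both data streams go out unmodified,
    # plus one protection stream carrying their bitwise XOR
    return d1, d2, d1 ^ d2

def recover(received):
    """Reconstruct (d1, d2) from the streams that survive at most one
    link failure; `received` maps surviving stream names among
    {"d1", "d2", "p"} to their payloads."""
    if "d1" in received and "d2" in received:
        return received["d1"], received["d2"]
    if "d1" in received:                      # the d2 link failed
        return received["d1"], received["d1"] ^ received["p"]
    return received["d2"] ^ received["p"], received["d2"]  # d1 failed

d1, d2, p = encode(0b1010, 0b0110)
# the link carrying d1 fails; XOR with the protection stream restores it
print(recover({"d2": d2, "p": p}))  # (10, 6)
```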
A Distributed Clustering Algorithm for Dynamic Networks
We propose an algorithm that builds and maintains clusters over a network
subject to mobility. This algorithm is fully decentralized and makes all the
different clusters grow concurrently. The algorithm uses circulating tokens
that collect data and move according to a random-walk traversal scheme. Each
token (i) creates a cluster with the nodes it discovers and (ii) manages the
cluster's expansion; all decisions affecting a cluster are taken only by the
node that owns its token. The size of each cluster is kept above a threshold
that is a parameter of the algorithm. The obtained clustering is locally
optimal in the sense that, with only a local view of each cluster, it computes
the largest possible number of clusters (i.e., the sizes of the clusters are
as close to the threshold as possible). This algorithm is designed as a
decentralized control algorithm for large-scale networks and is
mobility-adaptive: after a series of topological changes, the algorithm
converges to a new clustering. This recomputation only affects nodes in
clusters in which topological changes happened, and in adjacent clusters.
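A toy, centralized caricature of one token's behaviour may help fix ideas: the token performs a random walk and recruits the nodes it visits until its cluster reaches the size threshold. The real algorithm is fully decentralized, runs many tokens concurrently, and handles mobility; all names below are ours:

```python
import random

def grow_cluster(adj, start, m, rng=None):
    """Sketch of one circulating token: starting at `start`, walk
    randomly on the graph `adj` (dict: node -> neighbour list) and
    recruit every node visited until the cluster reaches the size
    threshold m (the algorithm's parameter)."""
    rng = rng or random.Random(7)
    cluster, pos = {start}, start
    while len(cluster) < m:
        pos = rng.choice(adj[pos])   # one random-walk step
        cluster.add(pos)
    return cluster

# the complete graph on 4 nodes; one token grows a cluster of size 3
k4 = {u: [v for v in range(4) if v != u] for u in range(4)}
print(len(grow_cluster(k4, 0, m=3)))  # 3
```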
Sub-families of Baxter Permutations Based on Pattern Avoidance
Baxter permutations are a class of permutations which are in bijection with a
class of floorplans that arise in chip design called mosaic floorplans. We
study a subclass of mosaic floorplans obtained by placing certain geometric
restrictions. This naturally leads to studying a subclass of Baxter
permutations, which is characterized by pattern avoidance. We establish a bijection,
between the subclass of floorplans we study and a subclass of Baxter
permutations, based on the analogy between decomposition of a floorplan into
smaller blocks and block decomposition of permutations. Apart from the
characterization, we also answer combinatorial questions on these classes. We
give an algebraic generating function (but without a closed form solution) for
the number of permutations, an exponential lower bound on growth rate, and a
linear time algorithm for deciding membership in each subclass. Based on the
recurrence relation describing the class, we also give a polynomial time
algorithm for enumeration. We finally prove that Baxter permutations are closed
under inverse based on an argument inspired by the geometry of the
corresponding mosaic floorplans. This proof also establishes that the subclass
of Baxter permutations we study is also closed under inverse. Characterizing
permutations instead of the corresponding floorplans can be helpful in
reasoning about the solution space and in designing efficient algorithms for
floorplanning.
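For readers who want to experiment: Baxter permutations have a classical characterization as the permutations avoiding the vincular patterns 2-41-3 and 3-14-2, where the middle pair of letters must be adjacent. A brute-force checker (our own illustration, not the paper's linear-time membership algorithm):

```python
from itertools import permutations

def is_baxter(pi):
    """True iff the 0-indexed permutation `pi` avoids the vincular
    patterns 2-41-3 and 3-14-2 (positions j and j+1 play the adjacent
    middle pair of the pattern)."""
    n = len(pi)
    for j in range(n - 1):
        for i in range(j):
            for k in range(j + 2, n):
                if pi[j + 1] < pi[i] < pi[k] < pi[j]:   # 2-41-3
                    return False
                if pi[j] < pi[k] < pi[i] < pi[j + 1]:   # 3-14-2
                    return False
    return True

# the Baxter numbers begin 1, 2, 6, 22, 92 for n = 1..5
counts = [sum(is_baxter(p) for p in permutations(range(n)))
          for n in range(1, 6)]
print(counts)  # [1, 2, 6, 22, 92]
```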
A Uniform Self-Stabilizing Minimum Diameter Spanning Tree Algorithm
We present a uniform self-stabilizing algorithm, which solves the problem of
distributively finding a minimum diameter spanning tree of an arbitrary
positively real-weighted graph. Our algorithm consists of two stages of
stabilizing protocols. The first stage is a uniform randomized stabilizing {\em
unique naming} protocol, and the second stage is a stabilizing {\em MDST}
protocol, designed as a {\em fair composition} of Merlin--Segall's stabilizing
protocol and a distributed deterministic stabilizing protocol solving the
MDST problem. The resulting randomized distributed algorithm presented herein
is a composition of the two stages; its expected stabilization time and memory
requirements are bounded in terms of the order of the graph, the maximum
degree of the network, the diameter in hops, and the largest edge weight. To
our knowledge, our protocol is the very first distributed algorithm for the
MDST problem. Moreover, it is fault-tolerant and works for any
anonymous arbitrary network.
Comment: 14 pages; international conference; uniform self-stabilizing variant
of the problem; 9th International Workshop on Distributed Algorithms
(WDAG'95), Mont-Saint-Michel, France (1995).
Calibration of Phone Likelihoods in Automatic Speech Recognition
In this paper we study the probabilistic properties of the posteriors in a
speech recognition system that uses a deep neural network (DNN) for acoustic
modeling. We do this by reducing Kaldi's DNN shared pdf-id posteriors to phone
likelihoods, and using test set forced alignments to evaluate these using a
calibration sensitive metric. Individual frame posteriors are in principle
well-calibrated, because the DNN is trained using cross entropy as the
objective function, which is a proper scoring rule. When entire phones are
assessed, we observe that it is best to average the log likelihoods over the
duration of the phone. Further scaling of the average log likelihoods by the
logarithm of the duration slightly improves the calibration, and this
improvement is retained when tested on independent test data.
Comment: Rejected by Interspeech 2016. I would love to include the reviews,
but there is no space for that here (400 characters).
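The phone-level scoring rule described above (average the frame log likelihoods over the phone's duration, optionally scale the average by the log of the duration) can be sketched in a few lines; the numbers below are made up and the names are ours:

```python
import math

def phone_score(frame_loglikes, duration_scaling=False):
    """Combine per-frame phone log likelihoods into one phone-level
    score: average over the phone's duration, and optionally scale the
    average by the log of the duration (the variant the abstract found
    to slightly improve calibration)."""
    d = len(frame_loglikes)
    avg = sum(frame_loglikes) / d
    return avg * math.log(d) if duration_scaling else avg

frames = [-1.2, -0.8, -1.0, -1.4]   # hypothetical frame log likelihoods
print(phone_score(frames))           # the plain average, about -1.1
print(phone_score(frames, duration_scaling=True))
```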
A Universal Grammar-Based Code For Lossless Compression of Binary Trees
We consider the problem of lossless compression of binary trees, with the aim
of reducing the number of code bits needed to store or transmit such trees. A
lossless grammar-based code is presented which encodes each binary tree into a
binary codeword in two steps. In the first step, the tree is transformed into a
context-free grammar from which the tree can be reconstructed. In the second
step, the context-free grammar is encoded into a binary codeword. The decoder
of the grammar-based code decodes the original tree from its codeword by
reversing the two encoding steps. It is shown that the resulting grammar-based
binary tree compression code is a universal code on a family of probabilistic
binary tree source models satisfying certain weak restrictions.
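The first encoding step can be illustrated by sharing identical subtrees, which yields a grammar with one rule per distinct subtree, so repeated subtrees are stored once. A minimal sketch under that interpretation; the tuple encoding and rule naming are ours, and step two (encoding the grammar into a binary codeword) is omitted:

```python
def tree_to_grammar(tree):
    """Transform a binary tree into a context-free grammar from which
    it can be reconstructed, giving every distinct subtree one
    nonterminal. Trees are nested tuples: () is a leaf, (left, right)
    an internal node. Returns the start symbol and the rule list."""
    rules = {}                       # subtree -> (nonterminal, right-hand side)

    def visit(sub):
        if sub in rules:
            return rules[sub][0]     # repeated subtree: reuse its nonterminal
        rhs = "leaf" if sub == () else (visit(sub[0]), visit(sub[1]))
        nt = f"A{len(rules)}"
        rules[sub] = (nt, rhs)
        return nt

    start = visit(tree)
    return start, list(rules.values())

# the full binary tree with 4 leaves: both subtrees of the root coincide,
# so 7 tree nodes compress to 3 grammar rules
t = (((), ()), ((), ()))
start, rules = tree_to_grammar(t)
print(start, rules)  # A2 [('A0', 'leaf'), ('A1', ('A0', 'A0')), ('A2', ('A1', 'A1'))]
```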
Self-Stabilizing Wavelets and r-Hops Coordination
We introduce a simple tool called the wavelet (or, r-wavelet) scheme.
Wavelets deal with coordination among processes that are at most r hops away
from each other. We present a self-stabilizing solution for this scheme. Our
solution requires no underlying structure and works in arbitrary anonymous
networks, i.e., no process identifier is required. Moreover, our solution works
under any (even unfair) daemon. Next, we use the wavelet scheme to design
self-stabilizing layer clocks. We show that they provide an efficient tool for
designing solutions to local coordination problems at distance r, e.g., r-barrier
synchronization and r-local resource allocation (LRA) such as r-local mutual
exclusion (LME), r-group mutual exclusion (GME), and r-Reader/Writers. Some
solutions to the r-LRA problem (e.g., r-LME) also provide transformers to
transform algorithms written assuming any r-central daemon into algorithms
working with any distributed daemon.
FPGA-based Accelerators of Deep Learning Networks for Learning and Classification: A Review
Due to recent advances in digital technologies and the availability of
credible data, deep learning, an area of artificial intelligence, has emerged
and has demonstrated its effectiveness in solving complex learning problems
not possible before. In particular, convolutional neural networks (CNNs) have
demonstrated their effectiveness in image detection and recognition
applications. However, they demand intensive computation and memory bandwidth,
which makes general-purpose CPUs fail to achieve the desired performance levels.
Consequently, hardware accelerators that use application specific integrated
circuits (ASICs), field programmable gate arrays (FPGAs), and graphic
processing units (GPUs) have been employed to improve the throughput of CNNs.
More precisely, FPGAs have been recently adopted for accelerating the
implementation of deep learning networks due to their ability to maximize
parallelism as well as due to their energy efficiency. In this paper, we review
recent existing techniques for accelerating deep learning networks on FPGAs. We
highlight the key features employed by the various techniques for improving the
acceleration performance. In addition, we provide recommendations for enhancing
the utilization of FPGAs for CNNs acceleration. The techniques investigated in
this paper represent the recent trends in FPGA-based accelerators of deep
learning networks. Thus, this review is expected to direct the future advances
on efficient hardware accelerators and to be useful for deep learning
researchers.
Comment: This article has been accepted for publication in IEEE Access
(December 2018).