Minimum d-dimensional arrangement with fixed points
In the Minimum $d$-Dimensional Arrangement Problem ($d$-dimAP) we are given a
graph with edge weights, and the goal is to find a one-to-one map of the vertices
into the integer lattice $\mathbb{Z}^d$ (for some fixed dimension $d$) minimizing
the total weighted stretch of the edges. This problem arises in VLSI placement
and chip design.
Motivated by these applications, we consider a generalization of d-dimAP,
where the positions of some of the vertices (pins) are fixed and specified as
part of the input. We are asked to extend this partial map to a map of all the
vertices, again minimizing the weighted stretch of edges. This generalization,
which we refer to as d-dimAP+, arises naturally in these application domains
(since it can capture blocked-off parts of the board, or the requirement of
power-carrying pins to be in certain locations, etc.). Perhaps surprisingly,
very little is known about this problem from an approximation viewpoint.
For dimension $d=2$, we obtain an approximation algorithm for 2-dimAP+, based
on a strengthening of the spreading-metric LP for 2-dimAP, and we bound the
integrality gap of this LP. We also show that it is NP-hard to approximate
2-dimAP+ within a factor better than $\Omega(k^{1/4-\epsilon})$. We further
consider a (conceptually harder, but practically even more interesting) variant
of 2-dimAP+, where the target space is a finite grid instead of the entire
integer lattice $\mathbb{Z}^2$. For this problem, we obtain an approximation
using the same LP relaxation, and we complement this upper bound with an
integrality gap and an $\Omega(k^{1/2-\epsilon})$-inapproximability result.
Our results naturally extend to the case of arbitrary fixed target dimension $d$.
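The objective above can be made concrete with a small sketch. The helper names (`weighted_stretch`, `greedy_extend`) and the greedy placement heuristic are my own illustration, not the paper's LP-based algorithm: the code computes the total weighted L1 stretch of a placement and extends a fixed pin assignment one free vertex at a time.

```python
# Illustrative sketch of the 2-dimAP+ objective: vertices are mapped
# one-to-one onto lattice points, some vertices ("pins") are pre-placed,
# and the cost is the sum over edges of weight * L1 distance.

def weighted_stretch(edges, placement):
    """edges: list of (u, v, w); placement: dict vertex -> (x, y)."""
    total = 0
    for u, v, w in edges:
        (xu, yu), (xv, yv) = placement[u], placement[v]
        total += w * (abs(xu - xv) + abs(yu - yv))  # L1 stretch of this edge
    return total

def greedy_extend(edges, pins, free_vertices, candidates):
    """Toy heuristic (NOT the paper's algorithm): place each free vertex,
    in order, at the unused candidate position minimizing the stretch of
    its already-placed incident edges."""
    placement = dict(pins)
    used = set(pins.values())
    for u in free_vertices:
        best_pos, best_cost = None, float("inf")
        for pos in candidates:
            if pos in used:
                continue
            cost = 0
            for a, b, w in edges:
                if a == u and b in placement:
                    other = placement[b]
                elif b == u and a in placement:
                    other = placement[a]
                else:
                    continue
                cost += w * (abs(pos[0] - other[0]) + abs(pos[1] - other[1]))
            if cost < best_cost:
                best_pos, best_cost = pos, cost
        placement[u] = best_pos
        used.add(best_pos)
    return placement
```

For example, with pins a at (0,0) and c at (2,0) and edges a-b (weight 2) and b-c (weight 1), the heuristic places b at (1,0) for a total stretch of 3.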
Seeding with Costly Network Information
We study the task of selecting a set of nodes in a social network of size $n$, to
seed a diffusion with maximum expected spread size, under the independent
cascade model with cascade probability $p$. Most of the previous work on this
problem (known as influence maximization) focuses on efficient algorithms to
approximate the optimal seed set with provable guarantees, given the knowledge
of the entire network. However, in practice, obtaining full knowledge of the
network is very costly. To address this gap, we first study the guarantees
achievable using influence samples. We provide an approximation algorithm with
a tight $(1-1/e)\mathrm{OPT}-\epsilon n$ guarantee using influence samples, and
show that the required number of samples is asymptotically optimal. We then
propose a probing algorithm that queries edges from the graph and uses them to
find a seed set with the same almost-tight approximation guarantee. We also
provide a matching (up to logarithmic factors) lower bound on the required
number of edge queries. To address the dependence of our probing algorithm on
the independent cascade probability $p$, we show that it is impossible to
maintain the same approximation
guarantees by controlling the discrepancy between the probing and seeding
cascade probabilities. Instead, we propose to down-sample the probed edges to
match the seeding cascade probability, provided that it does not exceed that of
probing. Finally, we test our algorithms on real world data to quantify the
trade-off between the cost of obtaining more refined network information and
the benefit of the added information for guiding improved seeding strategies.
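A minimal sketch of the independent cascade model and Monte Carlo greedy seeding (illustrative only; the paper's algorithms work from influence samples and edge probes with provable guarantees, which this toy simulation does not reproduce):

```python
import random

def independent_cascade(adj, seeds, p, rng):
    """Simulate one independent cascade: each newly activated node activates
    each inactive out-neighbor independently with probability p. Returns the
    set of activated nodes. `adj` maps node -> list of out-neighbors."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_seeds(adj, k, p, samples=200, seed=0):
    """Toy greedy influence maximization: repeatedly add the node whose
    Monte Carlo estimated spread (with the current seed set) is largest."""
    rng = random.Random(seed)
    chosen = []
    nodes = list(adj)
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in nodes:
            if v in chosen:
                continue
            gain = sum(len(independent_cascade(adj, chosen + [v], p, rng))
                       for _ in range(samples)) / samples
            if gain > best_gain:
                best, best_gain = v, gain
        chosen.append(best)
    return chosen
```

On a star graph with hub "h" and p = 1, the greedy procedure picks the hub, whose deterministic spread covers all four nodes.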
Learning and Designing Stochastic Processes from Logical Constraints
Stochastic processes offer a flexible mathematical formalism to model and
reason about systems. Most analysis tools, however, start from the premise
that models are fully specified, so that any parameters controlling the
system's dynamics must be known exactly. As this is seldom the case, many
methods have been devised over the last decade to infer (learn) such parameters
from observations of the state of the system. In this paper, we depart from
this approach by assuming that our observations are qualitative
properties encoded as satisfaction of linear temporal logic formulae, as
opposed to quantitative observations of the state of the system. An important
feature of this approach is that it unifies naturally the system identification
and the system design problems, where the properties, instead of observations,
represent requirements to be satisfied. We develop a principled statistical
estimation procedure based on maximising the likelihood of the system's
parameters, using recent ideas from statistical machine learning. We
demonstrate the efficacy and broad applicability of our method on a range of
simple but non-trivial examples, including rumour spreading in social networks
and hybrid models of gene regulation.
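As a toy instance of this idea (my own example, not from the paper), suppose the process is a Poisson process with unknown rate and each qualitative observation records whether the property "at least one event occurs before time T" held on one run. The satisfaction probability is then $1 - e^{-\lambda T}$, and the rate can be estimated by maximizing the Bernoulli likelihood of the observed satisfactions; in the paper's general setting this probability would itself be estimated by simulation:

```python
import math

def log_likelihood(lam, observations, T):
    """Bernoulli log-likelihood of the rate `lam`, given boolean
    observations of whether "an event occurred before T" was satisfied.
    Satisfaction probability for a Poisson process: 1 - exp(-lam*T)."""
    s = 1.0 - math.exp(-lam * T)
    return sum(math.log(s) if sat else math.log(1.0 - s)
               for sat in observations)

def mle_rate(observations, T, grid=None):
    """Grid-search maximum-likelihood estimate of the rate."""
    grid = grid or [0.01 * i for i in range(1, 500)]
    return max(grid, key=lambda lam: log_likelihood(lam, observations, T))
```

With 63 of 100 runs satisfying the property and T = 1, the estimate lands near the closed-form solution $-\ln(0.37) \approx 0.994$.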
The grid-dose-spreading algorithm for dose distribution calculation in heavy charged particle radiotherapy
A new variant of the pencil-beam (PB) algorithm for dose distribution
calculation for radiotherapy with protons and heavier ions, the grid-dose
spreading (GDS) algorithm, is proposed. The GDS algorithm is intrinsically
faster than conventional PB algorithms due to approximations in convolution
integral, where physical calculations are decoupled from simple grid-to-grid
energy transfer. It was effortlessly implemented in a carbon-ion radiotherapy
treatment planning system to enable realistic beam blurring in the field, which
was absent with the broad-beam (BB) algorithm. For a typical prostate
treatment, the slowing factor of the GDS algorithm relative to the BB algorithm
was 1.4, which is a great improvement over the conventional PB algorithms with
a typical slowing factor of several tens. The GDS algorithm is mathematically
equivalent to the PB algorithm for horizontal and vertical coplanar beams
commonly used in carbon-ion radiotherapy while dose deformation within the size
of the pristine spread occurs for angled beams. This deformation was within
3 mm for a single angled proton pencil beam and needs to be assessed against
the clinical requirements and tolerances in practical situations.
Comment: 7 pages, 3 figures
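The decoupling the abstract describes can be sketched in one dimension (my simplification, not the paper's formulation): each beam's energy is first transferred to its nearest grid node, and lateral blurring is then applied as a single grid convolution, independent of the per-beam physics.

```python
def deposit_on_grid(beams, n_bins, spacing):
    """beams: list of (position, energy). Nearest-grid-point deposit,
    i.e. the simple grid-to-grid energy transfer step."""
    grid = [0.0] * n_bins
    for pos, energy in beams:
        i = min(n_bins - 1, max(0, round(pos / spacing)))
        grid[i] += energy
    return grid

def spread(grid, kernel):
    """Convolve the deposited grid with a normalized blurring kernel,
    applied once over the whole grid rather than per beam."""
    half = len(kernel) // 2
    out = [0.0] * len(grid)
    for i, g in enumerate(grid):
        if g == 0.0:
            continue
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(grid):
                out[j] += g * w  # energy-conserving for interior bins
    return out
```

A single 10-unit beam at position 1.2 deposits into bin 1, and a [0.25, 0.5, 0.25] kernel spreads it to its neighbors while conserving the total.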
Dynamic Exploration of Networks: from general principles to the traceroute process
Dynamical processes taking place on real networks define evolving
subnetworks whose topology is not necessarily the same as that of the
underlying network.
We investigate the problem of determining the emerging degree distribution,
focusing on a class of tree-like processes, such as those used to explore the
Internet's topology. A general theory based on mean-field arguments is
proposed, both for single-source and multiple-source cases, and applied to the
specific example of the traceroute exploration of networks. Our results provide
a qualitative improvement in the understanding of dynamical sampling and of the
interplay between dynamics and topology in large networks like the Internet.
Comment: 13 pages, 6 figures
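A quick simulation illustrates the effect the theory addresses (my own illustration, not the paper's mean-field calculation): the union of shortest-path (BFS) trees from one or more sources, as in traceroute sampling, systematically under-reports node degrees relative to the underlying graph.

```python
from collections import Counter, deque

def bfs_tree_edges(adj, source):
    """Edges of one BFS (shortest-path) tree rooted at `source`."""
    parent = {source: None}
    queue = deque([source])
    edges = set()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                edges.add(frozenset((u, v)))  # undirected tree edge
                queue.append(v)
    return edges

def sampled_degree_distribution(adj, sources):
    """Degree histogram of the union of BFS trees from `sources`:
    maps observed degree -> number of nodes with that degree."""
    edges = set()
    for s in sources:
        edges |= bfs_tree_edges(adj, s)
    deg = Counter()
    for e in edges:
        for u in e:
            deg[u] += 1
    return Counter(deg.values())
```

On a 4-cycle, where every node has true degree 2, a single-source BFS tree reports two nodes with degree 2 and two with degree 1, showing the sampling bias.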