Outage Constrained Robust Secure Transmission for MISO Wiretap Channels
In this paper we consider robust secure beamformer design for MISO
wiretap channels. Assuming that the eavesdroppers' channels are only
partially available at the transmitter, we seek to maximize the secrecy
rate under transmit power and secrecy rate outage probability constraints.
The outage probability constraint requires that the secrecy rate exceed a
certain threshold with high probability, so including such a constraint in
the design naturally ensures the desired robustness. Unfortunately, the
presence of the probabilistic constraints makes the problem non-convex and
hence difficult to solve. We investigate the outage probability constrained
secrecy rate maximization problem using a novel two-step approach. Under a
wide range of uncertainty models, the developed algorithms obtain
high-quality solutions, sometimes even exact global solutions, to the
robust secure beamformer design problem. Simulation results verify the
effectiveness and robustness of the proposed algorithms.
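To make the outage notion concrete, the following sketch estimates a secrecy rate outage probability by Monte Carlo. All parameters are hypothetical, and the beamformer is plain maximum-ratio transmission rather than the robust design proposed in the paper; the Gaussian CSI-error model is likewise only one of the uncertainty models such a design might assume.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: Nt transmit antennas, single-antenna receiver and
# eavesdropper, unit noise power.
Nt, P = 4, 1.0
h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)      # legitimate channel
g_hat = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)  # eavesdropper CSI estimate
sigma = 0.3                              # assumed CSI-error standard deviation

# Naive (non-robust) beamformer: maximum-ratio transmission toward the receiver.
w = np.sqrt(P) * h / np.linalg.norm(h)

def secrecy_rate(g):
    """Secrecy rate for one eavesdropper channel realization."""
    snr_rx = abs(np.vdot(h, w)) ** 2
    snr_ev = abs(np.vdot(g, w)) ** 2
    return max(np.log2(1 + snr_rx) - np.log2(1 + snr_ev), 0.0)

# Empirical outage probability Pr(secrecy rate < threshold) under the
# Gaussian CSI-error model.
threshold, trials = 0.5, 20000
err = sigma * (rng.standard_normal((trials, Nt))
               + 1j * rng.standard_normal((trials, Nt))) / np.sqrt(2)
rates = np.array([secrecy_rate(g_hat + e) for e in err])
print("empirical outage probability:", (rates < threshold).mean())
```

A robust design would choose `w` so that this empirical outage stays below a target (say 5%) for the worst channel consistent with the uncertainty model, rather than fixing `w` first and measuring the outage afterwards.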
Engineering design applications of surrogate-assisted optimization techniques
The construction of models aimed at learning the behaviour of a system whose responses to inputs are expensive to measure is a branch of statistical science that has been around for a very long time. Geostatistics has pioneered a drive over the last half century towards a better understanding of the accuracy of such ‘surrogate’ models of the expensive function. Of particular interest to us here are some of the even more recent advances related to exploiting such formulations in an optimization context. While the classic goal of the modelling process has been to achieve a uniform prediction accuracy across the domain, an economical optimization process may aim to bias the distribution of the learning budget towards promising basins of attraction. This can only happen, of course, at the expense of the global exploration of the space, and thus finding the best balance may be viewed as an optimization problem in itself. We examine here a selection of the state-of-the-art solutions to this type of balancing exercise through the prism of several simple, illustrative problems, followed by two ‘real world’ applications: the design of a regional airliner wing and the multi-objective search for a low environmental impact house.
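The exploration-exploitation balance described above is often struck with an acquisition function such as expected improvement over a Gaussian-process surrogate. A minimal NumPy sketch follows; the kernel lengthscale, jitter, budget, and 1-D test function are illustrative choices, not the setup used in the paper.

```python
import numpy as np
from math import erf

def rbf(A, B, ls=0.15):
    """Squared-exponential kernel on 1-D inputs (lengthscale is a guess)."""
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """Zero-mean GP posterior mean and stddev at test points Xs."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    """EI for minimization: biases sampling toward promising basins."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.array([erf(t / np.sqrt(2)) for t in z]))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best - mu) * Phi + sd * phi

f = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)  # synthetic "expensive" function
X = np.array([0.0, 0.5, 1.0])                        # initial design
y = f(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(8):                                   # small learning budget
    ys = (y - y.mean()) / y.std()                    # standardize for the GP
    mu, sd = gp_posterior(X, ys, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, ys.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print("best sample:", X[np.argmin(y)], y.min())
```

Maximizing EI rather than the surrogate's mean is precisely the balancing act the abstract describes: the `sd * phi` term rewards unexplored regions while the `(best - mu) * Phi` term rewards predicted improvement.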
Smooth Parametrizations in Dynamics, Analysis, Diophantine and Computational Geometry
Smooth parametrization consists of subdividing the mathematical objects
under consideration into simple pieces and then representing each piece
parametrically, while keeping control of high-order derivatives. The main
goal of the present paper is to provide a short overview of some results
and open problems on smooth parametrization and its applications in several
apparently rather separate domains: Smooth Dynamics, Diophantine Geometry,
Approximation Theory, and Computational Geometry.
The structure of the results, open problems, and conjectures in each of these
domains shows in many cases a remarkable similarity, which we try to stress.
Sometimes this similarity can be easily explained, sometimes the reasons
remain somewhat obscure, and this motivates some natural questions
discussed in the paper. We also present some new results, stressing the
interconnections between various types and various applications of smooth
parametrization.
Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time
In the decremental single-source shortest paths (SSSP) problem we want to
maintain the distances between a given source node and every other node in
an $n$-node $m$-edge graph undergoing edge deletions. While its static
counterpart can be solved in near-linear time, this decremental problem is
much more challenging even in the undirected unweighted case. In this case,
the classic $O(mn)$ total update time of Even and Shiloach [JACM 1981] has
been the fastest known algorithm for three decades. At the cost of a
$(1+\epsilon)$-approximation factor, the running time was recently improved
to $n^{2+o(1)}$ by Bernstein and Roditty [SODA 2011]. In this paper, we
bring the running time down to near-linear: we give a
$(1+\epsilon)$-approximation algorithm with $m^{1+o(1)}$ expected total
update time, thus obtaining near-linear time. Moreover, we obtain
$m^{1+o(1)} \log W$ time for the weighted case, where the edge weights are
integers from $1$ to $W$. The only prior work on weighted graphs in
$o(mn)$ time is the $mn^{0.9+o(1)}$-time algorithm by Henzinger et al.
[STOC 2014, ICALP 2015], which works for directed graphs with
quasi-polynomial edge weights. The expected running time bound of our
algorithm holds against an oblivious adversary.
In contrast to the previous results, which rely on maintaining a sparse
emulator, our algorithm relies on maintaining a so-called sparse
$(h, \epsilon)$-hop set, introduced by Cohen [JACM 2000] in the PRAM
literature. An $(h, \epsilon)$-hop set of a graph $G = (V, E)$ is a set $F$
of weighted edges such that the distance between any pair of nodes in $G$
can be $(1+\epsilon)$-approximated by their $h$-hop distance (given by a
path containing at most $h$ edges) on $G' = (V, E \cup F)$. Our algorithm
can maintain an $(n^{o(1)}, \epsilon)$-hop set of near-linear size in
near-linear time under edge deletions.
Comment: Accepted to Journal of the ACM. A preliminary version of this
paper was presented at the 55th IEEE Symposium on Foundations of Computer
Science (FOCS 2014). Abstract shortened to respect the arXiv limit of 1920
characters.
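The classic Even-Shiloach structure mentioned in the abstract maintains BFS levels that only grow under deletions; each node pays roughly its degree per level it falls, giving the $O(mn)$ total update time. A compact Python sketch of the unweighted, undirected case (illustrative only, not the paper's near-linear algorithm):

```python
from collections import defaultdict, deque

class EvenShiloachTree:
    """Decremental BFS levels from a source in an undirected, unweighted
    graph. Levels never decrease under deletions."""

    def __init__(self, n, edges, source):
        self.n = n
        self.adj = defaultdict(set)
        for u, v in edges:
            self.adj[u].add(v)
            self.adj[v].add(u)
        INF = float("inf")
        self.level = [INF] * n
        self.level[source] = 0
        q = deque([source])
        while q:                                  # initial BFS
            u = q.popleft()
            for v in self.adj[u]:
                if self.level[v] == INF:
                    self.level[v] = self.level[u] + 1
                    q.append(v)

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        INF = float("inf")
        work = deque([u, v])
        while work:
            w = work.popleft()
            lw = self.level[w]
            if lw == 0 or lw == INF:
                continue
            # w is supported if some neighbor sits one level closer to
            # the source; if so, its distance is unchanged.
            if any(self.level[x] == lw - 1 for x in self.adj[w]):
                continue
            # Otherwise w falls one level (or drops out entirely), and
            # both w and its neighbors must be re-examined.
            self.level[w] = lw + 1 if lw + 1 < self.n else INF
            work.append(w)
            work.extend(self.adj[w])

t = EvenShiloachTree(4, [(0, 1), (1, 2), (2, 3), (0, 3)], source=0)
t.delete_edge(0, 3)
print(t.level)  # [0, 1, 2, 3]: node 3 fell from level 1 to level 3
```

Because each node's level can only climb from 0 to $n$ and a node scans its adjacency list each time it climbs, the total work over all deletions is bounded, which is exactly the amortized argument behind the $O(mn)$ bound.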
On the Discrepancy of Jittered Sampling
We study the discrepancy of jittered sampling sets: such a set is generated
for fixed $m \in \mathbb{N}$ by partitioning $[0,1]^d$ into $m^d$
axis-aligned cubes of equal measure and placing a random point inside each
of the $N = m^d$ cubes. We prove that, for $N$ sufficiently large, the
expected star-discrepancy is at most of order
$\sqrt{d}\,(\log N)^{1/2} N^{-1/2 - 1/(2d)}$, where the upper bound with an
unspecified constant was proven earlier by Beck. Our proof makes crucial
use of the sharp
Dvoretzky-Kiefer-Wolfowitz inequality and a suitably tailored Bernstein
inequality; we have reasons to believe that the upper bound has the sharp
scaling in $N$. Additional heuristics suggest that jittered sampling should
be able to improve known bounds on the inverse of the star-discrepancy in
the relevant regime. We also prove a partition principle showing that every
partition of $[0,1]^d$ combined with a jittered sampling construction gives
rise to a set whose expected squared discrepancy is smaller than that of
purely random points.
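A jittered sampling set is easy to generate, and its advantage over i.i.d. uniform points can be probed empirically. The sketch below uses a crude proxy for the star-discrepancy (a maximum over randomly drawn anchored test boxes, which only lower-bounds $D_N^*$); the parameters $m = 8$, $d = 2$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def jittered(m, d):
    """One uniform random point in each of the m^d equal cells of [0,1]^d."""
    corners = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"),
                       axis=-1).reshape(-1, d)
    return (corners + rng.random(corners.shape)) / m

def star_discrepancy_proxy(P, trials=4000):
    """Crude lower bound on D_N^* via random anchored boxes [0, x)."""
    N, d = P.shape
    xs = rng.random((trials, d))
    inside = (P[None, :, :] < xs[:, None, :]).all(axis=-1).sum(axis=1)
    return np.abs(inside / N - xs.prod(axis=1)).max()

m, d = 8, 2
J = jittered(m, d)                 # N = 64 jittered points
U = rng.random((m ** d, d))        # 64 purely random points
print("jittered proxy:", star_discrepancy_proxy(J))
print("uniform  proxy:", star_discrepancy_proxy(U))
```

Typically the jittered set scores lower, reflecting the partition principle stated above: stratifying the random points across an equal-measure partition reduces the expected discrepancy relative to purely random sampling.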