Fast Fourier Optimization: Sparsity Matters
Many interesting and fundamentally practical optimization problems, ranging
from optics, to signal processing, to radar and acoustics, involve constraints
on the Fourier transform of a function. It is well-known that the {\em fast
Fourier transform} (fft) is a recursive algorithm that can dramatically improve
the efficiency of computing the discrete Fourier transform. However, because
it is recursive, it is difficult to embed into a linear optimization problem.
In this paper, we explain the main idea behind the fast Fourier transform and
show how to adapt it in such a manner as to make it encodable as constraints in
an optimization problem. We demonstrate the method on a real-world problem from
the field of high-contrast imaging. On this problem, the dramatic improvements
translate into an ability to solve problems with a much finer grid of
discretized points. As
we shall show, in general, the "fast Fourier" version of the optimization
constraints produces a larger but sparser constraint matrix and therefore one
can think of the fast Fourier transform as a method of sparsifying the
constraints in an optimization problem, which is usually a good thing.Comment: 16 pages, 8 figure
A Probabilistic Model For the Time to Unravel a Strand of DNA
A common model for the time $\sigma_L$ (sec) taken by a DNA strand of length $L$ (cm) to unravel is to assume that new points of unraveling occur along the strand as a Poisson process of rate $\lambda$ 1/(cm x sec) in space-time and that the unraveling propagates at speed $v/2$ (cm/sec) in each direction until time $\sigma_L$. We solve the open problem of determining the distribution of $\sigma_L$ by finding its Laplace transform and using it to show that as $x = L^2\lambda/v \to \infty$, $\sigma_L$ is nearly a constant:
$$\sigma_L = \left[\frac{1}{\lambda v}\log\frac{L^2\lambda}{v}\right]^{1/2}.$$
We also derive (modulo some small gaps) the more precise limiting asymptotic formula: for $-\infty < \theta < \infty$,
$$P\left\{\sigma_L < \frac{1}{\sqrt{\lambda v}}\left(\psi^{1/2}\left[\log\frac{L^2\lambda}{v}\right] + \theta\,\psi^{-1/2}\left[\log\frac{L^2\lambda}{v}\right]\right)\right\} \to e^{-e^{-\theta}},$$
where $\psi$ is defined by the equation $\psi(x) = \log\psi(x) + x$, $x \geq 1$. These results are obtained by interchanging the roles of space and time to uncover an underlying Markov process which can be studied in detail.
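The space-time Poisson model is easy to simulate directly, which gives a sanity check on the leading-order constant. A minimal Monte Carlo sketch (parameter values are illustrative choices of mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def unravel_time(L, lam, v, T=10.0, grid=1000):
    """One sample of sigma_L: nucleation events arrive as a rate-lam Poisson
    process on [0, L] x [0, T]; each unravels in both directions at v/2."""
    n_events = rng.poisson(lam * L * T)
    y = rng.uniform(0, L, n_events)   # event positions (cm)
    t = rng.uniform(0, T, n_events)   # event times (sec)
    x = np.linspace(0, L, grid)
    # Point x is unraveled when the earliest-arriving front reaches it.
    reach = t[:, None] + np.abs(x[None, :] - y[:, None]) / (v / 2)
    return reach.min(axis=0).max()    # time the whole strand is unraveled

L, lam, v = 100.0, 1.0, 2.0
samples = [unravel_time(L, lam, v) for _ in range(100)]
predicted = np.sqrt(np.log(L**2 * lam / v) / (lam * v))
print(np.mean(samples), predicted)    # same order of magnitude
```

The simulated mean sits slightly above the leading-order constant, consistent with the $\psi$-correction in the more precise formula.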
Optimal pupil apodizations for arbitrary apertures
We present here fully optimized two-dimensional pupil apodizations for which
no specific geometric constraints are put on the pupil plane apodization, apart
from the shape of the aperture itself. Masks for circular and segmented
apertures are displayed, with and without central obstruction and spiders.
Examples of optimal masks are shown for Subaru, SPICA and JWST. Several
high-contrast regions are considered with different sizes, positions, shapes
and contrasts. It is interesting to note that all the masks that result from
these optimizations tend to have a binary transmission profile.
Comment: 16 pages, 10 figures
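For a symmetric aperture the apodization design becomes a linear program: throughput is maximized subject to two-sided linear contrast constraints on the (real) field in the dark zone. A 1-D toy sketch with scipy.optimize.linprog (the grid sizes, dark zone, and 1e-3 amplitude-contrast target are illustrative assumptions of mine, far looser than the regimes in the paper):

```python
import numpy as np
from scipy.optimize import linprog

# 1-D toy: aperture x in [-1/2, 1/2], mask A(x) = A(-x) symmetric, so only
# the half-grid x >= 0 is optimized and the on-axis field is real.
n = 200
x = (np.arange(n) + 0.5) / (2 * n)         # half-aperture sample points
dx = 1.0 / (2 * n)
xi = np.linspace(4.0, 10.0, 120)           # dark zone (lambda/D units)
eps = 1e-3                                 # toy amplitude-contrast target

Fmat = 2 * dx * np.cos(2 * np.pi * np.outer(xi, x))   # E(xi) = Fmat @ A
e0 = 2 * dx * np.ones(n)                              # E(0)  = e0 @ A

# Maximize throughput E(0) subject to |E(xi)| <= eps * E(0), 0 <= A <= 1.
A_ub = np.vstack([Fmat - eps * e0, -Fmat - eps * e0])
res = linprog(c=-e0, A_ub=A_ub, b_ub=np.zeros(2 * len(xi)),
              bounds=[(0, 1)] * n)
A_opt = res.x
print(res.status, e0 @ A_opt)   # 0 = optimal; throughput of the mask
```

Because the optimum of a linear program sits at a vertex of the feasible polytope, many of the transmission bounds $0 \le A_j \le 1$ are active there, which is one way to see why optimized masks gravitate toward binary profiles.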
Estimates of the optimal density and kissing number of sphere packings in high dimensions
The problem of finding the asymptotic behavior of the maximal density of
sphere packings in high Euclidean dimensions is one of the most fascinating and
challenging problems in discrete geometry. One century ago, Minkowski obtained
a rigorous lower bound that is controlled asymptotically by $2^{-d}$, where
$d$ is the Euclidean space dimension. An indication of the difficulty of the
problem can be garnered from the fact that exponential improvement of
Minkowski's bound has proved to be elusive, even though existing upper bounds
suggest that such improvement should be possible. Using a
statistical-mechanical procedure to optimize the density associated with a
"test" pair correlation function and a conjecture concerning the existence of
disordered sphere packings [S. Torquato and F. H. Stillinger, Experimental
Math. {\bf 15}, 307 (2006)], the putative exponential improvement was found
with an asymptotic behavior controlled by $2^{-0.77865\ldots d}$. Using the same
methods, we investigate whether this exponential improvement can be further
improved by exploring other test pair correlation functions corresponding to
disordered packings. We demonstrate that there are simpler test functions that
lead to the same asymptotic result. More importantly, we show that there is a
wide class of test functions that lead to precisely the same exponential
improvement, and therefore the asymptotic form $2^{-0.77865\ldots d}$ is much
more general than previously surmised.
Comment: 23 pages, 4 figures, submitted to Phys. Rev. E
Spiderweb Masks for High-Contrast Imaging
Motivated by the desire to image exosolar planets, recent work by us and
others has shown that high-contrast imaging can be achieved using specially
shaped pupil masks. To date, the masks we have designed have been symmetric
with respect to a Cartesian coordinate system but were not rotationally
invariant, thus requiring that one take multiple images at different angles of
rotation about the central point in order to obtain high-contrast in all
directions. In this paper, we present a new class of masks that have rotational
symmetry and provide high-contrast in all directions with just one image. These
masks provide the required 10^{-10} level of contrast to within 4 lambda/D, and
in some cases 3 lambda/D, of the central point, which is deemed necessary for
exosolar planet finding/imaging. They are also well-suited for use on
ground-based telescopes, and perhaps NGST too, since they can accommodate
central obstructions and associated support spiders.
Comment: 20 pages, 9 figures, to appear in ApJ
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Deep neural networks have emerged as a widely used and effective means for
tackling complex, real-world problems. However, a major obstacle in applying
them to safety-critical systems is the great difficulty in providing formal
guarantees about their behavior. We present a novel, scalable, and efficient
technique for verifying properties of deep neural networks (or providing
counter-examples). The technique is based on the simplex method, extended to
handle the non-convex Rectified Linear Unit (ReLU) activation function, which
is a crucial ingredient in many modern neural networks. The verification
procedure tackles neural networks as a whole, without making any simplifying
assumptions. We evaluated our technique on a prototype deep neural network
implementation of the next-generation airborne collision avoidance system for
unmanned aircraft (ACAS Xu). Results show that our technique can successfully
prove properties of networks that are an order of magnitude larger than the
largest networks verified using existing methods.
Comment: This is the extended version of a paper with the same title that
appeared at CAV 2017
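Reluplex handles ReLUs lazily inside a modified simplex procedure; the brute-force version of the underlying case-split idea, which it improves upon, is easy to sketch: each ReLU is either active (h = pre-activation, pre-activation >= 0) or inactive (h = 0, pre-activation <= 0), and each phase pattern yields an ordinary LP. A toy sketch (hypothetical two-unit network of my own, not the Reluplex algorithm or the ACAS Xu models):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy network: h = relu(W1 @ x + b1), y = w2 @ h, for x in [0, 1]^2.
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([0.0, -0.25])
w2 = np.array([1.0, 1.0])

def max_output():
    """Exact max of y over the input box: split each ReLU into its active /
    inactive phase and solve one LP per phase pattern (4 LPs here)."""
    best = -np.inf
    for phases in itertools.product([0, 1], repeat=2):
        A_eq, b_eq, A_ub, b_ub = [], [], [], []
        for i, active in enumerate(phases):
            pre = np.concatenate([W1[i], np.zeros(2)])   # W1_i @ x term
            e_h = np.zeros(4)
            e_h[2 + i] = 1.0                             # h_i term
            if active:   # h_i = W1_i x + b1_i  and  W1_i x + b1_i >= 0
                A_eq.append(e_h - pre); b_eq.append(b1[i])
                A_ub.append(-pre);      b_ub.append(b1[i])
            else:        # h_i = 0  and  W1_i x + b1_i <= 0
                A_eq.append(e_h);       b_eq.append(0.0)
                A_ub.append(pre);       b_ub.append(-b1[i])
        res = linprog(c=-np.concatenate([np.zeros(2), w2]),
                      A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                      bounds=[(0, 1), (0, 1), (None, None), (None, None)])
        if res.status == 0:                              # phase is feasible
            best = max(best, -res.fun)
    return best

print(max_output())   # exact maximum of the network output over the box
```

Verifying "y never exceeds c" amounts to checking max_output() <= c; the exponential blow-up of this enumeration (2^k patterns for k ReLUs) is precisely what Reluplex's lazy splitting avoids in practice.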
Regularizing Portfolio Optimization
The optimization of large portfolios displays an inherent instability to
estimation error. This poses a fundamental problem, because solutions that are
not stable under sample fluctuations may look optimal for a given sample, but
are, in effect, very far from optimal with respect to the average risk. In this
paper, we approach the problem from the point of view of statistical learning
theory. The occurrence of the instability is intimately related to over-fitting
which can be avoided using known regularization methods. We show how
regularized portfolio optimization with the expected shortfall as a risk
measure is related to support vector regression. The budget constraint dictates
a modification. We present the resulting optimization problem and discuss the
solution. The L2 norm of the weight vector is used as a regularizer, which
corresponds to a diversification "pressure". This means that diversification,
besides counteracting downward fluctuations in some assets by upward
fluctuations in others, is also crucial because it improves the stability of
the solution. The approach we provide here allows for the simultaneous
treatment of optimization and diversification in one framework that enables the
investor to trade off between the two, depending on the size of the available
data set.
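The expected-shortfall objective with an L2 penalty can be written as a smooth quadratic program via the standard Rockafellar-Uryasev auxiliary variables. A toy sketch with synthetic returns (all parameter values and the synthetic data are illustrative assumptions of mine, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m, n = 100, 3                      # scenarios x assets (toy return sample)
R = rng.normal(0.001, 0.02, size=(m, n))

alpha, lam = 0.95, 0.1             # ES level; L2 "diversification pressure"

# Rockafellar-Uryasev: ES_alpha(w) = min_t t + E[max(-Rw - t, 0)]/(1-alpha).
# Auxiliary u_i linearize the max; variables z = [w (n) | t (1) | u (m)].
def objective(z):
    w, t, u = z[:n], z[n], z[n + 1:]
    return t + u.mean() / (1 - alpha) + lam * (w @ w)

cons = [
    {"type": "eq", "fun": lambda z: z[:n].sum() - 1.0},      # budget
    {"type": "ineq", "fun": lambda z: z[n + 1:]},            # u >= 0
    {"type": "ineq",                                         # u >= -Rw - t
     "fun": lambda z: z[n + 1:] + (R @ z[:n]) + z[n]},
]
w0 = np.full(n, 1.0 / n)
u0 = np.maximum(-(R @ w0), 0.0)    # feasible start with t = 0
z0 = np.concatenate([w0, [0.0], u0])
res = minimize(objective, z0, constraints=cons, method="SLSQP",
               options={"maxiter": 300})
w_opt = res.x[:n]
print(res.success, w_opt, w_opt.sum())   # regularized, fully invested weights
```

Raising lam pulls the weights toward the equal-weighted portfolio, which is the "diversification pressure" discussed above; lam = 0 recovers the unregularized, estimation-noise-sensitive optimizer.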