Fast Fourier Optimization: Sparsity Matters
Many interesting and fundamentally practical optimization problems, ranging
from optics, to signal processing, to radar and acoustics, involve constraints
on the Fourier transform of a function. It is well-known that the {\em fast
Fourier transform} (fft) is a recursive algorithm that can dramatically improve
the efficiency for computing the discrete Fourier transform. However, because
it is recursive, it is difficult to embed into a linear optimization problem.
In this paper, we explain the main idea behind the fast Fourier transform and
show how to adapt it in such a manner as to make it encodable as constraints in
an optimization problem. We demonstrate a real-world problem from the field of
high-contrast imaging. On this problem, the dramatic efficiency gains translate
into the ability to solve problems on a much finer grid of discretized points. As
we shall show, in general, the "fast Fourier" version of the optimization
constraints produces a larger but sparser constraint matrix; one can therefore
think of the fast Fourier transform as a method for sparsifying the constraints
of an optimization problem, which is usually a good thing.
Comment: 16 pages, 8 figures
Estimates of the optimal density and kissing number of sphere packings in high dimensions
The problem of finding the asymptotic behavior of the maximal density of
sphere packings in high Euclidean dimensions is one of the most fascinating and
challenging problems in discrete geometry. One century ago, Minkowski obtained
a rigorous lower bound that is controlled asymptotically by $2^{-d}$, where $d$
is the Euclidean space dimension. An indication of the difficulty of the
problem can be garnered from the fact that exponential improvement of
Minkowski's bound has proved to be elusive, even though existing upper bounds
suggest that such improvement should be possible. Using a
statistical-mechanical procedure to optimize the density associated with a
"test" pair correlation function and a conjecture concerning the existence of
disordered sphere packings [S. Torquato and F. H. Stillinger, Experimental
Math. {\bf 15}, 307 (2006)], the putative exponential improvement was found
with an asymptotic behavior controlled by $2^{-(0.77865\ldots)d}$. Using the same
methods, we investigate whether this exponential improvement can be further
improved by exploring other test pair correlation functions corresponding to
disordered packings. We demonstrate that there are simpler test functions that
lead to the same asymptotic result. More importantly, we show that there is a
wide class of test functions that lead to precisely the same exponential
improvement, and therefore the asymptotic form $2^{-(0.77865\ldots)d}$ is much
more general than previously surmised.
Comment: 23 pages, 4 figures, submitted to Phys. Rev.
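For orientation, the two exponential rates can be set side by side; the classical form of Minkowski's bound, $\zeta(d)/2^{d-1}$, is supplied here as background and is not stated in the abstract itself:
$$\phi_{\rm Mink}(d) \ge \frac{\zeta(d)}{2^{d-1}} \sim 2^{-d}, \qquad \phi_{\rm TS}(d) \sim 2^{-(0.77865\ldots)d}, \qquad d \to \infty,$$
an exponential improvement by a factor growing like $2^{(1-0.77865\ldots)d}$.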
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Deep neural networks have emerged as a widely used and effective means for
tackling complex, real-world problems. However, a major obstacle in applying
them to safety-critical systems is the great difficulty in providing formal
guarantees about their behavior. We present a novel, scalable, and efficient
technique for verifying properties of deep neural networks (or providing
counter-examples). The technique is based on the simplex method, extended to
handle the non-convex Rectified Linear Unit (ReLU) activation function, which
is a crucial ingredient in many modern neural networks. The verification
procedure tackles neural networks as a whole, without making any simplifying
assumptions. We evaluated our technique on a prototype deep neural network
implementation of the next-generation airborne collision avoidance system for
unmanned aircraft (ACAS Xu). Results show that our technique can successfully
prove properties of networks that are an order of magnitude larger than the
largest networks verified using existing methods.
Comment: This is the extended version of a paper with the same title that appeared at CAV 2017.
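As a toy illustration of the underlying search space (not Reluplex itself, which splits on ReLU phases lazily inside a modified simplex core rather than eagerly as below), the following Python sketch verifies an output bound for a hypothetical two-neuron network by solving one LP per ReLU activation pattern; all weights and the threshold are made-up values:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# hypothetical network y = w2 . relu(W1 x + b1) + b2 on the box [-1, 1]^2
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([0.0, -0.5])
w2 = np.array([1.0, 1.0])
b2 = 0.0
bounds = [(-1.0, 1.0)] * 2
threshold = 2.5      # property to prove: y < threshold on the whole box

best = -np.inf
for pattern in itertools.product([0.0, 1.0], repeat=2):
    p = np.array(pattern)
    # active unit (p_i = 1): (W1 x + b1)_i >= 0; inactive: <= 0.
    # Both become rows of A_ub x <= b_ub by flipping the sign of active rows.
    sign = np.where(p == 1.0, -1.0, 1.0)
    res = linprog(c=-(p * w2) @ W1,                # maximize y == minimize -y
                  A_ub=sign[:, None] * W1, b_ub=-sign * b1,
                  bounds=bounds, method="highs")
    if res.status == 0:                            # this pattern is feasible
        y = (p * w2) @ (W1 @ res.x + b1) + b2
        best = max(best, y)

print(f"max output over box: {best:.3f}; property holds: {best < threshold}")
```

Eager enumeration like this is exponential in the number of ReLUs; the point of the simplex extension described above is to leave most ReLUs unsplit and repair violations inside the solver.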
Numerical Construction of LISS Lyapunov Functions under a Small Gain Condition
In the stability analysis of large-scale interconnected systems it is
frequently desirable to be able to determine a decay point of the gain
operator, i.e., a point whose image under the monotone operator is strictly
smaller than the point itself. The set of such decay points plays a crucial
role in checking, in a semi-global fashion, the local input-to-state stability
of an interconnected system and in the numerical construction of a LISS
Lyapunov function. We provide a homotopy algorithm that computes a decay point
of a monotone operator. For this purpose we use a fixed-point algorithm and
provide a function whose fixed points correspond to decay points of the
monotone operator. The advantage over an earlier algorithm is demonstrated.
Furthermore, an example is given which shows how to analyze a given perturbed
interconnected system.
Comment: 30 pages, 7 figures, 4 tables
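For the special case of two interconnected systems one can write down a decay point directly, which makes the object concrete; the gains below are invented for illustration, and the general n-system case is what the paper's homotopy algorithm addresses:

```python
import numpy as np

# invented ISS gains for a two-system interconnection
g12 = lambda r: 0.5 * r + 0.1 * np.sqrt(r)   # how system 2 drives system 1
g21 = lambda r: 0.6 * r                      # how system 1 drives system 2

def gamma_op(s):
    """Monotone gain operator: Gamma(s) = (g12(s_2), g21(s_1))."""
    return np.array([g12(s[1]), g21(s[0])])

def decay_point(t, delta=1e-3):
    """s = (t, g21(t) + delta) satisfies Gamma(s) < s componentwise
    whenever the small-gain composition g12(g21(t) + delta) < t holds."""
    s = np.array([t, g21(t) + delta])
    assert np.all(gamma_op(s) < s), "small-gain condition fails at this t"
    return s

s = decay_point(1.0)
print("decay point:", s, "-> Gamma(s) =", gamma_op(s))
```

Here Gamma(s) lands strictly below s in every component, which is exactly the decay-point property used to build a LISS Lyapunov function.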
Telescope to Observe Planetary Systems (TOPS): a high throughput 1.2-m visible telescope with a small inner working angle
The Telescope to Observe Planetary Systems (TOPS) is a proposed space mission
to image in the visible (0.4-0.9 micron) planetary systems of nearby stars
simultaneously in 16 spectral bands (resolution R~20). For the ~10 most
favorable stars, it will have the sensitivity to discover 2 R_E rocky planets
within habitable zones and characterize their surfaces or atmospheres through
spectrophotometry. Many more massive planets and debris discs will be imaged
and characterized for the first time. With a 1.2m visible telescope, the
proposed mission achieves its power by exploiting the most efficient and robust
coronagraphic and wavefront control techniques. The Phase-Induced Amplitude
Apodization (PIAA) coronagraph used by TOPS allows planet detection at 2
lambda/d with nearly 100% throughput and preserves the telescope angular
resolution. An efficient focal plane wavefront sensing scheme accurately
measures wavefront aberrations which are fed back to the telescope active
primary mirror. Fine wavefront control is also performed independently in each
of 4 spectral channels, resulting in a system that is robust to wavefront
chromaticity.
Comment: 12 pages, SPIE conference proceedings, May 2006, Orlando, Florida
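For scale, the quoted 2 lambda/d inner working angle of the PIAA coronagraph works out as follows for a 1.2 m aperture across the 0.4-0.9 micron band (a back-of-the-envelope check, not a number taken from the proposal):

```python
import numpy as np

D = 1.2                                          # aperture diameter (m)
wl = np.array([0.4e-6, 0.65e-6, 0.9e-6])         # band edges and midpoint (m)
RAD_TO_MAS = np.degrees(1.0) * 3600e3            # radians -> milliarcseconds

for w, iwa in zip(wl, 2 * wl / D * RAD_TO_MAS):  # inner working angle 2*lambda/D
    print(f"{w * 1e6:.2f} um: 2*lambda/D = {iwa:.0f} mas")   # ~137, 223, 309 mas
```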
Regularizing Portfolio Optimization
The optimization of large portfolios displays an inherent instability to
estimation error. This poses a fundamental problem, because solutions that are
not stable under sample fluctuations may look optimal for a given sample, but
are, in effect, very far from optimal with respect to the average risk. In this
paper, we approach the problem from the point of view of statistical learning
theory. The occurrence of the instability is intimately related to over-fitting
which can be avoided using known regularization methods. We show how
regularized portfolio optimization with the expected shortfall as a risk
measure is related to support vector regression. The budget constraint dictates
a modification. We present the resulting optimization problem and discuss the
solution. The L2 norm of the weight vector is used as a regularizer, which
corresponds to a diversification "pressure". This means that diversification,
besides counteracting downward fluctuations in some assets by upward
fluctuations in others, is also crucial because it improves the stability of
the solution. The approach we provide here allows for the simultaneous
treatment of optimization and diversification in one framework that enables the
investor to trade-off between the two, depending on the size of the available
data set.
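A minimal sketch of the regularized problem in its standard Rockafellar-Uryasev form (synthetic returns and an arbitrary regularization weight; this follows the abstract's description, not the paper's exact formulation):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
R = rng.normal(5e-4, 0.01, size=(250, 20))   # synthetic daily returns, 20 assets
N, n = R.shape
alpha, lam = 0.95, 0.1                       # ES confidence level, L2 weight

w = cp.Variable(n)                           # portfolio weights
t = cp.Variable()                            # VaR-like auxiliary variable
# Rockafellar-Uryasev: ES_alpha = min_t  t + E[(loss - t)_+] / (1 - alpha)
es = t + cp.sum(cp.pos(-R @ w - t)) / ((1 - alpha) * N)
prob = cp.Problem(cp.Minimize(es + lam * cp.sum_squares(w)),  # L2 "pressure"
                  [cp.sum(w) == 1])                           # budget constraint
prob.solve()
print("in-sample ES:", round(es.value, 4),
      "| max weight:", round(float(w.value.max()), 3))
```

Sending lam to zero recovers the plain expected-shortfall minimizer, whose weights are unstable across resamples of R; the quadratic term is what ties the problem to support vector regression.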
A Probabilistic Model For the Time to Unravel a Strand of DNA
A common model for the time $\sigma_L$ (sec) taken by a DNA strand of length $L$ (cm) to unravel is to assume that new points of unraveling occur along the strand as a Poisson process of rate $\lambda$ (1/(cm$\times$sec)) in space-time and that the unraveling propagates at speed $v/2$ (cm/sec) in each direction until time $\sigma_L$. We solve the open problem of determining the distribution of $\sigma_L$ by finding its Laplace transform and using it to show that, as $x = L^2\lambda/v \to \infty$, $\sigma_L$ is nearly a constant:
$$\sigma_L \approx \left[\frac{1}{\lambda v}\,\log\!\left(\frac{L^2\lambda}{v}\right)\right]^{1/2}.$$
We also derive (modulo some small gaps) the more precise limiting asymptotic formula: for $-\infty < \theta < \infty$,
$$P\!\left\{\sigma_L < \frac{1}{\sqrt{\lambda v}}\left(\psi^{1/2}\!\left[\log\!\left(\frac{L^2\lambda}{v}\right)\right] + \frac{\theta}{\psi^{1/2}\!\left[\log\!\left(\frac{L^2\lambda}{v}\right)\right]}\right)\right\} \to e^{-e^{-\theta}},$$
where $\psi$ is defined by the equation $\psi(x) = \log\psi(x) + x$, $x \geq 1$. These results are obtained by interchanging the roles of space and time to uncover an underlying Markov process which can be studied in detail.
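The space-time description above is easy to simulate, which gives a sanity check on the leading constant: a Monte Carlo sketch (grid resolution, time horizon, and parameter values are arbitrary choices) using the fact that a seed at $(x, t)$ unravels position $y$ at time $t + 2|y - x|/v$:

```python
import numpy as np

rng = np.random.default_rng(1)

def unravel_time(L, lam, v, grid=2000):
    """One sample of sigma_L: seeds form a rate-lam Poisson process on
    [0, L] x [0, T]; a seed (x, t) covers position y at t + 2|y - x| / v."""
    T = 10 * np.sqrt(np.log(max(L * L * lam / v, 2.0)) / (lam * v))
    while True:
        n = rng.poisson(lam * L * T)
        if n == 0:                  # no seeds yet: extend the horizon
            T *= 2
            continue
        x = rng.uniform(0, L, n)
        t = rng.uniform(0, T, n)
        y = np.linspace(0, L, grid)
        covered_at = (t[:, None] + 2 * np.abs(y - x[:, None]) / v).min(axis=0)
        sigma = covered_at.max()    # strand fully unraveled at the last point
        if sigma <= T:              # horizon long enough; later seeds cannot matter
            return sigma
        T *= 2

L, lam, v = 1.0, 100.0, 1.0
samples = [unravel_time(L, lam, v) for _ in range(200)]
print("mean sigma_L:", np.mean(samples),
      "| leading-order constant:", np.sqrt(np.log(L * L * lam / v) / (lam * v)))
```

Seeds that land in an already-unraveled region are harmless in the minimum above, since their cones lie inside the cone of whatever covered them first, which is why the simple min/max formula matches the model.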