Robust localization methods for passivity enforcement of linear macromodels
In this paper we solve a non-smooth convex formulation for passivity enforcement of linear macromodels using robust localization-based algorithms such as the ellipsoid and cutting plane methods. Unlike existing perturbation-based techniques, the formulation is based on direct ℌ∞ norm minimization through perturbation of the state-space model parameters. We provide a systematic way of defining an initial set that is guaranteed to contain the global optimum. We also provide a lower bound on the global minimum that grows tighter at each iteration and hence guarantees δ-optimality of the computed solution. We demonstrate the robustness of our implementation by generating accurate passive models for challenging examples for which existing algorithms either failed or exhibited extremely slow convergence.
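The localization idea the abstract describes can be sketched generically. The following is a minimal ellipsoid method for a non-smooth convex objective, not the paper's ℌ∞ formulation: it starts from a ball guaranteed to contain the minimizer, and the quantity sqrt(gᵀPg) upper-bounds the suboptimality gap, mirroring the δ-optimality certificate mentioned above. All names are illustrative.

```python
import numpy as np

def ellipsoid_min(f, subgrad, x0, radius, tol=1e-6, max_iter=500):
    """Minimize a convex, possibly non-smooth f with the ellipsoid method.

    The ball of the given radius around x0 must contain the minimizer.
    At each iterate, sqrt(g^T P g) bounds f(x) - f*, so stopping when it
    falls below tol certifies tol-optimality of the best point found.
    """
    n = len(x0)
    x = np.asarray(x0, dtype=float).copy()
    P = (radius ** 2) * np.eye(n)          # initial localization set
    best_x, best_f = x.copy(), f(x)
    for _ in range(max_iter):
        g = subgrad(x)
        gPg = float(g @ P @ g)
        if gPg <= tol ** 2:                # gap certificate reached
            break
        gn = g / np.sqrt(gPg)
        Pg = P @ gn
        x = x - Pg / (n + 1)               # move center against the subgradient
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

The volume of the localization ellipsoid shrinks by a fixed factor per iteration, which is what makes the lower bound on the global minimum tighten geometrically.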
The interaction of helical tip and root vortices in a wind turbine wake
Analyses of the helical vortices measured behind a model wind turbine in a water channel are reported. Phase-locked measurements using planar particle image velocimetry are taken behind a Glauert rotor to investigate the evolution and breakdown of the helical vortex structures. Existing linear stability theory predicts helical vortex filaments to be susceptible to three unstable modes. The current work presents tip and root vortex evolution in the wake for varying tip speed ratio and shows a breaking of the helical symmetry and merging of the vortices due to mutual inductance between the vortical filaments. The merging of the vortices is shown to be steady with rotor phase; however, small-scale non-periodic meander of the vortex positions is also observed. The generation of the helical wake is demonstrated to be closely coupled with the blade aerodynamics, strongly influencing the vortex properties, which are shown to agree with theoretical predictions of the circulation shed into the wake by the blades. The mutual inductance of the helices is shown to occur at the same non-dimensional wake distance.
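As a rough aid to the geometry discussed above: under a simple frozen-wake assumption (an idealization, not the paper's measurements), the tip-vortex helix pitch scales inversely with tip speed ratio, so higher tip speed ratios pack the filaments closer together, which promotes the mutual-inductance interaction. A minimal sketch with hypothetical function names:

```python
import math

def helix_pitch(radius, tip_speed_ratio):
    """Pitch of the tip-vortex helix under a frozen-wake assumption:
    the vortex convects axially at U_inf while the rotor turns at Omega,
    so one helix turn spans h = 2*pi*R / lambda, lambda = Omega*R/U_inf."""
    return 2.0 * math.pi * radius / tip_speed_ratio

def turn_spacing(radius, tip_speed_ratio, n_blades):
    """Axial spacing between successive vortex filaments when every
    blade sheds its own helix (filaments interleave)."""
    return helix_pitch(radius, tip_speed_ratio) / n_blades
```

For a three-bladed rotor at tip speed ratio 6, the filament spacing is about 0.35 rotor radii, which is close enough for neighbouring filaments to induce velocities on each other.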
Non-convex Optimization for Machine Learning
A vast majority of machine learning algorithms train their models and perform
inference by solving optimization problems. In order to capture the learning
and prediction problems accurately, structural constraints such as sparsity or
low rank are frequently imposed or else the objective itself is designed to be
a non-convex function. This is especially true of algorithms that operate in
high-dimensional spaces or that train non-linear models such as tensor models
and deep networks.
The freedom to express the learning problem as a non-convex optimization
problem gives immense modeling power to the algorithm designer, but often such
problems are NP-hard to solve. A popular workaround to this has been to relax
non-convex problems to convex ones and use traditional methods to solve the
(convex) relaxed optimization problems. However, this approach may be lossy and
nevertheless presents significant challenges for large-scale optimization.
On the other hand, direct approaches to non-convex optimization have met with
resounding success in several domains and remain the methods of choice for the
practitioner, as they frequently outperform relaxation-based techniques;
popular heuristics include projected gradient descent and alternating
minimization. However, these are often poorly understood in terms of their
convergence and other properties.
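One of the heuristics named above, projected gradient descent on a non-convex constraint set, can be sketched for sparse least squares, where it is known as iterative hard thresholding. This is a generic illustration, not an algorithm taken from the monograph:

```python
import numpy as np

def project_sparse(x, k):
    """Euclidean projection onto the (non-convex) set of k-sparse vectors:
    keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def pgd_sparse_ls(A, b, k, step=None, iters=200):
    """Projected gradient descent for min ||Ax - b||^2 s.t. ||x||_0 <= k
    (iterative hard thresholding)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                  # gradient of the LS loss
        x = project_sparse(x - step * grad, k)    # gradient step + projection
    return x
```

The projection is onto a non-convex set, so the usual convex analysis does not apply; this is exactly the kind of simple procedure whose convergence the monograph analyzes.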
This monograph presents a selection of recent advances that bridge a
long-standing gap in our understanding of these heuristics. The monograph will
lead the reader through several widely used non-convex optimization techniques,
as well as applications thereof. The goal of this monograph is both to
introduce the rich literature in this area and to equip the reader with
the tools and techniques needed to analyze these simple procedures for
non-convex problems.
Comment: The official publication is available from now publishers via
http://dx.doi.org/10.1561/220000005
Simple, Efficient, and Neural Algorithms for Sparse Coding
Sparse coding is a basic task in many fields including signal processing,
neuroscience and machine learning where the goal is to learn a basis that
enables a sparse representation of a given set of data, if one exists. Its
standard formulation is as a non-convex optimization problem which is solved in
practice by heuristics based on alternating minimization. Recent work has
resulted in several algorithms for sparse coding with provable guarantees, but
somewhat surprisingly these are outperformed by the simple alternating
minimization heuristics. Here we give a general framework for understanding
alternating minimization which we leverage to analyze existing heuristics and
to design new ones also with provable guarantees. Some of these algorithms seem
implementable on simple neural architectures, which was the original motivation
of Olshausen and Field (1997a) in introducing sparse coding. We also give the
first efficient algorithm for sparse coding that works almost up to the
information theoretic limit for sparse recovery on incoherent dictionaries. All
previous algorithms that approached or surpassed this limit run in time
exponential in some natural parameter. Finally, our algorithms improve upon the
sample complexity of existing approaches. We believe that our analysis
framework will have applications in other settings where simple iterative
algorithms are used.
Comment: 37 pages, 1 figure
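A minimal sketch of the alternating-minimization scheme the abstract refers to, assuming a simple "threshold-and-refit" encoding step and a least-squares dictionary update. This is an illustration in the same spirit, not the authors' exact algorithm, and all names are hypothetical:

```python
import numpy as np

def sparse_codes(Y, D, k):
    """Encoding step: for each sample, pick the k atoms most correlated
    with it, then least-squares refit on that support."""
    C = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        S = np.argsort(np.abs(D.T @ Y[:, j]))[-k:]
        C[S, j] = np.linalg.lstsq(D[:, S], Y[:, j], rcond=None)[0]
    return C

def dict_update(Y, C):
    """Dictionary step: least-squares fit D = Y C^+, then renormalize
    each atom to unit norm."""
    D = Y @ np.linalg.pinv(C)
    norms = np.linalg.norm(D, axis=0)
    norms[norms == 0] = 1.0
    return D / norms

def alt_min_sparse_coding(Y, n_atoms, k, iters=20, seed=0):
    """Alternate between sparse encoding and dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        C = sparse_codes(Y, D, k)
        D = dict_update(Y, C)
    return D, sparse_codes(Y, D, k)
```

Each step solves an easy subproblem exactly while the other variable is held fixed; the non-convexity lives entirely in the coupling between the two blocks.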
High-contrast imaging at small separation: impact of the optical configuration of two deformable mirrors on dark holes
The direct detection and characterization of exoplanets will be a major
scientific driver over the next decade, involving the development of very large
telescopes and requiring high-contrast imaging close to the optical axis. Some
complex techniques have been developed to improve the performance at small
separations (coronagraphy, wavefront shaping, etc.). In this paper, we study
some of the fundamental limitations of high contrast at the instrument design
level, for cases that use a combination of a coronagraph and two deformable
mirrors for wavefront shaping. In particular, we focus on small-separation
point-source imaging (around 1 λ/D). First, we analytically or
semi-analytically analyse the impact of several instrument design parameters:
actuator number, deformable mirror locations, and optical aberrations (level
and frequency distribution). Second, we develop an in-depth Monte Carlo
simulation to compare the performance of dark hole correction using a generic
test-bed model, testing the Fresnel propagation of multiple randomly generated
static optical phase errors. We demonstrate that imaging at small separations
requires a large setup and a small dark hole. The performance is sensitive to
the level and spatial-frequency distribution of the optical aberrations but
shows a weak dependence on actuator number or setup architecture when the dark
hole is sufficiently small (from 1 to 5 λ/D).
Comment: 13 pages, 18 figures
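For orientation on the quantities above: a deformable mirror with N actuators across the pupil can, at best, control spatial frequencies up to N/2 cycles per pupil, i.e. a dark hole extending to roughly N/2 λ/D. A toy helper (hypothetical names, ignoring coronagraph-specific effects and chromaticity):

```python
def dark_hole_band(n_actuators_across_pupil, inner_working_angle):
    """Correctable region in units of lambda/D: from the coronagraph's
    inner working angle out to the DM Nyquist limit of N/2 lambda/D."""
    return inner_working_angle, n_actuators_across_pupil / 2.0

def covers(separation, band):
    """True if a target separation (in lambda/D) falls inside the band."""
    inner, outer = band
    return inner <= separation <= outer
```

This is why a small dark hole (a few λ/D) depends only weakly on actuator count: even a modest DM controls a field far wider than the hole being dug.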
An investigation of new methods for estimating parameter sensitivities
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
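The differencing idea can be sketched generically: estimate the derivative of the optimal objective value with respect to a problem parameter by re-solving the inner minimization at perturbed parameter values. This is a crude unconstrained stand-in for the RQP-based scheme, with hypothetical names:

```python
def minimize_1d(f, x0=0.0, lr=0.1, iters=500):
    """Crude 1-D gradient descent with a numerical gradient
    (stand-in for a real optimizer such as RQP)."""
    x = x0
    for _ in range(iters):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= lr * g
    return x

def sensitivity(obj, p, h=1e-4):
    """Central-difference estimate of d f*(p) / d p, where f*(p) is the
    optimal objective value of the inner problem at parameter p."""
    def fstar(q):
        x = minimize_1d(lambda x: obj(x, q))
        return obj(x, q)
    return (fstar(p + h) - fstar(p - h)) / (2 * h)
```

Each sensitivity estimate here costs two full re-solves; the appeal of the RQP-based approach described above is reusing information from the original solve to avoid that cost.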