Leitmann's direct method for fractional optimization problems
Based on a method introduced by Leitmann [Internat. J. Non-Linear Mech. 2 (1967), 55-59], we exhibit exact solutions for some fractional optimization problems of the calculus of variations and optimal control.
Comment: Submitted June 16, 2009 and accepted March 15, 2010 for publication in Applied Mathematics and Computation
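For orientation, here is a brief sketch of Leitmann's direct method in its classical (non-fractional) form; the fractional generalization in the paper builds on the same equivalence idea, and the formulation below is a standard textbook statement rather than a quotation from the paper.

```latex
% Leitmann's direct method, sketched in its classical form.  Minimize
\[
  J[x] = \int_a^b L\bigl(t, x(t), \dot x(t)\bigr)\,dt,
  \qquad x(a) = x_a, \quad x(b) = x_b .
\]
% Choose a transformation $x = z(t, \tilde x)$ and a function $G(t, \tilde x)$ such that
\[
  L\bigl(t, z(t,\tilde x), \tfrac{d}{dt}\, z(t,\tilde x)\bigr)
  - \tilde L\bigl(t, \tilde x, \dot{\tilde x}\bigr)
  = \frac{d}{dt}\, G(t, \tilde x)
\]
% holds along all admissible trajectories.  Then $J$ and $\tilde J$ differ only by
% fixed boundary terms, so a minimizer $\tilde x^*$ of the (simpler) transformed
% problem, often found by inspection, yields an exact minimizer
% $x^*(t) = z(t, \tilde x^*(t))$ of the original problem.
```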
Optimization of Linear Differential Systems by Lyapunov's Direct Method
Two approaches to solving optimization problems for dynamic systems are well known. The first seeks a fixed (program) control under which the system, described by differential equations, reaches a prescribed state while minimizing an integral quality criterion; proposed by L.S. Pontryagin, this method was in essence a further development of general optimization methods for dynamical systems. The second seeks a control in feedback form that simultaneously makes the zero solution asymptotically stable and brings an integral quality criterion to a minimum; it is based on Lyapunov's second (direct) method and was founded by N.N. Krasovskii. In this paper, the latter method is applied to linear differential equations and systems with integral quality criteria.
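As a concrete illustration of the feedback-synthesis viewpoint described above, the sketch below works out the standard linear-quadratic case, where the optimal feedback and a quadratic Lyapunov function come from the algebraic Riccati equation. The system matrices are illustrative placeholders, and this is the textbook construction rather than the specific derivation given in the paper.

```python
# Sketch: Lyapunov/Riccati-based feedback synthesis for a linear system
#   x' = A x + B u,   J = integral_0^inf (x^T Q x + u^T R u) dt .
# The matrices below are illustrative; this is the standard LQR construction,
# not a reproduction of the method developed in the paper.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])      # system matrix (example values)
B = np.array([[0.0],
              [1.0]])             # input matrix (example values)
Q = np.eye(2)                     # state weight in the integral criterion
R = np.array([[1.0]])             # control weight

# P solves A^T P + P A - P B R^{-1} B^T P + Q = 0; V(x) = x^T P x is a
# Lyapunov function for the closed-loop system.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal feedback gain: u = -K x

# The closed-loop matrix A - B K must be Hurwitz (asymptotic stability).
eigvals = np.linalg.eigvals(A - B @ K)
print("feedback gain K:", K)
print("closed-loop eigenvalues:", eigvals)
```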
Direct sampling method for anomaly imaging from S-parameter
In this paper, we develop a fast imaging technique for small anomalies
located in homogeneous media from S-parameter data measured at dipole antennas.
Based on the representation of the S-parameters in the presence of an anomaly, we design a direct sampling method (DSM) for imaging the anomaly and establish a relationship between the DSM indicator function and an infinite series of Bessel functions of integer order. Simulation results with synthetic data at an operating frequency of f = 1 GHz are presented to support the identified structure of the indicator function.
Comment: 6 pages, 6 figures
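To make the flavour of such an indicator concrete, the following sketch evaluates a generic direct sampling indicator on a search grid. The antenna layout, the test function exp(ik|r_n - z|), and the placeholder data are illustrative assumptions, not the exact S-parameter formulation analysed in the paper.

```python
# Sketch of a generic direct sampling indicator on a 2-D search grid.
# The test function and the single-frequency setup are illustrative
# assumptions, not the exact formulation derived in the paper.
import numpy as np

c0 = 299792458.0
f = 1e9                          # operating frequency, 1 GHz
k = 2 * np.pi * f / c0           # background wavenumber

# Hypothetical antenna locations (N dipoles on a circle) and measured data S
# (random placeholders standing in for S-parameters).
N = 16
theta = 2 * np.pi * np.arange(N) / N
antennas = 0.2 * np.c_[np.cos(theta), np.sin(theta)]
S = np.random.randn(N) + 1j * np.random.randn(N)   # placeholder measurements

def indicator(z, S, antennas, k):
    """DSM-style indicator: normalized inner product of the data with a
    test vector evaluated at the sampling point z."""
    dist = np.linalg.norm(antennas - z, axis=1)
    phi = np.exp(1j * k * dist)                      # illustrative test function
    return abs(np.vdot(phi, S)) / (np.linalg.norm(phi) * np.linalg.norm(S))

# Evaluate the indicator over a coarse search grid; peaks suggest anomaly locations.
xs = np.linspace(-0.1, 0.1, 41)
I = np.array([[indicator(np.array([x, y]), S, antennas, k) for x in xs] for y in xs])
print("indicator max:", I.max())
```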
Cycle-based Cluster Variational Method for Direct and Inverse Inference
We elaborate on the idea that loop corrections to belief propagation on pairwise Markov random fields can be handled systematically by using the elements of a cycle basis to define regions in a generalized belief propagation setting. The region graph is specified so as to avoid dual loops as much as possible, by discarding redundant Lagrange multipliers in order to facilitate convergence, while avoiding the instabilities associated with minimal factor-graph constructions. We end up with a two-level algorithm in which a belief propagation algorithm is run alternately at the level of each cycle and at the inter-region level. The inverse problem of finding the couplings of a Markov random field from empirical covariances can then be addressed region by region. It turns out that this can be done efficiently, in particular in the Ising context, where fixed-point equations can be derived along with a one-parameter log-likelihood function to minimize. Numerical experiments confirm the effectiveness of these considerations for both direct and inverse MRF inference.
Comment: 47 pages, 16 figures
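For reference, the sketch below implements plain loopy belief propagation (sum-product with damping) on a small pairwise Ising model. This is only the baseline message-passing scheme that the cycle-based region construction refines, not the two-level algorithm proposed in the paper.

```python
# Sketch: plain loopy belief propagation (sum-product) on a pairwise Ising MRF
# with factors exp(h_i s_i) and exp(J_ij s_i s_j), spins s_i in {-1, +1}.
# This is the baseline scheme, NOT the cycle-based two-level algorithm.
import numpy as np

def loopy_bp(J, h, n_iter=100, damping=0.5):
    """J: symmetric coupling matrix, h: local fields.
    Returns approximate single-spin marginals P(s_i = +1)."""
    n = len(h)
    spins = np.array([-1.0, 1.0])
    edges = [(i, j) for i in range(n) for j in range(n) if i != j and J[i, j] != 0]
    m = {e: np.ones(2) / 2 for e in edges}   # message i -> j, indexed by state of s_j
    for _ in range(n_iter):
        new_m = {}
        for (i, j) in edges:
            # product of incoming messages to i from neighbours other than j,
            # indexed by the state of s_i
            neigh = [m[(k, t)] for (k, t) in edges if t == i and k != j]
            incoming = np.prod(neigh, axis=0) if neigh else np.ones(2)
            out = np.array([
                np.sum(np.exp(h[i] * spins + J[i, j] * spins * sj) * incoming)
                for sj in spins
            ])
            out /= out.sum()
            new_m[(i, j)] = damping * m[(i, j)] + (1 - damping) * out
        m = new_m
    # node beliefs: local factor times all incoming messages, then normalize
    marginals = np.zeros(n)
    for i in range(n):
        b = np.exp(h[i] * spins)
        for (k, t) in edges:
            if t == i:
                b = b * m[(k, i)]
        marginals[i] = b[1] / b.sum()        # P(s_i = +1)
    return marginals

# Tiny example: a frustrated 4-cycle with weak local fields.
J = np.zeros((4, 4))
for (i, j, w) in [(0, 1, 0.5), (1, 2, 0.5), (2, 3, 0.5), (3, 0, -0.5)]:
    J[i, j] = J[j, i] = w
h = np.array([0.1, -0.2, 0.0, 0.3])
print(loopy_bp(J, h))
```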
