1,603 research outputs found
A Comparison between Fixed-Basis and Variable-Basis Schemes for Function Approximation and Functional Optimization
Fixed-basis and variable-basis approximation schemes are compared for the problems of function approximation and functional optimization (also known as infinite programming). Classes of problems are investigated for which variable-basis schemes with sigmoidal computational
units perform better than fixed-basis ones, in terms of the minimum number of computational units needed to achieve a desired error in function approximation or approximate optimization. Previously known bounds on the accuracy are extended, with better rates, to families o
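A minimal numerical sketch of the distinction, on a hypothetical 1-D target (none of the paper's constructions or rates): in a fixed-basis scheme only the outer weights are tuned, while a variable-basis scheme also adjusts the inner parameters of each sigmoidal unit (here fitted by crude random search).

```python
import numpy as np

# Hypothetical 1-D illustration: approximate f(x) = max(0, x) on [-1, 1]
# with n computational units under the two schemes.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 400)
f = np.maximum(0.0, x)
n = 6

# Fixed basis: the first n monomials; only the outer weights are free.
A_fixed = np.vander(x, n, increasing=True)
w_fixed, *_ = np.linalg.lstsq(A_fixed, f, rcond=None)
err_fixed = np.max(np.abs(A_fixed @ w_fixed - f))

# Variable basis: n sigmoidal units tanh(a*x + b) whose inner parameters
# (a, b) are also adjustable; here they are chosen by random search and
# the outer weights by least squares.
best_err = np.inf
for _ in range(200):
    a = rng.uniform(1.0, 40.0, n)
    b = rng.uniform(-40.0, 40.0, n)
    A_var = np.tanh(np.outer(x, a) + b)
    w, *_ = np.linalg.lstsq(A_var, f, rcond=None)
    best_err = min(best_err, np.max(np.abs(A_var @ w - f)))

print(f"fixed-basis sup error:    {err_fixed:.4f}")
print(f"variable-basis sup error: {best_err:.4f}")
```

The paper's comparison concerns worst-case rates over function classes, not a single target, so this sketch only shows the two parameterizations side by side.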
Lipschitz continuity of the solutions to team optimization problems revisited
Sufficient conditions for the existence and Lipschitz
continuity of optimal strategies for static team optimization problems are studied. Revised statements and proofs of some results in "Kim K.H., Roush F.W., Team Theory. Ellis Horwood Limited Publishers, Chichester, UK, 1987" are presented
Suboptimal solutions to network team optimization problems
Smoothness of the solutions to network team optimization problems with statistical information structure is investigated. Suboptimal solutions expressed as linear combinations of elements from sets of basis functions containing adjustable parameters are considered. Estimates of their accuracy are derived, for basis functions represented by sinusoids with variable frequencies and phases and
Gaussians with variable centers and widths
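An illustrative sketch of the second class of basis functions (a hypothetical 1-D target, not the papers' team-optimization setting): a function is approximated by a linear combination of Gaussian units; centers and widths are adjustable in the general scheme, though here they are fixed so the outer weights admit a closed-form least-squares fit.

```python
import numpy as np

# Hypothetical target "strategy" on [0, 1].
x = np.linspace(0.0, 1.0, 200)
g = np.sin(2 * np.pi * x) * np.exp(-x)

n = 8
centers = np.linspace(0.0, 1.0, n)   # adjustable in the general scheme;
widths = np.full(n, 0.15)            # fixed here for a closed-form fit

# Design matrix of Gaussian units exp(-(x - c_i)^2 / s_i^2).
Phi = np.exp(-((x[:, None] - centers) ** 2) / widths ** 2)
coef, *_ = np.linalg.lstsq(Phi, g, rcond=None)
sup_err = np.max(np.abs(Phi @ coef - g))
print(f"sup-norm error with {n} Gaussian units: {sup_err:.4f}")
```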
Estimates of the Approximation Error Using Rademacher Complexity: Learning Vector-Valued Functions
For certain families of multivariable vector-valued functions to be approximated, the accuracy of approximation schemes made up of linear combinations of computational units containing adjustable parameters is investigated. Upper bounds on the approximation error are derived that depend on the Rademacher complexities of the families. The estimates exploit possible relationships among the components of the multivariable vector-valued functions. All such components are approximated simultaneously, in such a way as to use, for a desired approximation accuracy, fewer computational units than componentwise approximation requires. An application to -stage optimization problems is discussed
Approximation Error Bounds via Rademacher's Complexity
Approximation properties of some connectionistic models, commonly used to construct approximation schemes for optimization problems with multivariable functions as admissible solutions, are investigated. Such models are made up of linear combinations of computational units
with adjustable parameters. The relationship between model complexity (number of computational units) and approximation error is investigated using tools from Statistical Learning Theory, such as Talagrand's
inequality, fat-shattering dimension, and Rademacher's complexity. For some families of multivariable functions, estimates of the approximation accuracy of models with certain computational units are derived as functions of the Rademacher complexities of the families. The
estimates improve previously available ones, which were expressed in terms of the VC dimension and derived by exploiting union-bound techniques. The results are applied to approximation schemes with certain radial-basis functions as computational units, for which it is shown that
the estimates do not exhibit the curse of dimensionality with respect to the number of variables
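The quantity driving these bounds can be estimated numerically. A small Monte Carlo sketch (a hypothetical linear class, not the radial-basis-function classes of the paper): for {x ↦ ⟨w, x⟩ : ‖w‖₂ ≤ 1} the supremum over the class has the closed form ‖(1/m) Σᵢ σᵢ xᵢ‖₂, and the estimate can be checked against the classical bound maxᵢ ‖xᵢ‖ / √m.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 200, 5
X = rng.normal(size=(m, d))          # hypothetical sample

# Empirical Rademacher complexity of {x -> <w, x> : ||w||_2 <= 1},
# estimated by averaging over random sign vectors.
draws = []
for _ in range(2000):
    sigma = rng.choice([-1.0, 1.0], size=m)   # Rademacher signs
    draws.append(np.linalg.norm(X.T @ sigma) / m)
rademacher = np.mean(draws)

# Classical bound for this class: max_i ||x_i||_2 / sqrt(m).
bound = np.max(np.linalg.norm(X, axis=1)) / np.sqrt(m)
print(f"estimate {rademacher:.4f} <= bound {bound:.4f}")
```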
Structural properties of optimal coordinate-convex policies for CAC with nonlinearly-constrained feasibility regions
Necessary optimality conditions for Call Admission Control (CAC) problems with nonlinearly-constrained feasibility regions and two classes of users are derived. The policies are restricted to the class of coordinate-convex policies. Two kinds of structural properties of the optimal policies and their robustness with respect to changes of the feasibility region are investigated: 1) general properties not depending on the revenue ratio associated with the two classes of users and 2) more specific properties depending on such a ratio. The results allow one to narrow the search for the optimal policies to a suitable subset of the set of coordinate-convex policies
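The defining property of the policy class above can be stated as a simple membership check (an illustrative sketch, not the paper's optimality conditions): a coordinate-convex policy is a set Ω of admissible states (n1, n2) such that decreasing either coordinate never leaves Ω.

```python
# Check coordinate-convexity of a candidate admissible region, given as a
# set of integer state pairs (n1, n2).
def is_coordinate_convex(omega):
    return all(
        (n1 == 0 or (n1 - 1, n2) in omega) and
        (n2 == 0 or (n1, n2 - 1) in omega)
        for (n1, n2) in omega
    )

# A "staircase" region is coordinate-convex ...
staircase = {(n1, n2) for n1 in range(5) for n2 in range(5) if n1 + 2 * n2 <= 6}
print(is_coordinate_convex(staircase))                  # True
# ... but removing an interior state breaks the property.
print(is_coordinate_convex(staircase - {(1, 1)}))       # False
```

Searching over such sets (rather than all subsets of the state space) is what the structural properties in the paper further narrow down.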
LQG Online Learning
Optimal control theory and machine learning techniques are combined to
formulate and solve in closed form an optimal control formulation of online
learning from supervised examples with regularization of the updates. The
connections with the classical Linear Quadratic Gaussian (LQG) optimal control
problem, of which the proposed learning paradigm is a non-trivial variation as
it involves random matrices, are investigated. The obtained optimal solutions
are compared with the Kalman-filter estimate of the parameter vector to be
learned. It is shown that the proposed algorithm is less sensitive to outliers
with respect to the Kalman estimate (thanks to the presence of the
regularization term), thus providing smoother estimates with respect to time.
The basic formulation of the proposed online-learning framework refers to a
discrete-time setting with a finite learning horizon and a linear model.
Various extensions are investigated, including the infinite learning horizon
and, via the so-called "kernel trick", the case of nonlinear models
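A toy sketch of the contrast discussed above (not the paper's LQG derivation; the parameter μ = 10 and the outlier magnitude are hypothetical choices): a scalar parameter w* is learned online from y_t = w* x_t + noise, and a Kalman-style recursive-least-squares estimate is compared with an update regularized toward the previous estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
w_true, T = 2.0, 100
x = rng.normal(size=T)
y = w_true * x + 0.1 * rng.normal(size=T)
y[5] += 25.0                          # one gross outlier early in the stream

# Recursive least squares (a Kalman filter for a constant parameter).
w_rls, p, rls_path = 0.0, 100.0, []
for t in range(T):
    k = p * x[t] / (p * x[t] ** 2 + 0.01)   # gain; 0.01 = noise variance
    w_rls += k * (y[t] - w_rls * x[t])
    p *= 1.0 - k * x[t]
    rls_path.append(w_rls)

# Regularized update: w_t minimizes (y_t - w x_t)^2 + mu (w - w_{t-1})^2,
# so the per-step correction is damped by the factor x_t / (x_t^2 + mu).
w_reg, mu, reg_path = 0.0, 10.0, []
for t in range(T):
    w_reg += x[t] * (y[t] - w_reg * x[t]) / (x[t] ** 2 + mu)
    reg_path.append(w_reg)

print(f"reaction to the outlier: RLS {abs(rls_path[5] - rls_path[4]):.3f}, "
      f"regularized {abs(reg_path[5] - reg_path[4]):.3f}")
print(f"final estimates: RLS {w_rls:.3f}, regularized {w_reg:.3f}")
```

The damping factor caps how far any single example can move the regularized estimate, which is the smoothing effect the abstract attributes to the regularization term.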
- …