Output-input stability and minimum-phase nonlinear systems
This paper introduces and studies the notion of output-input stability, which
represents a variant of the minimum-phase property for general smooth nonlinear
control systems. The definition of output-input stability does not rely on a
particular choice of coordinates in which the system takes a normal form or on
the computation of zero dynamics. In the spirit of the ``input-to-state
stability'' philosophy, it requires the state and the input of the system to be
bounded by a suitable function of the output and derivatives of the output,
modulo a decaying term depending on initial conditions. The class of
output-input stable systems thus defined includes all affine systems in global
normal form whose internal dynamics are input-to-state stable and also all
left-invertible linear systems whose transmission zeros have negative real
parts. As an application, we explain how the new concept enables one to develop
a natural extension to nonlinear systems of a basic result from linear adaptive
control.

Comment: Revised version, to appear in IEEE Transactions on Automatic Control.
See related work in http://www.math.rutgers.edu/~sontag and
http://black.csl.uiuc.edu/~liberzo
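
A minimal sketch of the defining bound described above, in the spirit of ISS estimates; the comparison functions β ∈ KL, γ ∈ K∞ and the derivative order N are notational assumptions for illustration, since the abstract does not fix them:

```latex
% Output-input stability (sketch; \beta, \gamma, N are assumed notation):
% for some \beta \in \mathcal{KL}, \gamma \in \mathcal{K}_\infty, N \ge 0,
|x(t)| + |u(t)| \;\le\;
  \beta\bigl(|x(0)|, t\bigr)
  + \gamma\Bigl(\bigl\|\bigl(y, \dot y, \ldots, y^{(N)}\bigr)\bigr\|_{[0,t]}\Bigr)
```

The γ-term captures the bound by "the output and derivatives of the output," and the β-term is the decaying contribution of the initial condition.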
Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario
A variety of methods is available to quantify uncertainties arising within
the modeling of flow and transport in carbon dioxide storage, but there is a
lack of thorough comparisons. Usually, raw data from such storage sites can
hardly be described by theoretical statistical distributions since only very
limited data is available. Hence, exact information on distribution shapes for
all uncertain parameters is very rare in realistic applications. We discuss and
compare four different methods tested for data-driven uncertainty
quantification based on a benchmark scenario of carbon dioxide storage. In the
benchmark, for which we provide data and code, carbon dioxide is injected into
a saline aquifer modeled by the nonlinear capillarity-free fractional flow
formulation for two incompressible fluid phases, namely carbon dioxide and
brine. To cover different aspects of uncertainty quantification, we incorporate
various sources of uncertainty such as uncertainty of boundary conditions, of
conceptual model definitions and of material properties. We consider recent
versions of the following non-intrusive and intrusive uncertainty
quantification methods: arbitrary polynomial chaos, spatially adaptive sparse
grids, kernel-based greedy interpolation and hybrid stochastic Galerkin. The
performance of each approach is demonstrated by assessing the expectation value and
standard deviation of the carbon dioxide saturation against a reference
statistic based on Monte Carlo sampling. We compare the convergence of all
methods, reporting on accuracy with respect to the number of model runs and
resolution. Finally, we offer suggestions about the methods' advantages and
disadvantages that can guide the modeler in uncertainty quantification for
carbon dioxide storage and beyond.
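
The Monte Carlo reference statistic described above can be sketched as follows. The response surface, parameter names, and distributions here are assumptions for illustration; the actual benchmark solves a nonlinear capillarity-free fractional-flow problem for the CO2 saturation, which is far too heavy to inline:

```python
import numpy as np

# Hypothetical toy response standing in for the benchmark's CO2 saturation
# model (the real model is a two-phase flow simulation, not this formula).
def saturation(perm, poro):
    return 1.0 / (1.0 + np.exp(-(perm + poro)))  # bounded in (0, 1)

rng = np.random.default_rng(0)
n = 10_000
perm = rng.normal(0.0, 0.5, n)    # uncertain parameter 1 (assumed distribution)
poro = rng.uniform(-0.5, 0.5, n)  # uncertain parameter 2 (assumed distribution)

# Monte Carlo estimates of expectation and standard deviation, the reference
# statistics against which the other UQ methods are compared.
samples = saturation(perm, poro)
mean = samples.mean()
std = samples.std(ddof=1)
print(f"E[S] ~ {mean:.3f}, Std[S] ~ {std:.3f}")
```

Monte Carlo converges at the slow O(n^-1/2) rate but makes no smoothness or distributional assumptions, which is what makes it a natural reference for data-driven settings with scarce data.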
Adaptive Predictive Control Using Neural Network for a Class of Pure-feedback Systems in Discrete-time
IEEE Transactions on Neural Networks, 19(9), 1599-1614. DOI: 10.1109/TNN.2008.2000446
Intrinsic gain modulation and adaptive neural coding
In many cases, the computation of a neural system can be reduced to a
receptive field, or a set of linear filters, and a thresholding function, or
gain curve, which determines the firing probability; this is known as a
linear/nonlinear model. In some forms of sensory adaptation, these linear
filters and gain curve adjust very rapidly to changes in the variance of a
randomly varying driving input. An apparently similar but previously unrelated
issue is the observation of gain control by background noise in cortical
neurons: the slope of the firing rate vs current (f-I) curve changes with the
variance of background random input. Here, we show a direct correspondence
between these two observations by relating variance-dependent changes in the
gain of f-I curves to characteristics of the changing empirical
linear/nonlinear model obtained by sampling. In the case that the underlying
system is fixed, we derive expressions relating the change in gain with
respect to both mean and variance to the receptive fields obtained from
reverse correlation on a white noise stimulus. Using two conductance-based
model neurons that display distinct gain modulation properties through a simple
change in parameters, we show that coding properties of both these models
quantitatively satisfy the predicted relationships. Our results describe how
both variance-dependent gain modulation and adaptive neural computation result
from intrinsic nonlinearity.

Comment: 24 pages, 4 figures, 1 supporting information
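
The linear/nonlinear model and reverse-correlation procedure described above can be sketched as follows. The kernel shape, threshold, and slope values are assumptions for illustration, not taken from the paper; for a Gaussian white-noise stimulus, Bussgang's theorem guarantees the spike-triggered average is proportional to the linear filter:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear filter (receptive field); the specific kernel is an assumption.
lags = np.arange(20)
k = np.exp(-lags / 4.0) - 0.5 * np.exp(-lags / 8.0)
k /= np.linalg.norm(k)

# Gaussian white-noise stimulus passed through the linear stage.
T = 200_000
s = rng.normal(0.0, 1.0, T)
g = np.convolve(s, k, mode="full")[:T]

# Nonlinear stage: sigmoidal gain curve giving the firing probability.
theta, beta = 1.0, 0.3  # threshold and inverse slope (assumed values)
p = 1.0 / (1.0 + np.exp(-(g - theta) / beta))
spikes = rng.random(T) < p

# Reverse correlation: the spike-triggered average recovers the filter
# (up to scale) because the stimulus is Gaussian.
idx = np.nonzero(spikes)[0]
idx = idx[idx >= len(k) - 1]          # keep spikes with a full stimulus history
sta = s[idx[:, None] - lags].mean(axis=0)

corr = np.corrcoef(sta, k)[0, 1]
print(f"spikes: {idx.size}, STA-filter correlation: {corr:.3f}")
```

Sampling the empirical linear/nonlinear model in this way is what lets variance-dependent changes in the f-I slope be read off from changes in the recovered filter and gain curve.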