Fixation, transient landscape and diffusion's dilemma in stochastic evolutionary game dynamics
Agent-based stochastic models for finite populations have recently received
much attention in the game theory of evolutionary dynamics. Both the ultimate
fixation and the pre-fixation transient behavior are important to a full
understanding of the dynamics. In this paper, we study the transient dynamics
of the well-mixed Moran process by constructing a landscape function. It is
shown that the landscape serves as a central theoretical "device" that
integrates several lines of inquiry: the stable behavior of the replicator
dynamics, the long-time fixation, and the continuous diffusion approximation
associated with an asymptotically large population. Several issues relating to
the transient dynamics are discussed: (i) the multiple-time-scale phenomenon
associated with intra- and inter-attractoral dynamics; (ii) a discontinuous
transition in the stochastically stationary process, akin to the Maxwell
construction in equilibrium statistical physics; and (iii) the dilemma that the
diffusion approximation faces as a continuous approximation of the discrete
evolutionary dynamics. It is found that rare events with exponentially small
probabilities, corresponding to uphill movements and barrier crossings in a
landscape with multiple wells made possible by strongly nonlinear dynamics,
play an important role in understanding the origin of complexity in
evolutionary, nonlinear biological systems.
Comment: 34 pages, 4 figures
An optimal-control based integrated model of supply chain
Problems of supply chain scheduling are challenged by high complexity, a combination of continuous and discrete processes, integrated production and transportation operations, as well as dynamics and the resulting requirements for adaptability and stability analysis. Modern control theory, and optimal program control in particular, opens a possibility to address the above-named issues. Based on a combination of fundamental results of modern optimal program control theory and operations research, an original approach to supply chain scheduling is developed in order to answer the challenges of complexity, dynamics, uncertainty, and adaptivity. Supply chain schedule generation is represented as an optimal program control problem in combination with mathematical programming and interpreted as a dynamic process of operations control within an adaptive framework. The calculation procedure is based on applying Pontryagin’s maximum principle and the resulting essential reduction in the dimensionality of the problem solved at each instant of time. With the developed model, important categories of supply chain analysis such as stability and adaptability can be taken into consideration. Besides, the dimensionality of operations research-based problems can be relieved by distributing model elements between an operations research model (static aspects) and a control model (dynamic aspects). In addition, operations control and flow control models are integrated and applicable to both discrete and continuous processes.
Keywords: supply chain; model of supply chain scheduling; optimal program control theory; Pontryagin’s maximum principle; operations research model
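As an illustration of the kind of calculation Pontryagin's maximum principle enables, the sketch below solves a deliberately simple scalar linear-quadratic problem by shooting on the unknown initial costate. It is a toy stand-in under stated assumptions, not the supply-chain model itself:

```python
import math

# Toy Pontryagin example (not the paper's model): minimize
#   J = integral_0^T (x^2 + u^2) dt   subject to   dx/dt = u,  x(0) = x0.
# The Hamiltonian H = -(x^2 + u^2) + lam*u gives the stationarity
# condition u* = lam/2, the costate equation dlam/dt = -dH/dx = 2x,
# and the transversality condition lam(T) = 0.
T, x0, n = 1.0, 1.0, 20_000
dt = T / n

def shoot(lam0):
    """Integrate state and costate forward with Euler; return lam(T)."""
    x, lam = x0, lam0
    for _ in range(n):
        u = lam / 2.0                  # optimal control from dH/du = 0
        x, lam = x + dt * u, lam + dt * 2.0 * x
    return lam

# Bisection on the unknown initial costate lam(0) so that lam(T) = 0.
lo, hi = -10.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if shoot(mid) > 0.0:
        hi = mid
    else:
        lo = mid
lam0 = (lo + hi) / 2.0

# Closed-form check: x(t) = x0*cosh(t-T)/cosh(T), hence lam(0) = 2*x'(0).
exact = -2.0 * x0 * math.sinh(T) / math.cosh(T)
print(f"shooting lam(0) = {lam0:.4f}, exact = {exact:.4f}")
```

The "essential reduction of dimensionality" mentioned in the abstract shows up here in miniature: at each instant only the pointwise maximization of the Hamiltonian over u is solved, rather than a global optimization over the whole control trajectory.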
Predictability, complexity and learning
We define {\em predictive information} $I_{\rm pred}(T)$ as the mutual
information between the past and the future of a time series. Three
qualitatively different behaviors are found in the limit of large observation
times $T$: $I_{\rm pred}(T)$ can remain finite, grow logarithmically, or grow
as a fractional power law. If the time series allows us to learn a model with a
finite number of parameters, then $I_{\rm pred}(T)$ grows logarithmically with
a coefficient that counts the dimensionality of the model space. In contrast,
power-law growth is associated, for example, with the learning of infinite
parameter (or nonparametric) models such as continuous functions with
smoothness constraints. There are connections between the predictive
information and measures of complexity that have been defined both in learning
theory and in the analysis of physical systems through statistical mechanics
and dynamical systems theory. Further, in the same way that entropy provides
the unique measure of available information consistent with some simple and
plausible conditions, we argue that the divergent part of $I_{\rm pred}(T)$
provides the unique measure for the complexity of dynamics underlying a time
series. Finally, we discuss how these ideas may be useful in different problems
in physics, statistics, and biology.
Comment: 53 pages, 3 figures, 98 references, LaTeX2
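The definition of predictive information can be made concrete with a plug-in estimator. The sketch below (an illustration, not the paper's method) estimates the mutual information between past and future windows of k symbols; a first-order Markov chain is assumed as the example process, for which the predictive information saturates at a finite value, while an i.i.d. sequence gives essentially zero:

```python
import math
import random
from collections import Counter

def entropy(counts, n):
    """Plug-in Shannon entropy in bits from a Counter of outcomes."""
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def predictive_information(seq, k):
    """Plug-in estimate of I(past_k; future_k) = H(past)+H(future)-H(joint)."""
    pasts, futures, joints = Counter(), Counter(), Counter()
    n = 0
    for t in range(k, len(seq) - k):
        p = tuple(seq[t - k:t])
        f = tuple(seq[t:t + k])
        pasts[p] += 1
        futures[f] += 1
        joints[(p, f)] += 1
        n += 1
    return entropy(pasts, n) + entropy(futures, n) - entropy(joints, n)

rng = random.Random(1)
stay = 0.9  # probability of repeating the previous symbol (sticky chain)
markov, s = [], 0
for _ in range(100_000):
    s = s if rng.random() < stay else 1 - s
    markov.append(s)
iid = [rng.randint(0, 1) for _ in range(100_000)]

I_markov = predictive_information(markov, 3)
I_iid = predictive_information(iid, 3)
print(f"I_pred, Markov chain (k=3): {I_markov:.3f} bits")
print(f"I_pred, i.i.d. coin  (k=3): {I_iid:.3f} bits")
```

For this first-order chain the theoretical value is H(X_t) − H(X_t | X_{t−1}) = 1 − H(0.9) ≈ 0.53 bits, independent of the window length k; this "remains finite" case is the first of the three behaviors listed in the abstract.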
Chaotic dynamics and bifurcation in a macro model
The qualitative dynamics of a discrete time version of a deterministic, continuous time, nonlinear macro model formulated by Haavelmo are fully characterized. Recently developed methods of symbolic dynamics and ergodic theory are shown to provide a simple, effective means of analyzing the behavior of the resulting one-parameter family of first-order, deterministic, nonlinear difference equations. A complex periodic and random “aperiodic” orbit structure dependent on a key structural parameter is present, which contrasts with the total absence of such complexity in Haavelmo’s continuous time version. Several implications for dynamic economic modelling are discussed.
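The coexistence of periodic and "aperiodic" orbits in a one-parameter family of first-order nonlinear difference equations is easy to exhibit numerically. The sketch below uses the logistic map as a generic stand-in (an assumption for illustration; it is not Haavelmo's model):

```python
# Illustrative one-parameter family: the logistic map x -> r*x*(1-x),
# standing in for the first-order nonlinear difference equations above.
def orbit(r, x0=0.2, burn=1000, n=32):
    """Discard a transient of `burn` iterates, then record `n` iterates."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(round(x, 6))
    return out

periodic = orbit(3.2)  # parameter in the periodic regime: a 2-cycle
chaotic = orbit(3.9)   # parameter in the chaotic regime: no short cycle

print("r=3.2 distinct values after transient:", sorted(set(periodic)))
print("r=3.9 distinct values after transient:", len(set(chaotic)))
```

Sweeping the parameter r traces out the familiar bifurcation cascade, which is the kind of parameter-dependent orbit structure the symbolic-dynamics and ergodic-theory methods in the paper characterize.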
Decentralized adaptive neural network control of interconnected nonlinear dynamical systems with application to power system
Traditional nonlinear techniques are not directly applicable to the control of large-scale interconnected nonlinear dynamic systems due to their sheer size and the unavailability of system dynamics. Therefore, in this dissertation, the decentralized adaptive neural network (NN) control of a class of nonlinear interconnected dynamic systems is introduced, and its application to power systems is presented in the form of six papers. In the first paper, a new nonlinear dynamical representation in the form of a large-scale interconnected system for a power network free of algebraic equations, with multiple UPFCs as nonlinear controllers, is presented. Then, oscillation damping for UPFCs using adaptive NN control is discussed by assuming that the system dynamics are known. Subsequently, the dynamic surface control (DSC) framework is proposed in continuous time, not only to overcome the need for the subsystem dynamics and interconnection terms, but also to relax the explosion-of-complexity problem normally observed in traditional backstepping. The application of DSC-based decentralized control of a power system with excitation control is shown in the third paper. On the other hand, a novel adaptive NN-based decentralized controller for a class of interconnected discrete-time systems with unknown subsystem and interconnection dynamics is introduced, since discrete time is preferred for implementation. The application of the decentralized controller is shown on a power network. Next, a near-optimal decentralized discrete-time controller is introduced in the fifth paper for such systems in affine form, whereas the sixth paper proposes a method for obtaining the L2-gain near-optimal control while keeping a tradeoff between accuracy and computational complexity. Lyapunov theory is employed to assess the stability of the controllers --Abstract, page iv
Pricing Inflation and Interest Rates Derivatives with Macroeconomic Foundations
I develop a model to price inflation and interest rates derivatives using continuous-time dynamics linked to monetary macroeconomic models: in this approach the reaction function of the central bank, the bond market liquidity, and expectations play an important role. The model explains the effects of non-standard monetary policies (like quantitative easing or its tapering) on derivatives pricing.
A first adaptation of the discrete-time macroeconomic DSGE model is proposed, and some changes are made to use it for pricing: this is respectful of the original model, but it soon becomes clear that moving to continuous time brings significant benefits.
The continuous-time model is built with no-arbitrage assumptions and economic hypotheses that are inspired by the DSGE model. Interestingly, in the proposed model the short rates dynamics follow a time-varying Hull-White model, which simplifies the calibration. This result is significant from a theoretical perspective as it links the new theory proposed to a well-established model. Further, I obtain closed forms for zero-coupon and year-on-year inflation payoffs. The calibration process is fully separable, which means that it is carried out in many simple steps that do not require intensive computation.
The advantages of this approach become apparent when doing risk analysis on inflation derivatives: because the model explicitly takes into account economic variables, a trader can assess the impact of a change in central bank policy on a complex book of fixed income instruments, which is not straightforward when using standard models.
The analytical tractability of the model makes it a candidate to tackle more complex problems, like inflation skew and counterparty/funding valuation adjustments (known by practitioners as XVA): both problems are interesting from a theoretical and an applied point of view, and, given their computational complexity, benefit from a tractable model.
In both cases the results are promising.
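The analytical tractability claimed above rests on the affine structure of Hull-White short-rate dynamics, under which zero-coupon bonds have closed-form prices. The sketch below (illustrative parameters and a constant theta, not the paper's calibration) checks the closed form against a Monte-Carlo discount factor:

```python
import math
import random

# Hull-White short rate dr = (theta(t) - a*r) dt + sigma dW.  With a
# constant theta this is Vasicek with long-run level b = theta/a, and the
# zero-coupon bond has the affine closed form P(0,T) = A(T)*exp(-B(T)*r0).
# All parameter values below are hypothetical, chosen for illustration.
a, sigma, r0, theta, T = 0.1, 0.01, 0.02, 0.003, 1.0
b = theta / a

B = (1.0 - math.exp(-a * T)) / a
A = math.exp((b - sigma**2 / (2 * a**2)) * (B - T) - sigma**2 * B**2 / (4 * a))
closed_form = A * math.exp(-B * r0)

# Monte-Carlo check: P(0,T) = E[exp(-integral_0^T r dt)] under the
# risk-neutral measure, via Euler discretization of the short rate.
rng = random.Random(42)
steps, paths = 100, 10_000
dt = T / steps
total = 0.0
for _ in range(paths):
    r, integral = r0, 0.0
    for _ in range(steps):
        integral += r * dt
        r += (theta - a * r) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    total += math.exp(-integral)
mc_price = total / paths
print(f"closed form: {closed_form:.5f}, Monte Carlo: {mc_price:.5f}")
```

Because bond prices are available in closed form, calibration to the discount curve and to inflation payoffs can proceed in separable steps rather than through nested simulation, which is the computational advantage the abstract emphasizes.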
Topological generalization bounds for discrete-time stochastic optimization algorithms
We present a novel set of rigorous and computationally efficient topology-based complexity notions that exhibit a strong correlation with the generalization gap in modern deep neural networks (DNNs). DNNs show remarkable generalization properties, yet the source of these capabilities remains elusive, defying the established statistical learning theory. Recent studies have revealed that properties of training trajectories can be indicative of generalization. Building on this insight, state-of-the-art methods have leveraged the topology of these trajectories, particularly their fractal dimension, to quantify generalization. Most existing works compute this quantity by assuming continuous- or infinite-time training dynamics, complicating the development of practical estimators capable of accurately predicting generalization without access to test data. In this paper, we respect the discrete-time nature of training trajectories and investigate the underlying topological quantities that are amenable to topological data analysis tools. This leads to a new family of reliable topological complexity measures that provably bound the generalization error, eliminating the need for restrictive geometric assumptions. These measures are computationally friendly, enabling us to propose simple yet effective algorithms for computing generalization indices. Moreover, our flexible framework can be extended to different domains, tasks, and architectures. Our experimental results demonstrate that our new complexity measures correlate highly with generalization error in industry-standard architectures such as transformers and deep graph networks. Our approach consistently outperforms existing topological bounds across a wide range of datasets, models, and optimizers, highlighting the practical relevance and effectiveness of our complexity measures.
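As rough intuition for fractal dimension of a point set such as a set of training-trajectory iterates, the sketch below implements a generic box-counting estimator (an illustration only; it is not the paper's topological complexity measures or its discrete-time bounds):

```python
import math
import random

# Generic box-counting dimension estimator: count occupied eps-boxes at
# several scales and fit the slope of log N(eps) against log(1/eps).
def box_counting_dimension(points, scales):
    """Least-squares slope of log N(eps) vs log(1/eps) over the given scales."""
    xs, ys = [], []
    for eps in scales:
        boxes = {tuple(int(c // eps) for c in p) for p in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

rng = random.Random(0)
# Sanity checks: a filled 2-D square should give dimension near 2,
# a 1-D segment near 1.
square = [(rng.random(), rng.random()) for _ in range(20_000)]
line = [(t / 20_000, 0.5) for t in range(20_000)]
scales = [0.2, 0.1, 0.05, 0.025]

dim_square = box_counting_dimension(square, scales)
dim_line = box_counting_dimension(line, scales)
print(f"square: {dim_square:.2f}")
print(f"line:   {dim_line:.2f}")
```

Estimators of this kind degrade for finite discrete-time trajectories with few points per scale, which is precisely the regime where the paper argues for replacing fractal-dimension assumptions with discrete-time topological quantities.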