
    Dynamical Behaviour in the Nonlinear Rheology of Surfactant Solutions

    Several surfactant molecules self-assemble in solution to form long, flexible wormlike micelles which get entangled with each other, leading to viscoelastic gel phases. We discuss our recent work on the rheology of such a gel formed in dilute aqueous solutions of the surfactant CTAT. In the linear rheology regime, the storage modulus G′(ω) and loss modulus G″(ω) have been measured over a wide frequency range. In the nonlinear regime, the shear stress σ shows a plateau as a function of the shear rate γ̇ above a certain cutoff shear rate γ̇_c. Under controlled shear rate conditions in the plateau regime, the shear stress and the first normal stress difference show oscillatory time dependence. The measured time series of shear stress and normal stress have been analyzed using several methods incorporating state-space reconstruction by embedding of time-delay vectors. The analysis shows the existence of a finite correlation dimension and a positive Lyapunov exponent, unambiguously implying that the dynamics of the observed mechanical instability can be described by a dynamical system with a strange attractor of dimension varying from 2.4 to 2.9. Comment: 12 pages, includes 7 eps figures
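    The delay-embedding analysis described above can be illustrated in a few lines of NumPy. This is a generic sketch of time-delay reconstruction together with the Grassberger-Procaccia correlation sum, applied to a synthetic periodic signal rather than rheological data; all parameter values (delay, embedding dimension, radii) are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def delay_embed(x, dim, tau):
        """Time-delay embedding: map a scalar series to vectors
        (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    def correlation_sum(Y, r):
        """Grassberger-Procaccia correlation sum: fraction of distinct
        point pairs closer than r."""
        d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
        off = ~np.eye(len(Y), dtype=bool)
        return np.mean(d[off] < r)

    # Synthetic limit cycle: a sampled sine wave (true attractor dimension 1).
    t = np.arange(0, 150, 0.073)       # non-resonant sampling step
    x = np.sin(t)
    Y = delay_embed(x, dim=3, tau=22)  # delay of roughly a quarter period
    Ys = Y[::3]                        # subsample to keep pairwise costs low

    # The slope of log C(r) vs log r estimates the correlation dimension.
    rs = np.array([0.2, 0.3, 0.45, 0.6])
    Cs = np.array([correlation_sum(Ys, r) for r in rs])
    slope = np.polyfit(np.log(rs), np.log(Cs), 1)[0]
    ```

    For a periodic signal the fitted slope sits near 1; the same construction, applied to the stress time series, is the kind of analysis that yields the fractal dimensions between 2.4 and 2.9 reported above.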

    Autoencoders for discovering manifold dimension and coordinates in data from complex dynamical systems

    While many phenomena in physics and engineering are formally high-dimensional, their long-time dynamics often live on a lower-dimensional manifold. The present work introduces an autoencoder framework that combines the implicit regularization of internal linear layers with L₂ regularization (weight decay) to automatically estimate the underlying dimensionality of a data set, produce an orthogonal manifold coordinate system, and provide the mapping functions between the ambient space and the manifold space, allowing for out-of-sample projections. We validate our framework's ability to estimate the manifold dimension for a series of datasets from dynamical systems of varying complexity and compare to other state-of-the-art estimators. We analyze the training dynamics of the network to glean insight into the mechanism of low-rank learning and find that the implicit regularizing layers collectively compound the low-rank representation and even self-correct during training. Analysis of the gradient descent dynamics for this architecture in the linear case reveals the role of the internal linear layers in producing faster decay of a "collective weight variable" incorporating all layers, and the role of weight decay in breaking degeneracies and thus driving convergence along directions in which no decay would occur in its absence. We show that this framework can be naturally extended to state-space modeling and forecasting by generating a data-driven dynamic model of a spatiotemporally chaotic partial differential equation using only the manifold coordinates. Finally, we demonstrate that our framework is robust to hyperparameter choices.
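    The linear-case mechanism described above can be sketched directly in NumPy: a minimal linear autoencoder with one internal linear layer, trained by plain gradient descent with weight decay on data lying on a 2-D subspace of a 10-D ambient space. The layer sizes, learning rate, and decay strength here are illustrative assumptions, not the paper's settings; the point is only that weight decay drives the end-to-end map to low rank, so counting its non-negligible singular values recovers the data dimension:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: 200 points on a 2-D linear subspace of R^10.
    U_true, _ = np.linalg.qr(rng.standard_normal((10, 2)))
    X = U_true @ rng.standard_normal((2, 200))        # shape (10, 200)

    # Linear autoencoder: encoder W1 (10->5), internal linear layer
    # W2 (5->5), decoder W3 (5->10).
    W1 = rng.standard_normal((5, 10)) / np.sqrt(10)
    W2 = rng.standard_normal((5, 5)) / np.sqrt(5)
    W3 = rng.standard_normal((10, 5)) / np.sqrt(5)

    lr, wd, n = 0.05, 5e-3, X.shape[1]
    for _ in range(8000):
        E = W3 @ W2 @ W1 @ X - X                      # reconstruction error
        G3 = (2 / n) * E @ (W2 @ W1 @ X).T + 2 * wd * W3
        G2 = (2 / n) * W3.T @ E @ (W1 @ X).T + 2 * wd * W2
        G1 = (2 / n) * (W3 @ W2).T @ E @ X.T + 2 * wd * W1
        W1, W2, W3 = W1 - lr * G1, W2 - lr * G2, W3 - lr * G3

    # Weight decay shrinks directions that carry no data signal, so the
    # end-to-end map becomes numerically rank-2; counting significant
    # singular values estimates the manifold dimension.
    s = np.linalg.svd(W3 @ W2 @ W1, compute_uv=False)
    dim_est = int((s > 0.2 * s[0]).sum())
    ```

    In this linear setting the role of the internal layer and of weight decay can be read off the singular-value spectrum directly; the nonlinear framework in the paper generalizes this picture to curved manifolds.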

    On dimension reduction in Gaussian filters

    A priori dimension reduction is a widely adopted technique for reducing the computational complexity of stationary inverse problems. In this setting, the solution of an inverse problem is parameterized by a low-dimensional basis that is often obtained from the truncated Karhunen-Loève expansion of the prior distribution. For high-dimensional inverse problems equipped with smoothing priors, this technique can lead to drastic reductions in parameter dimension and significant computational savings. In this paper, we extend the concept of a priori dimension reduction to non-stationary inverse problems, in which the goal is to sequentially infer the state of a dynamical system. Our approach proceeds in an offline-online fashion. We first identify a low-dimensional subspace in the state space before solving the inverse problem (the offline phase), using either the method of "snapshots" or regularized covariance estimation. This subspace is then used to reduce the computational complexity of various filtering algorithms - including the Kalman filter, extended Kalman filter, and ensemble Kalman filter - within a novel subspace-constrained Bayesian prediction-and-update procedure (the online phase). We demonstrate the performance of our new dimension reduction approach on various numerical examples. In some test cases, our approach reduces the dimensionality of the original problem by orders of magnitude and yields up to two orders of magnitude in computational savings.
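    A minimal sketch of the offline-online idea, assuming a linear-Gaussian model: the offline phase extracts a basis from simulation snapshots by SVD (the "method of snapshots"), and the online phase runs a standard Kalman filter entirely in the reduced coordinates. All dimensions, matrices, and noise levels below are invented for illustration and are not from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, r_true, m = 50, 3, 20          # state dim, true subspace dim, obs dim

    # True dynamics evolve inside a 3-D subspace of the 50-D state space.
    U, _ = np.linalg.qr(rng.standard_normal((n, r_true)))
    A = U @ np.diag([0.95, 0.9, 0.85]) @ U.T
    H = rng.standard_normal((m, n)) / np.sqrt(n)
    q, sig = 0.1, 0.05                # process / observation noise std

    def step(x):
        return A @ x + q * (U @ rng.standard_normal(r_true))

    # --- offline phase: method of snapshots ---
    x = U @ rng.standard_normal(r_true)
    snaps = []
    for _ in range(200):
        x = step(x)
        snaps.append(x)
    S = np.column_stack(snaps)
    Us, ss, _ = np.linalg.svd(S, full_matrices=False)
    r = int((ss > 1e-8 * ss[0]).sum())   # numerical rank -> reduced dimension
    Ur = Us[:, :r]

    # --- online phase: Kalman filter in reduced coordinates a = Ur^T x ---
    Ar, Hr = Ur.T @ A @ Ur, H @ Ur
    Qr = q**2 * Ur.T @ (U @ U.T) @ Ur
    R = sig**2 * np.eye(m)
    a, P = np.zeros(r), np.eye(r)
    errs, norms = [], []
    for _ in range(200):
        x = step(x)                                # truth
        y = H @ x + sig * rng.standard_normal(m)   # noisy observation
        a, P = Ar @ a, Ar @ P @ Ar.T + Qr          # predict
        Sm = Hr @ P @ Hr.T + R
        K = np.linalg.solve(Sm, Hr @ P).T          # gain K = P Hr^T Sm^-1
        a = a + K @ (y - Hr @ a)                   # update
        P = P - K @ Hr @ P
        errs.append(np.linalg.norm(Ur @ a - x))
        norms.append(np.linalg.norm(x))
    ```

    Every online matrix operation is r×r or m×r rather than n×n, which is where the computational savings come from when n is large; the paper's subspace-constrained procedure develops this idea rigorously, including for the extended and ensemble Kalman filters.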

    Equivalence of robust stabilization and robust performance via feedback

    One approach to robust control for linear plants with structured uncertainty, as well as for linear parameter-varying (LPV) plants (where the controller has on-line access to the varying plant parameters), is through linear-fractional-transformation (LFT) models. Control issues to be addressed by controller design in this formalism include robust stability and robust performance. Here robust performance is defined as the achievement of a uniform specified L²-gain tolerance for a disturbance-to-error map, combined with robust stability. By setting the disturbance and error channels equal to zero, it is clear that any criterion for robust performance also produces a criterion for robust stability. Counter-intuitively, as a consequence of the so-called Main Loop Theorem, application of a result on robust stability to a feedback configuration with an artificial full-block uncertainty operator added in feedback connection between the error and disturbance signals produces a result on robust performance. The main result here is that this performance-to-stabilization reduction principle must be handled with care for the case of dynamic feedback compensation: casual application of the principle leads to the solution of a physically uninteresting problem, where the controller is assumed to have access to the states in the artificially-added feedback loop. Application of the principle using a known, more refined dynamic-control robust stability criterion, where the user is allowed to specify controller partial-state dimensions, leads to correct robust-performance results. These latter results involve rank conditions in addition to Linear Matrix Inequality (LMI) conditions. Comment: 20 pages
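    The L²-gain tolerance at the heart of this robust-performance definition can be checked numerically for a concrete plant. The sketch below uses the standard Hamiltonian-matrix test for an L²-gain (H-infinity norm) bound of a stable state-space realization, not the paper's LFT/LMI machinery; the plant and the tolerance values are invented for illustration:

    ```python
    import numpy as np

    def gain_bound_violated(A, B, C, gamma, tol=1e-7):
        """Hamiltonian test: for stable A, ||C (sI-A)^-1 B||_inf < gamma
        iff the Hamiltonian matrix below has no purely imaginary
        eigenvalues. Returns True when such an eigenvalue exists,
        i.e. when gamma is NOT a valid L2-gain bound."""
        H = np.block([[A, B @ B.T / gamma**2],
                      [-C.T @ C, -A.T]])
        eigs = np.linalg.eigvals(H)
        return bool(np.any((np.abs(eigs.real) < tol)
                           & (np.abs(eigs.imag) > tol)))

    # Illustrative plant: G(s) = C (sI - A)^{-1} B = 1/(s + 1),
    # whose L2 gain is exactly 1, attained at omega = 0.
    A = np.array([[-1.0, 2.0], [0.0, -3.0]])
    B = np.array([[1.0], [0.0]])
    C = np.array([[1.0, 0.0]])

    # Cross-check the gain by sampling the frequency response on a grid.
    w = np.logspace(-3, 2, 400)
    hinf = max(abs((C @ np.linalg.inv(1j * wi * np.eye(2) - A) @ B)[0, 0])
               for wi in w)

    ok_12 = not gain_bound_violated(A, B, C, 1.2)   # 1.2 > gain: bound holds
    ok_08 = not gain_bound_violated(A, B, C, 0.8)   # 0.8 < gain: bound fails
    ```

    The LMI conditions in the paper play the same role as this eigenvalue test (certifying an L²-gain bound), but in a form that extends to structured uncertainty and to controller synthesis, where the additional rank conditions arise.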