Exact steady states of minimal models of nonequilibrium statistical mechanics
Systems out of equilibrium with their environment are ubiquitous in nature. Of particular relevance to biological applications are models in which each microscopic component spontaneously generates its own motion. Known collectively as active matter, such models are natural effective descriptions of many biological systems, from subcellular motors to flocks of birds. One would like to understand such phenomena using the tools of statistical mechanics, yet the inherent nonequilibrium setting means that the most powerful classical results of that field cannot be applied. This circumstance has fuelled interest in exactly solvable models of active matter. The aim in studying such models is twofold. Firstly, as exactly solvable models are often minimal, they are good candidates as generic coarse-grained descriptions of real-world processes. Secondly, even if the model in question does not correspond directly to some situation realizable in experiment, its exact solution may suggest general principles that could also apply to more complex phenomena.
A typical tool to investigate the properties of a large system is to study the behaviour of a probe particle placed in such an environment. In this context, cases of interest include both an active particle in a passive environment and an active particle in an active environment. One model that has attracted much attention in this regard is the asymmetric simple exclusion process (ASEP), which is a prototypical minimal model of driven diffusive transport. In this thesis, I consider two variations of the ASEP on a ring geometry. The first is a system of symmetrically diffusing particles with one totally asymmetric (driven) defect particle. The second is a system of partially asymmetric particles, with one defect that may overtake the other particles. I analyze the steady states of these systems using two exact methods: the matrix product ansatz, and, for the second model, the Bethe ansatz. This allows me to derive the exact density profiles and mean currents for these models, and, for the second model, the diffusion constant. Moreover, I use the Yang-Baxter formalism to study the general class of two-species partially asymmetric processes with overtaking. This allows me to determine conditions under which such models can be solved using the Bethe ansatz.
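The exclusion-process dynamics described above are straightforward to explore numerically. Below is a minimal sketch (not the thesis's exact two-species models) of a single-species totally asymmetric exclusion process on a ring with random sequential updates; all parameter names and values are illustrative assumptions.

```python
import random

def simulate_tasep_ring(L=100, N=50, steps=200_000, p=1.0, seed=0):
    """Monte Carlo sketch of a totally asymmetric exclusion process on a ring.

    occ[i] == 1 means site i is occupied; a particle hops clockwise with
    probability p when the target site is empty.  Returns the fraction of
    attempts that result in a hop, i.e. the measured bond current.
    """
    rng = random.Random(seed)
    occ = [1] * N + [0] * (L - N)
    rng.shuffle(occ)        # the uniform measure is stationary on the ring
    hops = 0
    for _ in range(steps):
        i = rng.randrange(L)            # random sequential update
        j = (i + 1) % L
        if occ[i] == 1 and occ[j] == 0 and rng.random() < p:
            occ[i], occ[j] = 0, 1
            hops += 1
    return hops / steps

# At density rho = N/L = 0.5 the exact steady-state current is
# N(L-N)/(L(L-1)) ≈ 0.25 per bond.
current = simulate_tasep_ring()
```

Because the stationary measure of the ring TASEP is uniform over configurations with fixed particle number, no burn-in is needed when starting from a shuffled configuration.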
Sublinear scaling in non-Markovian open quantum systems simulations
Funder: M.C. and E.M.G. acknowledge funding from EPSRC grant no. EP/T01377X/1. B.W.L. and J.K. were supported by EPSRC grant no. EP/T014032/1.
While several numerical techniques are available for predicting the dynamics of non-Markovian open quantum systems, most struggle with simulations for very long memory and propagation times, e.g., due to superlinear scaling with the number of time steps n. Here, we introduce a numerically exact algorithm to calculate process tensors—compact representations of environmental influences—which provides a scaling advantage over previous algorithms by leveraging self-similarity of the tensor networks that represent the environment. It is applicable to environments with Gaussian statistics, such as for spin-boson-type open quantum systems. Based on a divide-and-conquer strategy, our approach requires only O(n log n) singular value decompositions for environments with infinite memory. Where the memory can be truncated after n_c time steps, a nominal scaling O(n_c log n_c) is found, which is independent of n. This improved scaling is enabled by identifying process tensors with repeatable blocks. To demonstrate the power and utility of our approach, we provide three examples. (1) We calculate the fluorescence spectra of a quantum dot under both strong driving and strong dot-phonon couplings, a task requiring simulations over millions of time steps, which we are able to perform in minutes. (2) We efficiently find process tensors describing superradiance of multiple emitters. (3) We explore the limits of our algorithm by considering coherence decay with a very strongly coupled environment. The observed computation time is not necessarily proportional to the number of singular value decompositions because the matrix dimensions also depend on the number of time steps. Nevertheless, quasilinear and sublinear scaling of computation time is found in practice for a wide range of parameters.
While there are instances where existing methods can achieve comparable nominal scaling by precalculating effective propagators for time-independent or periodic system Hamiltonians, process tensors contain all the information needed to extract arbitrary multitime correlation functions of the system when driven by arbitrary time-dependent system Hamiltonians. The algorithm we present here not only significantly extends the scope of numerically exact techniques to open quantum systems with long memory times, but it also has fundamental implications for the simulation complexity of tensor network approaches.
Systemic Circular Economy Solutions for Fiber Reinforced Composites
This open access book provides an overview of the work undertaken within the FiberEUse project, which developed solutions enhancing the profitability of composite recycling and reuse in value-added products, with a cross-sectorial approach. Glass and carbon fiber reinforced polymers, or composites, are increasingly used as structural materials in many manufacturing sectors such as transport, construction, and energy, due to their lighter weight and better corrosion resistance compared to metals. However, composite recycling remains a challenge, since recycling and reprocessing of composites has yet to demonstrate significant added value. FiberEUse developed innovative solutions and business models towards sustainable Circular Economy solutions for post-use composite-made products. Three strategies are presented, namely mechanical recycling of short fibers, thermal recycling of long fibers, and modular car-part design for sustainable disassembly and remanufacturing. The validation of the FiberEUse approach within eight industrial demonstrators shows the potential for new Circular Economy value chains for composite materials.
Contributions to improve the technologies supporting unmanned aircraft operations
Mención Internacional en el título de doctor.
Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The perception of the environment is measured by sensors that have errors; the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows extending the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of new technologies that may emerge.
Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system, which provides the engine's propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller contains a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the ecosystem of the controller, so the study of each of them is essential.
On the other hand, among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered and estimated in the absence of observations. State Space Models (SSM) are developed based on a set of hypotheses for modeling the world. Among the assumptions are that the models of the world must be linear and Markovian, and that the error of those models must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to be able to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state, but there are other dependencies that state space models cannot handle.
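As a concrete illustration of the state-space estimation discussed above, here is a minimal scalar Kalman filter for a random-walk state observed in Gaussian noise; the model, noise variances, and seed are illustrative assumptions, not taken from the thesis.

```python
import random

def kalman_1d(zs, q=1e-3, r=0.5 ** 2, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in Gaussian noise.

    q: process-noise variance, r: measurement-noise variance.
    Returns the sequence of filtered state estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in zs:
        # predict: for a random-walk model the mean carries over,
        # while the uncertainty grows by the process noise
        p = p + q
        # update: blend prediction and measurement via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy observations of a constant true state of 1.0
rng = random.Random(1)
zs = [1.0 + rng.gauss(0, 0.5) for _ in range(300)]
xs = kalman_1d(zs)
```

The gain k automatically balances trust between model and measurement: a small process noise q relative to r yields heavy smoothing, which is the minimum-variance behavior the text describes.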
This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer vision-based precision landing system is studied; then, estimation and filtering problems are addressed from the deep learning approach. Finally, classification concepts with deep learning over trajectories are studied. The first case of the collection studies
the consequences of error propagation in a machine vision-based precision landing system. This work proposes a set of strategies to reduce the impact on the guidance system, and ultimately to reduce the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data. This work covers the two main fields in deep learning, regression and classification, where the error is considered a probability function of class membership.
I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated to the project TEC2017-88048-C2-2-R, which provided me the opportunity to carry out all my PhD activities, including completing an international research internship.
Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. President: Antonio Berlanga de Jesús. Secretary: Daniel Arias Medina. Panel member: Alejandro Martínez Cav
Markov field models of molecular kinetics
Computer simulations such as molecular dynamics (MD) provide a possible means to understand protein dynamics and mechanisms on an atomistic scale. The resulting simulation data can be analyzed with Markov state models (MSMs), yielding a quantitative kinetic model that, e.g., encodes state populations and transition rates. However, the larger an investigated system, the more data is required to estimate a valid kinetic model. In this work, we show that this scaling problem can be escaped when decomposing a system into smaller ones, leveraging weak couplings between local domains. Our approach, termed independent Markov decomposition (IMD), is a first-order approximation neglecting couplings, i.e., it represents a decomposition of the underlying global dynamics into a set of independent local ones. We demonstrate that for truly independent systems, IMD can reduce the sampling by three orders of magnitude. IMD is applied to two biomolecular systems. First, synaptotagmin-1 is analyzed, a rapid calcium switch from the neurotransmitter release machinery. Within its C2A domain, local conformational switches are identified and modeled with independent MSMs, shedding light on the mechanism of its calcium-mediated activation. Second, the catalytic site of the serine protease TMPRSS2 is analyzed with a local drug-binding model. Equilibrium populations of different drug-binding modes are derived for three inhibitors, mirroring experimentally determined drug efficiencies. IMD is subsequently extended to an end-to-end deep learning framework called iVAMPnets, which learns a domain decomposition from simulation data and simultaneously models the kinetics in the local domains. We finally classify IMD and iVAMPnets as Markov field models (MFM), which we define as a class of models that describe dynamics by decomposing systems into local domains. 
Overall, this thesis introduces a local approach to Markov modeling that enables quantitative assessment of the kinetics of large macromolecular complexes, opening up possibilities to tackle current and future questions in computational molecular biology.
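The basic MSM estimation step underlying the work summarized above can be sketched in a few lines: count transitions at a fixed lag time in a discretized trajectory and row-normalize. The two-state toy chain below is an illustrative assumption, not one of the biomolecular systems studied.

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Estimate an MSM transition matrix from a discrete trajectory by
    counting transitions at the given lag time and row-normalising."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy two-state chain: generate a trajectory from a known transition
# matrix, then recover it and its stationary distribution.
rng = np.random.default_rng(0)
T_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
state, dtraj = 0, []
for _ in range(50_000):
    dtraj.append(state)
    state = rng.choice(2, p=T_true[state])

T_est = estimate_msm(dtraj, 2)
# stationary distribution = leading left eigenvector, normalised to sum 1
w, v = np.linalg.eig(T_est.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
```

For truly independent domains, as in the IMD approximation described above, one such small model can be estimated per domain, which is where the sampling advantage over a single global MSM comes from.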
Nonequilibrium steady-state of trapped active particles
We consider an overdamped particle with a general physical mechanism that creates noisy active movement (e.g., a run-and-tumble particle or an active Brownian particle), confined by an external potential. Focusing on the limit in which the correlation time τ of the active noise is small, we find the nonequilibrium steady-state distribution of the particle's position x. While typical fluctuations of x follow a Boltzmann distribution with an effective temperature that is not difficult to find, the tails of the distribution deviate from Boltzmann behavior: in the limit τ → 0, they scale as P(x) ~ e^{−s(x)/τ}. We calculate the large-deviation function s(x) exactly for an arbitrary trapping potential and active noise in dimension d = 1, by relating it to the rate function that describes large deviations of the position of the same active particle in the absence of an external potential at long times. We then extend our results to d > 1 assuming rotational symmetry.
Comment: Main text: 8 pages, 1 figure. Supplemental material: 5 pages, 1 figure.
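A quick numerical check of such a setup is easy to run. The sketch below simulates an illustrative 1D run-and-tumble particle in a harmonic trap (all parameters are assumptions, not taken from the paper) and measures the stationary variance of the position, which for small noise correlation time should be close to the effective-temperature (colored-noise) prediction.

```python
import random

def rtp_in_harmonic_trap(mu=1.0, k=1.0, v0=1.0, tau=0.05,
                         dt=1e-3, steps=1_000_000, seed=0):
    """Overdamped 1D run-and-tumble particle: xdot = -mu*k*x + v0*sigma(t),
    where the telegraph noise sigma flips between +1 and -1 at rate 1/tau.
    Returns position samples taken every 100 steps."""
    rng = random.Random(seed)
    x, sigma = 0.0, 1
    xs = []
    for i in range(steps):
        if rng.random() < dt / tau:        # tumble event
            sigma = -sigma
        x += dt * (-mu * k * x + v0 * sigma)
        if i % 100 == 0:
            xs.append(x)
    return xs

xs = rtp_in_harmonic_trap()
var = sum(x * x for x in xs) / len(xs)
# For exponentially correlated noise with correlation time tau/2, the
# stationary variance is v0^2 (tau/2) / (mu*k (1 + mu*k*tau/2)) ~ 0.024 here.
```

Typical fluctuations indeed look Boltzmann-like with an effective temperature; probing the non-Boltzmann tails discussed in the abstract would require far more samples than this sketch collects.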
Reconstructing Dynamical Systems: From Stochastic Differential Equations to Machine Learning
Modeling complex systems with large numbers of degrees of freedom has become a grand challenge over the past decades. Typically, only a few variables of complex systems are observed in terms of measured time series, while the majority of them – which potentially interact with the observed ones – remain hidden. Throughout this thesis, we tackle the problem of reconstructing and predicting the underlying dynamics of complex systems using different data-driven approaches. In the first part, we address the inverse problem of inferring an unknown network structure of complex systems, reflecting spreading phenomena, from observed event series. We study the pairwise statistical similarity between the sequences of event timings at all nodes through event synchronization (ES) and event coincidence analysis (ECA), relying on the idea that functional connectivity can serve as a proxy for structural connectivity. In the second part, we focus on reconstructing the underlying dynamics of complex systems from their dominant macroscopic variables using different stochastic differential equations (SDEs). We investigate the performance of three such SDEs – the Langevin equation (LE), the generalized Langevin equation (GLE), and the empirical model reduction (EMR) approach. Our results reveal that the LE performs better for systems with weak memory, while it fails to reconstruct the underlying dynamics of systems with memory effects and colored-noise forcing. In these situations, the GLE and EMR are more suitable candidates, since the interactions between observed and unobserved variables are accounted for in terms of memory effects. In the last part of this thesis, we develop a model based on the echo state network (ESN), combined with the past noise forecasting (PNF) method, to predict real-world complex systems. Our results show that the proposed model captures the crucial features of the underlying dynamics of climate variability.
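The Langevin-reconstruction idea in the second part can be illustrated by estimating drift and diffusion (the first two Kramers-Moyal coefficients) from a simulated time series. The Ornstein-Uhlenbeck process and all parameters below are illustrative assumptions, not the thesis's data.

```python
import numpy as np

def simulate_ou(theta=1.0, sigma=0.5, dt=1e-3, n=1_000_000, seed=0):
    """Euler-Maruyama trajectory of dx = -theta*x dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.normal(0.0, np.sqrt(dt), n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]
    return x

def kramers_moyal(x, dt, bins=20, lo=-1.0, hi=1.0):
    """Estimate drift D1(x) and diffusion D2(x) from conditional increments,
    binning the state axis and averaging dx and dx^2 within each bin."""
    dx = np.diff(x)
    edges = np.linspace(lo, hi, bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(x[:-1], edges) - 1
    D1 = np.full(bins, np.nan)
    D2 = np.full(bins, np.nan)
    for b in range(bins):
        m = idx == b
        if m.sum() > 100:                 # skip poorly sampled bins
            D1[b] = dx[m].mean() / dt
            D2[b] = (dx[m] ** 2).mean() / (2 * dt)
    return centers, D1, D2

x = simulate_ou()
centers, D1, D2 = kramers_moyal(x, 1e-3)
# For this process, D1(x) should recover -theta*x and D2(x) sigma**2 / 2.
```

This memoryless (Markovian) estimate is exactly what the LE approach provides; as the abstract notes, it breaks down for systems with memory effects, which motivates the GLE and EMR alternatives.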
Applying Deep Learning to Calibrate Stochastic Volatility Models
Stochastic volatility models, where the volatility is a stochastic process,
can capture most of the essential stylized facts of implied volatility surfaces
and give more realistic dynamics of the volatility smile/skew. However, they
come with the significant issue that they take too long to calibrate.
Alternative calibration methods based on Deep Learning (DL) techniques have
been recently used to build fast and accurate solutions to the calibration
problem. Huge and Savine developed a Differential Machine Learning (DML)
approach, where Machine Learning models are trained on samples of not only
features and labels but also differentials of labels to features. The present
work aims to apply the DML technique to price vanilla European options (i.e.,
the calibration instruments), more specifically puts, when the underlying asset
follows a Heston model, and then to calibrate the model using the trained
network. DML allows for fast training and accurate pricing. The trained neural
network dramatically reduces the computation time of Heston calibration.
In this work, we also introduce different regularisation techniques, notably
applied in the case of DML, and compare their performance in reducing
overfitting and improving the generalisation error. The performance of DML is
also compared to that of classical DL (without differentiation) for
feed-forward neural networks. We show that DML outperforms DL.
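The core idea of DML, supervising a model on labels and on their differentials with respect to the features, can be shown with a linear toy model rather than a neural network: stack the value equations and the derivative equations and solve one joint least-squares problem. The polynomial basis and the sin target below are illustrative assumptions, not the paper's Heston setup.

```python
import numpy as np

def fit_with_differentials(xs, ys, dys, degree=5, w=1.0):
    """Least-squares fit of a polynomial to labels *and* label differentials:
    stack the value system V c = y on top of the derivative system V' c = dy
    (weighted by w) and solve for the coefficients c jointly."""
    # Vandermonde for values: columns x^0 .. x^degree
    V = np.vander(xs, degree + 1, increasing=True)
    # Vandermonde for derivatives: d/dx x^k = k x^(k-1)
    Vd = np.zeros_like(V)
    for k in range(1, degree + 1):
        Vd[:, k] = k * xs ** (k - 1)
    A = np.vstack([V, w * Vd])
    b = np.concatenate([ys, w * dys])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Toy target with a known derivative, standing in for a pricing function
# and its pathwise differentials.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 40)
ys = np.sin(xs)
dys = np.cos(xs)
c = fit_with_differentials(xs, ys, dys)
approx = np.polyval(c[::-1], 0.5)   # should be close to sin(0.5)
```

The derivative rows act like extra training data at no new sample points, which is why DML tends to regularise the fit and sharpen sensitivities; in the paper's setting the differentials would come from automatic differentiation of the pricer rather than a closed form.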
The complete code for our experiments is provided in the GitHub repository:
https://github.com/asridi/DML-Calibration-Heston-Mode