
    Associative bond swaps in molecular dynamics

    We implement a three-body potential to model associative bond swaps and release it as part of the HOOMD-blue software. Three-body potentials have proven effective for modeling swaps and have recently provided useful insights into the mechanics and dynamics of adaptive network materials such as vitrimers. The approach is elegant because it can be used in plain molecular dynamics simulations without topology-altering Monte Carlo steps, and it naturally represents typical physical features such as slip-bond behavior. It is easily tunable, with a single parameter controlling the average swap rate. Here we show how associative bond swaps can speed up the equilibration of self-assembling systems by avoiding traps and pitfalls, that is, long-lived metastable configurations. Our results demonstrate the potential of these swaps not only for modeling systems that are associative by nature, but also for increasing simulation efficiency in other systems that can be modeled in HOOMD-blue.
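The mechanism can be caricatured in a few lines of Python. This is an illustrative sketch only: the functional form, names, and parameter values below are assumptions for exposition, not the released HOOMD-blue implementation. The idea is that a three-body term assigns an energy barrier that is felt only while both the old and the candidate new bonding partner are near the pivot bead, so swaps proceed without Monte Carlo topology moves.

```python
def switch(r, r_cut):
    """Smooth switching function: 1 at r = 0, 0 at or beyond the cutoff (illustrative form)."""
    if r >= r_cut:
        return 0.0
    x = r / r_cut
    return (1.0 - x * x) ** 2

def swap_energy(r_old, r_new, barrier, r_cut=1.5):
    """Three-body energy for a pivot bead bonded to its old partner while a
    candidate new partner approaches; `barrier` plays the role of the single
    tunable parameter that controls the average swap rate."""
    return barrier * switch(r_old, r_cut) * switch(r_new, r_cut)
```

When only one partner is in range the term vanishes, so isolated bonds are unaffected; raising `barrier` suppresses swaps, lowering it accelerates them.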

    Branes, Quantum Nambu Brackets, and the Hydrogen Atom

    The Nambu Bracket quantization of the Hydrogen atom is worked out as an illustration of the general method. The dynamics of topological open branes is controlled classically by Nambu Brackets. Such branes may then be quantized through consistent quantization of the underlying Nambu brackets: properly defined, the Quantum Nambu Brackets comprise an associative structure, although the naive derivation property is mooted through operator entwinement. For superintegrable systems, such as the Hydrogen atom, the results coincide with those furnished by Hamiltonian quantization, but the method is not limited to Hamiltonian systems.
    Comment: 6 pages, LaTeX2e. Invited talk by CZ at the XIII International Colloquium on Integrable Systems and Quantum Groups, Prague, June 18, 200
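For context, the classical Nambu bracket that governs such brane dynamics is the standard Jacobian-determinant construction on three functions of a three-dimensional phase space (a textbook definition, not quoted from the abstract):

$$\{f, g, h\} \;=\; \frac{\partial(f, g, h)}{\partial(x, y, z)} \;=\; \epsilon^{ijk}\, \partial_i f \,\partial_j g \,\partial_k h ,$$

so that time evolution is generated by two "Hamiltonians" via $\dot{A} = \{A, H_1, H_2\}$.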

    Signatures of Associative Memory Behavior in a Multimode Dicke Model

    © 2020 American Physical Society. Dicke-like models can describe a variety of physical systems, such as atoms in a cavity or vibrating ion chains. In equilibrium these systems often feature a radical change in their behavior when switching from weak to strong spin-boson interaction, usually manifesting as a transition from a "dark" to a "superradiant" phase. However, understanding the out-of-equilibrium physics of these models is extremely challenging, and even more so for strong spin-boson coupling. Here we show that the nonequilibrium strongly interacting multimode Dicke model can mimic some fundamental properties of an associative memory, a system that permits the recognition of patterns, such as letters of an alphabet. Patterns are encoded in the couplings between spins and bosons, and we discuss the dynamics of the spins from the perspective of pattern retrieval in associative memory models. We identify two phases, a "paramagnetic" and a "ferromagnetic" one, and a crossover behavior between these regimes. The "ferromagnetic" phase is reminiscent of pattern retrieval. We highlight similarities and differences with the thermal dynamics of a Hopfield associative memory and show that elements of "machine learning behavior" indeed emerge in the strongly coupled multimode Dicke model.

    Protein Structure Prediction Using Basin-Hopping

    Associative memory Hamiltonian structure-prediction potentials are not overly rugged, suggesting that their landscapes are like those of actual proteins. In the present contribution we show how basin-hopping global optimization can identify low-lying minima on the corresponding mildly frustrated energy landscapes. For small systems, the basin-hopping algorithm succeeds in locating both lower minima and conformations closer to the experimental structure than does molecular dynamics with simulated annealing. For large systems, the efficiency of basin-hopping decreases in our initial implementation, where the steps consist of random perturbations to the Cartesian coordinates. We implemented umbrella sampling using basin-hopping to further confirm when the global minima are reached. We have also improved the energy surface by employing bioinformatic techniques to reduce its roughness, or variance. Finally, the basin-hopping calculations have guided improvements in the excluded volume of the Hamiltonian, producing better structures. These results suggest a novel and transferable optimization scheme for future energy-function development.
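A minimal self-contained caricature of the basin-hopping loop (random Cartesian perturbation, local quench, Metropolis accept/reject) on a one-dimensional tilted double well. All names and parameter values here are illustrative assumptions; production work would typically use `scipy.optimize.basinhopping` with a proper local minimizer:

```python
import math
import random

def quench(f, df, x, lr=0.01, steps=200):
    """Plain gradient descent standing in for a proper local minimizer."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

def basin_hopping(f, df, x0, niter=100, step=2.0, temp=0.5, seed=0):
    """Perturb, quench to the nearest minimum, accept with a Metropolis criterion."""
    rng = random.Random(seed)
    x = quench(f, df, x0)
    best_x, best_f = x, f(x)
    for _ in range(niter):
        trial = quench(f, df, x + rng.uniform(-step, step))
        delta = f(trial) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = trial
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

Because each step lands on a quenched minimum, the walk hops between basins rather than diffusing through high-energy regions, which is what lets it escape the long-lived metastable minima that defeat simulated annealing on mildly frustrated landscapes.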

    Transient dynamics for sequence processing neural networks

    An exact solution of the transient dynamics for a sequential associative memory model is discussed through both the path-integral method and statistical neurodynamics. Although the path-integral method can give an exact solution of the transient dynamics, only stationary properties have previously been discussed for the sequential associative memory. We have succeeded in deriving an exact macroscopic description of the transient dynamics by analyzing the correlations of the crosstalk noise. Surprisingly, the order-parameter equations of this exact solution are completely equivalent to those of statistical neurodynamics, an approximation theory that assumes the crosstalk noise obeys a Gaussian distribution. To examine our theoretical findings, we numerically obtain cumulants of the crosstalk noise. We verify that the third- and fourth-order cumulants are equal to zero, and that the crosstalk noise is normally distributed even in the non-retrieval case. The results obtained by our theory agree with those obtained by computer simulations. We have also found that the macroscopic unstable state completely coincides with the separatrix.
    Comment: 21 pages, 4 figures
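The cumulant-based Gaussianity check can be mimicked with a toy numerical experiment: draw standardized sums of many independent ±1 terms (a generic stand-in for crosstalk noise, not the paper's network model) and verify that the sample third and fourth cumulants are close to zero:

```python
import random

def sample_crosstalk(n_samples=4000, n_terms=400, seed=0):
    """Toy 'crosstalk noise': standardized sums of many independent +/-1 terms."""
    rng = random.Random(seed)
    scale = n_terms ** 0.5
    return [sum(rng.choice((-1, 1)) for _ in range(n_terms)) / scale
            for _ in range(n_samples)]

def cumulants(xs):
    """Sample 2nd, 3rd, and 4th cumulants; the last two vanish for a Gaussian."""
    n = len(xs)
    mean = sum(xs) / n
    m2, m3, m4 = (sum((x - mean) ** k for x in xs) / n for k in (2, 3, 4))
    return m2, m3, m4 - 3.0 * m2 * m2   # k2, k3, k4 = m4 - 3*m2^2
```

Here the central limit theorem guarantees the Gaussian limit; the point of the paper's check is that the same vanishing of k3 and k4 holds for the correlated crosstalk noise of the sequential memory, even in the non-retrieval case.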

    Autonomous Dynamics in Neural networks: The dHAN Concept and Associative Thought Processes

    The neural activity of the human brain is dominated by self-sustained activities. External sensory stimuli influence this autonomous activity, but they do not drive the brain directly. Most standard artificial neural network models, however, are input-driven and do not show spontaneous activities. It constitutes a challenge to develop organizational principles for controlled, self-sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. Within this approach, an associative thought process is characterized by a time series of transient attractors. Each transient state corresponds to a stored piece of information, a memory. Subsequent transient states are characterized by large associative overlaps and are identical to acquired patterns; memory states, the acquired patterns, thus have a dual functionality. In this approach the self-sustained neural activity has a central functional role: the network acquires a discrimination capability, as external stimuli need to compete with the autonomous activity, and noise in the input is readily filtered out. Hebbian learning of external patterns occurs simultaneously with the ongoing associative thought process. The autonomous dynamics needs a long-term working-point optimization, which within the dHAN concept acquires a dual functionality: it stabilizes the time development of the associative thought process and limits runaway synaptic growth, which otherwise generically occurs in neural networks with self-induced activities and Hebbian-type learning rules.

    Optimisation in ‘Self-modelling’ Complex Adaptive Systems

    When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints, or opposing forces, between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimises these constraints by this method is unlikely, or may take many attempts. Here we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower-energy configurations more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system develops an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimise total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely ‘recalling’ low-energy states that have been previously visited but ‘predicting’ their location by generalising over local attractor states that have already been visited. This ‘self-modelling’ framework, i.e. a system that augments its behaviour with an associative memory of its own attractors, helps us better understand the conditions under which a simple, locally mediated mechanism of self-organisation can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph colouring and distributed task allocation problems.