3 research outputs found

    Emergent learning in physical systems as feedback-based aging in a glassy landscape

    By training linear physical networks to learn linear transformations, we discern how their physical properties evolve due to weight update rules. Our findings highlight a striking similarity between the learning behaviors of such networks and the processes of aging and memory formation in disordered and glassy systems. We show that the learning dynamics resemble an aging process, in which the system relaxes in response to repeated application of the feedback boundary forces in the presence of an input force, thus encoding a memory of the input-output relationship. With this relaxation comes an increase in the correlation length, as indicated by the two-point correlation function for the components of the network. We also observe that the square root of the mean-squared error as a function of epoch takes on a non-exponential form, a typical feature of glassy systems. This physical interpretation suggests that, by encoding more detailed information into the input and feedback boundary forces, emergent learning can be rather ubiquitous and thus serve as a very early physical mechanism, from an evolutionary standpoint, for learning in biological systems.
    Comment: 11 pages, 7 figures
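    The non-exponential decay of the root of the mean-squared error described above is the kind of behavior commonly captured by a stretched-exponential fit in glassy systems. A minimal sketch of such a fit is given below; the functional form, parameter values, and synthetic learning curve are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exponential(epoch, a, tau, beta):
    """Stretched-exponential relaxation; beta < 1 signals a
    non-exponential (glassy) decay."""
    return a * np.exp(-(epoch / tau) ** beta)

# Synthetic stand-in for sqrt(MSE) recorded per training epoch.
rng = np.random.default_rng(0)
epochs = np.arange(1, 501)
rmse = stretched_exponential(epochs, 1.0, 80.0, 0.6) \
       + 0.01 * rng.standard_normal(epochs.size)

# Fit the curve and inspect the stretching exponent beta.
params, _ = curve_fit(stretched_exponential, epochs, rmse, p0=(1.0, 50.0, 1.0))
a_fit, tau_fit, beta_fit = params
print(f"fitted beta = {beta_fit:.2f}  (beta < 1 indicates non-exponential relaxation)")
```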

    Rectification of Random Walkers Induced by Energy Flow at Boundaries

    We explore rectification phenomena in a system where two-dimensional random walkers interact with a funnel-shaped ratchet under two distinct classes of reflection rules: either the angle of reflection exceeds the angle of incidence ($\theta_{\mathrm{reflect}} > \theta_{\mathrm{incident}}$) or vice versa ($\theta_{\mathrm{reflect}} < \theta_{\mathrm{incident}}$). These generalized boundary reflection rules are indicative of non-equilibrium conditions, since they introduce energy flows at the boundary. Our findings reveal that the nature of such particle-wall interactions dictates the system's behavior: the funnel acts either as a pump, directing flow, or as a collector, demonstrating a ratchet reversal. Importantly, we provide a geometric proof elucidating the underlying mechanism of rectification, thereby offering insight into why certain interactions lead to directed motion while others do not.
    Comment: 5 pages, 6 figures
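    As a rough sketch of how such a generalized boundary rule might be implemented in a simulation, the function below reflects a 2D walker off a wall with the outgoing angle set to a multiple of the incident angle. The multiplicative rule, the function name, and the parameter alpha are illustrative assumptions; the paper itself only specifies the two inequality classes.

```python
import numpy as np

def reflect(v, n, alpha):
    """Reflect a 2D velocity v off a wall with inward normal n.

    The outgoing angle from the normal is alpha times the incident angle:
    alpha = 1 recovers specular reflection, alpha < 1 gives
    theta_reflect < theta_incident, and alpha > 1 the opposite.
    """
    n = np.asarray(n, dtype=float) / np.linalg.norm(n)
    v = np.asarray(v, dtype=float)
    v_n = np.dot(v, n)                    # normal component (< 0 when hitting the wall)
    v_t = v - v_n * n                     # tangential component
    t_norm = np.linalg.norm(v_t)
    t_hat = v_t / t_norm if t_norm > 0 else np.zeros_like(v)
    theta_inc = np.arctan2(t_norm, -v_n)  # incident angle measured from the normal
    theta_out = np.clip(alpha * theta_inc, 0.0, np.pi / 2 - 1e-6)
    speed = np.linalg.norm(v)
    # Outgoing velocity: same speed, angle theta_out from the inward normal,
    # tangential direction preserved.
    return speed * (np.cos(theta_out) * n + np.sin(theta_out) * t_hat)

# Example: a walker hits a horizontal floor (inward normal pointing up)
# at a 45-degree incident angle.
v_in = np.array([1.0, -1.0])
n_up = np.array([0.0, 1.0])
print(reflect(v_in, n_up, alpha=0.5))     # theta_reflect < theta_incident
print(reflect(v_in, n_up, alpha=1.5))     # theta_reflect > theta_incident
```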

    Learning by non-interfering feedback chemical signaling in physical networks

    Both non-neural and neural biological systems can learn. So rather than focusing on purely brain-like learning, efforts are underway to study learning in physical systems. Such efforts include equilibrium propagation (EP) and coupled learning (CL), which require storage of two different states - the free state and the perturbed state - during the learning process to retain information about gradients. Here, we propose a learning algorithm rooted in chemical signaling that does not require storage of two different states. Rather, the output error information is encoded in a chemical signal that diffuses into the network in a similar way to the activation/feedforward signal. The steady-state feedback chemical concentration, along with the activation signal, stores the required gradient information locally. We apply our algorithm to a physical, linear flow network and test it on the Iris data set, reaching 93% accuracy. We also prove that our algorithm performs gradient descent. Finally, in addition to comparing our algorithm directly with EP and CL, we address the biological plausibility of the algorithm.
    ISSN: 2643-156
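    To make the locality claim concrete, here is a minimal sketch showing how a weight update built only from the local activation and an error-encoding feedback signal reproduces gradient descent on the squared error. The single linear layer, synthetic data, and the error signal standing in for the steady-state feedback chemical concentration are assumptions for illustration; the paper's flow network and diffusion dynamics are not modeled here.

```python
import numpy as np

# Each weight update uses only the presynaptic activation and a "feedback"
# signal at the output node that encodes the output error. For a single
# linear layer this is exactly gradient descent on 0.5 * ||y - target||^2.

rng = np.random.default_rng(1)
W_true = rng.standard_normal((2, 4))   # target linear transformation to learn
W = np.zeros_like(W_true)              # learnable weights
eta = 0.05                             # learning rate

for step in range(2000):
    x = rng.standard_normal(4)         # input ("activation" signal)
    y = W @ x                          # network output
    target = W_true @ x
    feedback = y - target              # error encoded as the feedback signal
    # Local update: product of the feedback at the output node and the
    # activation at the input node, i.e. -eta * grad of 0.5*||y - target||^2.
    W -= eta * np.outer(feedback, x)

print("remaining weight error:", np.linalg.norm(W - W_true))
```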