
    Liquid Time-constant Networks

    We introduce a new class of time-continuous recurrent neural network models. Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems modulated via nonlinear interlinked gates. The resulting models represent dynamical systems with varying (i.e., liquid) time constants coupled to their hidden state, with outputs computed by numerical differential equation solvers. These neural networks exhibit stable and bounded behavior, yield superior expressivity within the family of neural ordinary differential equations, and give rise to improved performance on time-series prediction tasks. To demonstrate these properties, we first take a theoretical approach to find bounds over their dynamics and compute their expressive power by the trajectory-length measure in latent trajectory space. We then conduct a series of time-series prediction experiments to evaluate the approximation capability of Liquid Time-Constant Networks (LTCs) compared to classical and modern RNNs. Code and data are available at https://github.com/raminmh/liquid_time_constant_networks
    Comment: Accepted to the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)
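
    The dynamics described above can be made concrete with a small sketch. Below is a minimal NumPy illustration of an LTC-style state update using a fused explicit/implicit Euler step of the kind the paper describes; the gating network f, the parameter names (tau, A, W, b), and all sizes are illustrative assumptions, not the reference implementation linked above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LTCCellSketch:
    """Toy liquid time-constant cell (illustrative, not the reference code)."""

    def __init__(self, n_inputs, n_units, dt=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.dt = dt
        self.tau = np.ones(n_units)                    # base time constants
        self.A = rng.standard_normal(n_units)          # bias vector A
        self.W = 0.1 * rng.standard_normal((n_units, n_inputs + n_units))
        self.b = np.zeros(n_units)

    def f(self, x, u):
        # nonlinear interlinked gate f(x, I): couples state and input
        return sigmoid(self.W @ np.concatenate([u, x]) + self.b)

    def step(self, x, u):
        # fused Euler update: x <- (x + dt*f*A) / (1 + dt*(1/tau + f));
        # the effective ("liquid") time constant is tau / (1 + tau*f)
        g = self.f(x, u)
        return (x + self.dt * g * self.A) / (1.0 + self.dt * (1.0 / self.tau + g))

# unroll over a toy scalar input broadcast to three channels
cell = LTCCellSketch(n_inputs=3, n_units=8)
x = np.zeros(8)
for s in np.sin(np.linspace(0.0, 6.0, 50)):
    x = cell.step(x, s * np.ones(3))
```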

    Closed-form continuous-time neural networks

    Continuous-time neural networks are a class of machine learning systems that can tackle representation learning on spatiotemporal decision-making tasks. These models are typically represented by continuous differential equations. However, their expressive power when they are deployed on computers is bottlenecked by numerical differential equation solvers. This limitation has notably slowed down the scaling and understanding of numerous natural physical phenomena such as the dynamics of nervous systems. Ideally, we would circumvent this bottleneck by solving the given dynamical system in closed form. This is known to be intractable in general. Here, we show that it is possible to closely approximate, efficiently and in closed form, the interaction between neurons and synapses (the building blocks of natural and artificial neural networks) as constructed by liquid time-constant networks. To this end, we compute a tightly bounded approximation of the solution of an integral appearing in liquid time-constant dynamics that had no known closed-form solution so far. This closed-form solution impacts the design of continuous-time and continuous-depth neural models. For instance, since time appears explicitly in the closed form, the formulation relaxes the need for complex numerical solvers. Consequently, we obtain models that are between one and five orders of magnitude faster in training and inference than their differential equation-based counterparts. More importantly, in contrast to ordinary differential equation-based continuous networks, closed-form networks scale remarkably well compared with other deep learning instances. Lastly, as these models are derived from liquid networks, they show good performance in time-series modelling compared with advanced recurrent neural network models.
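
    As a rough illustration of why an explicit closed form removes the solver bottleneck, the sketch below evaluates a gated state expression of the form x(t) = sigma(-f*t) * g + (1 - sigma(-f*t)) * h directly at arbitrary query times. The three linear heads Wf, Wg, Wh stand in for learned backbone networks and are assumptions made for illustration, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_state(t, x, u, Wf, Wg, Wh):
    # time t appears explicitly, so the state can be queried at any t
    # without stepping an ODE solver:
    #   x(t) = sigma(-f*t) * g + (1 - sigma(-f*t)) * h
    z = np.concatenate([x, u])
    f = sigmoid(Wf @ z)              # gating "decay rate" head
    g = np.tanh(Wg @ z)              # initial-state head
    h = np.tanh(Wh @ z)              # steady-state head
    gate = sigmoid(-f * t)
    return gate * g + (1.0 - gate) * h

rng = np.random.default_rng(1)
n, m = 8, 3                          # state and input sizes (arbitrary)
Wf, Wg, Wh = (0.3 * rng.standard_normal((n, n + m)) for _ in range(3))
x, u = np.zeros(n), np.ones(m)
states = [cfc_state(t, x, u, Wf, Wg, Wh) for t in np.linspace(0.0, 2.0, 20)]
```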

    Multilayer Aggregation with Statistical Validation: Application to Investor Networks

    Multilayer networks are attracting growing attention in many fields, including finance. In this paper, we develop a new tractable procedure for multilayer aggregation based on statistical validation, which we apply to investor networks. Moreover, we propose two other improvements to their analysis: transaction bootstrapping and investor categorization. The aggregation procedure can be used to integrate security-wise and time-wise information about investor trading networks, but it is not limited to finance. In fact, it can be used for different applications, such as gene, transportation, and social networks, whether they are inferred or observable. Additionally, in the investor network inference, we use transaction bootstrapping for better statistical validation. Investor categorization allows for constant-size networks and more observations per node, which is important in the inference, especially for less liquid securities. Furthermore, we observe that the window size used for averaging has a substantial effect on the number of inferred relationships. We apply this procedure to a unique data set of Finnish shareholders during the period 2004-2009. We find that households in the capital have high centrality in investor networks, which, under the theory of information channels in investor networks, suggests that they are well-informed investors.
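
    The statistical-validation step can be pictured with a small sketch: build a null distribution for each pairwise similarity by destroying temporal alignment in the transaction series, and keep only the links that beat it. The correlation measure, the row-permutation null, and the threshold alpha below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def validated_links(trades, n_boot=200, alpha=0.05, rng=None):
    """Keep investor-pair links whose correlation exceeds a resampled null.

    trades: (n_investors, n_days) matrix of signed net trading volumes.
    Returns a boolean adjacency matrix (illustrative procedure only).
    """
    rng = rng or np.random.default_rng(0)
    n, _ = trades.shape
    obs = np.corrcoef(trades)                   # observed pairwise similarity
    exceed = np.zeros((n, n))
    for _ in range(n_boot):
        # destroy temporal alignment: independently permute each row
        null = np.array([rng.permutation(row) for row in trades])
        exceed += np.corrcoef(null) >= obs
    pvals = exceed / n_boot                     # empirical p-value per link
    adj = pvals < alpha
    np.fill_diagonal(adj, False)                # no self-links
    return adj
```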

    Physics-Informed Calibration of Aeromagnetic Compensation in Magnetic Navigation Systems using Liquid Time-Constant Networks

    Magnetic navigation (MagNav) is a rising alternative to the Global Positioning System (GPS) and has proven useful for aircraft navigation. Traditional aircraft navigation systems, while effective, face limitations in precision and reliability in certain environments and against attacks. Airborne MagNav leverages the Earth's magnetic field to provide accurate positional information. However, external magnetic fields induced by aircraft electronics and Earth's large-scale magnetic fields disrupt the weaker signal of interest. We introduce a physics-informed approach using Tolles-Lawson coefficients for compensation and Liquid Time-Constant Networks (LTCs) to remove complex, noisy signals derived from the aircraft's magnetic sources. Using real flight data with magnetometer and aircraft measurements, we observe up to a 64% reduction in aeromagnetic compensation error (RMSE, in nT), outperforming conventional models. This significant improvement underscores the potential of a physics-informed machine learning approach for extracting clean, reliable, and accurate magnetic signals for MagNav positional estimation.
    Comment: Accepted at the NeurIPS 2023 Machine Learning and the Physical Sciences workshop, 7 pages, 4 figures, see code here: https://github.com/fnerrise/LNNs_MagNav
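
    For context, a common form of Tolles-Lawson calibration fits a linear model of permanent, induced, and eddy-current terms (built from vector-magnetometer direction cosines) to the scalar interference by least squares. The sketch below shows that bookkeeping under stated assumptions: the 18-term layout, the np.gradient derivative, and the reference-field subtraction are illustrative choices, and the paper's LTC denoising stage is not shown.

```python
import numpy as np

def tolles_lawson_design(bx, by, bz):
    """Build an 18-term Tolles-Lawson design matrix from vector (fluxgate)
    magnetometer samples; purely illustrative term bookkeeping."""
    B = np.sqrt(bx**2 + by**2 + bz**2)
    cx, cy, cz = bx / B, by / B, bz / B            # direction cosines
    dcx, dcy, dcz = (np.gradient(c) for c in (cx, cy, cz))
    permanent = [cx, cy, cz]                                        # 3 terms
    induced = [B * t for t in (cx*cx, cx*cy, cx*cz, cy*cy, cy*cz, cz*cz)]  # 6
    eddy = [B * c * dc for c in (cx, cy, cz) for dc in (dcx, dcy, dcz)]    # 9
    return np.column_stack(permanent + induced + eddy)

def compensate(bx, by, bz, scalar_total, reference):
    # least-squares fit of interference = A @ beta to the scalar residual,
    # then subtract the modeled aircraft field from the measurement
    A = tolles_lawson_design(bx, by, bz)
    residual = scalar_total - reference            # aircraft-induced part
    beta, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return beta, scalar_total - A @ beta           # compensated signal
```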

    LTC-SE: Expanding the Potential of Liquid Time-Constant Neural Networks for Scalable AI and Embedded Systems

    We present LTC-SE, an improved version of the Liquid Time-Constant (LTC) neural network algorithm originally proposed by Hasani et al. in 2021. This algorithm unifies the Leaky-Integrate-and-Fire (LIF) spiking neural network model with Continuous-Time Recurrent Neural Networks (CTRNNs), Neural Ordinary Differential Equations (NODEs), and bespoke Gated Recurrent Units (GRUs). The enhancements in LTC-SE focus on augmenting flexibility, compatibility, and code organization, targeting the unique constraints of embedded systems with limited computational resources and strict performance requirements. The updated code serves as a consolidated class library compatible with TensorFlow 2.x, offering comprehensive configuration options for the LTCCell, CTRNN, NODE, and CTGRU classes. We evaluate LTC-SE against its predecessors, showcasing the advantages of our optimizations in user experience, Keras function compatibility, and code clarity. These refinements expand the applicability of liquid neural networks to diverse machine learning tasks, such as robotics, causality analysis, and time-series prediction, and build on the foundational work of Hasani et al.
    Comment: 11 pages, 5 figures, 5 tables. This research work is partially drawn from the MSc thesis of Michael B. Khani. arXiv admin note: text overlap with arXiv:2006.04439 by other authors
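
    Since the abstract advertises a consolidated class library compatible with TensorFlow 2.x, usage would presumably look like any Keras RNN cell. The import path and constructor arguments below are hypothetical; only the class name LTCCell comes from the abstract.

```python
import tensorflow as tf

from ltc_se import LTCCell  # hypothetical import path, not confirmed by the abstract

# wrap the (assumed Keras-contract) cell like any other recurrent cell
model = tf.keras.Sequential([
    tf.keras.layers.RNN(LTCCell(32), input_shape=(None, 8)),  # (time, features)
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```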

    Spatially Resolved Monitoring of Drying of Hierarchical Porous Organic Networks

    Evaporation kinetics of water confined in hierarchical polymeric porous media is studied by low-field nuclear magnetic resonance (NMR). Systems synthesized with various degrees of cross-linker density render networks with similar pore sizes but different responses when soaked with water. Polymeric networks with a low percentage of cross-linker can undergo swelling, which affects the porosity as well as the drying kinetics. The drying process is monitored macroscopically by single-sided NMR, with a spatial resolution of 100 μm, while microscopic information is obtained by measurements of spin–spin relaxation times (T2). The transition from a funicular to a pendular regime, where hydraulic connectivity is lost and capillary flow cannot compensate for surface evaporation, can be observed from inspection of the water content in different sample layers. Relaxation measurements indicate that even when the larger pore structures are depleted of water, capillary flow occurs through smaller voids.
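
    The T2-based separation of pore populations can be illustrated generically: fit a biexponential decay to a CPMG echo train and read off the long (large-pore) and short (small-void) components. The two-pool model, the synthetic data, and the scipy fit below are a generic sketch, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, t2_long, a2, t2_short):
    # two-pool model: long T2 ~ water in large pores, short T2 ~ small voids
    return a1 * np.exp(-t / t2_long) + a2 * np.exp(-t / t2_short)

# synthetic CPMG echo train standing in for measured data
rng = np.random.default_rng(0)
t = np.linspace(0.001, 0.5, 200)                      # echo times, seconds
signal = biexp(t, 0.7, 0.12, 0.3, 0.01) + rng.normal(0.0, 0.005, t.size)

p0 = (0.5, 0.1, 0.5, 0.01)                            # initial guess
(a1, t2_long, a2, t2_short), _ = curve_fit(biexp, t, signal, p0=p0)
large_pore_fraction = a1 / (a1 + a2)                  # relative water content
```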

    Liquid State Machine with Dendritically Enhanced Readout for Low-power, Neuromorphic VLSI Implementations

    In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, which is the state of the art in terms of readout-stage performance, our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity. The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that compared to a single perceptron using analog weights, this architecture for the readout can attain, even using the same number of binary-valued synapses, up to 3.3 times less error on a two-class spike-train classification problem and 2.4 times less error on an input-rate approximation task. Even with 60 times as many synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be easily implemented by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations.
    Comment: 14 pages, 19 figures, Journal
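
    The network-rewiring idea can be caricatured in a few lines: each readout branch carries a handful of binary synapses from liquid neurons, a lumped nonlinearity (here a square) acts on each branch sum, and learning greedily swaps which liquid neuron feeds a branch whenever the swap reduces error. The toy procedure below, including the branch squaring and the random single-swap proposals, is an illustrative simplification of the paper's NRW algorithm, not a reproduction of it.

```python
import numpy as np

def readout(rates, branches):
    # each branch sums its binary-synapse inputs, then applies a lumped
    # square nonlinearity; the neuron output is the sum over branches
    return sum(float(rates[idx].sum()) ** 2 for idx in branches)

def rewire_step(rates_batch, targets, branches, pool_size, rng):
    """Greedy rewiring: try replacing one synapse on one branch with a
    different liquid neuron and keep the swap only if the error drops."""
    def error():
        preds = np.array([readout(r, branches) for r in rates_batch])
        return np.mean((preds - targets) ** 2)
    base = error()
    b = rng.integers(len(branches))
    s = rng.integers(len(branches[b]))
    old = branches[b][s]
    branches[b][s] = int(rng.integers(pool_size))     # candidate rewiring
    if error() > base:
        branches[b][s] = old                          # revert a bad swap
    return branches

# toy usage: 16 samples of 40 liquid firing rates, 5 branches of 4 synapses
rng = np.random.default_rng(0)
rates_batch = rng.random((16, 40))
targets = rng.random(16)
branches = [list(rng.integers(40, size=4)) for _ in range(5)]
for _ in range(200):
    branches = rewire_step(rates_batch, targets, branches, pool_size=40, rng=rng)
```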

    Chirality transfer and stereo-selectivity of imprinted cholesteric networks

    Imprinting of cholesteric textures in a polymer network is a method of preserving a macroscopically chiral phase in a system with no molecular chirality. By modifying the elastic properties of the network, the resulting stored helical twist can be manipulated within a wide range, since the imprinting efficiency depends on the balance between the elastic constants and the twisting power at network formation. One spectacular property of phase-chirality imprinting is the acquired ability of the network to adsorb preferentially one stereo-component from a racemic mixture. In this paper we explore this property of chirality transfer from the macroscopic to the molecular scale. In particular, we focus on the competition between the phase chirality and the local nematic order. We demonstrate that it is possible to control the subsequent release of the chiral solvent component from the imprinted network and the reversibility of the stereo-selective swelling by racemic solvents.