
    Working Memory Networks for Learning Temporal Order, with Application to 3-D Visual Object Recognition

    Full text link
    Working memory neural networks are characterized which encode the invariant temporal order of sequential events. Inputs to the networks, called Sustained Temporal Order REcurrent (STORE) models, may be presented at widely differing speeds, durations, and interstimulus intervals. The STORE temporal order code is designed to enable all emergent groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its 2-D aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor --> ART 2 --> STORE Model --> ART 2 --> Outstar Network. Defense Advanced Research Projects Agency (90-0083); British Petroleum (89-A1-1204); National Science Foundation (IRI 90-00530, IRI 87-16960); Air Force Office of Scientific Research (90-128, 90-0175).
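
    A minimal numerical sketch of the temporal-order idea follows, assuming a much-simplified update rule rather than the published STORE equations: activities are changed only at item onsets and then renormalized, so the relative activity gradient records order regardless of presentation speed, duration, or interstimulus interval.

```python
import numpy as np

def store_like_update(memory, item, gain=0.7):
    """Toy working-memory update (not the published STORE equations):
    each arriving item is stored with an activity that is a fixed
    fraction of the most recently stored item's activity, and the whole
    pattern is then renormalized.  The resulting primacy gradient codes
    temporal order and is unaffected by how long or how fast items are
    presented, because updates happen only at item onsets."""
    active = memory > 0
    memory[item] = gain * memory[active].min() if active.any() else 1.0
    return memory / memory.sum()

# presenting items 2, 0, 3 (at arbitrary speeds) yields a stable order code
mem = np.zeros(5)
for item in [2, 0, 3]:
    mem = store_like_update(mem, item)
print(mem)  # activity: item 2 > item 0 > item 3
```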

    Recurrent neural networks and proper orthogonal decomposition with interval data for real-time predictions of mechanised tunnelling processes

    Get PDF
    A surrogate modelling strategy for real-time prediction of interval settlement fields during machine-driven construction of tunnels, accounting for uncertain geotechnical parameters described as intervals, is presented in this paper. Artificial Neural Network and Proper Orthogonal Decomposition approaches are combined to approximate and predict tunnelling-induced, time-variant surface settlement fields computed by a process-oriented finite element simulation model. The surrogate models are generated, trained and tested in the design (offline) stage of a tunnel project, based on finite element analyses that compute the surface settlements for selected scenarios of the tunnelling process steering parameters while representing the uncertain geotechnical parameters as possible ranges (intervals). The resulting mappings from time-constant geotechnical interval parameters and time-variant deterministic steering parameters onto the time-variant interval settlement field are solved offline by optimisation and online by interval analysis approaches using the midpoint-radius representation of interval data. During tunnel construction, the surrogate model is designed to be used in real time to predict interval fields of the surface settlements at each stage of the advancement of the tunnel boring machine, for selected realisations of the steering parameters, to support the steering decisions of the machine driver.
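
    A hedged sketch of the midpoint-radius bookkeeping behind such an interval surrogate is given below, assuming, purely for illustration, an affine map from parameters to POD coefficients in place of the trained ANN; the POD basis and weights are random placeholders rather than quantities from the paper.

```python
import numpy as np

# Hypothetical stand-ins: Phi holds POD modes of the settlement field and
# (W, b) is an affine surrogate mapping parameters to POD coefficients.
# In the paper these come from FE snapshots and a trained ANN; here they
# are random placeholders just to show the midpoint-radius propagation.
rng = np.random.default_rng(0)
n_points, n_modes, n_params = 200, 5, 3
Phi = rng.standard_normal((n_points, n_modes))   # POD basis (columns = modes)
W = rng.standard_normal((n_modes, n_params))
b = rng.standard_normal(n_modes)

def interval_settlement(p_mid, p_rad):
    """Propagate interval parameters (midpoint-radius form) through an
    affine surrogate and the POD reconstruction.  For affine maps the
    exact interval image is mid' = A @ mid + c and rad' = |A| @ rad."""
    c_mid = W @ p_mid + b            # midpoint of POD coefficients
    c_rad = np.abs(W) @ p_rad        # radius of POD coefficients
    field_mid = Phi @ c_mid          # midpoint settlement field
    field_rad = np.abs(Phi) @ c_rad  # radius (spread) of the settlement field
    return field_mid, field_rad

# uncertain geotechnical parameters given as midpoint +/- radius
mid, rad = np.array([1.0, 0.5, 2.0]), np.array([0.1, 0.05, 0.2])
s_mid, s_rad = interval_settlement(mid, rad)
print(s_mid[:3], s_rad[:3])  # interval bounds are s_mid +/- s_rad
```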

    Data-Driven Reduced-Order Modeling of Unsteady Nonlinear Shock Wave using Physics-Informed Neural Network (PINN) Based Solution

    Get PDF
    This article presents a preliminary study on data-driven reduced-order modeling (ROM) of an unsteady nonlinear shock wave. A basic form of such a problem can be modeled using the Burgers’ equation. The physics-informed neural network (PINN) approach is used to obtain numerical solutions to the problem at certain time steps. PINN is a computational framework that integrates deep neural networks with the governing physics of the problem and is proving promising for enhancing the accuracy and efficiency of numerical solutions in a wide array of scientific and engineering applications. Next, the Proper Orthogonal Decomposition (POD) modes are extracted from the solution field, providing a compact representation of the system’s dominant spatial patterns. Subsequently, temporal coefficients are computed at specific time intervals, yielding a reduced-order representation of the temporal evolution of the system. These temporal coefficients are then employed as training data for a deep neural network (DNN) model designed to predict the temporal coefficients at various time steps. The trained DNN takes the value of the Reynolds number and historical POD coefficients as inputs and generates predictions for future temporal coefficients, which can then be used to reconstruct the solution. The synergy between the POD-based spatial decomposition and the data-driven capabilities of the DNN results in an efficient and accurate model for approximating the solution. The study demonstrates the potential of combining model reduction techniques with machine learning approaches for solving complex partial differential equations, and it showcases the use of physics-informed deep learning for obtaining numerical solutions. The idea presented can be extended to solve more complicated problems involving the Navier-Stokes equations.
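
    A short sketch of the POD step described above follows, assuming a placeholder snapshot matrix in place of the PINN solutions of Burgers’ equation; it shows how spatial modes and temporal coefficients can be extracted by SVD and how the reduced solution is reconstructed.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is a solution of Burgers'
# equation at one time step.  A travelling-front expression is used here
# purely as placeholder data to illustrate the POD bookkeeping; in the
# paper the snapshots come from a PINN.
x = np.linspace(0.0, 1.0, 256)
t = np.linspace(0.0, 1.0, 100)
snapshots = np.array([np.tanh((0.3 + 0.4 * ti - x) / 0.05) for ti in t]).T  # (n_x, n_t)

# POD via the thin SVD of the snapshot matrix
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 4                                   # number of retained modes
modes = U[:, :r]                        # dominant spatial patterns
coeffs = np.diag(S[:r]) @ Vt[:r, :]     # temporal coefficients, shape (r, n_t)

# A DNN would be trained to map (Reynolds number, past coefficients) to
# future coefficients; reconstruction then needs only the predicted
# coefficients and the fixed spatial modes:
u_approx = modes @ coeffs
print("relative ROM error:",
      np.linalg.norm(u_approx - snapshots) / np.linalg.norm(snapshots))
```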

    A sinusoidal signal reconstruction method for the inversion of the mel-spectrogram

    Get PDF
    The synthesis of sound via deep learning methods has recently received much attention. Some problems for deep learning approaches to sound synthesis relate to the amount of data needed to specify an audio signal and the necessity of preserving both the long and short time coherence of the synthesised signal. Visual time-frequency representations such as the log-mel-spectrogram have gained in popularity. The log-mel-spectrogram is a perceptually informed representation of audio that greatly compresses the amount of information required for the description of the sound. However, because of this compression, this representation is not directly invertible. Both signal processing and machine learning techniques have previously been applied to the inversion of the log-mel-spectrogram, but they both caused audible distortions in the synthesised sounds due to issues of temporal and spectral coherence. In this paper, we outline the application of a sinusoidal model to the ‘inversion’ of the log-mel-spectrogram for pitched musical instrument sounds, outperforming state-of-the-art deep learning methods. The approach could be later used as a general decoding step from spectral to time intervals in neural applications.
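
    A toy sketch of the sinusoidal-model idea follows, assuming HTK-style mel bin centre frequencies and a simple per-frame peak-picking rule; it only illustrates the general decoding principle and is not the authors' algorithm.

```python
import numpy as np

def mel_to_hz(m):
    """HTK-style mel-to-Hz conversion."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def sinusoidal_invert(log_mel, sr=22050, hop=256, n_peaks=8, fmin=30.0, fmax=8000.0):
    """Toy sinusoidal 'inversion' of a log-mel-spectrogram: in every frame
    the n_peaks strongest mel bins are treated as sinusoidal partials whose
    frequencies are the bin centre frequencies and whose amplitudes come
    from the exponentiated log magnitudes.  Phase is accumulated across
    frames so each partial stays coherent."""
    n_mels, n_frames = log_mel.shape
    # centre frequency of each mel bin (uniform in mel between fmin and fmax)
    mel_edges = np.linspace(2595.0 * np.log10(1 + fmin / 700.0),
                            2595.0 * np.log10(1 + fmax / 700.0), n_mels + 2)
    centres_hz = mel_to_hz(mel_edges[1:-1])

    y = np.zeros(n_frames * hop)
    phase = np.zeros(n_mels)
    n = np.arange(hop)
    for f in range(n_frames):
        frame_out = np.zeros(hop)
        peaks = np.argsort(log_mel[:, f])[-n_peaks:]
        for p in peaks:
            amp = np.exp(log_mel[p, f])
            omega = 2.0 * np.pi * centres_hz[p] / sr
            frame_out += amp * np.sin(phase[p] + omega * n)
        # advance every partial's phase so partials stay coherent across frames
        phase = (phase + 2.0 * np.pi * centres_hz / sr * hop) % (2.0 * np.pi)
        y[f * hop:(f + 1) * hop] = frame_out
    return y / (np.max(np.abs(y)) + 1e-9)

# toy usage: invert a random 'log-mel-spectrogram' (a real input would come
# from a mel analysis of recorded audio)
demo = sinusoidal_invert(np.random.default_rng(0).uniform(-6, 0, (80, 40)))
print(demo.shape)  # (40 * 256,) samples at sr = 22050
```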

    Continuous Depth Recurrent Neural Differential Equations

    Full text link
    Recurrent neural networks (RNNs) have brought many advances in sequence labeling tasks and sequence modeling. However, their effectiveness is limited when the observations in the sequence are irregularly sampled, i.e., arrive at irregular time intervals. To address this, continuous-time variants of RNNs were introduced based on neural ordinary differential equations (NODEs). They learn a better representation of the data by continuously transforming the hidden states over time, taking into account the time interval between observations. However, they are still limited in their capability because they apply discrete transformations and a fixed, discrete number of layers (depth) to each input in the sequence to produce the output observation. We address this limitation by proposing RNNs based on differential equations that model continuous transformations over both depth and time to predict an output for a given input in the sequence. Specifically, we propose continuous depth recurrent neural differential equations (CDR-NDE), which generalize RNN models by continuously evolving the hidden states in both the temporal and depth dimensions. CDR-NDE considers two separate differential equations, one over each of these dimensions, and models the evolution in the temporal and depth directions alternately. We also propose the CDR-NDE-heat model, based on partial differential equations, which treats the computation of hidden states as solving a heat equation over time. We demonstrate the effectiveness of the proposed models by comparing them against state-of-the-art RNN models on real-world sequence labeling problems and data.
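
    A toy sketch of evolving a hidden state in both the time and depth directions follows, assuming untrained random weights and plain explicit Euler steps in place of the paper's models and solvers; it only illustrates the alternating time/depth evolution described above.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h = 3, 8
Wx = rng.standard_normal((d_h, d_in)) * 0.3   # input weights (toy, untrained)
Wh = rng.standard_normal((d_h, d_h)) * 0.3    # recurrent (time) weights
Wd = rng.standard_normal((d_h, d_h)) * 0.3    # depth-dynamics weights

def time_derivative(h, x):
    """dh/dt between observations, driven by the current input (toy ODE)."""
    return np.tanh(Wh @ h + Wx @ x) - h

def depth_derivative(h):
    """dh/ds along the depth dimension (toy ODE)."""
    return np.tanh(Wd @ h) - h

def cdr_nde_like(xs, dts, n_substeps=10):
    """Alternate explicit-Euler integration of the hidden state: first over
    the irregular time gap dt preceding each observation, then over a
    unit-length depth interval for that observation."""
    h = np.zeros(d_h)
    outputs = []
    for x, dt in zip(xs, dts):
        for _ in range(n_substeps):                  # evolve over the time gap
            h = h + (dt / n_substeps) * time_derivative(h, x)
        for _ in range(n_substeps):                  # evolve over depth
            h = h + (1.0 / n_substeps) * depth_derivative(h)
        outputs.append(h.copy())
    return np.stack(outputs)

# irregularly sampled sequence: inputs and the time gaps between them
xs = rng.standard_normal((5, d_in))
dts = np.array([0.1, 0.9, 0.2, 1.5, 0.3])
print(cdr_nde_like(xs, dts).shape)  # (5, 8): one hidden state per observation
```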