
    Hidden Markov Models and their Application for Predicting Failure Events

    We show how Markov mixed membership models (MMMM) can be used to predict the degradation of assets. We model the degradation path of individual assets to predict overall failure rates. Instead of a separate distribution for each hidden state, we use hierarchical mixtures of distributions in the exponential family. In our approach, the observation distribution of each state is a finite mixture over a small set of (simpler) distributions shared across all states. Using tied-mixture observation distributions offers several advantages: the mixtures act as a regularizer for typically very sparse problems, and they reduce the computational effort of the learning algorithm since fewer distributions need to be estimated. Sharing mixture components also shares statistical strength between the Markov states and thus enables transfer learning. For individual assets, we determine the trade-off between the risk of failure and extended operating hours by combining an MMMM with a partially observable Markov decision process (POMDP) to dynamically optimize the policy for when and how to maintain the asset. Comment: To be published in the proceedings of ICCS 2020; @booklet{EasyChair:3183, author = {Paul Hofmann and Zaid Tashman}, title = {Hidden Markov Models and their Application for Predicting Failure Events}, howpublished = {EasyChair Preprint no. 3183}, year = {2020}}
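    As a concrete illustration of the tied-mixture idea, the sketch below builds a toy HMM in which every hidden state emits from the same shared pool of Gaussian components and differs only in its mixture weights, then evaluates a sequence likelihood with a forward pass. All sizes, parameters, and the sensor trace are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of a tied-mixture HMM, assuming a shared pool of
# Gaussian components; states differ only in their mixture weights.
import numpy as np
from scipy.stats import norm

# Shared component pool (e.g., "healthy" vs "degraded" sensor readings).
means, stds = np.array([0.0, 5.0]), np.array([1.0, 1.5])

# Per-state mixture weights over the shared pool (rows sum to 1).
weights = np.array([[0.9, 0.1],
                    [0.5, 0.5],
                    [0.1, 0.9]])

trans = np.array([[0.90, 0.09, 0.01],   # degradation tends to progress
                  [0.00, 0.90, 0.10],
                  [0.00, 0.00, 1.00]])
init = np.array([1.0, 0.0, 0.0])

obs = np.array([0.2, 1.1, 4.0, 5.3])    # toy sensor trace

# Emission likelihoods: component densities are computed once and
# reused by every state through its weights (the "tying").
comp_pdf = norm.pdf(obs[:, None], means, stds)   # shape (T, n_components)
emit = comp_pdf @ weights.T                      # shape (T, n_states)

# Standard forward pass for P(obs) under the tied-mixture HMM.
alpha = init * emit[0]
for t in range(1, len(obs)):
    alpha = (alpha @ trans) * emit[t]
print("sequence likelihood:", alpha.sum())
```

    Because the component densities are computed once and reused by every state, adding states costs only a new row of mixture weights, which is the regularization and computational saving the abstract describes.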

    Bidirectional Learning in Recurrent Neural Networks Using Equilibrium Propagation

    Neurobiologically plausible learning algorithms for recurrent neural networks that can perform supervised learning are a neglected area of study. Equilibrium propagation is a recent synthesis of several ideas in biological and artificial neural network research that uses a continuous-time, energy-based neural model with a local learning rule. However, despite dealing with recurrent networks, equilibrium propagation has only been applied to discriminative categorization tasks. This thesis generalizes equilibrium propagation to bidirectional learning with asymmetric weights. By simultaneously learning the discriminative and generative transformations for a set of data points and their corresponding category labels, bidirectional equilibrium propagation uses recurrence and weight asymmetry to share related but non-identical representations within the network. Experiments on an artificial dataset demonstrate the ability to learn both transformations, as well as the ability of asymmetric-weight networks to generalize their discriminative training to the untrained generative task.
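    For readers unfamiliar with the underlying mechanism, here is a minimal sketch of standard equilibrium propagation (symmetric weights, a free and a weakly clamped relaxation phase, and a contrastive local update), on which the thesis builds; the bidirectional, asymmetric-weight extension is not reproduced here. Network sizes, rates, and the toy target are my assumptions.

```python
# Minimal sketch of standard (symmetric-weight) equilibrium propagation.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
n = n_in + n_hid + n_out
rho = lambda s: np.clip(s, 0.0, 1.0)               # hard-sigmoid activation

W = rng.normal(0, 0.1, (n, n))
W = (W + W.T) / 2                                  # symmetric weights
np.fill_diagonal(W, 0)                             # no self-connections
free = slice(n_in, n)                              # hidden + output units
out = slice(n_in + n_hid, n)

def relax(x, y=None, beta=0.0, steps=100, dt=0.1):
    """Settle toward a fixed point of ds/dt = -dE/ds; inputs are
    clamped, outputs are weakly nudged toward y when given."""
    s = np.zeros(n)
    s[:n_in] = x
    for _ in range(steps):
        drho = ((s >= 0) & (s <= 1)).astype(float) # rho'(s)
        ds = -s + drho * (W @ rho(s))
        if y is not None:
            ds[out] += beta * (y - s[out])         # weak clamping term
        s[free] += dt * ds[free]
    return s

x, y = rng.random(n_in), np.array([1.0, 0.0])
beta, lr = 0.5, 0.05
s0 = relax(x)                                      # free phase
s1 = relax(x, y, beta)                             # weakly clamped phase
# Local contrastive rule: dW_ij ~ (rho_i*rho_j at the nudged fixed
# point minus rho_i*rho_j at the free fixed point) / beta.
dW = (np.outer(rho(s1), rho(s1)) - np.outer(rho(s0), rho(s0))) / beta
np.fill_diagonal(dW, 0)
W += lr * dW
print("output before/after nudge:", s0[out], s1[out])
```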

    Approximate information state based convergence analysis of recurrent Q-learning

    In spite of the large literature on reinforcement learning (RL) algorithms for partially observable Markov decision processes (POMDPs), a complete theoretical understanding is still lacking. In a partially observable setting, the history of data available to the agent increases over time, so most practical algorithms either truncate the history to a finite window or compress it using a recurrent neural network, leading to an agent state that is non-Markovian. In this paper, it is shown that, in spite of the lack of the Markov property, recurrent Q-learning (RQL) converges in the tabular setting. Moreover, it is shown that the quality of the converged limit depends on the quality of the representation, which is quantified in terms of what is known as an approximate information state (AIS). Based on this characterization of the approximation error, a variant of RQL with AIS losses is presented. This variant performs better than a strong baseline for RQL that does not use AIS losses. It is demonstrated that there is a strong correlation between the performance of RQL over time and the loss associated with the AIS representation. Comment: 25 pages, 6 figures.
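    To make the setting concrete, the sketch below runs tabular Q-learning on a non-Markovian agent state built by truncating the history to the last k observations, one of the two compressions the abstract mentions. The toy two-state POMDP and all constants are illustrative assumptions, not the paper's experiments.

```python
# Minimal sketch of tabular Q-learning on a finite-window agent state.
import random
from collections import defaultdict, deque

random.seed(0)
k, n_actions, alpha, gamma, eps = 2, 2, 0.1, 0.9, 0.1
Q = defaultdict(lambda: [0.0] * n_actions)

def env_step(hidden, action):
    """Toy 2-state POMDP: reward for matching the hidden state, which
    flips with prob 0.1; observations are noisy (flipped 20% of the time)."""
    r = 1.0 if action == hidden else -1.0
    hidden = 1 - hidden if random.random() < 0.1 else hidden
    obs = hidden if random.random() < 0.8 else 1 - hidden
    return hidden, obs, r

hidden, window = 0, deque([0] * k, maxlen=k)
for _ in range(50_000):
    z = tuple(window)                      # agent state (non-Markovian)
    a = (random.randrange(n_actions) if random.random() < eps
         else max(range(n_actions), key=lambda i: Q[z][i]))
    hidden, obs, r = env_step(hidden, a)
    window.append(obs)
    z2 = tuple(window)
    # Standard Q-learning update applied to the windowed agent state.
    Q[z][a] += alpha * (r + gamma * max(Q[z2]) - Q[z][a])

print({z: [round(v, 2) for v in q] for z, q in sorted(Q.items())})
```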

    Extracting Symbolic Representations Learned by Neural Networks

    Understanding what neural networks learn from training data is of great interest in data mining, data analysis, critical applications, and the evaluation of neural network models. Unfortunately, the product of neural network training is typically a set of opaque matrices of floating-point numbers that are not readily interpretable. This difficulty has inspired substantial past research on how to extract symbolic, human-readable representations from a trained neural network, but the results obtained so far are very limited (e.g., the large rule sets produced). This problem arises in part from the distributed hidden layer representation created during learning. Most past symbolic knowledge extraction algorithms have focused on progressively more sophisticated ways to cluster this distributed representation. In contrast, in this dissertation, I take a different approach: I develop ways to alter the error backpropagation training process itself so that it creates a representation in the hidden layer activation space that is more amenable to existing symbolic extraction methods. In this context, this dissertation research makes four main contributions. First, modifications to the backpropagation learning procedure are derived mathematically, and it is shown that these modifications can be accomplished as local computations. Second, the effectiveness of the modified learning procedure for feedforward networks is established by showing that, on a set of benchmark tasks, it produces rule sets that are substantially simpler than those produced by standard backpropagation learning. Third, the approach is extended to simple recurrent networks, and experimental evaluation shows a remarkable reduction in the sizes of the finite state machines extracted from recurrent networks trained in this way. Finally, the method is further modified to work on echo state networks, and computational experiments again show significant improvement in finite state machine extraction from these networks. These results establish that principled modification of error backpropagation, so that it constructs a better-separated hidden layer representation, is an effective way to improve contemporary symbolic extraction methods.
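    The dissertation's exact derivation is not reproduced here, but the following sketch illustrates the general idea of augmenting backpropagation with a local term that drives sigmoid hidden activations toward 0 or 1, so the hidden space forms well-separated clusters for later rule extraction. The penalty form lam * h * (1 - h), the toy task, and all hyperparameters are my assumptions.

```python
# Minimal sketch: backpropagation plus a local separation penalty on
# the hidden layer, assuming the penalty lam * sum(h * (1 - h)).
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((64, 4))
y = (X.sum(1, keepdims=True) > 2).astype(float)  # toy separable task

W1, b1 = rng.normal(0, 0.5, (4, 6)), np.zeros(6)
W2, b2 = rng.normal(0, 0.5, (6, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))
lr, lam = 0.5, 0.05

for epoch in range(2000):
    h = sig(X @ W1 + b1)                 # hidden activations
    p = sig(h @ W2 + b2)                 # output probabilities
    # Cross-entropy error backpropagated as usual.
    d_out = p - y
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Extra LOCAL term: gradient of lam * sum(h * (1 - h)), minimized
    # when each activation sits at 0 or 1 (well-separated codes).
    d_hid += lam * (1 - 2 * h) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(0)

# Near-binary hidden codes are easy targets for rule extraction.
print("binarized hidden codes:", np.unique((h > 0.5).astype(int), axis=0))
```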

    Spike encoding techniques for IoT time-varying signals benchmarked on a neuromorphic classification task

    Spiking Neural Networks (SNNs), known for their potential to enable low energy consumption and computational cost, can bring significant advantages to the realm of embedded machine learning for edge applications. However, input coming from standard digital sensors must be encoded into spike trains before it can be processed with neuromorphic computing technologies. We present here a detailed comparison of available spike encoding techniques for translating time-varying signals into the event-based signal domain, tested on two datasets, both acquired with commercially available digital devices: the Free Spoken Digit dataset (FSD), consisting of 8-kHz audio files, and the WISDM dataset, composed of 20-Hz recordings of human activity from mobile and wearable inertial sensors. We propose a complete pipeline to benchmark these encoding techniques by performing time-dependent signal classification with a Spiking Convolutional Neural Network (sCNN), including a signal preprocessing step consisting of a bank of filters inspired by the human cochlea, feature extraction by production of a sonogram, transfer learning via an equivalent ANN, and model compression schemes aimed at resource optimization. The resulting performance comparison and analysis provides a powerful practical tool, empowering developers to select the most suitable coding method based on the type of data and the desired processing algorithms, and further expands the applicability of neuromorphic computational paradigms to embedded sensor systems widely employed in the IoT and industrial domains.
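    As a flavor of what such encoding techniques look like, here is a minimal sketch of two common schemes in this family: rate coding, where spike probability tracks amplitude, and delta (send-on-delta) coding, where UP/DOWN events fire on threshold crossings. The toy signal and thresholds are illustrative assumptions, not the paper's benchmark settings.

```python
# Minimal sketch of rate coding and delta (send-on-delta) coding.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 200)
signal = 0.5 + 0.4 * np.sin(2 * np.pi * 3 * t)   # toy sensor in [0, 1]

# Rate coding: at each step, emit a spike with probability = amplitude.
rate_spikes = rng.random(signal.shape) < signal

# Delta coding: UP/DOWN spike channels fire whenever the tracked
# reference drifts more than `theta` from the signal (event-based).
theta, ref = 0.1, signal[0]
up = np.zeros_like(signal, dtype=bool)
down = np.zeros_like(signal, dtype=bool)
for i, x in enumerate(signal):
    if x - ref >= theta:
        up[i], ref = True, ref + theta
    elif ref - x >= theta:
        down[i], ref = True, ref - theta

print(f"rate spikes: {rate_spikes.sum()}, "
      f"delta spikes: {up.sum() + down.sum()} (UP {up.sum()}, DOWN {down.sum()})")
```

    Note how the event-based scheme emits far fewer spikes on slowly varying stretches, which is the energy argument behind applying SNNs to such sensor streams.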

    Multiagent Deep Reinforcement Learning: Challenges and Directions Towards Human-Like Approaches

    This paper surveys the field of multiagent deep reinforcement learning. The combination of deep neural networks with reinforcement learning has gained increased traction in recent years and is slowly shifting the focus from single-agent to multiagent environments. Dealing with multiple agents is inherently more complex, as (a) the future rewards depend on the joint actions of multiple players and (b) the computational complexity grows with the number of agents. We present the most common multiagent problem representations and their main challenges, and identify five research areas that address one or more of these challenges: centralised training and decentralised execution, opponent modelling, communication, efficient coordination, and reward shaping. We find that many computational studies rely on unrealistic assumptions or are not generalisable to other settings, and that they struggle to overcome the curse of dimensionality or nonstationarity. Approaches from psychology and sociology capture promising relevant behaviours, such as communication and coordination. We suggest that, for multiagent reinforcement learning to be successful, future research address these challenges with an interdisciplinary approach, opening up new possibilities for more human-oriented solutions in multiagent reinforcement learning. Comment: 37 pages, 6 figures.
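    To ground the nonstationarity challenge, the sketch below runs two independent Q-learners in a repeated two-action coordination game where each agent's reward depends on the joint action; from each agent's perspective the environment shifts as the other agent's policy changes. The payoff matrix and learning constants are illustrative assumptions.

```python
# Minimal sketch of independent learners in a repeated matrix game.
import random

random.seed(0)
# Coordination game: both choose 0 or both choose 1 -> reward 1, else 0.
payoff = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}
Q = [[0.0, 0.0], [0.0, 0.0]]          # Q[agent][action], stateless game
alpha, eps = 0.1, 0.1

for step in range(5000):
    acts = [random.randrange(2) if random.random() < eps
            else max((0, 1), key=lambda a: Q[i][a]) for i in (0, 1)]
    r = payoff[tuple(acts)]
    for i in (0, 1):
        # Each agent updates as if the environment were stationary, but
        # the other agent's changing policy makes it non-stationary:
        # the core difficulty the survey highlights.
        Q[i][acts[i]] += alpha * (r - Q[i][acts[i]])

print("agent policies:", [max((0, 1), key=lambda a: Q[i][a]) for i in (0, 1)])
```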

    Output Feedback Fractional-Order Nonsingular Terminal Sliding Mode Control of Underwater Remotely Operated Vehicles

    For the 4-DOF (degrees of freedom) trajectory tracking control problem of underwater remotely operated vehicles (ROVs) in the presence of model uncertainties and external disturbances, a novel output feedback fractional-order nonsingular terminal sliding mode control (FO-NTSMC) technique is introduced, combining an equivalent-output-injection sliding mode observer (SMO), the terminal sliding mode control (TSMC) principle, and fractional calculus. The equivalent-output-injection SMO is applied to reconstruct the full state in finite time. Meanwhile, the FO-NTSMC algorithm, based on a newly proposed fractional-order switching manifold, is designed to stabilize the tracking error to the equilibrium points in finite time. The corresponding stability analysis of the closed-loop system is presented using the fractional-order version of Lyapunov stability theory. Comparative numerical simulation results are presented and analyzed to demonstrate the effectiveness of the proposed method. Finally, it is noteworthy that the proposed output feedback FO-NTSMC technique can be used to control a broad range of nonlinear second-order dynamical systems in finite time.
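    The abstract's two ingredients can be illustrated numerically. Below is a minimal sketch (not the paper's exact manifold) that approximates the fractional derivative D^alpha of a tracking error with the Grunwald-Letnikov scheme and evaluates a nonsingular-terminal-style sliding variable s = D^alpha(e) + k |e|^(p/q) sign(e); the gains, exponents, and error trajectory are illustrative assumptions.

```python
# Minimal sketch: Grunwald-Letnikov fractional derivative plus a
# nonsingular-terminal-style sliding variable, assuming toy parameters.
import numpy as np

def gl_weights(alpha, n):
    """GL weights w_j = (-1)^j * C(alpha, j), via a stable recursion."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(x, alpha, dt):
    """Approximate the fractional derivative D^alpha of a sampled signal."""
    w = gl_weights(alpha, len(x))
    return np.array([w[:i + 1] @ x[i::-1] for i in range(len(x))]) / dt ** alpha

dt = 0.01
t = np.arange(0.0, 2.0, dt)
e = np.exp(-t) * np.sin(2 * np.pi * t)       # toy tracking-error trajectory

alpha, k, p, q = 0.8, 2.0, 5, 7              # 0 < alpha < 1, p/q < 1 (nonsingular)
s = gl_derivative(e, alpha, dt) + k * np.abs(e) ** (p / q) * np.sign(e)
print("sliding variable at t = 0.5 s and 1.0 s:", s[50], s[100])
```

    In a full controller the control law would be chosen to drive s to zero in finite time; the fractional exponent p/q < 1 avoids the singularity of classical terminal sliding manifolds near e = 0.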