26 research outputs found

    Predicting battery depletion of neighboring wireless sensor nodes

    With a view to prolonging the lifetime of the wireless sensor network, many battery lifetime prediction algorithms run on individual nodes. If not properly designed, this approach may be detrimental and even accelerate battery depletion. Herein, we provide a comparative analysis of various machine-learning algorithms used to offload the energy inference task to the most energy-rich nodes, relieving the nodes that are entering a critical state. Taken to its extreme, our approach may be used to divert the energy-intensive tasks to a monitoring station, enabling a cloud-based approach to sensor network management. Experiments conducted in a controlled environment with real hardware have shown that RSSI can be used to infer the state of a remote wireless node as it approaches the cutoff point. The ADWIN algorithm was used to smooth the input data, helping a variety of machine learning algorithms to speed up and to improve their prediction accuracy.
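
    A minimal sketch of this idea, assuming a stream of numeric RSSI readings; the window-shrinking rule below is a simplified stand-in for the full ADWIN algorithm, and the cutoff threshold is purely illustrative:

    ```python
    from collections import deque

    class AdaptiveSmoother:
        """Simplified ADWIN-style smoother: keep a sliding window of RSSI
        readings and shrink it when its older and newer halves diverge,
        so the running mean tracks regime changes quickly."""

        def __init__(self, max_len=64, drift_threshold=3.0):
            self.window = deque(maxlen=max_len)
            self.drift_threshold = drift_threshold  # dBm gap signalling a change

        def update(self, rssi):
            self.window.append(rssi)
            half = len(self.window) // 2
            if half >= 4:
                old = list(self.window)[:half]
                new = list(self.window)[half:]
                # If the two halves disagree, drop the stale older half.
                if abs(sum(old) / len(old) - sum(new) / len(new)) > self.drift_threshold:
                    for _ in range(half):
                        self.window.popleft()
            return sum(self.window) / len(self.window)

    # Hypothetical usage: flag a neighbor whose smoothed RSSI suggests
    # it is approaching the cutoff point.
    CUTOFF_RSSI = -90.0  # illustrative threshold, in dBm
    smoother = AdaptiveSmoother()
    for reading in [-71.0, -72.0, -70.5, -88.0, -91.0, -92.5]:
        if smoother.update(reading) < CUTOFF_RSSI:
            print("neighbor entering critical state")
    ```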

    Online contrastive divergence with generative replay: experience replay without storing data

    Conceived in the early 1990s, Experience Replay (ER) has been shown to be a successful mechanism that allows online learning algorithms to reuse past experiences. Traditionally, ER can be applied to all machine learning paradigms (i.e., unsupervised, supervised, and reinforcement learning). Recently, ER has contributed to improving the performance of deep reinforcement learning. Yet, its application to many practical settings is still limited by the memory requirements of ER, which must explicitly store previous observations. To remedy this issue, we explore a novel approach, Online Contrastive Divergence with Generative Replay (OCD_GR), which uses the generative capability of Restricted Boltzmann Machines (RBMs) instead of recorded past experiences. The RBM is trained online and does not require the system to store any of the observed data points. We compare OCD_GR to ER on 9 real-world datasets, considering a worst-case scenario (data points arriving in sorted order) as well as a more realistic one (data points arriving in random order). Our results show that in 64.28% of the cases OCD_GR outperforms ER, and in the remaining 35.72% it has almost equal performance, while having considerably reduced space complexity (i.e., memory usage) at comparable time complexity.
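
    A minimal numpy sketch of the generative-replay idea, assuming binary input vectors; the hyperparameters and the single rehearsal step per observation are illustrative, not the paper's exact settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class ReplayRBM:
        """RBM trained online with CD-1; past experience is rehearsed by
        sampling from the model itself instead of storing observations."""

        def __init__(self, n_visible, n_hidden, lr=0.05):
            self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
            self.b_v = np.zeros(n_visible)
            self.b_h = np.zeros(n_hidden)
            self.lr = lr

        def _h_given_v(self, v):
            return sigmoid(v @ self.W + self.b_h)

        def _v_given_h(self, h):
            return sigmoid(h @ self.W.T + self.b_v)

        def cd1_update(self, v0):
            # One step of contrastive divergence on a single observation.
            h0 = self._h_given_v(v0)
            h0_sample = (rng.random(h0.shape) < h0).astype(float)
            v1 = self._v_given_h(h0_sample)
            h1 = self._h_given_v(v1)
            self.W += self.lr * (np.outer(v0, h0) - np.outer(v1, h1))
            self.b_v += self.lr * (v0 - v1)
            self.b_h += self.lr * (h0 - h1)

        def generate(self, n_steps=20):
            """Gibbs-sample a 'replayed' observation from the model."""
            v = (rng.random(self.W.shape[0]) < 0.5).astype(float)
            for _ in range(n_steps):
                h = (rng.random(self.W.shape[1]) < self._h_given_v(v)).astype(float)
                v = (rng.random(self.W.shape[0]) < self._v_given_h(h)).astype(float)
            return v

    rbm = ReplayRBM(n_visible=16, n_hidden=8)
    for _ in range(100):                  # stream of observations
        x = (rng.random(16) < 0.3).astype(float)
        rbm.cd1_update(x)                 # learn from the new point
        rbm.cd1_update(rbm.generate())    # rehearse a generated sample
    ```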

    Comparative study of deep learning methods for one-shot image classification (abstract)

    Training deep learning models for image classification requires a large amount of labeled data to overcome the challenges of overfitting and underfitting. In many practical applications, such labeled data are not available. In an attempt to solve this problem, the one-shot learning paradigm tries to create machine learning models capable of learning well from one or, at most, a few labeled examples per class. To better understand the behavior of various deep learning models and approaches for one-shot learning, in this abstract we perform a comparative study of the most widely used ones on a challenging real-world dataset, i.e. Fashion-MNIST.

    Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science

    Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (an Erdős–Rényi random graph) between two consecutive layers of neurons into a scale-free topology during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, reducing the number of parameters quadratically, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
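
    A numpy sketch of one topology-evolution step under this scheme (layer sizes, the connection density, and the pruning fraction zeta are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    n_in, n_out, density, zeta = 784, 300, 0.05, 0.3  # illustrative values

    # Erdős–Rényi initialization: each connection exists with prob. `density`.
    mask = rng.random((n_in, n_out)) < density
    W = rng.normal(0, 0.1, (n_in, n_out)) * mask

    def evolve_topology(W, mask, zeta):
        """One evolution step: prune the fraction `zeta` of existing
        connections with smallest magnitude, then regrow the same number
        of connections at random empty positions."""
        weights = np.abs(W[mask])
        n_prune = int(zeta * weights.size)
        if n_prune == 0:
            return W, mask
        threshold = np.sort(weights)[n_prune]
        prune = mask & (np.abs(W) < threshold)
        mask = mask & ~prune
        W = W * mask
        # Regrow: pick random currently-empty slots and give them small weights.
        empty = np.argwhere(~mask)
        chosen = empty[rng.choice(len(empty), size=prune.sum(), replace=False)]
        mask[chosen[:, 0], chosen[:, 1]] = True
        W[chosen[:, 0], chosen[:, 1]] = rng.normal(0, 0.1, len(chosen))
        return W, mask

    # After each training epoch (gradient updates applied only to masked
    # weights), the topology is evolved toward a scale-free structure:
    W, mask = evolve_topology(W, mask, zeta)
    ```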

    On-Line Building Energy Optimization Using Deep Reinforcement Learning

    Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of future power systems and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using deep reinforcement learning, a hybrid class of methods that combines reinforcement learning with deep learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, deep Q-learning and deep policy gradient, both of which have been extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers to encourage more efficient use of electricity.
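
    A PyTorch sketch of the multiple-simultaneous-actions idea, using one Q-value head per appliance; the network sizes, state features, and action sets below are hypothetical and not taken from the paper:

    ```python
    import torch
    import torch.nn as nn

    class MultiActionQNet(nn.Module):
        """Q-network with one head per appliance, so a single forward pass
        yields Q-values for several simultaneous scheduling decisions."""

        def __init__(self, state_dim, n_appliances, n_levels):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
            )
            # One linear head per appliance, each over its power levels.
            self.heads = nn.ModuleList(
                nn.Linear(128, n_levels) for _ in range(n_appliances)
            )

        def forward(self, state):
            z = self.trunk(state)
            return torch.stack([head(z) for head in self.heads], dim=1)

    # Hypothetical setup: state = (time features, PV output, prices, ...).
    net = MultiActionQNet(state_dim=24, n_appliances=5, n_levels=3)
    state = torch.randn(1, 24)
    q = net(state)                 # shape: (1, 5 appliances, 3 levels)
    actions = q.argmax(dim=-1)     # one action per appliance, chosen jointly
    ```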

    Soluble forms of tau are toxic in Alzheimer's disease

    Accumulation of neurofibrillary tangles (NFTs), intracellular inclusions of fibrillar forms of tau, is a hallmark of Alzheimer's disease. NFTs have been considered causative of neuronal death; however, recent evidence challenges this idea. Other species of tau, such as soluble misfolded, hyperphosphorylated, and mislocalized forms, are now being implicated as toxic. Here we review the data supporting soluble tau as toxic to neurons and synapses in the brain, and the implications of these data for the development of therapeutic strategies for Alzheimer's disease and other tauopathies.

    Network computations in artificial intelligence


    Big IoT data mining for real-time energy disaggregation in buildings

    In the smart grid context, the identification and prediction of building energy flexibility is a challenging open question, paving the way for new optimized behaviors on the demand side. At the same time, the latest smart meter developments allow us to monitor in real-time the power consumption of home appliances, aiming at very accurate energy disaggregation. However, due to practical constraints, it is infeasible in the near future to attach smart meter devices to all home appliances; this is the problem addressed herein. We propose a hybrid approach which combines sparse smart meters with machine learning methods. Using a subset of buildings equipped with a subset of smart meters, we create a database on which we train two deep learning models, i.e. Factored Four-Way Conditional Restricted Boltzmann Machines (FFW-CRBMs) and Disjunctive FFW-CRBMs. We show how our method may be used to accurately predict and identify the energy flexibility of buildings unequipped with smart meters, starting from their aggregated energy values. The proposed approach was validated on a real database, namely the Reference Energy Disaggregation Dataset. The results show that for the flexibility prediction problem solved here, the Disjunctive FFW-CRBM outperforms the FFW-CRBM, while for the classification task their capabilities are comparable.
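
    The FFW-CRBM models themselves are too involved for a short sketch; the snippet below illustrates only the hybrid setup, with an off-the-shelf regressor as a stand-in: train on the sub-metered buildings, then disaggregate buildings that report only an aggregate signal (all names, shapes, and data here are hypothetical):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    # Hypothetical training data from the sub-metered buildings:
    # sliding windows of aggregate power -> per-appliance power.
    n_windows, window_len, n_appliances = 5000, 60, 4
    X_aggregate = rng.random((n_windows, window_len))    # aggregate readings
    y_appliance = rng.random((n_windows, n_appliances))  # sub-metered targets

    # Train a disaggregator on the instrumented subset of buildings ...
    model = RandomForestRegressor(n_estimators=100).fit(X_aggregate, y_appliance)

    # ... then predict appliance-level consumption (and hence flexibility)
    # for buildings that only report an aggregate smart-meter signal.
    new_building_window = rng.random((1, window_len))
    per_appliance = model.predict(new_building_window)   # shape (1, 4)
    ```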

    A topological insight into restricted Boltzmann machines (extended abstract)

    Restricted Boltzmann Machines (RBMs) and models derived from them have been successfully used as basic building blocks in deep neural networks for automatic feature extraction and unsupervised weight initialization, but also as standalone models for density estimation, activity recognition, and so on. Thus, their generative and discriminative capabilities, as well as their computational time, are instrumental to a wide range of applications. The main contribution of this paper is to study the above problems by looking at RBMs and Gaussian RBMs (GRBMs) from a topological perspective, bringing insights from network science, an extension of graph theory which analyzes real-world complex networks.
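
    A short sketch of the kind of network-science analysis this perspective enables, assuming a sparse RBM whose connectivity is given as a bipartite adjacency mask (sizes and density below are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical sparse RBM connectivity: visible x hidden adjacency mask.
    n_visible, n_hidden, density = 784, 500, 0.02
    adjacency = rng.random((n_visible, n_hidden)) < density

    # Degrees in the bipartite connectivity graph.
    visible_degrees = adjacency.sum(axis=1)
    hidden_degrees = adjacency.sum(axis=0)

    # A heavy right tail in this histogram (compared to the binomial tail
    # of an Erdős–Rényi graph) would hint at scale-free-like structure.
    counts = np.bincount(visible_degrees)
    for degree, count in enumerate(counts):
        if count:
            print(f"degree {degree}: {count} visible units")
    ```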

    One-shot learning using Mixture of Variational Autoencoders: A generalization learning approach

    Deep learning, even though it is very successful nowadays, traditionally needs very large amounts of labeled data to perform well on classification tasks. In an attempt to solve this problem, the one-shot learning paradigm, which makes use of just one labeled sample per class and prior knowledge, is becoming increasingly important. In this paper, we propose a new one-shot learning method, dubbed MoVAE (Mixture of Variational AutoEncoders), to perform classification. Complementary to prior studies, MoVAE represents a shift of paradigm in comparison with the usual one-shot learning methods, as it does not use any prior knowledge. Instead, it starts from zero knowledge and one labeled sample per class. Afterward, by using unlabeled data and the concept of generalization learning (in a way, more as humans do), it is capable of gradually improving its performance by itself. Moreover, even if no unlabeled data are available, MoVAE can still perform well in one-shot learning classification. We demonstrate empirically the efficiency of our proposed approach on three datasets, i.e. handwritten digits (MNIST), fashion products (Fashion-MNIST), and handwritten characters (Omniglot), showing that MoVAE outperforms state-of-the-art one-shot learning algorithms.
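
    A PyTorch sketch of the mixture-of-VAEs decision rule, assuming one (already trained) VAE per class; reconstruction error stands in here for the full likelihood-based assignment:

    ```python
    import torch
    import torch.nn as nn

    class TinyVAE(nn.Module):
        """Minimal VAE; in the mixture-of-VAEs idea, one model per class."""

        def __init__(self, in_dim=784, z_dim=16):
            super().__init__()
            self.enc = nn.Linear(in_dim, 2 * z_dim)  # -> mean and log-variance
            self.dec = nn.Linear(z_dim, in_dim)

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            return torch.sigmoid(self.dec(z))

    def classify(x, vaes):
        """Assign x to the class whose VAE reconstructs it best."""
        with torch.no_grad():
            errors = torch.stack([nn.functional.mse_loss(vae(x), x) for vae in vaes])
        return int(errors.argmin())

    # Hypothetical: ten per-class VAEs (each seeded with one labeled sample,
    # then gradually refined with pseudo-labeled unlabeled data).
    vaes = [TinyVAE() for _ in range(10)]
    x = torch.rand(1, 784)
    print("predicted class:", classify(x, vaes))
    ```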