
    A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition

    This study introduces PV-RNN, a novel variational RNN inspired by predictive-coding ideas. The model learns to extract the probabilistic structures hidden in fluctuating temporal patterns by dynamically changing the stochasticity of its latent states. Its architecture attempts to address two major concerns of variational Bayes RNNs: how latent variables can learn meaningful representations and how the inference model can transfer future observations to the latent variables. PV-RNN does both by introducing adaptive vectors mirroring the training data, whose values can then be adapted differently during evaluation. Moreover, prediction errors during backpropagation, rather than external inputs during the forward computation, are used to convey information about the external data to the network. For testing, we introduce error regression, a procedure inspired by predictive coding that leverages these mechanisms to predict unseen sequences. The model introduces a weighting parameter, the meta-prior, to balance the optimization pressure placed on the two terms of a lower bound on the marginal likelihood of the sequential data. We test the model on two datasets with probabilistic structures and show that with high values of the meta-prior the network develops deterministic chaos through which the data's randomness is imitated, whereas with low values the model behaves as a random process. The network performs best at intermediate values, where it captures the latent probabilistic structure with good generalization. Analyzing the meta-prior's impact on the network allows us to study precisely the theoretical value and practical benefits of incorporating stochastic dynamics in our model. On a robot imitation task, our model using error regression achieves better prediction performance than a standard variational Bayes model lacking such a procedure. Comment: The paper has been accepted in Neural Computation.
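
    The balancing role of the meta-prior can be made concrete with a small sketch. The snippet below is not the authors' implementation; it only illustrates, under an assumed Gaussian latent model and an assumed squared-error reconstruction term, how a single weight could trade off the two terms of the sequence lower bound described above.

        import numpy as np

        def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
            # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over dimensions.
            var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
            return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

        def negative_lower_bound(x, x_hat, posterior, prior, meta_prior):
            # posterior / prior: lists of (mu, logvar) pairs, one per time step.
            # Reconstruction term (squared prediction error) plus the meta-prior-weighted
            # KL term: a high meta_prior pressures the posterior toward the prior
            # (deterministic-like dynamics), a low value leaves the latents noise-driven.
            recon = 0.5 * np.sum((np.asarray(x) - np.asarray(x_hat)) ** 2)
            kl = sum(gaussian_kl(mq, lq, mp, lp)
                     for (mq, lq), (mp, lp) in zip(posterior, prior))
            return recon + meta_prior * kl

    An intermediate meta_prior would then correspond to the regime reported above, in which the latent states remain stochastic yet still carry the data's probabilistic structure.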

    Predictive modelling of building energy consumption based on a hybrid nature-inspired optimization algorithm

    Overall energy consumption has expanded over the previous decades because of rapid population growth, urbanization, and industrialization. The high demand for energy leads to a higher cost per unit of energy, which can impact the running costs of commercial and residential dwellings. Hence, there is a need for more effective predictive techniques that can measure and optimize the energy usage of the large arrays of connected Internet of Things (IoT) devices and control points that constitute modern built environments. In this paper, we propose a lightweight IoT framework for predicting energy usage at a localized level for optimal configuration of building-wide energy dissemination policies. The Autoregressive Integrated Moving Average (ARIMA) model, as a statistical linear model, could be used for this purpose; however, it is unable to model the dynamic nonlinear relationships in nonstationary, fluctuating power consumption data. Therefore, we have developed an improved hybrid model based on ARIMA, Support Vector Regression (SVR), and Particle Swarm Optimization (PSO) to predict energy usage precisely from supplied data. The proposed model is evaluated using power consumption data acquired from environmental actuator devices controlling a large functional space in a building. Results show that the proposed hybrid model outperforms alternative techniques in forecasting power consumption. The approach is appropriate for building energy policy implementations due to its precise estimates of energy consumption and its lightweight monitoring infrastructure, which can reduce energy costs. Moreover, it provides an accurate tool for optimizing energy consumption strategies in wider built environments such as smart cities.
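
    A minimal sketch of the hybrid idea follows. It assumes statsmodels' ARIMA and scikit-learn's SVR; the lag construction, the ARIMA order, and the small particle-swarm loop tuning (C, gamma) are illustrative stand-ins rather than the paper's exact configuration.

        import numpy as np
        from sklearn.svm import SVR
        from statsmodels.tsa.arima.model import ARIMA

        def lagged(x, n_lags):
            # Build a (samples, n_lags) feature matrix and aligned targets from a 1-D series.
            X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
            return X, x[n_lags:]

        def hybrid_forecast(series, order=(2, 1, 1), n_lags=4, val_frac=0.2, steps=24):
            # 1) ARIMA captures the linear component of the consumption series.
            arima = ARIMA(series, order=order).fit()
            resid = np.asarray(arima.resid)

            # 2) SVR models the nonlinear structure left in the ARIMA residuals.
            X, y = lagged(resid, n_lags)
            split = int(len(y) * (1 - val_frac))

            def val_error(p):
                c, g = np.exp(p)                      # search (C, gamma) in log-space
                svr = SVR(C=c, gamma=g).fit(X[:split], y[:split])
                return np.mean((svr.predict(X[split:]) - y[split:]) ** 2)

            # 3) A tiny particle swarm tunes the SVR hyperparameters on a validation split.
            rng = np.random.default_rng(0)
            pos = rng.uniform(-3.0, 3.0, size=(12, 2))
            vel = np.zeros_like(pos)
            pbest, pcost = pos.copy(), np.array([val_error(p) for p in pos])
            for _ in range(20):
                gbest = pbest[np.argmin(pcost)]
                vel = (0.7 * vel
                       + 1.5 * rng.random(pos.shape) * (pbest - pos)
                       + 1.5 * rng.random(pos.shape) * (gbest - pos))
                pos = pos + vel
                cost = np.array([val_error(p) for p in pos])
                better = cost < pcost
                pbest[better], pcost[better] = pos[better], cost[better]
            c, g = np.exp(pbest[np.argmin(pcost)])
            svr = SVR(C=c, gamma=g).fit(X, y)

            # 4) Final forecast: ARIMA forecast plus the SVR's iterated residual correction.
            linear = np.asarray(arima.forecast(steps))
            window, corr = list(resid[-n_lags:]), []
            for _ in range(steps):
                r_hat = float(svr.predict(np.asarray(window[-n_lags:]).reshape(1, -1))[0])
                corr.append(r_hat)
                window.append(r_hat)
            return linear + np.array(corr)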

    Multilevel Combinatorial Optimization Across Quantum Architectures

    Emerging quantum processors provide an opportunity to explore new approaches for solving traditional problems in the post-Moore's-law supercomputing era. However, the limited number of qubits makes it infeasible to tackle massive real-world datasets directly in the near future, leading to new challenges in utilizing these quantum processors for practical purposes. Hybrid quantum-classical algorithms that leverage both quantum and classical devices are considered one of the main strategies for applying quantum computing to large-scale problems. In this paper, we advocate the use of multilevel frameworks for combinatorial optimization as a promising general paradigm for designing hybrid quantum-classical algorithms. To demonstrate this approach, we apply it to two well-known combinatorial optimization problems, namely the Graph Partitioning Problem and the Community Detection Problem. We develop hybrid multilevel solvers with quantum local search on D-Wave's quantum annealer and IBM's gate-model quantum processor. We carry out experiments on graphs that are orders of magnitude larger than the current quantum hardware size, and we observe results comparable to state-of-the-art solvers in terms of solution quality.
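
    As an illustration of the multilevel pattern described above (and not the authors' code), the sketch below coarsens a weighted graph by heavy-edge matching, brute-forces a balanced bipartition of the coarsest graph at the point where the hybrid scheme would call a quantum local search (annealer or QAOA), and then projects and refines the solution back up. It assumes a connected graph with no self-loops, given as a dict mapping each undirected edge (u, v), listed once, to a weight.

        import itertools
        from collections import defaultdict

        def coarsen(edges):
            # One level of heavy-edge matching: each unmatched node merges with its
            # heaviest unmatched neighbour; edge weights are summed in the coarse graph.
            nbrs = defaultdict(dict)
            for (u, v), w in edges.items():
                nbrs[u][v] = nbrs[v][u] = w
            matched, merge = set(), {}
            for u in sorted(nbrs, key=lambda n: -max(nbrs[n].values())):
                if u in matched:
                    continue
                cands = [v for v in nbrs[u] if v not in matched]
                v = max(cands, key=lambda v: nbrs[u][v]) if cands else u
                matched |= {u, v}
                merge[u] = merge[v] = u
            coarse = defaultdict(float)
            for (u, v), w in edges.items():
                cu, cv = merge[u], merge[v]
                if cu != cv:
                    coarse[tuple(sorted((cu, cv)))] += w
            return dict(coarse), merge

        def cut(edges, part):
            return sum(w for (u, v), w in edges.items() if part[u] != part[v])

        def solve_coarsest(edges):
            # Brute-force balanced bipartition of the coarsest graph -- the step where a
            # quantum local search (D-Wave annealer or a QAOA circuit) would be invoked.
            nodes = sorted({n for e in edges for n in e})
            best, best_cut = None, float("inf")
            for subset in itertools.combinations(nodes, len(nodes) // 2):
                part = {n: int(n in subset) for n in nodes}
                if cut(edges, part) < best_cut:
                    best, best_cut = part, cut(edges, part)
            return best

        def refine(edges, part):
            # Greedy single-node flips that reduce the cut (balance constraint omitted).
            nodes = sorted({n for e in edges for n in e})
            improved = True
            while improved:
                improved = False
                for n in nodes:
                    flipped = dict(part)
                    flipped[n] = 1 - part[n]
                    if cut(edges, flipped) < cut(edges, part):
                        part, improved = flipped, True
            return part

        def multilevel_partition(edges, coarsest_size=10):
            nodes = {n for e in edges for n in e}
            if len(nodes) <= coarsest_size:
                return solve_coarsest(edges)
            coarse_edges, merge = coarsen(edges)
            coarse_part = multilevel_partition(coarse_edges, coarsest_size)
            part = {n: coarse_part[merge[n]] for n in nodes}   # project to the finer graph
            return refine(edges, part)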

    Paraiso: An Automated Tuning Framework for Explicit Solvers of Partial Differential Equations

    We propose Paraiso, a domain-specific language embedded in the functional programming language Haskell, for the automated tuning of explicit solvers of partial differential equations (PDEs) on GPUs as well as multicore CPUs. In Paraiso, one can describe PDE-solving algorithms succinctly using tensor equation notation. Hydrodynamic properties, interpolation methods, and other building blocks are described in abstract, modular, reusable, and combinable forms, which lets us generate versatile solvers from a small set of Paraiso source codes. We demonstrate Paraiso by implementing a compressible hydrodynamics solver. A single source code of fewer than 500 lines can be used to generate solvers of arbitrary dimensions, for both multicore CPUs and GPUs. We demonstrate both manual annotation-based tuning and evolutionary-computing-based automated tuning of the program. Comment: 52 pages, 14 figures, accepted for publication in Computational Science and Discovery.
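
    To make the notion of an "explicit solver" concrete, here is a tiny stand-alone example (written in Python rather than Paraiso's Haskell-embedded DSL, so none of this is Paraiso syntax): a forward-Euler, central-difference update for a 1-D diffusion equation, which is the kind of stencil kernel that a framework like Paraiso generates and tunes for CPUs and GPUs.

        import numpy as np

        def diffusion_step(u, dt, dx, kappa):
            # One explicit update of du/dt = kappa * d2u/dx2 with periodic boundaries.
            lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
            return u + dt * kappa * lap

        # Usage: the explicit scheme is stable only for dt <= dx**2 / (2 * kappa).
        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.exp(-100.0 * (x - 0.5) ** 2)          # initial Gaussian bump
        dx, kappa = x[1] - x[0], 0.1
        dt = 0.4 * dx**2 / kappa
        for _ in range(500):
            u = diffusion_step(u, dt, dx, kappa)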