
    A review on analysis and synthesis of nonlinear stochastic systems with randomly occurring incomplete information

    In the context of systems and control, incomplete information refers to a dynamical system in which knowledge about the system states is limited owing to the difficulty of modeling the system's complexity quantitatively. Well-known types of incomplete information include parameter uncertainties and norm-bounded nonlinearities. Recently, with the development of network technologies, the phenomenon of randomly occurring incomplete information has become increasingly prevalent. Such a phenomenon typically appears in a networked environment; examples include, but are not limited to, randomly occurring uncertainties, randomly occurring nonlinearities, randomly occurring saturation, randomly missing measurements, and randomly occurring quantization. Randomly occurring incomplete information, if not properly handled, can seriously degrade the performance of a control system. In this paper, we survey recent advances on the analysis and synthesis problems for nonlinear stochastic systems with randomly occurring incomplete information. Developments in the filtering, control, and fault detection problems are systematically reviewed, and the latest results on the analysis and synthesis of nonlinear stochastic systems are discussed in detail. In addition, various distributed filtering technologies over sensor networks are highlighted. Finally, some concluding remarks are given and some possible future research directions are pointed out.

    © 2012 Hongli Dong et al. Open access under the Creative Commons Attribution License. This work was supported in part by the National Natural Science Foundation of China under Grants 61273156, 61134009, 61273201, 61021002, and 61004067, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Royal Society of the UK, the National Science Foundation of the USA under Grant HRD-1137732, and the Alexander von Humboldt Foundation of Germany.
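
    As a concrete illustration of one phenomenon listed above, the sketch below simulates the standard randomly-missing-measurements model used throughout this literature, y_k = γ_k (C x_k + v_k) with γ_k a Bernoulli variable. It is a minimal sketch, not code from the survey; all matrices and numerical values are assumed example values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition (assumed values)
    C = np.array([[1.0, 0.0]])               # measurement matrix (assumed)
    q, r, lam = 0.01, 0.04, 0.7              # noise variances, arrival rate

    x = np.zeros(2)
    for k in range(50):
        x = A @ x + rng.normal(0.0, np.sqrt(q), size=2)   # x_{k+1} = A x_k + w_k
        if rng.random() < lam:                            # gamma_k = 1: data arrives
            y = C @ x + rng.normal(0.0, np.sqrt(r))       # y_k = C x_k + v_k
            # update an estimator with the received y_k here
        # otherwise the measurement is missing this step, and only a
        # prediction (time) update of the estimator is possible
    ```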

    Kalman Filtering Over A Packet Dropping Network: A Probabilistic Approach

    We consider the problem of state estimation of a discrete-time process over a packet-dropping network. Previous pioneering work on Kalman filtering with intermittent observations is concerned with the asymptotic behavior of E[P_k], i.e., the expected value of the error covariance, for a given packet arrival rate. We consider a different performance metric, Pr[P_k ≤ M], i.e., the probability that P_k is bounded by a given M, and we derive lower and upper bounds on Pr[P_k ≤ M]. We also recover the results in the literature when using Pr[P_k ≤ M] as a metric for scalar systems. Examples are provided to illustrate the theory developed in the paper.
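
    The metric Pr[P_k ≤ M] lends itself to a direct Monte Carlo check. The sketch below estimates it for a scalar system by iterating the standard modified Riccati recursion with Bernoulli packet arrivals; the recursion is the usual intermittent-observations form rather than anything specific to this paper's bounds, and the system parameters are assumed example values.

    ```python
    import numpy as np

    a, c = 1.2, 1.0                      # unstable scalar system (assumed values)
    q, r, lam = 0.5, 0.5, 0.7            # noise variances, packet arrival rate
    M, K, trials = 5.0, 100, 5_000       # bound, horizon, Monte Carlo runs

    rng = np.random.default_rng(1)
    hits = 0
    for _ in range(trials):
        P = q                            # initial error covariance
        for _ in range(K):
            gamma = rng.random() < lam   # gamma_k = 1: packet arrives
            # modified Riccati: the correction term applies only on arrival
            P = a*a*P + q - gamma * (a*a*c*c*P*P) / (c*c*P + r)
        hits += P <= M
    print(f"estimated Pr[P_K <= M] ~ {hits / trials:.3f}")
    ```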

    Kalman Filtering Over a Packet-Dropping Network: A Probabilistic Perspective

    We consider the problem of state estimation of a discrete-time process over a packet-dropping network. Previous work on Kalman filtering with intermittent observations is concerned with the asymptotic behavior of E[P_k], i.e., the expected value of the error covariance, for a given packet arrival rate. We consider a different performance metric, Pr[P_k ≤ M], i.e., the probability that P_k is bounded by a given M. We consider two scenarios in the paper. In the first scenario, where the sensor sends its measurement data to the remote estimator via a packet-dropping network, we derive lower and upper bounds on Pr[P_k ≤ M]. In the second scenario, where the sensor preprocesses the measurement data and sends its local state estimate to the estimator, we show that the previously derived lower and upper bounds are equal to each other, and hence provide a closed-form expression for Pr[P_k ≤ M]. We also recover the results in the literature when using Pr[P_k ≤ M] as a metric for scalar systems. Examples are provided to illustrate the theory developed in the paper.
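
    For the second scenario, the existence of a closed form can be made plausible with a short computation: when a packet arrives, the remote covariance resets to the sensor's local steady-state value P̄, and after n consecutive drops it grows to hⁿ(P̄) with h(P) = a²P + q. The sketch below follows that reasoning for a scalar system with assumed example values; the paper's exact expression may differ.

    ```python
    import numpy as np

    a, c, q, r, lam, M = 1.2, 1.0, 0.5, 0.5, 0.7, 5.0   # assumed example values

    # local steady-state covariance: iterate the Riccati map to its fixed point
    Pbar = q
    for _ in range(1000):
        Pbar = a*a*Pbar + q - (a*a*c*c*Pbar*Pbar) / (c*c*Pbar + r)

    # largest n such that n consecutive drops still keep h^n(Pbar) <= M,
    # where h(P) = a^2 P + q is the prediction-only update (assumes Pbar <= M)
    n, P = 0, Pbar
    while a*a*P + q <= M:
        P = a*a*P + q
        n += 1

    # the covariance exceeds M only after a run of more than n consecutive
    # drops, which in steady state has probability (1 - lam)^(n + 1)
    print(f"closed-form Pr[P_k <= M] ~ {1 - (1 - lam)**(n + 1):.3f}")
    ```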

    Practical Full Resolution Learned Lossless Image Compression

    We propose the first practical learned lossless image compression system, L3C, and show that it outperforms the popular engineered codecs PNG, WebP, and JPEG 2000. At the core of our method is a fully parallelizable hierarchical probabilistic model for adaptive entropy coding which is optimized end-to-end for the compression task. In contrast to recent autoregressive discrete probabilistic models such as PixelCNN, our method (i) models the image distribution jointly with learned auxiliary representations instead of exclusively modeling the image distribution in RGB space, and (ii) requires only three forward passes to predict all pixel probabilities instead of one per pixel. As a result, L3C obtains over two orders of magnitude speedups when sampling compared to the fastest PixelCNN variant (Multiscale-PixelCNN). Furthermore, we find that learning the auxiliary representation is crucial and significantly outperforms predefined auxiliary representations such as an RGB pyramid.
    Comment: Updated preprocessing and Table 1, see A.1 in supplementary. Code and models: https://github.com/fab-jul/L3C-PyTorch
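
    A small sketch of the entropy-coding arithmetic behind such systems (not the L3C code itself): once a model assigns a probability to each symbol given its context, an adaptive arithmetic coder spends roughly −log₂ p bits on it, so the achievable bitrate is the cross-entropy of the data under the model. Everything below is a toy with random stand-ins for both the image and the model's predictions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pixels = rng.integers(0, 256, size=1000)      # stand-in "image" symbols

    # stand-in for the model output: one predicted distribution over the 256
    # intensity values per pixel (L3C produces these with three forward
    # passes over a hierarchy of learned auxiliary representations)
    probs = rng.dirichlet(np.ones(256), size=pixels.size)

    # ideal code length under the model: -log2 p(symbol) bits per symbol
    bits = -np.log2(probs[np.arange(pixels.size), pixels])
    print(f"ideal code length: {bits.sum():.0f} bits "
          f"({bits.mean():.2f} bits/symbol)")
    ```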

    Full Resolution Image Compression with Recurrent Neural Networks

    This paper presents a set of full-resolution lossy image compression methods based on neural networks. Each of the architectures we describe can provide variable compression rates during deployment without requiring retraining of the network: each network need only be trained once. All of our architectures consist of a recurrent neural network (RNN)-based encoder and decoder, a binarizer, and a neural network for entropy coding. We compare RNN types (LSTM, associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study "one-shot" versus additive reconstruction architectures and introduce a new scaled-additive framework. We compare to previous work, showing improvements of 4.3%–8.8% AUC (area under the rate-distortion curve), depending on the perceptual metric used. As far as we know, this is the first neural network architecture able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding.
    Comment: Updated with content for CVPR; supplemental material moved to an external link due to size limitations
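
    The additive-reconstruction scheme mentioned above can be sketched in a few lines: each iteration encodes the current residual, and the decoded estimates are summed, so running more iterations spends more bits and lowers distortion with no retraining. The encoder and decoder below are hypothetical placeholders (a sign binarizer and a fixed-gain decoder), not the paper's RNNs.

    ```python
    import numpy as np

    def encode(residual):
        return np.sign(residual)      # stand-in binarizer: 1 bit per element

    def decode(code, step):
        return step * code            # stand-in decoder for the residual code

    x = np.random.default_rng(0).normal(size=8)    # stand-in "image"
    recon, residual, step = np.zeros_like(x), x.copy(), 1.0
    for t in range(4):
        recon += decode(encode(residual), step)    # additive: sum the estimates
        residual = x - recon                       # next iteration codes what's left
        step *= 0.5                                # successive refinement of scale
        print(f"iteration {t}: distortion {np.mean(residual**2):.4f}")
    ```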

    Semantic Compression of Episodic Memories

    Storing knowledge of an agent's environment in the form of a probabilistic generative model has been established as a crucial ingredient in a multitude of cognitive tasks. Perception has been formalised as probabilistic inference over the state of latent variables, whereas in decision making the model of the environment is used to predict the likely consequences of actions. Such generative models have earlier been proposed to underlie semantic memory, but it remained unclear whether this model also underlies the efficient storage of experiences in episodic memory. We formalise the compression of episodes in the normative framework of information theory and argue that semantic memory provides the distortion function for the compression of experiences. Recent advances and insights from machine learning allow us to approximate semantic compression in naturalistic domains and to contrast the resulting deviations in compressed episodes with memory errors observed in the experimental literature on human memory.
    Comment: CogSci201
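
    The information-theoretic framing the abstract refers to is standard rate-distortion theory; a minimal statement is given below. The formula itself is textbook material; the reading that semantic memory supplies the distortion function d is the paper's thesis, not implied by the formula.

    ```latex
    % Rate-distortion function: the fewest bits per episode achievable while
    % keeping expected distortion below D; here d would be supplied by
    % semantic memory (a learned generative model) rather than a fixed metric.
    R(D) = \min_{p(\hat{x}\mid x)\,:\ \mathbb{E}[d(x,\hat{x})] \le D} I(X;\hat{X})
    ```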