    Parameters Design for Logarithmic Quantizer Based on Zoom Strategy

    This paper is concerned with the problem of designing suitable parameters for a logarithmic quantizer such that the closed-loop system is asymptotically convergent. Based on the zoom strategy, we propose two methods for quantizer parameter design which ensure that the state of the closed-loop system enters the invariant sets after certain finite times. We then show that the quantizer is unsaturated, and thus that the quantization errors are bounded under the time-varying logarithmic quantization strategy. On that basis, we establish that the closed-loop system is asymptotically convergent. A benchmark example is given to show the usefulness of the proposed methods, and comparison results are illustrated.
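    To make the zoom idea concrete, the sketch below combines a standard logarithmic quantizer with a contracting scale variable mu: the state is quantized on a fixed logarithmic grid after rescaling by mu, and mu shrinks over time to refine the effective resolution. The quantizer parameters, the scalar plant, and the decay factor are all illustrative assumptions, not the parameter designs proposed in the paper.

    ```python
    import numpy as np

    def log_quantize(x, rho=0.5, u0=1.0, levels=20):
        """Logarithmic quantizer: levels are u0 * rho**i; the level nearest
        to |x| in log scale is returned with the sign of x."""
        if x == 0.0:
            return 0.0
        # index of the logarithmic level closest to |x| (clipping models
        # the finite range of the quantizer)
        i = int(np.clip(round(np.log(abs(x) / u0) / np.log(rho)), 0, levels - 1))
        return np.sign(x) * u0 * rho**i

    def zoomed_quantize(x, mu, rho=0.5, u0=1.0):
        """Zoom strategy: quantize x/mu on the fixed grid, then scale back.
        Shrinking mu over time refines the effective resolution."""
        return mu * log_quantize(x / mu, rho=rho, u0=u0)

    # Example: scalar system x+ = a*x + b*u with quantized state feedback.
    a, b, k = 1.2, 1.0, -1.0          # k makes a + b*k = 0.2 (stable)
    x, mu = 5.0, 10.0                 # initial state and zoom variable
    for t in range(30):
        u = k * zoomed_quantize(x, mu)
        x = a * x + b * u
        mu *= 0.8                     # "zoom in": contract the quantizer range
    print(f"final |x| = {abs(x):.2e}")
    ```

    The decay factor 0.8 is chosen so the quantizer stays unsaturated: the closed-loop state contracts faster than mu, so x/mu remains inside the quantizer's range.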

    Self-triggered Stabilization of Contracting Systems under Quantization

    We propose self-triggered control schemes for nonlinear systems with quantized state measurements. Our focus lies on scenarios where both the controller and the self-triggering mechanism receive only the quantized state measurement at each sampling time. We assume that the ideal closed-loop system without quantization or self-triggered sampling is contracting, and that a growth rate of the open-loop system is known. We present two control strategies that yield closed-loop stability without Zeno behavior. The first strategy is implemented under logarithmic quantization and imposes no time-triggering condition other than an upper bound on inter-sampling times. The second is a joint design of zooming quantization and periodic self-triggered sampling, where the adjustable zoom parameter for quantization changes based on inter-sampling times and is also used as the threshold for self-triggered sampling. In both strategies, we employ a trajectory-based approach for stability analysis, in which contraction theory plays a key role. Comment: 26 pages, 10 figures.
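    The sketch below illustrates the general flavor of the first strategy: the controller sees only the quantized state, holds the input constant between samples, and picks the next inter-sampling time from a known open-loop growth rate, capped by an upper bound. The plant, the triggering rule, and all constants are illustrative assumptions rather than the paper's design.

    ```python
    import numpy as np

    # Scalar plant dx/dt = a*x + u with feedback u = -k*q(x), updated only
    # at self-triggered instants; q is a uniform quantizer.
    a, k = 1.0, 3.0                    # open-loop growth rate a assumed known
    delta = 0.05                       # quantization step
    tau_max = 0.5                      # upper bound on inter-sampling times

    def quantize(x, step=delta):
        return step * np.round(x / step)

    def next_interval(xq):
        # Pick tau so the open-loop growth e^{a*tau} of the measurement
        # error stays below a fraction of |xq| (illustrative rule); a
        # positive lower bound on tau rules out Zeno behavior.
        err = delta / 2.0
        if abs(xq) <= err:
            return tau_max
        return min(tau_max, np.log(0.5 * abs(xq) / err + 1.0) / a)

    x, t, dt = 2.0, 0.0, 1e-3
    while t < 5.0:
        xq = quantize(x)               # only the quantized state is available
        u = -k * xq                    # held constant until the next sample
        tau = next_interval(xq)        # self-triggered inter-sampling time
        steps = max(1, int(tau / dt))
        for _ in range(steps):         # integrate the plant between samples
            x += dt * (a * x + u)
        t += steps * dt
    print(f"t = {t:.2f}, |x| = {abs(x):.3f}")
    ```

    Under these constants the state converges to a neighborhood of the origin whose size is set by the quantization step, which is the qualitative behavior one expects from static (non-zooming) quantization.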

    Gossip Algorithms for Distributed Signal Processing

    Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression. Comment: Submitted to Proceedings of the IEEE, 29 pages.
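    A minimal sketch of randomized pairwise gossip for distributed averaging, assuming a ring network; it shows the key invariant that each pairwise exchange preserves the network average while the node values contract toward it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ring network of n nodes, each holding a scalar measurement.
    n = 20
    x = rng.normal(size=n)
    true_avg = x.mean()

    # Randomized pairwise gossip: at each tick a random node wakes up and
    # averages its value with a random neighbor. Each exchange preserves
    # the sum (hence the average), and all values converge to the average.
    for _ in range(2000):
        i = rng.integers(n)
        j = (i + rng.choice([-1, 1])) % n      # a ring neighbor
        x[i] = x[j] = 0.5 * (x[i] + x[j])

    print(f"max deviation from average: {np.max(np.abs(x - true_avg)):.2e}")
    ```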

    Event-Driven Control for NCSs with Logarithmic Quantization and Packet Losses

    The stabilization problem of networked control systems (NCSs) affected by data quantization, packet losses, and event-driven communication is studied in this paper. We propose two event-driven schemes, together with extended forms of them that rely on quantized states, and adopt the zoom strategy to study system stability under time-varying logarithmic quantization and an independent and identically distributed (IID) packet-loss process. On this basis, sufficient conditions ensuring mean-square stability of the system are obtained. Although the zoom strategy has been used in much of the literature to study the quantized stabilization of NCSs, it has not previously been adopted to analyze the stability of NCSs subject simultaneously to data quantization, IID packet losses, and event-driven communication. Furthermore, existing works on the zoom strategy employ quantizers whose quantization regions may have arbitrary shapes, whereas here we use a logarithmic quantizer, which performs better near the origin. In addition, detailed comparisons of the system performance under the different event-driven schemes are given, which can guide scheme selection according to different design goals. These three points are the main innovations of this paper. Finally, the effectiveness of the proposed methods is illustrated by a benchmark example.
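    The sketch below conveys the ingredients discussed above in a scalar toy loop: a logarithmic quantizer, an event-driven transmission rule based on the quantized state, and IID Bernoulli packet drops. The triggering condition and all constants are illustrative assumptions, not the paper's schemes or stability conditions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Scalar NCS: x+ = a*x + b*u, event-driven transmission of the quantized
    # state over a lossy link with IID Bernoulli drops.
    a, b, k = 1.1, 1.0, -0.9
    p_loss, sigma = 0.2, 0.3           # drop probability, trigger threshold

    def log_q(x, rho=0.6):             # static logarithmic quantizer
        if x == 0.0:
            return 0.0
        i = round(np.log(abs(x)) / np.log(rho))
        return np.sign(x) * rho**i

    x, x_held = 4.0, log_q(4.0)        # x_held: last value the controller got
    sent = 0
    for t in range(60):
        xq = log_q(x)
        # Event-driven scheme: transmit only when the quantized state has
        # drifted relatively far from the value the controller holds.
        if abs(xq - x_held) > sigma * abs(xq):
            sent += 1
            if rng.random() > p_loss:  # packet survives the lossy link
                x_held = xq
        u = k * x_held
        x = a * x + b * u
    print(f"transmissions: {sent}, final |x| = {abs(x):.3f}")
    ```

    The event condition keeps the held value within a relative band of the true quantized state, so far fewer than 60 transmissions are needed; a lost packet simply leaves the trigger armed for a retransmission attempt at the next step.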

    Reachable set-based dynamic quantization for the remote state estimation of linear systems

    We employ reachability analysis in designing dynamic quantization schemes for the remote state estimation of linear systems over a finite data rate communication channel. The quantization region is dynamically updated at each transmission instant with an approximated reachable set of the linear system. We propose a set-based method using zonotopes and compare it to a norm-based method for dynamically updating the quantization region. For both methods, we guarantee that the quantization error is bounded and, consequently, that the remote state reconstruction error is also bounded. To the best of our knowledge, the set-based method using zonotopes has no precedent in the literature; it admits a larger class of linear systems and communication channels, and it allows for a longer inter-transmission time and a lower bit rate. Finally, we corroborate our theoretical guarantees with a numerical example. Comment: This manuscript was accepted for publication at the 62nd IEEE Conference on Decision and Control (CDC), 2023.
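    A minimal sketch of the dynamic-quantization idea, using an axis-aligned box as the over-approximated reachable set; the paper's set-based method would use a zonotope in its place, and the plant, bit budget, and initial region here are illustrative assumptions.

    ```python
    import numpy as np

    # Remote state estimation sketch: encoder and decoder both predict the
    # next quantization region by over-approximating the reachable set of
    # the current quantization cell. An axis-aligned box plays the role
    # that a zonotope plays in the paper's set-based method.
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    bits = 6
    N = 2**bits                              # quantization levels per dimension
    x = np.array([3.0, -1.0])                # plant state (noise-free here)
    c, r = np.zeros(2), 5.0 * np.ones(2)     # quantization region: box(c, r)

    for k in range(20):
        # Encoder: uniform quantization of x inside the box [c - r, c + r].
        cell = np.clip(np.floor((x - (c - r)) / (2 * r) * N), 0, N - 1)
        x_hat = (c - r) + (cell + 0.5) * (2 * r) / N   # decoder reconstruction
        err = np.linalg.norm(x - x_hat)      # bounded by the cell radius r / N
        # Both sides propagate the cell through the dynamics and box the
        # image: this over-approximation is the next quantization region.
        c, r = A @ x_hat, np.abs(A) @ (r / N)
        x = A @ x

    print(f"final reconstruction error: {err:.2e}")
    ```

    Because the image of the quantization cell is always contained in the next region, the quantizer never saturates, and the region (hence the reconstruction error) contracts whenever the per-step growth of the dynamics is outweighed by the factor-N refinement.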

    Application of Bandelet Transform in Image and Video Compression

    The need for large-scale storage and transmission of data is growing exponentially with the widespread use of computers, so efficient ways of storing data have become important. With the advancement of technology, the world finds itself amid a vast amount of information, and efficient methods are needed to handle it. Data compression is a technique that minimizes the size of a file while keeping the quality essentially unchanged, so more data can be stored in the same memory space. There are various image compression standards, such as JPEG, which uses the discrete cosine transform, and JPEG 2000, which uses the discrete wavelet transform. The discrete cosine transform gives excellent compaction for highly correlated information, and its computational complexity is low owing to its good information-packing ability. However, it produces blocking artifacts, graininess, and blurring in the output, which are overcome by the discrete wavelet transform. The image size is reduced by discarding coefficients smaller than a prespecified threshold without losing much information. But wavelets also have limitations as the complexity of the image increases: they are optimal for point singularities but not for line and curve singularities, and they do not consider the image geometry, which is a vital source of redundancy. Here we analyze a new type of basis, known as bandelets, which can be constructed from a wavelet basis and which exploits an important source of regularity, namely geometric redundancy. The image is decomposed along the direction of its geometry. This is better than other methods because the geometry is described by a flow vector rather than by edges; the flow indicates the direction in which the image intensity varies smoothly. Bandelets give a better compression measure than wavelet bases. A fast subband coding scheme is used to decompose the image in a bandelet basis, and the approach has been extended to video compression. The bandelet-based image and video compression method is compared with the corresponding wavelet scheme. Performance measures such as peak signal-to-noise ratio (PSNR), compression ratio, bits per pixel (bpp), and entropy are evaluated for both image and video compression.
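    A toy transform-coding sketch in the JPEG (DCT) spirit of the pipeline described above: transform, discard small coefficients, reconstruct, and score with PSNR; a wavelet or bandelet transform would slot in where the DCT is used. The synthetic test image and the 10% retention level are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(3)

    # Smooth synthetic test image in [0, 1] with mild noise, standing in
    # for a real image so the transform actually compacts the energy.
    i, j = np.meshgrid(np.arange(64), np.arange(64), indexing='ij')
    img = np.clip(0.5 + 0.5 * np.sin(i / 8.0) * np.cos(j / 8.0)
                  + 0.02 * rng.standard_normal((64, 64)), 0.0, 1.0)

    coeffs = dctn(img, norm='ortho')           # forward 2-D DCT
    keep = 0.10                                # keep the top 10% of coefficients
    thresh = np.quantile(np.abs(coeffs), 1 - keep)
    coeffs_c = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

    recon = idctn(coeffs_c, norm='ortho')      # reconstruct from sparse coeffs
    mse = np.mean((img - recon) ** 2)
    psnr = 10 * np.log10(1.0 / mse)            # peak intensity is 1.0 here
    print(f"PSNR {psnr:.1f} dB with {keep:.0%} of coefficients kept")
    ```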

    Source Coding Optimization for Distributed Average Consensus

    Consensus is a common method for computing a function of the data distributed among the nodes of a network. Of particular interest is distributed average consensus, whereby the nodes iteratively compute the sample average of the data stored at all the nodes of the network using only near-neighbor communications. In real-world scenarios, these communications must undergo quantization, which introduces distortion to the internode messages. In this thesis, a model for the evolution of the network state statistics at each iteration is developed under the assumptions of Gaussian data and additive quantization error. It is shown that minimization of the communication load, in terms of aggregate source coding rate, can be posed as a generalized geometric program, for which an equivalent convex optimization can efficiently solve for the global minimum. Optimization procedures are developed for rate-distortion-optimal vector quantization, uniform entropy-coded scalar quantization, and fixed-rate uniform quantization. Numerical results demonstrate the performance of these approaches. For small numbers of iterations, the fixed-rate optimizations are verified using exhaustive search. Comparison to the prior art suggests competitive performance under certain circumstances but strongly motivates the incorporation of more sophisticated coding strategies, such as differential, predictive, or Wyner-Ziv coding. Comment: Master's Thesis, Electrical Engineering, North Carolina State University.
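    A minimal sketch of quantized average consensus under the additive-quantization-error view, assuming a ring network with fixed doubly stochastic weights and a uniform quantizer on the exchanged messages; the weights and quantizer resolution are illustrative, not outputs of the thesis's optimization.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Each node mixes its own value with quantized copies of its two ring
    # neighbors' values; quantization perturbs the messages, so the values
    # converge to a neighborhood of the average rather than to it exactly.
    n, steps, delta = 10, 100, 0.01        # nodes, iterations, quantizer step
    x = rng.normal(size=n)
    target = x.mean()
    W_self, W_nbr = 0.5, 0.25              # doubly stochastic ring weights

    def q(v):                              # uniform quantizer on messages
        return delta * np.round(v / delta)

    for _ in range(steps):
        left, right = np.roll(x, 1), np.roll(x, -1)
        x = W_self * x + W_nbr * (q(left) + q(right))

    print(f"spread: {x.max() - x.min():.2e}, "
          f"bias: {abs(x.mean() - target):.2e}")
    ```

    The printed spread shrinks with the number of iterations until it hits a floor set by the quantizer step, and the small residual bias is exactly the kind of distortion the thesis's rate allocation is designed to trade off against communication cost.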