    Timely Estimation Using Coded Quantized Samples

    The effects of quantization and coding on the estimation quality of a Gauss-Markov process, namely the Ornstein-Uhlenbeck process, are considered. Samples are acquired from the process, quantized, and then encoded for transmission using either infinite incremental redundancy or fixed redundancy coding schemes. A fixed processing time is consumed at the receiver for decoding and sending feedback to the transmitter. Decoded messages are used to construct a minimum mean square error (MMSE) estimate of the process as a function of time. This is shown to be an increasing functional of the age-of-information, defined as the time elapsed since the sampling time pertaining to the latest successfully decoded message. This age-penalty functional depends on the number of quantization bits, the codeword lengths, and the receiver processing time. The goal, for each coding scheme, is to optimize sampling times such that the long-term average MMSE is minimized. This is then characterized in the setting of general increasing age-penalty functionals, not necessarily corresponding to MMSE, which may be of independent interest in other contexts. Comment: To appear in ISIT 202
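
    As a concrete instance of why the MMSE grows with age, the conditional-mean estimate of an Ornstein-Uhlenbeck process with mean-reversion rate \theta and noise strength \sigma, given a perfectly decoded sample X_s and AoI \Delta(t) = t - s, takes the standard form below; the paper's exact expression also accounts for quantization error, so this is only a sketch.

        % MMSE estimate from the latest decoded sample X_s, with AoI \Delta(t) = t - s
        \hat{X}_t = X_s\, e^{-\theta \Delta(t)}, \qquad
        \mathrm{mmse}(t) = \frac{\sigma^2}{2\theta}\left(1 - e^{-2\theta \Delta(t)}\right)

    The second expression is increasing in \Delta(t), matching the age-penalty formulation above.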

    Sample, Quantize and Encode: Timely Estimation Over Noisy Channels

    The effects of quantization and coding on the estimation quality of Gauss-Markov processes are considered, with special attention to the Ornstein-Uhlenbeck process. Samples are acquired from the process, quantized, and then encoded for transmission using either infinite incremental redundancy (IIR) or fixed redundancy (FR) coding schemes. A fixed processing time is consumed at the receiver for decoding and sending feedback to the transmitter. Decoded messages are used to construct a minimum mean square error (MMSE) estimate of the process as a function of time. This is shown to be an increasing functional of the age-of-information (AoI), defined as the time elapsed since the sampling time pertaining to the latest successfully decoded message. This functional depends on the quantization bits, codeword lengths, and receiver processing time. The goal, for each coding scheme, is to optimize sampling times such that the long-term average MMSE is minimized. This is then characterized in the setting of general increasing functionals of AoI, not necessarily corresponding to MMSE, which may be of independent interest in other contexts. We first show that the optimal sampling policy for IIR is such that a new sample is generated only if the AoI exceeds a certain threshold, while for FR it is such that a new sample is delivered just-in-time as the receiver finishes processing the previous one. Enhanced transmission schemes are then developed in order to exploit the processing times to make new data available at the receiver sooner. For both IIR and FR, it is shown that there exists an optimal number of quantization bits that balances AoI and quantization errors, and hence minimizes the MMSE. It is also shown that for longer receiver processing times, the relatively simpler FR scheme outperforms IIR. Comment: Accepted for publication in the IEEE Transactions on Communications. arXiv admin note: substantial text overlap with arXiv:2004.1298
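
    A minimal sketch of the IIR threshold rule described above, with a hypothetical threshold parameter tau (the paper characterizes the optimal threshold; this sketch only illustrates the structure of the policy):

        def next_sampling_time(t_feedback, t_last_sample, tau):
            """After feedback arrives at t_feedback, take a new sample only once
            the AoI (time since the last decoded sample was generated) reaches
            the threshold tau; otherwise sample immediately."""
            return max(t_feedback, t_last_sample + tau)

    For example, with tau = 2 and a sample generated at t = 10, feedback arriving at t = 11 means waiting until t = 12, while feedback arriving at t = 13 triggers an immediate new sample.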

    Road Friction Estimation for Connected Vehicles using Supervised Machine Learning

    In this paper, the problem of road friction prediction from a fleet of connected vehicles is investigated. A framework is proposed to predict the road friction level using both historical friction data from the connected cars and data from weather stations, and comparative results from different methods are presented. The problem is formulated as a classification task where the available data is used to train three machine learning models, namely logistic regression, support vector machines, and neural networks, to predict the future friction class (slippery or non-slippery) for specific road segments. In addition to the friction values, which are measured by moving vehicles, additional parameters such as humidity, temperature, and rainfall are used to obtain a set of descriptive feature vectors as input to the classification methods. The proposed prediction models are evaluated for different prediction horizons (0 to 120 minutes into the future), and the evaluation shows that the neural network method leads to more stable results across different conditions. Comment: Published at IV 201
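
    A minimal sketch of this classification setup using scikit-learn; the features and labels below are random stand-ins for the fleet friction measurements and weather-station data described above:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        # X: one row per (road segment, time); columns stand in for recent friction
        # readings plus humidity, temperature, and rainfall (hypothetical data).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 6))
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # 1 = slippery, 0 = non-slippery

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        for model in (LogisticRegression(), SVC(), MLPClassifier(max_iter=500)):
            model.fit(X_tr, y_tr)
            print(type(model).__name__, model.score(X_te, y_te))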

    Seminar on the MPEG-4 Standard: Usage and Implementation Aspects

    One of the key technologies behind the rapid growth of digital television is video compression. The video coding technology known as MPEG-2, developed in the early 1990s, became the DTV (Digital TV) transmission standard, both satellite and terrestrial, in almost every country in the world. Since then, microprocessor speeds and the memory capacities of hardware encoding and decoding devices have improved significantly, making it possible to develop and implement innovative coding algorithms capable of substantially surpassing the compression limits of the MPEG-2 standard. These innovations, which culminated in 2003 in the MPEG-4 AVC (Advanced Video Coding) standard, did not preserve backward compatibility with MPEG-2, and this initially limited their adoption in DTV transmission systems. In recent years, however, MPEG-4 AVC coding has spread rapidly: it has been adopted by the DVB project and, more recently, by the ATSC, and it is the coding standard for IPTV. The goal of this seminar, held over two days, is to present the MPEG-4 AVC coding standard, with particular attention to the implementation aspects of the video coding layer. (2008-11-18, Sardegna Ricerche, Edificio 2, Località Piscinamanna, 09010 Pula (CA), Italy)

    Comparison of CELP speech coder with a wavelet method

    This thesis compares the speech quality of the Code Excited Linear Prediction (CELP, Federal Standard 1016) speech coder with a new wavelet method for compressing speech. The performance of both is compared through subjective listening tests. The test signals used are clean signals (i.e., with no background noise), speech signals with room noise, and speech signals with artificial noise added. Results indicate that for clean signals and signals with predominantly voiced components the CELP standard performs better than the wavelet method, but for signals with room noise the wavelet method performs much better than CELP. For signals with artificial noise added, the results are mixed: CELP performs better when low-level noise is added, and the wavelet method performs better at higher noise levels.
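
    As a rough illustration of wavelet-based speech compression (a generic coefficient-thresholding sketch using PyWavelets, not the specific method evaluated in this thesis):

        import numpy as np
        import pywt

        def wavelet_compress(signal, wavelet="db4", level=4, keep_ratio=0.1):
            """Keep only the largest-magnitude wavelet coefficients and
            reconstruct; a real coder would also quantize and entropy-code
            the surviving coefficients."""
            coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
            magnitudes = np.concatenate([np.abs(c) for c in coeffs])
            thresh = np.quantile(magnitudes, 1.0 - keep_ratio)  # discard ~90%
            coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
            return pywt.waverec(coeffs, wavelet)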

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.
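
    As background for the differential pulse code modulation item above, a minimal first-order DPCM encoder sketch follows; it uses a plain uniform quantizer rather than the recursively indexed quantizer studied in the report:

        def dpcm_encode(samples, step=4.0):
            """Quantize each sample's difference from the previous *reconstructed*
            sample, so the encoder's prediction matches the decoder's."""
            indices, recon_prev = [], 0.0
            for x in samples:
                q = round((x - recon_prev) / step)  # quantized prediction error
                indices.append(q)
                recon_prev += q * step              # decoder-side reconstruction
            return indices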

    Unattended acoustic sensor systems for noise monitoring in national parks

    Detection and classification of transient acoustic signals is a difficult problem. The problem is often complicated by factors such as the variety of sources that may be encountered, the presence of strong interference, and substantial variations in the acoustic environment. Furthermore, for most applications of transient detection and classification, such as speech recognition and environmental monitoring, online detection and classification of these transient events is required. This is even more crucial for applications such as environmental monitoring, as it is often done at remote locations where it is infeasible to set up a large, general-purpose processing system. Instead, some type of custom-designed system is needed that is power efficient yet able to run the necessary signal processing algorithms in near real time. In this thesis, we describe a custom-designed environmental monitoring system (EMS) which was specifically designed for monitoring air traffic and other sources of interest in national parks. More specifically, this thesis focuses on the capabilities of the EMS and how transient detection, classification, and tracking are implemented on it.

    The Sparse Coefficient State Tracking (SCST) transient detection and classification algorithm was implemented on the EMS board in order to detect and classify transient events. This algorithm was chosen because it was designed for this particular application and was shown to have superior performance compared to other algorithms commonly used for transient detection and classification. The SCST algorithm was implemented on an Artix-7 FPGA, with parts of the algorithm running as dedicated custom logic and other parts running sequentially on a soft-core processor. In this thesis, the partitioning and pipelining of this algorithm are explained. Each of the partitions was tested independently to verify its functionality with respect to the overall system. Furthermore, the entire SCST algorithm was tested in the field on actual acoustic data, and the performance of this implementation was evaluated using receiver operating characteristic (ROC) curves and confusion matrices. In this test, the FPGA implementation of SCST was able to achieve acceptable source detection and classification results despite a difficult data set and limited training data.

    The tracking of acoustic sources is done through successive direction of arrival (DOA) angle estimation using a wideband extension of the Capon beamforming algorithm. This algorithm was also implemented on the EMS in order to provide real-time DOA estimates for the detected sources. It was partitioned into several stages, with some stages implemented in custom logic while others were implemented as software running on the soft-core processor. Just as with SCST, each partition of this beamforming algorithm was verified independently, and then a full system test was conducted to evaluate whether it would be able to track an airborne source. For the full system test, a model airplane was flown at various trajectories relative to the EMS and the trajectories estimated by the system were compared to the ground truth. Although the accuracy of the DOA estimates could not be evaluated in this test, it was shown that the algorithm was able to approximately form the general trajectory of a moving source, which is sufficient for our application as only a general heading of the acoustic sources is desired.
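
    A minimal narrowband Capon (MVDR) spectrum sketch for a uniform linear array, written with numpy; the wideband extension used on the EMS combines such spectra across frequency bins, and all parameters here are illustrative:

        import numpy as np

        def capon_spectrum(R, freq, spacing, angles, c=343.0):
            """Capon pseudo-spectrum P(theta) = 1 / (a^H R^-1 a) for an
            M-sensor uniform linear array at one frequency bin.

            R: (M, M) sample covariance of the sensor signals
            freq: bin frequency in Hz; spacing: sensor spacing in meters
            angles: candidate DOAs in radians; c: speed of sound in m/s
            """
            M = R.shape[0]
            R_loaded = R + 1e-6 * np.trace(R).real / M * np.eye(M)  # diagonal loading
            R_inv = np.linalg.inv(R_loaded)
            k = 2.0 * np.pi * freq / c
            power = np.empty(len(angles))
            for i, theta in enumerate(angles):
                a = np.exp(-1j * k * spacing * np.arange(M) * np.sin(theta))
                power[i] = 1.0 / np.real(a.conj() @ R_inv @ a)
            return power  # the DOA estimate is the angle that maximizes this spectrum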

    User-Oriented QoS in Packet Video Delivery

    We focus on packet video delivery, with an emphasis on the quality of service perceived by the end-user. A video signal passes through several subsystems, such as the source coder, the network, and the decoder. Each of these can impair the information, either by data loss or by introducing delay. We describe how each of the subsystems can be tuned to optimize the quality of the delivered signal for a given available bit rate in the network. The assessment of end-user quality is not trivial. We present recent research results, which rely on a model of the human visual system.

    An introduction to the interim digital SAR processor and the characteristics of the associated Seasat SAR imagery

    Basic engineering data regarding the Interim Digital SAR Processor (IDP) and the digitally correlated Seasat synthetic aperture radar (SAR) imagery are presented. The correlation function and the IDP hardware/software configuration are described, and a preliminary performance assessment is presented. The geometric and radiometric characteristics, with special emphasis on those peculiar to the IDP-produced imagery, are described.

    Hardware Implementation of a Novel Image Compression Algorithm

    Image-related communications form an increasingly large part of modern communications, bringing the need for efficient and effective compression. Image compression is important for effective storage and transmission of images. Many techniques have been developed in the past, including transform coding, vector quantization, and neural networks. In this thesis, a novel adaptive compression technique is introduced, based on adaptive rather than fixed transforms for image compression. The proposed technique is similar to neural network (NN)-based image compression, and its superiority over other techniques is presented. It is shown that the proposed algorithm results in higher image quality for a given compression ratio than existing neural network algorithms, and that the training of this algorithm is significantly faster than that of the NN-based algorithms. The proposed technique is also compared to JPEG in terms of peak signal-to-noise ratio (PSNR) for a given compression ratio and computational complexity. Advantages of this idea over JPEG are also presented in this thesis.
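
    Since the comparison above is stated in terms of PSNR, the standard computation for 8-bit images is shown below (a generic utility, independent of the thesis's algorithm):

        import numpy as np

        def psnr(original, reconstructed, max_val=255.0):
            """Peak signal-to-noise ratio in dB between two equal-shape images."""
            err = original.astype(np.float64) - reconstructed.astype(np.float64)
            mse = np.mean(err ** 2)
            if mse == 0.0:
                return float("inf")  # identical images
            return 10.0 * np.log10(max_val ** 2 / mse)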