
    Motion compensation and very low bit rate video coding

    Recently, many activities of the International Telecommunication Union (ITU) and the International Organization for Standardization (ISO) have been leading to new standards for very low bit-rate video coding, such as H.263 and MPEG-4, after the successful application of the international standards H.261 and MPEG-1/2 for video coding above 64 kbps. However, at very low bit-rates the classic block-matching-based DCT video coding scheme suffers seriously from blocking artifacts, which considerably degrade the quality of reconstructed video frames. To solve this problem, a new technique in which motion compensation is based on a dense motion field is presented in this dissertation, and four efficient new video coding algorithms based on this technique are proposed for very low bit-rates. (1) After studying model-based video coding algorithms, we propose an optical-flow-based video coding algorithm with thresholding techniques. A statistical model is established for the distribution of intensity differences between two successive frames, and four thresholds are used to control the bit-rate and the quality of reconstructed frames. It outperforms typical model-based techniques in terms of complexity and quality of reconstructed frames. (2) We propose an efficient algorithm using DCT-coded optical flow. It is found that dense motion fields can be modeled as a first-order auto-regressive process and efficiently compressed with the DCT, achieving a very low bit-rate and higher visual quality than H.263/TMN5. (3) We propose a region-based discrete wavelet transform (DWT) video coding algorithm. This algorithm uses a dense motion field, and regions are segmented according to their content significance. The DWT is applied to residual images region by region, and bits are adaptively allocated to regions. It improves the visual quality and PSNR of significant regions while maintaining a low bit-rate. (4) We propose a segmentation-based video coding algorithm for stereo sequences. A correlation-feedback algorithm with a Kalman filter is utilized to improve the accuracy of optical flow fields. Three criteria, associated with 3-D information, 2-D connectivity, and motion vector fields, respectively, are defined for object segmentation, and a chain code is utilized to code the shapes of the segmented objects. It can achieve very high compression ratios of up to several thousand.
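
    To make the idea behind algorithm (2) concrete, here is a minimal Python sketch, not the dissertation's actual codec: it synthesizes one component of a dense motion field as a first-order auto-regressive process, then block-DCT codes it with a single hypothetical quantization step `q`, showing that the smooth field survives coarse quantization with only a few nonzero coefficients.

```python
# Minimal sketch (not the dissertation's codec): a dense motion field is
# smooth, so a block DCT concentrates its energy in a few low-frequency
# coefficients that survive coarse quantization.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)

# Synthesize one component of a dense motion field as a first-order
# auto-regressive (AR(1)) process: strong correlation between neighbors.
H = W = 64
rho = 0.95                        # AR(1) correlation coefficient (assumed)
field = np.zeros((H, W))
field[0, 0] = rng.normal()
for i in range(H):
    for j in range(W):
        if i == 0 and j == 0:
            continue
        pred = rho * (field[i - 1, j] if j == 0 else field[i, j - 1])
        field[i, j] = pred + rng.normal(scale=0.1)

def dct_code(block, q=0.5):
    """Quantize the DCT of an 8x8 block; most coefficients round to zero."""
    return np.round(dctn(block, norm='ortho') / q)

def dct_decode(qcoeffs, q=0.5):
    return idctn(qcoeffs * q, norm='ortho')

# Blockwise transform coding of the motion field.
recon = np.zeros_like(field)
nonzero = 0
for i in range(0, H, 8):
    for j in range(0, W, 8):
        qc = dct_code(field[i:i+8, j:j+8])
        nonzero += np.count_nonzero(qc)
        recon[i:i+8, j:j+8] = dct_decode(qc)

print(f"nonzero coefficients: {nonzero}/{H*W}")   # few survive -> low bit-rate
print(f"reconstruction RMSE:  {np.sqrt(np.mean((field - recon)**2)):.4f}")
```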

    Deep Learning: Our Miraculous Year 1990-1991

    In 2020, we will celebrate that many of the basic ideas behind the deep learning revolution were published three decades ago within fewer than 12 months in our "Annus Mirabilis" or "Miraculous Year" 1990-1991 at TU Munich. Back then, few people were interested, but a quarter century later, neural networks based on these ideas were on over 3 billion devices such as smartphones, and used many billions of times per day, consuming a significant fraction of the world's compute.
    Comment: 37 pages, 188 references, based on work of 4 Oct 201

    Introduction: The Third International Conference on Epigenetic Robotics

    This paper summarizes the paper and poster contributions to the Third International Workshop on Epigenetic Robotics. The focus of this workshop is on the cross-disciplinary interaction of developmental psychology and robotics; the general goal in this area is to create robotic models of the psychological development of various behaviors. The term "epigenetic" is used in much the same sense as the term "developmental". While we could call our topic "developmental robotics", developmental robotics can be seen as having a broader interdisciplinary emphasis; our focus in this workshop is on the interaction of developmental psychology and robotics, and we use the phrase "epigenetic robotics" to capture this focus.

    A Bayesian Poisson-Gaussian Process Model for Popularity Learning in Edge-Caching Networks

    Edge-caching is recognized as an efficient technique for future cellular networks to improve network capacity and user-perceived quality of experience. To enhance the performance of caching systems, designing an accurate content request prediction algorithm plays an important role. In this paper, we develop a flexible model, a Poisson regressor based on a Gaussian process, for the content request distribution. The first important advantage of the proposed model is that it encourages already existing (seen) contents with similar features to be correlated in the feature space, and therefore it acts as a regularizer for the estimation. Second, it allows us to predict the popularities of newly added (unseen) contents whose statistical data is not available in advance. In order to learn the model parameters, which yield the Poisson arrival rates, or alternatively the content popularities, we invoke the Bayesian approach, which is robust against over-fitting. However, the resulting posterior distribution is analytically intractable to compute. To tackle this, we apply a Markov chain Monte Carlo (MCMC) method, which is asymptotically exact, to approximate this distribution. Nevertheless, MCMC is computationally demanding, especially when the number of contents is large. Thus, we employ the Variational Bayes (VB) method as an alternative low-complexity solution. More specifically, the VB method addresses the approximation of the posterior distribution through an optimization problem, and we present a fast block-coordinate descent algorithm to solve it. Finally, extensive simulation results on both synthetic and real-world datasets are provided to show the accuracy of our prediction algorithm and the cache hit ratio (CHR) gain compared to existing methods from the literature.
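
    As a rough illustration of the model, not the paper's implementation, the following Python sketch fits a Poisson regressor with a Gaussian-process prior over the latent log-rates by simple MAP gradient ascent (the paper uses MCMC and Variational Bayes instead); the RBF kernel, its length-scale, and the step size are all assumptions made for the toy example.

```python
# Sketch of a Poisson-GP popularity model: GP prior over latent log-rates f,
# Poisson likelihood over observed request counts, MAP fit by gradient ascent.
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: each content has a 2-D feature vector; similar features should
# yield correlated request rates (the GP prior acts as the regularizer).
N = 30
X = rng.uniform(0, 1, size=(N, 2))          # content features
f_true = np.sin(3 * X[:, 0]) + X[:, 1]      # latent log-rates
y = rng.poisson(np.exp(f_true))             # observed request counts

def rbf_kernel(A, B, ell=0.3, var=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ell**2)

K = rbf_kernel(X, X) + 1e-6 * np.eye(N)     # jitter for numerical stability
K_inv = np.linalg.inv(K)

# MAP estimate of the latent log-rates f:
#   log p(f | y) = sum_i [ y_i f_i - exp(f_i) ] - 0.5 f^T K^{-1} f + const
f = np.zeros(N)
for _ in range(500):
    grad = y - np.exp(f) - K_inv @ f        # gradient of the log posterior
    f += 0.01 * grad

# Predict the popularity of an unseen content from its features alone,
# via the GP posterior mean k_*^T K^{-1} f.
x_new = np.array([[0.5, 0.5]])
k_star = rbf_kernel(x_new, X)
rate_new = np.exp(k_star @ K_inv @ f)
print(f"predicted request rate for new content: {rate_new[0]:.2f}")
```

    The last three lines illustrate the second advantage claimed in the abstract: an unseen content's rate is predicted purely from its feature similarity to seen contents, with no request history required.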

    Video Preprocessing Based on Human Perception for Telesurgery

    Video transmission plays a critical role in robotic telesurgery because of its high bandwidth and high quality requirements. The goal of this dissertation is to find a preprocessing method based on human visual perception for telesurgical video, so that when preprocessed image sequences are passed to the video encoder, bandwidth can be reallocated from non-essential surrounding regions to the region of interest, ensuring excellent image quality in critical regions (e.g., the surgical region). It can also be considered a quality control scheme that will gracefully degrade the video quality in the presence of network congestion. The proposed preprocessing method has two major parts. First, we propose a time-varying attention map whose value is highest at the gaze point and falls off progressively towards the periphery. Second, we propose adaptive spatial filtering whose parameters are adjusted according to the attention map. By adding visual adaptation to the spatial filtering, telesurgical video data can be compressed efficiently because our algorithm removes a high degree of visual redundancy. Our experimental results have shown that with the proposed preprocessing method, bandwidth can be reduced by more than half with no significant visual effect for the observer. We have also developed an optimal parameter selection algorithm, so that when network bandwidth is limited, the overall visual distortion after preprocessing is minimized.
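
    The two-part method lends itself to a compact sketch. The following Python code is a simplification with hypothetical parameters rather than the dissertation's actual attention map and filter design: it builds a Gaussian attention map centered at the gaze point and blends a sharp copy of the frame with a low-pass filtered one, so detail is preserved at the gaze point and suppressed toward the periphery before encoding.

```python
# Simplified foveation sketch: Gaussian attention map + blend of sharp and
# blurred frames (a stand-in for a continuously varying spatial filter).
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(frame, gaze, sigma_att=60.0, max_blur=6.0):
    """Blend sharp and blurred copies of `frame` using a Gaussian
    attention map centered at the (row, col) gaze point."""
    h, w = frame.shape
    rows, cols = np.mgrid[0:h, 0:w]
    d2 = (rows - gaze[0])**2 + (cols - gaze[1])**2
    attention = np.exp(-0.5 * d2 / sigma_att**2)   # 1 at gaze, -> 0 at periphery

    blurred = gaussian_filter(frame, sigma=max_blur)
    return attention * frame + (1.0 - attention) * blurred

# Example: foveate a synthetic 480x640 grayscale frame around its center.
frame = np.random.default_rng(2).uniform(0, 255, size=(480, 640))
out = foveate(frame, gaze=(240, 320))
```

    Blending only two filter levels is a deliberate simplification; the smooth attention map still yields a gradual transition, which is what lets the downstream encoder spend fewer bits on the blurred periphery.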