30 research outputs found

    Low bit-rate image sequence coding


    An examination of block motion compensation algorithms for MPEG-2 and prediction of bit rates from video sequence measurements

    This dissertation examines two problems: finding a block motion compensation algorithm that is optimal in both performance and speed, and predicting the performance of an MPEG-2 encoder on complex sequences. An optimal motion compensation algorithm can lead to optimal temporal compression, and for fixed bit-rate encoders, methods that predict the bit rate from properties of the video sequence can lead to optimal use of the transmission bandwidth. The study of motion compensation begins with a survey of previous algorithms. Historically, one of three functions is used to evaluate a candidate motion vector: Mean Square Error (MSE), Mean Absolute Difference (MAD) or cross-correlation; the ideal motion vector is the one that minimises MSE or MAD, or maximises cross-correlation. Sub-sampling, hierarchical and feature-domain methods are examined. Finally, some new algorithms are proposed and further areas of research suggested. The new algorithms perform close to optimum, particularly those searching feature space.
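The matching criteria named above are easy to make concrete. Below is a minimal full-search sketch in Python (not taken from the dissertation; the function name, the ±7 search window and the single-channel frame layout are illustrative assumptions) that evaluates every candidate vector with MAD, with MSE shown as a one-line swap:

```python
import numpy as np

def full_search_mad(cur_block, ref_frame, top, left, radius=7):
    """Exhaustive block matching: try every displacement in a square window
    and keep the one minimising the chosen cost (MAD here, MSE in comments).
    Cross-correlation would instead be *maximised* over the same window."""
    cur = cur_block.astype(float)
    ref = ref_frame.astype(float)
    n = cur.shape[0]
    h, w = ref.shape
    best_mv, best_cost = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > h or x + n > w:
                continue                              # window leaves the frame
            cand = ref[y:y + n, x:x + n]
            cost = np.mean(np.abs(cur - cand))        # MAD criterion
            # cost = np.mean((cur - cand) ** 2)       # MSE criterion instead
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```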

    Image compression techniques using vector quantization


    Energy efficient enabling technologies for semantic video processing on mobile devices

    Semantic object-based processing will play an increasingly important role in future multimedia systems due to the ubiquity of digital multimedia capture/playback technologies and increasing storage capacity. Although the object-based paradigm has many undeniable benefits, numerous technical challenges remain before such applications become pervasive, particularly on computationally constrained mobile devices. A fundamental issue is the ill-posed problem of semantic object segmentation. Furthermore, on battery-powered mobile computing devices, the additional algorithmic complexity of semantic object-based processing compared to conventional video processing is highly undesirable from both a real-time operation and a battery-life perspective. This thesis tackles these issues by first constraining the solution space and focusing on the human face as a primary semantic concept of use to users of mobile devices. A novel face detection algorithm is proposed, designed from the outset to be amenable to offloading from the host microprocessor to dedicated hardware, thereby providing real-time performance and reducing power consumption. The algorithm uses an Artificial Neural Network (ANN) whose topology and weights are evolved via a Genetic Algorithm (GA). The computational burden of ANN evaluation is offloaded to a dedicated hardware accelerator capable of processing any evolved network topology. Efficient arithmetic circuitry, which leverages modified Booth recoding, column compressors and carry-save adders, is adopted throughout the design. To tackle the increased computational cost of object tracking and object-based shape encoding, a novel energy-efficient binary motion estimation architecture is proposed, which reduces energy by minimising the redundant operations inherent in the binary data. Both architectures are shown to compare favourably with the relevant prior art.
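As a rough illustration of why binary motion estimation saves energy, here is a hedged Python sketch (generic, not the thesis's architecture: the local-mean one-bit transform and all function names are assumptions). On binarised frames the matching cost reduces to an XOR plus a population count:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(frame, window=8):
    """Binarise a frame against a local mean; an assumed stand-in for the
    filter-based one-bit transforms commonly used before binary ME."""
    return frame > uniform_filter(frame.astype(float), size=window)

def binary_block_cost(cur_bits, cand_bits):
    """Binary matching cost: the number of differing bits. XOR followed by
    a population count replaces the multi-bit subtract-and-accumulate
    datapath of conventional SAD, which is where the energy saving lies."""
    return np.count_nonzero(cur_bits ^ cand_bits)
```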

    On the Stability of Region Count in the Parameter Space of Image Analysis Methods

    In this dissertation a novel bottom-up computer vision approach is proposed, based upon quantifying the stability of the region count in a multi-dimensional parameter scale-space. The stability analysis derives from the properties of flat areas in the region-count space generated by bottom-up algorithms: thresholding with region growing, hysteresis thresholding, and variance-based region growing. The parameters can be thresholds, region-growth criteria, intensity statistics and other low-level parameters. The advantages and disadvantages of top-down, bottom-up and hybrid computational models are discussed. The scale-space, perceptual organization and clustering approaches in computer vision are also analyzed, and the differences between our approach and these approaches are clarified. An overview of the stable-count idea and implementations of three algorithms derived from it are presented, and the algorithms are applied to real-world images as well as simulated signals. Three experiments based upon this framework of stable region count were developed, using a flower detector, a peak detector and a retinal-image lesion detector, respectively, to process images and signals. The results of all three experiments suggest that the framework can address different image and signal problems and provide satisfactory solutions. Finally, future research directions and improvements are proposed.
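A minimal sketch of the stable-count idea for the thresholding case (illustrative only; the plateau rule and all names are assumptions, and the dissertation's region-growing and hysteresis variants are not reproduced):

```python
import numpy as np
from scipy.ndimage import label

def region_count_curve(image, thresholds):
    """Number of connected foreground regions at each threshold setting."""
    return [label(image > t)[1] for t in thresholds]

def stable_plateaus(counts, min_run=5):
    """Plateaus where the region count stays flat for at least `min_run`
    consecutive parameter steps; long plateaus are read as stable,
    perceptually meaningful segmentations."""
    plateaus, run = [], 1
    for i in range(1, len(counts)):
        if counts[i] == counts[i - 1]:
            run += 1
            continue
        if run >= min_run:
            plateaus.append((counts[i - 1], run))
        run = 1
    if run >= min_run:
        plateaus.append((counts[-1], run))
    return plateaus

# e.g. stable_plateaus(region_count_curve(img, np.linspace(0, 255, 128)))
```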

    Video post processing architectures


    Analog parallel processor solutions for video encoding

    This thesis deals with Cellular Nonlinear Network (CNN) analog parallel processor networks and their implementation within current video coding standards. The target applications are low-power video encoders within 3rd-generation mobile terminals, whose video codecs are defined by either the MPEG-4/H.263 or the H.264 standard. All of these standards are based on the block-based hybrid approach. As block-based motion estimation (ME) is responsible for most of the power consumption of such hybrid video encoders, this thesis deals mostly with low-power ME implementations, introducing low-power solutions at both the algorithmic and hardware levels. On the algorithmic level, the introduced implementations are derived from a segmentation algorithm which has previously been partly realized. The first algorithm reduces the computational complexity of ME within an object-based MPEG-4 encoder, enabling a 60% drop in the power consumption of Full Search ME. The second algorithm calculates a near-optimal block-size partition for H.264 motion estimation; with this algorithm, the computationally complex Lagrange optimization in H.264 ME is not required (the cost function it avoids is sketched after this abstract). The third algorithm reduces the shape bit-rate of an object-based MPEG-4 encoder. On the hardware level, a CNN-type ME architecture is introduced, including the connections and circuitry needed to fully realize block-based ME. The analog ME implemented with this architecture consumes less power than comparable digital realizations, and a 9×9 test chip has also been realized. Additionally, a digital predictive ME realization that takes advantage of the introduced partition algorithm was implemented; although its IC layout was drawn, the design was verified on an FPGA.
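For context, the "computationally complex Lagrange optimization" that the partition algorithm avoids is the exhaustive rate-constrained search sketched below (a generic H.264-style sketch, not from the thesis; the lambda-QP relation follows the common reference-software heuristic and is an assumption):

```python
def me_lagrangian_cost(sad, mv_bits, qp):
    """Rate-constrained ME cost J = D + lambda_motion * R, evaluated for
    every candidate vector and partition in a conventional H.264 encoder.
    The lambda-QP relation below is the reference-software heuristic."""
    lambda_mode = 0.85 * 2 ** ((qp - 12) / 3)
    lambda_motion = lambda_mode ** 0.5        # pairs with SAD distortion
    return sad + lambda_motion * mv_bits

def best_partition(costs_by_size):
    """Exhaustive partition choice the thesis's algorithm sidesteps, e.g.
    costs_by_size = {'16x16': j0, '16x8': j1, '8x16': j2, '8x8': j3}."""
    return min(costs_by_size, key=costs_by_size.get)
```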

    Video object segmentation for future multimedia applications

    An efficient representation of two-dimensional visual objects is specified by an emerging audiovisual compression standard known as MPEG-4. It incorporates the advantages of segmentation-based video compression (whereby objects are encoded independently, facilitating content-based functionalities) as well as the advantages of more traditional block-based approaches (such as low delay and compression efficiency). What is not specified, however, is the method of extracting semantic objects from a scene, i.e. the video segmentation task. An accurate, robust and flexible solution to this task is essential to enable the future multimedia applications that MPEG-4 makes possible. Two categories of video segmentation approaches can be identified: supervised and unsupervised. A representative set of unsupervised approaches is discussed. These approaches are found to be suitable for real-time MPEG-4 applications, but not for off-line applications which require very accurate segmentations of entire semantic objects, because an automatic segmentation process cannot solve the ill-posed problem of extracting semantic meaning from a scene. Supervised segmentation incorporates user interaction so that semantic objects in a scene can be defined, and a representative set of supervised approaches with varying degrees of interaction is discussed. Three new approaches to the problem, each more sophisticated than the last, are presented by the author. The most sophisticated is an object-based approach in which an automatic segmentation and tracking algorithm performs a segmentation of a scene in terms of the semantic objects defined by the user. The approach relies on maximum-likelihood estimation of the parameters of mixtures of multimodal multivariate probability distribution functions, and is an enhanced and modified version of an existing approach, yielding more sophisticated object modelling. The segmentation results obtained are comparable to those of existing approaches and in many cases better. It is concluded that the author's approach is ideal as a content extraction tool for future off-line MPEG-4 applications.
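A hedged sketch of the mixture-based, user-initialised segmentation idea (illustrative only: the thesis estimates mixtures of multimodal multivariate PDFs by maximum likelihood, for which plain Gaussian mixtures and the scikit-learn API stand in here; all names are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_with_mixtures(features, fg_seed, bg_seed, components=3):
    """Fit one mixture model per user-defined class from seed pixels
    (e.g. scribbles), then label every pixel by maximum likelihood.

    features : (H, W, D) per-pixel feature vectors, e.g. colour + position
    fg_seed, bg_seed : (H, W) boolean masks from the user's interaction"""
    h, w, d = features.shape
    X = features.reshape(-1, d)
    fg = GaussianMixture(n_components=components).fit(X[fg_seed.ravel()])
    bg = GaussianMixture(n_components=components).fit(X[bg_seed.ravel()])
    # the per-pixel log-likelihood ratio decides each label
    return (fg.score_samples(X) > bg.score_samples(X)).reshape(h, w)
```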

    Selected topics on distributed video coding

    Distributed Video Coding (DVC) is a new paradigm for video compression based on the information-theoretic results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). While conventional coding has a rigid complexity allocation, with most of the complex tasks performed at the encoder, DVC enables a flexible complexity allocation between encoder and decoder. The most novel and interesting case is low-complexity encoding with complex decoding, the opposite of conventional coding. While the latter suits applications where the cost of the decoder is more critical than that of the encoder, DVC opens the door to a new range of applications where low-complexity encoding is required and the decoder's complexity is not critical, a scenario made relevant by the spread of small, battery-powered multimedia mobile devices. Further, since DVC operates as a reversed-complexity scheme compared to conventional coding, it also enables low-complexity encoding and decoding at both ends by transcoding between DVC and conventional coding: low-complexity encoding is performed with DVC at one end, and the resulting stream is decoded and conventionally re-encoded to enable low-complexity decoding at the other end.

    Multiview video is attractive for a wide range of applications, such as free-viewpoint television, which lets the viewer choose the viewpoint from which the scene is rendered, and monitoring in video surveillance. Its increased use is mainly due to improvements in video technology and the reduced cost of cameras. While a conventional multiview codec tries to exploit the correlation among the different cameras at the encoder, DVC allows correlated video sources to be encoded separately, so no communication between the cameras is required; this is an advantage, since inter-camera communication adds delay and requires complex networking. Another appealing feature of DVC is that it is built on a statistical framework and behaves as a natural joint source-channel coding solution, which yields improved error resilience compared to conventional coding. Further, DVC-based scalable codecs do not require deterministic knowledge of the lower layers: the enhancement layers are completely independent of the base-layer codec. This codec-independent scalability offers high flexibility in how the various layers are distributed in a network.

    This thesis addresses the following topics. First, the theoretical foundations of DVC and the practical DVC scheme used in this research are presented, and the potential applications of DVC are outlined. Since DVC-based schemes compress parts of the data with conventional coding and the rest in a distributed fashion, different conventional codecs are compared in terms of compression efficiency on a rich set of sequences, with the compression parameters of each codec tuned for its best performance. Further, DVC tools for improved Side Information (SI) and Error Concealment (EC) are introduced for monoview DVC using a partially decoded frame.
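As an illustration of the side-information step, here is a hedged sketch of SI generation by motion-compensated interpolation between two key frames, a common DVC baseline (not the scheme used in this thesis; the block size, search range, symmetric-search simplification and the assumption that frame dimensions are multiples of the block size are all illustrative):

```python
import numpy as np

def side_information(key_prev, key_next, block=8, radius=3):
    """For each block of the missing frame, find the symmetric displacement
    that best matches the two key frames against each other, then average
    the two matched blocks. Practical DVC decoders refine this with
    half-pel search, vector smoothing, etc."""
    h, w = key_prev.shape
    prev, nxt = key_prev.astype(float), key_next.astype(float)
    si = np.empty((h, w))
    for ty in range(0, h, block):
        for tx in range(0, w, block):
            best, pair = np.inf, None
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y0, x0 = ty - dy, tx - dx   # block in the past key frame
                    y1, x1 = ty + dy, tx + dx   # block in the future key frame
                    if (min(y0, x0, y1, x1) < 0 or y0 + block > h
                            or y1 + block > h or x0 + block > w
                            or x1 + block > w):
                        continue
                    a = prev[y0:y0 + block, x0:x0 + block]
                    b = nxt[y1:y1 + block, x1:x1 + block]
                    cost = np.abs(a - b).sum()
                    if cost < best:
                        best, pair = cost, (a, b)
            si[ty:ty + block, tx:tx + block] = 0.5 * (pair[0] + pair[1])
    return si
```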
    The improved SI yields a significant gain in reconstruction quality for video with high activity and motion: the erroneous motion vectors are re-estimated using the partially decoded frame to improve the SI quality, and the improved SI is then used to enhance the reconstruction of the final decoded frame. The introduced spatio-temporal EC improves the quality of decoded video when packets are received in error, outperforming both purely spatial and purely temporal EC, as well as error-concealed conventional coding in different modes.

    Multiview DVC is then studied in terms of SI generation, which is what differentiates it from the monoview case. Different multiview prediction techniques for SI generation are described and compared in terms of prediction quality, complexity and compression efficiency. A technique for iterative multiview SI is introduced, where the final SI is used in an enhanced reconstruction process; the iterative SI outperforms the other SI generation techniques, especially for high-motion content. Finally, fusion techniques combining temporal and inter-view side information are introduced, improving the performance of multiview DVC over monoview coding.

    DVC is also used to enable scalability for image and video coding. Since DVC is based on a statistical framework, the base and enhancement layers are completely independent, giving the codec-independent scalability property. The introduced scalable DVC schemes show good robustness to errors, with the quality of decoded video degrading steadily as the error rate increases, whereas conventional coding exhibits a cliff effect, its performance dropping dramatically beyond a certain error rate. Further, privacy protection is addressed for DVC by transform-domain scrambling (a generic sketch follows this abstract), which alters regions of interest in the video so that the scene remains understandable while privacy is preserved. The proposed scrambling techniques provide a good level of security without impairing the performance of the DVC scheme relative to its unscrambled counterpart; this is particularly attractive for video surveillance, one of the most promising applications of DVC.

    Finally, a practical DVC demonstrator built during this research is described, along with its main requirements and observed limitations. It is set up as close as possible to a complete, realistic application scenario, showing that a complete end-to-end practical DVC system can be implemented relying only on realistic assumptions. Even though DVC currently remains inferior to state-of-the-art conventional coding in compression efficiency, its strengths lie in its good error resilience and its codec-independent scalability. DVC therefore offers promising possibilities for video compression where transmission over error-prone channels is required, a case in which it significantly outperforms conventional coding.
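The transform-domain scrambling mentioned above can be illustrated with a generic sign-flipping sketch (an assumption-laden stand-in, not the thesis's exact scheme): pseudo-randomly inverting the signs of AC coefficients in each 8×8 DCT block renders the region unintelligible, and because the operation is its own inverse, applying it again with the same seed restores the content.

```python
import numpy as np
from scipy.fft import dctn, idctn

def scramble_roi(frame, roi, seed=0xC0DE):
    """Flip the signs of AC coefficients in each 8x8 DCT block of the
    region of interest. Running this twice with the same seed undoes the
    scrambling (up to floating-point rounding)."""
    y0, x0, y1, x1 = roi                       # ROI corners, multiples of 8 assumed
    out = frame.astype(float).copy()
    rng = np.random.default_rng(seed)
    for by in range(y0, y1, 8):
        for bx in range(x0, x1, 8):
            c = dctn(out[by:by + 8, bx:bx + 8], norm='ortho')
            signs = rng.integers(0, 2, size=(8, 8)) * 2 - 1  # +/-1 per coefficient
            signs[0, 0] = 1                    # keep the DC coefficient intact
            out[by:by + 8, bx:bx + 8] = idctn(c * signs, norm='ortho')
    return out
```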