    Sparse image approximation with application to flexible image coding

    Natural images are often modeled as sets of piecewise-smooth regions. Region edges, which correspond to the contours of objects, thus become the main information of the signal. Contours are smooth functions along the direction of the edge and irregular in the perpendicular direction. Modeling edges with the minimum possible number of terms is of key importance for numerous applications, such as image coding, segmentation, or denoising. Standard separable bases fail to provide sparse enough representations of contours, because such bases cannot see the regularity of edges. Detecting this regularity calls for a new method based on (possibly redundant) sets of basis functions able to capture the geometry of images.

    This thesis first presents a study of the features that basis functions should have in order to provide sparse representations of a piecewise-smooth image. The study emphasizes the need for edge-adapted basis functions, capable of accurately capturing the local orientation and anisotropic scaling of image structures. The need for different anisotropy degrees and orientations in the basis function set leads to the use of redundant dictionaries. However, redundant dictionaries do not yield unique sparse image decompositions, and among all possible decompositions of a signal in a redundant dictionary, only the sparsest is wanted. Several algorithms exist for finding sparse decompositions over redundant dictionaries, but most of them cannot guarantee that the optimal approximation has been recovered. To cope with this problem, a mathematical study of the properties of sparse approximations is performed, from which a test to check whether a given sparse approximation is the sparsest is derived.

    The second part of this thesis presents a novel image approximation scheme based on a redundant dictionary, which yields a good approximation of an image with a number of terms much smaller than the dimension of the signal. The dictionary combines anisotropically refined, rotated wavelet-like mother functions with Gaussians, and an efficient Full Search Matching Pursuit algorithm is designed to perform the image decomposition over it. Finally, a geometric image coding scheme based on the image approximated over this anisotropic, rotated dictionary is designed, and its coding performance is studied. Coefficient quantization proves to be of crucial importance in the design of a Matching Pursuit based coding scheme; thus, a quantization scheme for the MP coefficients is designed, based on the theoretical energy upper bound of the MP algorithm and on empirical observations of the coefficient distribution and evolution. Thanks to this quantization, our image coder provides low to medium bit-rate image approximations, while allowing on-the-fly resolution switching and several other affine image transformations to be performed directly in the transformed domain.
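    As a hedged illustration of the greedy decomposition described above, the sketch below implements plain Matching Pursuit over a generic redundant dictionary. The dictionary `D` of unit-norm column atoms and the stopping parameters are illustrative stand-ins, not the anisotropically refined and rotated dictionary of the thesis.

```python
import numpy as np

def matching_pursuit(x, D, n_terms=50, tol=1e-6):
    """Greedy Matching Pursuit of signal x over dictionary D.

    x : (n,) signal; D : (n, k) array whose columns are unit-norm atoms.
    Returns the selected (atom index, coefficient) pairs and the residual.
    """
    residual = x.astype(float).copy()
    decomposition = []
    for _ in range(n_terms):
        correlations = D.T @ residual              # inner products with all atoms
        best = int(np.argmax(np.abs(correlations)))
        coeff = correlations[best]
        if abs(coeff) < tol:                       # residual energy exhausted
            break
        decomposition.append((best, coeff))
        residual -= coeff * D[:, best]             # remove the selected component
    return decomposition, residual

# Toy usage with a random overcomplete dictionary and a 2-sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] + 0.5 * D[:, 100]
terms, res = matching_pursuit(x, D, n_terms=10)
print(terms[:2], np.linalg.norm(res))
```

    The residual energy decays monotonically with each selected atom, which is the property that the theoretical energy upper bound behind the coefficient quantization scheme builds on.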

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, favoring closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily at students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that fall into five areas: image processing, speech processing, communication systems, time-series analysis, and educational packages. The chapters are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.

    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. Because the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly applied to ever more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in DWT algorithms and applications. It covers a wide range of methods for constructing DWTs (e.g., lifting, shift invariance, multi-scale analysis). The chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach to edge detection, low bit-rate image compression, a low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters comprise both tutorial and highly advanced material, so the book is intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
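    As one simple instance of the lifting construction that the book covers, the following sketch performs a one-level Haar DWT with split, predict, and update steps; it is a minimal illustration under our own choice of wavelet, not code from any chapter.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar DWT via lifting (split, predict, update).

    x must have even length; returns (approximation, detail) coefficients.
    """
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict odd samples from even neighbours
    approx = even + detail / 2.0   # update so that the running mean is preserved
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps exactly, in reverse order."""
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2.0, 4.0, 6.0, 8.0])
a, d = haar_lifting_forward(x)
print(np.allclose(haar_lifting_inverse(a, d), x))  # True: perfect reconstruction
```

    Because every lifting step is trivially invertible, the same structure (with rounding added) yields the reversible integer-to-integer transforms used in lossless coding.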

    Nonlinear approximation with redundant multi-component dictionaries

    The problem of efficiently representing and approximating digital data is an open challenge of paramount importance for many applications. This dissertation focuses on the approximation of natural signals as an organized combination of mutually connected elements, preserving and at the same time benefiting from their inherent structure. This is done by decomposing a signal onto a multi-component, redundant collection of functions (a dictionary) built as the union of several subdictionaries, each designed to capture a specific behavior of the signal. In this way, many alternatives become available beyond representing signals as superpositions of sinusoids or wavelets. In addition, since the dictionaries we are interested in are overcomplete, the decomposition is non-unique. This gives us the possibility of adaptation: among the many possible representations, we can choose the one that best fits our purposes. On the other hand, it also requires more complex approximation techniques, whose theoretical decomposition capacity and computational load have to be carefully studied. In general, we aim at representing a signal with few, meaningful components. If we can represent a piece of information using only a few elements, those elements capture its main characteristics, compacting the energy carried by the signal into the smallest number of terms. In this framework, the work also proposes analysis methods that take into account the a priori information available when decomposing a structured signal. Indeed, a natural signal is not merely an array of numbers, but the expression of a physical event about which we usually have deep knowledge. We therefore claim that it is worth exploiting this structure, since doing so not only helps the analysis process but also makes the representation of the information more accessible and meaningful. The study of adaptive image representation inspired and gave birth to this work, and we often refer to images and visual information throughout the dissertation. However, the proposed approximation setting extends to many different kinds of structured data, and examples are given involving videos and electrocardiogram signals. An important part of this work consists of practical applications: we first provide promising results for image and video compression, then address the problem of signal denoising, and finally present encouraging achievements in the field of source separation.
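    To make the multi-component idea concrete, here is a hedged sketch that greedily decomposes a signal over the union of two illustrative subdictionaries, one capturing smooth oscillatory behaviour (DCT atoms) and one capturing transients (spikes); the choice of components is an assumption for illustration, not the dictionaries of the dissertation.

```python
import numpy as np
from scipy.fft import idct

n = 128
dct_atoms = idct(np.eye(n), norm="ortho", axis=0)   # smooth, oscillatory component
spike_atoms = np.eye(n)                              # localized, transient component
D = np.hstack([dct_atoms, spike_atoms])              # multi-component dictionary

x = dct_atoms[:, 5] + 3.0 * spike_atoms[:, 40]       # smooth part plus a transient

residual, picks = x.copy(), []
for _ in range(4):
    c = D.T @ residual
    j = int(np.argmax(np.abs(c)))
    # atoms with index < n belong to the DCT subdictionary
    picks.append(("dct" if j < n else "spike", j % n, round(float(c[j]), 3)))
    residual -= c[j] * D[:, j]
print(picks)
```

    The label attached to each selected atom shows how the decomposition separates the signal into its structural components, which is what makes such representations useful for tasks like source separation.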

    A reduced reference video quality assessment method for provision as a service over SDN/NFV-enabled networks

    The proliferation of multimedia applications and services has generated a noteworthy upsurge in network traffic carrying video content and has created the need for trustworthy service quality assessment methods. Network Function Virtualization (NFV), Software Defined Networking (SDN), and 5G mobile networks equipped with small cells currently occupy a predominant position among the technological trends in telecommunication networks. In this context, Video Quality Assessment (VQA) methods are a very useful tool for both content providers and network operators: they help to understand how users perceive quality, and thus to study the feasibility of potential services and to adapt the available network resources to satisfy user requirements.
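    As a hedged illustration of the reduced-reference idea, the sketch below transmits only a few per-frame statistics alongside the stream and scores the received video by comparing them; the chosen features and the score mapping are illustrative assumptions, not the method developed in the thesis.

```python
import numpy as np

def rr_features(frame):
    """Low-rate reduced-reference features: mean, variance, gradient energy."""
    gy, gx = np.gradient(frame.astype(float))
    return np.array([frame.mean(), frame.var(), np.mean(gx**2 + gy**2)])

def rr_quality_score(ref_feats, rec_feats, eps=1e-8):
    """Map relative feature distortion to a score in (0, 1]; 1 means identical."""
    rel = np.abs(ref_feats - rec_feats) / (np.abs(ref_feats) + eps)
    return float(1.0 / (1.0 + rel.mean()))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (72, 88)).astype(float)
degraded = ref + rng.normal(0.0, 12.0, ref.shape)   # simulated channel impairment
print(rr_quality_score(rr_features(ref), rr_features(degraded)))
```

    Because only the small feature vector travels with the content, a metric of this kind can run inside an SDN/NFV monitoring function without access to the full reference video.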

    Optimized protection of streaming media authenticity

    Ph.D. thesis (Doctor of Philosophy).

    Resource allocation and adaptive scheduling for scalable video streaming

    Recent advances in areas such as video compression and network architectures allow for the deployment of novel video distribution applications with the potential to provide ubiquitous media access to end users. In recent years, applications based on audio and video streaming have become immensely popular, and the Internet has become the most widely used vector for media content distribution thanks to its high availability and connectivity. However, the Internet infrastructure is not adapted to the specific characteristics of multimedia traffic, which tolerates some losses but imposes strict delay and high bandwidth requirements. In this thesis, our goal is to improve the efficiency of media delivery over the existing network architecture. To do so, we consider the delivery of scalable video in three main scenarios: one-to-one client-server architectures, one-to-many broadcasting architectures, and many-to-one distributed streaming architectures.

    First, we propose a distributed, media-friendly rate allocation algorithm for the delivery of both finely and coarsely scalable video streams. Unlike existing solutions, our algorithm explicitly takes the characteristics of media streams into consideration and therefore provides rate allocations that better fit their heterogeneous characteristics (see the simplified sketch below). We outline an implementation that is robust to random feedback delays and permits a scalable deployment of the algorithm. The computed rate allocation achieves network stability and high bandwidth utilization, and it moreover maximizes the average received quality over all streams delivered in the network. For the transmission of coarsely layered streams, we derive conditions on the encoding rates of the video layers; these conditions depend on the allowed end-to-end delay and on the rate allocation algorithm that controls the sending rates, and they allow us to take full advantage of the allocated transmission rates.

    Second, we investigate the problem of jointly addressing the needs of multiple receivers that consume different versions of a layered media stream in a broadcasting scenario. We provide optimal scheduling algorithms that jointly optimize the playback delay and the buffer occupancy at all of these receivers when the channel is known. Furthermore, we analyze low-complexity heuristic optimization techniques that provide close-to-optimal results when only limited channel knowledge is available.

    Finally, we explore the possibility of exploiting the inherent network diversity provided by the Internet infrastructure. In particular, we consider media delivery schemes where multiple senders are available for the transmission of a scalable video stream to a single client. Such an architecture is referred to as a distributed streaming architecture; it has the benefit of aggregating multiple unreliable channels into a single, more robust channel with high availability. Through the use of Fountain codes, we are able to transform the distributed streaming problem into a rate allocation problem of lower complexity. The solution to this problem is shown to depend not only on the average packet loss rate, but also on the average length of the packet loss bursts observed on each of the available channels. The coding scheme that we suggest enables our system to adapt the streamed content to the network characteristics, as well as to the needs of the receiving client.
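    As a hedged, much-simplified stand-in for the media-friendly rate allocation discussed above, the sketch below maximizes a sum of weighted logarithmic utilities over a single shared bottleneck, for which the optimum has a closed form; the logarithmic quality model and the weights are illustrative assumptions, not the distributed algorithm of the thesis.

```python
import numpy as np

def media_aware_allocation(weights, capacity):
    """Maximize sum_i w_i * log(r_i) subject to sum_i r_i = capacity.

    Setting the derivatives w_i / r_i equal across streams (Lagrangian
    stationarity) gives the closed form r_i = capacity * w_i / sum(w).
    """
    w = np.asarray(weights, dtype=float)
    return capacity * w / w.sum()

# Three scalable streams with heterogeneous rate-quality behaviour
# sharing a 6000 kbit/s bottleneck.
print(media_aware_allocation([1.0, 2.0, 3.0], capacity=6000.0))
# -> [1000. 2000. 3000.]
```

    Streams whose quality improves faster with rate (larger weights) receive proportionally more of the bottleneck, which is the sense in which an allocation can fit the heterogeneous characteristics of media streams.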

    High ratio wavelet video compression through real-time rate-distortion estimation.

    Thesis (M.Sc.Eng.), University of Natal, Durban, 2003.

    The success of the wavelet transform in the compression of still images has prompted an expanding effort to apply this transform to the compression of video. Most existing video compression methods incorporate techniques from still image compression, such techniques being abundant, well defined, and successful. This dissertation commences with a thorough review and comparison of wavelet still image compression techniques, followed by an examination of wavelet video compression techniques. Since the most effective current video compression systems build on the DCT-based framework, a comparison between these and the wavelet techniques is also given. Based on this review, the dissertation then presents a new, low-complexity wavelet video compression scheme. Noting from a complexity study that the generation of temporally decorrelated residual frames represents a significant computational burden, this scheme uses the simplest such technique: difference frames. In the case of local motion, these difference frames exhibit strong spatial clustering of significant coefficients. A simple spatial syntax is created by splitting the difference frame into tiles, so that the spatial clustering can be exploited through adaptive bit allocation between the tiles. This is the central idea of the method. In order to minimize the total distortion of the frame, the scheme uses the ρ-domain rate-distortion estimation scheme with global numerical optimization to predict the optimal distribution of bits between tiles (see the sketch below). Each tile is then independently wavelet transformed and compressed using the SPIHT technique. Throughout the design process, computational efficiency was the design imperative, leading to a real-time, software-only video compression scheme. The scheme is finally compared to both the current video compression standards and the leading wavelet schemes from the literature in terms of computational complexity and visual quality. For local-motion scenes, the proposed algorithm executes approximately an order of magnitude faster than these methods while producing output of similar quality. Its moderate memory and computational requirements make it suitable for implementation in mobile and embedded devices.
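    The ρ-domain model referred to above predicts the coding rate of a tile as a linear function of the fraction ρ of coefficients quantized to zero, R ≈ θ(1 − ρ). The sketch below uses that model to pick, per tile, the smallest quantization step whose predicted bits fit the tile's budget; the constant θ and the synthetic Laplacian coefficients are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def rho(coeffs, q):
    """Fraction of wavelet coefficients quantized to zero at step q."""
    return float(np.mean(np.abs(coeffs) < q))

def estimated_bits(coeffs, q, theta):
    """rho-domain model: bits are linear in the share of nonzero coefficients."""
    return theta * (1.0 - rho(coeffs, q)) * coeffs.size

def pick_quantizer(coeffs, bit_budget, theta, q_grid):
    """Smallest quantization step whose predicted bits fit the tile budget."""
    for q in q_grid:                      # q_grid sorted ascending
        if estimated_bits(coeffs, q, theta) <= bit_budget:
            return q
    return q_grid[-1]

rng = np.random.default_rng(2)
tile = rng.laplace(0.0, 4.0, size=32 * 32)   # stand-in difference-frame tile
q = pick_quantizer(tile, bit_budget=1500.0, theta=6.5,
                   q_grid=np.linspace(0.5, 40.0, 80))
print(round(q, 2), round(estimated_bits(tile, q, 6.5), 1))
```

    Because ρ is cheap to evaluate at many candidate steps, this kind of model supports the real-time, software-only bit allocation between tiles that the dissertation targets.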

    Recent Trends in Communication Networks

    In recent years there have been many developments in communication technology that have greatly enhanced the computing power of small, handheld, resource-constrained mobile devices. Successive generations of communication technology have evolved, leading to new research on communicating large volumes of data over different transmission media and on the design of new communication protocols. Another direction of research concerns secure and error-free communication between sender and receiver despite the possible presence of an eavesdropper. To meet the communication requirements of huge amounts of multimedia streaming data, considerable research has been carried out on the design of suitable overlay networks. The book addresses new research techniques that have evolved to handle these challenges.