80 research outputs found

    SoftCast

    The focus of this demonstration is the performance of streaming video over the mobile wireless channel. We compare two schemes: the standard approach, which transmits an H.264/AVC-encoded stream over an 802.11-like PHY, and SoftCast -- a clean-slate design for wireless video in which the source transmits one video stream that each receiver decodes to a video quality commensurate with its specific instantaneous channel quality.

    SoftCast: Clean-slate Scalable Wireless Video

    Video broadcast and mobile video challenge the conventional wireless design. In broadcast and mobile scenarios the bit rate supported by the channel differs across receivers and varies quickly over time. The conventional design, however, forces the source to pick a single bit rate and degrades sharply when the channel cannot support the chosen bit rate. This paper presents SoftCast, a clean-slate design for wireless video in which the source transmits one video stream that each receiver decodes to a video quality commensurate with its specific instantaneous channel quality. To do so, SoftCast ensures that the samples of the digital video signal transmitted on the channel are linearly related to the pixels' luminance. Thus, when channel noise perturbs the transmitted signal samples, the perturbation naturally translates into approximation of the original video pixels. Hence, a receiver with a good channel (low noise) obtains a high-fidelity video, and a receiver with a bad channel (high noise) obtains a low-fidelity video. We implement SoftCast using the GNURadio software and the USRP platform. Results from a 20-node testbed show that SoftCast improves the average video quality (i.e., PSNR) across broadcast receivers in our testbed by up to 5.5 dB. Even for a single receiver, it eliminates video glitches caused by mobility and increases robustness to packet loss by an order of magnitude.
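
    To make the linear-mapping idea concrete, the following is a minimal illustrative sketch, not the full SoftCast design (which also involves transform coding and power allocation): pixel luminance values are scaled linearly into channel samples, additive channel noise is applied, and the receiver simply rescales, so lower channel SNR translates directly into lower video PSNR. NumPy, the toy frame and the chosen SNR values are assumptions for illustration only.

```python
import numpy as np

def awgn(samples, snr_db):
    """Add white Gaussian noise at the given SNR (in dB)."""
    power = np.mean(samples ** 2)
    noise_power = power / (10 ** (snr_db / 10))
    return samples + np.random.normal(0.0, np.sqrt(noise_power), samples.shape)

def psnr(original, received):
    mse = np.mean((original - received) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# Toy grayscale "frame" standing in for video luminance.
frame = np.random.randint(0, 256, (64, 64)).astype(np.float64)

# Linear mapping: channel samples are a scaled version of the pixels,
# so channel noise translates directly into pixel-level approximation.
scale = 0.1
tx = scale * frame.ravel()

for snr_db in (5, 15, 25):            # bad, medium and good channels
    rx = awgn(tx, snr_db)
    decoded = np.clip(rx / scale, 0, 255).reshape(frame.shape)
    print(f"channel SNR {snr_db:2d} dB -> video PSNR {psnr(frame, decoded):.1f} dB")
```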

    Differentially private publication of database streams via hybrid video coding

    While most anonymization technology available today is designed for static and small data, the current picture is of massive volumes of dynamic data arriving at unprecedented velocities. From the standpoint of anonymization, the most challenging type of dynamic data is data streams. However, while the majority of proposals deal with publishing either count-based or aggregated statistics about the underlying stream, little attention has been paid to the problem of continuously publishing the stream itself with differential privacy guarantees. In this work, we propose an anonymization method that can publish multiple numerical-attribute, finite microdata streams with high protection as well as high utility, the latter measured in terms of data distortion, delay and record reordering. Our method, which relies on the well-known differential pulse-code modulation scheme, adapts techniques originally intended for hybrid video encoding to favor and leverage dependencies among the blocks of the original stream and thereby reduce data distortion. The proposed solution is assessed experimentally on two of the largest data sets used by the research community working in data anonymization. Our extensive empirical evaluation shows the trade-off among privacy protection, data distortion, delay and record reordering, and demonstrates the suitability of adapting video-compression techniques to anonymize database streams.
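
    As a rough illustration of how a DPCM-style predictive loop can be combined with noise addition when publishing a numeric stream, the sketch below predicts each incoming value from the previously published (already perturbed) value and adds Laplace noise only to the prediction residual. The function name, the trivial predictor, and the sensitivity and epsilon values are illustrative assumptions; this is not the method proposed in the paper.

```python
import numpy as np

def dp_dpcm_publish(stream, epsilon, sensitivity):
    """Publish a numeric stream with DPCM-style prediction and Laplace noise.

    Each value is predicted from the previously *published* value; only the
    prediction residual is perturbed, and the receiver reconstructs by adding
    the noisy residual back to its own (identical) prediction.
    """
    published = []
    prediction = 0.0                      # both sides start from the same predictor
    for x in stream:
        residual = x - prediction
        noisy_residual = residual + np.random.laplace(0.0, sensitivity / epsilon)
        value = prediction + noisy_residual
        published.append(value)
        prediction = value                # predict the next sample from published data
    return np.array(published)

original = np.cumsum(np.random.normal(0, 1, 200))   # toy correlated stream
released = dp_dpcm_publish(original, epsilon=1.0, sensitivity=1.0)
print("mean absolute distortion:", np.mean(np.abs(original - released)))
```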

    Design and Optimization of Graph Transform for Image and Video Compression

    The main contribution of this thesis is the introduction of new methods for designing adaptive transforms for image and video compression. Exploiting graph signal processing techniques, we develop new graph construction methods targeted at image and video compression applications. In this way, we obtain a graph that is, at the same time, a good representation of the image and easy to transmit to the decoder. To do so, we investigate different research directions. First, we propose a new method for graph construction that employs innovative edge metrics, quantization and edge prediction techniques. Then, we propose to use a graph learning approach and we introduce a new graph learning algorithm targeted at image compression that defines the connectivities between pixels by taking into consideration the coding of the image signal and the graph topology in rate-distortion terms. Moreover, we also present a new superpixel-driven graph transform that uses clusters of superpixels as coding blocks and then computes the graph transform inside each region. In the second part of this work, we exploit graphs to design directional transforms. In fact, an efficient representation of the image directional information is extremely important in order to obtain high-performance image and video coding. In this thesis, we present a new directional transform, called the Steerable Discrete Cosine Transform (SDCT). This new transform can be obtained by steering the 2D-DCT basis in any chosen direction. Moreover, we can also use more complex steering patterns than a single pure rotation. In order to show the advantages of the SDCT, we present a few image and video compression methods based on this new directional transform. The obtained results show that the SDCT can be efficiently applied to image and video compression and that it outperforms the classical DCT and other directional transforms. Along the same lines, we also present a new generalization of the DFT, called the Steerable DFT (SDFT). Differently from the SDCT, the SDFT can be defined in one or two dimensions. The 1D-SDFT represents a rotation in the complex plane, while the 2D-SDFT performs a rotation in the 2D Euclidean space.
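
    One possible reading of "steering the 2D-DCT basis" can be sketched as follows: starting from the separable 2D-DCT basis of an n x n block, each pair of basis vectors with swapped frequency indices (k, l) and (l, k) is rotated by a common angle, which keeps the transform orthonormal. This is only an illustrative sketch with a single global angle and plain NumPy; the thesis derives the transform from graph considerations and also allows richer steering patterns.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    basis[0, :] /= np.sqrt(2.0)
    return basis

def steered_dct_basis(n, theta):
    """Rotate each pair of 2D-DCT basis vectors with swapped frequencies
    (k, l) and (l, k) by the angle theta, leaving the 'diagonal' ones fixed."""
    d = dct_matrix(n)
    # Separable 2D basis: the vector for frequency pair (k, l), flattened to length n*n.
    basis = {(k, l): np.outer(d[k], d[l]).ravel() for k in range(n) for l in range(n)}
    for k in range(n):
        for l in range(k + 1, n):
            u, v = basis[(k, l)], basis[(l, k)]
            basis[(k, l)] = np.cos(theta) * u + np.sin(theta) * v
            basis[(l, k)] = -np.sin(theta) * u + np.cos(theta) * v
    return np.stack([basis[(k, l)] for k in range(n) for l in range(n)])

block = np.random.rand(8, 8)
T = steered_dct_basis(8, np.deg2rad(30))      # basis steered by 30 degrees
coeffs = T @ block.ravel()                    # forward transform of one 8x8 block
reconstructed = (T.T @ coeffs).reshape(8, 8)  # inverse: the basis stays orthonormal
assert np.allclose(block, reconstructed)
```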

    State–of–the–art report on nonlinear representation of sources and channels

    This report consists of two complementary parts, related to the modeling of two important sources of nonlinearities in a communications system. In the first part, an overview of important past work related to the estimation, compression and processing of sparse data through the use of nonlinear models is provided. In the second part, the current state of the art on the representation of wireless channels in the presence of nonlinearities is summarized. In addition to the characteristics of the nonlinear wireless fading channel, some information is also provided on recent approaches to the sparse representation of such channels.

    Space Carving multi-view video plus depth sequences for representation and transmission of 3DTV and FTV contents

    3D videos have witnessed a growing interest in the last few years. Due to the recent development of stereoscopic and auto-stereoscopic displays, 3D videos provide a realistic depth perception to the user and allow a virtual navigation around the scene. Nevertheless, several technical challenges remain. Such challenges are related either to scene acquisition and representation on the one hand or to data transmission on the other. In the context of natural scene representation, research activities have been strengthened worldwide in order to handle these issues. The proposed methods for scene representation can be image-based, geometry-based, or methods combining both image and geometry. In this thesis, we take advantage of image-based representations, through the use of the Multi-view Video plus Depth (MVD) representation, in order to preserve the photorealism of the observed scene, and of geometry-based representations, using a triangular mesh model, in order to enforce the compactness of the proposed scene representation. We assume the provided depth maps to be reliable. Besides, the considered cameras are calibrated, so that the camera parameters are known, but the corresponding images are not necessarily rectified. We therefore consider the general framework where cameras can be either convergent or parallel. The contributions of this thesis are the following. First, a new volumetric framework is proposed in order to merge the input depth maps into a single and compact surface mesh. Second, a new algorithm for multi-texturing the surface mesh is proposed. Finally, we address the transmission issue and compare the performance of the proposed modeling scheme with the current standard MPEG-MVC, which is the state of the art in multi-view video compression.
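
    Since the thesis assumes calibrated cameras and reliable depth maps, a natural preliminary step before any volumetric fusion is to back-project each depth map into a 3D point cloud using the camera parameters. The sketch below shows only this standard step under an assumed world-to-camera convention; it is not the space-carving or meshing algorithm proposed in the thesis, and the intrinsics and depth values are made up for illustration.

```python
import numpy as np

def backproject_depth(depth, K, R, t):
    """Lift a depth map to 3D world points using calibrated camera parameters.

    depth : (H, W) depth in the camera frame
    K     : (3, 3) intrinsic matrix
    R, t  : world-to-camera rotation and translation (x_cam = R @ x_world + t)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)
    rays = np.linalg.inv(K) @ pixels                   # ray directions in the camera frame
    points_cam = rays * depth.reshape(1, -1)           # scale each ray by its depth
    points_world = R.T @ (points_cam - t.reshape(3, 1))
    return points_world.T                              # (H*W, 3)

# Toy example with an identity pose and a constant synthetic depth map.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
depth = np.full((480, 640), 2.0)
cloud = backproject_depth(depth, K, np.eye(3), np.zeros(3))
print(cloud.shape)   # (307200, 3) candidate samples for volumetric fusion
```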

    Depth-based Multi-View 3D Video Coding
