2 research outputs found

    Three-dimensional video coding on mobile platforms

    Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Sciences of Bilkent University, 2009. Thesis (Master's) -- Bilkent University, 2009. Includes bibliographical references (leaves 83-87). Author: Bal, Can (M.S.).

    With the evolution of wireless communication technologies and the multimedia capabilities of mobile phones, three-dimensional (3D) video technologies are expected to be adopted on mobile phones soon. This raises the problem of choosing the best 3D video representation for mobile platforms and the most efficient coding method for that representation. Since the latest 2D video coding standard, H.264/MPEG-4 AVC, provides better coding efficiency than its predecessors, coding methods for the most common 3D video representations are based on it. The most common representations are multi-view video, video plus depth, multi-view video plus depth, and layered depth video. For mobile platforms we selected conventional stereo video (CSV), a special case of multi-view video, because it is the simplest of the available representations. To determine the best coding method for CSV, we compared simulcast coding, multi-view coding (MVC), and mixed-resolution stereoscopic coding (MRSC) without inter-view prediction in subjective tests using simple coding schemes. MVC provided the best visual quality on our testbed, but MRSC without inter-view prediction remained promising for some of the test sequences, especially at low bit rates.

    We then adapted the Joint Video Team's reference multi-view decoder to run on the ZOOM OMAP34x Mobile Development Kit (MDK). Initial decoding tests on the MDK reached about four stereo frames per second at a resolution of 640×352. To improve performance further, the decoder software was profiled and its most demanding algorithms were ported to the embedded DSP core, which sped those algorithms up by 25% to 60%. However, because of the design of the hardware platform and the structure of the reference decoder, the time spent on the communication link between the main processing unit and the DSP core turned out to be high, leaving the overall gains insignificant. We therefore conclude that the reference decoder should be restructured to use this communication link as infrequently as possible so that the DSP core yields an overall performance gain.
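    The conclusion about the ARM-DSP link can be illustrated with a small back-of-the-envelope model. The sketch below is not from the thesis: the routine name, the timings, and LINK_COST_MS are hypothetical values chosen only to show why per-call offloading can lose a 25-60% per-routine speedup to communication overhead, while batching the link transactions recovers it.

    /*
     * Hypothetical sketch (not from the thesis): is it worth offloading one
     * decoder routine to the DSP core?  Offloading pays off only when the
     * computation time saved exceeds the cost of crossing the ARM<->DSP
     * communication link, which is why the decoder should batch its work and
     * cross the link as rarely as possible.
     */
    #include <stdio.h>

    /* All times in milliseconds; the numbers are illustrative, not measured. */
    typedef struct {
        const char *name;
        double t_arm;   /* per-call time of the routine on the ARM core */
        double t_dsp;   /* per-call time of the same routine on the DSP */
        int    calls;   /* calls per decoded frame                      */
    } routine_t;

    /* Assumed fixed round-trip cost of one message over the ARM<->DSP link. */
    #define LINK_COST_MS 0.8

    static double offload_gain_ms(const routine_t *r, int batched)
    {
        /* Batched: one link transaction per frame; otherwise one per call. */
        int transactions = batched ? 1 : r->calls;
        double saved = (r->t_arm - r->t_dsp) * r->calls;
        return saved - LINK_COST_MS * transactions;
    }

    int main(void)
    {
        routine_t deblock = { "deblocking filter", 2.0, 1.4, 8 };

        printf("%s, per-call link use: %+.2f ms per frame\n",
               deblock.name, offload_gain_ms(&deblock, 0));
        printf("%s, batched link use:  %+.2f ms per frame\n",
               deblock.name, offload_gain_ms(&deblock, 1));
        return 0;
    }

    With these assumed numbers the per-call variant actually loses time to the link, whereas a single batched transaction per frame keeps most of the DSP speedup, mirroring the restructuring the thesis recommends.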

    Design of a low-power CMOS image micro-sensor for wireless sensor networks (Conception d'un micro capteur d'image CMOS à faible consommation d'énergie pour les réseaux de capteurs sans fil)

    This research aims to develop a low-power vision system for Wireless Sensor Networks (WSNs). The imager in question must meet the specific requirements of multimedia applications for Wireless Vision Sensor Networks: by its nature, a multimedia application imposes intensive computation on the node and a considerable number of packets to exchange over the radio link, and therefore consumes a lot of energy. An obvious way to reduce the amount of transmitted data, and thus extend the network lifetime, is to compress images before sending them from the WSN nodes. However, the severe resource constraints of the nodes make standard compression algorithms (JPEG, JPEG2000, MJPEG, MPEG, H.264, etc.) impractical to run. The vision system to be designed must therefore integrate image compression techniques that are both effective and of low complexity, with particular attention paid to the trade-off between energy consumption and quality of service (QoS).
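    As a sketch of what an "effective but low-complexity" in-node compression step can look like, the example below pairs coarse quantization with run-length encoding. It is an assumed illustration, not the scheme designed in this work: a few shifts and comparisons per pixel keep the processing cost small, while the radio has far fewer bytes to transmit.

    /*
     * Assumed illustration of low-complexity in-node image compression for a
     * WSN vision node (not the compression scheme proposed in this thesis):
     * coarse quantization followed by run-length encoding, using only shifts,
     * additions and comparisons.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define QUANT_SHIFT 4   /* 8-bit pixels -> 16 grey levels (assumed setting) */

    /* Quantize then run-length encode; output is (value, run-length) byte
     * pairs.  Returns the number of bytes written; out must hold 2*len bytes
     * in the worst case. */
    static size_t rle_compress(const uint8_t *in, size_t len, uint8_t *out)
    {
        size_t n = 0, i = 0;
        while (i < len) {
            uint8_t v = in[i] >> QUANT_SHIFT;
            uint8_t run = 1;
            while (i + run < len && run < 255 && (in[i + run] >> QUANT_SHIFT) == v)
                run++;
            out[n++] = v;
            out[n++] = run;
            i += run;
        }
        return n;
    }

    int main(void)
    {
        enum { W = 32, H = 32 };
        uint8_t frame[W * H];
        uint8_t packet[2 * W * H];

        /* Synthetic frame: a smooth vertical gradient standing in for real
         * sensor data, which is typically piecewise smooth. */
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                frame[y * W + x] = (uint8_t)(y * 255 / (H - 1));

        size_t bytes = rle_compress(frame, sizeof frame, packet);
        printf("raw: %u bytes, compressed: %u bytes\n",
               (unsigned)sizeof frame, (unsigned)bytes);
        return 0;
    }

    On this smooth synthetic frame the payload shrinks from 1024 bytes to a few tens of bytes; real images compress less, which is exactly where the "energy consumption versus QoS" trade-off discussed above comes in.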