    Energy-efficient bandwidth allocation for multiuser scalable video streaming over WLAN

    We consider the problem of packet scheduling for the transmission of multiple video streams over a wireless local area network (WLAN). A cross-layer optimization framework is proposed to minimize the wireless transceiver energy consumption while meeting the user-required visual quality constraints. The framework relies on the IEEE 802.11 standard and on the embedded bitstream structure of the scalable video coding scheme. It integrates an application-level video quality metric as the QoS constraint (instead of a communication-layer quality metric) with energy consumption optimization through link-layer scaling and sleeping. Both total-energy minimization and min-max energy optimization strategies are discussed. Simulation results demonstrate significant energy gains compared to state-of-the-art approaches.
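    The abstract leaves the optimization formulation implicit; the sketch below is only a rough illustration of the underlying idea of meeting a per-user quality constraint at minimum transceiver energy. The names (Option, min_energy_allocation) and the per-user option model are assumptions for illustration, not the paper's algorithm.

    # Illustrative sketch only: not the paper's scheduler.
    from dataclasses import dataclass

    @dataclass
    class Option:
        energy_mj: float   # transceiver energy over the scheduling window
        quality_db: float  # decoded video quality reached with this option

    def min_energy_allocation(users, q_min):
        """Per user, pick the cheapest link-layer option (a rate-scaling level
        or a sleep pattern) whose decoded quality meets that user's target.
        A min-max variant would iterate jointly over users, since they share
        airtime on the WLAN."""
        choice = {}
        for u, options in users.items():
            feasible = [o for o in options if o.quality_db >= q_min[u]]
            if not feasible:
                raise ValueError(f"user {u}: quality target infeasible")
            choice[u] = min(feasible, key=lambda o: o.energy_mj)
        return choice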

    Generalized Inpainting Method for Hyperspectral Image Acquisition

    A recently designed hyperspectral imaging device enables multiplexed acquisition of an entire data volume in a single snapshot thanks to monolithically-integrated spectral filters. Such an agile imaging technique comes at the cost of a reduced spatial resolution and the need for a demosaicing procedure on its interleaved data. In this work, we address both issues and propose an approach inspired by recent developments in compressed sensing and analysis sparse models. We formulate our superresolution and demosaicing task as a 3-D generalized inpainting problem. Interestingly, the target spatial resolution can be adjusted to mitigate the compression level of our sensing. The reconstruction procedure uses a fast greedy method called Pseudo-inverse IHT. We also show in simulations that a random arrangement of the spectral filters on the sensor is preferable to a regular mosaic layout, as it improves the quality of the reconstruction. The efficiency of our technique is demonstrated through numerical experiments on both synthetic and real data as acquired by the snapshot imager. Keywords: hyperspectral, inpainting, iterative hard thresholding, sparse models, CMOS, Fabry-Pérot.
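    For readers unfamiliar with iterative hard thresholding, the following is a minimal sketch of plain IHT applied to an inpainting (masking) operator. It is not the paper's Pseudo-inverse IHT, and it uses canonical sparsity rather than the analysis sparse model; the function name and parameters are invented here.

    # Plain IHT for an inpainting operator (simplified stand-in, illustration only).
    import numpy as np

    def iht_inpaint(y, mask, s, n_iter=200):
        """Recover a flattened hyperspectral volume x from masked samples.

        y    : observed samples, y = x[mask] (interleaved filter readings)
        mask : boolean array over the full volume (True where sampled)
        s    : number of entries kept after each hard-thresholding step
        """
        x = np.zeros(mask.shape)
        for _ in range(n_iter):
            residual = np.zeros_like(x)
            residual[mask] = y - x[mask]      # gradient step on the data term
            x = x + residual
            # keep the s largest-magnitude entries (canonical sparsity;
            # the paper uses an analysis sparse model instead)
            thresh = np.partition(np.abs(x).ravel(), -s)[-s]
            x[np.abs(x) < thresh] = 0.0
        return x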

    Migrating real-time depth image-based rendering from traditional to next-gen GPGPU

    View Synthesis Tool for VR Immersive Video

    This chapter addresses the view synthesis of natural scenes in virtual reality (VR) using depth image-based rendering (DIBR). This method reaches photorealistic results as it directly warps photos to obtain the output, avoiding the need to photograph every possible viewpoint or to make a 3D reconstruction of a scene followed by a ray-tracing rendering. The DIBR approach and its frequently encountered challenges (disocclusion and ghosting artifacts, multi-view blending, handling of non-Lambertian objects) are described. Such technology finds applications in VR immersive displays and holography. Finally, a comprehensive manual of the Reference View Synthesis software (RVS), an open-source tool tested on open datasets and recognized by the MPEG-I standardization activities (where "I" refers to "immersive"), is provided for hands-on practice.
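    As a rough illustration of the warping step at the heart of DIBR, the sketch below forward-projects source pixels into a target camera using per-pixel depth. RVS implements a far more complete pipeline (blending, inpainting, non-Lambertian handling); all names and parameters here are illustrative.

    # Minimal forward warp for DIBR (nearest-pixel splat, no z-buffering).
    import numpy as np

    def warp_view(color, depth, K_src, K_dst, R, t):
        """color: (H, W, 3) source image; depth: (H, W) metric depth.
        K_src, K_dst: 3x3 intrinsics; R, t: rotation/translation src -> dst."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
        # back-project to 3-D in the source camera, then move to the target camera
        pts = np.linalg.inv(K_src) @ pix * depth.reshape(1, -1)
        pts = R @ pts + t.reshape(3, 1)
        proj = K_dst @ pts
        uu = np.round(proj[0] / proj[2]).astype(int)
        vv = np.round(proj[1] / proj[2]).astype(int)
        out = np.zeros_like(color)
        ok = (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H) & (proj[2] > 0)
        out[vv[ok], uu[ok]] = color.reshape(-1, 3)[ok]   # nearest-pixel splat
        return out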

    A model for adapting 3D graphics based on scalable coding, real-time simplification and remote rendering

    Most current multiplayer 3D games can only be played on dedicated platforms, requiring specifically designed content and communication over a predefined network. To overcome these limitations, the OLGA (On-Line GAming) consortium has devised a framework to develop distributed, multiplayer 3D games. Scalability at the level of content, platforms and networks is exploited to achieve the best trade-offs between complexity and quality.

    Tele-Robotics VR with Holographic Vision in Immersive Video

    We present a first-of-its-kind end-to-end tele-robotic VR system where the user operates a robot arm remotely while being virtually immersed into the scene through force feedback and holographic vision. In contrast to stereoscopic head-mounted displays that only provide depth perception to the user, the holographic vision device projects a light field, additionally allowing the user to correctly accommodate his/her eyes to the perceived depth of the scene's objects. The highly improved immersive user experience results in less fatigue in the tele-operator's daily work, creating safer and/or longer working conditions. The core technology relies on recent advances in immersive video coding for audio-visual transmission developed within the MPEG standardization committee. Virtual viewpoints are synthesized for the tele-operator's viewing direction from a couple of fixed colour-and-depth video feeds. Besides the display hardware and its GPU-enabled view synthesis driver, the biggest challenge lies in obtaining high-quality and reliable depth images from low-cost depth sensing devices. Specialized depth refinement tools have been developed to run in real time at zero delay within the end-to-end tele-robotic immersive video pipeline, which must by its nature remain interactive. The various modules work asynchronously and efficiently at their own pace, with the acquisition devices typically being limited to 30 frames per second (fps), while the holographic headset updates its projected light field at up to 240 fps. Such a modular approach ensures high genericity over a wide range of free-navigation VR/XR applications, beyond the tele-robotic one presented in this paper.
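    To illustrate how modules running at different rates can be decoupled, the toy sketch below shows a single-slot "latest frame" mailbox that lets a fast renderer always read the newest capture without blocking on acquisition; the class and method names are invented here and this is not the actual pipeline's code.

    # Toy decoupling of a 30 fps capture thread from an up-to-240 fps renderer.
    import threading

    class LatestFrame:
        """Single-slot mailbox: writers overwrite, readers get the newest item."""
        def __init__(self):
            self._lock = threading.Lock()
            self._frame = None

        def publish(self, frame):          # called by the capture thread
            with self._lock:
                self._frame = frame

        def snapshot(self):                # called by the renderer each refresh
            with self._lock:
                return self._frame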

    New visual coding exploration in MPEG: Super-MultiView and free navigation in free viewpoint TV

    ISO/IEC MPEG and ITU-T VCEG have recently jointly issued a new multiview video compression standard, called 3D-HEVC, which reaches unprecedented compression performance for linear, dense camera arrangements. In view of supporting future high-quality auto-stereoscopic 3D displays and free-navigation virtual/augmented reality applications with sparse, arbitrarily arranged camera setups, innovative depth estimation and virtual view synthesis techniques with global optimizations over all camera views should be developed. Preliminary studies in response to the MPEG-FTV (Free viewpoint TV) Call for Evidence suggest these targets are within reach, with at least 6% bitrate gains over 3D-HEVC technology.

    DSP Processors for Video Applications

    Method for operating a real-time multimedia terminal in a QoS manner
