
    Optimized mobile thin clients through a MPEG-4 BiFS semantic remote display framework

    According to the thin client computing principle, the user interface is physically separated from the application logic: only a viewer component runs on the client device, rendering the display updates received from the distant application server and capturing the user interaction. Existing remote display frameworks are not optimized to encode the complex scenes of modern applications, which are composed of objects with very diverse graphical characteristics. To tackle this challenge, we propose to transfer to the client, in addition to the binary encoded objects, semantic information about the characteristics of each object. Through this semantic knowledge, the client can react autonomously to user input and does not have to wait for a display update from the server. Because this reduces the interaction latency and mitigates the bursty remote display traffic pattern, the presented framework is of particular interest in a wireless context, where bandwidth is limited and expensive. In this paper, we describe a generic architecture for a semantic remote display framework. Furthermore, we have developed a prototype that uses the MPEG-4 Binary Format for Scenes (BiFS) to convey the semantic information to the client. We experimentally compare the bandwidth consumption of MPEG-4 BiFS with existing, non-semantic, remote display frameworks. In a text editing scenario, we achieve an average reduction of 23% in the data peaks observed in remote display protocol traffic.
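    The core idea of the abstract, a client that uses per-object semantic hints to handle input locally instead of round-tripping to the server, can be sketched as follows. This is an illustrative toy, not the paper's architecture: the `SceneObject` fields, the `editable` flag, and the return strings are all assumptions.

```python
# Hypothetical sketch: a thin client that uses per-object semantic
# metadata to react to user input locally instead of waiting for a
# server-side display update. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class SceneObject:
    object_id: int
    kind: str          # e.g. "text", "video" (assumed taxonomy)
    editable: bool     # semantic hint: may the client update it locally?
    content: str = ""

class SemanticThinClient:
    def __init__(self):
        self.scene = {}

    def add_object(self, obj):
        self.scene[obj.object_id] = obj

    def on_keypress(self, object_id, char):
        obj = self.scene[object_id]
        if obj.editable:
            # Semantic knowledge lets the client echo the keystroke
            # immediately, shaving one round trip off the latency.
            obj.content += char
            return "handled locally"
        # No usable semantics: fall back to the classic remote path.
        return "forwarded to server"

client = SemanticThinClient()
client.add_object(SceneObject(1, "text", editable=True))
client.add_object(SceneObject(2, "video", editable=False))
print(client.on_keypress(1, "a"))  # handled locally
print(client.on_keypress(2, "x"))  # forwarded to server
```

    Only the "forwarded" branch would generate uplink traffic, which is one way to read the paper's mitigation of bursty remote display traffic.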

    Web browsing optimization over 2.5G and 3G: end-to-end mechanisms vs. usage of performance enhancing proxies

    Published version on Wiley's platform: https://onlinelibrary.wiley.com/doi/abs/10.1002/wcm.456
    2.5 Generation (2.5G) and Third Generation (3G) cellular wireless networks allow mobile Internet access with bearers specifically designed for data communications. However, Internet protocols under-utilize wireless wide area network (WWAN) link resources, mainly due to large round trip times (RTTs) and request-reply protocol patterns. Web browsing is a popular service that suffers significant performance degradation over 2.5G and 3G. In this paper, we review and compare the two main approaches for improving web browsing performance over wireless links: (i) using adequate end-to-end parameters and mechanisms and (ii) interposing a performance enhancing proxy (PEP) between the wireless and wired parts. We conclude that PEPs are currently the only feasible way of significantly optimizing web browsing behavior over 2.5G and 3G. In addition, we evaluate the two main current commercial PEPs over live general packet radio service (GPRS) and universal mobile telecommunications system (UMTS) networks. The results show that PEPs can lead to near-ideal web browsing performance in certain scenarios.
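    The RTT argument in this abstract is easy to quantify with a back-of-the-envelope model: if every embedded object costs one request-reply round trip, a serial fetch pays RTT per object, while a PEP that batches or pipelines the requests pays it roughly once. The numbers and the `page_load_time` model below are illustrative assumptions, not measurements from the paper.

```python
# Toy model: page load time = round-trip cost + raw transfer time.
# A PEP that fetches embedded objects in parallel on behalf of the
# client collapses many round trips into few.
def page_load_time(num_objects, rtt_s, obj_size_kb, bandwidth_kbps,
                   parallel_fetches=1):
    transfer = num_objects * obj_size_kb * 8 / bandwidth_kbps
    round_trips = -(-num_objects // parallel_fetches)  # ceil division
    return round_trips * rtt_s + transfer

# GPRS-like numbers (illustrative): 20 objects of 10 kB, 700 ms RTT,
# 40 kbit/s of usable bandwidth.
serial = page_load_time(20, rtt_s=0.7, obj_size_kb=10, bandwidth_kbps=40)
proxied = page_load_time(20, rtt_s=0.7, obj_size_kb=10, bandwidth_kbps=40,
                         parallel_fetches=20)  # PEP batches the requests
print(round(serial, 1))   # 54.0 seconds, dominated by round trips
print(round(proxied, 1))  # 40.7 seconds, dominated by transfer
```

    The model makes the paper's point visible: with large WWAN RTTs, most of the achievable gain comes from removing request-reply serialization, which is exactly what a PEP does.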

    Survey of Transportation of Adaptive Multimedia Streaming service in Internet

    The World Wide Web is one of the greatest boons of the modern era. Using the Internet globally, anywhere and anytime, users can access live and on-demand video services. Streaming media systems such as YouTube, Netflix, and Apple Music reign over the multimedia world, enjoying wide popularity among users. A key quality concern for video streaming applications over the Internet is the Quality of Experience (QoE) that users perceive. Because changing network conditions, bit rate, and initial delay can freeze the multimedia stream or deliver poor video quality to end users, researchers across industry and academia have explored HTTP Adaptive Streaming (HAS), which splits the video content into multiple segments and offers them to clients at varying qualities. The video player at the client side plays a vital role in buffer management and in choosing the appropriate bit rate for each segment of video to be transmitted. A video transmitted at too high a bit rate pauses in between, whereas a lower bit rate video lacks quality, so a tradeoff between them is required. The need of the hour is to adaptively vary the bit rate and video quality to match the transmission media conditions. The main aim of this paper is to give an overview of state-of-the-art HAS techniques across the multimedia and networking domains. A detailed survey was conducted to analyze challenges and solutions in adaptive streaming algorithms, QoE, network protocols, buffering, etc. It also focuses on various QoE influence factors under fluctuating network conditions, which are often ignored in present HAS methodologies. Furthermore, this survey will give network and multimedia researchers a fair understanding of the latest developments in adaptive streaming and the improvements that can be incorporated in future work.
    Abdullah, M.T.A.; Lloret, J.; Canovas Solbes, A.; García-García, L. (2017). Survey of Transportation of Adaptive Multimedia Streaming service in Internet. Network Protocols and Algorithms 9(1-2):85-125. doi:10.5296/npa.v9i1-2.12412
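    The bitrate-selection tradeoff the survey describes can be sketched with the simplest family of HAS adaptation rules, throughput-based selection. This is a generic illustration, not an algorithm from the paper; the quality ladder and the safety factor are assumed values.

```python
# Illustrative sketch: throughput-based bitrate selection as used in
# HTTP Adaptive Streaming players. The ladder and margin are assumptions.
BITRATES_KBPS = [300, 750, 1500, 3000, 6000]  # assumed quality ladder

def select_bitrate(measured_throughput_kbps, safety_factor=0.8):
    """Pick the highest ladder rung that fits within a safety margin
    of the measured throughput; fall back to the lowest rung."""
    budget = measured_throughput_kbps * safety_factor
    feasible = [r for r in BITRATES_KBPS if r <= budget]
    return max(feasible) if feasible else BITRATES_KBPS[0]

print(select_bitrate(2000))  # 1500: highest rung under 0.8 * 2000
print(select_bitrate(100))   # 300: nothing fits, take the lowest rung
```

    The safety factor encodes the tradeoff the abstract mentions: aiming too close to the measured throughput risks stalls, while aiming too low sacrifices quality.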

    Wireless Multimedia Communications and Networking Based on JPEG 2000


    Decentralization of multimedia content in a heterogeneous environment

    The aim of this study has been the decentralization of multimedia content in a heterogeneous environment. The environment consisted of the research networks connecting the European Organization for Nuclear Research and the Finnish University and Research Network. The European Organization for Nuclear Research produces multimedia content which can be used as study material all over the world. The Web University pilot at the European Organization for Nuclear Research has been developing a multimedia content delivery service for years. Delivering the multimedia content requires plenty of capacity from the network infrastructure. Different content can place different demands on the network. In a heterogeneous environment like the Internet, fulfilling all the demands can be a problem. Several methods exist to improve the situation. Decentralization of the content is one of the most popular solutions; mirroring and caching are its main methods. Recently developed content delivery networks use both of these techniques to satisfy the demands of the content. The practical application consisted of measuring the network connection between the multimedia server at the European Organization for Nuclear Research and the Finnish University and Research Network, and of planning and building a decentralization system for the multimedia content. After the measurements, it became clear that there is no need to decentralize the multimedia content for users who are able to use the Finnish University and Research Network: usage could double and there would still be no capacity problems. However, the European Organization for Nuclear Research routes all traffic that comes from outside research networks through a gateway in the USA. This affects every connection that is made from Finland: users are not able to use the international connection offered by the Finnish University and Research Network. For these users, I designed and built a simple, modular and portable decentralization system.

    Machine Learning for Multimedia Communications

    Machine learning is revolutionizing the way multimedia information is processed and transmitted to users. After intensive and powerful training, some impressive efficiency/accuracy improvements have been made all over the transmission pipeline. For example, the high model capacity of learning-based architectures enables us to model image and video behavior so accurately that tremendous compression gains can be achieved. Similarly, error concealment, streaming strategies, and even user perception modeling have widely benefited from recent learning-oriented developments. However, learning-based algorithms often imply drastic changes to the way data are represented or consumed, meaning that the overall pipeline can be affected even though only a subpart of it is optimized. In this paper, we review the recent major advances that have been proposed all across the transmission chain, and we discuss their potential impact and the research challenges that they raise.

    Forward Error Correction applied to JPEG-XS codestreams

    JPEG-XS offers low complexity image compression for applications with constrained but reasonable bit-rate, and low latency. Our paper explores the deployment of JPEG-XS on lossy packet networks. To preserve low latency, Forward Error Correction (FEC) is envisioned as the protection mechanism of interest. Although the JPEG-XS codestream is not scalable in essence, we observe that the loss of a codestream fraction impacts the decoded image quality differently, depending on whether this fraction corresponds to codestream headers, to coefficient significance information, or to low/high frequency data, respectively. Hence, we propose a rate-distortion optimal unequal error protection scheme that adapts the redundancy level of Reed-Solomon codes according to the rate of channel losses and the type of information protected by the code. Our experiments demonstrate that, at 5% loss rates, it reduces the Mean Squared Error by up to 92% and 65%, compared to a transmission without and with optimal but equal protection, respectively.
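    The unequal-error-protection idea can be illustrated with a greedy parity allocation: each additional Reed-Solomon parity packet goes to the codestream fraction whose extra protection buys the largest drop in expected distortion. This is a hedged sketch of the principle only; the fraction names, distortion weights, code parameters, and the greedy rule itself are assumptions, not the paper's rate-distortion optimization.

```python
# Sketch of unequal error protection: spread a fixed parity budget
# across codestream fractions greedily by expected-distortion gain.
# All numbers below are illustrative, not from the paper.
from math import comb

def loss_prob(k, n, p):
    """Probability that an (n, k) MDS code fails to recover its k data
    packets, i.e. more than n - k of the n packets are lost (rate p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

def allocate_parity(fractions, budget, k=10, p=0.05):
    """fractions: {name: distortion if that fraction is lost};
    budget: total number of parity packets to distribute."""
    parity = {name: 0 for name in fractions}
    for _ in range(budget):
        def gain(name):
            m = parity[name]
            # Expected-distortion reduction of one more parity packet.
            return fractions[name] * (loss_prob(k, k + m, p)
                                      - loss_prob(k, k + m + 1, p))
        best = max(parity, key=gain)
        parity[best] += 1
    return parity

# Headers hurt most when lost, high-frequency data least (assumed weights).
fractions = {"headers": 100.0, "significance": 40.0,
             "low_freq": 20.0, "high_freq": 5.0}
print(allocate_parity(fractions, budget=8))
```

    As expected, the allocation skews toward the fractions whose loss costs the most distortion, which is the qualitative behavior the paper's scheme formalizes.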

    Semantic multimedia remote display for mobile thin clients

    Get PDF
    Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user interactivity handling. The experimental setup considers the Linux X windows system and BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft RDP), considering text editing and WWW browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB or SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user event management expressed by network round-trip time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, larger than in the RDP case but reduced by a factor of 1.5 with respect to VNC-HEXTILE.