154 research outputs found

    Survey of Transportation of Adaptive Multimedia Streaming service in Internet

    Full text link
    The World Wide Web is one of the greatest boons of the technological advancement of the modern era. Using the Internet globally, anywhere and at any time, users can access live and on-demand video services. Streaming media systems such as YouTube, Netflix, and Apple Music dominate the multimedia world and enjoy ever-growing popularity among users. A key quality concern for video streaming applications over the Internet is the Quality of Experience (QoE) that users perceive. Because changing network conditions, bit rate and initial delay cause the multimedia stream to freeze or deliver poor video quality to end users, researchers across industry and academia have explored HTTP Adaptive Streaming (HAS), which splits the video content into multiple segments and offers them to clients at varying qualities. The video player at the client side plays a vital role in buffer management and in choosing the appropriate bit rate for each segment of video to be transmitted. A video transmitted at too high a bit rate pauses in between, whereas a video at too low a bit rate lacks quality, so a trade-off between the two is required. The need of the hour is to adaptively vary the bit rate and video quality to match the conditions of the transmission medium. The main aim of this paper is to give an overview of state-of-the-art HAS techniques across the multimedia and networking domains. A detailed survey was conducted to analyze challenges and solutions in adaptive streaming algorithms, QoE, network protocols, buffering, and related areas. It also focuses on the QoE influence factors under fluctuating network conditions, which are often ignored in present HAS methodologies. Furthermore, this survey gives network and multimedia researchers a fair understanding of the latest developments in adaptive streaming and the improvements that can be incorporated in future work.
    Abdullah, MTA.; Lloret, J.; Canovas Solbes, A.; García-García, L. (2017). Survey of Transportation of Adaptive Multimedia Streaming service in Internet. Network Protocols and Algorithms. 9(1-2):85-125. doi:10.5296/npa.v9i1-2.12412
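
    To make the bit-rate trade-off described above concrete, the following is a minimal sketch of a throughput-based adaptation heuristic of the kind HAS clients use: pick the highest representation the measured throughput can sustain, and fall back to the lowest one when the playback buffer runs low. The bitrate ladder, safety margin and buffer threshold are illustrative assumptions, not values taken from the survey.

```python
# Minimal sketch of a throughput-based HTTP Adaptive Streaming (HAS) heuristic.
# The bitrate ladder, safety margin and buffer threshold below are
# illustrative assumptions, not values from the survey.

BITRATES_KBPS = [235, 750, 1750, 3000, 5800]   # example representation ladder

def select_bitrate(measured_throughput_kbps: float,
                   buffer_level_s: float,
                   min_buffer_s: float = 10.0,
                   safety: float = 0.8) -> int:
    """Pick the highest bitrate the measured throughput can sustain.

    If the playback buffer is running low, drop to the lowest rate to
    avoid a stall (the 'pause' the survey associates with too-high rates).
    """
    if buffer_level_s < min_buffer_s:
        return BITRATES_KBPS[0]
    budget = measured_throughput_kbps * safety
    feasible = [r for r in BITRATES_KBPS if r <= budget]
    return max(feasible) if feasible else BITRATES_KBPS[0]

if __name__ == "__main__":
    # e.g. 4 Mbit/s measured and 25 s of buffered video -> choose the 3000 kbps tier
    print(select_bitrate(4000, 25))
```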

    ATOM : a distributed system for video retrieval via ATM networks

    Get PDF
    The convergence of high speed networks, powerful personal computer processors and improved storage technology has led to the development of video-on-demand services to the desktop that provide interactive controls and deliver Client-selected video information on a Client-specified schedule. This dissertation presents the design of a video-on-demand system for Asynchronous Transfer Mode (ATM) networks, incorporating an optimised topology for the nodes in the system and an architecture for Quality of Service (QoS). The system is called ATOM, which stands for Asynchronous Transfer Mode Objects. Real-time video playback over a network consumes large bandwidth and requires strict bounds on delay and error in order to satisfy the visual and auditory needs of the user. Streamed video is a fundamentally different type of traffic from conventional IP (Internet Protocol) data, since files are viewed in real time rather than downloaded and then viewed; the streamed data must arrive at the Client decoder when needed or it loses its interactive value. Characteristics of multimedia data are investigated, including the use of compression to reduce the excessive bit rates and storage requirements of digital video, and the suitability of MPEG-1 for video-on-demand is presented. Having considered the bandwidth, delay and error requirements of real-time video, the next step in designing the system is to evaluate current models of video-on-demand. The distributed nature of four such models is considered, focusing on how Clients discover Servers and locate videos. This evaluation eliminates a centralized approach in which Servers have no logical or physical connection to any other Servers in the network, and introduces the concept of a selection strategy to find alternative Servers when Servers are fully loaded. During this investigation, it becomes clear that another entity (called a Broker) could provide a central repository for Server information: Clients have logical access to all videos on every Server simply by connecting to a Broker. The ATOM Model for distributed video-on-demand is then presented by way of a diagram of the topology showing the interconnection of Servers, Brokers and Clients; a description of each node in the system; a list of the connectivity rules; a description of the protocol; a description of the Server selection strategy; and the protocol followed if a Broker fails. A sample network is provided with an example of video selection, and design issues are raised and solved, including how nodes discover each other, a justification for using a mesh topology for the Broker connections, how Connection Admission Control (CAC) is achieved, how customer billing is achieved, and how information security is maintained. A calculation of the number of Servers and Brokers required to service a particular number of Clients is presented. The advantages of ATOM are then described. The underlying distributed connectivity is abstracted away from the Client. Redundant Server/Broker connections are eliminated and the total number of connections in the system is minimized by the rule stating that Clients and Servers may only connect to one Broker at a time; this reduces the total number of Switched Virtual Circuits (SVCs), which are a performance hindrance in ATM. ATOM can be easily scaled by adding more Servers, which increases the total system capacity in terms of storage and bandwidth. In order to transport video satisfactorily, a guaranteed end-to-end Quality of Service architecture must be in place.
The design methodology for such an architecture is investigated, starting with a review of current QoS architectures in the literature which highlights important definitions including a flow, a service contract and flow management. A flow is a single media source which traverses resource modules between Server and Client; the concept of a flow is important because it enables the identification of the areas requiring consideration when designing a QoS architecture. It is shown that ATOM adheres to the principles motivating the design of a QoS architecture, namely the Integration, Separation and Transparency principles. The issue of mapping human requirements to network QoS parameters is investigated and the action of a QoS framework is introduced, including several possible causes of QoS degradation. The design of the ATOM Quality of Service Architecture (AQOSA) is then presented. AQOSA consists of 11 modules which interact to provide end-to-end QoS guarantees for each stream. Several important results arise from the design. It is shown that an intelligent choice of stored videos with respect to peak bandwidth can improve overall system capacity. The concept of disk striping over a disk array is introduced and a Data Placement Strategy is designed which eliminates disk hot spots (i.e. the overuse of some disks whilst others lie idle). A novel parameter (the B-P Ratio) is presented which can be used by the Server to predict future bursts from each video stream. The use of Traffic Shaping to decrease the load placed on the network by each stream is presented. Having investigated four algorithms for rewind and fast-forward in the literature, a rewind and fast-forward algorithm is presented; the method produces a significant decrease in bandwidth, and the resultant stream is nearly constant, reducing the chance that the stream will add to network congestion. The C++ classes of the Server, Broker and Client are described, emphasizing the interaction between classes. The use of ATOM in a Virtual Private Network and in a multimedia teaching laboratory is considered. Conclusions and recommendations for future work are presented. It is concluded that digital video applications require high bandwidth, low error, low delay networks; that a video-on-demand system supporting large Client volumes must be distributed, not centralized; that control and operation (transport) must be separated; that the number of ATM Switched Virtual Circuits (SVCs) must be minimized; that the increased number of connections caused by the Broker mesh is justified by the distributed information gain; and that a Quality of Service solution must address end-to-end issues. It is recommended that a web front-end for Brokers be developed, that the system be tested in a wide-area ATM network, that the Broker protocol be tested by forcing the failure of a Broker, and that a proprietary file format for disk striping be implemented.
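
    As a concrete illustration of the Broker idea described above, the sketch below shows one way a central Broker could select a Server for a requested video while respecting a simple admission limit. The class names, fields, admission limit and selection rule are hypothetical illustrations chosen for this summary, not the dissertation's actual C++ classes or its selection strategy.

```python
# Minimal sketch of Broker-mediated server selection in the spirit of ATOM.
# Class names, fields and the admission limit are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    videos: set[str]
    active_streams: int = 0
    max_streams: int = 100          # simple CAC-style admission limit

    def can_admit(self) -> bool:
        return self.active_streams < self.max_streams

@dataclass
class Broker:
    """Central repository of Server information; Clients query it for videos."""
    servers: list[Server] = field(default_factory=list)

    def select_server(self, video_id: str) -> Server | None:
        # Among servers holding the video and able to admit another stream,
        # pick the least-loaded one; return None if all holders are full.
        candidates = [s for s in self.servers
                      if video_id in s.videos and s.can_admit()]
        if not candidates:
            return None
        return min(candidates, key=lambda s: s.active_streams)

if __name__ == "__main__":
    b = Broker([Server("srv-a", {"film-1", "film-2"}, active_streams=40),
                Server("srv-b", {"film-1"}, active_streams=5)])
    chosen = b.select_server("film-1")
    print(chosen.name if chosen else "no server available")   # -> srv-b
```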

    A basic web-based distance education model

    Get PDF
    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2005. Includes bibliographical references (leaves: 147). Text in English; abstract in Turkish and English. xv, 201 leaves.
    In recent years, the rapid growth of the Web and of multimedia technologies has pushed Computer-Based Educational Technology towards the Web. In the leading universities of developed countries, studies on Web-Based Education have started and continue to grow, and in the last few years the leading universities of Turkey have also taken a strong interest in Web-Based Education and have begun restructuring themselves accordingly. The goal of this study is to design a basic model to be used by a university aiming to offer web-based distance education. To achieve this, a systems approach is used to propose a model comprising three subsystems working in coordination with each other: system analysis, system design, and evaluation & control. One limitation of this study is that, since preparing a lesson or programme according to this model was not foreseen in the thesis, the effectiveness evaluations suggested in the evaluation & control subsystem could not be carried out; it is recommended that such an evaluation be performed in a further study, by preparing a lesson or programme according to this model, in order to reveal the effectiveness of web-based education. In addition, a survey was conducted in Turkey among universities that either offer web-based education or are interested in work in this field. The aim of this survey was to analyse, from a system-design point of view, the studies carried out at these universities and to obtain a picture of the existing situation. The questions were prepared by taking into consideration the three stages of the system design subsystem, i.e. administrative design, educational design, and technological design. The results of this survey are intended to shed light for institutions newly entering this field; indeed, each stage of this subsystem is a survey topic in itself and should be researched separately in other studies. Furthermore, this thesis is intended as reference material for individuals interested in distance education and web-based distance education and for people newly involved in the area, and its sections are prepared to contain the relevant basic information. Nevertheless, since most of the information regarding system design was prepared without taking disabled people into consideration, that information is not complete; the delivery of web-based education to disabled people, especially deaf, hard-of-hearing, speech-impaired, and blind students, has to be investigated in another study. Finally, the model proposed in this thesis for Web-Based Distance Education, being a basic and conceptual model, has a flexible structure; i.e., it is suitable for all institutions and establishments intending to offer web-based education. What is important is to exploit the potential resources within the institution through the required systematic approach.

    Economically sustainable public security and emergency network exploiting a broadband communications satellite

    Get PDF
    The research contributes to work on the Rapid Deployment of a National Public Security and Emergency Communications Network using Communications Satellite Broadband. Although studies of public security communication networks have examined the use of communications satellites as an integral part of the communication infrastructure, there has been no in-depth design analysis of an optimised regional broadband communications satellite in relation to an envisaged service coverage area with little or no terrestrial last-mile telecommunications infrastructure for the delivery of satellite solutions, applications and services. The research therefore provides a case study of a Nigerian Public Safety Security Communications pilot project deployed in regions of the African continent with inadequate terrestrial last-mile infrastructure, which consequently require a robust regional communications satellite complemented with variants of terrestrial wireless technologies to bridge the digital hiatus as a short- and medium-term measure, apart from other strategic needs. The research addresses the pivotal role of a secure, integrated public safety communications network for security agencies and emergency service organisations, with its potential to foster timely and efficient information sharing across their operations, including during emergency and crisis management. It also demonstrates a working model of how analogue spectrum meant for Push-to-Talk (PTT) services can be re-farmed and digitalised as a “dedicated” broadband-based public communications system. The network’s sustainability can be secured by using excess capacity for the strategic commercial telecommunication needs of the state and its citizens. The scarce spectrum has also been used for Nigeria’s Cashless Policy pilot project for financial and digital inclusion, effectively driving universal access goals, without exclusivity, in a continent that still remains the least wired in the world.

    Development of a MPEG-7 based multimedia content description and retrieval tool for internet protocol television (IPTV)

    Get PDF
    Search and retrieval of multimedia content from open platforms such as the Internet and IPTV platforms has long been hugely inefficient. A major cause of such poor results is the improper labeling or incomplete description of multimedia content by its creators: the lack of adequate description of video content, through proper annotation with the relevant metadata, leads to poor search and retrieval yields. The creation of such metadata is itself a problem, as there are various metadata description standards users could employ. On the other hand, there are tools such as FFprobe that can extract important features of video that can be used in search and retrieval, and the combination of such tools with metadata description standards could be a solution to the metadata problem. The Multimedia Content Description Interface (MPEG-7) is an example of a metadata description standard and has been adopted by TISPAN for the description of IPTV multimedia content. The MPEG-7 standard is rather complex, as it has over 1200 global Descriptors and Description Schemes that a user would have to know in order to use the technology. This complexity is a serious obstacle when we consider the multitude of amateur video producers: these content creators have no idea how to use the MPEG-7 standard to annotate their creations with metadata. Consequently, the IPTV platform is overloaded with content that has not been annotated in a standardized manner, making search and retrieval of the multimedia content (videos, in this instance) inefficient. It was therefore imperative to determine whether use of the MPEG-7 standard could be made much easier by creating an MPEG-7-enabled tool that allows any user to annotate video content without concerning themselves with how to use the standard. In attempting to develop a tool for metadata generation, it was incumbent on us to understand the issues associated with metadata generation for users wishing to create IPTV services. An extensive literature review on IPTV standardization was carried out to determine these issues and their proposed solutions. An experimental research approach was then taken to establish whether our proposed solution to users' lack of technical expertise with the MPEG-7 standard could solve the metadata generation problem. We developed a Multimedia Content Description and Management System (MCDMS) prototype which enabled us to describe video content by annotating it with 16 different metadata elements and storing the descriptions in XML MPEG-7 format. Incremental development and reuse-oriented development were used during the development phase of this research. The MCDMS underwent functional testing: smoke testing of the individual system components and big-bang integration testing of the combined components. Our results indicate that the more metadata is appended to a video as a description, the easier the video is to search for and retrieve. The MCDMS hides the complexity of MPEG-7 metadata creation from users; with the effortless creation of MPEG-7-based metadata it becomes easier to annotate videos, and consequently search and retrieval of video content becomes more efficient. It is important to note that the description of multimedia content remains a complex feat.
Even with the metadata elements laid out for users, other issues still affect metadata creation, such as polysemy and the semantic gap. However, providing a tool that performs the MPEG-7 standardizing behind the scenes when users upload a video makes the description of multimedia content in a standardized manner a much easier feat to achieve.
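
    As a rough illustration of the kind of behind-the-scenes metadata generation described above, the sketch below wraps user-supplied fields in MPEG-7-style XML so the annotator never touches the standard directly. The element names are a simplified approximation of MPEG-7 creation metadata, not the MCDMS prototype's actual schema or its 16 metadata elements.

```python
# Minimal sketch of wrapping user-supplied metadata in MPEG-7-style XML,
# hiding the standard from the annotator (the idea behind the MCDMS).
# The elements shown are a simplified illustration, not a conformant
# MPEG-7 description or the prototype's actual 16-element schema.

import xml.etree.ElementTree as ET

def describe_video(title: str, creator: str, abstract: str) -> str:
    root = ET.Element("Mpeg7")
    desc = ET.SubElement(root, "Description")
    content = ET.SubElement(desc, "MultimediaContent")
    creation = ET.SubElement(content, "CreationInformation")
    ET.SubElement(creation, "Title").text = title
    ET.SubElement(creation, "Creator").text = creator
    ET.SubElement(creation, "Abstract").text = abstract
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(describe_video("Holiday clip", "Amateur producer",
                         "Short beach video shot on a phone."))
```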

    Actas da 10ª Conferência sobre Redes de Computadores (Proceedings of the 10th Conference on Computer Networks)

    Get PDF
    Universidade do Minho; CCTC; Centro Algoritmi; Cisco Systems; IEEE Portugal Section

    Adaptivity of 3D web content in web-based virtual museums : a quality of service and quality of experience perspective

    Get PDF
    The 3D Web emerged as an agglomeration of technologies that brought the third dimension to the World Wide Web. Its forms span from systems with limited 3D capabilities to complete and complex Web-Based Virtual Worlds. The advent of the 3D Web provided great opportunities to museums by giving them an innovative medium to disseminate collections' information and associated interpretations in the form of digital artefacts and virtual reconstructions, leading to a revolutionary new way of curating, preserving and disseminating cultural heritage and thereby reaching a wider audience. This audience consumes 3D Web material on a myriad of devices (mobile devices, tablets and personal computers) and network regimes (WiFi, 4G, 3G, etc.). Choreographing and presenting 3D Web components across all these heterogeneous platforms and network regimes presents a significant challenge yet to be overcome. The challenge is to achieve a good user Quality of Experience (QoE) across all these platforms, which means that different levels of media fidelity may be appropriate; servers hosting those media therefore need to adapt to the capabilities of a wide range of networks and devices. To achieve this, the research contributes the design and implementation of Hannibal, an adaptive QoS- and QoE-aware engine that allows Web-Based Virtual Museums to deliver the best possible user experience across those platforms. In order to ensure effective adaptivity of 3D content, this research furthers the understanding of the 3D Web in terms of Quality of Service (QoS), through empirical investigations studying how 3D Web components perform and where their bottlenecks lie, and in terms of QoE, studying the subjective perception of fidelity of 3D digital heritage artefacts. The results of these experiments informed the design and implementation of Hannibal.
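
    As an illustration of the kind of QoS- and QoE-aware adaptation such an engine performs, the sketch below picks a level of detail for a 3D artefact from the device class and the measured bandwidth, taking the more restrictive of the two limits. The device classes, bandwidth thresholds and fidelity levels are assumptions made for this summary, not Hannibal's actual rules.

```python
# Minimal sketch of QoS/QoE-aware selection of a 3D-asset fidelity level,
# in the spirit of an adaptive delivery engine. Device classes, bandwidth
# thresholds and level-of-detail names are illustrative assumptions.

LODS = ["low", "medium", "high"]               # ascending fidelity

def choose_lod(device: str, bandwidth_mbps: float) -> str:
    """Cap fidelity by device capability, then by measured bandwidth."""
    device_cap = {"mobile": "medium", "tablet": "high", "desktop": "high"}
    cap = device_cap.get(device, "low")
    if bandwidth_mbps < 2:
        by_network = "low"
    elif bandwidth_mbps < 10:
        by_network = "medium"
    else:
        by_network = "high"
    # take the more restrictive of the two limits
    return min(cap, by_network, key=LODS.index)

if __name__ == "__main__":
    print(choose_lod("mobile", 25))    # -> medium (device-limited)
    print(choose_lod("desktop", 1.5))  # -> low    (network-limited)
```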

    Interoperability of wireless communication technologies in hybrid networks : evaluation of end-to-end interoperability issues and quality of service requirements

    Get PDF
    Hybrid networks employing wireless communication technologies have brought closer the vision of communication “anywhere, any time with anyone”. Such communication technologies consist of various standards, protocols, architectures, characteristics, models, devices, and modulation and coding techniques. These different technologies naturally share some common characteristics, but there are also many important differences, and new advances are emerging very rapidly, with the advent of new models, characteristics, protocols and architectures. This rapid evolution imposes many challenges and issues to be addressed; of particular importance are the interoperability issues of the following wireless technologies: Wireless Fidelity (Wi-Fi) IEEE 802.11, Worldwide Interoperability for Microwave Access (WiMAX) IEEE 802.16, Single Channel Per Carrier (SCPC), Digital Video Broadcasting - Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting - Return Channel via Satellite (DVB-RCS). Because of the differences among these wireless technologies, they do not generally interoperate easily with each other, owing to various interoperability and Quality of Service (QoS) issues. The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, throughput, TCP performance, UDP performance, unicast and multicast services and availability, on hybrid wireless communication networks employing both satellite broadband and terrestrial wireless technologies. The thesis provides an introduction to wireless communication technologies, followed by a review of previous research on hybrid networks (both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS, and SCPC). Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards, as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, and physical- and MAC-layer design and development issues. Although some previous studies provide valuable contributions to this area of research, they are limited to link-layer characteristics, TCP performance, delay, bandwidth, capacity, data rate, and throughput. None of them covers all aspects of end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, and unicast and multicast performance, at the end-to-end level on hybrid wireless networks. Interoperability issues are discussed in detail, and the different technologies and protocols were compared using appropriate testing tools, assessing various performance measures including bandwidth, delay, jitter, latency, packet loss, throughput and availability. The standards, protocol suites/models and architectures for Wi-Fi, WiMAX, DVB-RCS and SCPC, alongside different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, testing was conducted using realistic test scenarios on real networks comprising variable numbers and types of nodes. Data, traces, packets and files were captured from various live scenarios and sites.
The test results were analysed in order to measure and compare the characteristics of the wireless technologies, devices, protocols and applications. The motivation of this research is to study all the end-to-end interoperability issues and Quality of Service requirements of rapidly growing hybrid networks in a comprehensive and systematic way. Its significance is that it is based on a comprehensive and systematic investigation of issues and facts, rather than hypothetical scenarios or simulations, which informed the design of a test methodology for empirical data gathering through real network testing, suitable for measuring hybrid-network single-link or end-to-end issues using proven test tools. This systematic investigation encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, the performance of audio and video sessions, multicast and unicast performance, and stress testing. The testing covers the most common test scenarios in hybrid networks, and recommendations are given for achieving good end-to-end interoperability and QoS in such networks. The contributions of the study include the identification of gaps in the research, a description of interoperability issues, a comparison of the most common test tools, the development of a generic test plan, a new testing process and methodology, and analysis and network design recommendations for end-to-end interoperability issues and QoS requirements, covering the complete cycle of this research. It is found that UDP is more suitable than TCP for hybrid wireless networks, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic with strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is a delay of approximately 600 to 680 ms, due to the long distances involved (and the finite speed of light) when communicating over geostationary satellites. The delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritization, congestion control, buffer management, delay compensators, protocol compensators, automatic repeat request techniques, flow scheduling, and bandwidth allocation.
(EThOS - Electronic Theses Online Service, United Kingdom)
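
    The 600 to 680 ms figure quoted above can be sanity-checked from the geometry of a geostationary link. The sketch below is a back-of-the-envelope round-trip calculation; the orbital altitude and worst-case slant range are standard values, while the processing allowance is an assumption rather than a figure from the thesis.

```python
# Back-of-the-envelope check of the ~600-680 ms geostationary satellite delay.
# Altitude and slant-range values are standard geostationary figures; the
# processing/queuing allowance is an assumption, not a measured value.

C_KM_S = 299_792            # speed of light in vacuum, km/s
GEO_ALTITUDE_KM = 35_786    # geostationary orbit altitude above the equator
EDGE_SLANT_KM = 41_679      # worst-case slant range near the edge of coverage

def round_trip_ms(slant_range_km: float, processing_ms: float = 0.0) -> float:
    """Round trip: up + down for the request, up + down for the reply."""
    one_way_s = 2 * slant_range_km / C_KM_S
    return 2 * one_way_s * 1000 + processing_ms

if __name__ == "__main__":
    print(f"best case (sub-satellite point): {round_trip_ms(GEO_ALTITUDE_KM):.0f} ms")  # ~477 ms
    print(f"edge of coverage + ~100 ms processing: "
          f"{round_trip_ms(EDGE_SLANT_KM, 100):.0f} ms")                                # ~656 ms
```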

    Enabling energy-awareness for internet video

    Get PDF
    Continuous improvements to the state of the art have made it easier to create, send and receive vast quantities of video over the Internet. Catalysed by these developments, video is now the largest and fastest-growing type of traffic on modern IP networks: in 2015, video was responsible for 70% of all traffic on the Internet, with a compound annual growth rate of 27%. At the same time, concerns about the growing energy consumption of ICT in general continue to rise, and it is not surprising that there is a significant energy cost associated with these extensive video usage patterns. In this thesis, I examine the energy consumption of typical video configurations during decoding (playback) and encoding through empirical measurements on an experimental test-bed. I then extrapolate to a global scale to show the opportunity for significant energy savings, achievable by simple modifications to these video configurations. Based on insights gained from these measurements, I propose a novel, energy-aware Quality of Experience (QoE) metric for digital video: the Energy-Video Quality Index (EnVI). I then present and evaluate vEQ-benchmark, a benchmarking and measurement tool for generating EnVI scores. The tool enables fine-grained resource-usage analyses of video playback systems and facilitates the creation of statistical models of their power usage. I propose GreenDASH, an energy-aware extension of the existing Dynamic Adaptive Streaming over HTTP (DASH) standard. GreenDASH incorporates relevant energy-usage and video-quality information into the existing standard and could enable dynamic, energy-aware adaptation of video in response to energy usage and users’ ‘green’ preferences. I also evaluate the subjective perception of such energy-aware, adaptive video streaming by means of a user study featuring 36 participants, examining how video may be adapted to save energy without a significant impact on users’ Quality of Experience. In summary, this thesis highlights the significant opportunities for energy savings if Internet users gain an awareness of their energy usage, and presents a technical discussion of how this can be achieved by straightforward extensions to the current state of the art.
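
    As an illustration of the kind of energy-aware adaptation GreenDASH proposes, the sketch below scores each feasible DASH representation by quality and by an estimated decode power, weighted by a user ‘green’ preference. The bitrates, power estimates and scoring function are illustrative assumptions, not the EnVI metric or the adaptation logic from the thesis.

```python
# Minimal sketch of energy-aware representation selection in the spirit of
# GreenDASH: weigh quality against estimated decode power according to a
# user 'green' preference. All numbers and the scoring are illustrative
# assumptions, not the thesis's EnVI model.

REPRESENTATIONS = [
    # (bitrate_kbps, relative_quality 0..1, est_decode_power_watts)
    (1000, 0.55, 2.0),
    (2500, 0.75, 2.8),
    (5000, 0.90, 3.9),
    (8000, 1.00, 5.2),
]

def pick_representation(throughput_kbps: float, green_pref: float):
    """green_pref in [0, 1]: 0 = quality only, 1 = energy only."""
    feasible = [r for r in REPRESENTATIONS if r[0] <= throughput_kbps]
    if not feasible:
        return REPRESENTATIONS[0]
    max_power = max(p for _, _, p in REPRESENTATIONS)
    def score(rep):
        _, quality, power = rep
        return (1 - green_pref) * quality + green_pref * (1 - power / max_power)
    return max(feasible, key=score)

if __name__ == "__main__":
    print(pick_representation(6000, 0.0))  # quality-first  -> the 5000 kbps tier
    print(pick_representation(6000, 0.8))  # energy-leaning -> a lower tier
```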