
    Forward Error Correction for Fast Streaming with Open-Source Components

    The mechanisms that provide streaming functionality are complex and far from perfect. Reliability in transmission depends upon the underlying protocols chosen for implementation. There are two main networking protocols for data transmission: TCP and UDP. TCP guarantees the arrival of data at the receiver, whereas UDP does not. Forward Error Correction is based on a technique called “erasure coding” and can be used to mitigate the data loss experienced when using UDP. This paper describes in detail the development of a video streaming library that uses the UDP transport protocol in order to test and further explore network-based Forward Error Correction erasure codes.
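
    The core erasure-coding idea can be pictured with a minimal sketch: send one XOR parity packet per group of K data packets over UDP, so the receiver can rebuild any single packet lost inside that group. The group size, payload size and addresses below are illustrative assumptions, not details of the library described in the paper.

        # Minimal XOR-parity forward error correction over UDP (illustrative sketch).
        import socket

        K = 4            # data packets per FEC group (assumed value)
        PAYLOAD = 1200   # bytes per packet (assumed value)

        def xor_blocks(blocks):
            """XOR equal-length byte strings together."""
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, b in enumerate(block):
                    out[i] ^= b
            return bytes(out)

        def send_group(sock, addr, chunks):
            """Send up to K data packets followed by their XOR parity packet."""
            padded = [c.ljust(PAYLOAD, b'\0') for c in chunks]
            for seq, chunk in enumerate(padded):
                sock.sendto(bytes([seq]) + chunk, addr)
            sock.sendto(bytes([K]) + xor_blocks(padded), addr)  # parity uses seq == K

        def recover_missing(received, parity):
            """Rebuild one lost data packet from the received packets plus the parity."""
            return xor_blocks(list(received.values()) + [parity])

        if __name__ == '__main__':
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            frame = b'example video payload ' * 64
            chunks = [frame[i:i + PAYLOAD] for i in range(0, len(frame), PAYLOAD)][:K]
            send_group(sock, ('127.0.0.1', 5004), chunks)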

    Streaming DICOM Real-Time Video and Metadata Flows Outside The Operating Room

    With the current advancements in the medical world, surgeons face the challenge of handling many sources of medical information in increasingly complex and technology-rich Operating Rooms (ORs). In next-generation ORs there will be a growing number of video flows during surgery (e.g. endoscopes, cameras, ultrasound), which can also be displayed throughout the OR in order to facilitate the surgeon's task and to avoid adverse events or problems related to inadequate communication in the OR. Additionally, other information needs to be shared before, during and after an operation, such as the history of the patient's digital images in the PACS and the metadata coming from medical sensors. Moreover, the medical videos captured in the OR can either be displayed on a large screen inside the OR to give the surgeon more visibility, in this case via DICOM-RTV, or streamed outside the OR via a P2P solution. The latter can serve various purposes, such as teaching medical students in real time or remote expertise with a senior surgeon. Hence, this paper addresses the challenges of streaming DICOM-RTV video and metadata flows live from the operating room, typically during an ongoing surgery, in real time to the outside world. A Proof of Concept is also presented in order to demonstrate the feasibility of our solution.

    Video QoS/QoE over IEEE802.11n/ac: A Contemporary Survey

    The demand for video applications over wireless networks has increased tremendously, and IEEE 802.11 standards provide better support for video transmission. However, providing Quality of Service (QoS) and Quality of Experience (QoE) for video over WLAN is still a challenge due to the error sensitivity of compressed video and dynamic channels. This thesis presents a contemporary survey of video QoS/QoE issues and solutions over WLAN. The objective of the study is to provide an overview of the issues by conducting a background study on video codecs and their features and characteristics, followed by a study of QoS and QoE support in IEEE 802.11 standards. Since IEEE 802.11n is the most widely deployed current standard and IEEE 802.11ac is the upcoming standard, this survey investigates the most recent video QoS/QoE solutions based on these two standards. The solutions are divided into two broad categories: academic solutions and vendor solutions. Academic solutions mostly target three layers, namely Application, Media Access Control (MAC) and Physical (PHY), and fall into two major categories: single-layer solutions and cross-layer solutions. Single-layer solutions focus on a single layer to enhance video transmission performance over WLAN. Cross-layer solutions involve two or more layers to provide a single QoS solution for video over WLAN. The thesis also presents and technically analyzes QoS solutions from three popular vendors. It concludes that single-layer solutions are not directly related to video QoS/QoE, and that cross-layer solutions perform better than single-layer solutions but are much more complicated and harder to implement. Most vendors rely on their network infrastructure to provide QoS for multimedia applications. They have their own techniques and mechanisms, but the way QoS/QoE is provided for video is almost the same, because they use the same standards and rely on Wi-Fi Multimedia (WMM) to provide QoS.
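
    As a concrete illustration of how an application hands video traffic to WMM, one common technique is to mark outgoing packets with a DSCP value that a WMM-capable access point maps to the video access category (AC_VI). The sketch below is an assumption-laden illustration, not a solution from the thesis: the DSCP choice (AF41) and the destination address are placeholders, and the exact DSCP-to-access-category mapping depends on the driver and vendor.

        # Mark an outgoing UDP/RTP-style flow with DSCP AF41 (ToS byte 0x88) so that a
        # WMM-capable network can queue it in the video access category (AC_VI).
        # Values are illustrative; the mapping is vendor/driver dependent.
        import socket

        AF41_TOS = 0x88   # DSCP 34 shifted into the ToS byte

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, AF41_TOS)
        sock.sendto(b'\x80' * 12 + b'video payload', ('192.0.2.10', 5004))  # placeholder peer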

    Compare multimedia frameworks in mobile platforms

    Multimedia support is currently one of the most important features of mobile devices. Many modern mobile platforms use a centralized software stack, called a multimedia framework, to handle multimedia requirements. The multimedia framework belongs to the middleware layer of the mobile operating system. It can be considered a bridge that connects the operating system kernel and hardware drivers with UI applications. It supplies high-level APIs that offer simple solutions for complicated multimedia tasks to UI application developers, and it manages and utilizes low-level system software and hardware in an efficient manner, providing a centralized mediator between high-level demands and low-level system resources. In this M.Sc. thesis project we have studied, analyzed and compared the open-source GStreamer, Android Stagefright and Microsoft Silverlight Media Framework from several perspectives, including architecture, supported use cases, extensibility, implementation language and programming-language support (bindings), developer support, and legal aspects. One of the main contributions of this thesis is that it clarifies in detail the strengths and weaknesses of each framework. Furthermore, the thesis should serve as decision-making guidance when one needs to select a multimedia framework for a project. Moreover, to give a more concrete impression of the three multimedia frameworks, a basic media player implementation is demonstrated with source code in the thesis.
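
    To give a flavour of the high-level API style such a framework exposes, here is a minimal GStreamer player sketch using the Python (PyGObject) bindings. It is not the player implementation from the thesis; the playbin element, the placeholder URI and the bus handling are simply the standard GStreamer way of playing a media file.

        # Minimal GStreamer-based player (Python/PyGObject bindings); the URI is a placeholder.
        import gi
        gi.require_version('Gst', '1.0')
        from gi.repository import Gst, GLib

        Gst.init(None)
        player = Gst.ElementFactory.make('playbin', 'player')   # playbin assembles the whole pipeline
        player.set_property('uri', 'file:///tmp/example.mp4')   # placeholder media file
        player.set_state(Gst.State.PLAYING)

        loop = GLib.MainLoop()
        bus = player.get_bus()
        bus.add_signal_watch()
        bus.connect('message::eos', lambda *args: loop.quit())    # stop at end of stream
        bus.connect('message::error', lambda *args: loop.quit())  # or on error
        loop.run()
        player.set_state(Gst.State.NULL)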

    Analysis of Real Time Video Communication Systems

    Most existing real-time video communication systems focus mainly on providing better video quality throughout the session. In the quest to maintain video quality, they often do so at the cost of broken sessions, blocky video or audio disturbances when the network bandwidth drops below the required rate. The system described in this paper concentrates on the analysis of the input parameters to the audio and video encoders that affect the quality of communication. The input parameters to the video encoder are altered so that a balance is maintained between video quality and continuity of communication. The video encoder parameters used for analysis are the video frame size, the frames per second and the target encode bitrate used for encoding each video frame. The audio encoder parameters used for analysis are the sampling frequency, the bits per sample and the number of audio channels used for recording sound. The video encoder parameters are changed frequently depending on factors such as bandwidth variations and the encode time required on the hardware used. In extremely low-bandwidth situations the video is stopped. The communication is kept alive throughout the session by always keeping the audio session connected, so that users do not feel disconnected. The other important factors required for real-time video communication to work smoothly are the transport protocols used to carry media data and control data between peers. The protocols discussed in this paper are the Real-time Transport Protocol (RTP) and the RTP Control Protocol (RTCP). The media data generated at the peers is transported using RTP, and the control data describing the media data is transported using RTCP. DOI: 10.17762/ijritcc2321-8169.15083
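
    A simple way to picture the kind of adaptation described above is a rule that maps the estimated bandwidth to encoder parameters and drops video (keeping audio) below a floor so the session stays alive. The thresholds and parameter sets below are hypothetical illustrations, not values from the paper.

        # Illustrative bandwidth-to-encoder-parameter rule; all thresholds are hypothetical.

        def select_video_params(estimated_kbps):
            """Return (width, height, fps, video_bitrate_kbps), or None for audio-only."""
            audio_kbps = 64                      # assumed audio reservation
            if estimated_kbps < 100:             # extreme low bandwidth: stop video, keep audio
                return None
            if estimated_kbps < 400:
                return (320, 240, 15, estimated_kbps - audio_kbps)
            if estimated_kbps < 1200:
                return (640, 480, 25, estimated_kbps - audio_kbps)
            return (1280, 720, 30, min(estimated_kbps - audio_kbps, 2500))

        if __name__ == '__main__':
            for bw in (80, 300, 900, 3000):
                print(bw, 'kbps ->', select_video_params(bw))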

    Implementation and Evaluation of Security on a Gateway for Web-based Real-Time Communication

    Web Real-Time Communication (WebRTC) is a set of standards under development that aims to provide native peer-to-peer multimedia communication between browsers. The standards specify the requirements for browsers, including a JavaScript Application Programming Interface (API) for web developers, as well as the media-plane protocols to be used for connection establishment, media transport and data encryption. In this thesis, support for Interactive Connectivity Establishment (ICE) and media encryption was implemented in an existing gateway prototype. The gateway was originally developed to connect the novel WebRTC possibilities with existing IP Multimedia Subsystem (IMS) services, but it was lacking the necessary security functionalities. The performance of the gateway was measured and analyzed in different call scenarios between WebRTC clients. Two key metrics, the CPU load of the gateway and the packet delay, were considered in the analysis. In addition to single-call scenarios, the tests included relaying ten simultaneous HD video calls and relaying ten simultaneous audio calls. Estimates based on the measurements suggest that the overall capacity of a single gateway between two WebRTC clients ranges from 14 simultaneous HD video calls to 74 simultaneous audio calls. The median delay in the gateway remained under 0.2 milliseconds throughout the testing.
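
    The capacity figures quoted above follow from simple arithmetic: divide the gateway's CPU budget by the CPU load added per relayed call. The sketch below shows that calculation; the per-call load percentages are hypothetical placeholders chosen to be consistent with the reported 14/74 call capacities, not measurements from the thesis.

        # Back-of-the-envelope gateway capacity from per-call CPU load.
        # Per-call percentages are hypothetical placeholders, not thesis measurements.

        def capacity(per_call_cpu_percent, cpu_budget_percent=100.0):
            """How many simultaneous calls fit inside the CPU budget."""
            return int(cpu_budget_percent // per_call_cpu_percent)

        if __name__ == '__main__':
            print('HD video calls:', capacity(per_call_cpu_percent=7.0))   # -> 14
            print('Audio calls:  ', capacity(per_call_cpu_percent=1.35))   # -> 74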

    Network Performance in HTML5 Video Connections

    Currently, most remote education systems use video streaming as the main basis to support teaching. These transmissions can be viewed on devices with different hardware features, such as personal computers, tablets or smartphones, over networks with different capacities. The web browser and coding options used can also influence network performance, so the quality of the displayed video may differ. This work presents a practical study to establish the best combination of web browsers and containers for encoding multimedia files for video streaming on personal computers running the Windows 7 and Windows 10 operating systems. For this, a video encoded with different codecs and compressed with different containers has been transmitted through a 1000BaseT network. Finally, the results are analyzed and compared to determine the most efficient combination of parameters according to the resolution of the transmitted video. This work has been partially supported by the European Union through the ERANETMED (Euromediterranean Cooperation through ERANET joint activities and beyond) project ERANETMED3-227 SMARTWATIR and by the Ministerio de Educación, Cultura y Deporte, through the Convocatoria 2016 - Proyectos I+D+I - Programa Estatal De Investigación, Desarrollo e Innovación Orientada a los retos de la sociedad (Project TEC2016-76795-C6-4-R) and through the Convocatoria 2017 - Proyectos I+D+I - Programa Estatal de Investigación, Desarrollo e Innovación, convocatoria excelencia (Project TIN2017-84802-C2-1-P). Sendra, S.; Túnez-Murcia, A. I.; Lloret, J.; Jimenez, J. M. (2018). Network Performance in HTML5 Video Connections. Network Protocols and Algorithms, 10(3), 43-62. https://doi.org/10.5296/npa.v10i3.13933

    Large-Scale Measurement of Real-Time Communication on the Web

    Web Real-Time Communication (WebRTC) is getting wide adoption across browsers (Chrome, Firefox, Opera, etc.) and platforms (PC, Android, iOS). It enables application developers to add real-time communication features (text chat, audio/video calls) to web applications using W3C-standard JavaScript APIs, and end users can enjoy a real-time multimedia communication experience from the browser without the complication of installing special applications or browser plug-ins. As WebRTC-based applications are being deployed on the Internet by thousands of companies across the globe, it is very important to understand the quality of the real-time communication services these applications provide. Important performance metrics include whether the communication session was properly set up, the network delays, the packet loss rate, the throughput, and so on. At Callstats.io, we provide a solution to address these concerns. By integrating a JavaScript API into WebRTC applications, Callstats.io helps application providers measure Quality of Experience (QoE) related metrics on the end-user side. This thesis illustrates how this WebRTC performance measurement system is designed and built, and we show some statistics derived from the collected data to give insight into the performance of today's WebRTC-based real-time communication services. According to our measurements, real-time communication over the Internet generally performs well in terms of latency and loss. The throughput is good for about 30% of the communication sessions.
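
    The kind of back-end aggregation such a measurement system performs can be sketched as follows: collect per-session metrics reported by clients (delay, loss, throughput) and compute summary statistics such as the share of sessions meeting a throughput target. The field names, sample values and threshold below are illustrative assumptions, not the Callstats.io data model.

        # Aggregate per-session metrics into summary statistics (illustrative schema).
        from statistics import median

        sessions = [                                   # placeholder client reports
            {'rtt_ms': 45,  'loss_pct': 0.2, 'throughput_kbps': 1800},
            {'rtt_ms': 120, 'loss_pct': 1.5, 'throughput_kbps': 650},
            {'rtt_ms': 80,  'loss_pct': 0.0, 'throughput_kbps': 2400},
        ]

        def summarize(sessions, good_throughput_kbps=1500):
            """Median delay/loss plus the share of sessions above a throughput target."""
            good = sum(s['throughput_kbps'] >= good_throughput_kbps for s in sessions)
            return {
                'median_rtt_ms': median(s['rtt_ms'] for s in sessions),
                'median_loss_pct': median(s['loss_pct'] for s in sessions),
                'good_throughput_share': good / len(sessions),
            }

        print(summarize(sessions))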