12,614 research outputs found

    Minimizing the impact of delay on live SVC-based HTTP adaptive streaming services

    HTTP Adaptive Streaming (HAS) is becoming the de facto standard for Over-The-Top video streaming services. Video content is temporally split into segments which are offered at multiple qualities to the clients. These clients autonomously select the quality layer matching the current state of the network through a quality selection heuristic. Recently, academia and industry have begun evaluating the feasibility of adopting layered video coding for HAS. Instead of downloading one file for a certain quality level, scalable video streaming requires downloading several interdependent layers to obtain the same quality. This implies that the base layer is always downloaded and available for playout, even when throughput fluctuates and enhancement layers cannot be downloaded in time. This layered approach can thus help provide better service quality assurance for video streaming. However, adopting scalable video coding for HAS also introduces new issues: requesting multiple files over HTTP amplifies the impact of the end-to-end delay, and thus degrades the service provided to the client. This is even worse in a live TV scenario, where the drift on the live signal should be minimized, requiring smaller segment and buffer sizes. In this paper, we characterize the impact of delay on several measurement-based heuristics. Furthermore, we propose several ways to overcome these end-to-end delay issues, such as parallel and pipelined downloading of segment layers, to provide a higher quality for the video service.
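    As an illustration of the parallel-download idea, the sketch below fetches all layers of a segment concurrently, so each layer pays the round-trip time once instead of accumulating one RTT per layer. The server URL, layer naming, and deadline handling are invented for the example, not taken from the paper.

```python
# Minimal sketch of parallel layer downloading for SVC-based HAS.
# The URL scheme http://<server>/seg_<n>_layer_<l> is a hypothetical example.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

SERVER = "http://example.com"  # hypothetical streaming server
LAYERS = 3                     # base layer (0) plus two enhancement layers

def fetch_layer(segment, layer, timeout):
    """Download one layer of one segment; return None on timeout/error."""
    url = f"{SERVER}/seg_{segment}_layer_{layer}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except OSError:
        return None  # the layer missed its deadline

def fetch_segment(segment, deadline=1.0):
    """Request all layers in parallel instead of serially."""
    with ThreadPoolExecutor(max_workers=LAYERS) as pool:
        futures = [pool.submit(fetch_layer, segment, l, deadline)
                   for l in range(LAYERS)]
        layers = [f.result() for f in futures]
    # Play out the highest contiguous run of layers that arrived in time;
    # layers are interdependent, so a gap makes higher layers unusable.
    usable = []
    for data in layers:
        if data is None:
            break
        usable.append(data)
    return usable
```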

    Improving perceptual multimedia quality with an adaptable communication protocol

    Innovations and developments in networking technology have been driven by technical considerations, with little analysis of the benefit to the user. In this paper we argue that the network parameters that define network Quality of Service (QoS) must be driven by user-centric parameters such as user expectations and requirements for multimedia transmitted over a network. To this end, a mechanism for mapping user-oriented parameters to network QoS parameters is outlined. The paper surveys existing methods for mapping user requirements to the network. An adaptable communication system is implemented to validate the mapping; its architecture adapts to varying network conditions caused by congestion so as to maintain user expectations and requirements. The paper also surveys research in the area of adaptable communication architectures and protocols. Our results show that such a user-biased approach to networking does bring tangible benefits to the user.
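    As a toy illustration of such a mapping, user-level expectations could be translated into network-level QoS targets with a simple lookup; the expectation levels and numeric targets below are assumptions made for the sketch, not values from the paper.

```python
# Illustrative user-centric to network-level QoS mapping (made-up values).
USER_TO_NETWORK_QOS = {
    # user expectation: (max one-way delay ms, max jitter ms, max loss %)
    "high":   (150, 30, 0.5),
    "medium": (300, 50, 1.0),
    "low":    (400, 80, 3.0),
}

def network_targets(user_level):
    """Translate a user-oriented requirement into network QoS parameters."""
    delay, jitter, loss = USER_TO_NETWORK_QOS[user_level]
    return {"delay_ms": delay, "jitter_ms": jitter, "loss_pct": loss}

# An adaptable protocol could fall back to the next level under congestion.
print(network_targets("high"))  # {'delay_ms': 150, 'jitter_ms': 30, ...}
```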

    New Method of Measuring TCP Performance of IP Network using Bio-computing

    The performance of an Internet Protocol (IP) network can be measured via the Transmission Control Protocol (TCP), because TCP guarantees that data sent from one end of a connection actually reaches the other end, in the order it was sent; otherwise an error is reported. Several methods exist for measuring TCP performance, among them genetic algorithms, neural networks, and data mining; all of these have weaknesses and cannot measure TCP performance accurately. This paper proposes a new method of measuring TCP performance of a real-time IP network using bio-computing, in particular molecular computation, because it provides sound results and can exploit the facilities of phylogenetic analysis. The new method is applied in real time to the Biological Kurdish Messenger (BIOKM) model, designed to measure TCP performance under two protocols: File Transfer Protocol (FTP) and Internet Relay Chat Daemon (IRCD). This application gives TCP performance results very close to those obtained from Little's law using the same model (BIOKM): the difference in utilization (busy fraction) and idle time between the new bio-computing method and Little's law was approximately 0.13%.
    KEYWORDS: Bio-computing, TCP performance, Phylogenetic tree, Hybridized Model (Normalized), FTP, IRCD
    Comment: 17 pages, 10 figures, 5 tables
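    For context on the Little's-law baseline mentioned above, a toy utilization and idle-time computation for a single-server queue might look as follows; the arrival and service rates are made-up numbers, not measurements from the paper.

```python
# Little's law: L = lambda * W, where L is the mean number of requests in
# the system, lambda the arrival rate, and W the mean time in the system.
# For an M/M/1-style view, utilization rho = lambda / mu, idle = 1 - rho.
arrival_rate = 42.0  # requests per second observed on the link (assumed)
service_rate = 50.0  # requests per second the server can handle (assumed)

rho = arrival_rate / service_rate  # fraction of time the system is busy
idle = 1.0 - rho                   # fraction of time the system is idle

mean_time_in_system = 1.0 / (service_rate - arrival_rate)  # M/M/1 result
L = arrival_rate * mean_time_in_system                     # Little's law

print(f"utilization = {rho:.2%}, idle = {idle:.2%}, L = {L:.2f} requests")
# utilization = 84.00%, idle = 16.00%, L = 5.25 requests
```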

    A machine learning-based framework for preventing video freezes in HTTP adaptive streaming

    HTTP Adaptive Streaming (HAS) represents the dominant technology to deliver videos over the Internet, due to its ability to adapt the video quality to the available bandwidth. Nevertheless, HAS clients can still suffer from freezes in the video playout, the main factor influencing users' Quality of Experience (QoE). To reduce video freezes, we propose a network-based framework where a network controller prioritizes the delivery of particular video segments to prevent freezes at the clients. This framework is based on OpenFlow, a widely adopted protocol implementing the software-defined networking principle. The main element of the controller is a Machine Learning (ML) engine based on the random undersampling boosting algorithm and fuzzy logic, which can detect when a client is close to a freeze and drive the network prioritization to avoid it. This decision is based on measurements collected from the network nodes only, without any knowledge of the streamed videos or of the clients' characteristics. In this paper, we detail the design of the proposed ML-based framework and compare its performance with other benchmark HAS solutions under various video streaming scenarios. In particular, we show through extensive experimentation that the proposed approach can reduce video freezes and freeze time by about 65% and 45%, respectively, compared to benchmark algorithms. These results represent a major improvement for the QoE of users watching multimedia content online.
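    The detection step could be sketched as follows, using RUSBoostClassifier from the imbalanced-learn package as an off-the-shelf implementation of random undersampling boosting; the feature set, the labeling rule, and the synthetic data are assumptions for illustration only (the paper additionally combines the classifier with fuzzy logic).

```python
# Sketch: train a random-undersampling-boosting classifier on per-client
# network measurements and flag clients predicted to be close to a freeze.
import numpy as np
from imblearn.ensemble import RUSBoostClassifier

rng = np.random.default_rng(0)
# Synthetic training set: rows = clients, columns = (throughput Mbps,
# estimated buffer s, segment inter-arrival s); label 1 = freeze imminent.
X = rng.normal(loc=[5.0, 10.0, 2.0], scale=[2.0, 4.0, 0.5], size=(1000, 3))
y = (X[:, 1] < 4.0).astype(int)  # toy rule: low buffer => near freeze

clf = RUSBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# At run time, the controller scores fresh measurements and could prioritize
# (e.g. via OpenFlow queues) the next segments of clients predicted to freeze.
incoming = np.array([[1.2, 3.0, 3.5]])  # a struggling client
if clf.predict(incoming)[0] == 1:
    print("prioritize this client's next segment")
```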

    ATLANTIDES: Automatic Configuration for Alert Verification in Network Intrusion Detection Systems

    We present an architecture designed for alert verification (i.e., to reduce false positives) in network intrusion detection systems. Our technique is based on a systematic (and automatic) anomaly-based analysis of the system output, which provides useful context information regarding the network services. The false positives raised by the NIDS analyzing the incoming traffic (which can be either signature- or anomaly-based) are reduced by correlating them with the output anomalies. We designed our architecture for TCP-based network services that have a client/server architecture (such as HTTP). Benchmarks show a substantial reduction of false positives, between 50% and 100%.
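    The core correlation logic can be summarized in a few lines: an alert on incoming traffic is kept only if the corresponding server output is also flagged anomalous. Connection identifiers and the anomaly set below are placeholders, not the ATLANTIDES implementation.

```python
# Schematic alert verification: correlate NIDS alerts with output anomalies.
def verify_alerts(alerts, output_anomalies):
    """alerts: iterable of (connection_id, alert); output_anomalies: set of
    connection_ids whose server output deviated from the learned model."""
    verified = []
    for conn_id, alert in alerts:
        if conn_id in output_anomalies:
            verified.append(alert)  # the attack likely had an effect
        # otherwise the alert is treated as a false positive and suppressed
    return verified

alerts = [("c1", "sig-1042"), ("c2", "sig-0007")]
anomalous_outputs = {"c1"}  # only c1's server response looked abnormal
print(verify_alerts(alerts, anomalous_outputs))  # -> ['sig-1042']
```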

    Talos: Neutralizing Vulnerabilities with Security Workarounds for Rapid Response

    Considerable delays often exist between the discovery of a vulnerability and the issuing of a patch. One way to mitigate this window of vulnerability is to use a configuration workaround, which prevents the vulnerable code from being executed at the cost of some lost functionality -- but only if one is available. Since program configurations are not specifically designed to mitigate software vulnerabilities, we find that they cover only 25.2% of vulnerabilities. To minimize the window of vulnerability left by patch delays and to address the limitations of configuration workarounds, we propose Security Workarounds for Rapid Response (SWRRs), which are designed to neutralize security vulnerabilities in a timely, secure, and unobtrusive manner. Similar to configuration workarounds, SWRRs neutralize vulnerabilities by preventing vulnerable code from being executed at the cost of some lost functionality. However, the key difference is that SWRRs use existing error-handling code within programs, which enables them to be mechanically inserted with minimal knowledge of the program and minimal developer effort. This allows SWRRs to achieve high coverage while still being fast and easy to deploy. We have designed and implemented Talos, a system that mechanically instruments SWRRs into a given program, and evaluate it on five popular Linux server programs. We run exploits against 11 real-world software vulnerabilities and show that SWRRs neutralize the vulnerabilities in all cases. Quantitative measurements on 320 SWRRs indicate that SWRRs instrumented by Talos can neutralize 75.1% of all potential vulnerabilities and incur a loss of functionality similar to configuration workarounds in 71.3% of those cases. Our overall conclusion is that automatically generated SWRRs can safely mitigate 2.1x more vulnerabilities than configuration workarounds, while incurring a comparable loss of functionality.
    Comment: Published in Proceedings of the 37th IEEE Symposium on Security and Privacy (Oakland 2016).
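    Although Talos itself instruments C programs, the SWRR idea can be sketched conceptually in a few lines of Python: the instrumented call site diverts into the program's pre-existing error-handling path instead of executing the vulnerable code. All names and the error path below are hypothetical, not taken from Talos.

```python
# Conceptual sketch of an SWRR-style security workaround (hypothetical names).
class RequestError(Exception):
    """Stands in for the program's existing error-handling mechanism."""

def parse_fancy_header(raw):
    # The hypothetical vulnerable routine; imagine a memory-safety bug here.
    raise NotImplementedError

def swrr_parse_fancy_header(raw):
    # The SWRR: short-circuit into the existing error path, trading the
    # feature (fancy-header support) for safety until a real patch ships.
    raise RequestError("feature disabled by security workaround")

def handle_request(headers):
    try:
        swrr_parse_fancy_header(headers)  # instrumented call site
    except RequestError as e:
        return f"400 Bad Request: {e}"    # existing error-handling code

print(handle_request({"X-Fancy": "payload"}))
```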