
    Investigating the Effects of Network Dynamics on Quality of Delivery Prediction and Monitoring for Video Delivery Networks

    Video streaming over the Internet requires an optimized delivery system given the advances in network architecture, for example, Software Defined Networks. Machine Learning (ML) models have been deployed in an attempt to predict the quality of video streams. Some of these efforts have considered the prediction of Quality of Delivery (QoD) metrics of the video stream, in an effort to measure stream quality from the network perspective. In most cases, these models have either treated the ML algorithms as black boxes or failed to capture the network dynamics of the associated video streams. This PhD thesis investigates the effects of network dynamics on QoD prediction using ML techniques. The hypothesis investigated is that ML techniques that model the underlying network dynamics achieve accurate QoD and video quality predictions and measurements. The results demonstrate that the proposed techniques offer performance gains over approaches that fail to consider network dynamics, and highlight that choosing the correct model, by modelling the dynamics of the network infrastructure, is crucial to the accuracy of the ML predictions. These results are significant as they demonstrate that the improved performance is achieved at no additional computational or storage cost. The techniques can help network managers, data centre operators and video service providers take proactive and corrective actions for improved network efficiency and effectiveness.
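    As a loose illustration of the thesis' central claim (the actual models, features and datasets are not given in this abstract, so the feature names, lag window and regressor choice below are all assumptions), the following sketch contrasts a regressor trained on instantaneous network snapshots with one that also sees lagged samples, a crude encoding of network dynamics:

```python
# Hypothetical sketch: feature names, the lag window and the choice of
# RandomForestRegressor are illustrative assumptions, not the thesis' models.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def add_lagged_features(X, lags=3):
    """Append the previous `lags` samples of every feature so the model
    sees short-term network dynamics, not just the current snapshot."""
    cols = [X[lags:]]
    for k in range(1, lags + 1):
        cols.append(X[lags - k:-k])
    return np.hstack(cols)

rng = np.random.default_rng(0)
X = rng.random((2000, 4))            # toy per-interval throughput/RTT/loss/jitter
y = X[:, 0] + np.roll(X[:, 1], 1)    # toy QoD target depending on a *past* sample

for name, feats, target in [("snapshot only", X, y),
                            ("with 3 lags", add_lagged_features(X), y[3:])]:
    Xtr, Xte, ytr, yte = train_test_split(feats, target, shuffle=False)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print(f"{name}: MAE = {mean_absolute_error(yte, model.predict(Xte)):.4f}")
```

    On this toy target, which depends on a past sample, the lagged-feature model achieves a noticeably lower error, mirroring the abstract's claim that modelling dynamics improves QoD prediction.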

    Design of an adaptive congestion control protocol for reliable vehicle safety communication

    [no abstract]

    Scalable Video Streaming over the Internet

    The objectives of this thesis are to investigate the challenges of video streaming, to explore and compare different video streaming mechanisms, and to develop video streaming algorithms that maximize visual quality. To achieve these objectives, we first investigate scalable video multicasting schemes by comparing layered video multicasting with replicated stream video multicasting. Even though it has been generally accepted that layered video multicasting is superior to replicated stream multicasting, this assumption is not based on a systematic and quantitative comparison. We argue that there are indeed scenarios where replicated stream multicasting is the preferred approach. We also consider the problem of providing perceptually good quality for layered VBR video. This problem is challenging because the dynamic behavior of the Internet's available bandwidth makes it difficult to provide good quality, and because a video encoded for consistent quality exhibits significant data rate variability. We are, therefore, faced with the problem of accommodating the mismatch between the variability of the available bandwidth and the data rate variability of the encoded video. We propose an optimal quality adaptation algorithm that minimizes quality variation while at the same time increasing the utilization of the available bandwidth. Finally, we investigate the Transmission Control Protocol (TCP) as a transport-layer protocol for streaming packetized media data. Our approach is to model a video streaming system and derive the conditions under which a system employing TCP achieves the desired performance. Both simulation results and Internet experiments validate this model and demonstrate that the derived buffering delay requirements achieve the desired video quality with high accuracy. Based on these relationships, we also develop real-time estimation algorithms for playout buffer requirements.
    Ph.D. Committee Chair: Mostafa H. Ammar; Committee Co-Chair: Yucel Altunbasak; Committee Member: Chuanyi Ji; Committee Member: George Riley; Committee Member: Henry Owen; Committee Member: Jack Brassi
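    The buffering-delay result lends itself to a small worked example. The dissertation's actual model is not reproduced here; the sketch below only shows the standard computation of the minimum startup buffer as the largest shortfall of cumulative TCP arrivals against cumulative playout consumption, on a synthetic throughput trace:

```python
# Hedged sketch of the kind of playout-buffer calculation the thesis studies;
# the trace, rates and interval length are invented for illustration.
import numpy as np

def min_startup_buffer(arrival_bytes, playout_rate, dt=1.0):
    """arrival_bytes[i]: bytes delivered by TCP in interval i (length dt s).
    playout_rate: bytes/s consumed once playback starts.
    Returns the smallest pre-buffered byte count that avoids underflow
    when playback starts immediately after buffering."""
    cum_arrival = np.cumsum(arrival_bytes)
    cum_playout = playout_rate * dt * np.arange(1, len(arrival_bytes) + 1)
    shortfall = cum_playout - cum_arrival
    return max(0.0, shortfall.max())

# Toy trace: TCP throughput fluctuating around the 500 kB/s video rate.
rng = np.random.default_rng(1)
trace = rng.normal(500e3, 150e3, size=300).clip(min=0)
print(f"required startup buffer ≈ {min_startup_buffer(trace, 500e3) / 1e6:.2f} MB")
```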

    Improved algorithms for TCP congestion control

    Reliable and efficient data transfer on the Internet is an important issue. Since the late 1970s the protocol responsible for this has been the de facto standard TCP, which has proven successful throughout the years; its self-managed congestion control algorithms have maintained the stability of the Internet for decades. However, a variety of new technologies, such as high-speed networks (e.g. fibre optics) in high-speed long-delay settings (e.g. cross-Atlantic links) and wireless technologies, have posed many challenges to TCP congestion control algorithms. The congestion control research community has proposed solutions to most of these challenges. This dissertation adds to the existing work in three ways. First, tackling the high-speed long-delay problem of TCP, we propose enhancements to one of the existing TCP variants (part of the Linux kernel stack) and then propose our own variant: TCP-Gentle. Second, tackling the challenge of passively differentiating wireless loss from congestive loss, we propose a novel loss differentiation algorithm that quantifies the noise in packet inter-arrival times and uses this information together with the span (the ratio of maximum to minimum packet inter-arrival times) to adapt the multiplicative decrease factor according to a predefined logical formula. Finally, extending the well-known drift model of TCP to account for wireless loss and some hypothetical cases (e.g. a variable multiplicative decrease), we undertake a stability analysis of the new version of the model.
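    To make the loss differentiation idea concrete, here is a minimal sketch. The dissertation's "predefined logical formula" and its thresholds are not given in the abstract, so the decision rule and every number below are illustrative placeholders only:

```python
# Illustrative placeholder, not the dissertation's formula: high inter-arrival
# noise plus a large span is taken as a hint of wireless (channel) loss, which
# warrants a gentler multiplicative decrease than congestive loss.
import statistics

def adapted_beta(inter_arrivals,
                 noise_threshold=0.5,
                 span_threshold=4.0,
                 beta_congestion=0.5,
                 beta_wireless=0.875):
    """Choose a multiplicative-decrease factor from inter-arrival statistics."""
    mean = statistics.fmean(inter_arrivals)
    noise = statistics.pstdev(inter_arrivals) / mean   # coefficient of variation
    span = max(inter_arrivals) / min(inter_arrivals)   # max/min ratio
    if noise > noise_threshold and span > span_threshold:
        return beta_wireless    # likely wireless loss: back off gently
    return beta_congestion      # likely congestion: standard halving

# Smooth arrivals (congestion-like) vs. bursty arrivals (wireless-like):
print(adapted_beta([10.0, 10.2, 9.9, 10.1, 10.0]))   # -> 0.5
print(adapted_beta([2.0, 25.0, 3.0, 30.0, 2.5]))     # -> 0.875
```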

    Connected and Autonomous Vehicles Applications Development and Evaluation for Transportation Cyber-Physical Systems

    Cyber-Physical Systems (CPS) seamlessly integrate computation, networking and physical devices. A Connected and Autonomous Vehicle (CAV) system, in which each vehicle can wirelessly communicate and share data with other vehicles or infrastructure (e.g., traffic signals, roadside units), requires a Transportation Cyber-Physical System (TCPS) for improving safety and mobility and reducing greenhouse gas emissions. Unfortunately, a typical TCPS with a centralized computing service cannot support real-time CAV applications due to often unpredictable network latency, a high data loss rate and expensive communication bandwidth, especially in a mobile network such as a CAV environment. Edge computing, a new concept for CPS, distributes the resources for communication, computation, control, and storage at different edges of the system. A TCPS with an edge computing strategy forms an edge-centric TCPS, which can reduce data loss and data delivery delay and fulfill high bandwidth requirements. Within the edge-centric TCPS, Vehicle-to-X (V2X) communication, along with in-vehicle sensors, provides a 360-degree view for CAVs that enables autonomous vehicle operation beyond sensor range. The addition of wireless connectivity improves the operational efficiency of CAVs by providing real-time roadway information, such as traffic signal phasing and timing, downstream traffic incident alerts, and predicted future traffic queue information. In addition, the temporal variation of roadway traffic can be captured by sharing Basic Safety Messages (BSMs) between vehicles as well as with roadside infrastructure (e.g., traffic signals, roadside units) and traffic management centers. In the early days of CAVs, data will be collected only from a limited number of vehicles because of the low CAV penetration rate, and not from non-connected vehicles; this introduces noise into the traffic data. This lack of data, combined with the data loss rate of the wireless CAV environment, makes it challenging to predict traffic behavior, which is dynamic over time. To address this challenge, it is important to develop and evaluate machine learning techniques that capture the stochastic variation in traffic patterns over time. This dissertation focuses on the development and evaluation of various connected and autonomous vehicle applications in an edge-centric TCPS, including adaptive queue prediction, traffic data prediction, dynamic routing and Cooperative Adaptive Cruise Control (CACC). An adaptive queue prediction algorithm for predicting real-time traffic queue status in an edge-centric TCPS is described in Chapter 2. Chapter 3 presents noise reduction models that reduce the noise in traffic data generated from BSMs at different CAV penetration rates, and evaluates a Long Short-Term Memory (LSTM) model for predicting traffic data from the resulting filtered data set. The development and evaluation of a dynamic routing application in a CV environment, aimed at reducing incident recovery time and increasing safety on a freeway, is detailed in Chapter 4. Chapter 5 details an evaluation framework for car-following models for CACC controller design, assessed in terms of vehicle dynamics and string stability to ensure user acceptance.
    The innovative methods presented in this dissertation were shown to provide improvements in transportation mobility. Because the dissertation focuses on an edge-centric TCPS deployment strategy, this research supports the real-world deployment of these applications. In addition, as multiple CAV applications presented in this dissertation can be supported simultaneously by the same TCPS, public investment is limited to infrastructure, such as roadside infrastructure and back-end computing infrastructure. These connected and autonomous vehicle applications can therefore provide significant economic benefits compared to their cost.
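    As a rough illustration of the Chapter 3 idea (the dissertation's actual noise reduction models and LSTM configuration are not reproduced here; the penetration rate, noise model and filter window below are invented), the sketch simulates noisy speed estimates at a low CAV penetration rate and applies a simple rolling-mean filter as a stand-in noise reducer:

```python
# Illustrative stand-in only: fewer connected probes make the BSM-derived
# speed estimate noisier; a smoothing filter recovers much of the signal.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
t = np.arange(600)                                   # one sample per second
true_speed = 60 + 15 * np.sin(2 * np.pi * t / 300)   # ground-truth speed (mph)

penetration = 0.10                                   # 10% of vehicles send BSMs
noise_std = 4 / np.sqrt(penetration)                 # fewer probes -> noisier mean
sampled = true_speed + rng.normal(0, noise_std, len(t))

filtered = pd.Series(sampled).rolling(window=10, min_periods=1).mean().to_numpy()

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print(f"raw RMSE:      {rmse(sampled, true_speed):.2f} mph")
print(f"filtered RMSE: {rmse(filtered, true_speed):.2f} mph")
# The filtered series would then feed a sequence model (e.g. an LSTM)
# for short-horizon traffic prediction.
```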

    Towards Real-time Remote Processing of Laparoscopic Video

    Laparoscopic surgery is a minimally invasive technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform procedures. However, the benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic system is the da Vinci Si robotic surgical vision system. Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend toward increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this large stream in real time on a bedside PC, in a single- or dual-node setup, may be challenging, and a high-performance computing (HPC) environment is not typically available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB (1080p) video frame must be processed by a server and returned within the time the frame is displayed, i.e., 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual Nvidia graphics processing units (GPUs) per node and the compute unified device architecture (CUDA) programming model. We developed three separate applications that run simultaneously: video acquisition, image processing, and video display. The image processing application allows several algorithms to run simultaneously on different cluster nodes, transferring images through the Message Passing Interface (MPI). Our segmentation and registration algorithms achieved acceleration factors of approximately 2 and 8 times, respectively. To achieve a higher frame rate, we also resized images to reduce the overall processing time. Consequently, using a high-speed network to access computing clusters with GPUs and running these algorithms in parallel can improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
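    The timing constraint can be checked with simple arithmetic. This is not the authors' code, only a back-of-the-envelope restatement of the figures quoted in the abstract:

```python
# Per-frame time budget at 30 fps and the network throughput needed to move
# uncompressed 1080p frames to a remote cluster and back.
FPS = 30
FRAME_MB = 11.9                     # uncompressed 1080p frame, as stated above

budget_ms = 1000 / FPS              # each frame must return within ~33.3 ms
stream_mb_s = FRAME_MB * FPS        # one-way stream rate

print(f"per-frame budget: {budget_ms:.1f} ms")
print(f"stream data rate: {stream_mb_s:.0f} MB/s")   # ≈ 357 MB/s, matching ~360 MB/s
# The round trip must fit inside the budget: network transfer (both ways) +
# GPU processing + any MPI hand-off between cluster nodes.
```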

    Intelligent Routing for Software-Defined Media Networks

    The multimedia market is an industry with ever-growing demand coupled with strict requirements. Be it in live streaming services or file content broadcast, multimedia providers need to deliver the best possible quality in order to meet their customers' requirements and gain or keep their trust. Multimedia traffic has a high impact on networks and, due to its nature, is sensitive to congestion and hardware failure. Multimedia providers therefore frequently resort to third-party software to monitor quality parameters. Skyline Communications' DataMiner® offers network monitoring, orchestration and automation capabilities across a broad range of applications and environments. These features are enabled by the emergence of Software-Defined Networking (SDN), which provides a global view of networks and the ability to change network properties through software applications. This contrasts with traditional networks, which are rigid, static and difficult to scale up. An application that greatly benefits from the global network view of SDN is routing optimization. Through routing optimization, a network can effectively deliver more traffic by efficiently balancing load across the different links and paths between the end points of a service, achieving increased data transport performance. This dissertation arises from the goal of optimizing DataMiner's routing mechanism by exploring the routing optimization possibilities enabled by its SDN-like architecture. Both link-cost-optimization-based and Machine Learning (ML) approaches are evaluated as possible solutions to Skyline's problem, and several experiments were conducted to compare them and to understand their impact on network performance while transporting multimedia streams.
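    As one hedged example of the link-cost-optimization family of approaches evaluated here (DataMiner's actual routing logic is not described in the abstract; the topology, capacities and cost function below are invented), each new stream can be routed over the path that minimizes a utilization-based link cost:

```python
# Sketch of utilization-aware link-cost routing, not DataMiner's algorithm.
import networkx as nx

def utilization_cost(used, capacity):
    """Convex cost that grows steeply as a link fills, steering new
    streams away from nearly saturated links."""
    return 1.0 / max(capacity - used, 1e-9)

G = nx.Graph()
for u, v, cap in [("A", "B", 10.0), ("B", "D", 10.0),
                  ("A", "C", 5.0), ("C", "D", 5.0)]:
    G.add_edge(u, v, capacity=cap, used=0.0)

def route_stream(G, src, dst, rate):
    for _, _, d in G.edges(data=True):
        d["weight"] = utilization_cost(d["used"], d["capacity"])
    path = nx.shortest_path(G, src, dst, weight="weight")
    for u, v in zip(path, path[1:]):          # commit the stream's bandwidth
        G[u][v]["used"] += rate
    return path

for i in range(4):
    print(f"stream {i}: {' -> '.join(route_stream(G, 'A', 'D', 3.0))}")
# The first streams take the high-capacity A-B-D path; as it fills,
# load is balanced onto A-C-D instead of congesting a single path.
```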

    Modeling operating system crash behavior through multifractal analysis, long range dependence and mining of memory usage patterns

    Software aging is a phenomenon in which the state of an operating system degrades over time due to transient errors. These transient errors can result in resource exhaustion and operating system hangs or crashes.
    Three different techniques from fractal geometry are studied, using the same datasets, for operating system crash modeling and prediction. The Holder exponent is an indicator of how chaotic a signal is. M5 Prime is a model-tree learning algorithm that allows prediction of a numerical quantity, such as time to crash, from current and previous data. The Hurst exponent measures the self-similarity and long-range dependence, or memory, of a process or data set, and has been used to predict river flows and network usage.
    For each of these techniques, a thorough investigation was conducted using crash, hang and nominal operating system monitoring data. All three approaches demonstrated a promising ability to identify software aging and predict upcoming operating system crashes. This thesis describes the experiments, reports the best candidate techniques and identifies topics for further investigation.
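    The Hurst exponent estimation mentioned above can be sketched with the classic rescaled-range (R/S) method; the thesis' datasets and exact estimator settings are not reproduced here, so the window sizes and test signals below are illustrative:

```python
# Rescaled-range (R/S) Hurst estimation: H is the slope of
# log(mean R/S) versus log(window size).
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())     # cumulative deviation from mean
            r = dev.max() - dev.min()         # range of the deviation
            s = w.std()                       # standard deviation of the window
            if s > 0:
                rs.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(3)
print(f"white noise: H ≈ {hurst_rs(rng.normal(size=4096)):.2f}")  # ~0.5, no memory
walk = np.cumsum(rng.normal(size=4096))                           # nonstationary
print(f"random walk: H ≈ {hurst_rs(walk):.2f}")                   # near 1: persistent
```

    A memory-usage series whose estimate sits well above 0.5 exhibits the long-range dependence that the thesis associates with software aging.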