Application-specific workload shaping in resource-constrained media players
Ph.D. (Doctor of Philosophy)
A Semantic-Based Middleware for Multimedia Collaborative Applications
The growth of the Internet and the performance increase of desktop computers have enabled large-scale distributed multimedia applications. These applications are expected to grow in demand and services, and their traffic volume will come to dominate. Real-time delivery, scalability, and heterogeneity are some of the requirements that have motivated a revision of the traditional Internet services, operating system structures, and the software systems supporting application development. This work proposes a Java-based lightweight middleware for the development of large-scale multimedia applications. The middleware offers four services for multimedia applications. First, it provides two scalable lightweight protocols for floor control. One follows a centralized model that integrates easily with centralized resources such as a shared tool; the other is a distributed protocol targeted at distributed resources such as audio. Scalability is achieved by periodically multicasting a heartbeat that conveys state information, which clients use to request the resource via temporary TCP connections. Second, it supports intra- and inter-stream synchronization algorithms and policies. We introduce the concept of the virtual observer, which perceives the session as if it were in the same room as a sender. We avoid the need for globally synchronized clocks by introducing the concept of the user's multimedia presence, which defines a new manner of combining streams coming from multiple sites; it includes a novel algorithm for the estimation and removal of clock skew. Third, it supports event-driven asynchronous message reception, quality-of-service measures, and traffic rate control. Finally, the middleware provides support for data sharing via a resilient and scalable protocol for the transmission of images that can change dynamically in content and size.
The effectiveness of the middleware components is shown with the implementation of Odust, a prototypical sharing-tool application built on top of the middleware.
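The abstract mentions a novel clock-skew estimation and removal algorithm but does not detail it. As background only (this is the standard linear-regression approach to the same problem, not the thesis's algorithm), skew between two unsynchronized clocks can be estimated by fitting a line to sender/receiver timestamp pairs and then divided out:

```python
# Illustrative sketch, not the thesis's algorithm: least-squares estimation
# of relative clock skew from (send_time, receive_time) pairs.

def estimate_skew(send_ts, recv_ts):
    """Return (skew, offset) such that recv ~= (1 + skew) * send + offset."""
    n = len(send_ts)
    mean_s = sum(send_ts) / n
    mean_r = sum(recv_ts) / n
    cov = sum((s - mean_s) * (r - mean_r) for s, r in zip(send_ts, recv_ts))
    var = sum((s - mean_s) ** 2 for s in send_ts)
    slope = cov / var                     # slope = 1 + skew
    offset = mean_r - slope * mean_s
    return slope - 1.0, offset

def remove_skew(recv_ts, skew, offset):
    """Map receiver timestamps back onto the sender's clock."""
    return [(r - offset) / (1.0 + skew) for r in recv_ts]
```

In practice such estimates are computed over a sliding window of packets, since queueing delay adds noise to each timestamp pair.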
A Model for Predicting the Performance of IP Videoconferencing
With the incorporation of free desktop videoconferencing (DVC) software on the majority of the world's PCs over recent years, there has inevitably been considerable interest in using DVC over the Internet. The growing popularity of DVC increases the need for multimedia quality assessment. However, the task of predicting the perceived multimedia quality over Internet Protocol (IP) networks is complicated by the fact that the audio and video streams are susceptible to unique impairments due to the unpredictable nature of IP networks, different types of task scenarios, different levels of complexity, and other related factors. To date, no standard consensus on defining IP media Quality of Service (QoS) has been established. The thesis addresses this problem by investigating a new approach to assessing the quality of audio, video, and the overall audiovisual experience as perceived in low-cost DVC systems.
The main aim of the thesis is to investigate current methods used to assess perceived IP media quality, and then to propose a model that predicts the quality of the audiovisual experience from prevailing network parameters.
This thesis investigates the effects of various traffic conditions, such as packet loss, jitter, and delay, and other factors that may influence end-user acceptance when low-cost DVC is used over the Internet. It also investigates the interaction effects between the audio and video media and the issues involving lip synchronisation error. The thesis provides empirical evidence that the subjective mean opinion score (MOS) of the perceived multimedia quality is unaffected by lip synchronisation error in low-cost DVC systems.
The data-gathering approach advocated in this thesis involves both field and laboratory trials, enabling comparisons between classroom-based experiments and real-world environments and providing real-world confirmation of the bench tests. The subjective test method was employed since it has proven more robust and suitable for these research studies than objective testing techniques.
The MOS results, and the number of observations obtained, have enabled a set of criteria to be established that can be used to determine the acceptable QoS for given network conditions and task scenarios. Based upon these comprehensive findings, the final contribution of the thesis is the proposal of a new adaptive architecture method intended to enable the performance of an IP-based DVC session to be predicted for a given network condition.
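To illustrate the general shape of such a predictive model (the coefficients below are invented for demonstration and are not taken from the thesis, which derives its own from subjective MOS data), a predictor maps prevailing network parameters to an expected MOS:

```python
# Toy illustration of a network-parameter-to-MOS predictor.
# All coefficients are hypothetical, for demonstration only.

def predict_mos(loss_pct, jitter_ms, delay_ms):
    """Predict a MOS in [1.0, 4.5] from network conditions."""
    mos = 4.5                                      # best achievable score
    mos -= 0.35 * loss_pct                         # hypothetical loss penalty
    mos -= 0.010 * jitter_ms                       # hypothetical jitter penalty
    mos -= 0.002 * max(0.0, delay_ms - 150.0)      # penalty beyond ~150 ms delay
    return max(1.0, min(4.5, mos))                 # clamp to the MOS scale
```

A real model of this kind is fitted against subjective scores collected under controlled network impairments, which is exactly the role of the field and laboratory trials described above.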
Secure VoIP Performance Measurement
This project presents a mechanism for the instrumentation of secure VoIP calls. The experiments were run under different network conditions and security systems, with VoIP services such as Google Talk, Express Talk, and Skype under test. The project allowed analysis of the voice quality of the VoIP services based on the Mean Opinion Score (MOS) values generated by Perceptual Evaluation of Speech Quality (PESQ). The audio streams produced were subjected to end-to-end delay, jitter, packet loss, and extra processing in the networking hardware and end devices due to Internetworking Layer or Transport Layer security implementations. The MOS values were mapped to Perceptual Evaluation of Speech Quality for wideband (PESQ-WB) scores. From these PESQ-WB scores, graphs of the mean of 10 runs and box-and-whisker plots for each parameter were drawn, and the graphs were analysed to deduce the quality of each VoIP service. The E-model was used to predict network readiness, and the Common Vulnerability Scoring System (CVSS) was used to predict network vulnerabilities. The project also provided a mechanism to measure the throughput for each test case. The overall performance of each VoIP service was determined by PESQ-WB scores, CVSS scores, and throughput. The experiment demonstrated the relationship among VoIP performance, VoIP security, and VoIP service type. It also suggested that, when compared to an unsecured IPIP tunnel, Internetworking Layer security such as IPSec ESP or Transport Layer security such as OpenVPN TLS would improve VoIP security by reducing the vulnerabilities of the media part of the VoIP signal. Moreover, adding a security layer has little impact on VoIP voice quality.
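The E-model mentioned above summarises a call as a scalar rating factor R; ITU-T G.107 defines the standard mapping from R to an estimated MOS, which can be written directly:

```python
# ITU-T G.107 mapping from the E-model R-factor to estimated MOS.

def r_to_mos(r):
    """Convert an E-model R-factor to a MOS estimate per G.107."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6
```

For reference, the E-model's default R of 93.2 (an unimpaired narrowband call) maps to a MOS of roughly 4.4, which is why PESQ scores near that value are read as "toll quality".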
Wireless multimedia sensor networks, security and key management
Wireless Multimedia Sensor Networks (WMSNs) have emerged and shifted the focus from typical scalar wireless sensor networks to networks with multimedia devices that are capable of retrieving video, audio, and images as well as scalar sensor data. WMSNs are able to deliver multimedia content thanks to the availability of inexpensive CMOS cameras and microphones, coupled with significant progress in distributed signal processing and multimedia source-coding techniques.
These characteristics, challenges, and requirements of designing WMSNs open many research issues and future directions for developing protocols, algorithms, architectures, devices, and testbeds that maximize network lifetime while satisfying the quality-of-service requirements of the various applications. In this dissertation, we outline the design challenges of WMSNs and give a comprehensive discussion of the proposed architectures and protocols for the different layers of the WMSN communication protocol stack, along with their open research issues. We also compare existing WMSN hardware and testbeds based on their specifications and features, with a complete classification based on their functionalities and capabilities. In addition, we introduce a complete classification for content security and contextual privacy in WSNs. Our focus in this field, after conducting a complete survey of WMSNs and event privacy in sensor networks, and after gaining the necessary experience programming sensor motes such as MicaZ and Stargate and running simulations in NS2, is to: design protocols that meet the challenging requirements of WMSNs, targeting especially the routing and MAC layers; secure the wireless exchange of data against external attacks using appropriate security algorithms for key management and secure routing; defend the network from internal attacks using a lightweight intrusion detection technique; protect contextual information from being leaked to unauthorized parties by adapting an event-unobservability scheme; and evaluate the performance efficiency and energy consumption of employing these security algorithms over WMSNs.
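The abstract does not specify the key-management scheme. A common baseline in the sensor-network literature (shown here as background, not necessarily the dissertation's design) is random key pre-distribution in the style of Eschenauer and Gligor, where each mote receives a random key ring from a shared pool and neighbours form secure links only if their rings intersect:

```python
import random

def predistribute(pool_size, ring_size, num_nodes, seed=0):
    """Give each node a random key ring drawn from a common key pool
    (Eschenauer-Gligor style random key pre-distribution)."""
    rng = random.Random(seed)
    pool = list(range(pool_size))
    return [set(rng.sample(pool, ring_size)) for _ in range(num_nodes)]

def shared_key(ring_a, ring_b):
    """Two neighbours can form a secure link iff their rings intersect;
    return the agreed key id, or None if they share no key."""
    common = ring_a & ring_b
    return min(common) if common else None
```

The pool and ring sizes trade memory per mote against the probability that two neighbours share a key, a trade-off that matters on memory-constrained platforms such as MicaZ.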
Network environment for testing peer-to-peer streaming applications
Peer-to-Peer (P2P) streaming applications are an emerging trend in content distribution. A reliable network environment was needed to test their capabilities and performance limits; building that environment is the focus of this thesis. Furthermore, some experimental tests were performed in the environment with an application implemented in the Department of Communications Engineering (DCE) at Tampere University of Technology.
For practical reasons, the testing environment was assembled in a teaching laboratory at DCE premises. The environment was built using a centralized architecture, in which a Linux emulation node, the WANemulator, generates realistic packet loss, delay, and jitter in the network. After an extensive literature survey, an extension to Iproute2's Tc utility, NetEm, was chosen to handle network-link emulation at the WANemulator. The peers run inside VirtualBox images, which are used on the Linux computers so that the laboratory remains suitable for teaching. In addition to network emulation, Linux traffic-control mechanisms were used both at the WANemulator and in VirtualBox's virtual machines to limit the traffic rates of the peers. Used together, emulation and rate limitation resemble the statistical behaviour of the Internet quite closely.
Virtualization overhead limited the maximum number of Virtual Machines (VMs) on each laboratory computer to two, and a peculiar feature in VirtualBox's bridge implementation reduced the network capabilities of the VMs. However, the bottleneck in the environment is the centralized architecture, in which all traffic is routed through the WANemulator. The environment was shown to work reliably with the chosen streamed content and 160 peers, but larger overlays might be achievable by tuning the WANemulator's parameters. A distributed emulation should also be possible with the environment, but it was not tested.
The results from the experimental tests performed with the P2P streaming application proved the application to be functional under mobile network conditions. The designed network environment works reliably, enables reasonable scalability, and, compared to an ordinary local area network environment, provides a better means of emulating the networking characteristics of the Internet.
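The exact NetEm configuration is not listed in the abstract. As a sketch of the kind of commands such a WANemulator node issues (the interface name and the numeric values are examples; the syntax follows the documented tc-netem and tc-tbf usage), the commands can be built programmatically:

```python
# Builds tc command strings of the kind a WANemulator-style node would run.
# Interface names and parameter values are illustrative examples.

def netem_command(iface, delay_ms, jitter_ms, loss_pct):
    """Emulate a WAN link: mean delay, jitter, and random packet loss."""
    return (f"tc qdisc add dev {iface} root netem "
            f"delay {delay_ms}ms {jitter_ms}ms loss {loss_pct}%")

def rate_limit_command(iface, rate_kbit):
    """Token-bucket filter to cap a peer's traffic rate."""
    return (f"tc qdisc add dev {iface} root tbf "
            f"rate {rate_kbit}kbit burst 32kbit latency 400ms")
```

Running these requires root on the emulation node; combining a netem qdisc on the WANemulator with tbf rate limits at the VMs reproduces both the delay characteristics and the bandwidth asymmetry of residential links.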
QoSME: QoS Management Environment
Distributed multimedia applications are sensitive to the Quality of Service (QoS) delivered by underlying communication networks. For example, a video conference can be very sensitive to the effective network throughput, and network jitter can greatly disrupt a speech stream. The main question this thesis addresses is how to adapt multimedia applications to the QoS delivered by the network, and vice versa. Such adaptation is especially important because current networks are unable to assure the QoS required by applications, and applications are usually unprepared for periods of QoS degradation. This work introduces the QoS Management Environment (QoSME), which provides mechanisms for such adaptation. The main contributions of this thesis are:

Language-level abstractions for QoS management. The Quality Assurance Language (QuAL) in QoSME enables the specification of how to allocate, monitor, analyze, and adapt to delivered QoS. Applications can express in QuAL their QoS needs and how to handle potential violations.

Automatic QoS monitoring. QoSME automatically generates the instrumentation to monitor QoS when applications use QuAL constructs. The QoSME runtime scrutinizes interactions among applications, transport protocols, and Operating Systems (OS), and collects statistics on the delivered QoS in QoS Management Information Bases (MIBs).

Integration of QoS and standard network management. A Simple Network Management Protocol (SNMP) agent embedded in QoSME provides QoS MIB access to SNMP managers, which can use this feature to monitor end-to-end QoS delivery and adapt network resource allocation and operations accordingly.

A partial prototype of QoSME has been released for public access. It runs on SunOS 4.3 and Solaris 2.3 and supports communication over the ATM adaptation layer, ST-II, UDP/IP, TCP/IP, and Unix internal protocols.
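As an illustration of the kind of per-stream statistic such monitoring instrumentation maintains (this is the standard RTP interarrival-jitter estimator from RFC 3550, not QoSME's own code), a runtime can smooth the variation in packet transit times:

```python
# RFC 3550 interarrival jitter: J += (|D| - J) / 16, where D is the
# change in one-way transit time between consecutive packets.

class JitterMonitor:
    """Maintains a smoothed jitter estimate for one media stream."""
    def __init__(self):
        self.jitter = 0.0
        self._last_transit = None

    def packet(self, send_ts, recv_ts):
        """Record one packet's timestamps; return the updated estimate."""
        transit = recv_ts - send_ts
        if self._last_transit is not None:
            d = abs(transit - self._last_transit)
            self.jitter += (d - self.jitter) / 16.0
        self._last_transit = transit
        return self.jitter
```

An agent exposing such a counter through a MIB lets an SNMP manager watch end-to-end jitter without touching the application itself, which is the integration the thesis describes.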
Distributed virtual environment scalability and security
Distributed virtual environments (DVEs) have been an active area of research and engineering for more than 20 years. The most widely deployed DVEs are network games such as Quake, Halo, and World of Warcraft (WoW), with millions of users and billions of dollars in annual revenue. Deployed DVEs remain expensive centralized implementations despite significant research outlining ways to distribute DVE workloads.
This dissertation shows that previous DVE research evaluations are inconsistent with the needs of deployed DVEs. Assumptions about avatar movement and proximity - fundamental scale factors - do not match WoW's workload, and likely not the workload of other deployed DVEs. Alternate workload models are explored and preliminary conclusions presented. Using realistic workloads, it is shown that a fully decentralized DVE cannot be deployed to today's consumers, regardless of its overhead.
Residential broadband speeds are improving, and this limitation will eventually disappear. When it does, appropriate security mechanisms will be a fundamental requirement for technology adoption.
A trusted auditing system (“Carbon”) is presented which has good security, scalability, and resource characteristics for decentralized DVEs. When performing exhaustive auditing, Carbon adds 27% network overhead to a decentralized DVE with a WoW-like workload. This resource consumption can be reduced significantly, depending upon the DVE’s risk tolerance.
Finally, the Pairwise Random Protocol (PRP) is described. PRP enables adversaries to fairly resolve probabilistic activities, an ability missing from most decentralized DVE security proposals.
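PRP itself is specified in the dissertation; the underlying primitive (two mutually distrusting peers jointly producing an unbiased random value) is commonly built from a hash commitment. The sketch below shows that generic commit-reveal construction as background, not PRP's actual protocol:

```python
import hashlib
import secrets

def commit(value: bytes, nonce: bytes) -> bytes:
    """Hash commitment: binds a party to `value` without revealing it."""
    return hashlib.sha256(nonce + value).digest()

def fair_random() -> int:
    """Commit-reveal coin flip between two peers, simulated locally.
    A commits to its contribution first, B replies in the clear, then A
    opens the commitment; the XOR of the two contributions cannot be
    biased by either side acting alone."""
    a, nonce_a = secrets.token_bytes(4), secrets.token_bytes(16)
    c_a = commit(a, nonce_a)           # step 1: A -> B: commitment to a
    b = secrets.token_bytes(4)         # step 2: B -> A: b in the clear
    assert commit(a, nonce_a) == c_a   # step 3: B verifies A's opening
    return int.from_bytes(bytes(x ^ y for x, y in zip(a, b)), "big")
```

In a DVE this kind of exchange lets two adversarial clients resolve, say, a hit-probability roll without trusting each other or a central server.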
Thus, this dissertation's contribution is to address two of the obstacles to deploying research on decentralized DVE architectures: first, the lack of evidence that research results apply to existing DVEs; and second, the lack of security systems combining appropriate security guarantees with acceptable overhead.
Scalable download protocols
Scalable on-demand content delivery systems, designed to effectively handle increasing request rates, typically use service aggregation or content replication techniques. Service aggregation relies on one-to-many communication techniques, such as multicast, to efficiently deliver content from a single sender to multiple receivers. With replication, multiple geographically distributed replicas of the service or content share the load of processing client requests and enable delivery from a nearby server.

Previous scalable protocols for downloading large, popular files from a single server include batching and cyclic multicast. Analytic lower bounds developed in this thesis show that neither of these protocols consistently yields performance close to optimal. New hybrid protocols are proposed that achieve within 20% of the optimal delay in homogeneous systems, as well as within 25% of the optimal maximum client delay in all heterogeneous scenarios considered.

In systems utilizing both service aggregation and replication, well-designed policies determining which replica serves each request must balance the objectives of achieving high locality of service and high efficiency of service aggregation. By comparing classes of policies, using both analysis and simulations, this thesis shows that there are significant performance advantages in using current system state information (rather than only proximities and average loads) and in deferring selection decisions when possible. Most of these performance gains can be achieved using only "local" (rather than global) request information.

Finally, this thesis proposes adaptations of previously proposed peer-assisted download techniques to support a streaming (rather than download) service, enabling playback to begin well before the entire media file is received. These protocols split each file into pieces, which can be downloaded from multiple sources, including other clients downloading the same file.
Using simulations, a candidate protocol is presented and evaluated. The protocol includes both a piece selection technique that effectively mediates the conflict between achieving high piece diversity and the in-order requirements of media file playback, and a simple on-line rule for deciding when playback can safely commence.
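The abstract does not give the piece-selection rule itself. One common way to mediate the diversity-versus-in-order conflict (shown purely as an illustration, not as the thesis's technique) is window-limited rarest-first selection: choose pieces only within a window ahead of the playback point, and among those prefer the rarest:

```python
# Window-limited rarest-first piece selection (illustrative sketch).
# `have[i]` is True once piece i is downloaded; `rarity[i]` counts how
# many known sources hold piece i (smaller = rarer).

def select_piece(have, playback_pos, rarity, window=16):
    """Return the index of the next piece to request, or None if the
    window ahead of the playback point is fully downloaded."""
    candidates = [i for i in range(playback_pos, playback_pos + window)
                  if i < len(have) and not have[i]]
    if not candidates:
        return None
    # Rarest first within the window; earliest index breaks ties,
    # nudging the download back toward in-order delivery.
    return min(candidates, key=lambda i: (rarity[i], i))
```

A small window behaves like pure in-order download (safe for playback, poor for swarm diversity); a large window approaches pure rarest-first; tuning the window is the mediation the paragraph above describes.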