The Bits of Silence: Redundant Traffic in VoIP
Human conversation is characterized by brief pauses and turn-taking between the speakers. In the context of VoIP, this means that there are frequent periods where the microphone captures only background noise, or even silence whenever the microphone is muted. The bits transmitted during such silence periods introduce overhead in terms of data usage, energy consumption, and network infrastructure costs. In this paper, we shed light on these costs for VoIP applications. We systematically measure the performance of six popular mobile VoIP applications with a controlled human conversation and acoustic setup. Our analysis demonstrates that significant savings can indeed be achieved: the best-performing silence suppression technique is effective on 75% of silent pauses in the conversation in a quiet place, resulting in 2-5 times data savings and 50-90% lower energy consumption compared with the next best alternative. Even then, the effectiveness of silence suppression can be sensitive to the amount of background noise, the underlying speech codec, and the device being used. The codec characteristics and performance do not depend on the network type; however, silence suppression makes VoIP traffic as network friendly as VoLTE traffic. Our results provide new insights into VoIP performance and motivate further enhancements, such as performance-aware codec selection, that can significantly benefit a wide variety of voice-assisted applications, such as intelligent home assistants and other speech-codec-enabled IoT devices.
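The core mechanism the paper measures can be illustrated with a toy sketch of energy-based silence suppression. This is a generic voice-activity-detection scheme, not any of the six applications' actual implementations; the frame length, energy threshold, and hangover values below are illustrative assumptions.

```python
# Hypothetical sketch of energy-based silence suppression: frames whose
# mean amplitude stays below a threshold are withheld from transmission.
# All parameter values here are illustrative, not a real codec's settings.

def suppress_silence(samples, frame_len=160, threshold=500.0, hangover=3):
    """Return only the frames that would be transmitted.

    `hangover` keeps a few trailing frames after speech drops below the
    threshold, so word endings are not clipped.
    """
    sent = []
    quiet_streak = hangover  # start in the "silent" state
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(abs(s) for s in frame) / max(len(frame), 1)
        if energy >= threshold:
            quiet_streak = 0
        else:
            quiet_streak += 1
        if quiet_streak <= hangover:  # active speech or hangover period
            sent.append(frame)
    return sent

# Toy signal: 10 loud frames, 20 near-silent frames, 10 loud frames.
signal = [1000] * 1600 + [10] * 3200 + [1000] * 1600
kept = suppress_silence(signal)
```

On this toy signal, 17 of the 20 silent frames are dropped (the hangover keeps 3), which mirrors the kind of data saving the paper quantifies for real codecs.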
Pando: Personal Volunteer Computing in Browsers
The large penetration and continued growth in ownership of personal
electronic devices represents a freely available and largely untapped source of
computing power. To leverage those, we present Pando, a new volunteer computing
tool based on a declarative concurrent programming model and implemented using
JavaScript, WebRTC, and WebSockets. This tool enables a dynamically varying
number of failure-prone personal devices contributed by volunteers to
parallelize the application of a function on a stream of values, by using the
devices' browsers. We show that Pando can provide throughput improvements
compared to a single personal device, on a variety of compute-bound
applications including animation rendering and image processing. We also show
the flexibility of our approach by deploying Pando on personal devices
connected over a local network, on Grid5000, a French-wide computing grid in a
virtual private network, and seven PlanetLab nodes distributed in a wide area
network over Europe.
Comment: 14 pages, 12 figures, 2 tables
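Pando's programming model, applying a function to a stream of values on failure-prone volunteer devices, can be sketched as follows. This is our own simplified illustration in Python (Pando itself runs in browsers over WebRTC/WebSockets), and the retry policy and function names are assumptions.

```python
# Simplified sketch of Pando's model: map a pure function over a stream
# of values using unreliable workers, re-submitting values whose worker
# failed. Pando runs this in volunteers' browsers; threads stand in here.

import random
from concurrent.futures import ThreadPoolExecutor

def flaky_worker(f, x, fail_rate=0.3):
    """Simulates a volunteer device that sometimes leaves mid-task."""
    if random.random() < fail_rate:
        raise ConnectionError("volunteer disconnected")
    return x, f(x)

def pando_map(f, values, workers=4):
    """Apply f to every value, retrying values lost to worker failures."""
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pending = list(values)
        while pending:
            futures = {pool.submit(flaky_worker, f, x): x for x in pending}
            pending = []
            for fut in futures:
                try:
                    x, y = fut.result()
                    results[x] = y
                except ConnectionError:
                    pending.append(futures[fut])  # retry next round
    return [results[x] for x in values]

squares = pando_map(lambda x: x * x, list(range(16)))
```

Because the mapped function is pure, a lost task can simply be re-executed elsewhere, which is what makes a dynamically varying pool of volunteer devices tolerable.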
A Survey on Information Visualization for Network and Service Management
Network and service management encompasses a set of activities, methods, procedures, and tools whose ultimate goal is to guarantee the proper functioning of a networked system. Computational tools are essential to help network administrators in their daily tasks, and information visualization techniques are of great value in such a context. In essence, information visualization techniques associated with visual analytics aim at facilitating the tasks of network administrators in the process of monitoring and maintaining network health. This paper surveys the use of information visualization techniques as a tool to support the network and service management process. Through a Systematic Literature Review (SLR), we provide a historical overview and discuss the current state of the art in the field. We present a classification of 285 articles and papers from 1985 to 2013, according to an information visualization taxonomy as well as a network and service management taxonomy. Finally, we point out future research directions and opportunities regarding the use of information visualization in network and service management.
On Information-centric Resiliency and System-level Security in Constrained, Wireless Communication
The Internet of Things (IoT) interconnects many heterogeneous embedded devices either locally between each other, or globally with the Internet. These things are resource-constrained, e.g., powered by battery, and typically communicate via low-power and lossy wireless links. Communication needs to be secured and relies on crypto-operations that are often resource-intensive and in conflict with the device constraints. These challenging operational conditions on the cheapest hardware possible, the unreliable wireless transmission, and the need for protection against common threats of the inter-network, impose severe challenges to IoT networks. In this thesis, we advance the current state of the art in two dimensions.
Part I assesses Information-centric networking (ICN) for the IoT, a network paradigm that promises enhanced reliability for data retrieval in constrained edge networks. ICN lacks a lower layer definition, which, however, is the key to enable device sleep cycles and exclusive wireless media access. This part of the thesis designs and evaluates an effective media access strategy for ICN to reduce the energy consumption and wireless interference on constrained IoT nodes.
Part II examines the performance of hardware and software crypto-operations, executed on off-the-shelf IoT platforms. A novel system design enables the accessibility and auto-configuration of crypto-hardware through an operating system. One main focus is the generation of random numbers in the IoT. This part of the thesis further designs and evaluates Physical Unclonable Functions (PUFs) to provide novel randomness sources that generate highly unpredictable secrets, on low-cost devices that lack hardware-based security features.
This thesis takes a practical view on the constrained IoT and is accompanied by real-world implementations and measurements. We contribute open source software, automation tools, a simulator, and reproducible measurement results from real IoT deployments using off-the-shelf hardware. The large-scale experiments in an open access testbed provide a direct starting point for future research.
Predicting Software Performance with Divide-and-Learn
Predicting the performance of highly configurable software systems is the
foundation for performance testing and quality assurance. To that end, recent
work has been relying on machine/deep learning to model software performance.
However, a crucial yet unaddressed challenge is how to cater for the sparsity
inherited from the configuration landscape: the influence of configuration
options (features) and the distribution of data samples are highly sparse.
In this paper, we propose an approach based on the concept of
'divide-and-learn', dubbed DaL. The basic idea is that, to handle sample
sparsity, we divide the samples from the configuration landscape into distant
divisions, for each of which we build a regularized Deep Neural Network as the
local model to deal with the feature sparsity. A newly given configuration
would then be assigned to the right division's model for the final prediction.
Experimental results from eight real-world systems and five sets of training
data reveal that, compared with the state-of-the-art approaches, DaL performs
no worse than the best counterpart on 33 out of 40 cases (within which 26 cases
are significantly better), with considerable improvements in accuracy; DaL
requires fewer samples to reach the same or better accuracy, and incurs
acceptable training overhead. Practically, DaL also considerably improves
different global models when using them as the underlying local models, which
further strengthens its flexibility. To promote open science, all the data,
code, and supplementary figures of this work can be accessed at our repository:
https://github.com/ideas-labo/DaL.
Comment: This paper has been accepted by The ACM Joint European Software
Engineering Conference and Symposium on the Foundations of Software
Engineering (ESEC/FSE), 2023
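The divide-and-learn idea can be sketched with a deliberately simplified stand-in: DaL itself divides samples with a CART tree and trains a regularized deep network per division, whereas the sketch below uses a median split on one option and a nearest-neighbour predictor per division. Only the overall structure (divide, train locally, route at prediction time) reflects the paper.

```python
# Toy sketch of divide-and-learn: split configuration samples into
# divisions, fit one local model per division, and route each new
# configuration to its division's model. The median split and the
# nearest-neighbour local model are simplified stand-ins for DaL's
# CART-based division and regularized deep networks.

def train_divide_and_learn(configs, perfs, split_dim=0):
    """Split samples at the median of one option; fit local models."""
    pivot = sorted(c[split_dim] for c in configs)[len(configs) // 2]
    left = [(c, p) for c, p in zip(configs, perfs) if c[split_dim] < pivot]
    right = [(c, p) for c, p in zip(configs, perfs) if c[split_dim] >= pivot]

    def nn_model(division):
        # Local "model": predict the performance of the closest sample.
        def predict(x):
            _, p = min(division, key=lambda cp:
                       sum((a - b) ** 2 for a, b in zip(cp[0], x)))
            return p
        return predict

    models = {False: nn_model(left), True: nn_model(right)}
    # Route a new configuration to the model of its division.
    return lambda x: models[x[split_dim] >= pivot](x)

# Usage: configurations are option vectors, performance is e.g. latency.
configs = [(0, 1), (0, 3), (4, 1), (4, 3)]
perfs = [10.0, 12.0, 50.0, 55.0]
predict = train_divide_and_learn(configs, perfs)
```

The routing step is what lets each local model specialize on a dense, homogeneous slice of an otherwise sparse configuration landscape.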
The Four-C Framework for High Capacity Ultra-Low Latency in 5G Networks: A Review
Network latency will be a critical performance metric for the Fifth Generation (5G) networks
expected to be fully rolled out in 2020 through the IMT-2020 project. The multi-user multiple-input
multiple-output (MU-MIMO) technology is a key enabler for the 5G massive connectivity criterion,
especially from the massive densification perspective. Naturally, 5G MU-MIMO will
face a daunting task in achieving an end-to-end 1 ms ultra-low-latency budget if traditional
network set-up criteria are strictly adhered to. Moreover, 5G latency will have added dimensions
of scalability and flexibility compared with previously deployed technologies. The scalability
dimension caters for meeting rapid demand as new applications evolve, while the flexibility
dimension complements it by investigating novel non-stacked protocol architectures. The goal of
this review paper is to present an ultra-low-latency reduction framework for 5G communications
that considers flexibility and scalability. The Four (4) C framework, consisting of cost, complexity,
cross-layer and computing, is analyzed and discussed; it covers several emerging technologies,
namely software-defined networking (SDN), network function virtualization (NFV) and fog
networking. This review will contribute significantly towards the future implementation of
flexible, high-capacity, ultra-low-latency 5G communications.