A live system for wavelet compression of high speed computer network measurements
Monitoring high-speed networks over long periods produces a high volume of
data, making storage of this information impractical. There is therefore a need
for an efficient method of data analysis and reduction in order to archive
and store the enormous amount of monitored traffic.
Satisfying this need is useful not only for administrators but also for researchers
who run their experiments on the monitored network. The researchers would like to
know how their experiments affect the network's behavior in terms of utilization,
delay, packet loss, data rate etc.
In this paper, a method of compressing computer network measurements while preserving
the quality of interesting signal characteristics is presented. Eight different
mother wavelets are compared against each other in order to examine which one offers
the best quality in the reconstructed signal. The proposed
wavelet compression algorithm is compared against the lossless compression tool
bzip2 in terms of compression ratio (C.R.). Finally, practical results are presented by
compressing sampled traffic recorded from a live network.
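Below is a minimal sketch, not taken from the paper, of the kind of mother-wavelet comparison the abstract describes: a 1-D measurement trace is compressed by keeping only the largest wavelet coefficients and the reconstruction is scored with PSNR. The wavelet names, keep ratio and synthetic trace are illustrative assumptions (Python with NumPy and PyWavelets).

```python
# Hedged sketch: threshold-based wavelet compression of a 1-D network
# measurement signal, comparing several mother wavelets by reconstruction PSNR.
import numpy as np
import pywt

def compress_reconstruct(signal, wavelet, keep_ratio=0.1):
    """Keep only the largest wavelet coefficients, then reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet)
    flat = np.concatenate([c.ravel() for c in coeffs])
    # Threshold chosen so roughly `keep_ratio` of coefficients survive (assumed value).
    thresh = np.quantile(np.abs(flat), 1.0 - keep_ratio)
    coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def psnr(original, reconstructed):
    mse = np.mean((original - reconstructed) ** 2)
    peak = np.max(np.abs(original))
    return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trace = np.cumsum(rng.normal(size=4096))      # stand-in for a utilization trace
    for name in ["haar", "db2", "db4", "sym4", "coif1"]:
        print(f"{name:6s} PSNR = {psnr(trace, compress_reconstruct(trace, name)):.2f} dB")
```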
A system for online compression of high-speed network measurements
Measuring various metrics of high speed and high capacity networks produces
a vast amount of information over a long period of time, making the conventional
storage of the data practically inefficient. Such metrics are derived from packet level
information and can be represented as time series signals. Thus, they can be analyzed
using signal analysis techniques. This paper looks at the Wavelet transform as a
method of analyzing and compressing measurement signals (such as delay, utilization,
data rate etc.) produced from high-speed networks. A live system can calculate these
measurements and then perform wavelet techniques to keep the significant information
and discard the small variations. An investigation into the choice of an appropriate
wavelet is presented along with results both from off-line and on-line experiments.
The quality of the decompressed signal is measured by the PSNR and a comparison of
compression performance is presented against the lossless tool bzip2.
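As a rough illustration of the compression-ratio comparison against bzip2 mentioned above, the hedged sketch below compares bzip2-compressed raw samples with bzip2-compressed quantized wavelet coefficients; the quantization step, keep ratio and synthetic delay trace are assumptions made for the example, not the system's actual parameters.

```python
# Hedged sketch of a compression-ratio comparison: lossless bzip2 on the raw
# samples vs. lossy storage of thresholded, coarsely quantized wavelet coefficients.
import bz2
import numpy as np
import pywt

def bzip2_ratio(signal):
    raw = signal.astype(np.float32).tobytes()
    return len(raw) / len(bz2.compress(raw))

def wavelet_ratio(signal, wavelet="haar", keep_ratio=0.05, step=0.01):
    raw_size = len(signal.astype(np.float32).tobytes())
    coeffs = pywt.wavedec(signal, wavelet)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep_ratio)
    flat[np.abs(flat) < thresh] = 0.0                     # discard small variations
    quantized = np.round(flat / step).astype(np.int32)    # coarse scalar quantizer (assumed step)
    stored = bz2.compress(quantized.tobytes())            # entropy-code the survivors
    return raw_size / len(stored)

rng = np.random.default_rng(1)
delay = np.abs(np.cumsum(rng.normal(size=8192)))          # synthetic delay trace
print("bzip2   C.R. ~", round(bzip2_ratio(delay), 2))
print("wavelet C.R. ~", round(wavelet_ratio(delay), 2))
```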
Applying wavelets for the controlled compression of communication network measurements
Monitoring and measuring various metrics of high-speed networks produces a vast amount of information over a long period of
time, making the storage of the metrics a serious issue. Previous work has suggested, among other approaches, stream-aware
compression algorithms, i.e. methodologies that try to organise the network packets in a compact way so that they occupy less
storage. However, these methods do not reduce the redundancy in the stream information. Lossy compression becomes an attractive
solution, as higher compression ratios can be achieved, provided that the important and significant elements of the original data are preserved.
This work proposes the use of a lossy wavelet compression mechanism that preserves crucial statistical and visual characteristics
of the examined computer network measurements and provides significant compression against the original file sizes.
To the best of our knowledge, we are the first to suggest and implement a wavelet analysis technique for compressing
computer network measurements. In this paper, wavelet analysis is used and compared against the Gzip and Bzip2 tools for
data rate and delay measurements. In addition, this paper compares eight different wavelets with respect to the
compression ratio and the preservation of the scaling behavior, the long-range dependence, the mean and standard deviation, and
the general reconstruction quality. The results show that the Haar wavelet provides higher peak signal-to-noise ratio (PSNR) values
and better overall results than other wavelets with more vanishing moments. Our proposed methodology has been implemented
on an online measurement platform and has been used to compress data traffic generated from a live network.
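The sketch below is an illustrative check, not the paper's evaluation code, of the statistics the abstract says must survive lossy compression: the mean, the standard deviation, and a rough scaling indicator taken from the slope of the log2 variance of detail coefficients across wavelet levels (an Abry-Veitch-style logscale diagram). The threshold, level count and synthetic signal are assumptions.

```python
# Illustrative preservation check: mean, std and a crude scaling-behavior slope
# computed before and after threshold-based wavelet compression.
import numpy as np
import pywt

def logscale_slope(signal, wavelet="haar", levels=8):
    """Slope of log2(variance of detail coeffs) vs. scale; relates to scaling behavior."""
    details = pywt.wavedec(signal, wavelet, level=levels)[1:]   # drop approximation
    log_var = [np.log2(np.var(d)) for d in reversed(details)]   # finest -> coarsest
    scales = np.arange(1, len(log_var) + 1)
    return np.polyfit(scales, log_var, 1)[0]

def summarize(name, signal):
    print(f"{name:12s} mean={signal.mean():8.3f} std={signal.std():8.3f} "
          f"slope={logscale_slope(signal):6.3f}")

rng = np.random.default_rng(2)
original = np.cumsum(rng.normal(size=4096))                     # synthetic measurement
coeffs = pywt.wavedec(original, "haar")
thresh = np.quantile(np.abs(np.concatenate(coeffs)), 0.9)       # keep ~10% of coefficients
reconstructed = pywt.waverec(
    [pywt.threshold(c, thresh, mode="hard") for c in coeffs], "haar"
)[: len(original)]
summarize("original", original)
summarize("compressed", reconstructed)
```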
Graph Signal Processing: Overview, Challenges and Applications
Research in Graph Signal Processing (GSP) aims to develop tools for
processing data defined on irregular graph domains. In this paper we first
provide an overview of core ideas in GSP and their connection to conventional
digital signal processing. We then summarize recent progress in developing
basic GSP tools, including methods for sampling, filtering and graph learning.
Next, we review progress in several application areas using GSP, including
processing and analysis of sensor network data, biological data, and
applications to image processing and machine learning. We finish by providing a
brief historical perspective to highlight how concepts recently developed in
GSP build on top of prior research in other areas. (To appear in Proceedings of the IEEE.)
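A small self-contained sketch of the core GSP idea summarized above: the graph Fourier basis comes from the eigendecomposition of the graph Laplacian, and low-pass filtering keeps the smooth spectral components. The six-node path graph, the example signal and the cutoff are illustrative assumptions.

```python
# Minimal graph-signal-processing sketch: graph Fourier transform and an
# ideal low-pass filter via the Laplacian eigendecomposition.
import numpy as np

# Adjacency matrix of a 6-node path graph (assumed toy example).
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

L = np.diag(A.sum(axis=1)) - A                # combinatorial graph Laplacian
eigvals, U = np.linalg.eigh(L)                # eigenvectors = graph Fourier basis

x = np.array([0.0, 1.0, 0.0, 3.0, 0.0, 1.0])  # a signal defined on the graph's nodes
x_hat = U.T @ x                               # graph Fourier transform
x_hat[eigvals > 1.0] = 0.0                    # ideal low-pass filter (cutoff assumed)
x_smooth = U @ x_hat                          # inverse transform back to the nodes

print("original:", np.round(x, 2))
print("low-pass:", np.round(x_smooth, 2))
```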
Compressing computer network measurements using embedded zerotree wavelets
Monitoring and measuring various metrics of high data
rate and high capacity networks produces a vast amount of
information over a long period of time. Characteristics such
as throughput and delay are derived from packet level information
and can be represented as time series signals. This
paper looks at the Embedded Zerotree Wavelet (EZW) algorithm, proposed
by Shapiro, in order to compress computer network delay
and throughput measurements while preserving the quality
of interesting features and controlling the quality level
of the compressed signal. The quality characteristics
examined are the mean square error (MSE), the preservation
of the standard deviation, the general visual quality
(PSNR) and the scaling behavior. Experimental results
are obtained to evaluate the behaviour of the algorithm on
delay and data rate signals. Finally, a comparison of compression
performance is presented against the lossless tool
bzip2.
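The sketch below illustrates only the successive-approximation (progressive refinement) idea underlying Shapiro's EZW coder, not the full zerotree algorithm: no zerotree symbols or entropy coding are included, and the pass count and synthetic signal are assumptions made for the example.

```python
# Greatly simplified sketch of EZW-style successive approximation: wavelet
# coefficients are refined against a threshold that halves on every pass, so
# decoding can stop early to trade quality for size. NOT the full zerotree coder.
import numpy as np
import pywt

def successive_approximation(signal, wavelet="haar", passes=6):
    coeffs = np.concatenate(pywt.wavedec(signal, wavelet))
    threshold = 2.0 ** np.floor(np.log2(np.max(np.abs(coeffs))))
    approx = np.zeros_like(coeffs)
    for _ in range(passes):
        # Coefficients whose residual is still significant at this threshold
        # are nudged toward their true value by 1.5 * threshold.
        newly = np.abs(coeffs - approx) >= threshold
        approx[newly] += np.sign(coeffs[newly] - approx[newly]) * 1.5 * threshold
        threshold /= 2.0                      # refine: halve the threshold each pass
    return approx, coeffs

rng = np.random.default_rng(3)
delay = np.cumsum(rng.normal(size=1024))      # synthetic delay signal
approx, exact = successive_approximation(delay)
print("max abs error after 6 passes:", np.max(np.abs(exact - approx)))
```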
SpatioTemporal Feature Integration and Model Fusion for Full Reference Video Quality Assessment
Perceptual video quality assessment models are either frame-based or
video-based, i.e., they apply spatiotemporal filtering or motion estimation to
capture temporal video distortions. Despite their good performance on video
quality databases, video-based approaches are time-consuming and harder to
efficiently deploy. To balance between high performance and computational
efficiency, Netflix developed the Video Multi-method Assessment Fusion (VMAF)
framework, which integrates multiple quality-aware features to predict video
quality. Nevertheless, this fusion framework does not fully exploit temporal
video quality measurements which are relevant to temporal video distortions. To
this end, we propose two improvements to the VMAF framework: SpatioTemporal
VMAF and Ensemble VMAF. Both algorithms exploit efficient temporal video
features which are fed into a single or multiple regression models. To train
our models, we designed a large subjective database and evaluated the proposed
models against state-of-the-art approaches. The compared algorithms will be
made available as part of the open-source package at
https://github.com/Netflix/vmaf.
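As a purely conceptual sketch of the fusion step described above, the code below trains a regressor that maps a few toy quality features to subjective scores. The feature names, toy data and the use of scikit-learn's SVR are assumptions for illustration and are not Netflix's actual features, training data or model (see https://github.com/Netflix/vmaf for the real implementation).

```python
# Conceptual sketch of VMAF-style model fusion: regress subjective scores
# from several quality-aware features using a support vector regressor.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# Toy feature matrix: rows are videos, columns are assumed quality features,
# e.g. [spatial_fidelity, temporal_difference, detail_loss].
features = rng.uniform(0.0, 1.0, size=(200, 3))
# Toy subjective scores loosely correlated with the features, plus noise.
mos = (20 + 60 * features[:, 0] - 15 * features[:, 1] + 10 * features[:, 2]
       + rng.normal(scale=3.0, size=200))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(features[:150], mos[:150])           # "train" the fusion model
predicted = model.predict(features[150:])      # predict held-out scores
print("mean abs error:", np.round(np.mean(np.abs(predicted - mos[150:])), 2))
```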