Research and developments of Dirac video codec
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University.
In digital video compression, apart from storage, the successful transmission of compressed video data over bandwidth-limited, error-prone channels is another important issue. To enable a video codec for broadcasting applications, the corresponding coding tools (e.g. error-resilient coding, rate control) must be implemented. These are normally non-normative parts of a video codec, and hence their specifications are not defined in the standard. Dirac is no exception: the original codec is optimized for storage purposes only, so several non-normative encoding tools are still required before it can be used in other types of application.
Under the research title "Research and Developments of the Dirac Video Codec", phase I of the project focuses mainly on error-resilient transmission over a noisy channel. The error-resilient coding method used here is a simple, low-complexity scheme that provides error-resilient transmission of the compressed bitstream of the Dirac video encoder over a packet-erasure wired network. The scheme combines source and channel coding: error-resilient source coding is achieved by data partitioning in the wavelet-transformed domain, and channel coding is achieved through either a Rate-Compatible Punctured Convolutional (RCPC) code or a Turbo Code (TC), using unequal error protection between the header plus motion vectors (MV) and the coefficient data. The scheme is designed mainly for the packet-erasure channel, i.e. it targets Internet broadcasting applications.
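The unequal-error-protection arrangement can be sketched in a few lines: the header-plus-MV partition receives a stronger (lower-rate) channel code than the coefficient data, so a given erasure pattern is far less likely to destroy the bits the decoder cannot recover without. The function names and the rates 1/3 and 2/3 below are illustrative assumptions, not the actual RCPC/TC rates used in the thesis.

```python
# Illustrative UEP sketch: header + motion vectors get a stronger
# (lower-rate) channel code than wavelet coefficient data. The rates
# 1/3 and 2/3 are hypothetical, not the thesis's actual RCPC/TC rates.

def protected_size(payload_bits, code_rate):
    """Bits on the wire after a rate k/n channel code (code_rate = k/n)."""
    return int(round(payload_bits / code_rate))

def uep_packetize(header_mv_bits, data_bits, strong_rate=1/3, weak_rate=2/3):
    """More redundancy for the critical partition, less for the data."""
    return {
        "header_mv": protected_size(header_mv_bits, strong_rate),
        "data": protected_size(data_bits, weak_rate),
    }

sizes = uep_packetize(header_mv_bits=2_000, data_bits=30_000)
```

The critical partition triples in size (rate 1/3), while the much larger coefficient partition incurs only 50% overhead (rate 2/3), keeping the total redundancy modest.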
For a bandwidth-limited channel, however, the encoder must also limit the number of bits it generates according to the available bandwidth, in addition to applying error-resilient coding. In the second phase of the project, a rate control algorithm is therefore presented. The algorithm is based on a Quality Factor (QF) optimization method in which the QF of the encoded video adapts so that the average bitrate remains constant over each Group of Pictures (GOP). A relation between the bitrate R and the QF, called the Rate-QF (R-QF) model, is derived in order to estimate the optimum QF of the current frame for a given target bitrate R.
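The fit-and-invert idea behind an R-QF model can be illustrated with a toy version. The exponential form R(QF) = a·exp(b·QF) and every number below are assumptions made for the sketch; the thesis derives its own model from encoder statistics.

```python
import math

# Toy R-QF model: assume R(QF) = a * exp(b * QF), fitted from two
# previously encoded (QF, bitrate) observations. The exponential form
# and all numbers are illustrative; the thesis derives its own model.

def fit_r_qf(qf1, r1, qf2, r2):
    """Fit a and b of R = a*exp(b*QF) through two observations."""
    b = (math.log(r2) - math.log(r1)) / (qf2 - qf1)
    a = r1 / math.exp(b * qf1)
    return a, b

def optimum_qf(a, b, target_rate):
    """Invert the model: the QF expected to hit the target bitrate."""
    return math.log(target_rate / a) / b

a, b = fit_r_qf(qf1=5, r1=1000.0, qf2=9, r2=4000.0)
qf = optimum_qf(a, b, target_rate=2000.0)   # QF chosen for the next frame
```

Refitting a and b as frames are encoded lets the QF drift with scene complexity while the per-GOP average bitrate stays on target.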
In some applications, such as video conferencing, real-time encoding and decoding with minimum delay is crucial, but the ability to encode and decode in real time is largely determined by the complexity of the encoder and decoder. Motion estimation is the most time-consuming stage of the encoder, so reducing its complexity brings the codec one step closer to real-time operation. As a partial contribution toward real-time applications, the final phase of the research designs and implements a fast Motion Estimation (ME) strategy. It combines a modified adaptive search with a semi-hierarchical approach to motion estimation. The same strategy was implemented in both Dirac and H.264 in order to investigate its performance on different codecs. Together with this fast ME strategy, a method called partial cost function calculation is presented to further reduce the computational load of the cost function. The calculation is based on a pre-defined set of patterns chosen to give maximum coverage over the whole block.
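The partial cost function idea can be sketched as a SAD evaluated only on a sampled subset of block positions. The checkerboard pattern below is one illustrative choice of pattern with even coverage, not necessarily a pattern used in the thesis.

```python
# Partial cost function sketch: evaluate the SAD over a pre-defined
# subsample of block positions instead of every pixel. The checkerboard
# pattern is an illustrative choice spread evenly over the block.

def checkerboard(size):
    """Every other position, covering the whole block uniformly."""
    return [(y, x) for y in range(size) for x in range(size)
            if (x + y) % 2 == 0]

def partial_sad(cur, ref, pattern):
    """Sum of absolute differences over the sampled positions only."""
    return sum(abs(cur[y][x] - ref[y][x]) for (y, x) in pattern)

N = 4
cur = [[y * N + x for x in range(N)] for y in range(N)]   # toy block
ref = [[0] * N for _ in range(N)]                          # toy reference
cost = partial_sad(cur, ref, checkerboard(N))
```

With half the positions sampled, each motion-vector candidate costs roughly half the arithmetic of a full SAD, which multiplies across the thousands of candidates a search visits.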
In summary, this research has contributed to the error-resilient transmission of compressed bitstreams of the Dirac video encoder over a bandwidth-limited, error-prone channel. In addition, the final phase of the research has partially contributed toward real-time application of the Dirac video codec by implementing a fast motion estimation strategy together with the partial cost function calculation.
BBC R&D and Brunel University
A rate control algorithm for scalable video coding
This thesis proposes a rate control (RC) algorithm for H.264/scalable video coding
(SVC) specially designed for real-time variable bit rate (VBR) applications with
buffer constraints. The VBR controller assumes that consecutive pictures within the
same scene often exhibit similar degrees of complexity, and aims to prevent unnecessary
quantization parameter (QP) fluctuations by allowing for just an incremental
variation of QP with respect to that of the previous picture. In order to adapt this
idea to H.264/SVC, a rate controller is located at each dependency layer (spatial or
coarse grain scalability) so that each rate controller is responsible for determining
the proper QP increment. In fact, one of the main contributions of the thesis is a QP increment regression model based on Gaussian processes, derived from observations on a discrete set of representative encoding states. Two real-time application scenarios were simulated to assess the
performance of the VBR controller with respect to two well-known RC methods.
The experimental results show that our proposal achieves an excellent performance
in terms of quality consistency, buffer control, adjustment to the target bit rate, and computational complexity.
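The incremental-QP principle behind the controller can be sketched with a simple threshold rule. The fixed dead-zone rule and all numbers below are stand-ins for illustration; the thesis determines the increment with the Gaussian-process regression model described above.

```python
# Incremental-QP sketch: QP moves by at most one step per picture,
# preventing abrupt quality fluctuations. The fixed dead-zone rule is a
# stand-in; the thesis uses a Gaussian-process regression model instead.

def next_qp(prev_qp, produced_bits, target_bits,
            step=1, dead_zone=0.1, qp_min=0, qp_max=51):
    """Nudge QP only when the bit budget deviates beyond the dead zone."""
    error = (produced_bits - target_bits) / target_bits
    if error > dead_zone:        # overspent -> coarser quantization
        qp = prev_qp + step
    elif error < -dead_zone:     # underspent -> finer quantization
        qp = prev_qp - step
    else:                        # close enough: keep QP for consistency
        qp = prev_qp
    return max(qp_min, min(qp_max, qp))
```

In the H.264/SVC setting, one such controller would run per dependency layer, each deciding its own increment.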
Moreover, unlike typical RC algorithms for SVC that only satisfy the hypothetical
reference decoder (HRD) constraints for the highest temporal resolution sub-stream
of each dependency layer, the proposed VBR controller also delivers HRD-compliant
sub-streams with lower temporal resolutions. To this end, a novel approach that uses a set of buffers (one per temporal-resolution sub-stream) within a dependency layer has been built on top of the RC algorithm. The proposed approach aims to simultaneously control the buffer levels to prevent overflow and underflow, while maximizing the reconstructed video quality of the corresponding sub-streams. This in-layer multi-buffer framework for rate-controlled SVC does not require additional dependency layers to deliver different HRD-compliant temporal resolutions for a given video source, thus improving the coding efficiency compared to typical SVC encoder configurations since, for the same target bit rate, fewer layers are encoded.
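A much-simplified picture of the in-layer multi-buffer idea: one leaky-bucket buffer per temporal-resolution sub-stream, all checked against the same sequence of coded pictures. The per-interval timing, rates, and capacity below are illustrative assumptions, not the HRD parameters of the actual controller.

```python
# One buffer per temporal-resolution sub-stream; sub-stream k carries
# every frame whose temporal_id <= k. Each interval the channel delivers
# `rate` bits into buffer k (capped at capacity, i.e. stuffing), and
# frames belonging to the sub-stream are drained on arrival.
# All numbers here are illustrative, not real HRD parameters.

def simulate(frames, rates, capacity):
    """frames: (bits, temporal_id) per base-frame interval.
    rates: channel bits arriving per interval, one rate per sub-stream.
    Returns final buffer levels; raises RuntimeError on underflow."""
    levels = [0.0] * len(rates)
    for bits, tid in frames:
        for k, rate in enumerate(rates):
            levels[k] = min(levels[k] + rate, capacity)  # channel fill
            if tid <= k:                                 # frame in sub-stream k
                if bits > levels[k]:
                    raise RuntimeError(f"underflow in sub-stream {k}")
                levels[k] -= bits                        # decoding removes it
    return levels

# Two sub-streams: half frame rate (tid 0 only) and full rate (tid 0 and 1).
final = simulate([(800, 0), (300, 1), (800, 0), (300, 1)],
                 rates=[900, 1200], capacity=4000)
```

The point of checking every buffer on every picture is that a bit allocation acceptable for the full-rate sub-stream can still starve the lower-rate one, which is exactly the case single-buffer RC misses.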
Network coding for sensor networks, distributed storage and video streaming
Classical store-and-forward routing has been, and will continue to be, the most important routing architecture in many modern packet-switched communication networks. In a packet-switched network, data is sent in the form of discrete packets that traverse hop-by-hop from a source to a destination. At each intermediate hop, a router stores and examines the packets it receives, then forwards them to the next hop until they reach the correct destinations according to some pre-defined routing algorithm. Importantly, the intermediate routers do not modify the contents of the packets but simply store and forward them. In contrast, a generalized approach to routing called Network Coding (NC) allows the intermediate routers to modify and combine packets from different sources and destinations in ways that increase the overall throughput. The core idea of NC, allowing the intermediate nodes in a network to perform data processing, has a wide range of applications well beyond its initial application to routing, impacting disciplines from distributed data storage and security to energy-efficient sensor networks and Internet media streaming. To that end, this dissertation aims to develop the theories and applications of NC via four main thrusts:
1) Energy efficient NC techniques for sensor networks,
2) Novel NC techniques and protocols for Internet video streaming,
3) Stochastic data replenishment for large scale NC-based distributed storage
systems,
4) Real-world implementation of NC-based distributed video streaming system.
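The throughput benefit of letting intermediate nodes combine packets is usually introduced with a two-receiver XOR example, sketched here in a few lines (packet contents are arbitrary placeholders):

```python
# Classic "butterfly" illustration of the network coding gain: a relay
# XORs two packets so that a single coded transmission serves two
# receivers, each of which already holds one of the originals.

def xor_packets(p: bytes, q: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(p, q))

pkt_a = b"AAAAA"                           # packet from source A
pkt_b = b"BBBBB"                           # packet from source B

coded = xor_packets(pkt_a, pkt_b)          # relay sends one coded packet
assert xor_packets(coded, pkt_a) == pkt_b  # receiver 1 (has A) decodes B
assert xor_packets(coded, pkt_b) == pkt_a  # receiver 2 (has B) decodes A
```

A store-and-forward relay would need two transmissions over the shared link to deliver both packets; the coded relay needs one.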
In thrust one, we describe a novel cross-sensor coding technique that combines network topology and coding techniques to maximize the lifetime of a sensor network, addressing the uneven energy consumption problem in data-gathering sensor networks, where nodes closer to the sink tend to consume more energy than farther nodes. Our approach is based on the following observation about sensor networks using On-Off Keying and digital transmission: transmitting a bit "1" consumes much more energy than a bit "0". Our proposed coding technique exploits this difference to reduce communication energy by limiting the number of "1" bits in the output codeword (a low-weight codeword), and uses an NC-based cross-sensor coding technique to equalize the communication energy among the nodes. This cross-sensor coding scheme can significantly extend the network lifetime compared with traditional (binary) coding by solving the energy-consumption unfairness problem. Theoretical and experimental results confirm that transmission energy can be reduced substantially (e.g., by a factor of 15) and that the unequal energy consumption among nodes can be practically eliminated.
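The low-weight codeword idea can be sketched directly: source symbols are mapped to the codewords with the fewest 1s, trading a slightly longer codeword for less "on" time on the radio. The parameters k = 3, n = 4 below are illustrative, not the thesis's code parameters.

```python
# Low-weight codeword sketch for On-Off Keying: sending "1" costs much
# more energy than "0", so k-bit symbols map to the n-bit codewords with
# the fewest 1s. k = 3, n = 4 are illustrative parameters only.

def low_weight_codebook(k, n):
    """Assign each of the 2**k symbols an n-bit codeword, taking
    codewords in order of increasing Hamming weight."""
    words = sorted(range(2 ** n), key=lambda w: bin(w).count("1"))
    return words[: 2 ** k]

book = low_weight_codebook(3, 4)
worst = max(bin(w).count("1") for w in book)
# plain 3-bit symbols can require three 1s; this codebook needs at most two
```

The cross-sensor (NC-based) step of the thesis goes further, combining codewords across sensors so the residual "1" cost is also shared fairly among nodes.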
In thrust two, we describe a rate-distortion-aware hierarchical NC technique and transport protocol for Internet video streaming. We begin by proposing an NC-based multi-sender streaming framework that reduces overall storage, eliminates the complexity of sender synchronization, and enables TCP streaming. Furthermore, we propose a Hierarchical Network Coding (HNC) technique that facilitates scalable video streaming to combat bandwidth fluctuation on the Internet. This HNC technique enables the receiver to recover the important data gracefully in the presence of limited bandwidth, which causes an increase in decoding delay. Simulations demonstrate that, under certain scenarios, our proposed NC techniques can yield bandwidth savings of up to 60% over traditional schemes.
In thrust three, we present a theory of NC-based data replenishment to automate the process of data maintenance for large-scale distributed storage systems. The data replenishment mechanism is the core of these systems, promising to reduce coordination complexity and increase performance scalability. It automates the process of maintaining a sufficient level of data redundancy to ensure data availability in the presence of peer departures and failures. The dynamics of peers entering and leaving the network are modeled as a stochastic process. We propose a novel analytical time-backward technique to bound the expected time (the longer, the better) for a piece of data to remain in P2P systems. Theoretical and simulation results are in agreement, indicating that our proposed data replenishment via random linear network coding (RLNC) outperforms other popular strategies that employ repetition and channel coding techniques. Specifically, we show that the expected time for a piece of data to remain in a P2P system is exponential in the number of peers used to store the data for the RLNC-based strategy, while it is quadratic for the other strategies. Furthermore, the time-backward technique can be applied to problems in other disciplines, such as gene population modeling in theoretical biology.
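Since RLNC underpins both the replenishment analysis here and the streaming system of thrust four, a minimal encode/decode sketch over GF(2) may help. Deployed systems typically use GF(2^8) coefficients for better rank properties; the packet values and helper names below are illustrative.

```python
import random

# Minimal RLNC sketch over GF(2): each coded packet is a random XOR of
# the originals, tagged with its coefficient vector (a bitmask). Any n
# packets with linearly independent coefficients recover the originals.

def encode(originals, rng):
    """Return one coded packet as (coefficient bitmask, XOR payload)."""
    coeffs = rng.randrange(1, 1 << len(originals))   # nonzero coefficients
    payload = 0
    for i, pkt in enumerate(originals):
        if coeffs >> i & 1:
            payload ^= pkt
    return coeffs, payload

def decode(packets, n):
    """Gauss-Jordan elimination over GF(2); None while rank < n."""
    pivots = {}                               # pivot bit -> (coeffs, payload)
    for coeffs, payload in packets:
        for bit in sorted(pivots):            # forward-eliminate known pivots
            if coeffs & bit:
                rc, rp = pivots[bit]
                coeffs ^= rc
                payload ^= rp
        if coeffs:
            pivots[coeffs & -coeffs] = (coeffs, payload)
    if len(pivots) < n:
        return None
    out = [0] * n
    for bit in sorted(pivots, reverse=True):  # back-substitute, high to low
        coeffs, payload = pivots[bit]
        for other in sorted(pivots, reverse=True):
            if other != bit and coeffs & other:
                oc, op = pivots[other]
                coeffs ^= oc
                payload ^= op
        pivots[bit] = (coeffs, payload)
        out[bit.bit_length() - 1] = payload
    return out

rng = random.Random(7)
originals = [3, 5, 9, 17]                     # toy "packets" as integers
coded = [encode(originals, rng) for _ in range(8)]  # extras improve rank odds
recovered = decode(coded, len(originals))     # the originals, or None
```

The replenishment property follows from the same algebra: a new peer can be seeded with a fresh random combination of surviving coded pieces, with no coordination about which piece it receives.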
Finally, in thrust four, we present the architecture, design, and experimental results of an actual NC-based distributed video streaming system. We first implement a random linear network coding (RLNC) library and show the feasibility of using RLNC in P2P video streaming applications. We then design, implement, and analyze RESnc, a resilient P2P video storage and streaming system over the Internet using network coding. RESnc increases streaming throughput and data resiliency against peer departures and failures by exploiting peer diversity. These improvements are based on three architectural elements:
1) an RLNC scheme that breaks a video stream into multiple smaller pieces, codes them, and disperses them among peers in the network in such a way as to maximize the probability of recovering the original video under peer departures and failures;
2) a scalable mechanism for automating the data replenishment process using RLNC to maintain a sufficient level of redundancy for video stored in the system;
3) a path-diversity streaming protocol that lets a client simultaneously stream a video from multiple peers with minimal coordination.
Experimental results demonstrate that our system adapts well to bandwidth fluctuation and provides significant playback quality improvements and bandwidth savings.
Women in Science 2017
Ever since its 1967 start, SURF has been a cornerstone of Smith's science education. Women in Science 2017 summarizes research done by Smith College's SURF Program participants during the summer of 2017. 151 students participated in SURF (144 hosted on campus and nearby field sites), supervised by 58 faculty mentor-advisors drawn from the Clark Science Center and connected to its eighteen science, mathematics, and engineering departments and programs and associated centers and units. At summer's end, SURF participants summarized their research experiences for this publication.
Satellite Communications
This study is motivated by the need to give the reader a broad view of the developments, key concepts, and technologies related to the evolution of the information society, with a focus on wireless communications and geoinformation technologies and their role in the environment. To give perspective, it aims to assist people active in industry, the public sector, and the Earth sciences by providing a base for their continued work and thinking.
On Personal Storage Systems: Architecture and Design Considerations
Increasingly, end-users demand larger amounts of online storage space to store their personal information. This challenge motivates researchers to devise novel personal storage infrastructures. In this thesis, we focus on two popular personal storage architectures: Personal Clouds (centralized) and social storage systems (decentralized). In our view, despite their growing popularity among users and researchers, there still remain some critical aspects to address regarding these systems.
In Part I of this dissertation, we examine various aspects of the internal operation and performance of several Personal Clouds. Concretely, we first contribute by unveiling the internal structure of a global-scale Personal Cloud, namely UbuntuOne (U1). Moreover, we provide a back-end analysis of U1 that includes a study of the storage workload, user behavior, and the performance of the U1 metadata store. We also suggest improvements to U1 (storage optimizations, user behavior detection and security) that can also benefit similar systems.
From an external viewpoint, we actively measure several Personal Clouds through their REST APIs to characterize their QoS in terms of transfer speed, variability and failure rate. We also demonstrate that combining open APIs and free accounts may lead to abuse by malicious parties, which motivates us to propose countermeasures to limit the impact of abusive applications in this scenario.
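The external measurement methodology can be sketched as repeated timed downloads of a test object. The `fetch` callable below is a placeholder for the actual REST call, which is not specified here; everything in the sketch is an illustrative assumption.

```python
import statistics
import time

# Sketch of the external QoS measurement: time repeated downloads of a
# test object and summarize transfer speed and variability. `fetch` is a
# placeholder for the real REST call, which is not specified here.

def measure_speed(fetch, trials=5):
    """Return (mean, population stdev) of observed speeds in bytes/s."""
    speeds = []
    for _ in range(trials):
        start = time.monotonic()
        payload = fetch()                              # one object download
        elapsed = max(time.monotonic() - start, 1e-9)  # guard division
        speeds.append(len(payload) / elapsed)
    return statistics.mean(speeds), statistics.pstdev(speeds)

# Stand-in fetch; a real run would download through the service's API.
mean_bps, stdev_bps = measure_speed(lambda: b"x" * 1_000_000)
```

Injecting the transport call as a parameter keeps the timing logic identical across services, so measured differences reflect the services rather than the harness.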
In Part II of this thesis, we study the storage QoS of social storage systems in terms of data availability, load balancing and transfer times. Our main interest is to understand the way intrinsic phenomena, such as the dynamics of users and the structure of their social relationships, limit the storage QoS of these systems, as well as to research novel mechanisms to ameliorate these limitations.
Finally, we design and evaluate a hybrid architecture that enhances the QoS achieved by a social storage system by combining user resources and cloud storage, letting users strike the right balance between user control and QoS.