Voice quality control in packet switched wireless networks
Wireless systems have begun the evolutionary migration from traditional circuit-switched
technology to packet-based technology. All next-generation wireless networks are now
specified with a packet-based Radio Access Network (RAN), which means that the
flaws of traditional packet-based networks now also apply to voice. These flaws degrade
speech quality through increased latency, jitter and packet loss. This thesis
provides the basis for a solution that facilitates voice quality control in a packet-switched
wireless network, based on an integrated approach: providing Quality of Service (QoS) control
across the Admission Control (AC) component and the Bearer or Service Flow component, and
mapping across these components to the appropriate QoS metrics at the
transport network.
The original contribution to the field of electrical and information engineering is
the proposal of a Quality of Service (QoS) framework and control mechanisms that enable the
transmission of quality voice over a packet-switched wireless network independently of
voice-specific signalling or media protocols. These proposals include: heuristic analysis in the
Admission Control (AC) component; the addition of a voice service class to the Admission Control
(AC) model; selection of a voice-specific Bearer or Service Flow; and the mapping thereof to a
voice-specific QoS queue or Service Flow at the transport or backhaul
network. All of these solutions aim to preserve quality
voice over a packet-switched wireless network as governed by network quality metrics such as
latency, jitter and packet loss.
This research delivers a comprehensive analysis of 4th Generation (4G) networks, such as
Worldwide Interoperability for Microwave Access (WiMAX) and Long Term Evolution (LTE),
as specified by the standards bodies, with a focus on the Quality of Service (QoS)
framework provided by each standard. Specific investigations target
Admission Control (AC) and the scheduling of physical resources over the air interface by the
Medium Access Control (MAC) and Radio Link Control (RLC) layers. Current research and
industry-led initiatives for provisioning quality voice, such as Circuit Switched Fallback (CSFB)
and the IP Multimedia Subsystem (IMS), are presented together with their associated advantages
and disadvantages.
The results and recommendations of this research form a multi-faceted solution. It
commences with the addition of heuristic analysis with Deep Packet Inspection (DPI),
proposed at the eNodeB or WiMAX Base Station (BS) level. An Admission Control (AC)
scheme tailored for voice, using the heuristic analysis as an input, is then created;
thereafter, an identified QoS Class Identifier (QCI) Bearer or Service Flow and a
transport-network QoS identifier for voice are triggered by the User Equipment (UE)
application or Bearer initiation procedures. The LTE Bearers and WiMAX Service Flows are
tested with the intention of recommending an LTE Bearer and a WiMAX Service Flow that
ensure compliance with the minimum required network quality metrics. Finally, the testing
of the invoking mechanisms is presented, mapping the QoS metrics across each of the
network components and thereby completing the solution.
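As a rough illustration of the voice-aware admission control idea sketched above, the following is a minimal, hypothetical Python sketch (not taken from the thesis): a DPI/heuristic classification flag selects between a reserved voice pool mapped to a QCI-1 style bearer and a best-effort pool mapped to a default QCI-9 bearer. All class names, thresholds and capacities are invented for illustration.

```python
# Illustrative sketch, NOT the thesis's AC scheme: a voice-aware admission
# control check that takes a heuristic/DPI classification as input and maps
# an admitted voice flow to a QCI-1 style bearer. Capacities are hypothetical.
from dataclasses import dataclass

@dataclass
class FlowRequest:
    is_voice: bool       # result of the heuristic/DPI classification
    required_kbps: int   # guaranteed bit rate requested by the flow

class VoiceAdmissionControl:
    VOICE_QCI = 1    # 3GPP QCI for conversational voice (GBR)
    DEFAULT_QCI = 9  # default best-effort bearer

    def __init__(self, capacity_kbps: int, voice_reserved_kbps: int):
        self.capacity_kbps = capacity_kbps
        self.voice_reserved_kbps = voice_reserved_kbps
        self.voice_used_kbps = 0
        self.other_used_kbps = 0

    def admit(self, flow: FlowRequest):
        """Return the QCI to assign, or None if the flow is rejected."""
        if flow.is_voice:
            # Voice draws from its reserved pool, protecting latency/jitter.
            if self.voice_used_kbps + flow.required_kbps <= self.voice_reserved_kbps:
                self.voice_used_kbps += flow.required_kbps
                return self.VOICE_QCI
            return None
        # Best-effort traffic may only use the unreserved share of capacity.
        best_effort_pool = self.capacity_kbps - self.voice_reserved_kbps
        if self.other_used_kbps + flow.required_kbps <= best_effort_pool:
            self.other_used_kbps += flow.required_kbps
            return self.DEFAULT_QCI
        return None

ac = VoiceAdmissionControl(capacity_kbps=10_000, voice_reserved_kbps=2_000)
print(ac.admit(FlowRequest(is_voice=True, required_kbps=64)))      # 1 (admitted as voice)
print(ac.admit(FlowRequest(is_voice=False, required_kbps=9_000)))  # None (exceeds BE pool)
```

The reserved-pool split mirrors the abstract's goal of keeping voice quality governed by its own QoS class rather than competing with best-effort traffic.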
Past and Future Operations Concepts of NASA's Earth Science Data and Information System
NASA committed to supporting the collection and distribution of Earth science data to study global change in the 1990s. A series of Earth science remote sensing satellites, the Earth Observing System (EOS), was to be the centerpiece. The concept for the science data system, the EOS Data and Information System (EOSDIS), created new challenges in the data processing of multiple satellite instrument observations for climate research and in the distribution of global-coverage remote sensor products to a large and growing science research community. EOSDIS was conceived to facilitate easy access to EOS science data for a wide, heterogeneous national and international community of users. EOSDIS was to provide a spectrum of services designed for research scientists working on NASA focus areas but open to the general public and international science community. EOSDIS would give researchers tools and assistance in searching, selecting and acquiring data, allowing them to focus on Earth science climate research rather than complex product generation. Goals were to promote the exchange of data and research results and to expedite development of new geophysical algorithms. The system architecture had to accommodate a diversity of data types, data acquisition and product generation operations, data access requirements and different centers of science discipline expertise. Steps were taken early to make EOSDIS flexible by distributing responsibility for basic services. Many of the system operations concept decisions made in the 1990s continue to this day. Once implemented, concepts such as the EOSDIS data model played a critical role in developing effective data services, now a hallmark of EOSDIS. In other cases, the EOSDIS architecture has evolved to enable more efficient operations, taking advantage of new technology and thereby shifting more resources toward data services and fewer toward operating and maintaining infrastructure.
In looking to the future, EOSDIS may be able to take advantage of commercial compute environments for infrastructure and further enable large-scale climate research. In this presentation, we discuss key EOSDIS operations concepts from the 1990s, how they were implemented and evolved in the architecture, and look at concepts and architectural challenges for EOSDIS operations utilizing commercial cloud services.
RoboChain: A Secure Data-Sharing Framework for Human-Robot Interaction
Robots have potential to revolutionize the way we interact with the world
around us. One of their largest potentials is in the domain of mobile health
where they can be used to facilitate clinical interventions. However, to
accomplish this, robots need to have access to our private data in order to
learn from these data and improve their interaction capabilities. Furthermore,
to enhance this learning process, the knowledge sharing among multiple robot
units is the natural step forward. However, to date, there is no
well-established framework which allows for such data sharing while preserving
the privacy of the users (e.g., the hospital patients). To this end, we
introduce RoboChain - the first learning framework for secure, decentralized
and computationally efficient data and model sharing among multiple robot units
installed at multiple sites (e.g., hospitals). RoboChain builds upon and
combines the latest advances in open data access and blockchain technologies,
as well as machine learning. We illustrate this framework using the example of
a clinical intervention conducted in a private network of hospitals.
Specifically, we lay down the system architecture that allows multiple robot
units, conducting the interventions at different hospitals, to perform
efficient learning without compromising the data privacy.
Comment: 7 pages, 6 figures
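The append-only, verifiable sharing idea in this abstract can be illustrated with a small, hypothetical Python sketch (this is not the RoboChain protocol): sites append hashed model-update records to a hash chain, so any peer can verify provenance and detect tampering without ever seeing raw patient data. All field names and the chain structure are invented for illustration.

```python
# Illustrative sketch, NOT RoboChain itself: an append-only hash chain of
# model-update records. Only model digests are shared, never raw patient data.
import hashlib
import json

def _digest(record: dict, prev_hash: str) -> str:
    # Deterministic serialization so every verifier computes the same hash.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class ModelUpdateChain:
    GENESIS = "0" * 64  # well-known hash preceding the first block

    def __init__(self):
        self.blocks = []  # list of (record, block_hash) tuples

    def append(self, site_id: str, model_hash: str) -> None:
        """Record that a site published a model update (by digest only)."""
        record = {"site": site_id, "model": model_hash}
        prev = self.blocks[-1][1] if self.blocks else self.GENESIS
        self.blocks.append((record, _digest(record, prev)))

    def verify(self) -> bool:
        """Recompute every link; any tampered record breaks the chain."""
        prev = self.GENESIS
        for record, block_hash in self.blocks:
            if _digest(record, prev) != block_hash:
                return False
            prev = block_hash
        return True
```

Because each block's hash covers the previous block's hash, altering any historical record invalidates every later link, which is the basic tamper-evidence property the abstract relies on.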
Dialectic tensions in the financial markets: a longitudinal study of pre- and post-crisis regulatory technology
This article presents the findings from a longitudinal research study on regulatory technology in the UK financial services industry. The financial crisis, with its serious corporate and mutual fund scandals, raised the profile of
compliance as governmental bodies and institutional and private investors introduced a ‘tsunami’ of financial regulations. Adopting a multi-level analysis, this study examines how regulatory technology was used by financial firms to meet their compliance obligations pre- and post-crisis. Empirical data collected over 12 years examine the deployment of
an investment management system in eight financial firms. Interviews with public regulatory bodies, financial
institutions and technology providers reveal a culture of compliance with increased transparency, surveillance and
accountability. Findings show that dialectic tensions arise as the pursuit of transparency, surveillance and
accountability in compliance mandates is simultaneously rationalized, facilitated and obscured by regulatory
technology. Responding to these challenges, regulatory bodies continue to impose revised compliance mandates on
financial firms, forcing them to adapt their financial technologies in an ever-changing multi-jurisdictional regulatory landscape.
Design and Implementation of a Measurement-Based Policy-Driven Resource Management Framework For Converged Networks
This paper presents the design and implementation of a measurement-based QoS
and resource management framework, CNQF (Converged Networks QoS Management
Framework). CNQF is designed to provide unified, scalable QoS control and
resource management through the use of a policy-based network management
paradigm. It achieves this via distributed functional entities that are
deployed to co-ordinate the resources of the transport network through
centralized policy-driven decisions supported by measurement-based control
architecture. We present the CNQF architecture, implementation of the prototype
and validation of various inbuilt QoS control mechanisms using real traffic
flows on a Linux-based experimental test bed.
Comment: in ICTACT Journal on Communication Technology: Special Issue on Next Generation Wireless Networks and Applications, June 2011, Volume 2, Issue 2, ISSN: 2229-6948 (Online)
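The policy-based management paradigm described in this abstract can be sketched in a few lines of hypothetical Python (this is not CNQF code): measurement inputs are matched against policy conditions, and the first matching rule's resource-management action fires. Rule names, measurement keys and actions are all invented for illustration.

```python
# Illustrative sketch, NOT the CNQF implementation: a minimal policy engine
# in the spirit of measurement-based, policy-driven resource management.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    name: str
    condition: Callable[[dict], bool]  # predicate over current measurements
    action: str                        # resource-management action to apply

def evaluate(rules: list, measurements: dict) -> str:
    """Return the action of the first rule whose condition matches."""
    for rule in rules:
        if rule.condition(measurements):
            return rule.action
    return "no-op"

# Hypothetical policies: protect voice first, then relieve congested links.
rules = [
    PolicyRule("voice-overload",
               lambda m: m["voice_loss_pct"] > 1.0, "raise-voice-priority"),
    PolicyRule("link-congested",
               lambda m: m["link_util"] > 0.9, "throttle-best-effort"),
]

action = evaluate(rules, {"voice_loss_pct": 2.5, "link_util": 0.4})
# First matching rule wins: "raise-voice-priority"
```

Separating the measured conditions from the actions they trigger is what lets a centralized policy decision point coordinate distributed enforcement entities, as the abstract describes.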
On the Delay of Geographical Caching Methods in Two-Tiered Heterogeneous Networks
We consider a hierarchical network that consists of mobile users, a
two-tiered cellular network (namely small cells and macro cells) and central
routers, each of which follows a Poisson point process (PPP). In this scenario,
small cells with limited-capacity backhaul are able to cache content under a
given set of randomized caching policies and storage constraints. Moreover, we
consider three different content popularity models, namely fixed content
popularity, distance-dependent and load-dependent, in order to model the
spatio-temporal behavior of users' content request patterns. We derive
expressions for the average delay of users assuming perfect knowledge of
content popularity distributions and randomized caching policies. Although the
trend of the average delay for all three content popularity models is
essentially identical, our results show that the overall performance of
cache-enabled heterogeneous networks can be substantially improved, especially
under the load-dependent content popularity model.
Comment: to be presented at IEEE SPAWC 2016, Edinburgh, UK