Towards Scalable Design of Future Wireless Networks
Wireless operators face an ever-growing challenge to meet the throughput and processing requirements of billions of devices that are getting connected. In current wireless networks, such as LTE and WiFi, these requirements are addressed by provisioning more resources: spectrum, transmitters, and baseband processors. However, this simple add-on approach to scale system performance is expensive and often results in resource underutilization. What are, then, the ways to efficiently scale the throughput and operational efficiency of these wireless networks? To answer this question, this thesis explores several potential designs: utilizing unlicensed spectrum to augment the bandwidth of a licensed network; coordinating transmitters to increase system throughput; and finally, centralizing wireless processing to reduce computing costs.
First, we propose a solution that allows LTE, a licensed wireless standard, to co-exist with WiFi in the unlicensed spectrum. The proposed solution bridges the incompatibility between the fixed access of LTE, and the random access of WiFi, through channel reservation. It achieves a fair LTE-WiFi co-existence despite the transmission gaps and unequal frame durations. Second, we consider a system where different MIMO transmitters coordinate to transmit data of multiple users.
We present an adaptive design of the channel feedback protocol that mitigates interference resulting from the imperfect channel information. Finally, we consider a Cloud-RAN architecture where a datacenter or a cloud resource processes wireless frames. We introduce a tree-based design for real-time transport of baseband samples and provide its end-to-end schedulability
and capacity analysis. We also present a processing framework that combines real-time scheduling with fine-grained parallelism. The framework reduces processing times by migrating parallelizable tasks to idle compute resources, and thus, decreases the processing deadline-misses at no additional cost.
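The migration idea can be sketched as follows (a minimal, hypothetical model with invented names, not the thesis's actual framework): a parallelizable task is split across all currently idle compute resources, so its completion time shrinks with the spare capacity available, and a deadline that serial execution would miss can be met at no extra cost.

```python
def assign(task_work, workers, now):
    """Split a parallelizable task across all currently idle workers.

    workers: list of times at which each worker next becomes free.
    Returns (finish_time, workers).  With k idle workers the task's
    work is divided k ways, so it completes k times sooner than it
    would serially.
    """
    idle = [i for i, free_at in enumerate(workers) if free_at <= now]
    if not idle:
        # No spare capacity: queue the whole task on the
        # earliest-free worker and run it serially.
        i = min(range(len(workers)), key=workers.__getitem__)
        workers[i] = workers[i] + task_work
        return workers[i], workers
    share = task_work / len(idle)     # fine-grained parallel split
    for i in idle:
        workers[i] = now + share
    return now + share, workers

# A 12-unit task arriving at t=0 with 3 idle workers finishes at t=4
# and meets a deadline of 5; run serially it would finish at t=12.
finish, _ = assign(12.0, [0.0, 0.0, 0.0], now=0.0)
print(finish <= 5.0)
```

The same dispatcher falls back to serial execution when no worker is idle, which is when deadline misses would reappear.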
We implement and evaluate the above solutions using software-radio platforms and off-the-shelf radios, and confirm their applicability in real-world settings.
PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133358/1/gkchai_1.pd
Quality aspects of Internet telephony
Internet telephony has had a tremendous impact on how people communicate.
Many now maintain contact using some form of Internet telephony.
Therefore the motivation for this work has been to address the quality aspects
of real-world Internet telephony for both fixed and wireless telecommunication.
The focus has been on the quality aspects of voice communication,
since poor quality often leads to user dissatisfaction. The scope of the work
has been broad in order to address the main factors within IP-based voice
communication.
The first four chapters of this dissertation constitute the background
material. The first chapter outlines where Internet telephony is deployed
today. It also motivates the topics and techniques used in this research.
The second chapter provides the background on Internet telephony, including
signalling, speech coding and voice internetworking. The third chapter
focuses solely on quality measures for packetised voice systems and finally
the fourth chapter is devoted to the history of voice research.
The appendix of this dissertation constitutes the research contributions.
It includes an examination of the access network, focusing on how calls are
multiplexed in wired and wireless systems. Subsequently in the wireless
case, we consider how to handover calls from 802.11 networks to the cellular
infrastructure. We then consider the Internet backbone where most of our
work is devoted to measurements specifically for Internet telephony. The
applications of these measurements have been estimating telephony arrival
processes, measuring call quality, and quantifying the trend in Internet telephony
quality over several years. We also consider the end systems, since
they are responsible for reconstructing a voice stream given loss and delay
constraints. Finally we estimate voice quality using the ITU-T PESQ measure
and the packet loss process.
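PESQ itself compares a degraded audio signal against a reference recording, but the link between a packet-loss process and perceived call quality can be illustrated with the simpler ITU-T G.107 E-model, which maps a loss ratio to an R-factor and then to a Mean Opinion Score. This is a companion technique, not the thesis's method; the coefficients below are the standard G.107 defaults.

```python
def r_factor(loss_pct, ie=0.0, bpl=10.0, r0=93.2):
    """ITU-T G.107 E-model: effective equipment impairment for
    random packet loss (loss_pct in percent, bpl = codec loss
    robustness), subtracted from the default basic rating R0."""
    ie_eff = ie + (95.0 - ie) * loss_pct / (loss_pct + bpl)
    return r0 - ie_eff

def mos(r):
    """Map an R-factor to a Mean Opinion Score (G.107 Annex B)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

# Zero loss yields a near-toll-quality MOS; 5% loss degrades it
# noticeably even before delay impairments are counted.
print(round(mos(r_factor(0.0)), 2), round(mos(r_factor(5.0)), 2))
```

Unlike PESQ, this model needs no audio at all, which is why loss-process measurements alone already constrain achievable quality.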
The main contribution of this work is a systematic examination of Internet
telephony. We describe several methods to enable adaptable solutions
for maintaining consistent voice quality. We have also found that relatively
small technical changes can lead to substantial user quality improvements.
A second contribution of this work is a suite of software tools designed to
ascertain voice quality in IP networks. Some of these tools are in use within
commercial systems today.
Traffic and task allocation in networks and the cloud
Communication services such as telephony, broadband and TV are increasingly migrating into Internet Protocol (IP) based networks because of the consolidation of telephone and data networks. Meanwhile, the increasingly wide application of Cloud Computing enables the accommodation of tens of thousands of applications from the general public or enterprise users, which make use of Cloud services on demand through IP networks such as the Internet. Real-Time services over IP (RTIP) have also become increasingly significant due to the convergence of network services, and the real-time needs of the Internet of Things (IoT) will strengthen this trend. Such real-time applications have strict Quality of Service (QoS) constraints, posing a major challenge for IP networks. The Cognitive Packet Network (CPN) has been designed as a QoS-driven protocol that addresses user-oriented QoS demands by adaptively routing packets based on online sensing and measurement. Thus in this thesis we first describe our design for a novel "Real-Time (RT) traffic over CPN" protocol, which uses QoS goals that match the needs of voice packet delivery in the presence of other background traffic under varied traffic conditions; we present its experimental evaluation via measurements of key QoS metrics such as packet delay, delay variation (jitter) and packet loss ratio.
Pursuing our investigation of packet routing in the Internet, we then propose a novel Big Data and Machine Learning approach for real-time, Internet-scale route optimisation based on Quality of Service using an overlay network, and evaluate its performance. Based on data sampled each minute over a large number of source-destination pairs, we observe that intercontinental Internet Protocol (IP) paths are far from optimal with respect to metrics such as end-to-end round-trip delay.
On the other hand, our machine learning based overlay routing scheme exploits large-scale data collected from communicating node pairs to select overlay paths, while using IP between neighbouring overlay nodes. Measurements from a week-long experiment with several million data points show substantially better end-to-end QoS than is observed with pure IP routing. Pursuing the machine learning approach, we then address the challenging problem of dispatching incoming tasks to servers in Cloud systems so as to offer the best QoS and reliable job execution. An experimental system (the Task Allocation Platform) that we have developed is presented and used to compare several task allocation schemes, including a model-driven algorithm, a reinforcement learning based scheme, and a "sensible" allocation algorithm that assigns tasks to sub-systems that are observed to provide lower response times. These schemes are compared via measurements both among themselves and against a standard round-robin scheduler, with two architectures (homogeneous and heterogeneous hosts with different processing capacities), and the conditions under which the different schemes offer better QoS are discussed. Since Cloud systems include both locally based servers at user premises and remote servers and multiple Clouds reachable over the Internet, we also describe a smart distributed system that combines local and remote Cloud facilities, allocating tasks dynamically to the service that offers the best overall QoS, and including a routing overlay that minimises network delay for data transfer between Clouds. Internet-scale experiments that we report demonstrate the effectiveness of our approach in adaptively distributing workload across multiple Clouds.
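The "sensible" allocation idea can be sketched as follows (illustrative only; the class name and smoothing constant are assumptions, not the Task Allocation Platform's actual code): each host's measured response time is tracked with an exponential moving average, and every new task is dispatched to the host whose current estimate is lowest.

```python
class SensibleAllocator:
    """Dispatch each task to the host with the lowest observed
    (exponentially smoothed) response time."""

    def __init__(self, hosts, alpha=0.2):
        self.alpha = alpha                       # smoothing weight
        self.estimate = {h: 0.0 for h in hosts}  # smoothed resp. time

    def pick(self):
        # Host currently believed to respond fastest.
        return min(self.estimate, key=self.estimate.get)

    def observe(self, host, response_time):
        # Fold a measured completion into the running estimate.
        old = self.estimate[host]
        self.estimate[host] = (1 - self.alpha) * old + self.alpha * response_time

alloc = SensibleAllocator(["local", "remote"])
alloc.observe("local", 120.0)   # measured response times in ms
alloc.observe("remote", 40.0)
print(alloc.pick())             # the faster host is chosen next
```

Unlike round-robin, this scheme automatically shifts load away from a host whose response time degrades, at the cost of needing fresh measurements to avoid herding onto a stale "best" host.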
Best effort measurement based congestion control
On Resilient Control for Secure Connected Vehicles: A Hybrid Systems Approach
According to the Internet of Things Forecast conducted by Ericsson, there will be around 29 billion connected devices by 2022. This technological revolution enables the concept of Cyber-Physical Systems (CPSs), which will transform many applications, including the power grid, transportation, smart buildings, and manufacturing. Manufacturers and institutions are relying on technologies related to CPSs to improve the efficiency and performance of their products and services. However, the higher the number of connected devices, the higher the exposure to cybersecurity threats. In the case of CPSs, successful cyber-attacks can potentially hamper the economy and endanger human lives. Therefore, it is of paramount importance to develop and adopt resilient technologies that can complement existing security tools to make CPSs more resilient to cyber-attacks.
By exploiting the intrinsically present physical characteristics of CPSs, this dissertation employs dynamical and control systems theory to improve the CPS resiliency to cyber-attacks. In particular, we consider CPSs as Networked Control Systems (NCSs), which are control systems where plant and controller share sensing and actuating information through networks. This dissertation proposes novel design procedures that maximize the resiliency of NCSs to network imperfections (i.e., sampling, packet dropping, and network delays) and denial of service (DoS) attacks.
We model CPSs from a general point of view to generate design procedures that have a vast spectrum of applicability, while creating computationally affordable algorithms capable of real-time performance. Indeed, the findings of this research aspire to be easily applied to several CPS applications, e.g., the power grid, transportation systems, and remote surgery. However, this dissertation focuses on applying its theoretical outcomes to connected and automated vehicle (CAV) systems, where vehicles are capable of sharing information via a wireless communication network.
In the first part of the dissertation, we propose a set of constructive, LMI-based Lyapunov tools for analysing the resiliency of NCSs, and we propose a design approach that maximizes this resiliency.
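As a canonical illustration of this class of conditions (a textbook quadratic-stability test under assumed notation, not the dissertation's specific tools): for a closed loop evolving as $\dot{x} = A x$ between network updates, stability is certified by a linear matrix inequality in a Lyapunov matrix $P$,

```latex
\exists\, P = P^{\top} \succ 0 \quad \text{such that} \quad
A^{\top} P + P A \prec 0,
```

since then $V(x) = x^{\top} P x$ decreases along trajectories, as $\dot{V}(x) = x^{\top}(A^{\top} P + P A)\,x < 0$. Maximizing resiliency then amounts to optimizing, subject to such LMIs, quantities such as the longest tolerable inter-transmission interval or DoS duty cycle.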
In the second part of the thesis, we deal with the design of DoS-resilient control systems for connected vehicle applications. In particular, we focus on Cooperative Adaptive Cruise Control (CACC), one of the most popular and promising applications involving CAVs.
Performance measurement methodology for integrated services networks
With the emergence of advanced integrated services networks, the need for effective
performance analysis techniques has become extremely important. Further
advancements in these networks can only be possible if the practical performance
issues of the existing networks are clearly understood. This thesis is concerned with
the design and development of a measurement system which has been implemented on
a large experimental network.
The measurement system is based on dedicated traffic generators which have been
designed and implemented on the Project Unison network. The Unison project is a
multisite networking experiment for conducting research into the interconnection and
interworking of local area network based multi-media application systems. The traffic
generators were first developed for the Cambridge Ring based Unison network. Once
their usefulness and effectiveness were proven, high-performance traffic generators
using transputer technology were built for the Cambridge Fast Ring based Unison
network. The measurement system is capable of measuring the conventional
performance parameters such as throughput and packet delay, and is able to
characterise the operational performance of network bridging components under
various loading conditions. In particular, the measurement system has been used in a
'measure and tune' fashion in order to improve the performance of a complex bridging
device.
Accurate measurement of packet delay in wide area networks is a recognised problem.
The problem is associated with the synchronisation of the clocks between the distant
machines. A chronological timestamping technique has been introduced in which the
clocks are synchronised using a broadcast synchronisation technique. Rugby time
clock receivers have been interfaced to each generator for the purpose of
synchronisation.
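With both generators disciplined to the same broadcast time reference, one-way delay reduces to a timestamp difference; a sketch of the chronological-timestamping idea (function and parameter names are hypothetical, not the measurement system's interface):

```python
def one_way_delay(tx_timestamp, rx_timestamp, clock_error_bound=0.0):
    """One-way packet delay from sender and receiver timestamps taken
    on clocks synchronised to a common broadcast reference (such as
    the Rugby time signal).  The result is only trustworthy down to
    the residual synchronisation error between the two clocks."""
    delay = rx_timestamp - tx_timestamp
    if delay < -clock_error_bound:
        raise ValueError("receive time precedes send time beyond the "
                         "clock error bound: clocks are not in sync")
    return delay

# A packet stamped at t=10.000 s and received at t=10.012 s saw
# 12 ms of network delay, plus or minus the synchronisation error.
print(round(one_way_delay(10.000, 10.012), 3))
```

Without such a shared reference, only round-trip delay can be measured directly, which hides asymmetry between the forward and return paths.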
In order to design network applications, an accurate knowledge of the expected
network performance under different loading conditions is essential. Using the
measurement system, this has been achieved by examining the network characteristics
at the network/user interface. Also, the generators are capable of emulating a variety
of application traffic which can be injected into the network along with the traffic
from real applications, thus enabling user oriented performance parameters to be
evaluated in a mixed traffic environment.
A number of performance measurement experiments have been conducted using the
measurement system. Experimental results obtained from the Unison network serve to
emphasise the power and effectiveness of the measurement methodology.
A Priority-based Fair Queuing (PFQ) Model for Wireless Healthcare System
Healthcare is a very active research area, primarily due to the growth of the elderly population, which leads to an increasing number of emergency situations requiring urgent action. In recent years some wireless networked medical devices have been equipped with different sensors to measure and report patients' vital signs remotely. The most important are heart rate (ECG), pressure and glucose sensors. However, the strict requirements and real-time nature of medical applications dictate the extreme importance of, and need for, appropriate Quality of Service (QoS) and the fast, accurate delivery of a patient's measurements in a reliable e-health ecosystem.
As the elderly and older adult population (65 years and above) increases, owing to the advancement of medicine and medical care over the last two decades, a high-QoS, reliable e-health ecosystem has become a major challenge in healthcare, especially for patients who require continuous monitoring and attention. Predictions indicate that the elderly population will reach approximately 2 billion in developing countries by 2050, where the availability of medical staff will be unable to cope with this growth and with emergency cases that need immediate intervention. On the other hand, limitations in communication network capacity, congestion, and the enormous increase in devices, applications and IoT traffic on the available communication networks add an extra layer of challenges to the e-health ecosystem, such as time constraints and the quality of measurements and signals reaching healthcare centres.
Hence this research has tackled the delay and jitter parameters in e-health M2M wireless communication and succeeded in reducing them in comparison to currently available models. The novelty of this research lies in a new priority queuing model, "Priority-based Fair Queuing" (PFQ), in which a new priority level built on the concept of a "Patient's Health Record" (PHR) has been developed and integrated with the Priority Parameter (PP) values of each sensor to add a second level of priority. Results and data analysis performed on the PFQ model, under different scenarios simulating a real M2M e-health environment, have revealed that PFQ outperforms widely used current models such as First In First Out (FIFO) and Weighted Fair Queuing (WFQ).
The PFQ model improved the transmission of ECG sensor data, decreasing delay and jitter in emergency cases by 83.32% and 75.88% respectively in comparison to FIFO, and by 46.65% and 60.13% with respect to the WFQ model. Similarly, for the pressure sensor the improvements were 82.41% and 71.5% in comparison to FIFO, and 68.43% and 73.36% in comparison to WFQ. Data transmission was also improved for the glucose sensor, by 80.85% and 64.7% against FIFO, and 92.1% and 83.17% against WFQ. However, data transmission for non-emergency cases under the PFQ model was negatively impacted, scoring higher delay and jitter than FIFO and WFQ, since PFQ tends to give higher priority to emergency cases.
Thus, a derivative of the PFQ model was developed, namely "Priority-based Fair Queuing-Tolerated Delay" (PFQ-TD), to balance data transmission between emergency and non-emergency cases by allowing a tolerated delay in emergency cases. PFQ-TD succeeded in balancing this trade-off fairly, reducing the total average delay and jitter of emergency and non-emergency cases across all sensors and keeping them within acceptable standards. PFQ-TD improved the overall average delay and jitter in emergency and non-emergency cases among all sensors by 41% and 84% respectively in comparison to the PFQ model.
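The two-level priority idea can be sketched as follows (illustrative only; the class, field names and example values are assumptions, not the thesis's PFQ implementation): packets are ordered first by the emergency status derived from the Patient's Health Record, then by the sensor's Priority Parameter, with arrival order breaking ties so equal-priority traffic stays FIFO.

```python
import heapq
import itertools

class PFQQueue:
    """Two-level priority queue: the emergency flag (from the
    Patient's Health Record) dominates, then the sensor's Priority
    Parameter; a sequence number breaks ties so that equal-priority
    traffic is served in FIFO order."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def enqueue(self, packet, emergency, sensor_pp):
        # heapq pops the smallest key, so emergency maps to 0 and
        # larger Priority Parameters are negated.
        key = (0 if emergency else 1, -sensor_pp, next(self._seq))
        heapq.heappush(self._heap, (key, packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[1]

q = PFQQueue()
q.enqueue("glucose", emergency=False, sensor_pp=1)
q.enqueue("ecg-emergency", emergency=True, sensor_pp=3)
q.enqueue("pressure", emergency=False, sensor_pp=2)
print(q.dequeue())   # emergency traffic always leaves first
```

Strict dominance of the emergency level is exactly what penalises non-emergency traffic, which is the behaviour PFQ-TD then softens by tolerating bounded delay on emergency packets.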
Converged IP-over-standard-Ethernet process control networks for hydrocarbon process automation applications
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
The maturity level of the Internet Protocol (IP) and the emergence of standard Ethernet interfaces for Hydrocarbon Process Automation Application (HPAA) systems present a real opportunity to combine independent industrial applications onto an integrated IP-based network platform. Quality of Service (QoS) for IP over Ethernet has the strength to regulate the traffic mix and support timely delivery. Combined, these technologies provide a platform to support HPAA applications across Local Area Networks (LANs) and Wide Area Networks (WANs). HPAA systems are composed of sensors, actuators and logic solvers networked together to form independent control system network platforms. They support hydrocarbon plants operating under critical conditions that, if not controlled, could become dangerous to people, assets and the environment. This demands high-speed networking, triggered by the need to capture data at a higher frequency and a finer granularity. Nevertheless, the existing HPAA network infrastructure is based on unique autonomous systems, which has resulted in multiple, parallel and separate networks with limited interconnectivity supporting different functions. This has increased the complexity of integrating various applications and raised the total cost of ownership over the technology life cycle. To date, the concept of consolidating HPAA into a converged IP network over standard Ethernet has not been explored. This research aims to explore and develop HPAA Process Control Systems (PCS) over a Converged Internet Protocol (CIP) network, using experimental and simulated network case studies. Results from the experimental and simulation work showed encouraging outcomes and provided a good argument for supporting the co-existence of HPAA and non-HPAA applications, taking into consideration timeliness and reliability requirements.
This was achieved by invoking priority-based scheduling, with the highest priority awarded to PCS traffic among other supported services such as voice, multimedia streams and other applications. HPAA can benefit from utilizing CIP over Ethernet by reducing the number of interdependent HPAA PCS networks to a single uniform and standard network. In addition, this integrated infrastructure offers a platform for additional support services such as multimedia streaming, voice and data. This network-based model lends itself to integration with remote control system platform capabilities at the end user's desktop, independent of space and time, resulting in the concept of plant virtualization.