
    Quality-oriented adaptation scheme for multimedia streaming in local broadband multi-service IP networks

    The research reported in this thesis proposes, designs and tests the Quality-Oriented Adaptation Scheme (QOAS), an application-level adaptive scheme that offers high-quality multimedia services to home residences and business premises via local broadband IP networks in the presence of other traffic of different types. QOAS uses a novel client-located grading scheme that maps the values, variations and variation patterns of network-related parameters (e.g. delay, jitter, loss rate) to application-level scores that describe the quality of delivery. This grading scheme also involves an objective metric that estimates the end-user perceived quality, increasing its effectiveness. A server-located arbiter takes content and rate adaptation decisions based on these quality scores, which are the only information sent as feedback by the clients. QOAS has been modelled, implemented and tested through simulations, and an instantiation of it has been realized in a prototype system. The performance was assessed in terms of estimated end-user perceived quality, network utilisation, loss rate and number of customers served by a fixed infrastructure. The influence of variations in the parameters used by QOAS and in the network-related characteristics was studied. The scheme's adaptive reaction was tested with background traffic of different types, sizes and variation patterns, and in the presence of concurrent multimedia streaming processes subject to user interactions. The results show that the performance of QOAS was very close to that of an ideal adaptive scheme. In comparison with other adaptive schemes, QOAS allows for a significant increase in the number of simultaneous users while maintaining good end-user perceived quality. These results are verified by a set of subjective tests performed on viewers using a prototype system.
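The client-side grading and server-side adaptation loop described above can be sketched in a few lines. The thresholds, weights and rate steps below are invented for illustration and do not reproduce the thesis's actual grading model; `grade_delivery` and `arbiter_bitrate` are hypothetical names:

```python
# Toy sketch of a QOAS-style adaptation loop (illustrative thresholds
# and weights -- the thesis defines its own mapping and metric).

def grade_delivery(delay_ms: float, jitter_ms: float, loss_rate: float) -> int:
    """Map network-level measurements to an application-level
    quality score in 1..5 (5 = best), as a QOAS client might."""
    score = 5.0
    score -= min(delay_ms / 100.0, 2.0)   # penalise high delay
    score -= min(jitter_ms / 20.0, 1.0)   # penalise jitter
    score -= min(loss_rate * 40.0, 2.0)   # penalise packet loss
    return max(1, round(score))

def arbiter_bitrate(current_kbps: int, score: int) -> int:
    """Server-side arbiter: step the stream rate down on poor
    scores, up cautiously on good ones (illustrative policy)."""
    if score <= 2:
        return int(current_kbps * 0.75)   # aggressive downgrade
    if score >= 4:
        return int(current_kbps * 1.05)   # slow upgrade
    return current_kbps                   # hold
```

In a real deployment the client would compute the score continuously over a sliding window and feed back only the score, which is what keeps the feedback overhead small.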

    Recent Trends in Communication Networks

    In recent years there have been many developments in communication technology, which have greatly enhanced the computing power of small, handheld, resource-constrained mobile devices. Different generations of communication technology have evolved. This has led to new research on communicating large volumes of data over different transmission media and on the design of different communication protocols. Another direction of research concerns secure and error-free communication between sender and receiver despite the possible presence of an eavesdropper. To meet the communication requirements of large volumes of multimedia streaming data, much research has been carried out on the design of suitable overlay networks. The book addresses new research techniques that have evolved to handle these challenges.

    Learning-based Decision Making in Wireless Communications

    Fueled by emerging applications and an exponential increase in data traffic, wireless networks have recently grown significantly and become more complex. In such large-scale complex wireless networks, it is challenging and oftentimes infeasible for conventional optimization methods to quickly solve critical decision-making problems. With this motivation, in this thesis machine learning methods are developed and utilized to obtain optimal or near-optimal solutions for timely decision making in wireless networks. Content caching at the edge nodes is a promising technique to reduce the data traffic in next-generation wireless networks. In this context, in the first part of the thesis we study content caching at the wireless network edge using a deep reinforcement learning framework with the Wolpertinger architecture. Initially, we develop a learning-based caching policy for a single base station aiming at maximizing the long-term cache hit rate. Then, we extend this study to a wireless communication network with multiple edge nodes. In particular, we propose deep actor-critic reinforcement learning based policies for both centralized and decentralized content caching. Next, with the purpose of making efficient use of limited spectral resources, we develop a deep actor-critic reinforcement learning based framework for dynamic multichannel access. We consider both a single-user case and a scenario in which multiple users attempt to access channels simultaneously. In the single-user model, in order to evaluate the performance of the proposed channel access policy and the framework's tolerance of uncertainty, we explore different channel switching patterns and different switching probabilities. In the case of multiple users, we analyze the probability of each user accessing channels with favorable channel conditions and the probability of collision.
Following the analysis of the proposed learning-based dynamic multichannel access policy, we consider adversarial attacks on it. In particular, we propose two adversarial policies, one based on feed-forward neural networks and the other based on deep reinforcement learning. Both attack strategies aim at minimizing the accuracy of a deep reinforcement learning based dynamic channel access agent, and we demonstrate and compare their performance. Next, anomaly detection is studied as an active hypothesis testing problem. Specifically, we study deep reinforcement learning based active sequential testing for anomaly detection. We assume that there is an unknown number of abnormal processes at any given time and that the agent can check only one sensor in each sampling step. To maximize the confidence level of the decision and concurrently minimize the stopping time, we propose a deep actor-critic reinforcement learning framework that can dynamically select the sensor based on the posterior probabilities. Separately, we also regard the detection of threshold crossing as an anomaly detection problem, and analyze it via hierarchical generative adversarial networks (GANs). In the final part of the thesis, to address state estimation and detection problems in the presence of noisy sensor observations and probing costs, we develop a soft actor-critic deep reinforcement learning framework. Moreover, considering Byzantine attacks, we design a GAN-based framework to identify the Byzantine sensors. To evaluate the proposed framework, we measure the performance in terms of detection accuracy, stopping time, and the total probing cost needed for detection.
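As a much simpler point of comparison for the dynamic multichannel access problem, a classical epsilon-greedy bandit can already learn to favor the channel with the best success probability. This toy baseline is not the Wolpertinger or actor-critic method developed in the thesis; function and parameter names are invented for illustration:

```python
import random

def epsilon_greedy_channel_access(channel_probs, steps=5000, eps=0.1, seed=0):
    """Single-user multichannel access as a bandit: each slot, pick a
    channel; the transmission succeeds (reward 1) with that channel's
    success probability. Epsilon-greedy keeps exploring while mostly
    exploiting the channel with the best estimated value."""
    rng = random.Random(seed)
    n = len(channel_probs)
    counts = [0] * n
    values = [0.0] * n       # running mean reward per channel
    total = 0
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(n)                        # explore
        else:
            a = max(range(n), key=lambda i: values[i])  # exploit
        reward = 1 if rng.random() < channel_probs[a] else 0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]   # incremental mean
        total += reward
    return total / steps, values
```

With channels of success probability 0.2, 0.5 and 0.9, the estimated values converge so that the third channel is exploited most of the time; the deep-learning policies in the thesis address the much harder case where channel states are correlated and partially observed.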

    Video Quality Metrics


    Green Buildings and Ambient Intelligence: case study for NASA Sustainability Base and future Smart Infrastructures

With the advent of smart infrastructures, a collective expression used here to refer to novel concepts such as smart cities and the smart grid, building automation and control networks are having their role expanded beyond the traditional boundaries of the isolated environments they are designed to manage, supervise and optimize. From being confined within residential or commercial buildings as islanded, self-contained systems, they are starting to gain an important role on a wider scale in more complex scenarios at the urban or infrastructure level. Examples of this ongoing process are current experimental setups in cities worldwide to automate urban street lighting, diffused residential facilities (often referred to as smart connected communities) and local micro-grids generated by the federation of several residential units into so-called virtual power plants. Given this underlying process, expectations are dramatically increasing about the potential of control networks to introduce sophisticated features on one side and energy efficiency on the other, both on a wide scale. Unfortunately, these two objectives conflict in several ways and force a reasonable design trade-off. This research work assesses current control and automation technologies to identify the terms of this trade-off, with a stronger focus on energy efficiency, which is analyzed following a holistic approach covering several aspects of the problem. Given the complexity of the wide technology scenario of future smart infrastructures, the work does not aim at systematic coverage; rather, it provides a contribution to knowledge in the field, prioritizing research challenges that are often neglected. Green networking, that is, energy efficiency of the network operation itself, is one of these challenges.
The current worldwide IT infrastructure is built upon networking equipment that collectively consumes 21.4 TWh/year (Global e-Sustainability Initiative, 2010). This is the result of an overall unawareness of the energy-efficiency implications of communication protocol specifications and a tendency toward over-provisioning and redundancy in architecture design. As automation and control networks become global, they may be subject to the same issue and introduce an additional carbon footprint alongside that of the Internet. This research work assesses the dimension of this problem and proposes an alternative approach to the hardware and protocol designs found in commercial building automation technologies. Shifting from the control network to the physical environment, another objective of this work concerns plug-load management systems, i.e. systems for electrical loads not otherwise managed by a building automation installation; these are characterized as to their performance and limitations, highlighting potential design pitfalls and proposing an approach toward integrating them into more general energy management systems. Finally, the mechanism introduced above to increase networking energy efficiency also demonstrated a potential to provide real-time awareness about the context being managed. This potential is currently under investigation for its implications in performing basic load/peak forecasting to support demand-side management architectures for the smart grid, through a partnership with the Italian electric utility Enel Distribuzione.

    Optimum hybrid error correction scheme under strict delay constraints

    In packet-based wireless networks, media-based services often require a multicast-enabled transport that guarantees quasi-error-free transmission under strict delay constraints. Furthermore, both multicast and delay constraints deeply influence the architecture of erasure error recovery (EER). Therefore, we propose a general architecture for EER and study its optimization in this thesis. The architecture integrates all the important existing EER techniques: Automatic Repeat Request (ARQ), Forward Error Correction (FEC) and Hybrid ARQ. Each of these EER techniques can be viewed as a special case of Hybrid Error Correction (HEC) schemes. Since the Gilbert-Elliott (GE) erasure error model has been proven valid for a wide range of packet-based wireless networks, we present the general architecture and its optimization based on the GE channel model. The optimization target is to satisfy a given target packet loss level efficiently under strict delay constraints. Through the optimization for a given real-time multicast scenario, the total redundancy information needed can be minimized by automatically choosing the best HEC scheme among all the schemes included in the architecture. As a result, the performance of the optimum HEC scheme can approach the Shannon limit as closely as possible, dynamically, according to the current channel state information.
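A minimal simulation of the Gilbert-Elliott erasure channel mentioned above may help make the model concrete; the parameter names and values below are illustrative, not taken from the thesis:

```python
import random

def gilbert_elliott_losses(n, p_gb, p_bg, loss_good, loss_bad, seed=1):
    """Estimate the packet loss rate over n packets of a two-state
    Gilbert-Elliott erasure channel: p_gb (Good->Bad) and p_bg
    (Bad->Good) are the state transition probabilities, and each
    state erases packets with its own probability."""
    rng = random.Random(seed)
    state_bad = False
    losses = 0
    for _ in range(n):
        if rng.random() < (loss_bad if state_bad else loss_good):
            losses += 1
        # Markov state transition applied before the next packet
        state_bad = (rng.random() >= p_bg) if state_bad else (rng.random() < p_gb)
    return losses / n
```

The long-run loss rate approaches pi_bad * loss_bad + (1 - pi_bad) * loss_good with pi_bad = p_gb / (p_gb + p_bg); the bursty bad-state losses, rather than this average alone, are what an HEC optimizer must cover with redundancy under a delay budget.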

    On Information-centric Resiliency and System-level Security in Constrained, Wireless Communication

    The Internet of Things (IoT) interconnects many heterogeneous embedded devices, either locally with each other or globally with the Internet. These things are resource-constrained, e.g., powered by battery, and typically communicate via low-power and lossy wireless links. Communication needs to be secured and relies on crypto-operations that are often resource-intensive and in conflict with the device constraints. These challenging operational conditions on the cheapest hardware possible, the unreliable wireless transmission, and the need for protection against common threats of the inter-network impose severe challenges on IoT networks. In this thesis, we advance the current state of the art in two dimensions. Part I assesses Information-centric networking (ICN) for the IoT, a network paradigm that promises enhanced reliability for data retrieval in constrained edge networks. ICN lacks a lower-layer definition, which, however, is the key to enabling device sleep cycles and exclusive wireless media access. This part of the thesis designs and evaluates an effective media access strategy for ICN to reduce the energy consumption and wireless interference on constrained IoT nodes. Part II examines the performance of hardware and software crypto-operations executed on off-the-shelf IoT platforms. A novel system design enables the accessibility and auto-configuration of crypto-hardware through an operating system. One main focus is the generation of random numbers in the IoT. This part of the thesis further designs and evaluates Physical Unclonable Functions (PUFs) to provide novel randomness sources that generate highly unpredictable secrets on low-cost devices that lack hardware-based security features. This thesis takes a practical view on the constrained IoT and is accompanied by real-world implementations and measurements.
We contribute open-source software, automation tools, a simulator, and reproducible measurement results from real IoT deployments using off-the-shelf hardware. The large-scale experiments in an open-access testbed provide a direct starting point for future research.
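A toy model of a PUF-based device fingerprint illustrates the intra- versus inter-device distance gap that makes PUF responses usable as secrets; the bit length, noise rate and simulation approach here are illustrative assumptions, not the thesis's measurements:

```python
import random

def puf_response(fingerprint, noise=0.05, rng=None):
    """Simulated noisy readout of a device's SRAM power-up pattern:
    each fingerprint bit flips with probability `noise`."""
    rng = rng or random.Random()
    return [bit ^ (rng.random() < noise) for bit in fingerprint]

def hamming(a, b):
    """Number of positions where two bit lists differ."""
    return sum(x != y for x, y in zip(a, b))

# Enrolment/verification sketch: re-reading the same device yields a
# small distance; a different device lands near half the bit length.
rng = random.Random(42)
n_bits = 256
device_a = [rng.randrange(2) for _ in range(n_bits)]
device_b = [rng.randrange(2) for _ in range(n_bits)]
intra = hamming(device_a, puf_response(device_a, rng=rng))  # same device
inter = hamming(device_a, puf_response(device_b, rng=rng))  # other device
```

Authentication or key derivation then reduces to checking that the Hamming distance to the enrolled fingerprint falls below a threshold between the two distributions, with error-correcting codes absorbing the residual readout noise.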

    Physical and Link Layer Implications in Vehicle Ad Hoc Networks

    Vehicle Ad hoc Networks (VANET) have been proposed to provide safety on the road and deliver road traffic information and route guidance to drivers, along with commercial applications. However, the challenges facing VANET are numerous. Nodes move at high speeds, road-side units and base stations are scarce, the topology is constrained by the road geometry and changes rapidly, and the number of nodes peaks suddenly in traffic jams. In this thesis we investigate the physical and link layers of VANET and propose methods to achieve high data rates and high throughput. For the physical layer, we examine the use of Vertical BLAST (VBLAST) systems, as they provide higher capacities than single-antenna systems in rich fading environments. To study the applicability of VBLAST to VANET, a channel model was developed and verified using measurement data available in the literature. For no to medium line of sight, VBLAST systems provide high data rates. However, the performance drops as the line-of-sight strength increases, due to the correlation between the antennas. Moreover, the performance of VBLAST with training-based channel estimation drops as the speed increases, since the channel response changes rapidly. To update the channel state information matrix at the receiver, a channel tracking algorithm for flat fading channels was developed. The algorithm updates the channel matrix, thus reducing the mean square error of the estimation and improving the bit error rate (BER). The analysis of VBLAST-OFDM systems showed that they experience an error floor due to inter-carrier interference (ICI), which increases with speed, the number of transmitting antennas and the number of subcarriers used. The update algorithm was extended to VBLAST-OFDM systems and showed improvements in BER performance, but still experienced an error floor. An algorithm to equalise the ICI contribution of adjacent subcarriers was then developed and evaluated.
The ICI equalisation algorithm reduces the error floor in BER as more subcarriers are equalised, at the expense of more hardware complexity. The connectivity of VANET was investigated, and it was found that for single-lane roads, densities of 7 cars per communication range are sufficient to achieve high connectivity within the city, whereas 12 cars per communication range are required for highways. Multi-lane roads require higher densities since cars tend to cluster in groups. Junctions and turns have lower connectivity than straight roads due to disconnections at the turns. Although higher densities improve the connectivity and, hence, the performance of the network layer, they lead to poor performance at the link layer. The IEEE 802.11p MAC layer standard under development for VANET uses a variant of Carrier Sense Multiple Access (CSMA). 802.11 protocols were analysed mathematically and via simulations, and the results show that the saturation throughput of the basic access method drops as the number of nodes increases, yielding very low throughput in congested areas. RTS/CTS access provides higher throughput but applies only to unicast transmissions. To overcome the limitations of 802.11 protocols, we designed a protocol known as SOFT MAC, which combines Space, Orthogonal Frequency and Time multiple access techniques. In SOFT MAC the road is divided into cells and each cell is allocated a unique group of subcarriers. Within a cell, nodes share the available subcarriers using a combination of TDMA and CSMA. The throughput analysis of SOFT MAC showed it has superior throughput compared to the basic access method of 802.11 and similar throughput to RTS/CTS access.
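One standard way to see why basic-access saturation throughput collapses with node count is Bianchi's fixed-point model of the 802.11 DCF (the usual starting point for the kind of mathematical analysis described above, though not necessarily the exact model used in the thesis). The sketch below solves the fixed point for assumed contention window W and backoff stages m:

```python
def bianchi_fixed_point(n, W=32, m=5, iters=2000):
    """Solve the 802.11 DCF saturation fixed point (Bianchi's model):
        p   = 1 - (1 - tau)**(n - 1)                       # collision prob.
        tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1 - (2p)**m))  # tx prob.
    for n saturated stations, minimum contention window W and m
    backoff stages, by plain damped iteration."""
    tau = 0.1
    p = 0.0
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_new = (2 * (1 - 2 * p)
                   / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m)))
        tau = 0.5 * tau + 0.5 * tau_new   # damping for stability
    return tau, p
```

As n grows, each station transmits less often (tau shrinks) while the conditional collision probability p rises, and feeding these into the throughput expression yields the drop in saturation throughput observed in congested areas.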

    Mathematical modelling of end-to-end packet delay in multi-hop wireless networks and their applications to QoS provisioning

    This thesis addresses the mathematical modelling of end-to-end packet delay for Quality of Service (QoS) provisioning in multi-hop wireless networks. Multi-hop wireless technology increases capacity and coverage in a cost-effective way and has been standardised in the Fourth-Generation (4G) standards. The effective capacity model approximates end-to-end delay performance, including the Complementary Cumulative Distribution Function (CCDF) of delay, average delay and jitter. This model is first tested using an Internet traffic trace from a real gigabit Ethernet gateway. The effective capacity model was developed for single-hop, continuous-time communication systems, but a multi-hop wireless system is better described as multi-hop and time-slotted. The thesis extends the effective capacity model by taking the multi-hop and time-slotted concepts into account, resulting in two new mathematical models: the multi-hop effective capacity model for multi-hop networks and the mixed continuous/discrete-time effective capacity model for time-slotted networks. Two scenarios are considered to validate these two effective capacity-based models, based on ideal wireless communications (the physical-layer instantaneous transmission rate is the Shannon channel capacity): 1) packets traverse multiple wireless network devices, and 2) packets are transmitted to or received from a wireless network device every Transmission Time Interval (TTI). The results from these two scenarios consistently show that the new mathematical models developed in the thesis characterise end-to-end delay performance accurately. Accurate and efficient estimators for end-to-end packet delay play a key role in QoS provisioning in modern communication systems. The estimators from the new effective capacity-based models are directly tested in two systems, faithfully created using realistic simulation techniques: 1) IEEE 802.16-2004 networks and 2) wireless tele-ultrasonography medical systems.
The results show that the estimation and simulation results are in good agreement in terms of end-to-end delay performance.
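The effective capacity at the heart of these models can be sketched directly from its standard definition, EC(theta) = -(1/theta) * log E[exp(-theta * R)] for per-slot service R and QoS exponent theta; the uniform rate distribution below is only a toy channel, not one of the thesis's scenarios:

```python
import math
import random

def effective_capacity(rate_samples, theta):
    """Effective capacity of a time-slotted service process:
    EC(theta) = -(1/theta) * log E[exp(-theta * R)], where R is the
    per-slot service (bits/slot) and theta the QoS exponent; a larger
    theta encodes a stricter statistical delay constraint."""
    mgf = sum(math.exp(-theta * r) for r in rate_samples) / len(rate_samples)
    return -math.log(mgf) / theta

# Toy channel: per-slot rate uniform on [0, 2] bits/slot (mean 1.0).
rng = random.Random(7)
rates = [rng.uniform(0.0, 2.0) for _ in range(20000)]
ec_loose = effective_capacity(rates, 0.01)   # loose QoS: near the mean rate
ec_strict = effective_capacity(rates, 10.0)  # strict QoS: much lower
```

As theta tends to 0 the effective capacity approaches the mean service rate, and as theta grows it falls toward the worst-case rate; this is exactly the trade-off the model uses to bound the delay-violation probability (the CCDF tail of delay).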

    Research and developments of distributed video coding

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) is typically suitable for applications such as wireless/wired video sensor networks and mobile cameras, where traditional video coding standards are not feasible due to the constrained computation at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical application of DVC is referred to as Wyner-Ziv (WZ) video coding, where side information is available at the decoder to perform joint decoding. This joint decoding inevitably results in a very complex decoder. Much current WZ video coding work emphasises how to improve the system's coding performance but neglects the huge complexity incurred at the decoder, which has a direct influence on the system output. The first phase of this research aims to optimise the decoder in pixel-domain WZ video coding (PDWZ) while still achieving similar compression performance. More specifically, four issues are addressed: optimising the input block size, the side information generation, the side information refinement process and the feedback channel. Transform-domain WZ video coding (TDWZ) has distinctly superior performance to PDWZ due to the exploitation of spatial correlation during encoding. However, since there is no motion estimation at the encoder in WZ video coding, current WZ schemes do not exploit temporal correlation at the encoder at all. In the middle phase of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and temporal directions, providing even higher coding performance. In the next step of this research, the performance of transform-domain Distributed Multiview Video Coding (DMVC) is also investigated.
In particular, three types of transform-domain DMVC framework are investigated: transform-domain DMVC using TDWZ based on 2D DCT, transform-domain DMVC using TDWZ based on 3D DCT, and transform-domain residual DMVC using TDWZ based on 3D DCT. One important application of the WZ coding principle is error resilience. There have been several attempts to apply WZ error-resilient coding to current video coding standards, e.g. H.264/AVC or MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error-resilience ability and bandwidth consumption, the proposed scheme emphasises the protection of the Region of Interest (ROI) area. Efficient bandwidth utilisation is achieved by the mutual efforts of WZ coding and sacrificing the quality of unimportant areas. In summary, this research contributes several advances in WZ video coding. First, it builds an efficient PDWZ with an optimised decoder. Second, it builds an advanced TDWZ based on the 3D DCT, which is then applied to multiview video coding to realise an advanced transform-domain DMVC. Finally, it designs an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.
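The temporal decorrelation that the 3D DCT provides in TDWZ can be seen on a toy cube of frames: for a perfectly static scene, all energy collapses into the first temporal plane of coefficients. The naive separable transform below is only a sketch of the mathematical tool, not the thesis's codec:

```python
import math

def dct2_1d(x):
    """Naive orthonormal DCT-II of a 1-D sequence."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def dct3d(block):
    """Separable 3-D DCT of an n*n*n cube block[t][y][x]: transform
    along x, then y, then t. For video, t is the frame index, so
    temporal redundancy concentrates in the low-t coefficient planes."""
    n = len(block)
    b = [[list(row) for row in frame] for frame in block]
    for t in range(n):                      # along x
        for y in range(n):
            b[t][y] = dct2_1d(b[t][y])
    for t in range(n):                      # along y
        for x in range(n):
            col = dct2_1d([b[t][y][x] for y in range(n)])
            for y in range(n):
                b[t][y][x] = col[y]
    for y in range(n):                      # along t (temporal axis)
        for x in range(n):
            tube = dct2_1d([b[t][y][x] for t in range(n)])
            for t in range(n):
                b[t][y][x] = tube[t]
    return b
```

Because the transform is orthonormal, energy is preserved, and for identical frames every coefficient plane with temporal index t > 0 is exactly zero, which is the redundancy a 3D-DCT-based TDWZ encoder discards cheaply without motion estimation.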