228 research outputs found
Self organization in 3GPP long term evolution networks
Mobile broadband Internet access has become a reality: the Internet generation simply expects broadband access everywhere. Today there are already 5.9 billion mobile subscribers (87% of the world population), and 20% of them have access to a mobile broadband connection. This is offered by 3G (third generation) technologies such as HSPA (High Speed Packet Access) and 4G (fourth generation) technologies such as LTE (Long Term Evolution). The demand for high-quality services presents mobile network operators and telecommunications equipment vendors with new challenges: they must find new solutions to deliver their services ever faster and with higher quality. The new LTE standard brings not only higher peak rates and lower latencies; it also introduces new functionality that is very attractive to the mobile network operator: the integration of self-organising functions that can be used in network planning, network roll-out, and the control of various network mechanisms (e.g., handover and load balancing across cells). This thesis optimises several of these self-organising functions, so that the optimisation of a mobile network can happen quickly and automatically. This is expected to lower costs for the mobile operator and raise the quality of the services offered.
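Self-optimisation of handover behaviour, one of the network mechanisms mentioned above, typically revolves around tuning measurement-event margins. As an illustration only (the function name and default values are assumptions, not taken from the thesis), an LTE A3-style handover trigger can be sketched as:

```python
def a3_handover_trigger(serving_rsrp_dbm, neighbor_rsrp_dbm,
                        hysteresis_db=2.0, offset_db=0.0):
    """LTE A3-style event check: report (and potentially hand over) when a
    neighbour cell's measured RSRP exceeds the serving cell's by a margin.
    Self-organising networks tune hysteresis/offset automatically to trade
    off ping-pong handovers against handover failures."""
    return neighbor_rsrp_dbm > serving_rsrp_dbm + hysteresis_db + offset_db

# A neighbour 5 dB stronger than the serving cell clears a 2 dB margin.
decision = a3_handover_trigger(serving_rsrp_dbm=-95.0, neighbor_rsrp_dbm=-90.0)
```

A self-organising function would adapt `hysteresis_db` per cell pair based on observed handover statistics rather than fixing it network-wide.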
Incentive-driven QoS in peer-to-peer overlays
A well known problem in peer-to-peer overlays is that no single entity has control over the software,
hardware and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its
benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms
for QoS-overlays: resource allocation protocols that provide strategic peers with participation incentives,
while at the same time optimising the performance of the peer-to-peer distribution overlay.
The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution
accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism
to encourage peers to contribute resources even when users are not actively consuming overlay services.
This mechanism uses a decentralised credit network, is resilient to Sybil attacks, and allows peers to
achieve time and space deferred contribution reciprocity. Then, we present a novel, QoS-aware resource
allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive
mechanism by providing efficient overlay construction, while at the same time allocating increasing
service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive
chunk swarming, and some of its properties are explored for different peer delay distributions.
When considering QoS overlays deployed over the best-effort Internet, the quality received by a
client cannot be attributed entirely to either its serving peer or the intervening network between
them. By drawing parallels between this situation and well-known hidden-action situations in microeconomics,
we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply
it to delay-sensitive chunk distribution overlays and present the optimal contract payments required,
along with a method for QoS contract enforcement through reciprocative strategies. We also present a
probabilistic model for application-layer delay as a function of the prevailing network conditions.
Finally, we address the incentives of managed overlays, and the prediction of their behaviour. We
propose two novel models of multihoming managed overlay incentives in which overlays can freely
allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility
function with desired properties, while the other is designed for data-driven least-squares fitting of the
cross elasticity of demand. This last model is then used to solve for ISP profit maximisation.
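The Vickrey-auction-based allocation above relies on the second-price rule, under which the highest bidder wins but pays the second-highest bid, making truthful bidding a dominant strategy. A minimal sketch (the function name, peer names, and tie handling are illustrative, not the thesis's protocol):

```python
def vickrey_winner(bids):
    """Second-price sealed-bid (Vickrey) auction: the highest bidder
    wins but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

winner, price = vickrey_winner({"peerA": 10, "peerB": 7, "peerC": 4})
# peerA wins the resource but is charged peerB's bid of 7.
```

In a QoS overlay, the bids would be denominated in contribution credit (e.g., as accounted by PledgeRoute), so higher contributors can afford better service slots.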
Energy aware performance evaluation of WSNs
Distributed sensor networks have been discussed for more than 30 years, but the vision
of Wireless Sensor Networks (WSNs) has been brought into reality only by the rapid advancements
in the areas of sensor design, information technologies, and wireless networks
that have paved the way for the proliferation of WSNs. The unique characteristics of
sensor networks introduce new challenges, amongst which prolonging the sensor lifetime
is the most important. Energy-efficient solutions are required for each aspect of WSN design
to deliver the potential advantages of the WSN phenomenon, hence in both existing
and future solutions for WSNs, energy efficiency is a grand challenge. The main contribution
of this thesis is an approach that accounts for the collaborative nature of WSNs
and their correlation characteristics, providing a tool that treats issues from the physical
to the application layer together, enabling a framework that facilitates the
performance evaluation of WSNs. The simulation approach adopted provides a clear
separation of concerns amongst the software architecture of the applications, the hardware
configuration, and the WSN deployment, unlike existing evaluation tools. The
reuse of models across projects and organizations is also promoted, while realistic WSN
lifetime estimations and performance evaluations become possible when attempting to improve
performance and maximize the lifetime of the network. In this study, simulations are
carried out with careful assumptions for each layer, taking into account the real-time
characteristics of WSNs.
The sensitivity of WSN systems is mainly due to their fragile nature where energy
consumption is concerned. The case studies presented demonstrate the importance of
the various parameters considered in this study. Simulation-based studies are presented,
taking into account realistic settings from each layer of the protocol stack. The physical
environment is considered as well. The performance of the layered protocol stack in
realistic settings reveals several important interactions between different layers. These
interactions are especially important for the design of WSNs in terms of maximizing the
lifetime of the network.
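Lifetime estimation of the kind described above ultimately rests on a per-node radio energy model. A common first-order model from the WSN literature can be sketched as follows; the constants are illustrative textbook defaults, not values used in this thesis:

```python
# First-order radio energy model (common in the WSN literature).
# Constants are illustrative, not taken from this thesis.
E_ELEC = 50e-9     # J/bit: transceiver electronics energy per bit
EPS_AMP = 100e-12  # J/bit/m^2: amplifier energy (free-space d^2 path loss)

def tx_energy(bits, distance_m):
    """Energy to transmit `bits` over `distance_m` metres."""
    return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

def rx_energy(bits):
    """Energy to receive `bits` (electronics only)."""
    return E_ELEC * bits

# Sending a 1000-bit packet 100 m costs far more than receiving it,
# which is why routing and clustering choices dominate node lifetime.
cost_tx = tx_energy(1000, 100.0)
cost_rx = rx_energy(1000)
```

A lifetime estimator then divides a node's battery budget by the per-round sum of such costs across the protocol stack.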
On the design of efficient caching systems
Content distribution is currently the prevalent Internet use case, accounting for the majority of global Internet traffic and growing exponentially. There is general consensus that the most effective way to deal with the large amount of content demand is to deploy massively distributed caching infrastructures as the means to localise content delivery traffic. Solutions based on caching have already been widely deployed through Content Delivery Networks. Ubiquitous caching is also a fundamental aspect of the emerging Information-Centric Networking paradigm, which aims to rethink the current Internet architecture for long-term evolution. Distributed content caching systems are expected to grow substantially in the future, in terms of both footprint and traffic carried, and, as such, will become substantially more complex and costly. This thesis addresses the problem of designing scalable and cost-effective distributed caching systems that will be able to efficiently support the expected massive growth of content traffic, and makes three distinct contributions. First, it produces an extensive theoretical characterisation of sharding, a widely used technique for allocating data items to the resources of a distributed system according to a hash function. Based on the findings unveiled by this analysis, two systems are designed that contribute to the abovementioned objective. The first is a framework and related algorithms for enabling efficient load-balanced content caching. This solution provides qualitative advantages over previously proposed solutions, such as ease of modelling and the availability of knobs to fine-tune performance, as well as quantitative advantages, such as a 2x increase in cache hit ratio and a 19-33% reduction in load imbalance while maintaining latency comparable to other approaches. The second is the design and implementation of a caching node achieving 20 Gbps speeds on inexpensive commodity hardware.
We believe these contributions significantly advance the state of the art in distributed caching systems.
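Sharding, the technique characterised theoretically in this thesis, assigns items to nodes with a hash function. A minimal sketch of the basic mechanism (the SHA-256 choice and modulo placement are a generic illustration, not the thesis's load-balancing scheme):

```python
import hashlib

def shard(key: str, n_nodes: int) -> int:
    """Deterministically map a content item to one of n_nodes caches.
    A stable digest is used rather than Python's per-process hash(),
    so every frontend computes the same placement."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % n_nodes

# The per-node skew of such a mapping is exactly what load-balancing
# schemes on top of sharding then have to smooth out.
counts = [0] * 4
for i in range(10000):
    counts[shard(f"item-{i}", 4)] += 1
```

Because popular items hash to a single node, raw sharding balances item counts but not request load; that gap motivates the load-balanced caching framework described above.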
Smart Sensor Technologies for IoT
The recent development in wireless networks and devices has led to novel services that will utilize wireless communication on a new level. Much effort and resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent new trends in mobile services, i.e., a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services utilize information about the position of the user or mobile device. The position of mobile devices is often obtained using the Global Navigation Satellite System (GNSS) chips that are integrated into all modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates due to multipath propagation and signal blockage. Moreover, integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative solutions to position estimation should be investigated and implemented in IoT applications. This Special Issue, "Smart Sensor Technologies for IoT", aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in this issue cover various aspects of Smart Sensor Technologies for IoT.
A Vision and Framework for the High Altitude Platform Station (HAPS) Networks of the Future
A High Altitude Platform Station (HAPS) is a network node that operates in
the stratosphere at an altitude of around 20 km and is instrumental for
providing communication services. Precipitated by technological innovations in
the areas of autonomous avionics, array antennas, solar panel efficiency
levels, and battery energy densities, and fueled by flourishing industry
ecosystems, the HAPS has emerged as an indispensable component of
the next generations of wireless networks. In this article, we provide a vision and
framework for the HAPS networks of the future supported by a comprehensive and
state-of-the-art literature review. We highlight the unrealized potential of
HAPS systems and elaborate on their unique ability to serve metropolitan areas.
The latest advancements and promising technologies in the HAPS energy and
payload systems are discussed. The integration of the emerging Reconfigurable
Smart Surface (RSS) technology in the communications payload of HAPS systems
for providing a cost-effective deployment is proposed. A detailed overview of
the radio resource management in HAPS systems is presented along with
synergistic physical layer techniques, including Faster-Than-Nyquist (FTN)
signaling. Numerous aspects of handoff management in HAPS systems are
described. The notable contributions of Artificial Intelligence (AI) in HAPS,
including machine learning in the design, topology management, handoff, and
resource allocation aspects are emphasized. The extensive overview of the
literature we provide is crucial for substantiating our vision that depicts the
expected deployment opportunities and challenges in the next 10 years
(next-generation networks), as well as in the subsequent 10 years
(next-next-generation networks). Comment: To appear in IEEE Communications Surveys & Tutorials.
A survey study of vehicular ad hoc network performance in city and urban residential areas
This thesis is a survey study of VANETs (Vehicular Ad-Hoc Networks) and their performance in city and urban residential areas. As the number of vehicles on the roads increases annually, the higher amount of traffic leads to more accidents associated with road traffic complexity. VANETs can be used to detect dangerous situations, which are forwarded to the driver assistance system by monitoring the traffic status.
Massively parallel neural computation
Reverse-engineering the brain is one of the US National Academy of Engineering's
"Grand Challenges." The structure of the brain can be examined at many different
levels, spanning many disciplines from low-level biology through psychology and
computer science. This thesis focusses on real-time computation of large neural
networks using the Izhikevich spiking neuron model.
Neural computation has been described as "embarrassingly parallel" as each
neuron can be thought of as an independent system, with behaviour described
by a mathematical model. However, the real challenge lies in modelling neural
communication. While the connectivity of neurons has some parallels with that
of electrical systems, its high fan-out results in massive data processing and communication requirements when modelling neural communication, particularly for
real-time computations.
It is shown that memory bandwidth is the most significant constraint to the scale
of real-time neural computation, followed by communication bandwidth, which
leads to a decision to implement a neural computation system on a platform based
on a network of Field Programmable Gate Arrays (FPGAs), using commercial off-
the-shelf components with some custom supporting infrastructure. This brings implementation challenges, particularly lack of on-chip memory, but also many advantages, particularly high-speed transceivers. An algorithm to model neural communication that makes efficient use of memory and communication resources is
developed and then used to implement a neural computation system on the multi-
FPGA platform.
Finding suitable benchmark neural networks for a massively parallel neural computation system proves to be a challenge. A synthetic benchmark that has
biologically-plausible fan-out, spike frequency and spike volume is proposed and
used to evaluate the system. It is shown to be capable of computing the activity
of a network of 256k Izhikevich spiking neurons with a fan-out of 1k in real-time
using a network of 4 FPGA boards. This compares favourably with previous work,
with the added advantage of scalability to larger neural networks using more FPGAs.
It is concluded that communication must be considered as a first-class design constraint when implementing massively parallel neural computation systems.
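The Izhikevich model used throughout consists of a two-variable quadratic integrate-and-fire update with a reset rule. A single Euler step can be sketched as follows; the 1 ms step and regular-spiking parameters (a, b, c, d) are common defaults from the model's literature, not necessarily this thesis's configuration:

```python
def izhikevich_step(v, u, current, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step of the Izhikevich spiking neuron model:
        v' = 0.04*v^2 + 5*v + 140 - u + I
        u' = a*(b*v - u)
    When v reaches 30 mV the neuron spikes and resets: v <- c, u <- u + d."""
    v = v + dt * (0.04 * v * v + 5 * v + 140 - u + current)
    u = u + dt * a * (b * v - u)
    if v >= 30.0:
        return c, u + d, True
    return v, u, False

# Drive a regular-spiking neuron with a constant input; it spikes tonically.
v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):
    v, u, fired = izhikevich_step(v, u, current=10.0)
    spikes += fired
```

Each neuron's state is only two floats, which is why, as the thesis argues, the communication of spikes rather than the per-neuron arithmetic dominates the cost at scale.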
Straggler-Resilient Distributed Computing
The number and scale of distributed computing systems being built have increased significantly in recent years. Primarily, that is because: i) our computing needs are increasing at a much higher rate than computers are becoming faster, so we need to use more of them to meet demand, and ii) systems that are fundamentally distributed, e.g., because the components that make them up are geographically distributed, are becoming increasingly prevalent. This paradigm shift is the source of many engineering challenges. Among them is the straggler problem, which is a problem caused by latency variations in distributed systems, where faster nodes are held up by slower ones.
The straggler problem can significantly impair the effectiveness of distributed systems: a single node experiencing a transient outage (e.g., due to being overloaded) can lock up an entire system.
In this thesis, we consider schemes for making a range of computations resilient against such stragglers, thus allowing a distributed system to proceed in spite of some nodes failing to respond on time. The schemes we propose are tailored for particular computations. We propose schemes designed for distributed matrix-vector multiplication, which is a fundamental operation in many computing applications, distributed machine learning (in the form of a straggler-resilient first-order optimization method), and distributed tracking of a time-varying process (e.g., tracking the location of a set of vehicles for a collision avoidance system). The proposed schemes rely on exploiting redundancy that is either introduced as part of the scheme, or exists naturally in the underlying problem, to compensate for missing results, i.e., they are a form of forward error correction for computations. Further, for one of the proposed schemes we exploit redundancy to also improve the effectiveness of multicasting, thus reducing the amount of data that needs to be communicated over the network. Such inter-node communication, like the straggler problem, can significantly limit the effectiveness of distributed systems. For the schemes we propose, we are able to show significant improvements in latency and reliability compared to previous schemes.
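The redundancy-based recovery described above can be illustrated with the simplest possible erasure code for distributed matrix-vector multiplication: one parity block over the matrix's row blocks, so any single straggler's partial product is recoverable from the other workers' results. This is a generic sketch of the idea, not the specific codes proposed in the thesis:

```python
def matvec(rows, x):
    """Dense matrix-vector product over plain lists."""
    return [sum(a * b for a, b in zip(row, x)) for row in rows]

def add_parity(blocks):
    """Append one parity block: the elementwise sum of all row blocks.
    Because matvec is linear, the parity worker's result equals the sum
    of the data workers' results."""
    k, rows, cols = len(blocks), len(blocks[0]), len(blocks[0][0])
    parity = [[sum(blocks[b][i][j] for b in range(k)) for j in range(cols)]
              for i in range(rows)]
    return blocks + [parity]

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, 1]
coded = add_parity([A[:2], A[2:]])       # 2 data workers + 1 parity worker
partials = [matvec(block, x) for block in coded]

# Suppose worker 1 straggles: its partial product is reconstructed from
# the parity result minus worker 0's result, so nobody waits for it.
recovered = [p - y0 for p, y0 in zip(partials[2], partials[0])]
full = partials[0] + recovered           # equals A @ x
```

A single parity block tolerates one straggler; the thesis's schemes generalise this trade-off between added redundancy and the number of slow or failed nodes tolerated.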