1,861 research outputs found
Improving relay based cellular networks performance in highly user congested and emergency situations
PhD
Relay-based cellular networks (RBCNs) incorporate multi-hop communication into traditional cellular networks. An RBCN can potentially support higher data rates, more stable radio coverage, and more dynamic services. In practice, RBCNs still suffer performance degradation under high user congestion, base station failure, and overloading in emergency situations. The focus of this thesis is to explore the potential to improve the performance of IEEE 802.16j-supported RBCNs in user congestion and emergency situations using adjustments to the RF layer (antenna adjustments or multi-hop extensions) and cooperative adjustment algorithms, e.g. those based on controlling frequency allocation centrally and using distributed approaches. The first part of this thesis designs and validates network reconfiguration algorithms for RBCNs, including a cooperative antenna power control algorithm and a heuristic antenna tilting algorithm. The second part investigates centralized and distributed dynamic frequency allocation for higher RBCN frequency efficiency, network resilience, and computational simplicity. It is demonstrated that these benefits significantly mitigate user congestion and base station failure problems. Additionally, interweaving coordinated dynamic frequency allocation and antenna tilting is investigated in order to obtain the benefits of both actions. The third part incorporates Delay Tolerant Networking (DTN) technology into RBCNs to let users self-organize to connect to a functional base station through multi-hops supported by other users. Through the use of DTN, RBCN coverage and performance are improved. This thesis explores the augmentation of DTN routing protocols to let more uncovered users connect to base stations and to improve network load balancing.
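The DTN-assisted coverage extension described above can be illustrated with a toy sketch: treating the directly covered users as roots of a breadth-first search over a user-contact graph, any user reachable through a chain of contacts can in principle relay traffic to a base station. The graph, user names, and function below are illustrative assumptions, not the thesis's actual routing protocol.

```python
from collections import deque

def reachable_via_relays(contact_graph, covered_users):
    """Breadth-first search from the directly covered users: anyone
    reachable through a chain of user contacts can, in principle,
    relay traffic to a functional base station over multiple hops."""
    reached = set(covered_users)
    queue = deque(covered_users)
    while queue:
        user = queue.popleft()
        for neighbor in contact_graph.get(user, ()):
            if neighbor not in reached:
                reached.add(neighbor)
                queue.append(neighbor)
    return reached

# A and B are covered; E is only connectable through relay D.
contacts = {"A": ["C"], "B": ["D"], "C": [], "D": ["E"], "E": []}
print(sorted(reachable_via_relays(contacts, ["A", "B"])))
# → ['A', 'B', 'C', 'D', 'E']
```

A real DTN routing protocol would add contact schedules, buffer limits, and load balancing on top of this reachability core.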
A Systematic Literature Review on Task Allocation and Performance Management Techniques in Cloud Data Center
As cloud computing usage grows, cloud data centers play an increasingly
important role. To maximize resource utilization, ensure service quality, and
enhance system performance, it is crucial to allocate tasks and manage
performance effectively. The purpose of this study is to provide an extensive
analysis of task allocation and performance management techniques employed in
cloud data centers. The aim is to systematically categorize and organize
previous research by identifying the cloud computing methodologies, categories,
and gaps. A literature review was conducted, which included the analysis of 463
task allocations and 480 performance management papers. The review revealed
three task allocation research topics and seven performance management methods.
The task allocation research areas are resource allocation, load balancing, and
scheduling. Performance management includes monitoring and control, power and
energy management, resource utilization optimization, quality of service
management, fault management, virtual machine management, and network
management. The study proposes new techniques to enhance cloud computing task
allocation and performance management. Shortcomings in each approach can guide
future research. The study's findings on cloud data center task allocation
and performance management can assist academics, practitioners, and cloud
service providers in optimizing their systems for dependability,
cost-effectiveness, and scalability. Innovative methodologies can steer future
research to fill gaps in the literature.
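The load-balancing topic surveyed above has a classic baseline worth sketching: the longest-processing-time (LPT) greedy heuristic, which sorts tasks by decreasing size and assigns each to the currently least-loaded server. This is an illustrative sketch, not drawn from any specific surveyed paper; the task sizes are arbitrary.

```python
import heapq

def allocate_tasks(task_sizes, n_servers):
    """LPT heuristic: place tasks in decreasing size order, always on
    the currently least-loaded server (tracked with a min-heap)."""
    heap = [(0.0, s) for s in range(n_servers)]   # (load, server id)
    heapq.heapify(heap)
    placement = {}
    for task, size in sorted(enumerate(task_sizes), key=lambda x: -x[1]):
        load, server = heapq.heappop(heap)
        placement[task] = server
        heapq.heappush(heap, (load + size, server))
    return placement

print(allocate_tasks([5, 3, 8, 2, 4], 2))  # both servers end at load 11
```

Production schedulers layer deadlines, affinities, and energy budgets on top of this kind of greedy core, which is where much of the surveyed work differentiates itself.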
A distributed channel allocation scheme for cellular network using intelligent software agents
PhD
Abstract not available.
Doctor of Philosophy dissertation
We are seeing an extensive proliferation of wireless devices, including various types and forms of sensor nodes that are increasingly becoming ingrained in our daily lives. There has also been significant growth in the capabilities of wireless devices. This proliferation and rapid growth of wireless devices and their capabilities have led to the development of many distributed sensing and computing applications. In this dissertation, we propose and evaluate novel, efficient approaches for localization and computation offloading that harness distributed sensing and computing in wireless networks. In a significant part of this dissertation, we exploit distributed sensing to create efficient localization applications. First, using the sensing power of a set of radio frequency (RF) sensors, we propose energy-efficient approaches for a target tracking application. Second, leveraging the sensing power of a distributed set of existing wireless devices, e.g., smartphones, Internet-of-Things devices, laptops, and modems, we propose a novel approach to locate spectrum offenders. Third, we build efficient sampling approaches to select the mobile sensing devices required for spectrum offender localization. We also enhance our sampling approaches to take into account the selfish behaviors of mobile devices. Finally, we investigate an attack on location privacy in which the location of people moving inside a private area can be inferred from the radio characteristics of wireless links leaked by legitimate transmitters deployed inside that area, and we develop the first solution to mitigate this attack. While we focus on harnessing distributed sensing for localization in a large part of this dissertation, in the remaining part we harness the computing power of nearby wireless devices for a computation offloading application.
Specifically, we propose a multidimensional auction for allocating the tasks of a job among nearby mobile devices based on their computational capabilities and the cost of computation at these devices, with the goal of reducing the overall job completion time while being beneficial to all parties involved.
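As a minimal illustration of the multidimensional-auction idea, the sketch below scores each device's bid on two dimensions, estimated completion time and monetary cost, and awards the task to the best (lowest) score. The weights, bid fields, and device names are assumptions made for illustration, not the dissertation's actual mechanism.

```python
def score(bid, time_weight=0.7, cost_weight=0.3):
    """Collapse a two-dimensional bid (estimated completion time,
    monetary cost) into a single score; lower is better. The weights
    are arbitrary assumptions."""
    return time_weight * bid["time"] + cost_weight * bid["cost"]

def run_auction(bids):
    """Award the task to the bidder with the best (lowest) score."""
    return min(bids, key=score)["device"]

bids = [
    {"device": "phone-1", "time": 4.0, "cost": 2.0},
    {"device": "laptop",  "time": 1.5, "cost": 5.0},
    {"device": "tablet",  "time": 3.0, "cost": 1.0},
]
print(run_auction(bids))  # → tablet
```

A full mechanism would also set the payment so that truthful bidding is each device's best strategy; the scoring step above only shows how multiple bid dimensions collapse into a winner selection.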
Resource and power management in next generation networks
The limits of today’s cellular communication systems are constantly being tested by
the exponential increase in mobile data traffic, a trend which is poised to continue
well into the next decade. Densification of cellular networks, by overlaying smaller
cells, i.e., micro, pico and femtocells, over the traditional macrocell, is seen as an
inevitable step in enabling future networks to support the expected increases in data
rate demand. Next generation networks will most certainly be more heterogeneous
as services will be offered via various types of points of access (PoAs). Indeed, besides
the traditional macro base station, it is expected that users will also be able to
access the network through a wide range of other PoAs: WiFi access points, remote
radio-heads (RRHs), small cell (i.e., micro, pico and femto) base stations or even
other users, when device-to-device (D2D) communications are supported, creating
thus a multi-tiered network architecture. This approach is expected to enhance the
capacity of current cellular networks, while patching up potential coverage gaps.
However, since available radio resources will be fully shared, the inter-cell interference
as well as the interference between the different tiers will pose a significant
challenge. To avoid severe degradation of network performance, properly managing
the interference is essential. In particular, techniques that mitigate interference, such
as Inter-Cell Interference Coordination (ICIC) and enhanced ICIC (eICIC), have been
proposed in the literature to address the issue. In this thesis, we argue that interference
may be also addressed during radio resource scheduling tasks, by enabling
the network to make interference-aware resource allocation decisions.
Carrier aggregation technology, which allows the simultaneous use of several
component carriers, on the other hand, targets the lack of sufficiently large portions
of frequency spectrum; a problem that severely limits the capacity of wireless networks.
The aggregated carriers may, in general, belong to different frequency bands,
and have different bandwidths, thus they also may have very different signal propagation
characteristics. Integration of carrier aggregation in the network introduces
additional tasks and further complicates interference management, but also opens
up a range of possibilities for improving spectrum efficiency in addition to enhancing
capacity, which we aim to exploit. In this thesis, we first look at the resource allocation problem in dense multi-tiered
networks with support for advanced features such as carrier aggregation and
device-to-device communications. For two-tiered networks with D2D support, we
propose a centralised, near optimal algorithm, based on dynamic programming principles,
that allows a central scheduler to make interference and traffic-aware scheduling
decisions, while taking into consideration the short-lived nature of D2D links.
As the complexity of the central scheduler increases exponentially with the number
of component carriers, we further propose a distributed heuristic algorithm to tackle
the resource allocation problem in carrier aggregation enabled dense networks. We
show that the solutions we propose perform significantly better than standard solutions
adopted in cellular networks such as eICIC coupled with Proportional Fair
scheduling, in several key metrics such as user throughput, timely delivery of content
and spectrum and energy efficiency, while ensuring fairness for backward compatible
devices.
Next, we investigate the potential to enhance network performance by enabling
the different nodes of the network to reduce and dynamically adjust the
transmit power of the different carriers to mitigate interference. Considering that
the different carriers may have different coverage areas, we propose to leverage this
diversity, to obtain high-performing network configurations. Thus, we model the
problem of carrier downlink transmit power setting as a competitive game between
teams of PoAs, which enables us to derive distributed dynamic power setting algorithms.
Using these algorithms we reach stable configurations in the network,
known as Nash equilibria, which we show perform significantly better than fixed
power strategies coupled with eICIC.
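The best-response dynamics used to reach a Nash equilibrium in such a power-setting game can be sketched with a two-player toy model: each point of access repeatedly picks the power level that maximizes its own utility given the other's choice, until neither wants to deviate. The utility (a Shannon-style rate minus a linear power cost), the candidate power levels, and all constants below are illustrative assumptions, not the thesis's actual game formulation.

```python
import math

POWERS = [0.5, 1.0, 2.0]           # candidate transmit power levels (assumed)
NOISE, GAIN, COST = 0.1, 0.5, 0.8  # toy noise floor, cross gain, power price

def utility(own, other):
    """Shannon-style rate minus a linear power cost; the other PoA's
    power enters the denominator as cross-tier interference."""
    return math.log2(1 + own / (NOISE + GAIN * other)) - COST * own

def best_response(other):
    """Best reply of one PoA to the other's current power level."""
    return max(POWERS, key=lambda p: utility(p, other))

def find_equilibrium(p1=2.0, p2=2.0, rounds=20):
    """Alternate best responses; a fixed point of these dynamics is a
    pure-strategy Nash equilibrium of the power-setting game."""
    for _ in range(rounds):
        new_p1 = best_response(p2)
        new_p2 = best_response(new_p1)
        if (new_p1, new_p2) == (p1, p2):
            break
        p1, p2 = new_p1, new_p2
    return p1, p2

print(find_equilibrium())  # converges to a stable power pair
```

Starting from both PoAs at maximum power, the dynamics back off to a stable configuration where each power level is a best response to the other, i.e. a Nash equilibrium.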
Computing on the Edge of the Network
To enable fifth-generation cellular communication network (5G) systems, energy-efficient architectures are required that can provide a reliable service platform for the delivery of 5G services and beyond. Device Enhanced Edge Computing is a derivative of Multi-Access Edge Computing (MEC) that provides computing and storage resources directly on the end devices. The importance of this concept is evidenced by the increasing demands of ultra-low-latency, computationally intensive applications that overwhelm the MEC server alone and the wireless channel. This dissertation presents a computation offloading framework considering energy, mobility, and incentives in a multi-user, multi-task device-enhanced MEC system that takes into account task interdependence and application latency requirements.
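A minimal sketch of the kind of energy- and latency-aware offloading decision such a framework must make per task, under wholly assumed device, channel, and edge-server parameters:

```python
def offload_decision(task_cycles, data_bits, deadline_s,
                     local_hz=1e9, local_j_per_cycle=1e-9,
                     uplink_bps=20e6, tx_watt=0.5, edge_hz=10e9):
    """Toy model: offload when the edge meets the deadline at lower
    device-side energy than local execution. All rate, power, and
    CPU figures are illustrative assumptions."""
    local_time = task_cycles / local_hz
    local_energy = task_cycles * local_j_per_cycle
    tx_time = data_bits / uplink_bps
    edge_time = tx_time + task_cycles / edge_hz
    edge_energy = tx_time * tx_watt   # device only pays for the upload
    if edge_time <= deadline_s and edge_energy < local_energy:
        return "offload"
    if local_time <= deadline_s:
        return "local"
    return "offload" if edge_time < local_time else "local"

# A compute-heavy task with a small input is worth shipping to the edge.
print(offload_decision(task_cycles=2e9, data_bits=4e6, deadline_s=1.0))
```

The framework described in the abstract additionally accounts for task interdependence, mobility, and incentives, which this single-task sketch deliberately omits.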
Multi-Core Parallel Routing
The recent increase in the amount of data (i.e., big data) has led to higher data volumes being transferred and processed over the network. Over the last years, the deployment of multi-core routers has also grown rapidly. However, big data transfers are not leveraging these powerful multi-core routers to the extent possible, particularly in the key function of routing. Our main goal is to find a way to use these cores more effectively and efficiently in routing big data transfers. In this dissertation, we propose a novel approach to parallelize data transfers by leveraging the multi-core CPUs in routers. Legacy routing protocols, e.g., OSPF for intra-domain routing, send data from source to destination on a single shortest path. We describe an end-to-end method to distribute data optimally over flows by using multiple paths. We generate new virtual topology substrates from the underlying router topology and perform shortest-path routing on each substrate. Within this framework, even though shortest paths can be calculated with well-known techniques such as OSPF's Dijkstra implementation, finding optimal substrates so as to maximize the aggregate throughput over multiple end-to-end paths is still an NP-hard problem. We focus our efforts on solving this problem and design heuristics for substrate generation from a given router topology. Our heuristics' interim goal is to generate substrates such that the shortest paths between a source-destination pair on the different substrates minimally overlap with each other. Once these substrates are determined, we assign each substrate to a core in the routers and employ a multi-path transport protocol, such as MPTCP, to perform end-to-end parallel transfers.
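The substrate heuristics' interim goal, successive shortest paths that overlap as little as possible, can be sketched by re-running Dijkstra while multiplying the weights of already-used edges by a penalty. The penalty value, topology, and code structure are illustrative assumptions, not the dissertation's actual heuristic.

```python
import heapq

def dijkstra(adj, src, dst, weight):
    """Plain Dijkstra over an undirected adjacency map; `weight` maps
    frozenset edges to costs."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + weight[frozenset((u, v))]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def substrate_paths(adj, src, dst, k, penalty=10.0):
    """After extracting each shortest path, multiply its edges' weights
    by `penalty` so the next substrate's path overlaps as little as
    possible with the ones already chosen."""
    weight = {frozenset((u, v)): 1.0 for u in adj for v in adj[u]}
    paths = []
    for _ in range(k):
        p = dijkstra(adj, src, dst, weight)
        paths.append(p)
        for u, v in zip(p, p[1:]):
            weight[frozenset((u, v))] *= penalty
    return paths

# Diamond topology: two disjoint s-t paths exist, and both get used.
adj = {"s": ["a", "b"], "a": ["s", "t"], "b": ["s", "t"], "t": ["a", "b"]}
for p in substrate_paths(adj, "s", "t", 2):
    print("->".join(p))
```

Each resulting path would then be pinned to one router core, with a multi-path transport protocol such as MPTCP striping the transfer across them.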