Optimization of Beyond 5G Network Slicing for Smart City Applications
Transitioning from current fifth-generation (5G) wireless technology, beyond 5G (B5G) marks a pivotal stride toward sixth-generation (6G) communication technology. At its core, B5G harnesses end-to-end (E2E) network slicing (NS), enabling multiple logical networks with distinct performance requirements to run simultaneously on a shared physical infrastructure. Central to this implementation is network slice design, a phase critical to the realization of efficient smart city networks. This thesis focuses on this key stage of the network slicing life cycle, analyzing and formulating optimal procedures for configuring, customizing, and allocating E2E network slices. It targets the distinct demands of smart city applications, encompassing critical areas such as emergency response, smart buildings, and video surveillance. By addressing the intricacies of network slice design, the study navigates the complexities of tailoring slices to specific application needs, thereby contributing to the seamless integration of diverse services within the smart city framework. To tackle the core challenge of NS, namely allocating virtual networks onto the physical topology with optimal resource allocation, the thesis formulates a dual integer linear programming (ILP) optimization problem that jointly minimizes embedding cost and latency. However, given the NP-hard nature of this ILP, finding an efficient alternative becomes a significant hurdle. In response, the thesis introduces a novel heuristic: the matroid-based modified greedy breadth-first search (MGBFS) algorithm, which leverages matroid properties to guide virtual network embedding and resource allocation.
By introducing this heuristic, the research aims to provide near-optimal solutions while overcoming the computational complexity of the dual integer linear programming problem. The proposed MGBFS algorithm not only satisfies the connectivity, cost, and latency constraints but also outperforms the benchmark model, delivering solutions remarkably close to optimal. This approach represents a substantial advance in the optimization of smart city applications, promising improved connectivity, efficiency, and resource utilization within the evolving landscape of B5G-enabled communication technology.
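The MGBFS algorithm itself is not reproduced in this abstract. As a purely illustrative sketch of the general idea, greedy breadth-first virtual network embedding onto a physical topology with node capacities and per-link cost/latency can look as follows; the function name, data model, and tie-breaking rule are assumptions, not the thesis's actual algorithm.

```python
from collections import deque

def greedy_bfs_embed(phys, capacities, demands, start):
    """Greedily embed a chain of virtual nodes onto a physical graph.

    phys: {node: {neighbor: (edge_cost, edge_latency)}}
    capacities: {node: available capacity}; demands: per-virtual-node demand.
    Returns (placement, total_cost, total_latency), or None if infeasible.
    Hypothetical sketch only; the real MGBFS uses matroid constraints.
    """
    placement, cost, latency = [], 0.0, 0.0
    cap = dict(capacities)
    cur = start
    for d in demands:
        # BFS from the current host; among feasible nodes, pick the one
        # with the cheapest (cost, latency) path -- a stand-in for the
        # matroid-guided selection described in the thesis.
        seen, q, best = {cur}, deque([(cur, 0.0, 0.0)]), None
        while q:
            node, c, l = q.popleft()
            if cap.get(node, 0) >= d and (best is None or (c, l) < (best[1], best[2])):
                best = (node, c, l)
            for nxt, (ec, el) in phys.get(node, {}).items():
                if nxt not in seen:
                    seen.add(nxt)
                    q.append((nxt, c + ec, l + el))
        if best is None:
            return None  # no node can host this virtual network function
        node, c, l = best
        cap[node] -= d
        placement.append(node)
        cost += c
        latency += l
        cur = node
    return placement, cost, latency
```

A small chain topology suffices to exercise it: each virtual node lands on the nearest physical node with spare capacity, accumulating path cost and latency.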
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Memory-Constrained Scheduling by Taming Data Locality in a Task-Based Programming Model
A now-classical way of meeting the increasing demand for computing speed by HPC applications is the use of GPUs and/or other accelerators. Such accelerators have their own memory, which is usually quite limited, and are connected to the main memory through a bus with bounded bandwidth. Thus, particular care should be devoted to data locality in order to avoid unnecessary data movements. Task-based runtime schedulers have emerged as a convenient and efficient way to use such heterogeneous platforms. When processing an application, the scheduler has knowledge of all tasks available for processing on a GPU, as well as their input data dependencies. Hence, it is possible to produce a task processing order aimed at reducing the total processing time through three objectives: minimizing data transfers, overlapping transfers and computation, and optimizing the eviction of previously-loaded data. In this paper, we focus on how to schedule tasks that share some of their input data (but are otherwise independent) on a single GPU. We provide a formal model of the problem, exhibit an optimal eviction strategy, and show that ordering tasks to minimize data movement is NP-complete. We review and adapt existing ordering strategies to this problem, and propose a new one based on task aggregation. We prove that the underlying problem of this new strategy is NP-complete, and prove the reasonable complexity of our proposed heuristic. These strategies have been implemented in the StarPU runtime system. We present their performance on tasks from tiled 2D and 3D matrix products, Cholesky factorization, randomized task orders, randomized data pairs from the 2D matrix product, as well as a sparse matrix product. We introduce a visual way to understand these performances, and lower bounds on the number of data loads for the 2D and 3D matrix products. Our experiments demonstrate that using our new strategy together with the optimal eviction policy reduces the amount of data movement as well as the total processing time.
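The paper's own optimal eviction strategy is not detailed in this abstract, but the classic reference point for eviction under a known task order is Belady's furthest-next-use rule: when memory is full, evict the resident data item whose next use lies furthest in the future. The sketch below counts data loads under that rule for a fixed task order; names and the flat "capacity in items" model are simplifying assumptions, not the paper's formal model.

```python
def count_loads(task_inputs, capacity):
    """Count data loads for a fixed task order with furthest-next-use eviction.

    task_inputs: list of sets of data items needed by each task, in order.
    capacity: maximum number of items resident at once (assumed to be at
    least the largest task's input set). Illustrative Belady-style sketch;
    the paper's optimal eviction strategy may differ in details.
    """
    resident, loads = set(), 0
    for i, inputs in enumerate(task_inputs):
        for item in inputs:
            if item in resident:
                continue  # already on the accelerator: no transfer needed
            if len(resident) >= capacity:
                def next_use(x):
                    # Index of the next task that needs x (inf if never again).
                    for j in range(i, len(task_inputs)):
                        if x in task_inputs[j]:
                            return j
                    return float('inf')
                # Never evict data the current task still needs.
                evict = max(resident - inputs, key=next_use)
                resident.remove(evict)
            resident.add(item)
            loads += 1
    return loads
```

On three tasks over items {A, B}, {A, C}, {B, C} with room for two items, every task after the first forces exactly one load, which matches the lower-bound intuition that shared inputs cannot all be kept resident.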
Online Algorithms with Randomly Infused Advice
We introduce a novel method for the rigorous quantitative evaluation of online algorithms that relaxes the "radical worst-case" perspective of classic competitive analysis. In contrast to prior work, our method, referred to as randomly infused advice (RIA), does not make any assumptions about the input sequence and does not rely on the development of designated online algorithms. Rather, it can be applied to existing online randomized algorithms, introducing a means to evaluate their performance in scenarios that lie outside the radical worst-case regime.
More concretely, an online algorithm ALG with RIA benefits from pieces of advice generated by an omniscient but not entirely reliable oracle. The crux of the new method is that the advice is provided to ALG by writing it into the buffer from which ALG normally reads its random bits, hence allowing us to augment it through a very simple and non-intrusive interface. The (un)reliability of the oracle is captured via a parameter 0 ≤ α ≤ 1 that determines the probability (per round) that the advice is successfully infused by the oracle; if the advice is not infused, which occurs with probability 1 - α, then the buffer contains fresh random bits (as in the classic online setting).
The applicability of the new RIA method is demonstrated by applying it to three extensively studied online problems: paging, uniform metrical task systems, and online set cover. For these problems, we establish new upper bounds on the competitive ratio of classic online algorithms that improve as the infusion parameter α increases. These are complemented with (often tight) lower bounds on the competitive ratio of online algorithms with RIA for the three problems.
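The mechanism is easy to simulate for paging: on each fault, with probability α the "random bits" are replaced by oracle advice (here, Belady's furthest-in-future eviction stands in for the omniscient oracle), and otherwise the algorithm evicts uniformly at random. This is a toy illustration of the interface, not the paper's construction or its analysis; all names are made up.

```python
import random

def paging_with_ria(requests, cache_size, alpha, rng=random.Random(0)):
    """Toy paging simulation with randomly infused advice (RIA).

    With probability alpha per fault the buffer holds advice (evict the
    page requested furthest in the future); with probability 1 - alpha it
    holds fresh random bits (evict uniformly at random). Returns the
    number of faults. Illustrative sketch only.
    """
    cache, faults = [], 0
    for i, page in enumerate(requests):
        if page in cache:
            continue
        faults += 1
        if len(cache) >= cache_size:
            if rng.random() < alpha:
                # Advice infused: Belady-style furthest-in-future eviction.
                def next_use(p):
                    for j in range(i + 1, len(requests)):
                        if requests[j] == p:
                            return j
                    return float('inf')
                victim = max(cache, key=next_use)
            else:
                victim = rng.choice(cache)  # fresh random bits
            cache.remove(victim)
        cache.append(page)
    return faults
```

With α = 1 the simulation degenerates to the offline-optimal eviction rule, and the fault count can only grow as α decreases toward the purely random baseline, mirroring the abstract's claim that bounds improve as α increases.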
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Caching Connections in Matchings
Motivated by recent advances in Software Defined Networks (SDNs) and the desire to utilize a limited number of configurable optical switches, we define an online problem which we call the Caching in Matchings problem. This problem has a natural combinatorial structure and therefore may find additional applications in theory and practice.
In the Caching in Matchings problem, our cache consists of matchings of connections between servers that form a bipartite graph. To cache a connection we insert it into one of the matchings, possibly evicting at most two other connections from this matching. This problem resembles the problem known as Connection Caching, where we also cache connections but our only restriction is that they form a graph with bounded degree. Our results show a somewhat surprising qualitative separation between the problems: the competitive ratio of any online algorithm for caching in matchings must depend on the size of the graph.
Specifically, we give a deterministic competitive and a randomized competitive algorithm for caching in matchings, where is the number of servers and is the number of matchings. We also show that the competitive ratio of any deterministic algorithm is and of any randomized algorithm is . In particular, the lower bound for randomized algorithms is regardless of , and can be as high as if , for example. We also show that if we allow the algorithm to use at least matchings compared to used by the optimum, then we match the competitive ratios of connection caching, which are independent of . Interestingly, we also show that even a single extra matching for the algorithm allows it to obtain substantially better bounds.
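(Several bounds in this abstract were lost in extraction and are left as gaps above.) The cache model itself is simple to state in code: to cache a connection (u, v) we pick one of the matchings and evict the at most two connections incident to u or v there. The sketch below illustrates only this model with a naive fewest-evictions choice; it is not the paper's competitive algorithm, and all names are hypothetical.

```python
def cache_connection(matchings, u, v):
    """Insert connection (u, v) into one of the cached matchings.

    matchings: list of dicts mapping each endpoint to its partner (each
    dict stores a matching symmetrically). Picks the matching where the
    insertion causes the fewest evictions; at most two connections (the
    ones currently matched to u and to v there) are evicted. Returns the
    list of evicted connections. Illustrative model sketch only.
    """
    for m in matchings:
        if m.get(u) == v:
            return []  # connection already cached

    def conflicts(m):
        # Connections incident to u or v inside matching m.
        return [(a, m[a]) for a in (u, v) if a in m]

    best = min(matchings, key=lambda m: len(conflicts(m)))
    evicted = conflicts(best)
    for a, b in evicted:
        del best[a]
        del best[b]
    best[u], best[v] = v, u  # store the new connection symmetrically
    return evicted
```

Two empty matchings can absorb two connections sharing a server without evictions; a third connection at that server then forces an eviction, showing why the number of matchings governs the cache's flexibility.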
Efficient Algorithms and Hardness Results for the Weighted k-Server Problem
In this paper, we study the weighted k-server problem on the uniform metric in both the offline and online settings. We start with the offline setting. In contrast to the (unweighted) k-server problem, which has a polynomial-time solution using min-cost flows, there are strong computational lower bounds for the weighted k-server problem, even on the uniform metric. Specifically, we show that, assuming the unique games conjecture, there are no polynomial-time algorithms with a sub-polynomial approximation factor, even if we use -resource augmentation for . Furthermore, if we consider the natural LP relaxation of the problem, then obtaining a bounded integrality gap requires us to use at least resource augmentation, where is the number of distinct server weights. We complement these results by obtaining a constant-approximation algorithm via LP rounding, with a resource augmentation of for any constant .
In the online setting, a lower bound is known for the competitive ratio of any randomized algorithm for the weighted k-server problem on the uniform metric. In contrast, we show that -resource augmentation can bring the competitive ratio down by an exponential factor to only . Our online algorithm uses the two-stage approach of first obtaining a fractional solution using the online primal-dual framework, and then rounding it online.
Comment: This paper will appear in the proceedings of APPROX 202
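To make the cost model concrete: on a uniform metric every point is at distance 1 from every other, so moving server i anywhere costs its weight w_i, and the difficulty is deciding when to commit a heavy server rather than repeatedly shuttling a light one. The sketch below only evaluates the cost of a given movement policy; the function names and the policy interface are assumptions for illustration, not anything from the paper.

```python
def schedule_cost(weights, positions, requests, policy):
    """Total movement cost of serving requests with weighted servers on a
    uniform metric: moving server i to any point costs weights[i].

    policy(positions, request) -> index of the server to move to the
    request (called only when no server already covers it). Hedged
    illustration of the cost model, not an algorithm from the paper.
    """
    positions = list(positions)
    cost = 0
    for r in requests:
        if r in positions:
            continue  # request already covered at zero cost
        i = policy(positions, r)
        cost += weights[i]
        positions[i] = r
    return cost
```

For weights [1, 10] and requests alternating between two points, always moving the light server costs 1 per fault (4 in total below), whereas parking the heavy server on one point would cost 10 up front and nothing after; which choice wins depends on the request sequence length, which is precisely the online difficulty.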
Online Metric Matching with Delay
Traditionally, an online algorithm must service a request upon its arrival. In many practical situations, one can delay the service of a request in the hope of servicing it more efficiently in the near future. As a result, the study of online algorithms with delay has recently gained considerable traction. For most online problems with delay, competitive algorithms have been developed that are independent of the properties of the delay functions associated with each request. Interestingly, this is not the case for the online min-cost perfect matching with delays (MPMD) problem, introduced by Emek et al. (STOC 2016).
In this thesis we show that some techniques can be modified to extend to larger classes of delay functions without affecting the competitive ratio. In the interest of designing competitive solutions for the problem in a more general setting, we introduce the study of online problems with set delay. Here, the delay cost at any time is given by an arbitrary function of the set of pending requests, rather than the sum of individual delay functions associated with each request. In particular, we study the online min-cost perfect matching with set delay (MPMD-Set) problem, which provides a generalisation of MPMD. In contrast to previous work, the new model allows us to study the problem in the non-clairvoyant setting, i.e. where future delay costs are unknown to the algorithm.
We prove that for MPMD-Set in the most general non-clairvoyant setting, there exists no competitive algorithm. Motivated by this impossibility, we introduce a new class of delay functions called size-based, and prove that for this version of the problem there exist both non-clairvoyant deterministic and randomised algorithms that are competitive in the number of requests. Our results reveal that the quality of an online matching depends both on the algorithm's access to information about future delay costs and on the properties of the delay function.
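The distinction between the two delay models is easy to state in code: the classic model sums a per-request delay function over pending requests, while a size-based set delay depends only on how many requests are pending. Both functions below are hypothetical examples (linear per-request delay, and a made-up f(|S|) = |S|² set delay); the thesis treats these as abstract function classes.

```python
def sum_delay(pending, t):
    """Classic per-request model: total delay at time t is the sum of the
    individual delay functions (here, linear in the time waited).

    pending: list of (arrival_time, request_id) pairs still unmatched.
    """
    return sum(t - arrival for arrival, _ in pending)

def size_based_delay(pending, t):
    """Size-based set delay (illustrative): the cost is an arbitrary
    function of the *number* of pending requests only -- it cannot see
    which requests are pending or how long each has waited.
    """
    n = len(pending)
    return n * n  # hypothetical size-based function f(|S|) = |S|^2
```

Note that the size-based cost is unchanged if an old pending request is swapped for a new one, which is exactly the kind of information loss that makes the non-clairvoyant setting tractable for this class but not in general.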