Latency Bounds of Packet-Based Fronthaul for Cloud-RAN with Functionality Split
The emerging Cloud-RAN architecture within the fifth generation (5G) of
wireless networks plays a vital role in enabling higher flexibility and
granularity. On the other hand, Cloud-RAN architecture introduces an additional
link between the central, cloudified unit and the distributed radio unit,
namely fronthaul (FH). Therefore, the foreseen reliability and latency for 5G
services should also be provisioned over the FH link. In this paper, focusing
on Ethernet as FH, we present a reliable packet-based FH communication and
demonstrate the upper and lower bounds of latency that can be offered. These
bounds yield insights into the trade-off between reliability and latency, and
enable the architecture design through choice of splitting point, focusing on
high layer split between PDCP and RLC and low layer split between MAC and PHY,
under different FH bandwidth and traffic properties. The presented model is
then analyzed both numerically and through simulation, with two classes of 5G
services: ultra-reliable low-latency (URLL) and enhanced mobile broadband
(eMBB).
Comment: 6 pages, 7 figures, 3 tables, conference paper (ICC19)
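The latency budget over a packet-based FH link can be illustrated with back-of-the-envelope arithmetic. The sketch below computes a one-way store-and-forward latency from per-hop serialization, per-hop processing, and fiber propagation; the function name, parameters, and all default values (5 µs per hop of processing, 5 µs/km propagation, and so on) are illustrative assumptions, not the paper's model.

```python
def fh_one_way_latency_us(payload_bytes, link_rate_gbps, n_hops=1,
                          prop_us_per_km=5.0, dist_km=10.0,
                          per_hop_proc_us=5.0):
    # serialization time per hop: bits / (Gbit/s) expressed in microseconds
    serialization_us = payload_bytes * 8 / (link_rate_gbps * 1e3)
    # store-and-forward: every hop pays serialization plus fixed processing,
    # and the fiber adds propagation delay proportional to distance
    return (n_hops * (serialization_us + per_hop_proc_us)
            + prop_us_per_km * dist_km)

# a 1500-byte Ethernet frame over two 10 Gbit/s hops and 10 km of fiber
lat = fh_one_way_latency_us(1500, 10, n_hops=2)
```

Comparing such a figure against the delay tolerance of the chosen split (a low-layer MAC/PHY split is far stricter than a high-layer PDCP/RLC split) is what drives the trade-off discussed above.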
Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data
Constraint Programming (CP) has proved an effective paradigm to model and
solve difficult combinatorial satisfaction and optimisation problems from
disparate domains. Many such problems arising from the commercial world are
permeated by data uncertainty. Existing CP approaches that accommodate
uncertainty are less suited to uncertainty arising due to incomplete and
erroneous data, because they do not build reliable models and solutions
guaranteed to address the user's genuine problem as she perceives it. Other
fields such as reliable computation offer combinations of models and associated
methods to handle these types of uncertain data, but lack an expressive
framework characterising the resolution methodology independently of the model.
We present a unifying framework that extends the CP formalism in both model
and solutions, to tackle ill-defined combinatorial problems with incomplete or
erroneous data. The certainty closure framework brings together modelling and
solving methodologies from different fields into the CP paradigm to provide
reliable and efficient approaches for uncertain constraint problems. We
demonstrate the applicability of the framework on a case study in network
diagnosis. We define resolution forms that give generic templates, and their
associated operational semantics, to derive practical solution methods for
reliable solutions.
Comment: Revised version
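The core idea of a solution set that is "closed" over every realization of the uncertain data can be shown with a toy example. The sketch below is only an illustration of the covering-set idea under an assumption of finite domains; it is not the paper's resolution forms or operational semantics.

```python
from itertools import product

def certainty_closure(x_domain, realizations, constraint):
    # keep every value of x that satisfies the constraint under at least
    # one realization of the uncertain data, so no solution of the user's
    # genuine problem can be lost to erroneous or incomplete data
    return {x for x, d in product(x_domain, realizations)
            if constraint(x, d)}

# the coefficient d is known only to lie in {2, 3, 4}
sols = certainty_closure(range(11), [2, 3, 4], lambda x, d: x + d == 10)
```

A point solver that fixed d to a single guessed value would return one of these values and silently discard the others; the closure keeps all of them.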
Techniques for the Fast Simulation of Models of Highly Dependable Systems
With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
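The failure-biasing idea behind importance sampling fits in a few lines: sample component failures under an inflated failure probability so the rare system-failure event is observed often, then reweight each sample by its likelihood ratio. The 2-out-of-5 failure criterion and all numbers below are illustrative assumptions, not a model from the paper.

```python
import random

def system_fails(failed):
    # hypothetical criterion: the system fails if 2 or more of its
    # components fail during the mission
    return sum(failed) >= 2

def estimate_is(n_comp=5, p=1e-4, p_bias=0.5, runs=100_000, seed=1):
    # importance sampling with failure biasing: draw component failures
    # with probability p_bias instead of the true (tiny) probability p
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        failed = [rng.random() < p_bias for _ in range(n_comp)]
        # likelihood ratio: original measure / biased sampling measure
        lr = 1.0
        for f in failed:
            lr *= (p / p_bias) if f else ((1 - p) / (1 - p_bias))
        if system_fails(failed):
            total += lr
    return total / runs

estimate = estimate_is()  # true value is roughly C(5,2) * p**2 = 1e-7
```

A crude Monte Carlo run of the same length would almost never observe a system failure (probability about 1e-7 per replication), so its estimate would typically be exactly zero; the biased sampler observes failures constantly and the likelihood ratio restores an unbiased estimate.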
On the Reliability of LTE Random Access: Performance Bounds for Machine-to-Machine Burst Resolution Time
Random Access Channel (RACH) has been identified as one of the major
bottlenecks for accommodating a massive number of machine-to-machine (M2M)
users
in LTE networks, especially for the case of burst arrival of connection
requests. As a consequence, the burst resolution problem has sparked a large
number of works in the area, analyzing and optimizing the average performance
of RACH. However, an understanding of the probabilistic performance limits of
RACH is still missing. To address this limitation, in this paper, we
investigate the reliability of RACH with access class barring (ACB). We model
RACH as a queuing system, and apply stochastic network calculus to derive
probabilistic performance bounds for burst resolution time, i.e., the worst
case time it takes to connect a burst of M2M devices to the base station. We
illustrate the accuracy of the proposed methodology and its potential
applications in performance assessment and system dimensioning.
Comment: Presented at IEEE International Conference on Communications (ICC), 201
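While the paper derives analytical bounds via stochastic network calculus, the burst resolution time itself is easy to estimate by simulation. The sketch below models a slotted RACH with access class barring: each backlogged device passes the barring check with probability acb_p, then picks one of n_preambles uniformly, and a preamble succeeds only if chosen by exactly one device. All parameter values are illustrative assumptions.

```python
import random
from collections import Counter

def burst_resolution_time(n_devices=100, n_preambles=54, acb_p=0.5,
                          seed=0, max_slots=10_000):
    # simulate slot by slot until every device in the burst has connected
    rng = random.Random(seed)
    backlog = n_devices
    for slot in range(1, max_slots + 1):
        # devices that pass the ACB check each pick a random preamble
        active = [rng.randrange(n_preambles)
                  for _ in range(backlog) if rng.random() < acb_p]
        counts = Counter(active)
        # a preamble chosen by exactly one device is a successful access
        successes = sum(1 for c in counts.values() if c == 1)
        backlog -= successes
        if backlog == 0:
            return slot
    return None  # burst not resolved within max_slots

slots_needed = burst_resolution_time()
```

Repeating such runs over many seeds gives an empirical distribution of the resolution time, against which analytical worst-case bounds can be checked for tightness.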
Optimal Resource Allocation for Network Protection Against Spreading Processes
We study the problem of containing spreading processes in arbitrary directed
networks by distributing protection resources throughout the nodes of the
network. We consider that two types of protection resources are available: (i)
Preventive resources able to defend nodes against the spreading (such as
vaccines in a viral infection process), and (ii) corrective resources able to
neutralize the spreading after it has reached a node (such as antidotes). We
assume that both preventive and corrective resources have an associated cost
and study the problem of finding the cost-optimal distribution of resources
throughout the nodes of the network. We analyze these questions in the context
of viral spreading processes in directed networks. We study the following two
problems: (i) Given a fixed budget, find the optimal allocation of preventive
and corrective resources in the network to achieve the highest level of
containment, and (ii) when a budget is not specified, find the minimum budget
required to control the spreading process. We show that both resource
allocation problems can be solved in polynomial time using Geometric
Programming (GP) for arbitrary directed graphs of nonidentical nodes and a wide
class of cost functions. Furthermore, our approach allows us to optimize
simultaneously over both preventive and corrective resources, even when the
cost functions are node-dependent. We illustrate our approach by designing
optimal protection strategies to contain an epidemic outbreak that propagates
through an air transportation network.
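The control condition underlying such allocation problems is typically spectral: in the standard mean-field (NIMFA) SIS model, the epidemic dies out when the infection rate times the spectral radius of the adjacency matrix falls below the recovery rate. The sketch below computes that radius by power iteration and, under the simplifying assumption of uniform rates on all nodes, the minimum recovery rate that contains the spread; the paper's geometric program handles the far more general case of node-dependent rates and costs.

```python
def spectral_radius(A, iters=500):
    # power iteration on a nonnegative matrix; converges to the Perron
    # root when the dominant eigenvalue is unique in magnitude
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)
        if lam == 0.0:
            return 0.0
        x = [v / lam for v in y]
    return lam

def min_uniform_recovery(A, beta):
    # NIMFA threshold: the spread is contained when beta * rho(A) < delta,
    # so the smallest containing uniform recovery rate is beta * rho(A)
    return beta * spectral_radius(A)

# complete directed graph on 3 nodes: spectral radius is 2
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
delta_min = min_uniform_recovery(A, beta=0.3)
```

With node-dependent rates the threshold involves the spectral radius of a weighted matrix, and trading preventive (infection-reducing) against corrective (recovery-increasing) resources under convex costs is exactly where the geometric-programming formulation comes in.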