Connection availability analysis of span-restorable mesh networks
Dual-span failures are the key contributor to system unavailability in a mesh-restorable network with full restorability of single-span failures. Availability analysis based on reliability block diagrams is not suitable for describing failures in mesh-restorable networks with widely distributed and interdependent spare capacities. Therefore, a new concept of restoration-aware connection availability is proposed to facilitate the analysis. Specific models of span-oriented schemes are built and analyzed. Using the proposed computation method and presuming dual-span failures to be the only failure mode, we can exactly calculate the average connection unavailability with an arbitrary spare-capacity allocation rule and no knowledge of any restoration details, or the unavailability of a specific connection with known restoration details. Network performance with respect to connection unavailability, traffic loss, spare capacity consumption, and dual-failure restorability is investigated in a case study for an optical span-restorable long-haul network.
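As a toy illustration of the dual-span-failure-only assumption described above (hypothetical per-span unavailabilities and restorability table, not the paper's actual model), the average connection unavailability can be sketched as a sum over span pairs of the joint failure probability weighted by the unrestored fraction:

```python
from itertools import combinations

def avg_unavailability(span_unavail, dual_restorability):
    # Sum over unordered span pairs: joint physical failure probability
    # times the fraction of affected traffic NOT restored for that pair.
    # dual_restorability[(i, j)] is in [0, 1]; dual failures are presumed
    # the only failure mode, as in the abstract.
    total = 0.0
    for i, j in combinations(range(len(span_unavail)), 2):
        p_both = span_unavail[i] * span_unavail[j]
        total += p_both * (1.0 - dual_restorability[(i, j)])
    return total
```

With three spans of unavailability 1e-3 each and restorabilities 1.0, 0.5, and 0.0 for the three pairs, the sketch yields 1.5e-6; real models would additionally weight pairs by the traffic they affect.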
A survey on fiber nonlinearity compensation for 400 Gbps and beyond optical communication systems
Optical communication systems represent the backbone of modern communication
networks. Since their deployment, different fiber technologies, such as
dispersion-shifted fibers and dispersion-compensating fibers, have been used
to deal with optical fiber impairments. In recent years, thanks to the introduction of
coherent detection based systems, fiber impairments can be mitigated using
digital signal processing (DSP) algorithms. Coherent systems are used in the
current 100 Gbps wavelength-division multiplexing (WDM) standard technology.
They increase spectral efficiency by using multi-level modulation
formats, combined with DSP techniques to combat linear fiber
distortions. In addition to linear impairments, the next generation 400 Gbps/1
Tbps WDM systems are also more affected by the fiber nonlinearity due to the
Kerr effect. At high input power, the fiber nonlinear effects become more
important and their compensation is required to improve the transmission
performance. Several approaches have been proposed to deal with the fiber
nonlinearity. In this paper, after a brief description of the Kerr-induced
nonlinear effects, a survey on the fiber nonlinearity compensation (NLC)
techniques is provided. We focus on the well-known NLC techniques and discuss
their performance, as well as their implementation and complexity. An extension
of the inter-subcarrier nonlinear interference canceler approach is also
proposed. A performance evaluation of the well-known NLC techniques and the
proposed approach is provided in the context of Nyquist and super-Nyquist
superchannel systems.
Comment: Accepted in IEEE Communications Surveys and Tutorials
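One of the best-known NLC techniques such surveys cover is digital back-propagation (DBP), which numerically inverts a split-step model of the fiber at the receiver. A minimal single-polarization, lossless sketch in normalized units (the parameters below are illustrative, not taken from the paper):

```python
import numpy as np

def dispersion(x, beta2, dz):
    # frequency-domain chromatic dispersion operator (all-pass filter)
    w = 2 * np.pi * np.fft.fftfreq(x.size, d=1.0)  # normalized angular freq
    return np.fft.ifft(np.fft.fft(x) * np.exp(-0.5j * beta2 * w**2 * dz))

def ssfm_forward(x, beta2, gamma, dz, steps):
    # simplified lossless split-step propagation: Kerr phase, then dispersion
    for _ in range(steps):
        x = x * np.exp(1j * gamma * np.abs(x)**2 * dz)
        x = dispersion(x, beta2, dz)
    return x

def dbp(y, beta2, gamma, dz, steps):
    # digital back-propagation: undo each split-step in reverse order
    # by applying negated dispersion and negated nonlinear phase
    for _ in range(steps):
        y = dispersion(y, -beta2, dz)
        y = y * np.exp(-1j * gamma * np.abs(y)**2 * dz)
    return y
```

Because each backward step exactly inverts a forward step, this idealized sketch recovers the input signal; practical DBP must also model loss, noise, and limited step resolution, which is where the complexity trade-offs discussed in the survey arise.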
Optimal Algorithms for Near-Hitless Network Restoration via Diversity Coding
Diversity coding is a network restoration technique which offers near-hitless
restoration, while other state-of-the-art techniques are significantly slower.
Furthermore, the extra spare capacity requirement of diversity coding is
competitive with the others. Previously, we developed heuristic algorithms to
employ diversity coding structures in networks with arbitrary topology. This
paper presents two algorithms to solve the network design problems using
diversity coding in an optimal manner. The first technique pre-provisions
static traffic whereas the second technique carries out the dynamic
provisioning of the traffic on-demand. In both cases, diversity coding results
in shorter restoration time, simpler synchronization, and much lower
signaling complexity than the existing techniques in the literature. A Mixed
Integer Programming (MIP) formulation and an algorithm based on Integer Linear
Programming (ILP) are developed for pre-provisioning and dynamic provisioning,
respectively. Simulation results indicate that diversity coding has
significantly higher restoration speed than Shared Path Protection (SPP) and
p-cycle techniques. It requires more extra capacity than the p-cycle technique
and SPP. However, the increase in the total capacity is negligible compared to
the increase in restoration speed.
Comment: An older version of this paper was submitted to the IEEE Globecom 2012 conference
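The core idea of diversity coding can be sketched with a single XOR parity sent over a third link-disjoint path (a toy byte-level illustration, not the paper's optimal MIP/ILP design):

```python
def dc_encode(d1, d2):
    # send d1, d2, and their XOR parity over three link-disjoint paths
    parity = bytes(a ^ b for a, b in zip(d1, d2))
    return d1, d2, parity

def dc_decode(r1, r2, r3):
    # any single path failure (received as None) is recovered from the
    # other two paths alone -- no rerouting or failure signaling needed,
    # which is the source of the near-hitless restoration speed
    if r1 is None:
        r1 = bytes(a ^ b for a, b in zip(r2, r3))
    elif r2 is None:
        r2 = bytes(a ^ b for a, b in zip(r1, r3))
    return r1, r2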
New contention resolution techniques for optical burst switching
Optical burst switching (OBS) is a technology positioned between wavelength routing and optical packet switching that requires neither optical buffering nor packet-level parsing, and it is more efficient than circuit switching when the sustained traffic volume does not consume a full wavelength. However, several critical issues remain to be solved, such as contention resolution without optical buffering, which is a key determinant of packet loss and has a significant impact on network performance. Deflection routing resolves contention by routing a contending packet to an output port other than the intended one. In OBS networks, when contention between two bursts cannot be resolved through deflection routing, one of the bursts is dropped. However, this scheme does not take advantage of all the available resources when resolving contentions, so the performance of existing deflection routing schemes is not satisfactory.
In this thesis, we propose and evaluate three new contention-resolution strategies. The first, Backtrack on Deflection Failure, gives blocked bursts a second chance when deflection fails: a blocked burst may backtrack to the previous node and be routed through any deflection route available at that node. Two variants are proposed for handling the backtracking delay involved in this scheme, namely (a) Increase in Initial Offset and (b) Open-Loop Reservation. Furthermore, we propose a third scheme, Bidirectional Reservation on Burst Drop, in which bandwidth is reserved in both the forward and backward directions simultaneously. This scheme comes into effect only when control bursts are dropped due to bandwidth unavailability. The retransmitted control bursts carry a larger offset value and therefore have a lower blocking probability than the original bursts.
The performance of our schemes and of those proposed in the literature is studied through simulation. The parameters considered in evaluating these schemes are blocking probability, average throughput, and overall link utilization. The results obtained show that our schemes perform significantly better than their standard counterparts.
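For context on the blocking-probability metric used above: when burst arrivals on a bufferless link are approximated as Poisson traffic offered to m wavelengths, the classical Erlang B formula gives a baseline blocking probability (an illustrative analytical model, not the thesis's simulation):

```python
def erlang_b(offered_load, servers):
    # recursive Erlang B: B(0) = 1, B(k) = A*B(k-1) / (k + A*B(k-1)),
    # where A is the offered load in Erlangs and k counts wavelengths
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b
```

For example, 2 Erlangs offered to 2 wavelengths blocks 40% of bursts; schemes such as deflection or backtracking aim to push the effective blocking below this bufferless baseline.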
Exact two-terminal reliability of some directed networks
The calculation of network reliability in a probabilistic context has long
been an issue of practical and academic importance. Conventional approaches
(determination of bounds, sums of disjoint products algorithms, Monte Carlo
evaluations, studies of the reliability polynomials, etc.) only provide
approximations when the network's size increases, even when nodes do not fail
and all edges have the same reliability p. We consider here a directed, generic
graph of arbitrary size mimicking real-life long-haul communication networks,
and give the exact, analytical solution for the two-terminal reliability. This
solution involves a product of transfer matrices, in which individual
reliabilities of edges and nodes are taken into account. The special case of
identical edge and node reliabilities (p and rho, respectively) is addressed.
We consider a case study based on a commonly-used configuration, and assess the
influence of the edges being directed (or not) on various measures of network
performance. While the two-terminal reliability, the failure frequency and the
failure rate of the connection are quite similar, the locations of complex
zeros of the two-terminal reliability polynomials exhibit strong differences,
and various structure transitions at specific values of rho. The present work
could be extended to provide a catalog of exactly solvable networks in terms of
reliability, which could be useful as building blocks for new and improved
bounds, as well as benchmarks, in the general case.
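For very small directed graphs, the exact two-terminal reliability that the paper obtains via transfer matrices can be cross-checked by brute-force enumeration of edge states (exponential in the edge count, so a sanity check only; perfect nodes and equal edge reliability p are assumed here):

```python
from itertools import product

def two_terminal_reliability(edges, s, t, p):
    # sum the probability of every edge-state combination in which
    # t is reachable from s (nodes assumed perfectly reliable)
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        up = [e for e, ok in zip(edges, states) if ok]
        # directed reachability from s over the surviving edges
        reach, frontier = {s}, [s]
        while frontier:
            u = frontier.pop()
            for a, b in up:
                if a == u and b not in reach:
                    reach.add(b)
                    frontier.append(b)
        if t in reach:
            prob = 1.0
            for ok in states:
                prob *= p if ok else 1.0 - p
            total += prob
    return total
```

For two edge-disjoint two-hop paths s→a→t and s→b→t, the enumeration reproduces the closed form 1 - (1 - p²)², the kind of exact polynomial the transfer-matrix method generalizes to large networks.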
The Mask: Masking the effects of Edge Nodes being unavailable
The Arctic tundra is observed in order to collect data for climate research. Data can be collected by cyber-physical computers with sensors. However, the Arctic tundra offers only limited energy availability. Consequently, the nodes rely on batteries and sleep most of the time to extend their battery-limited operational lifetime. In addition, only a few nodes can expect to be in reach of a back-haul wireless data network. Consequently, the nodes use on-node wireless local area networks to reach nearby neighbor nodes.
To increase the availability for remote clients to the data collected by the nodes, a set of shadow nodes are used. These are always on, and always have access to a back-haul network. Data from an edge node on the arctic tundra propagates to the shadow nodes either directly over a back-haul network, or via a neighbor node with a back-haul network. The purpose is to make the data produced by an edge node available to a client even when the edge node sleeps or no network access is available.
A statistical analysis is done to characterize the prototype's behavior under a set of edge-node behaviors. To validate the statistical analysis, a prototype system is developed and used in a set of performance-measuring experiments. Experiments are done with 10 to 1,000,000 nodes, different probabilities of nodes being awake, and different probabilities of the back-haul network being available. Edge and shadow nodes are emulated as Go functions and executed on a high-performance computer with thousands of cores. Different wireless networks are emulated, albeit in a simplified way. A run-time simulation system is developed to control the prototype and conduct the experiments.
The results for the prototype show that if the chance of a single synchronization is low, or the time to obtain the latest data must be minimized, an additional data delivery path should be considered on the edge node's side. Synchronization via the right-neighbor principle adds an extra communication channel, which increases the data availability level by 50%-100%, while the resource demand grows by 30% per unit. The time required to get the latest data from edge nodes decreases by a factor of 1.75.
The results for the simulation show that a cumulative network throughput of approximately 2100 MB/s and a generated data amount of approximately 25000 MB/s can be achieved at the cost of approximately 80 KB of RAM per emulated node.
The results show that the statistical analysis matches the prototype results as used by the simulation system, but the statistical expectation considers only a limited range of factors. Statistically derived values can be used as input for the simulation, where they are adjusted to obtain a more comprehensive result.
The conclusions are that the Mask provides instant access to data storage for edge nodes. The Mask fronts the edge nodes toward clients, which become able to retrieve the data asynchronously, even when edge nodes are offline.
Space-Division Multiplexing in Data Center Networks: On Multi-Core Fiber Solutions and Crosstalk-Suppressed Resource Allocation
The rapid growth of traffic inside data centers caused by the increasing adoption of cloud services necessitates a scalable and cost-efficient networking infrastructure. Space-division multiplexing (SDM) is considered a promising solution to overcome the optical network capacity crunch and support cost-effective network capacity scaling. Multi-core fiber (MCF) is regarded as the most feasible and efficient way to realize SDM networks, and its deployment inside data centers seems very likely as the issue of inter-core crosstalk (XT) is not severe over short link spans (<1 km) compared to that in long-haul transmission. However, XT can still have a considerable effect in MCF over short distances, which can limit the transmission reach and in turn the data center's size. XT can be further reduced by bi-directional transmission of optical signals in adjacent MCF cores. This paper evaluates the benefits of MCF-based SDM solutions in terms of maximizing the capacity and spatial efficiency of data center networks. To this end, we present an analytical model for XT in bi-directional normal step-index and trench-assisted MCFs and propose corresponding XT-aware core prioritization schemes. We further develop XT-aware spectrum resource allocation strategies aimed at relieving the complexity of online XT computation. These strategies divide the available spectrum into disjoint bands and incrementally add them to the pool of accessible resources based on the network conditions. Several combinations of core mapping and spectrum resource allocation algorithms are investigated for eight types of homogeneous MCFs comprising 7–61 cores, three different multiplexing schemes, and three data center network topologies with two traffic scenarios. Extensive simulation results show that combining bi-directional transmission in dense-core fibers with tailored resource allocation schemes significantly increases the network capacity.
Moreover, a multiplexing scheme that combines SDM and WDM can achieve up to 33 times higher link spatial efficiency and up to 300 times greater capacity compared to a WDM solution.
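The band-partitioning idea above can be sketched as a first-fit spectrum search restricted to the currently enabled disjoint bands (a hypothetical data layout; the paper's actual XT-aware policies are more involved):

```python
def xt_aware_first_fit(spectrum, demand, open_bands):
    # spectrum: list of booleans, True = free slot
    # open_bands: [(lo, hi)) ranges currently added to the resource pool;
    # bands outside this list stay inaccessible until load requires them
    for lo, hi in open_bands:
        run = 0
        for i in range(lo, hi):
            run = run + 1 if spectrum[i] else 0
            if run == demand:  # found `demand` contiguous free slots
                start = i - demand + 1
                for j in range(start, i + 1):
                    spectrum[j] = False  # allocate
                return start
    return None  # blocked within the open bands
```

A request is blocked once the open bands are exhausted, even if closed bands still have room; opening a further band then trades extra XT exposure for lower blocking, which is the policy knob the paper's strategies tune.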
Transmission of 5G signals in multicore fibers impaired by inter-core crosstalk
The data capacity demanded by the emergence of 5G led to changes in the wireless network architecture, with proposals including multicore fibers (MCFs) in the
fronthaul. However, the transmission of signals in MCFs is impaired by intercore crosstalk (ICXT). In this work, the impact of ICXT on the transmission
performance of Common Public Radio Interface (CPRI) signals in a 5G network
fronthaul supported by homogeneous weakly-coupled MCFs with direct detection
is studied by numerical simulation. Bit error rate (BER), eye-patterns analysis,
power penalty and outage probability are used as metrics to assess the ICXT
impact on the system performance, considering two models for the signal polarizations. The results are obtained by combining Monte Carlo simulation and a
semi-analytical method to assess numerically the BER.
For a 1 dB power penalty with forward error correction (FEC) CPRI signals, an
improvement of 1.4 dB in the tolerance of the CPRI signals to ICXT is observed
when the MCF walkoff increases from 1 ps/km to 50 ps/km. However, for
crosstalk levels that lead to 1 dB power penalty, the system is unavailable with
very high outage probability. To reach a reasonable outage probability of 10^-5
for FEC signals, much lower crosstalk levels, below -27.8 dB and -24.8 dB, for
single and dual polarization signals, respectively, are required. Hence, this work
shows that it is essential to study the outage probability instead of the 1 dB power
penalty to guarantee quality of service in direct-detection optical communication
systems supported by weakly-coupled homogeneous MCFs and impaired by ICXT.
Energy Efficiency and Quality of Services in Virtualized Cloud Radio Access Network
Cloud Radio Access Network (C-RAN) is being widely studied for a soft and green fifth generation of Long Term Evolution - Advanced (LTE-A). Recent technology advancements in network function virtualization (NFV) and software-defined radio (SDR) have enabled virtualization of Baseband Units (BBUs) and sharing of the underlying general-purpose processing (GPP) infrastructure. Also, innovations in optical transport networks (OTN) such as dark fiber provide low-latency, high-bandwidth channels that can support C-RAN over a radius of more than forty kilometers. All these advancements make C-RAN feasible and practical. Several virtualization strategies and architectures have been proposed for C-RAN, and it has been established that C-RAN offers higher energy efficiency and better resource utilization than the current decentralized radio access network (D-RAN). This project studies a proposed resource utilization strategy and devises a method to calculate power utilization. It then proposes and analyzes a new resource management and virtual BBU placement strategy for C-RAN based on demand prediction and inter-BBU communication load. The new approach is compared with existing state-of-the-art strategies under the same input scenarios and load. The trade-offs between energy efficiency and quality of service are discussed. The project concludes with a comparison between different strategies based on system complexity, performance in terms of service availability, and optimization efficiency in different scenarios.
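As a hedged sketch of the placement problem (a toy first-fit-decreasing heuristic, not the project's prediction-based strategy), consolidating virtual BBU loads onto as few GPP servers as possible is the basic lever for C-RAN energy savings:

```python
def place_bbus(loads, capacity):
    # first-fit decreasing bin packing: assign each BBU load to the first
    # server with room; the count of active servers serves as a crude
    # proxy for power draw (idle servers can be switched off)
    servers = []  # remaining capacity of each active server
    for load in sorted(loads, reverse=True):
        for i, free in enumerate(servers):
            if free >= load:
                servers[i] = free - load
                break
        else:
            servers.append(capacity - load)  # power on a new server
    return len(servers)
```

Packing loads 5, 4, 3, and 2 onto servers of capacity 7 needs only two active servers; strategies like the one studied additionally weigh inter-BBU communication and predicted demand against this consolidation, which is the energy-versus-QoS trade-off the project discusses.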