Clock Synchronisation Assisted Clock and Data Recovery for Sub-Nanosecond Data Centre Optical Switching
In current 'Cloud' data centres, switching of data between servers is performed using deep hierarchies of interconnected electronic packet switches. Demand for network bandwidth from emerging data centre workloads, combined with the slowing of silicon transistor scaling, is leading to a widening gap between data centre traffic demand and electronically-switched data centre network capacity. All-optical switches could offer a future-proof alternative, with potentially under a third of the power consumption and cost of electronically-switched networks. However, the effective bandwidth of optical switches depends on their overall switching time. This is dominated by the clock and data recovery (CDR) locking time, which takes hundreds of nanoseconds in commercial receivers. Current data centre traffic is dominated by small packets that transmit in tens of nanoseconds, leading to low effective bandwidth, as a high proportion of receiver time is spent performing CDR locking instead of receiving data, removing the benefits of optical switching. High-performance optical switching requires sub-nanosecond CDR locking time to overcome this limitation. This thesis proposes, models, and demonstrates clock synchronisation assisted CDR, which can achieve this. This approach uses clock synchronisation to reduce the complexity of CDR versus previous asynchronous approaches. An analytical model of the technique is first derived that establishes its potential viability. Following this, two approaches to clock synchronisation assisted CDR are investigated:
1. Clock phase caching, which uses clock phase storage and regular updates in a 2 km intra-building scale data centre network interconnected by single-mode optical fibre.
2. Single calibration clock synchronisation assisted CDR, which leverages the 20 times lower thermal sensitivity of hollow core optical fibre versus single-mode fibre to synchronise a 100 m cluster scale data centre network, with a single initial phase calibration step.
Using a real-time FPGA-based optical switch testbed, sub-nanosecond CDR locking time was demonstrated for both approaches.
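The effective-bandwidth argument above can be made concrete: the fraction of receiver time spent on payload is the packet duration divided by the packet duration plus the CDR lock time. A minimal sketch (the 100 Gb/s line rate, 64-byte packet and 300 ns lock time are illustrative assumptions, not figures from the thesis):

```python
def effective_bandwidth_fraction(packet_ns: float, cdr_lock_ns: float) -> float:
    """Fraction of receiver time spent on payload rather than CDR locking."""
    return packet_ns / (packet_ns + cdr_lock_ns)

# A 64-byte packet at 100 Gb/s occupies 64 * 8 / 100e9 seconds on the wire.
packet_ns = 64 * 8 / 100e9 * 1e9  # = 5.12 ns

legacy = effective_bandwidth_fraction(packet_ns, cdr_lock_ns=300)  # commercial receiver
fast = effective_bandwidth_fraction(packet_ns, cdr_lock_ns=0.5)    # sub-nanosecond CDR

print(f"300 ns lock: {legacy:.1%} effective")
print(f"0.5 ns lock: {fast:.1%} effective")
```

With a hundreds-of-nanoseconds lock time almost all receiver time goes to locking; sub-nanosecond locking recovers most of the link capacity, which is the motivation stated above.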
Optical processing devices and techniques for next generation optical networks
Doctorate in Physics. This work arises from the interest in replacing optical network nodes based mostly on electronics with nodes based on optical technology. Optical technology is expected to enable higher bit rates in the network, greater transparency and greater efficiency through new switching paradigms. Following this vision, the MZI-SOA, a hybridly integrated semiconductor device, was used to implement the optical signal processing functionalities required in next generation optical network nodes.
New optical networks use advanced modulation formats with phase management, so the impact of these formats on the performance of the MZI-SOA in wavelength and format conversion was studied experimentally and by simulation, under various operating conditions. Rules of use were derived for optimal operation. The impact of the signal pulse shape on the device performance was also studied.
Next, the MZI-SOA was used to implement time-domain functionalities at the bit and packet level. The operation of a wavelength division multiplexing to optical time division multiplexing converter was investigated, experimentally and by simulation, as was that of a packet compressor and decompressor, by simulation. For the latter, operation was investigated with the MZI-SOA based on semiconductor optical amplifiers with quantum well and quantum dot geometries. A time slot interchanger was also demonstrated experimentally, exploiting the MZI-SOA as a wavelength converter and using a bank of optical delay lines to introduce a selectable delay into the signal.
Finally, the impact of crosstalk in optical networks was studied analytically, experimentally and by simulation in several situations. An analytical performance model was extended to cover signals that are both distorted and impaired by crosstalk. The case of heavily filtered signals impaired by crosstalk was studied, and it was shown that, to determine the resulting penalties correctly, both effects must be considered jointly rather than separately. The crosstalk-limited scalability of an MZI-SOA based time slot interchanger operating as a space switch was studied. It was also shown that signals strongly affected by non-linearities can cause higher crosstalk penalties than signals unaffected by non-linearities.
This work demonstrated that the MZI-SOA enables the construction of several relevant optical circuits, acting as a fundamental building block, with its performance analysed from the component level up to the system level. Considering the advantages and disadvantages of the MZI-SOA and recent developments in other technologies, research topics were suggested with the aim of progressing towards next generation optical networks.
The main motivation for this work is the desire to replace today's opaque network nodes, which are plagued by the inherent limitations of their constituent
electronics, by all-optical transparent network nodes. The all-optical promise consists of delivering ever higher bit rates, more transparency, and unsurpassed efficiency associated with sophisticated all-optical switching paradigms. In this light, the integrated MZI-SOA has been selected as the fundamental building block for this investigation of all-optical processing techniques and functions necessary for developing the next generation of all-optical networks.
Next generation optical networks will use advanced phase-managed modulation formats. Accordingly, the first simulation and experimental investigation assesses the performance of MZI-SOA based wavelength and format converter circuits for advanced modulation formats. Rules are derived
for ensuring optimal MZI-SOA operation. The impact of the pulse shape on both the wavelength and format conversion processes is also addressed.
More complex MZI-SOA based implementations of bit-level and packet-level time domain processing functions are analysed. An MZI-SOA based wavelength division multiplexing to time division multiplexing converter is experimentally investigated and compared against simulation results. The performance of packet compressor and decompressor circuit schemes, based on quantum-well and quantum-dot SOA devices, is analysed through simulation. An MZI-SOA wavelength-converter-based time slot interchanger with selectable packet delay, which uses an optical delay line bank, is experimentally demonstrated.
Finally, the impact of crosstalk on all-optical networks is studied analytically, experimentally, and through simulations. An existing analytical model for assessing the performance of crosstalk-impaired signals is extended to also handle distorted signals. Using the extended model, it is shown that heavily filtered signals are more seriously affected by crosstalk than unfiltered signals. Hence, accurate calculation of penalties stemming from both filtering and crosstalk must model these effects jointly. The crosstalk-limited scalability of an MZI-SOA space-switched time slot interchanger is also assessed using this method. An additional study shows that crosstalk caused by signals impaired by non-linear effects can degrade optical system performance more severely than crosstalk from signals unimpaired by non-linearities.
On the whole, it has been demonstrated that the MZI-SOA is a suitable building block for a variety of optical processing circuits required for next generation optical networks. Its performance capabilities have been established in several optical circuits, from the component up to the system level. Next steps towards the implementation of next generation optical networks have been suggested in light of recent developments and the MZI-SOA's strengths and drawbacks, in pursuit of higher bit rate, more transparent, and more efficient optical networks.
Optical control plane: theory and algorithms
In this thesis we propose a novel way to achieve global network information dissemination in which some wavelengths are reserved exclusively for global control information exchange. We study the routing and wavelength assignment (RWA) problem for the special communication pattern of non-blocking all-to-all broadcast in WDM optical networks. We provide efficient solutions to reduce the number of wavelengths needed for non-blocking all-to-all broadcast, in the absence of wavelength converters, for network information dissemination. We adopt an approach in which we consider all nodes to be tap-and-continue capable, thus studying light-trees rather than lightpaths. To the best of our knowledge, this thesis is the first to consider "tap-and-continue" capable nodes in the context of conflict-free all-to-all broadcast. The problem of all-to-all broadcast using individual lightpaths has been proven to be NP-complete [6]. We provide optimal RWA solutions for conflict-free all-to-all broadcast for some particular cases of regular topologies, namely the ring, the torus and the hypercube. We make an important contribution to hypercube decomposition into edge-disjoint structures. We also present near-optimal polynomial-time solutions for the general case of arbitrary topologies. Furthermore, we apply for the first time the "cactus" representation of all minimum edge-cuts of graphs with arbitrary topologies to the problem of all-to-all broadcast in optical networks. Using this representation recursively, we obtain near-optimal results for the number of wavelengths needed for non-blocking all-to-all broadcast. The second part of this thesis focuses on the more practical case of multi-hop RWA for non-blocking all-to-all broadcast in the presence of optical-electrical-optical conversion. We propose two simple but efficient multi-hop RWA models.
In addition to reducing the number of wavelengths, we also concentrate on reducing the number of optical receivers, another important optical resource. We analyze these models on the ring and the hypercube, as special cases of regular topologies. Lastly, we develop a good upper bound on the number of wavelengths in the case of non-blocking multi-hop all-to-all broadcast on networks with arbitrary topologies and offer a heuristic algorithm to achieve it. We propose a novel network partitioning method based on "virtual perfect matching" for use in the RWA heuristic algorithm.
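In the absence of wavelength converters, the core RWA constraint is that lightpaths sharing a fibre link must use distinct wavelengths. A minimal first-fit sketch of that constraint (a generic greedy assignment for illustration, not the thesis's optimal or near-optimal constructions):

```python
from itertools import count

def assign_wavelengths(paths):
    """First-fit wavelength assignment: two lightpaths that share a fibre link
    (no wavelength converters) must use different wavelengths.
    `paths` is a list of link sets; returns one wavelength index per path."""
    link_usage = {}  # link -> set of wavelengths already used on that link
    assignment = []
    for links in paths:
        used = set().union(*[link_usage.get(l, set()) for l in links])
        w = next(i for i in count() if i not in used)  # lowest free wavelength
        assignment.append(w)
        for l in links:
            link_usage.setdefault(l, set()).add(w)
    return assignment

# Three lightpaths on a 4-node ring, each link labelled by its endpoints:
paths = [{(0, 1), (1, 2)}, {(1, 2), (2, 3)}, {(2, 3), (3, 0)}]
print(assign_wavelengths(paths))  # → [0, 1, 0]
```

The second path conflicts with the first on link (1, 2) and gets a new wavelength; the third conflicts only with the second, so wavelength 0 can be reused.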
Small-world interconnection networks for large parallel computer systems
The use of small-world graphs as interconnection networks of multicomputers is proposed and analysed in this work. Small-world interconnection networks are constructed by adding (or modifying) edges to an underlying local graph. Graphs with a rich local structure but a large diameter are shown to be the most suitable candidates for the underlying graph. Generation models based on random and deterministic wiring processes are proposed and analysed. For the random case, basic properties such as degree, diameter, average path length and bisection width are analysed, and the results show a fast transition from a large diameter to a small diameter as the number of new edges increases. Random traffic analysis on these networks is undertaken, and it is shown that although the average latency experiences a similar reduction, networks with a small number of shortcuts have a tendency to saturate, as most of the traffic flows through a small number of links. An analysis of the congestion of the networks corroborates this result and provides a way of estimating the minimum number of shortcuts required to avoid saturation. To overcome these problems, deterministic wiring is proposed and analysed. In the LFSR graphs, a Linear Feedback Shift Register is used to introduce the shortcuts. A simple routing algorithm has been constructed for the LFSR graphs and extended with a greedy local optimisation technique. It has been shown that a small search depth gives good results and is less costly to implement than a full shortest-path algorithm. The Hilbert graph, on the other hand, provides some additional characteristics, such as support for incremental expansion, efficient layout in two-dimensional space (using two layers), and a small fixed degree of four. Small-world hypergraphs have also been studied.
In particular, incomplete hypermeshes have been introduced and analysed, and it has been shown that they outperform complete traditional implementations under a constant pinout argument. Since it has been shown that complete hypermeshes outperform the mesh, the torus, low-dimensional m-ary d-cubes (with and without bypass channels), and multi-stage interconnection networks (when realistic decision times are accounted for and with a constant pinout), it follows that incomplete hypermeshes outperform them as well.
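The reported fast transition from a large to a small diameter as shortcuts are added can be reproduced on a toy scale. A sketch with a ring as the underlying local graph and randomly wired shortcuts (parameters are illustrative; the thesis's LFSR and Hilbert constructions are deterministic, unlike this random wiring):

```python
import random
from collections import deque

def avg_path_length(n, edges):
    """Mean shortest-path length over all node pairs, via BFS from every node."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total = pairs = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(1)
n = 64
ring = [(i, (i + 1) % n) for i in range(n)]                      # local underlying graph
shortcuts = [tuple(random.sample(range(n), 2)) for _ in range(8)]  # a few random shortcuts
print(avg_path_length(n, ring))              # plain ring: about n/4
print(avg_path_length(n, ring + shortcuts))  # drops sharply with few shortcuts
```

Even a handful of shortcuts cuts the average path length substantially, which is the small-world transition the abstract describes; the congestion caveat (traffic piling onto the few shortcuts) is not modelled here.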
Wavelengths switching and allocation algorithms in multicast technology using m-arity tree networks topology
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
In this thesis, m-arity tree networks have been investigated to derive equations for their nodes, links and required wavelengths. The relationship among all parameters, such as leaf nodes, destinations, paths and wavelengths, has been found. Three situations have been explored: firstly, when there is just one server and the leaf nodes are the destinations; secondly, when there is just one server and all other nodes are destinations; and thirdly, when all nodes are sources and destinations at the same time. The investigation covered binary, ternary and quaternary trees, and was finalised with general equations for all m-arity tree networks.
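For a complete m-ary tree, the node, link and leaf counts follow from a geometric series. A sketch of these standard closed forms (the thesis derives its own equations relating servers, destinations, paths and wavelengths, which are not reproduced here):

```python
def mary_tree_counts(m: int, depth: int):
    """Node, link and leaf counts of a complete m-ary tree of the given depth
    (root at depth 0)."""
    nodes = (m ** (depth + 1) - 1) // (m - 1)  # geometric series 1 + m + ... + m^depth
    links = nodes - 1                          # every non-root node has one parent link
    leaves = m ** depth                        # all leaves sit at the bottom level
    return nodes, links, leaves

print(mary_tree_counts(2, 3))  # binary tree, depth 3 → (15, 14, 8)
print(mary_tree_counts(3, 2))  # ternary tree, depth 2 → (13, 12, 9)
```

The leaf count m^depth is the quantity that drives wavelength requirements in the first scenario above, where one server multicasts to all leaf nodes.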
Moreover, a multicast technology is analysed in this thesis to transmit data carried by specific wavelengths to several clients. Wavelength multicast switching is examined in order to propose the split-convert-split-convert (S-C-S-C) multicast switch, which consists of light splitters and wavelength converters. It reduced group delay by 13% and 29% compared with the split-convert (S-C) and split-convert-split (S-C-S) multicast switches respectively. The proposed switch also increased the received signal power significantly, by 28% and 26.92% compared with S-C-S and S-C respectively.
In addition, wavelength allocation algorithms in multicast technology are proposed in this thesis using a tree network topology. A distributed scheme is adopted by placing a wavelength assignment controller in every parent node. Two distributed algorithms are proposed, shortest wavelength assignment (SWA) and highest number of destinations with shortest wavelength assignment (HND-SWA), to increase the received signal power, decrease group delay and reduce dispersion. The performance of the SWA algorithm was comparable to or slightly better than that of HND-SWA with respect to power, dispersion and group delay, and both always outperform the other two algorithms. The required numbers of wavelengths and of the converters they utilise have been examined and calculated for the researched algorithms. HND-SWA recorded the superior performance compared with the other algorithms: it reduced the number of utilised wavelengths by up to about 19% and minimised the number of wavelength converters used by up to about 29%.
Finally, the centralised scheme is discussed and researched, and a centralised highest number of destinations (CHND) algorithm is proposed, with static and dynamic scenarios, to reduce the network capacity decrease (Cd) after each wavelength allocation. The CHND reduced Cd by about 16.7% compared with the other algorithms.
Virtualisation and resource allocation in MEC-enabled metro optical networks
The appearance of new network services and the ever-increasing network traffic and number
of connected devices will push the evolution of current communication networks towards the
Future Internet.
In the area of optical networks, wavelength routed optical networks (WRONs) are evolving
to elastic optical networks (EONs) in which, thanks to the use of OFDM or Nyquist WDM,
it is possible to create super-channels with custom-size bandwidth. The basic element in
these networks is the lightpath, i.e., an all-optical circuit between two network nodes. The
establishment of lightpaths requires the selection of the route that they will follow and the
portion of the spectrum to be used in order to carry the requested traffic from the source to
the destination node. That problem is known as the routing and spectrum assignment (RSA)
problem, and new algorithms must be proposed to address this design problem.
Some early studies on elastic optical networks considered gridless scenarios, in which a slice
of spectrum of variable size is assigned to a request. However, the most common approach to
the spectrum allocation is to divide the spectrum into slots of fixed width and allocate multiple,
consecutive spectrum slots to each lightpath, depending on the requested bandwidth. Moreover,
EONs also allow the proposal of more flexible routing and spectrum assignment techniques,
like the split-spectrum approach in which the request is divided into multiple "sub-lightpaths".
In this thesis, four RSA algorithms are proposed combining two different levels of
flexibility with the well-known k-shortest paths and first fit heuristics. After comparing the
performance of those methods, a novel spectrum assignment technique, Best Gap, is proposed
to overcome the inefficiencies that emerge when combining the first fit heuristic with highly
flexible networks. A simulation study is presented to demonstrate that, thanks to the use of
Best Gap, EONs can exploit the network flexibility and reduce the blocking ratio.
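The contrast between first fit and a gap-aware assignment can be sketched on a slot bitmap. The `best_gap` function below is an illustrative reading of the Best Gap idea, choosing the tightest free gap that still fits so that large gaps survive for later wide requests; it is a sketch under that assumption, not the thesis's exact algorithm:

```python
def free_gaps(spectrum):
    """Yield (start, length) of each maximal run of free slots (True = free)."""
    start = None
    for i, free in enumerate(spectrum + [False]):  # sentinel closes a trailing gap
        if free and start is None:
            start = i
        elif not free and start is not None:
            yield start, i - start
            start = None

def first_fit(spectrum, width):
    """Return the first gap start that fits `width` contiguous slots, else None."""
    for s, length in free_gaps(spectrum):
        if length >= width:
            return s
    return None

def best_gap(spectrum, width):
    """Return the start of the tightest gap that fits, keeping large gaps intact."""
    fits = [(length, s) for s, length in free_gaps(spectrum) if length >= width]
    return min(fits)[1] if fits else None

# Two occupied slots, a 6-slot gap, one occupied slot, then a 3-slot gap:
spectrum = [False] * 2 + [True] * 6 + [False] * 1 + [True] * 3

print(first_fit(spectrum, 3))  # → 2: fragments the large gap
print(best_gap(spectrum, 3))   # → 9: the exact-fit gap, large gap preserved
```

For a 3-slot request, first fit carves into the 6-slot gap and leaves fragments, while the gap-aware choice consumes the 3-slot gap exactly; this fragmentation effect is what drives the blocking-ratio differences studied above.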
On the other hand, operators must face profound architectural changes to increase the
adaptability and flexibility of networks and ease their management. Thanks to the use of
network function virtualisation (NFV), the necessary network functions that must be applied
to offer a service can be deployed as virtual appliances hosted by commodity servers, which
can be located in data centres, network nodes or even end-user premises. The appearance of
new computation and networking paradigms, like multi-access edge computing (MEC), may
facilitate the adaptation of communication networks to the new demands. Furthermore, the
use of MEC technology will enable the possibility of installing those virtual network functions
(VNFs) not only at data centres (DCs) and central offices (COs), the traditional hosts of VNFs, but
also at the edge nodes of the network. Since data processing is performed closer to the end-user, the latency associated with each service connection request can be reduced. MEC nodes will usually be connected to each other and to the DCs and COs by optical networks.
In such a scenario, deploying a network service requires completing two phases: the
VNF-placement, i.e., deciding the number and location of VNFs, and the VNF-chaining,
i.e., connecting the VNFs that the traffic associated with a service must traverse in order to establish the connection. In the chaining process, not only the existence of VNFs with available processing capacity, but also the availability of network resources must be taken into account to avoid the rejection of the connection request. Taking into consideration that the backhaul in this scenario will usually be based on WRONs or EONs, it is necessary to design the virtual topology (i.e., the set of lightpaths established in the network) in order to transport the traffic from one node to another. The process of designing the virtual topology includes deciding the number of connections or lightpaths, allocating them a route and spectral resources, and finally grooming the traffic onto the created lightpaths.
Lastly, a failure in the equipment of a node in an NFV environment can cause the disruption of the service chains (SCs) traversing the node. This can cause the loss of huge amounts of data and affect thousands of end-users. In consequence, it is key to provide the network with fault-management techniques able to guarantee the resilience of the established connections when a node fails.
For the mentioned reasons, it is necessary to design orchestration algorithms which solve
the VNF-placement, chaining and network resource allocation problems in 5G networks
with optical backhaul. Moreover, some versions of those algorithms must also implement protection techniques to guarantee the resilience of the system in case of failure.
This thesis makes contributions along those lines. Firstly, a genetic algorithm is proposed to solve
the VNF-placement and VNF-chaining problems in a 5G network with optical backhaul based
on a star topology: GASM (genetic algorithm for effective service mapping). Then, we propose a modification of that algorithm so that it can be applied to dynamic scenarios in which reconfiguration of the planning is allowed. Furthermore, we enhance the modified algorithm to include a learning step, with the objective of improving its performance.
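As a rough illustration of how a genetic algorithm can attack VNF placement, the toy sketch below evolves assignments of VNFs to hosting nodes, penalising distance to a central office and capacity violations. All node data, the fitness function and the GA parameters are invented for illustration; this is not GASM itself:

```python
import random

def evolve_placement(n_vnfs, nodes, latency, capacity, pop=30, gens=60, seed=0):
    """Toy genetic algorithm for VNF placement: a chromosome assigns each VNF
    to a hosting node; fitness penalises latency to the central office and
    capacity violations. Illustrative only -- not the GASM algorithm."""
    rng = random.Random(seed)

    def fitness(chrom):
        cost = sum(latency[n] for n in chrom)               # hosting distance cost
        over = sum(max(0, chrom.count(n) - capacity[n]) for n in set(chrom))
        return cost + 100 * over                            # heavy overload penalty

    population = [[rng.choice(nodes) for _ in range(n_vnfs)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[:pop // 2]                     # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_vnfs)
            child = a[:cut] + b[cut:]                       # one-point crossover
            if rng.random() < 0.2:                          # mutation
                child[rng.randrange(n_vnfs)] = rng.choice(nodes)
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

nodes = [0, 1, 2, 3]
latency = {0: 0, 1: 2, 2: 2, 3: 5}   # hops from each node to the CO (assumed values)
capacity = {0: 1, 1: 2, 2: 2, 3: 4}  # VNFs each node can host (assumed values)
best = evolve_placement(n_vnfs=4, nodes=nodes, latency=latency, capacity=capacity)
print(best)
```

Elitist selection keeps the best placements across generations, so the search settles on a feasible assignment that favours the low-latency nodes; a realistic version would also chain the VNFs over lightpaths, which this sketch omits.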
In this thesis, we also propose an algorithm to solve not only the VNF-placement and
VNF-chaining problems but also the design of the virtual topology, considering that a WRON
is deployed as the backhaul network connecting the MEC nodes and the CO. Moreover, a version including individual VNF protection against node failure has also been proposed, and the effects of using shared/dedicated and end-to-end SC/individual VNF protection schemes are also analysed.
Finally, a new algorithm that solves the VNF-placement and chaining problems and
the virtual topology design implementing a new chaining technique is also proposed.
Its corresponding versions implementing individual VNF protection are also presented.
Furthermore, since the method works with any type of WDM mesh topology, a techno-economic study is presented to compare the effect of using different network topologies on both network performance and cost.
Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática. Doctorado en Tecnologías de la Información y las Telecomunicaciones.
VLSI decoding architectures: flexibility, robustness and performance
Stemming from previous studies on flexible LDPC decoders, this thesis work has been mainly focused on the development of flexible turbo and LDPC decoder designs, and on narrowing the power, area and speed gap they might present with respect to dedicated solutions. Additional studies have been carried out in the fields of improved code performance and of decoder resiliency to hardware errors.
The first chapter groups several main contributions in the design and implementation of flexible channel decoders. The first part concerns the design of a Network-on-Chip (NoC) serving as an interconnection network for a partially parallel LDPC decoder. A best-fit NoC architecture is designed, and a complete multi-standard turbo/LDPC decoder is designed and implemented. Every time the code is changed, the decoder must be reconfigured. A number of variables influence the duration of the reconfiguration process, from the involved codes down to decoder design choices. These are taken into account in the flexible decoder designed, and novel traffic reduction and optimization methods are then implemented.
In the second chapter, a study on the early stopping of iterations for LDPC decoders is presented. The energy expenditure of any LDPC decoder is directly linked to the iterative nature of the decoding algorithm. We propose an innovative multi-standard early stopping criterion for LDPC decoders that observes the evolution of simple metrics and relies on on-the-fly threshold computation. Its effectiveness is evaluated against existing techniques both in terms of saved iterations and, after implementation, in terms of actual energy saving.
The third chapter portrays a study on the resilience of LDPC decoders under the effect of memory errors. Given that the purpose of channel decoders is to correct errors, LDPC decoders are intrinsically characterized by a certain degree of resistance to hardware faults. This characteristic, together with the soft nature of the stored values, results in LDPC decoders being affected differently according to the meaning of the wrong bits: ad-hoc error protection techniques, like the Unequal Error Protection devised in this chapter, can consequently be applied to different bits according to their significance.
In the fourth chapter, the serial concatenation of LDPC and turbo codes is presented. The concatenated FEC targets very high error correction capabilities, joining the performance of turbo codes at low SNR with that of LDPC codes at high SNR, and outperforming both current deep-space FEC schemes and concatenation-based FECs. A unified decoder for the concatenated scheme is subsequently proposed.
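The early-stopping idea can be illustrated with the classic baseline that such criteria are compared against: stop iterating as soon as the hard decision of the current soft values satisfies every parity check. A minimal sketch (using the (7,4) Hamming code's parity-check matrix as a tiny stand-in for an LDPC matrix; the thesis's criterion instead tracks the evolution of metrics with on-the-fly thresholds):

```python
def syndrome_zero(H, llr):
    """Classic syndrome-based early-stopping check for iterative LDPC decoding:
    decoding may stop once the hard decision of the current LLRs produces a
    zero syndrome, i.e. satisfies every parity check."""
    hard = [1 if x < 0 else 0 for x in llr]  # negative LLR -> bit 1
    return all(sum(h * b for h, b in zip(row, hard)) % 2 == 0 for row in H)

# Parity-check matrix of the (7,4) Hamming code, a small stand-in for an LDPC code:
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

converged_llr = [2.0, -3.0, -1.5, 2.2, 4.0, -2.8, -3.5]   # hard-decides to 0110011
unconverged_llr = [2.0, -3.0, -1.5, 2.2, 4.0, -2.8, 3.5]  # last bit still wrong

print(syndrome_zero(H, converged_llr))    # → True: iterations can stop here
print(syndrome_zero(H, unconverged_llr))  # → False: keep iterating
```

Each skipped iteration saves energy, which is why a reliable stopping criterion translates directly into the energy savings evaluated in the second chapter.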