170 research outputs found
Configurable data center switch architectures
In this thesis, we explore alternative architectures for implementing configurable data center switches, along with the advantages that such switches can provide. Our first contribution centers around determining switch architectures that can be implemented on a Field Programmable Gate Array (FPGA) to provide configurable switching protocols. In the process, we identify a gap in the availability of frameworks to realistically evaluate the performance of switch architectures in data centers and contribute a simulation framework that relies on realistic data center traffic patterns. Our framework is then used to evaluate the performance of existing as well as newly proposed FPGA-amenable switch designs. Through collaborative work with Meng and Papaphilippou, we establish that only small- to medium-sized switches can be implemented on today's FPGAs. Our second contribution is a novel switch architecture that integrates a custom in-network hardware accelerator with a generic switch to accelerate Deep Neural Network training applications in data centers. Our proposed accelerator architecture is prototyped on an FPGA, and a scalability study is conducted to demonstrate the trade-offs of an FPGA implementation when compared to an ASIC implementation. In addition to the hardware prototype, we contribute a lightweight load-balancing and congestion control protocol that leverages the unique communication patterns of ML data-parallel jobs to enable fair sharing of network resources across different jobs. Our large-scale simulations demonstrate the ability of our novel switch architecture and lightweight congestion control protocol to both accelerate the training time of machine learning jobs by up to 1.34x and benefit other latency-sensitive applications by reducing their 99th-percentile completion time by up to 4.5x. As for our final contribution, we identify the main requirements of in-network applications and propose a Network-on-Chip (NoC)-based architecture for supporting a heterogeneous set of applications. Observing the lack of tools to support such research, we provide a tool that can be used to evaluate NoC-based switch architectures.
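The thesis describes the in-network accelerator only at a high level here. If, as is common for in-network acceleration of data-parallel training, the switch aggregates workers' gradients on the fly, the idea can be sketched in software as below; the `SwitchAggregator` class, chunk size, and completion rule are hypothetical illustrations, not the thesis's hardware design.

```python
import numpy as np

class SwitchAggregator:
    """Toy software model of an in-switch gradient aggregator (hypothetical).

    Gradients arrive as fixed-size chunks from `num_workers` ingress ports;
    once a chunk has been received from every worker, the element-wise sum
    would be multicast back to all workers (here, simply returned).
    """

    def __init__(self, num_workers: int, chunk_size: int):
        self.num_workers = num_workers
        self.chunk_size = chunk_size
        self.buffer = np.zeros(chunk_size, dtype=np.float32)  # running sum
        self.seen = set()                                      # workers heard from

    def receive(self, worker_id: int, chunk: np.ndarray):
        """Accumulate one worker's chunk; return the reduced chunk when complete."""
        assert chunk.shape == (self.chunk_size,)
        if worker_id in self.seen:
            raise ValueError(f"duplicate chunk from worker {worker_id}")
        self.buffer += chunk
        self.seen.add(worker_id)
        if len(self.seen) == self.num_workers:
            result, self.buffer = self.buffer, np.zeros(self.chunk_size, dtype=np.float32)
            self.seen.clear()
            return result    # would be multicast back to all workers
        return None          # still waiting for the remaining workers


# Example: three workers contribute one gradient chunk each.
agg = SwitchAggregator(num_workers=3, chunk_size=4)
for w in range(3):
    out = agg.receive(w, np.full(4, w + 1.0, dtype=np.float32))
print(out)  # [6. 6. 6. 6.] = 1 + 2 + 3 per element
```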
Efficient time slot assignment algorithms for TDM hierarchical and nonhierarchical switching systems
Two efficient time slot assignment algorithms, called the two-phase algorithm for nonhierarchical and the three-phase algorithm for hierarchical time-division multiplex (TDM) switching systems, are proposed. The simple idea behind these two algorithms is to schedule the traffic on the critical lines/trunks of a traffic matrix first. The time complexities of these two algorithms are found to be O(LN²) and O(LM²), where L is the frame length, N is the switch size, and M is the number of input/output users connected to a hierarchical TDM switch. Unlike conventional algorithms, they are fast, iterative, and simple to implement in hardware. Since no backtracking is used, pipelined packet transmission and packet scheduling can be performed, reducing the scheduling complexity of a transmission matrix to O(N²) and O(M²), respectively. Extensive simulations reveal that the two proposed algorithms give close-to-optimal performance under various traffic conditions.
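The central idea stated above, serving the critical lines/trunks of the traffic matrix first, can be illustrated with a small greedy sketch. This is not the authors' two-phase or three-phase algorithm, only an assumption-laden approximation that fills a frame of L slots from an N×N traffic matrix, always preferring the cells whose input line and output trunk carry the most remaining traffic.

```python
import numpy as np

def critical_first_schedule(traffic: np.ndarray, frame_len: int):
    """Greedy critical-line-first slot assignment (illustrative only).

    traffic[i][j] = number of slots input i must send to output j.
    Returns frame_len switching patterns, each a dict {input: output},
    plus whatever traffic could not be scheduled within the frame.
    """
    remaining = traffic.astype(int).copy()
    frame = []
    for _ in range(frame_len):
        pattern, used_out = {}, set()
        cells = [(i, j) for i in range(remaining.shape[0])
                        for j in range(remaining.shape[1]) if remaining[i, j] > 0]
        # criticality of a cell = remaining load on its input line + output trunk
        cells.sort(key=lambda c: remaining[c[0], :].sum() + remaining[:, c[1]].sum(),
                   reverse=True)
        for i, j in cells:
            if i not in pattern and j not in used_out:
                pattern[i] = j
                used_out.add(j)
                remaining[i, j] -= 1
        frame.append(pattern)
    return frame, remaining


# 3x3 traffic matrix scheduled into a frame of 4 slots.
T = np.array([[2, 1, 0],
              [0, 2, 1],
              [1, 0, 2]])
patterns, left = critical_first_schedule(T, frame_len=4)
print(patterns)
print("unscheduled:", left.sum())  # 0 for this example
```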
Experimental survey of FPGA-based monolithic switches and a novel queue balancer
This paper studies small to medium-sized monolithic switches for FPGA implementation and presents a novel switch design that achieves high algorithmic performance and FPGA implementation efficiency. Crossbar switches based on virtual output queues (VOQs) and variations have been rather popular for implementing switches on FPGAs, with applications in network switches, memory interconnects, network-on-chip (NoC) routers, etc. The implementation efficiency of crossbar-based switches is well-documented on ASICs, though we show that their disadvantages can outweigh their advantages on FPGAs. One of the most important challenges in such input-queued switches is the requirement for iterative scheduling algorithms. In contrast to ASICs, this is more harmful on FPGAs, as the reduced operating frequency and narrower packets cannot “hide” the multiple scheduling iterations required to achieve modest scheduling performance. Our proposed design uses an output-queued switch internally to simplify scheduling, and a queue balancing technique to avoid queue fragmentation and reduce the need for memory-sharing VOQs. Its implementation approaches the scheduling performance of a state-of-the-art FPGA-based switch while requiring considerably fewer resources.
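As intuition for the queue balancing technique, the policy (independent of the paper's hardware realization) can be modeled as enqueueing each arriving packet into the least-occupied of several parallel queues that feed the same output. The class, queue count, and depth below are illustrative assumptions.

```python
from collections import deque

class QueueBalancer:
    """Illustrative shortest-queue balancer for one output port.

    Each arriving packet goes to the least-occupied of `num_queues` parallel
    queues serving the same output, keeping occupancies even and avoiding
    fragmentation; the paper's design realizes a balancing policy in hardware.
    """

    def __init__(self, num_queues: int, depth: int):
        self.queues = [deque() for _ in range(num_queues)]
        self.depth = depth  # per-queue capacity in packets

    def enqueue(self, packet) -> bool:
        q = min(self.queues, key=len)   # pick the shortest queue
        if len(q) >= self.depth:
            return False                # every queue full -> drop
        q.append(packet)
        return True

    def dequeue(self):
        q = max(self.queues, key=len)   # drain the longest queue first
        return q.popleft() if q else None


bal = QueueBalancer(num_queues=4, depth=8)
for n in range(10):
    bal.enqueue(f"pkt{n}")
print([len(q) for q in bal.queues])  # occupancies differ by at most one packet
```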
Adaptive Hybrid Switching Technique for Parallel Computing System
Parallel processing accelerates computation by solving a single problem using multiple compute nodes interconnected by a network. The scalability of a parallel system is limited by its ability to communicate and coordinate processing. Circuit switching, packet switching, and wormhole routing are the dominant switching techniques. Our simulation results show that wormhole routing and circuit switching each excel under different types of traffic. This dissertation presents a hybrid switching technique that combines wormhole routing with circuit switching in a single switch using virtual channels and time division multiplexing. The performance of this hybrid switch is significantly impacted by the efficiency of traffic scheduling; thus, this dissertation also explores the design and scalability of hardware scheduling for the hybrid switch. In particular, we introduce two schedulers for crossbar networks: a greedy scheduler and an optimal scheduler that improves upon the results provided by the greedy scheduler. For the time division multiplexing portion of the hybrid switch, this dissertation presents three allocation methods that combine wormhole switching with predictive circuit switching. We further extend this research from crossbar networks to fat-tree interconnected networks with virtual channels. A global "level-wise" scheduling algorithm is presented and improves network utilization by 30% when compared to a switch-level algorithm. The performance of the hybrid switching is evaluated on a cycle-accurate simulation framework that is also part of this dissertation research. Our experimental results demonstrate that the hybrid switch is capable of successfully transferring both predictable and unpredictable traffic. By dynamically selecting the proper switching technique based on the type of communication traffic, the hybrid switch improves communication for most types of traffic.
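A toy model of the dynamic selection described above might map long, predictable transfers onto the circuit-switched (TDM) plane and keep short or unpredictable traffic on wormhole-routed virtual channels. The threshold and the predictability flag are assumptions for illustration, not values from the dissertation.

```python
from enum import Enum

class Technique(Enum):
    WORMHOLE = "wormhole"
    CIRCUIT = "circuit"

def select_technique(msg_bytes: int, is_predictable: bool,
                     circuit_threshold: int = 64 * 1024) -> Technique:
    """Toy selection policy for a hybrid switch (threshold is an assumption).

    Long, predictable transfers amortize circuit set-up cost, so they are mapped
    to the circuit-switched (TDM) plane; short or unpredictable traffic stays on
    wormhole-routed virtual channels.
    """
    if is_predictable and msg_bytes >= circuit_threshold:
        return Technique.CIRCUIT
    return Technique.WORMHOLE


print(select_technique(4 * 1024, is_predictable=False))    # Technique.WORMHOLE
print(select_technique(1024 * 1024, is_predictable=True))  # Technique.CIRCUIT
```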
Optics and virtualization as data center network infrastructure
Emerging cloud services have motivated a fresh look at the design of data center network infrastructure at multiple layers. To transfer the huge amount of data generated by data-intensive applications, the data center network has to be fast, scalable, and power-efficient. To support flexible and efficient sharing in cloud services, service providers deploy a virtualization layer as part of the data center infrastructure.
This thesis explores the design and performance analysis of data center network infrastructure in both the physical network and the virtualization layer. On the physical network design front, we present a hybrid packet/circuit switched network architecture which uses circuit-switched optics to augment traditional packet-switched Ethernet in modern data centers. We show that this technique has substantial potential to improve bisection bandwidth and application performance in a cost-effective manner. To push the adoption of optical circuits in real cloud data centers, we further explore and address the circuit control issues in shared data center environments. On the virtualization layer, we present an analytical study on the network performance of virtualized data centers. Using Amazon EC2 as an experimental platform, we quantify the impact of virtualization on network performance in a commercial cloud. Our findings provide valuable insights both to cloud users moving legacy applications into the cloud and to service providers improving the virtualization infrastructure to support better cloud services.
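To make the circuit-control problem concrete: if the optical layer can realize one circuit per rack at a time, choosing which rack pairs to connect reduces to a maximum-weight matching over the inter-rack traffic demand. The thesis does not prescribe this exact policy; the sketch below uses the standard matching formulation with SciPy's assignment solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def configure_circuits(demand: np.ndarray):
    """Pick one optical circuit per source rack via maximum-weight matching.

    `demand[i][j]` is the queued traffic from rack i to rack j. This is an
    illustrative formulation, not a policy taken from the thesis.
    """
    cost = -demand.astype(float)          # maximize demand = minimize negative demand
    np.fill_diagonal(cost, 1e9)           # large penalty: never pair a rack with itself
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))


demand = np.array([[0, 90, 10, 5],
                   [80, 0, 15, 5],
                   [10, 20, 0, 70],
                   [5, 10, 60, 0]])
print(configure_circuits(demand))  # [(0, 1), (1, 0), (2, 3), (3, 2)]
```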
New cross-layer techniques for multi-criteria scheduling in large-scale systems
The global ecosystem of information technology (IT) is in transition to a new generation
of applications that require more and more intensive data acquisition, processing and
storage systems. As a result of that change towards data intensive computing, there is a
growing overlap between high performance computing (HPC) and Big Data techniques in
applications, since many HPC applications produce large volumes of data, and Big Data
needs HPC capabilities.
The hypothesis of this PhD thesis is that interoperability and convergence between HPC and Big Data systems are crucial for the future, and that unifying both paradigms is essential to address a broad spectrum of research domains. For this reason, the main objective of this thesis is to propose and develop a monitoring system that enables HPC and Big Data convergence by providing information about the behavior of applications in a system that executes both kinds of workloads, in order to improve scalability and data locality and to allow adaptability to large-scale computers. To achieve this goal, this work focuses on the design of resource monitoring and discovery to exploit parallelism at all levels. The result is a two-level monitoring framework (at both node and application level) with a low computational load, scalable, and able to communicate with different modules through an API provided for this purpose. All collected data are disseminated to facilitate improvements globally throughout the system and thus avoid mismatches between layers, which, combined with the techniques applied to deal with fault tolerance, makes the system robust and highly available.
In addition, the developed framework includes a task scheduler capable of managing the launch of applications and their migration between nodes, as well as dynamically increasing or decreasing their number of processes. All of this is possible thanks to cooperation with other modules integrated into LIMITLESS, whose objective is to optimize the execution of a stack of applications based on multi-criteria policies. This scheduling mode is called coarse-grain scheduling based on monitoring.
For better performance, and to further reduce overhead during monitoring, different optimizations have been applied at several levels to reduce communication between components while avoiding loss of information. To achieve this objective, data filtering techniques, Machine Learning (ML) algorithms, and Neural Networks (NN) have been used.
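As one example of the data filtering mentioned above, a monitor can apply a dead-band rule and forward a sample only when it differs sufficiently from the last value sent. The exact filter used in LIMITLESS is not specified in this abstract; the class and tolerance below are illustrative assumptions.

```python
class DeadbandFilter:
    """Send a monitored metric only when it changes enough (illustrative sketch).

    Forward a sample only if it deviates from the last value sent by more than
    `tolerance` (relative), cutting monitor-to-server traffic with bounded
    information loss.
    """

    def __init__(self, tolerance: float = 0.05):
        self.tolerance = tolerance
        self.last_sent = None

    def should_send(self, value: float) -> bool:
        if self.last_sent is None or \
           abs(value - self.last_sent) > self.tolerance * abs(self.last_sent):
            self.last_sent = value
            return True
        return False


cpu = DeadbandFilter(tolerance=0.05)
samples = [40.0, 40.5, 41.0, 41.5, 55.0, 55.4, 70.0]
print([s for s in samples if cpu.should_send(s)])  # [40.0, 55.0, 70.0]
```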
To improve the scheduling process and to design new multi-criteria scheduling policies, the monitoring information has been combined with ML algorithms to identify, through classification algorithms and offline profiling, the applications and their execution phases. Thanks to this feature, LIMITLESS can detect which phase an application is executing and try to share the computational resources with other applications that are compatible (i.e., there is no performance degradation between them when both run at the same time). This feature is called fine-grain scheduling (a minimal sketch follows at the end of this abstract), and it can reduce the makespan of the use cases while making efficient use of the computational resources that
other applications do not use.
This PhD dissertation has been partially supported by the Spanish Ministry of Science and Innovation under an FPI fellowship associated to a National Project with reference TIN2016-79637-P (from July 1, 2018 to October 10, 2021).
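The sketch promised above illustrates the fine-grain scheduling idea: classify each application's current phase from monitoring samples with an offline-trained classifier, then co-schedule applications whose phases stress different resources. The features, labels, and compatibility rule are illustrative assumptions, not LIMITLESS internals.

```python
from sklearn.tree import DecisionTreeClassifier

# Offline profiling samples (hypothetical): [cpu_util %, memory-bandwidth util %] -> phase.
X = [[95, 20], [90, 15], [30, 85], [25, 90], [10, 10], [15, 5]]
y = ["compute", "compute", "memory", "memory", "io", "io"]
phase_clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

def compatible(sample_a, sample_b) -> bool:
    """Two applications may share a node if they stress different resources.

    Classify the current phase of each application from live monitoring data
    and allow co-scheduling only when their bottlenecks differ.
    """
    phase_a, phase_b = phase_clf.predict([sample_a, sample_b])
    return phase_a != phase_b


print(compatible([92, 18], [28, 88]))  # True: compute-bound next to memory-bound
print(compatible([93, 22], [91, 17]))  # False: two compute-bound phases would interfere
```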
Application of learning algorithms to traffic management in integrated services networks.
SIGLE record. Available from the British Library Document Supply Centre (BLDSC), DSC:DXN027131, United Kingdom.
Weighted round robin scheduling in input-queued packet switches subject to deadline constraints
Ankara: Department of Electrical and Electronics Engineering and the Institute of Engineering and Science of Bilkent University, 2000. Thesis (Master's), Bilkent University, 2000. Includes bibliographical references (leaves 59-63).
In this thesis, the problem of scheduling deadline-constrained traffic is studied.
The problem is explored in terms of Weighted Round Robin (WRR) service
discipline in input-queued packet switches. Applications of the problem arise
in packet switching networks and Satellite-Switched Time Division Multiple Access
(SS/TDMA) systems. A new formulation of the problem is presented. The
main contribution of the thesis is a "backward extraction" technique to schedule packet forwarding through the switch fabric. A number of heuristic algorithms, each based on backward extraction, are proposed, and their performance is studied via simulation. Numerical results show that the algorithms perform significantly better than previously proposed algorithms. The experimental results strongly support the Philp and Liu conjecture.
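For reference, the baseline discipline the thesis builds on, weighted round robin, can be sketched as below. This is deliberately not the backward-extraction technique; it only shows the standard WRR serving order, with made-up queues and weights.

```python
from collections import deque

def weighted_round_robin(queues, weights, num_slots):
    """Generic WRR service order (not the thesis's backward-extraction algorithm).

    Each queue i is visited in rounds and may send up to weights[i] packets per
    round; the frame is at most num_slots transmissions long.
    """
    schedule = []
    while len(schedule) < num_slots and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q and len(schedule) < num_slots:
                    schedule.append(q.popleft())
        # loop ends when all queues drain or the frame is full
    return schedule


flows = [deque(f"A{i}" for i in range(4)),   # weight 2
         deque(f"B{i}" for i in range(4)),   # weight 1
         deque(f"C{i}" for i in range(4))]   # weight 1
print(weighted_round_robin(flows, weights=[2, 1, 1], num_slots=8))
# ['A0', 'A1', 'B0', 'C0', 'A2', 'A3', 'B1', 'C1']
```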
An In-Depth Analysis of the Slingshot Interconnect
The interconnect is one of the most critical components in large scale
computing systems, and its impact on the performance of applications is going
to increase with the system size. In this paper, we describe Slingshot, an
interconnection network for large scale computing systems. Slingshot is based
on high-radix switches, which allow building exascale and hyperscale
datacenter networks with at most three switch-to-switch hops. Moreover,
Slingshot provides efficient adaptive routing and congestion control
algorithms, and highly tunable traffic classes. Slingshot uses an optimized
Ethernet protocol, which allows it to be interoperable with standard Ethernet
devices while providing high performance to HPC applications. We analyze the
extent to which Slingshot provides these features, evaluating it on
microbenchmarks and on several applications from the datacenter and AI worlds,
as well as on HPC applications. We find that applications running on Slingshot
are less affected by congestion compared to previous-generation networks.
Comment: To be published in Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC '20), 2020.
- …