13 research outputs found

    Packet Filtering Module For PFQ Packet Capturing Engine.

    The evolution of commodity hardware is pushing parallelism forward as the key factor that can allow software to attain hardware-class performance while still retaining its advantages. On one side, commodity CPUs provide more and more cores (the next-generation Intel Xeon E7500 CPUs will soon make 10-core processors a commodity product), with a complex cache hierarchy that makes cache-aware data placement crucial to good performance. On the other side, server NICs are adapting to these trends by increasing their own level of parallelism. While traditional 1 Gbps NICs exchanged data with the CPU through a single ring of shared memory buffers, modern 10 Gbps cards support multiple queues: multiple cores can therefore receive and transmit packets in parallel. In particular, incoming packets can be demultiplexed across CPUs based on a hash function (the so-called RSS technology) or on the MAC address (the VMDq technology, designed for servers hosting multiple virtual machines). The Linux kernel has recently begun to support these technologies. Although there is a lot of network monitoring software, most of it has not been designed with high parallelism in mind. Therefore a novel packet capturing engine, named PFQ, was designed that allows efficient capturing and in-kernel aggregation, as well as connection-aware load balancing. The engine is based on a novel lockless queue and allows parallel packet capturing, letting the user-space application arbitrarily define its degree of parallelism. Both legacy applications and natively parallel ones can therefore benefit from the capturing engine. In addition, PFQ outperforms its competitors both in terms of captured packets and CPU consumption. In this thesis, a new packet filtering block is designed, implemented, and added to the existing PFQ capture engine; it drops unnecessary packets before they are copied into kernel-space queues, which considerably improves the overall performance of the engine. Because network monitors often want only a small subset of network traffic, a dramatic performance gain is realized by filtering out unwanted packets in interrupt context.
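
    To make the early-filtering argument concrete, here is a minimal Python sketch of the idea, not PFQ's actual kernel module: a filter predicate runs in the receive path, so rejected packets are dropped before the engine pays for a copy. The packet layout and function names are hypothetical.

```python
from collections import deque

def make_port_filter(allowed_ports):
    """Build a predicate that keeps only packets addressed to given ports.
    The packet layout here is a toy dict, not a real sk_buff."""
    def keep(pkt):
        return pkt["dst_port"] in allowed_ports
    return keep

def rx_path(raw_packets, keep, capture_queue):
    """Model of the receive path: the filter runs *before* the copy, so a
    rejected packet costs one predicate call instead of a buffer copy."""
    dropped = 0
    for pkt in raw_packets:
        if keep(pkt):
            capture_queue.append(dict(pkt))  # the only copy we pay for
        else:
            dropped += 1  # dropped early, in "interrupt context"
    return dropped

capture_queue = deque()
packets = [{"dst_port": p, "payload": b"x" * 64} for p in (53, 80, 443, 8080)]
dropped = rx_path(packets, make_port_filter({80, 443}), capture_queue)
print(f"kept={len(capture_queue)} dropped={dropped}")  # kept=2 dropped=2
```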

    Deux défis des Réseaux Logiciels : Relayage par le Nom et Vérification des Tables (Two Challenges of Software Networks: Name-Based Forwarding and Table Verification)

    The Internet changed the lives of network users: not only does it affect users' habits, it is also increasingly shaped by their behavior. Several new services have been introduced during the past decades (e.g. file sharing, video streaming, cloud computing) to meet users' expectations. As a consequence, although the Internet infrastructure provides a good best-effort service for exchanging information in a point-to-point fashion, this is no longer the principal need of today's users. Current networks require major architectural changes in order to follow the upcoming requirements, but the experience of the past decades shows that bringing new features to the existing infrastructure may be slow. In this thesis work, we identify two main aspects of the Internet's evolution: a "behavioral" aspect, which refers to a change in the way users interact with the network, and a "structural" aspect, related to the evolution problem from an architectural point of view. The behavioral perspective states that there is a mismatch between the usage of the network and the actual functions it provides. While network devices implement the simple primitives of sending and receiving generic packets, users are really interested in different primitives, such as retrieving or consuming content. The structural perspective suggests that the problem of the slow evolution of the Internet infrastructure lies in its architectural design, which has been shown to be hardly upgradeable. On the one hand, to address the new network usage, the research community proposed the Named-Data Networking paradigm (NDN), which brings content-based functionalities to network devices. On the other hand, Software-Defined Networking (SDN) can be adopted to simplify the architectural evolution and shorten the upgrade time thanks to its centralized software control plane, at the cost of a higher network complexity that can easily introduce bugs. SDN verification is a novel research direction aiming to check the consistency and safety of network configurations by providing formal or empirical validation. This work consists of two parts. In the first part, we focus on the behavioral aspect by presenting the design and evaluation of "Caesar", a content router that advances the state of the art by implementing content-based functionalities that can coexist with real network environments. In the second part, we target network misconfiguration diagnosis, and we present a framework for the analysis of the network topology and forwarding tables, which can be used to detect the presence of a loop in real time and in real network environments.
    This thesis addresses problems related to two major aspects of the evolution of the Internet: the "behavioral" aspect, which corresponds to the new interactions between users and the network, and the "structural" aspect, related to changes in the Internet from an architectural point of view. The manuscript consists of an introductory chapter outlining the main research directions of this thesis work, followed by a chapter devoted to the state of the art on the two aspects mentioned above. Among the solutions proposed by the scientific community to adapt to the evolution of the Internet, two new network paradigms are described in particular: Information-Centric Networking (ICN) and Software-Defined Networking (SDN). The thesis continues with the proposal of "Caesar", a network device, inspired by ICN, capable of managing content distribution using routing primitives based on the names of data rather than on server addresses. Caesar is presented in two chapters, which describe the architecture and two of its main modules: forwarding and request-traceability management. The remainder of the manuscript describes a mathematical tool for the efficient detection of loops in an SDN network from a theoretical point of view. The improvements of the proposed algorithm over the state of the art are discussed. The thesis concludes with a summary of the main results obtained and a presentation of ongoing and future work.
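
    As an illustration of the loop-detection problem addressed in the second part, the following is a toy Python sketch under assumptions of our own (forwarding tables as plain dictionaries, one next hop per destination), not the thesis's verification framework: it walks the next-hop chain for a destination and reports the first repeated router.

```python
def find_forwarding_loop(tables, dst, start):
    """Walk the next-hop chain for destination `dst` starting at router `start`.
    tables: {router: {dst: next_hop}}; a next hop of None means "deliver here".
    Returns the cycle as a list of routers, or None if forwarding terminates."""
    path, index = [], {}
    node = start
    while node is not None:
        if node in index:  # same router seen twice: forwarding loop
            return path[index[node]:]
        index[node] = len(path)
        path.append(node)
        node = tables.get(node, {}).get(dst)  # missing entry -> packet dropped
    return None

# Toy topology where r2 and r3 forward to each other for one prefix.
tables = {
    "r1": {"10.0.0.0/8": "r2"},
    "r2": {"10.0.0.0/8": "r3"},
    "r3": {"10.0.0.0/8": "r2"},
}
print(find_forwarding_loop(tables, "10.0.0.0/8", "r1"))  # ['r2', 'r3']
```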

    Singleton: System-wide Page Deduplication in Virtual Environments

    We consider the problem of providing memory management in hypervisors and propose Singleton, a KVM-based system-wide page deduplication solution to increase memory usage efficiency. Specifically, we address the problem of double-caching that occurs in KVM: the same disk blocks are cached in both the host (hypervisor) and the guest (VM) page caches. Singleton's main components are identical-page sharing across guest virtual machines and an implementation of an exclusive cache for the host and guest page-cache hierarchy. We use and improve KSM (Kernel Samepage Merging) to identify and share pages across guest virtual machines. We utilize guest memory snapshots to scrub the host page cache and maintain a single copy of a page across the host and the guests. Singleton operates under a completely black-box assumption: we do not modify the guest or assume anything about its behaviour. We show that conventional operating system cache management techniques are sub-optimal for virtual environments, and how Singleton supplements and improves the existing Linux kernel memory management mechanisms. Singleton is able to improve the utilization of the host cache by reducing its size (by up to an order of magnitude) and increasing the cache-hit ratio (by a factor of 2). This translates into better VM performance (40% faster I/O). Singleton's unified page deduplication and host cache scrubbing reclaims large amounts of memory and facilitates higher levels of memory overcommitment. The optimizations to page deduplication we have implemented keep the overhead below 20% CPU utilization.
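
    The identical-page sharing component can be illustrated with a small sketch. This is a toy model of content-based page deduplication, not KSM's red-black-tree implementation: pages are keyed by a content hash and identical pages collapse to one canonical copy; the byte-for-byte verification and copy-on-write that real KSM performs are omitted.

```python
import hashlib

def deduplicate(pages):
    """Toy model of identical-page sharing: map each page's content hash to a
    single canonical copy. Real KSM compares candidate pages byte-for-byte and
    write-protects shared pages (copy-on-write); both are omitted here."""
    canonical = {}  # content digest -> shared page
    mapping = []    # per-page reference to its (possibly shared) copy
    for page in pages:
        digest = hashlib.sha256(page).digest()
        if digest not in canonical:
            canonical[digest] = page
        mapping.append(canonical[digest])
    return mapping, canonical

# Two zero-filled 4 KiB pages collapse into one physical copy.
pages = [b"\x00" * 4096, b"\x00" * 4096, b"config=1" + b"\x00" * 4088]
mapping, canonical = deduplicate(pages)
print(f"{len(pages)} pages -> {len(canonical)} physical copies")
```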

    Effective techniques for understanding and improving data structure usage

    Turing Award winner Niklaus Wirth famously noted, "Algorithms + Data Structures = Programs", and it follows that data structures should be carefully considered for effective application development. In fact, data structures are a main focus of program understanding, performance engineering, bug detection, and security enhancement. Our research is aimed at providing effective techniques for analyzing and improving data structure usage through fundamentally new approaches. First, detecting data structures: identifying which data structures are used within an application is a critical step toward application understanding and performance engineering. Second, selecting efficient data structures: analyzing data structures' behavior can reveal improper use of data structures and suggest alternatives better suited to the situation in which the application runs. Third, detecting memory leaks for data structures: tracking data accesses with little overhead, together with careful analysis of those accesses, can enable practical and accurate memory leak detection. Finally, offloading time-consuming data structure operations: by leveraging a dedicated helper thread that executes operations on behalf of the application thread, we can improve the overall performance of the application.
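
    The fourth technique, offloading, can be sketched as follows. This is a hypothetical toy, not the dissertation's system: a dedicated helper thread drains a queue of operations on behalf of the application thread.

```python
import threading, queue

class OffloadedSet:
    """Toy sketch of operation offloading: mutations are pushed to a queue and
    applied by a dedicated helper thread, so the application thread never
    blocks on the data structure itself. Membership queries here wait for the
    queue to drain for simplicity; a real design would be asynchronous."""
    def __init__(self):
        self._ops = queue.Queue()
        self._data = set()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            op, value = self._ops.get()
            if op == "add":
                self._data.add(value)
            elif op == "discard":
                self._data.discard(value)
            self._ops.task_done()

    def add(self, value):       # returns immediately; helper does the work
        self._ops.put(("add", value))

    def contains(self, value):  # wait for pending ops, then read
        self._ops.join()
        return value in self._data

s = OffloadedSet()
for i in range(1000):
    s.add(i)
print(s.contains(999))  # True
```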

    High Speed Networking In The Multi-Core Era

    High speed networking is a demanding task that has traditionally been performed on dedicated, purpose-built hardware or specialized network processors. These platforms sacrifice flexibility or programmability in favor of performance. Recently, there has been much interest in using multi-core general-purpose processors for this task, which have the advantage of being easily programmable and upgradeable. The best way to exploit these new architectures for networking is an open question that has been the subject of much recent research. In this dissertation, I explore the best way to exploit multi-core general-purpose processors for packet processing applications. This includes both new architectural organizations for the processors and changes to the systems software. I intend to demonstrate the efficacy of these techniques by using them to build an open and extensible network security and monitoring platform that can outperform existing solutions.
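
    One common way to exploit multiple cores for packet processing, in the spirit of the RSS mechanism mentioned in the PFQ abstract above, is to dispatch packets to worker threads by flow hash so that each connection stays on one core. A minimal sketch, with hypothetical packet fields and per-worker counters standing in for real processing:

```python
import threading, queue

NUM_WORKERS = 4

def flow_hash(pkt):
    """Map a flow identity to a worker so all packets of one connection land
    on the same core, preserving per-flow order (real NICs use a Toeplitz
    hash over the 5-tuple for the same purpose in RSS)."""
    return hash((pkt["src"], pkt["dst"], pkt["proto"])) % NUM_WORKERS

def worker(inbox, stats, idx):
    while True:
        pkt = inbox.get()
        if pkt is None:   # sentinel: shut down
            return
        stats[idx] += 1   # stand-in for real per-packet processing

queues = [queue.Queue() for _ in range(NUM_WORKERS)]
stats = [0] * NUM_WORKERS
threads = [threading.Thread(target=worker, args=(queues[i], stats, i))
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

packets = [{"src": f"10.0.0.{i % 7}", "dst": "10.0.1.1", "proto": 6}
           for i in range(100)]
for pkt in packets:
    queues[flow_hash(pkt)].put(pkt)  # dispatch by flow affinity

for q in queues:
    q.put(None)                      # one sentinel per worker
for t in threads:
    t.join()
print(stats)                         # per-worker packet counts
```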

    Σκακιστικές Μηχανές: Επισκόπηση Μεθοδολογιών και Υλοποίηση Προσέγγισης PVS/NNUE (Chess Engines: A Survey of Methodologies and an Implementation of a PVS/NNUE Approach)

    In this dissertation we study the primary components of chess engines, as well as the latest techniques for finding optimal moves using neural networks. The various board representations are analysed, with special emphasis on bitboards. We cover the main methods for generating moves using pre-calculated lookup tables and perfect-hashing techniques. The variants of the MCTS and PVS algorithms, and the ways in which they can be parallelised for faster execution, are discussed. Finally, the latest NNUE network architecture for static position evaluation is presented, together with its differences from more complex models such as AlphaZero. As part of this work, a chess engine was developed using the NNUE model, the PVS algorithm with the corresponding pruning optimizations, and the bitboard representation. Implementation details are provided in the last section.
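
    The bitboard representation emphasised above can be shown in a few lines. This is a hedged sketch rather than the engine described in the thesis: the board is a 64-bit integer (square a1 = bit 0), and knight attack sets are precomputed into a lookup table at startup.

```python
FILE_A = 0x0101010101010101
FILE_H = 0x8080808080808080
MASK64 = (1 << 64) - 1

def knight_attacks(sq):
    """Attack set for a knight on square sq (0..63, a1 = 0) as a 64-bit
    bitboard. Shifts that would wrap across the board edge are masked off."""
    b = 1 << sq
    not_a, not_h = ~FILE_A & MASK64, ~FILE_H & MASK64
    not_ab = ~(FILE_A | FILE_A << 1) & MASK64
    not_gh = ~(FILE_H | FILE_H >> 1) & MASK64
    attacks = ((b << 17) & not_a)  | ((b << 15) & not_h)  \
            | ((b << 10) & not_ab) | ((b << 6)  & not_gh) \
            | ((b >> 6)  & not_ab) | ((b >> 10) & not_gh) \
            | ((b >> 15) & not_a)  | ((b >> 17) & not_h)
    return attacks & MASK64

# Precomputed table, as engines build at startup; lookups are then O(1).
KNIGHT_TABLE = [knight_attacks(sq) for sq in range(64)]

def popcount(bb):
    return bin(bb).count("1")

print(popcount(KNIGHT_TABLE[0]))   # knight on a1 attacks 2 squares
print(popcount(KNIGHT_TABLE[27]))  # knight on d4 attacks 8 squares
```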

    Characterization, classification and alignment of protein-protein interfaces

    Protein structural models provide essential information for research on protein-protein interactions. In this dissertation, we describe two projects on the analysis of protein interactions using structural information. The focus of the first is to characterize and classify different types of interactions. We discriminate between biological obligate interactions, biological non-obligate interactions, and crystal packing contacts. To this end, we defined six interface properties and used them to compare the three types of interactions in a hand-curated dataset. Based on this analysis, a classifier named NOXclass was constructed using a support vector machine algorithm in order to predict interaction types. NOXclass was tested on a non-redundant dataset of 243 protein-protein interactions and reaches an accuracy of 91.8%. The program is beneficial to structural biologists for interpreting protein quaternary structures and forming hypotheses about the nature of protein-protein interactions when experimental data are not yet available. In the second part of the dissertation, we present Galinter, a novel program for the geometrical comparison of protein-protein interfaces. The Galinter program aims at identifying similar patterns of different non-covalent interactions at interfaces. It is a graph-based approach optimized for aligning non-covalent interactions. A scoring scheme was developed for estimating the statistical significance of the alignments. We tested the Galinter method on a published dataset of interfaces. Galinter alignments agree with those delivered by methods based on interface residue comparison and backbone structure comparison. In addition, we applied Galinter to four medically relevant examples of protein mimicry. Our results are consistent with previous human-curated analyses. The Galinter program provides an intuitive method for the comparative analysis and visualization of binding modes, and may assist in the prediction of interaction partners and the design and engineering of protein interactions and interaction inhibitors.
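
    To illustrate the NOXclass approach of classifying interaction types with a support vector machine, here is a small sketch using scikit-learn. The feature vectors are synthetic placeholders, not the paper's six interface properties or its curated dataset; only the overall pipeline (features, three classes, SVM, cross-validation) mirrors the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_features = 80, 6  # 6 stands in for the six interface properties

# Three interaction types: 0 = biological obligate, 1 = biological
# non-obligate, 2 = crystal packing; each drawn from a shifted Gaussian.
X = np.vstack([rng.normal(loc=shift, size=(n_per_class, n_features))
               for shift in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], n_per_class)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```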