
    An SNMP filesystem in userspace

    Modern computer networks are constantly increasing in size and complexity. Despite this, data networks are a critical factor for the success of many organizations. Monitoring their health and operation status is fundamental, and usually performed through specific network management architectures, developed and standardized over the last decades. On the other hand, file systems have become one of the best-known paradigms of human-computer interaction, and have been around since the early days of the personal computer industry. In this paper we propose a file system interface to network management information, allowing users to open, edit and visualize network and systems operation information.
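    The core idea above — mapping management objects onto a file hierarchy — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the OIDs are standard MIB-II objects, but the path layout and function names are invented, and a real tool would sit behind FUSE and query a live SNMP agent instead of a static dict.

```python
# Hypothetical sketch: exposing SNMP management data as a virtual file tree.
# sysName/sysUpTime are real MIB-II objects; everything else is illustrative.

MIB = {
    "1.3.6.1.2.1.1.5.0": ("system/sysName", "router-01"),
    "1.3.6.1.2.1.1.3.0": ("system/sysUpTime", "932311"),
}

def build_tree(mib):
    """Map each OID onto a slash-separated file path holding its value."""
    tree = {}
    for oid, (path, value) in mib.items():
        tree["/" + path] = {"oid": oid, "value": value}
    return tree

def read(tree, path):
    """'Open and visualize' a management object as if it were a file."""
    return tree[path]["value"]

def write(tree, path, value):
    """'Edit' a writable object; a real backend would issue an SNMP SET."""
    tree[path]["value"] = value

tree = build_tree(MIB)
print(read(tree, "/system/sysName"))  # router-01
```

    Reads and writes then become ordinary file operations from the user's point of view, which is the interaction model the paper advocates.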

    Advanced Tools for Performance Measurement

    This thesis presents the I/O layer of the Linux kernel and shows various tools for tuning and optimizing its performance. Several tools are presented and their usage and outputs are studied. The thesis then focuses on combining such tools to create a more applicable methodology for system analysis and monitoring. The practical part consists of applying SystemTap scripts to the blktrace subsystem and creating a fragmentation monitoring tool with graphical output.
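    A fragmentation monitor like the one described ultimately reduces to a metric over a file's extent list. The sketch below is an assumption about one reasonable metric (the fraction of discontiguous extent boundaries), not the thesis's actual tool; real extent data would come from e.g. the FIEMAP ioctl or filefrag.

```python
# Minimal fragmentation metric over a file's extent list (illustrative).
# A file whose extents are physically contiguous scores 0.0; every
# discontiguous boundary pushes the score toward 1.0.

def fragmentation(extents):
    """extents: list of (physical_start, length) blocks, in logical order."""
    if len(extents) <= 1:
        return 0.0
    gaps = 0
    for (start, length), (next_start, _) in zip(extents, extents[1:]):
        if start + length != next_start:  # discontiguous on disk
            gaps += 1
    return gaps / (len(extents) - 1)

print(fragmentation([(100, 8), (108, 8), (500, 8)]))  # 0.5
```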

    Analysis of storage performance, in memory and on input/output devices, from an execution trace

    Data storage is an essential resource for the computer industry. Storage devices must be fast and reliable to meet the growing demands of the data-driven economy. Storage technologies can be classified into two main categories: mass storage and main memory storage. Mass storage can store large amounts of data persistently. Data is saved locally on input/output devices, such as Hard Disk Drives (HDD) and Solid-State Drives (SSD), or remotely on distributed storage systems. Main memory storage temporarily holds the data needed by running programs. Main memory is characterized by its high access speed, essential for quickly providing data to the Central Processing Unit (CPU). Operating systems use several mechanisms to manage storage devices, such as disk schedulers and memory allocators. The processing time of a storage request is affected by the interaction between several subsystems, which complicates debugging. Existing tools, such as benchmarking tools, provide a general idea of overall system performance but do not accurately identify the causes of poor performance. Dynamic analysis through execution tracing is a solution for the detailed runtime analysis of storage systems. Tracing collects precise data about the internal behavior of the system, which helps detect performance problems that are otherwise difficult to identify. The goal of this thesis is to provide a tool to analyze storage performance, both in memory and on input/output devices, based on low-level trace events.
    The main challenges addressed by this tool are: collecting the required data using kernel and userspace tracing, limiting the overhead of tracing and the size of the generated traces, synchronizing the traces collected from different sources, providing multi-level analyses covering several aspects of storage performance, and lastly proposing abstractions allowing users to easily understand the traces. We carefully designed and inserted the instrumentation needed for the analyses. The tracepoints provide full visibility into the system and track the lifecycle of storage requests, from creation to processing. The Linux Trace Toolkit Next Generation (LTTng), a free and low-overhead tracer, is used for data collection. This tracer is characterized by its stability and its efficiency with highly parallel applications, thanks to the lock-free synchronization mechanisms used to update the contents of the trace buffers. We also contributed a patch that allows LTTng to capture the call stacks of userspace events.

    Performance Comparison of Clustered File Systems on Cloud Storage using GlusterFS and Ceph

    Rapid technological development has driven growing demand for data storage. One way to increase storage capacity is the clustered file system approach. This study compares upload/download and file write/read speeds of GlusterFS and Ceph. The file-transfer tests used a 500 MB file, ten runs, and the TeraCopy application. The results show that for file uploads GlusterFS was 11.5% faster than Ceph, with a higher average of 3.57 MB/s for GlusterFS versus 3.20 MB/s for CephFS; for file downloads GlusterFS was 11.3% faster, averaging 4.13 MB/s versus 3.71 MB/s for CephFS; for file writes GlusterFS was 106% faster, at 11.34 kB/s versus 8.05 kB/s; and for file reads GlusterFS was 37% faster, at 3.10 kB/s versus 2.25 kB/s. The analysis indicates that GlusterFS performed better using 2 nodes, each with virtual disks that can be combined to speed up performance, whereas Ceph was split across 3 nodes, one of which served as a MON holding the metadata storage and data pools, whose additional processing reduced the performance of that file system.
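    The quoted relative differences follow from the reported averages. A quick check of the upload and download figures (the other percentages use the same formula, though their quoted values do not all match the kB/s figures exactly, so only these two are reproduced here):

```python
# Relative speedup of rate a over rate b, in percent.
def percent_faster(a, b):
    return (a - b) / b * 100

upload = percent_faster(3.57, 3.20)    # GlusterFS vs CephFS upload, MB/s
download = percent_faster(4.13, 3.71)  # GlusterFS vs CephFS download, MB/s
print(round(upload, 1), round(download, 1))  # ~11.6 and 11.3, matching the
                                             # abstract's 11.5%/11.3% up to rounding
```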

    Participatory Cloud Computing: The Community Cloud Management Protocol

    This thesis takes an investigative approach to developing a middleware solution for managing services in a community cloud computing infrastructure predominantly made of interconnected low-power wireless devices. The thesis extends slightly outside this acute framing to ensure heterogeneity is accounted for. The developed framework, in its draft implementation, provides networks with value-added functionality in a way that minimally impacts nodes on the network. Two sub-protocols are developed and successfully implemented to achieve efficient discovery and allocation of community cloud resources. First results are promising: the systems developed show both low resource consumption and the ability to effectively transfer services through the network while evenly distributing load among the network's computing resources.
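    The load-distribution goal can be illustrated with the simplest possible allocation policy: place each service on the currently least-loaded discovered node. This is a generic sketch, not the thesis's actual sub-protocol; node names and the cost model are invented.

```python
# Illustrative least-loaded allocation: the simple policy behind
# "evenly distributing load amongst computing resources".

def allocate(nodes, cost):
    """nodes: {name: current_load}; place a service of given cost,
    return the chosen node."""
    chosen = min(nodes, key=nodes.get)
    nodes[chosen] += cost
    return chosen

nodes = {"n1": 0.2, "n2": 0.5, "n3": 0.1}
order = [allocate(nodes, 0.3) for _ in range(3)]
print(order)  # ['n3', 'n1', 'n3'] -- load stays balanced across nodes
```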

    Berkeley Packet Filter: theory, practice and perspectives

    Initially, in many versions of Unix, the packet-filtering mechanism was implemented in user space, requiring every packet to be copied out of kernel space before being filtered. The introduction of BPF improved performance by allowing packets to be filtered directly in kernel space. The first implementation, classic BPF (cBPF), lets user space inject assembly code for a virtual machine residing in the kernel, so that programmers can write custom filters. The use of cBPF was also extended to system-call filtering through the implementation of seccomp, which reuses the same syntax but operates on a data structure representing the executed system call instead of a network packet. The virtual machine was later rewritten, both to match innovations in modern processor architectures (e.g. more registers, more instructions) and to add further functionality. The new implementation, named extended BPF (eBPF), supports many more program types, maps for communication between user space and kernel space, the ability for a BPF program to call other BPF programs or a subset of kernel functions called helpers, and the ability to pin objects in a virtual file system so they can be retrieved later. eBPF support for seccomp has not been introduced, although some patches have been proposed. This thesis aims to introduce cBPF and eBPF, both theoretically, by describing their components, and practically, through the creation of a GitHub repository and technical documentation, and finally to analyze some aspects related to seccomp.
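    The execution model that makes cBPF safe to run in the kernel — a tiny accumulator machine that loads packet bytes, branches, and returns an accept/drop verdict — can be mimicked in a few lines. The opcode set below is a made-up simplification, far cruder than the real cBPF instruction format; real filters are attached to a socket via setsockopt with SO_ATTACH_FILTER.

```python
# Toy accumulator machine in the spirit of cBPF (hypothetical opcodes).

def run_filter(program, packet):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ld":        # load one packet byte at offset arg
            acc = packet[arg]
        elif op == "jeq":     # if acc == arg, skip the next instruction
            if acc == arg:
                pc += 1
        elif op == "ret":     # 0 = drop, nonzero = number of bytes to accept
            return arg
        pc += 1
    return 0

# Accept only frames whose byte 12 is 0x08 (first byte of the IPv4
# EtherType in an Ethernet header), drop everything else.
prog = [("ld", 12), ("jeq", 0x08), ("ret", 0), ("ret", 0xFFFF)]
print(run_filter(prog, bytes(12) + b"\x08"))  # 65535 (accept)
print(run_filter(prog, bytes(13)))            # 0 (drop)
```

    eBPF keeps this verdict-returning model but generalizes it with multiple registers, maps, helper calls, and tail calls, as described above.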

    Engineering Trustworthy Systems by Minimizing and Strengthening their TCBs using Trusted Computing

    The Trusted Computing Base (TCB) is the part of an IT system that is responsible for enforcing a certain security property of the system. In order to engineer a trustworthy system, the TCB must be as secure as possible. This can be achieved by reducing the number, size and complexity of the components that are part of the TCB and by using hardened components within it. In the worst case, the TCB spans the complete IT system; in the best case, it is reduced to a strengthened Root of Trust such as a Hardware Security Module (HSM). One such very secure and capable HSM is the Trusted Platform Module (TPM). This thesis demonstrates how the TCB of a system can be largely, or even solely, reduced to the TPM for a variety of security policies, especially in the embedded domain. The examined scenarios include securing device-resident data at rest, including during firmware updates; enforcing firmware product lines at runtime; securing payment credentials in Plug and Charge controllers; recording audit trails over attestation data; and a very generic role-based access management. In order to allow evaluating these different solutions, the notion of a dynamic lifecycle dimension for a TCB is introduced. Furthermore, an approach towards engineering such systems based on a formal framework is presented. These scenarios provide evidence of the potential to enforce even complex security policies in small, and thus strong, TCBs. The approach for implementing those policies can often be inspired by a formal-methods-based engineering process or by means of additive functional engineering, where a base system is expanded with increased functionality in each step. In either case, a trustworthy system with high assurance capabilities can be achieved.
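    To make the role-based access management scenario concrete, here is a minimal sketch of the policy check itself. The roles and permission strings are invented for illustration; the thesis's point is that such a policy can be enforced inside a TPM-backed TCB rather than in application code.

```python
# Minimal role-based access check (hypothetical roles/permissions).

ROLE_PERMS = {
    "auditor":  {"audit:read"},
    "operator": {"audit:read", "firmware:update"},
}

def allowed(user_roles, permission):
    """Grant iff any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMS.get(r, set()) for r in user_roles)

print(allowed(["auditor"], "firmware:update"))   # False
print(allowed(["operator"], "firmware:update"))  # True
```

    Shrinking the TCB then means moving the data behind `ROLE_PERMS` and the evaluation of `allowed` into the hardened component, so a compromised application cannot bypass the check.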