73 research outputs found

    OS diversity for intrusion tolerance: Myth or reality?

    One of the key benefits of using intrusion-tolerant systems is the possibility of ensuring correct behavior in the presence of attacks and intrusions. These security gains are directly dependent on the components exhibiting failure diversity. The extent to which failure diversity is observed in practical deployments depends on how diverse the components that constitute the system are. In this paper we present a study of operating system (OS) vulnerability data from the NIST National Vulnerability Database. We analyzed the vulnerabilities of 11 different OSes over a period of roughly 15 years to check how many of these vulnerabilities occur in more than one OS. We found this number to be low for several combinations of OSes. Hence, our analysis provides a strong indication that building a system with diverse OSes may be a useful technique to improve its intrusion tolerance capabilities.
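The core measurement in this study — how many vulnerabilities are shared between any two OSes — can be sketched as a small analysis over NVD-style records. The CVE identifiers and affected-OS sets below are invented placeholders, not data from the paper:

```python
from itertools import combinations

# Hypothetical NVD-style records: each CVE lists the operating systems it affects.
cves = {
    "CVE-A": {"OpenBSD", "NetBSD", "FreeBSD"},
    "CVE-B": {"Windows"},
    "CVE-C": {"Debian", "Ubuntu"},
    "CVE-D": {"Windows", "Debian"},
}

def shared_vulnerabilities(cves, os_a, os_b):
    """Count CVEs that affect both operating systems of a candidate pair."""
    return sum(1 for affected in cves.values() if os_a in affected and os_b in affected)

# Rank OS pairs by overlap: pairs sharing few CVEs are better candidates
# for a diverse, intrusion-tolerant replica set.
oses = sorted(set().union(*cves.values()))
pairs = sorted(combinations(oses, 2), key=lambda p: shared_vulnerabilities(cves, *p))
for a, b in pairs[:3]:
    print(a, b, shared_vulnerabilities(cves, a, b))
```

With real NVD data the affected-OS sets would be derived from each CVE's CPE configuration entries rather than hand-written.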

    Benchmarking of bare metal virtualization platforms on commodity hardware

    In recent years, System Virtualization has become a fundamental IT tool, whether it is type-2/hosted virtualization, mostly exploited by end-users in their personal computers, or type-1/bare metal, well established in IT departments and thoroughly used in modern datacenters as the very foundation of cloud computing. Though bare metal virtualization is meant to be deployed on server-grade hardware (for performance, stability and reliability reasons), properly configured desktop-class systems are often used as virtualization “servers”, due to their attractive performance/cost ratio. This paper presents the results of a study conducted on such systems, about the performance of Windows 10 and Ubuntu Server 16.04 guests, when deployed in what we believe are the type-1 platforms most in use today: VMware ESXi, Citrix XenServer, Microsoft Hyper-V, and KVM-based (represented by oVirt and Proxmox). Performance is measured using three synthetic benchmarks: PassMark for Windows, UnixBench for Ubuntu Server, and the cross-platform Flexible I/O Tester. The benchmark results may be used to choose the most adequate type-1 platform (performance-wise), depending on the guest OS, its performance requirements (CPU-bound, IO-bound, or balanced) and the storage type (local/remote) used.

    Evaluation of type-1 hypervisors on desktop-class virtualization hosts

    System Virtualization has become a fundamental IT tool, whether it is type-2/hosted virtualization, mostly exploited by end-users in their personal computers, or type-1/bare metal, well established in IT departments and thoroughly used in modern datacenters as the very foundation of cloud computing. Though bare metal virtualization is meant to be deployed on server-grade hardware (for performance, stability and reliability reasons), properly configured desktop-class systems or workstations are often used as virtualization servers, due to their attractive performance/cost ratio. This paper presents the results of a study conducted on commodity virtualization servers, aiming to assess the performance of a representative set of the type-1 platforms most in use today: VMware ESXi, Citrix XenServer, Microsoft Hyper-V, oVirt and Proxmox. Hypervisor performance is indirectly measured through synthetic benchmarks performed on Windows 10 LTSB and Linux Ubuntu Server 16.04 guests: PassMark for Windows, UnixBench for Linux, and the cross-platform Flexible I/O Tester and iPerf3 benchmarks. The evaluation results may be used to guide the choice of the best type-1 platform (performance-wise), depending on the predominant guest OS, the performance pattern (CPU-bound, IO-bound, or balanced) of that OS, its storage type (local/remote) and the required network-level performance.
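The selection step described at the end of the abstract — picking a platform given a workload profile — amounts to weighting normalized benchmark scores. The following sketch uses made-up scores purely for illustration; real values would come from the PassMark, UnixBench, fio and iPerf3 runs:

```python
# Illustrative (not measured) benchmark scores per hypervisor; higher is better.
scores = {
    "ESXi":      {"cpu": 9500, "io": 410, "net": 9.4},
    "XenServer": {"cpu": 9100, "io": 380, "net": 9.1},
    "Hyper-V":   {"cpu": 9300, "io": 430, "net": 9.0},
    "Proxmox":   {"cpu": 9400, "io": 440, "net": 9.3},
}

def best_platform(scores, weights):
    """Pick the hypervisor with the highest weighted, normalized score.

    Each metric is normalized against the best result for that metric, so
    the weights express the workload profile (CPU-bound, IO-bound, balanced).
    """
    maxima = {m: max(s[m] for s in scores.values()) for m in weights}
    def weighted(platform):
        return sum(w * scores[platform][m] / maxima[m] for m, w in weights.items())
    return max(scores, key=weighted)

print(best_platform(scores, {"cpu": 1.0}))                       # CPU-bound guest
print(best_platform(scores, {"cpu": 0.2, "io": 0.6, "net": 0.2}))  # IO-heavy guest
```

Normalizing per metric keeps incommensurable units (PassMark points, MB/s, Gbit/s) from dominating each other in the weighted sum.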

    The development of an open-source forensics platform

    The rate at which technology evolves by far outpaces the rate at which methods are developed to prevent and prosecute digital crime. This unfortunate situation may potentially allow computer criminals to commit crimes using technologies for which no proper forensic investigative technique currently exists. Such a scenario would ultimately allow criminals to go free due to the lack of evidence to prove their guilt. A solution to this problem would be for law enforcement agencies and governments to invest in the research and development of forensic technologies in an attempt to keep pace with the development of digital technologies. Such an investment could potentially allow new forensic techniques to be developed and released more frequently, thus matching the appearance of new computing devices on the market. A key element in improving the situation is to produce more research results with fewer resources, by performing research more efficiently. This can be achieved by improving the process used to conduct forensic research. One of the problem areas in research and development is the development of prototypes to prove a concept or to test a hypothesis. An in-depth understanding of the extremely technical aspects of operating systems, such as file system structures and memory management, is required to allow forensic researchers to develop prototypes to prove their theories and techniques. The development of such prototypes is an extremely challenging task. It is complicated by the presence of minute details that, if ignored, may have a negative impact on the accuracy of results produced. If some of the complexities experienced in the development of prototypes could simply be removed from the equation, researchers may be able to produce more and better results with less effort, and thus ultimately speed up the forensic research process.
    This dissertation describes the development of a platform that facilitates the rapid development of forensic prototypes, thus allowing researchers to produce such prototypes using less time and fewer resources. The purpose of the platform is to provide a set of rich features which are likely to be required by developers performing research prototyping. The proposed platform contributes to the development of prototypes using fewer resources and at a faster pace. The development of the platform, as well as various considerations that helped to shape its architecture and design, are the focus points of this dissertation. Topics such as digital forensic investigations, open-source software development, and the development of the proposed forensic platform are discussed. Another purpose of this dissertation is to serve as a proof-of-concept for the developed platform. The development of a selection of forensics prototypes, as well as the results obtained, are also discussed. Dissertation (MSc), Computer Science, University of Pretoria, 2009.
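The idea of hiding low-level operating-system detail behind a platform so that a research prototype shrinks to a small analysis routine can be sketched with a plugin registry. The API names and the toy analyzer below are hypothetical, not taken from the dissertation:

```python
# Hypothetical plugin-style API: the platform parses the evidence, and a
# researcher's prototype is just a function registered for an evidence type.
registry = {}

def analyzer(evidence_type):
    """Register a prototype analysis routine for one kind of evidence."""
    def register(func):
        registry.setdefault(evidence_type, []).append(func)
        return func
    return register

@analyzer("file_listing")
def find_suspicious_extensions(files):
    # Toy prototype: flag executables disguised with a double extension.
    return [f for f in files if f.lower().endswith((".jpg.exe", ".pdf.exe"))]

def run_prototypes(evidence_type, data):
    """The platform dispatches parsed evidence to every registered prototype."""
    return {f.__name__: f(data) for f in registry.get(evidence_type, [])}

results = run_prototypes("file_listing", ["report.pdf", "holiday.jpg.exe"])
print(results)
```

The point of the indirection is that the prototype never touches file-system structures or memory layouts directly; the platform owns that complexity.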

    Auto-tuning compiler options for HPC


    Measuring the Energy Consumption of Software written in C on x86-64 Processors

    In 2016, German data centers consumed 12.4 terawatt-hours of electrical energy, which accounts for about 2% of Germany’s total energy consumption in that year. By 2020 this had risen to 16 terawatt-hours, or 2.9% of Germany’s total energy consumption. The ever-increasing energy consumption of computers consequently leads to considerations of how to reduce it, to save energy and money and to protect the environment. This thesis aims to answer fundamental questions about the energy consumption of software, e.g. how, and how precisely, a measurement can be taken, or whether CPU load and energy consumption are correlated. An overview of measurement methods and the related software tooling was created. The most promising approach, using software called 'Scaphandre', was chosen as the main basis and further developed. Different sorting algorithms were benchmarked to study their behavior regarding energy consumption. The resulting dataset was also used to answer the fundamental questions stated at the beginning. A replication and reproduction package is provided to enable the reproducibility of the results.

    Modern web-programming language concurrency

    This Master's thesis compares Elixir, Go and JavaScript (Node.js) as programming language candidates for writing concurrent RESTful web service backends. First we describe each of the languages. Next we compare the functional concurrency characteristics of the languages to each other. Finally, we perform scalability testing for each of the languages using the Locust.io framework. For testing purposes we introduce simple REST API implementations for each of the languages. The tests showed that JavaScript performed the worst of the three languages and that Go was the most verbose language to program with.
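The scalability testing described here — many concurrent clients hammering a REST endpoint and measuring throughput — is what Locust automates. A stdlib-only sketch of the same idea, with an arbitrary request count and thread pool size chosen for illustration:

```python
import json
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class ApiHandler(BaseHTTPRequestHandler):
    """A trivial REST-style endpoint answering every GET with a JSON body."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep benchmark output clean
        pass

# Start the server on an OS-assigned port in a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def hit(_):
    with urlopen(url) as resp:
        return resp.status

# Fire 200 requests from 8 concurrent workers and report throughput.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    statuses = list(pool.map(hit, range(200)))
elapsed = time.perf_counter() - start
print(f"{len(statuses)} requests in {elapsed:.2f} s ({len(statuses) / elapsed:.0f} req/s)")
server.shutdown()
```

Locust adds what this sketch lacks: ramp-up of simulated users, latency percentiles, and a live results UI, which is why it suits cross-language comparisons.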