
    Virtual Machine Lifecycle Management in Grid and Cloud Computing

    Virtualization technology is the foundation of two important concepts: Virtualized Grid Computing and Cloud Computing. The former is an extension of classic Grid Computing; it aims to meet the requirements of commercial Grid users regarding the isolation of concurrently executed batch jobs and the security of the associated data. Applications are executed in virtual machines in order to isolate them from one another and to protect the data they process from other users. In addition, Virtualized Grid Computing solves the software deployment problem, one of the open problems of classic Grid Computing.

    Cloud Computing is another concept for using remote resources. With respect to Cloud Computing, this dissertation focuses on the Infrastructure as a Service model, which combines ideas from (Virtualized) Grid Computing with a novel business model: virtual machines are provided on demand, and only actual usage is billed. The use of virtualization technology increases the utilization of the underlying physical machines and simplifies their administration. For example, a virtual machine can be cloned, or a snapshot of a virtual machine can be taken in order to return to a defined state later. However, not all problems related to virtualization technology have been solved; in particular, its use in the highly dynamic environments of Virtualized Grid Computing and Cloud Computing poses new challenges.

    This dissertation addresses several aspects of the use of virtualization technology in Virtualized Grid and Cloud Computing environments. First, the lifecycle of virtual machines in these environments is examined, and models of this lifecycle are developed. Based on these models, problems are identified and solutions are developed, focusing on the storage, deployment, and execution of virtual machines. Virtual machines are usually stored in so-called disk images, i.e., images of virtual hard disks. This format affects not only the storage of large numbers of virtual machines but also their deployment. In the environments examined it has two concrete disadvantages: it wastes storage space, and it prevents efficient deployment of virtual machines. Measures to increase the security of virtual machines affect all three of these areas. For example, before a virtual machine is deployed, the software installed in it should be checked for being up to date. Furthermore, the execution environment should provide means to effectively monitor the virtual infrastructure.

    The first solution presented in this dissertation is the concept of image composition. It describes the composition of a combined disk image from multiple layers. Parts of individual layers that are used by several virtual machines can thus be shared among them, reducing the overall storage requirements. The Marvin Image Compositor is the implementation of this concept.

    The second solution is the Marvin Image Store, a storage system for virtual machines that is not based on the traditionally used disk images but instead stores the contained data and metadata separately from each other in an efficient way. Furthermore, four solutions are presented that improve the security of virtual machines. The Update Checker identifies outdated software in virtual machines, regardless of whether the respective virtual machine is currently running or not. The second security solution makes it possible to centrally update multiple virtual machines that are based on the image composition concept: installing a new software version once is sufficient to bring several virtual machines up to date. The third security solution, the Online Penetration Suite, automatically scans virtual machines for vulnerabilities. The purpose of the fourth security solution is to monitor the virtual infrastructure at all levels; in addition to monitoring, it enables automatic reactions to security-relevant events. Finally, a method for migrating virtual machines is presented that enables efficient migration even without a central storage system.
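
    The layering idea behind image composition can be pictured with copy-on-write disk images: several virtual machines derive from one shared, read-only base layer, and each machine stores only the blocks it actually modifies. The sketch below illustrates this general technique with qcow2 backing files via qemu-img; it is a minimal illustration under assumed file names, not the Marvin Image Compositor itself.

        import subprocess

        def create_overlay(base_image: str, overlay_image: str) -> None:
            """Create a copy-on-write overlay that shares all unmodified
            blocks with the read-only base image."""
            subprocess.run(
                ["qemu-img", "create", "-f", "qcow2",
                 "-b", base_image, "-F", "qcow2", overlay_image],
                check=True,
            )

        # One shared base layer (a common OS installation; hypothetical file name).
        BASE = "debian-base.qcow2"

        # Several per-VM overlays: the base layer is stored once, and each
        # overlay grows only with the blocks its virtual machine writes.
        for vm in ("vm-a", "vm-b", "vm-c"):
            create_overlay(BASE, f"{vm}.qcow2")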

    Emerging Risks in the Marine Transportation System (MTS), 2001-2021

    How has maritime security evolved since 2001, and what challenges exist moving forward? This report provides an overview of the current state of maritime security with an emphasis on port security. It examines new risks that have arisen over the last twenty years, the different types of security challenges these risks pose, and how practitioners can better navigate these challenges. Building on interviews with 37 individuals immersed in maritime security protocols, we identify five major challenges in the modern maritime security environment: (1) new domains for exploitation, (2) big data and information processing, (3) attribution challenges, (4) technological innovations, and (5) globalization. We explore how these challenges increase the risk of small-scale, high-probability incidents against an increasingly vulnerable Marine Transportation System (MTS). We conclude by summarizing several measures that can build resilience and mitigate these risks.

    Mapping the Focal Points of WordPress: A Software and Critical Code Analysis

    Programming languages or code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds potential as a theoretical lens and methodological toolkit for understanding computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows this. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence. Recurring themes throughout these analyses and observations are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have on WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code. Ultimately, this project helps further enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques for applying critical code methods.

    Evaluating and quantifying the feasibility and effectiveness of whole IT system moving target defenses

    Doctor of Philosophy, Computing and Information Sciences, Scott A. DeLoach and Xinming (Simon) Ou.
    The Moving Target Defense (MTD) concept has been proposed as an approach to rebalance the security landscape by increasing uncertainty and apparent complexity for attackers, reducing their window of opportunity, and raising the costs of their reconnaissance and attack efforts. Intuitively, applying MTD techniques to a whole IT system should enhance its security; however, little research has been done to show that this is feasible or beneficial. This dissertation presents an MTD platform at the whole IT system level in which any component of the IT system can be automatically and reliably replaced with a fresh new one. A component is simply a virtual machine (VM) instance or a cluster of instances. Leveraging such an MTD platform has a number of security benefits. Replacing a VM instance with a new one running the most up-to-date operating system and applications eliminates security problems caused by unpatched vulnerabilities, along with all the privileges the attacker has obtained on the old instance. Configuration parameters for the new instance, such as the IP address, port numbers for services, and credentials, can be changed from the old ones, invalidating the knowledge attackers have already obtained and forcing them to redo the work needed to compromise the new instance. In spite of these obvious security benefits, building a system that supports live replacement with minimal to no disruption to the IT system's normal operations is difficult. Modern enterprise IT systems have complex dependencies among services, so changing even a single instance will almost certainly disrupt the dependent services. Therefore, the replacement of instances must be carefully orchestrated with updates to the settings of the dependent instances. This orchestration of changes is notoriously error-prone if done manually, yet limited tool support is available to automate the process. We designed and built a framework (ANCOR) that captures the requirements and needs of a whole IT system (in particular, the dependencies among its services) and compiles them into a working IT system. ANCOR is at the core of the proposed MTD platform (ANCOR-MTD) and enables automated live instance replacement. To evaluate the platform's practicality, this dissertation presents a series of experiments on multiple IT systems that show negligible (statistically non-significant) performance impacts. To evaluate the platform's efficacy, this research analyzes costs versus security benefits by quantifying the outcome (the sizes of potential attack windows) in terms of the number of adaptations, and demonstrates that an IT system deployed and managed using the proposed MTD platform increases attack difficulty.
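
    The orchestration problem described above, replacing a live instance while every dependent service's configuration is updated in step, can be sketched as a walk over a dependency graph. The following is a simplified, hypothetical illustration of that idea, not the actual ANCOR API; the class names, helper stubs, and addresses are invented for the example.

        import itertools
        import secrets
        from dataclasses import dataclass

        # Hypothetical pool of fresh IP addresses for newly provisioned instances.
        _ip_pool = (f"10.0.0.{i}" for i in itertools.count(10))

        @dataclass
        class Instance:
            role: str
            ip: str
            credentials: str

        class ITSystem:
            def __init__(self, depends_on):
                self.depends_on = depends_on   # role -> set of roles it depends on
                self.instances = {}            # role -> currently live Instance

            def provision(self, role):
                # Stand-in for booting a fresh VM from the latest, patched image.
                inst = Instance(role, next(_ip_pool), secrets.token_hex(16))
                self.instances[role] = inst
                return inst

            def dependents_of(self, role):
                return [r for r, deps in self.depends_on.items() if role in deps]

            def replace(self, role):
                """Live-replace one instance, then reconfigure every dependent."""
                old = self.instances[role]
                fresh = self.provision(role)   # new IP and credentials invalidate
                                               # everything an attacker learned
                for dep in self.dependents_of(role):
                    # Stand-in for pushing new endpoints/credentials to dependents.
                    print(f"reconfigure {dep}: use {fresh.ip}")
                # Destroying the old instance removes any foothold gained on it.
                print(f"decommission old {role} at {old.ip}")

        # Example: 'web' depends on 'db', so replacing 'db' triggers a web update.
        system = ITSystem({"web": {"db"}, "db": set()})
        system.provision("db")
        system.provision("web")
        system.replace("db")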

    Integration of generic operating systems in partitioned architectures

    Master's thesis, Informatics Engineering (Architecture, Systems and Computer Networks), Universidade de Lisboa, Faculdade de Ciências, 2009.
    The Integrated Modular Avionics (IMA) specification defines a partitioned environment hosting multiple avionics functions of different criticalities on a shared computing platform. ARINC 653, one of the specifications related to the IMA concept, defines a standard interface between the software applications and the underlying operating system. Both these specifications come from the world of civil aviation, but they are attracting interest from space industry partners, who have identified requirements in common with those of aeronautic applications. Within the scope of this interest, the AIR architecture was defined, under a contract from the European Space Agency (ESA). AIR provides temporal and spatial segregation, and foresees the use of different operating systems in each partition. Temporal segregation is achieved through the fixed cyclic scheduling of computing resources to partitions. The present work extends the foreseen partition operating system (POS) heterogeneity to generic non-real-time operating systems. This was motivated by documented difficulties in porting applications to RTOSs, and by the notion that proper integration of a non-real-time POS will not compromise the timeliness of critical real-time functions. For this purpose, Linux is used as a case study. An embedded variant of Linux is built and evaluated regarding its adequacy as a POS in the AIR architecture. To guarantee safe integration, a solution based on the Linux paravirtualization interface, paravirt-ops, is proposed. In the course of these activities, the AIR architecture definition was also subject to improvements. The most significant one, motivated by the intended increase in POS heterogeneity, was the introduction of a new component, the AIR Partition OS Adaptation Layer (PAL). The AIR PAL gives the major components of the AIR architecture greater POS-independence, easing their independent certification efforts. Other improvements provide enhanced timeliness mechanisms, such as mode-based schedules and process deadline violation monitoring.
    ESA/ITI - European Space Agency Innovation Triangular Initiative (through ESTEC Contract 21217/07/NL/CB-Project AIR-II) and FCT - Fundação para a Ciência e Tecnologia (through the Multiannual Funding Programme).
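
    The fixed cyclic scheduling that provides temporal segregation can be pictured as a repeating major time frame divided into fixed partition windows, as in the ARINC 653 scheduling model. The simulation below is a deliberately simplified sketch with invented partition names and window lengths; it is not AIR code.

        import time

        # A major time frame is a fixed, repeating schedule of partition
        # windows; each partition only ever runs inside its own windows,
        # which segregates criticality levels in time.
        MAJOR_FRAME = [  # (partition, window length in seconds); invented values
            ("P1-flight-control", 0.050),
            ("P2-payload",        0.030),
            ("P3-linux-pos",      0.020),  # a non-real-time POS confined to a window
        ]

        def run_major_frames(frames: int) -> None:
            for frame in range(frames):
                for partition, window in MAJOR_FRAME:
                    start = time.monotonic()
                    print(f"frame {frame}: {partition} runs for {window*1000:.0f} ms")
                    # A real system would dispatch the partition's processes here;
                    # we simply burn the window so the frame length stays fixed.
                    while time.monotonic() - start < window:
                        pass

        run_major_frames(2)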

    Dealing with Next-Generation Malware

    Malicious programs are a serious problem that threatens the security of billions of Internet users. Today's malware authors are motivated by the easy financial gain they can obtain by selling information stolen from infected hosts on the underground market. To maximize their profit, miscreants continuously improve their creations to make them more and more resilient against anti-malware solutions. This increasing sophistication in malicious code has led to next-generation malware, a new class of threats that exploit the limitations of state-of-the-art anti-malware products to bypass security protections and eventually evade detection. Unfortunately, current anti-malware technologies are inadequate to face next-generation malware. For this reason, in this dissertation we propose novel techniques to address the shortcomings of defensive technologies and to enhance current state-of-the-art security solutions.

    Dynamic behavior-based analysis is a very promising approach to automatically understanding the behaviors a malicious program may exhibit at run-time. However, behavior-based solutions still present several limitations. First of all, these techniques may give incomplete results because the execution environments in which they are applied are synthetic and do not faithfully resemble the environments of end-users, the intended targets of the malicious activities. To overcome this problem, we present a new framework for improving behavior-based analysis of suspicious programs that allows an end-user to delegate to security labs the execution and analysis of a program, while forcing the program to behave as if it were executed directly in the end-user's environment. Our evaluation demonstrated that the proposed framework allows security labs to improve the completeness of the analysis by analyzing a piece of malware on behalf of multiple end-users simultaneously, while performing a fine-grained analysis of the behavior of the program at no computational cost to the end-users.

    Another drawback of state-of-the-art defensive solutions is non-transparency: malicious programs are often able to determine that their execution is being monitored, and thus they can tamper with the analysis to avoid detection, or simply behave innocuously to mislead the anti-malware tool. To this end, we propose a generic framework to perform complex dynamic system-level analyses of deployed production systems. By leveraging the hardware support for virtualization available on all commodity machines today, our framework is completely transparent to the system under analysis and guarantees isolation of the analysis tools running on top of it. The internals of the kernel of the running system need not be modified, and the whole platform runs unaware of the framework. Once the framework has been installed, even kernel-level malware cannot detect it or affect its execution. This is accomplished by installing a minimalistic virtual machine monitor and migrating the system, as it runs, into a virtual machine. To demonstrate the potential of our framework we developed an interactive kernel debugger, named HyperDbg. As HyperDbg can be used to monitor any critical system component, it is suitable for analyzing even malicious programs that include kernel-level modules.

    Despite all the progress anti-malware technologies can make, perfect malware detection remains an undecidable problem. When it is not possible to prevent a malicious threat from infecting a system, post-infection remediation remains the only viable possibility. However, if the machine has already been compromised, the execution of the remediation tool could be tampered with by the malware running on the system. To address this problem we present Conqueror, a software-based attestation scheme for tamper-proof code execution on untrusted legacy systems. Besides providing load-time attestation of a piece of code, Conqueror also ensures run-time integrity. Conqueror constitutes a valid alternative to trusted computing platforms for systems lacking specialized attestation hardware. We implemented a prototype, specific to the Intel x86 architecture, and evaluated the proposed scheme. Our evaluation showed that, compared to competitors, Conqueror is resistant to both static and dynamic attacks.

    We believe Conqueror and our transparent dynamic analysis framework constitute important building blocks for new security applications. To demonstrate this claim, we leverage the aforementioned solutions to realize HyperSleuth, an infrastructure for securely performing live forensic analysis of potentially compromised production systems. HyperSleuth provides a trusted execution environment that guarantees that an attacker controlling the system can neither interfere with the analysis nor tamper with the results. The framework can be installed as the system runs, without a reboot and without losing any volatile data. Moreover, the analysis can be periodically and safely interrupted to resume normal execution of the system. On top of HyperSleuth we implemented three forensic analysis tools: a lazy physical memory dumper, a lie detector, and a system call tracer. Our experimental evaluation demonstrated that even time-consuming analyses, such as dumping the content of physical memory, can be performed securely without interrupting the services offered by the system.
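
    The load-time attestation idea mentioned above can be illustrated with a toy challenge-response protocol: a verifier sends a fresh nonce, and the untrusted machine must return a checksum computed over its code combined with that nonce, so the answer cannot be precomputed or replayed. The sketch below shows only this generic idea under invented values; Conqueror's actual scheme additionally relies on self-checksumming, obfuscated code and timing measurements, none of which are shown here.

        import hashlib
        import os

        def attest(code: bytes, nonce: bytes) -> bytes:
            """Checksum the code region mixed with a fresh nonce, so the
            response cannot be precomputed or replayed."""
            return hashlib.sha256(nonce + code).digest()

        # Verifier side: knows the expected code image.
        expected_code = b"\x90\x90\xc3"   # stand-in for the real code bytes
        nonce = os.urandom(16)            # fresh challenge per attestation

        # Prover side: must compute the checksum over what is actually loaded.
        loaded_code = b"\x90\x90\xc3"     # what the untrusted machine reports
        response = attest(loaded_code, nonce)

        if response == attest(expected_code, nonce):
            print("load-time attestation passed: code image is untampered")
        else:
            print("attestation failed: code image was modified")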

    The Translocal Event and the Polyrhythmic Diagram

    This thesis identifies and analyses the key creative protocols in translocal performance practice, and ends with suggestions for new forms of transversal live and mediated performance practice, informed by theory. It argues that ontologies of emergence in dynamic systems nourish contemporary practice in the digital arts. Feedback in self-organised, recursive systems and organisms elicits change, and change transforms. The arguments trace concepts from chaos and complexity theory to virtual multiplicity, relationality, intuition and individuation (in the work of Bergson, Deleuze, Guattari, Simondon, Massumi, and other process theorists). It then examines the intersection of methodologies in philosophy, science and art and the radical contingencies implicit in the technicity of real-time, collaborative composition. Simultaneous forces or tendencies such as perception/memory, content/expression and instinct/intellect produce composites (experience, meaning, and intuition, respectively) that affect the sensation of interplay. The translocal event is itself a diagram: an interstice between the forces of the local and the global, between the tendencies of the individual and the collective. The translocal is a point of reference for exploring the distribution of affect, parameters of control and emergent aesthetics. Translocal interplay, enabled by digital technologies and network protocols, is ontogenetic and autopoietic; diagrammatic and synaesthetic; intuitive and transductive. KeyWorx is a software application developed for real-time, distributed, multimodal media processing. As a technological tool created by artists, KeyWorx supports this intuitive type of creative experience: a real-time, translocal “jamming” that transduces the lived experience of a “biogram,” a synaesthetic hinge-dimension. The emerging aesthetics are processual: intuitive, diagrammatic and transversal.