34 research outputs found

    Replication and Caching Systems for the support of VMs stored in File Systems with Snapshots

    Get PDF
    Recently, in a relatively short timeframe, there were fundamental changes in the way computing power is used. Virtualisation technology has changed both the model of a data centre’s infrastructure and the way physical computers are now managed. This shift is a consequence of today’s fast deployment rate of Virtual Machines (VM) in high-consolidation environments with minimal need for human management. New approaches to virtualisation techniques are being developed at a surprisingly fast rate, leading to an exciting and vibrant new ecosystem of platforms and services. We see the big industry players tackling problems such as Desktop Virtualisation with moderate success, but completely ignoring the computation power already present in their clients’ infrastructures, opting instead for costly solutions based on powerful new machines. There is still room for improvement in Virtual Desktop Infrastructure (VDI) and for the development of new architectures that take advantage of the computation power available at the user’s desk, with minimal effort on the management side; Infrastructure for Client-Based Desktops (iCBD) is one of these projects. This thesis focuses on the development of mechanisms for the replication and caching of VM images stored in a local filesystem, albeit one with the ability to perform snapshots. In this work, there are some challenges to address: the proposed architecture must be entirely distributed and completely integrated with the already existing client-based VDI platform, and it must be able to efficiently cope with very large read-only files (some of them snapshots) and handle their multiple versions. This work also explores the challenges and advantages of deploying such a system on a high-throughput network, with both high availability and scalability, while efficiently supporting a large number of users (and their workstations).
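    As a rough illustration of the caching side of this design, the following sketch shows a version-aware client cache for immutable snapshot images. It is a minimal sketch under assumed interfaces: the SnapshotCache class, the replica directory layout, and the image@version naming are all hypothetical, since the abstract does not describe the actual components.

```python
import shutil
from pathlib import Path

# Hypothetical sketch, not the thesis code: a client-side cache for
# immutable VM snapshot images. Because a snapshot is read-only, an entry
# keyed by (image id, version) never needs invalidation; a newer snapshot
# is simply a new entry, so replicas and caches stay trivially consistent.

class SnapshotCache:
    def __init__(self, cache_dir: Path, replica_dir: Path):
        self.cache_dir = cache_dir      # local workstation storage
        self.replica_dir = replica_dir  # nearest replica, e.g. an NFS mount
        cache_dir.mkdir(parents=True, exist_ok=True)

    def get(self, image_id: str, version: int) -> Path:
        """Return a local path for the requested snapshot, fetching on a miss."""
        local = self.cache_dir / f"{image_id}@{version}"
        if not local.exists():
            remote = self.replica_dir / f"{image_id}@{version}"
            # Whole-file copy for simplicity; a real system would fetch
            # lazily, block by block, given how large VM images are.
            shutil.copyfile(remote, local)
        return local
```

    Because entries are immutable, eviction is the only coherence concern, which is what makes a snapshot-capable filesystem attractive as the storage layer for such a cache.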

    A Pattern-Based Approach to Scaffold the IT Infrastructure Design Process

    Get PDF
    Context. The design of Information Technology (IT) infrastructures is a challenging task, since it implies proficiency in several areas that are rarely mastered by a single person, thus raising communication problems among those in charge of conceiving, deploying, operating and maintaining/managing them. Most IT infrastructure designs are based on proprietary models, known as blueprints or product-oriented architectures, defined by vendors to facilitate the configuration of a particular solution based upon their services and products portfolio. Existing blueprints can be facilitators in the design of solutions for a particular vendor or technology. However, since organizations may have infrastructure components from multiple vendors, the use of blueprints aligned with commercial products may cause integration problems among these components and can lead to vendor lock-in. Additionally, these blueprints have a short lifecycle, due to their association with product versions or a specific technology, which hampers their usage as a tool for the reuse of IT infrastructure knowledge. Objectives. The objectives of this dissertation are (i) to mitigate the inability to reuse knowledge in terms of best practices in the design of IT infrastructures and (ii) to simplify the usage of this knowledge, making IT infrastructure designs simpler, quicker and better documented, while facilitating the integration of components from different vendors and minimizing the communication problems between teams. Method. We conducted an online survey and performed a systematic literature review to support the state of the art and to provide evidence that this research was relevant and had not been conducted before. A model-driven approach was also used for the formalization and empirical validation of well-formedness rules to enhance the overall process of designing IT infrastructures. To simplify and support the design process, a modeling tool, including its abstract and concrete syntaxes, was also extended to include the main contributions of this dissertation. Results. We obtained 123 responses to the online survey, the majority from people with more than 15 years of experience with IT infrastructures. The respondents confirmed our claims regarding the lack of formality and the documentation problems in knowledge transfer, and only 19% considered their current practices for representing IT infrastructures efficient. A language for modeling IT infrastructures, including an abstract and a concrete syntax, is proposed to address the problem of informality in their design. A catalog of IT infrastructure patterns is also proposed to allow expressing best practices in their design. The modeling tool was also evaluated: according to 84% of the respondents, this approach decreases the effort associated with IT infrastructure design, and 89% considered that the use of a repository of infrastructure patterns will help to improve the overall quality of IT infrastructure representations. A controlled experiment was also performed to assess the effectiveness of both the proposed language and the pattern-based IT infrastructure design process supported by the tool. Conclusion. With this work, we contribute to improving the current state of the art in the design of IT infrastructures, replacing ad-hoc methods with more formal ones to address the problems of ambiguity, traceability and documentation, among others, that characterize most IT infrastructure representations.
Categories and Subject Descriptors: C.0 [Computer Systems Organization]: System architecture; D.2.10 [Software Engineering]: Design-Methodologies; D.2.11 [Software Engineering]: Software Architectures-Patterns
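    The abstract does not spell out the well-formedness rules themselves, so the sketch below is only an illustration of the kind of mechanically checkable rule such a modeling language enables; the toy metamodel (Node, kind, links) and the rule itself are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative only: the dissertation's actual metamodel and rule set are
# not given in the abstract. This shows the general shape of a
# well-formedness rule that a modeling tool can check mechanically.

@dataclass
class Node:
    name: str
    kind: str                                   # e.g. "server", "switch"
    links: list["Node"] = field(default_factory=list)

def check_servers_have_uplinks(nodes: list[Node]) -> list[str]:
    """Example rule: every server must be linked to at least one switch."""
    return [
        f"{n.name}: server has no switch uplink"
        for n in nodes
        if n.kind == "server" and not any(l.kind == "switch" for l in n.links)
    ]
```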

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    Get PDF
    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced at the same time as the user contact process carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can load the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA’s needs.
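    The slice concept can be made concrete with a small sketch. The classes below are hypothetical, not FEDERICA's data model; they only illustrate how a slice that looks like a physical network to its user can map to accounted partitions of physical resources.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the slice concept: to its user a slice looks
# like a physical network, but every element maps to a partition (a virtual
# instance) of a real resource. Isolation follows from accounting each
# allocation against the physical capacity it is carved from.

@dataclass
class PhysicalResource:
    name: str
    capacity: int                  # abstract units, e.g. Mb/s or vCPUs
    allocated: int = 0

    def carve(self, amount: int) -> "VirtualResource":
        if self.allocated + amount > self.capacity:
            raise ValueError(f"{self.name}: capacity exhausted")
        self.allocated += amount
        return VirtualResource(backing=self, share=amount)

@dataclass
class VirtualResource:
    backing: PhysicalResource
    share: int

@dataclass
class Slice:                       # what the user sees as "their network"
    owner: str
    resources: list[VirtualResource] = field(default_factory=list)
```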

    AI-assisted patent prior art searching - feasibility study

    Get PDF
    This study seeks to understand the feasibility, technical complexities and effectiveness of using artificial intelligence (AI) solutions to improve the operational processes of registering IP rights. The Intellectual Property Office commissioned Cardiff University to undertake this research. The research was funded through the BEIS Regulators’ Pioneer Fund (RPF), which was set up to help address barriers to innovation in the UK economy.

    A generic artifact-driven approach for provisioning, configuring, and managing infrastructure resources in the cloud

    Get PDF
    Provisioning, configuration, and management of infrastructure resources in the cloud are difficult due to the diverse APIs offered by cloud providers. Because approaches for a common API are still at an early stage and may not be broadly accepted, individual artifacts can be used to interact with different providers. These artifacts require generic properties to describe the configuration of infrastructure resources, which they combine with provider-specific information supplied by the user. Such generic properties are determined in this thesis by examining the infrastructure offerings of 14 different providers. The artifacts can be made available in public repositories, similar to the configuration management scripts originating in the DevOps community. However, trusting their good nature is a challenge because, in contrast to configuration management scripts, they are executed in a shared management environment. To control and restrict the actions they perform in this shared environment, a method to confine their execution has been developed, with the Linux security module Tomoyo chosen as its foundation. A policy associated with each artifact describes the artifact’s permissions in detail. The artifacts are used in the context of the OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA), an emerging standard supported by a number of industry partners. This standard allows modeling a topology of resources to be provisioned at a provider. Each infrastructure resource, such as a virtual machine, gets an artifact assigned for provisioning purposes. Based on this standard, two simple tools as well as artifacts for four providers were developed; they show the viability of this artifact-driven approach.
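    The division of labor between generic and provider-specific information can be sketched as follows; the property names and providers are invented for illustration and are not the generic properties actually derived from the 14 offerings.

```python
# Hypothetical sketch of the artifact idea: one generic resource description,
# combined at deployment time with provider-specific details supplied by the
# user. All names here are illustrative, not the thesis's actual properties.

GENERIC_VM = {
    "num_cpus": 2,
    "memory_mb": 4096,
    "disk_gb": 40,
    "image": "ubuntu-22.04",
}

def render_request(generic: dict, provider: str, specifics: dict) -> dict:
    """Translate generic properties into one provider's API request shape."""
    if provider == "provider-a":
        return {
            "type": f"{generic['num_cpus']}cpu-{generic['memory_mb']}mb",
            "image_id": specifics["image_id"],  # provider-specific, user-supplied
            "region": specifics["region"],
        }
    if provider == "provider-b":
        return {
            "flavor": f"g{generic['num_cpus']}.{generic['memory_mb'] // 1024}",
            "image": generic["image"],
            "network": specifics["network_id"],
        }
    raise ValueError(f"no artifact available for {provider!r}")
```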

    International Standard Payload Rack to International Space Station, Software Interface Control Document Part 1: International Space Station Program Revision M

    Get PDF
    The purpose of this document is to define the software interface requirements between the International Standard Payload Rack (ISPR), plus C&DH-related non-rack end items, and the International Space Station (ISS) flight elements. Related non-rack end items include portable payloads in any of the modules, payloads on the Japanese Experiment Module Exposed Facility (JEM EF), payloads on the Columbus Exposed Payload Facility (COL EPF), and attached payloads mounted on the ISS truss.

    Techniques to Protect Confidentiality and Integrity of Persistent and In-Memory Data

    Get PDF
    Today, computers store and analyze valuable and sensitive data. As a result, we need to protect this data against confidentiality and integrity violations that can result in the illicit release, loss, or modification of a user’s and an organization’s sensitive data, such as personal media content or client records. Existing techniques for protecting confidentiality and integrity are either inefficient or vulnerable to malicious attacks. In this thesis we present two techniques, Guardat and ERIM, to efficiently and robustly protect persistent and in-memory data. To protect the confidentiality and integrity of persistent data, clients specify per-file policies to Guardat declaratively, concisely and separately from code. Guardat enforces policies by mediating I/O in the storage layer. In contrast to prior techniques, we protect against accidental or malicious circumvention of higher software layers. We present the design and prototype implementation, and demonstrate that Guardat efficiently enforces example policies in a web server. To protect the confidentiality and integrity of in-memory data, ERIM isolates sensitive data using Intel Memory Protection Keys (MPK), a recent x86 extension for partitioning the address space. However, MPK by itself does not protect against malicious attacks. We prevent malicious attacks by combining MPK with call gates to trusted entry points and ahead-of-time binary inspection. In contrast to existing techniques, ERIM efficiently protects frequently used session keys of web servers, an in-memory reference monitor’s private state, and managed runtimes from native libraries. These use cases result in high switch rates on the order of 10^5–10^6 switches/s. Our experiments demonstrate less than 1% runtime overhead per 100,000 switches/s, thus outperforming existing techniques.
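    To make the binary-inspection step concrete, the sketch below scans a code image for occurrences of WRPKRU (opcode bytes 0F 01 EF), the instruction that rewrites MPK access rights. This is a simplified model of ERIM’s analysis, which must additionally vet each occurrence (and XRSTOR, which can also modify PKRU) against the trusted call-gate sequence.

```python
# Simplified sketch of ahead-of-time binary inspection: find every byte
# offset at which the WRPKRU opcode pattern (0F 01 EF) appears. Because
# x86 instructions can start at any byte, the scan looks at raw bytes
# rather than decoded instructions.

WRPKRU = bytes.fromhex("0f01ef")

def find_wrpkru(code: bytes) -> list[int]:
    """Return every byte offset where the WRPKRU pattern occurs."""
    offsets, start = [], 0
    while (i := code.find(WRPKRU, start)) != -1:
        offsets.append(i)
        start = i + 1  # advance one byte so overlapping matches are found
    return offsets

# Usage sketch: scan an executable segment read from disk.
# offsets = find_wrpkru(open("libnative.so", "rb").read())
```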

    Techniques for Processing TCP/IP Flow Content in Network Switches at Gigabit Line Rates

    Get PDF
    The growth of the Internet has enabled it to become a critical component used by businesses, governments and individuals. While most of the traffic on the Internet is legitimate, a proportion of the traffic includes worms, computer viruses, network intrusions, computer espionage, security breaches and illegal behavior. This rogue traffic causes computer and network outages, reduces network throughput, and costs governments and companies billions of dollars each year. This dissertation investigates the problems associated with TCP stream processing in high-speed networks. It describes an architecture that simplifies the processing of TCP data streams in these environments and presents a hardware circuit capable of TCP stream processing on multi-gigabit networks for millions of simultaneous network connections. Live Internet traffic is analyzed using this new TCP processing circuit.
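    A software model of the core per-flow task makes the problem concrete; this sketch is illustrative only and ignores retransmissions, overlapping segments and sequence-number wraparound, all of which the hardware must handle for millions of concurrent flows.

```python
from collections import defaultdict

# Illustrative model of TCP stream reassembly: group segments by the TCP
# 4-tuple and order them by sequence number so the reassembled byte stream
# can be scanned for content. The dissertation's circuit performs this in
# hardware at multi-gigabit line rates.

flows: dict[tuple, dict[int, bytes]] = defaultdict(dict)

def add_segment(src: str, dst: str, sport: int, dport: int,
                seq: int, payload: bytes) -> None:
    """Record one TCP segment's payload under its flow and sequence number."""
    flows[(src, dst, sport, dport)][seq] = payload

def stream_bytes(key: tuple) -> bytes:
    """Concatenate a flow's payloads in sequence-number order."""
    segments = flows[key]
    return b"".join(segments[s] for s in sorted(segments))
```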