
    SoK: An Analysis of Protocol Design: Avoiding Traps for Implementation and Deployment

    No full text
    Today's Internet utilizes a multitude of different protocols. While some of these protocols were first implemented and used and only later documented, others were first specified and then implemented. Regardless of how protocols came to be, their definitions can contain traps that lead to insecure implementations or deployments. A classical example is insufficiently strict authentication requirements in a protocol specification. The resulting misconfigurations, i.e., not enabling strong authentication, are common root causes of Internet security incidents. Indeed, Internet protocols have commonly been designed without security in mind, which leads to a multitude of misconfiguration traps. While this is slowly changing, overly strict security considerations can have a similarly bad effect. Due to complex implementations and insufficient documentation, security features may remain unused, leaving deployments vulnerable. In this paper we provide a systematization of the security traps found in common Internet protocols. By separating protocols into four classes we identify major factors that lead to common security traps. These insights, together with observations about end-user-centric usability and security by default, are then used to derive recommendations for improving existing protocols and designing new ones, without such security-sensitive traps for operators, implementors and users.
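    To make the trap concrete, here is a hypothetical illustration (mine, not the paper's) of how a specification that leaves strong authentication optional becomes a misconfiguration trap, and how a secure default avoids it; all option names are placeholders:

        # Hypothetical sketch: the same absent option read with an insecure
        # default vs. a secure default.
        import configparser

        cfg = configparser.ConfigParser()
        # The operator configured the port but never set require_auth.
        cfg.read_string("[server]\nport = 8443\n")

        # Trap: a missing option silently disables authentication.
        auth_insecure = cfg.getboolean("server", "require_auth", fallback=False)

        # Security by default: a missing option keeps authentication on.
        auth_secure = cfg.getboolean("server", "require_auth", fallback=True)

        print(auth_insecure, auth_secure)  # False True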

    M2: Malleable Metal as a Service

    Full text link
    Existing bare-metal cloud services that provide users with physical nodes have a number of serious disadvantages over their virtual alternatives, including slow provisioning times, difficulty for users to release nodes and then reuse them to handle changes in demand, and poor tolerance to failures. We introduce M2, a bare-metal cloud service that uses network-mounted boot drives to overcome these disadvantages. We describe the architecture and implementation of M2 and compare its agility, scalability, and performance to existing systems. We show that M2 can reduce provisioning time by over 50% while offering richer functionality, and comparable run-time performance with respect to tools that provision images into local disks. M2 is open source and available at https://github.com/CCI-MOC/ims.
    Comment: IEEE International Conference on Cloud Engineering 201
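    The core trick, provisioning from a network-mounted copy-on-write boot drive instead of copying an image onto local disk, can be sketched as follows. This is a rough illustration under my own assumptions (Ceph RBD as the image store, IPMI for power control; pool names and credentials are placeholders), not M2's actual code; see the repository above for the real implementation.

        # Rough sketch of network-boot provisioning: clone a golden image
        # copy-on-write, then PXE-boot the node from it.
        import subprocess

        def run(*cmd):
            subprocess.run(cmd, check=True)

        def provision(bmc_host, golden="images/golden@ready", clone="images/node42"):
            # Copy-on-write clone of a protected RBD snapshot: fast, because
            # no full image copy lands on a local disk.
            run("rbd", "clone", golden, clone)
            # Exporting the clone to the node (e.g., over iSCSI) is elided.
            # Point the node at network boot, then power-cycle it via IPMI.
            for args in (("chassis", "bootdev", "pxe"),
                         ("chassis", "power", "cycle")):
                run("ipmitool", "-I", "lanplus", "-H", bmc_host,
                    "-U", "admin", "-P", "secret", *args)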

    DHCPv6 Options for Network Boot

    Full text link

    S2Dedup: SGX-enabled Secure Deduplication

    Get PDF
    Secure deduplication allows removing duplicate content at third-party storage services while preserving the privacy of users’ data. However, current solutions are built with strict designs that cannot be adapted to storage services and applications with different security and performance requirements. We present S2Dedup, a trusted hardware-based privacy-preserving deduplication system designed to support multiple security schemes that enable different levels of performance, security guarantees and space savings. An in-depth evaluation shows these trade-offs for the distinct Intel SGX-based secure schemes supported by our prototype. Moreover, we propose a novel Epoch and Exact Frequency scheme that prevents the frequency analysis leakage attacks present in current deterministic approaches for secure deduplication while maintaining similar performance and space savings to state-of-the-art approaches.
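    The epoch idea can be sketched briefly. In this simplified illustration (my assumptions, not S2Dedup's actual scheme or code), chunks are fingerprinted with a keyed hash so duplicates are detectable within an epoch, the key rotates at epoch boundaries, and a counter caps how often any fingerprint may repeat, which is what bounds frequency-analysis leakage:

        # Simplified epoch + frequency-cap fingerprinting; names and the
        # cap value are illustrative placeholders.
        import hmac, hashlib, os
        from collections import Counter

        CAP = 4                     # max identical fingerprints per epoch
        epoch_key = os.urandom(32)  # rotated at every epoch boundary
        seen = Counter()

        def fingerprint(chunk: bytes) -> bytes:
            fp = hmac.new(epoch_key, chunk, hashlib.sha256).digest()
            seen[fp] += 1
            if seen[fp] > CAP:
                # Too frequent: return a random tag, giving up deduplication
                # for this copy rather than revealing its true frequency.
                return os.urandom(32)
            return fp

        def new_epoch():
            global epoch_key
            epoch_key = os.urandom(32)
            seen.clear()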

    Evaluation and Deployment of a Private Cloud Framework at DI-FCT-NOVA

    Get PDF
    In today’s technological landscape, there is an ever-increasing demand for computing resources for simulations, machine learning, and other use-cases. This demand can be seen across the business world, with the success of Amazon’s AWS and Microsoft’s Azure offerings, which provide a cloud of on-demand computing resources to any paying customer. The necessity for computing resources is no less felt in the academic world, where departments are forced to assign researchers and teachers to time-consuming system administrator roles in order to allocate resources to users, leading to delays and wasted potential. Allowing researchers to request computing resources and get them on demand, with minimal input from administrative staff, is a great boon. Not only does it increase the productivity of the administrators themselves, it also allows users (teachers, researchers and students) to get the resources they need faster and more securely. This goal is attainable through the use of a cloud management framework to assist in the administration of a department’s computing infrastructure. This dissertation provides a critical evaluation of the adequacy of three cloud management frameworks, evaluates the requirements for a private cloud at DI-FCT-NOVA, and identifies which features of the selected cloud framework may be used to fulfil the department’s needs. The final goal is to architect and deploy the selected framework at DI-FCT-NOVA, giving the department a maintainable, state-of-the-art private cloud deployment capable of adequately responding to the needs of its users.

    Turnstile File Transfer: A Unidirectional System for Medium-Security Isolated Clusters

    Get PDF
    Data transfer between isolated clusters is imperative for cybersecurity education, research, and testing. Such techniques facilitate hands-on cybersecurity learning in isolated clusters, allow cybersecurity students to practice with various hacking tools, and develop professional cybersecurity technical skills. Educators often use these remote learning environments for research as well. Researchers and students use these isolated environments to test sophisticated hardware, software, and procedures using full-fledged operating systems, networks, and applications. Virus and malware researchers may wish to release suspected malicious software in a controlled environment to better observe its behavior or gain the information needed to assist their reverse engineering processes. The isolation prevents harm to networked systems. However, there are times when data must move in quantities or at speeds that make downloading onto an intermediate device untenable. This study proposes a novel turnstile model, a mechanism for one-way file transfer from one enterprise system to another without allowing data leakage. This system protects data integrity and security by connecting the isolated environment to the external network via a locked-down interconnection. Using medium-security isolated clusters, the researchers developed a unidirectional file transfer system that acts as a one-way “turnstile” for secure file transfer between systems not connected to the internet or other external networks. The Turnstile system (source code available at github.com/monnin/turnstile) provides unidirectional file transfer between two computer systems: data can be transferred from a source system to a destination system while no data can flow back in the opposite direction. The researchers found that automating the transfer of external files to isolated clusters, using standard Linux distributions and commands, optimized transfer speed.
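    The turnstile idea, that data flows in exactly one direction because the sender never listens for anything, can be sketched in a few lines. This is a minimal illustration of the concept, not the code at github.com/monnin/turnstile; a production system would also enforce the direction at the network or hardware layer and verify integrity on the receiving side. Host and port values are placeholders.

        # One-way transfer sketch: numbered UDP datagrams, no ACKs, so no
        # channel exists for data to flow back to the sender.
        import socket

        CHUNK = 1024

        def send_file(path, host="10.0.0.2", port=9000):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            with open(path, "rb") as f:
                seq = 0
                while (data := f.read(CHUNK)):
                    # 8-byte sequence number lets the receiver spot loss.
                    sock.sendto(seq.to_bytes(8, "big") + data, (host, port))
                    seq += 1
            sock.close()

        def receive_file(out_path, port=9000):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(("", port))
            with open(out_path, "wb") as f:
                while True:  # runs until interrupted
                    pkt, _ = sock.recvfrom(8 + CHUNK)
                    f.write(pkt[8:])  # real code would reorder by seq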

    A Pattern-Based Approach to Scaffold the IT Infrastructure Design Process

    Get PDF
    Context. The design of Information Technology (IT) infrastructures is a challenging task, since it requires proficiency in several areas that are rarely mastered by a single person, raising communication problems among those in charge of conceiving, deploying, operating and maintaining/managing them. Most IT infrastructure designs are based on proprietary models, known as blueprints or product-oriented architectures, defined by vendors to facilitate the configuration of a particular solution based upon their services and products portfolio. Existing blueprints can be facilitators in the design of solutions for a particular vendor or technology. However, since organizations may have infrastructure components from multiple vendors, the use of blueprints aligned with commercial product(s) may cause integration problems among these components and can lead to vendor lock-in. Additionally, these blueprints have a short lifecycle, due to their association with product version(s) or a specific technology, which hampers their usage as a tool for the reuse of IT infrastructure knowledge. Objectives. The objectives of this dissertation are (i) to mitigate the inability to reuse knowledge in terms of best practices in the design of IT infrastructures and (ii) to simplify the usage of this knowledge, making IT infrastructure designs simpler, quicker and better documented, while facilitating the integration of components from different vendors and minimizing the communication problems between teams. Method. We conducted an online survey and performed a systematic literature review to support the state of the art and to provide evidence that this research was relevant and had not been conducted before. A model-driven approach was also used for the formalization and empirical validation of well-formedness rules to enhance the overall process of designing IT infrastructures. To simplify and support the design process, a modeling tool, including its abstract and concrete syntaxes, was also extended to include the main contributions of this dissertation. Results. We obtained 123 responses to the online survey, the majority from people with more than 15 years of experience with IT infrastructures. The respondents confirmed our claims regarding the lack of formality and the documentation problems in knowledge transfer, and only 19% considered their current practices for representing IT infrastructures efficient. A language for modeling IT infrastructures, including an abstract and concrete syntax, is proposed to address the problem of informality in their design. A catalog of IT infrastructure patterns is also proposed to allow expressing best practices in their design. The modeling tool was also evaluated: according to 84% of the respondents, this approach decreases the effort associated with IT infrastructure design, and 89% considered that the use of a repository of infrastructure patterns will help to improve the overall quality of IT infrastructure representations. A controlled experiment was also performed to assess the effectiveness of both the proposed language and the pattern-based IT infrastructure design process supported by the tool. Conclusion. With this work, we contribute to improving the current state of the art in the design of IT infrastructures, replacing ad-hoc methods with more formal ones to address the problems of ambiguity, traceability and documentation, among others, that characterize most IT infrastructure representations.
    Categories and Subject Descriptors: C.0 [Computer Systems Organization]: System architecture; D.2.10 [Software Engineering]: Design-Methodologies; D.2.11 [Software Engineering]: Software Architectures-Patterns

    Techniques to Protect Confidentiality and Integrity of Persistent and In-Memory Data

    Get PDF
    Today computers store and analyze valuable and sensitive data. As a result, we need to protect this data against confidentiality and integrity violations that can result in the illicit release, loss, or modification of a user’s and an organization’s sensitive data, such as personal media content or client records. Existing techniques for protecting confidentiality and integrity either lack efficiency or are vulnerable to malicious attacks. In this thesis we propose two techniques, Guardat and ERIM, to efficiently and robustly protect persistent and in-memory data. To protect the confidentiality and integrity of persistent data, clients specify per-file policies to Guardat declaratively, concisely and separately from code. Guardat enforces policies by mediating I/O in the storage layer. In contrast to prior techniques, we protect against accidental or malicious circumvention of higher software layers. We present the design and prototype implementation, and demonstrate that Guardat efficiently enforces example policies in a web server. To protect the confidentiality and integrity of in-memory data, ERIM isolates sensitive data using Intel Memory Protection Keys (MPK), a recent x86 extension for partitioning the address space. However, MPK by itself does not protect against malicious attacks. We prevent malicious attacks by combining MPK with call gates to trusted entry points and ahead-of-time binary inspection. In contrast to existing techniques, ERIM efficiently protects frequently used session keys of web servers, an in-memory reference monitor’s private state, and managed runtimes from native libraries. These use cases result in high switch rates on the order of 10^5–10^6 switches/s. Our experiments demonstrate less than 1% runtime overhead per 100,000 switches/s, thus outperforming existing techniques.
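    ERIM's ahead-of-time binary inspection can be illustrated with a toy scanner. This is my simplification, not ERIM's actual tooling: it only finds occurrences of the WRPKRU opcode bytes (0f 01 ef), the instruction that rewrites the PKRU register, whereas the real inspection must also handle XRSTOR and byte sequences that straddle instruction boundaries. ERIM requires such instructions to appear only inside its trusted call gates.

        # Toy scan of a binary for WRPKRU (opcode 0f 01 ef).
        import sys

        WRPKRU = b"\x0f\x01\xef"

        def find_wrpkru(path):
            with open(path, "rb") as f:
                blob = f.read()
            offsets, i = [], 0
            while (i := blob.find(WRPKRU, i)) != -1:
                offsets.append(i)
                i += 1
            return offsets

        if __name__ == "__main__":
            for off in find_wrpkru(sys.argv[1]):
                print(f"possible WRPKRU at byte offset {off:#x}")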

    Dynamizing HPC/HTC Workloads: Reconciling “On-Premise” and “Cloud Computing” Models

    Get PDF
    Cloud computing has grown tremendously in the last decade, evolving from a mere technological concept into a full business model. Entities that need computational power, such as companies or research groups, are beginning to consider a full migration of their systems to the cloud. However, full migration may not be the optimal option; the answer may lie somewhere in between. Although the companies that manage the biggest commercial cloud environments, namely Google, Amazon and Microsoft, are making great efforts in the development and implementation of the so-called hybrid cloud, most of these efforts focus on software development platforms, as in the case of Azure Stack from Microsoft Azure, which helps to develop hybrid applications that can be executed both locally and in the cloud. Meanwhile, the provisioning of execution environments for HPC/HTC applications seems to be relegated to the background, in part because demand for such environments is currently low. This low demand is motivated by several factors, notably the need for highly specialized hardware, the overhead introduced by virtualization and, not least, the economic impact usually associated with this kind of customized infrastructure in contrast with more standard ones. With these limitations in mind, and given that complete migration to the cloud is in most cases constrained by a pre-existing local infrastructure that provides computing and storage resources, this thesis explores an intermediate path between on-premise (local) and cloud computing. Such a solution allows an HPC/HTC user to benefit from the cloud model transparently, keeping the familiar on-premise environment while being able to execute jobs in both paradigms. To achieve this, the Hybrid-Infrastructure-as-a-Service Manager (HIaaS-M) framework is created. This framework joins both computing paradigms by automating the interaction between them in a way that is efficient and completely transparent to the user. It is especially designed to be integrated into already existing (on-premise) infrastructures without changing any existing software: the framework runs as a standalone program that communicates with the existing systems, minimizing the impact of changes to an entity's base software and/or infrastructure. This document describes the whole development process of this modular and configurable framework, which integrates a pre-existing infrastructure with one created in the cloud through a cloud infrastructure provider, adding the option of executing jobs in any of the environments offered by cloud providers thanks to the Apache Libcloud library. The document concludes with a proof of concept on a development cluster (called "cluster2") hosted in the 3MARES Data Processing Center at the Science Faculty of the University of Cantabria. This deployment on a near-real-life environment made it possible to identify the main advantages of the framework, as well as improvements that are captured in a suggested roadmap for future work.
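    The cloud half of such a framework rests on Apache Libcloud's provider-neutral compute API, which the thesis uses to target multiple cloud environments. Below is a minimal sketch of what that looks like, with placeholder credentials, size and image identifiers, and none of HIaaS-M's own scheduling logic:

        # Provider-neutral provisioning with Apache Libcloud; EC2 is one
        # example backend, and all identifiers here are placeholders.
        from libcloud.compute.types import Provider
        from libcloud.compute.providers import get_driver

        def burst_node(access_key, secret_key, region="us-east-1"):
            driver = get_driver(Provider.EC2)(access_key, secret_key, region=region)
            # A real framework would match a job's resource request against
            # the provider's size and image catalogues.
            size = next(s for s in driver.list_sizes() if s.id == "t2.micro")
            image = driver.get_image("ami-0123456789abcdef0")  # placeholder
            return driver.create_node(name="hiaasm-worker",
                                      size=size, image=image)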