
    A protocol for programmable smart cards

    This paper presents an open protocol for interoperability across multi-vendor programmable smart cards. It exposes on-card storage and cryptographic services to host applications in a unified, card-independent way. Its design, inspired by the standardization of the on-card Java language and cryptographic API, has been kept as generic and modular as possible. The protocol's security model is designed to allow multiple applications to use the services exposed by the same card, with either a cooperative or a no-interference approach, depending on application requirements. Unlike existing smart card interoperability protocols, which define sophisticated card services meant to be hard-coded into the device hardware, this protocol is intended to be implemented in software on programmable smart cards.
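
    As a rough illustration of the host side of such a protocol, the sketch below uses the pyscard Python library to select an on-card application and invoke one of its services with an ISO 7816-4 APDU. The AID and the service instruction byte are placeholders, not values from the paper.

        # Hedged sketch: host application talking to a programmable card via pyscard.
        from smartcard.System import readers

        AID = [0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01]  # placeholder AID, not from the paper

        conn = readers()[0].createConnection()
        conn.connect()

        # ISO 7816-4 SELECT by AID: CLA=00, INS=A4, P1=04, P2=00, Lc, data.
        select = [0x00, 0xA4, 0x04, 0x00, len(AID)] + AID
        data, sw1, sw2 = conn.transmit(select)
        assert (sw1, sw2) == (0x90, 0x00), "SELECT failed"

        # Hypothetical 'read storage object' service exposed by the on-card application.
        read_obj = [0x80, 0xB0, 0x00, 0x00, 0x20]  # proprietary-class INS, read 32 bytes
        data, sw1, sw2 = conn.transmit(read_obj)
        print("object bytes:", bytes(data).hex())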

    Enforcing Security and Assurance Properties in Cloud Environment

    Before deploying their infrastructure (resources, data, communications, ...) on a cloud computing platform, companies want to be sure that it will be properly secured. At deployment time, the company provides a security policy describing its security requirements through a set of properties. Once its infrastructure is deployed, the company wants assurance that this policy is applied and enforced. But describing and enforcing security properties, and obtaining strong evidence that they hold, is a complex task. To address this issue, in [1] we proposed a language that can express both security and assurance properties on distributed resources. We then showed how these global properties can be decomposed into a set of properties to be enforced locally. In this paper, we show how these local properties can be used to automatically configure security mechanisms. Our language is context-based, which allows it to be easily adapted to any resource naming system, e.g., Linux and Android (with SELinux) or PostgreSQL. Moreover, by abstracting low-level functionalities (e.g., denying write access to a file) as capabilities, our language remains independent of the underlying security mechanisms. These capabilities can then be combined into security and assurance properties to provide high-level guarantees, such as confidentiality or integrity. Furthermore, we propose a global architecture that receives these properties and automatically configures the security and assurance mechanisms accordingly. Finally, we express the security and assurance policies of an industrial environment for a commercialized product and show how its security is enforced.
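
    A minimal sketch of the idea of compiling one high-level property into mechanism-specific rules. The property name, resource naming scheme, and PostgreSQL backend below are illustrative assumptions, not the paper's actual language:

        # Hedged sketch: mapping an abstract security property onto concrete
        # configuration for one backend, mirroring the capability abstraction.
        PROPERTY = {"name": "confidentiality",
                    "resource": "table:customers",
                    "subjects": ["billing_app"]}

        def compile_postgresql(prop):
            table = prop["resource"].split(":", 1)[1]
            rules = [f"REVOKE ALL ON {table} FROM PUBLIC;"]
            rules += [f"GRANT SELECT ON {table} TO {s};" for s in prop["subjects"]]
            return rules

        for stmt in compile_postgresql(PROPERTY):
            print(stmt)  # statements a deployment agent could apply automatically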

    A Framework for Federated Two-Factor Authentication Enabling Cost-Effective Secure Access to Distributed Cyberinfrastructure

    As cyber attacks become increasingly sophisticated, the security measures used to mitigate the risks must also increase in sophistication. One-time password (OTP) systems provide strong authentication because security credentials are not reusable, thus thwarting credential replay attacks. The credential changes regularly, making brute-force attacks significantly more difficult. In high performance computing, end users may require access to resources housed at several different service provider locations. The ability to share a strong token between multiple computing resources reduces cost and complexity. The National Science Foundation (NSF) Extreme Science and Engineering Discovery Environment (XSEDE) provides access to digital resources, including supercomputers, data resources, and software tools. XSEDE will offer centralized strong authentication for services amongst service providers that leverage their own user databases and security profiles. This work implements a scalable framework built on standards to provide federated secure access to distributed cyberinfrastructure.
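
    The paper does not specify the token algorithm, but a standard building block for such OTP systems is RFC 6238 time-based one-time passwords, sketched here with only the Python standard library:

        # Hedged sketch: RFC 6238 time-based one-time passwords (TOTP).
        import base64, hashlib, hmac, struct, time

        def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // period          # credential changes every period
            msg = struct.pack(">Q", counter)
            digest = hmac.new(key, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                    # dynamic truncation
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        # Each code is valid for one 30-second window, so a replayed credential expires.
        print(totp("JBSWY3DPEHPK3PXP"))  # example base32 secret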

    Case study: migrating workstations to free software in a Microsoft domain

    A large share of IT software licensing costs comes from the operating system and office tools used on workstations, since these require a large number of licenses; using free software on them helps reduce a company's Total Cost of Ownership (TCO). However, migrating workstations from proprietary to free software requires, among other efforts, deciding where the users of those workstations will authenticate, as well as fairly advanced technical knowledge to change numerous parameters across several configuration files. This project presents a network model with a single authentication base for users of workstations running either free or proprietary software. To this end, an environment is set up with a Microsoft domain controller and workstations running free and proprietary operating systems; a tool with a simple interface is also developed, usable by people without advanced technical knowledge, to add workstations to the domain easily, with no need to manually open and edit several configuration files. Microsoft's Windows Server 2003 and Active Directory are used to create the domain controller, and the Free Pascal compiler and the Lazarus IDE are used to develop the software for the GNU/Linux workstations.
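
    A rough Python sketch of the manual steps such a tool automates on a GNU/Linux workstation: writing the Samba configuration and joining the Active Directory domain. The domain names are placeholders, and the original tool was written in Free Pascal rather than Python.

        # Hedged sketch: automate the config edits and domain join that the tool
        # hides from the user. Assumes Samba is installed; run as root.
        import subprocess

        SMB_CONF = (
            "[global]\n"
            "   workgroup = EXAMPLE\n"      # placeholder NetBIOS domain name
            "   realm = EXAMPLE.COM\n"      # placeholder AD realm
            "   security = ads\n"
        )

        def join_domain(admin_user: str) -> None:
            with open("/etc/samba/smb.conf", "w") as f:
                f.write(SMB_CONF)
            # 'net ads join' is Samba's standard command to join an AD domain;
            # it prompts for the administrator password.
            subprocess.run(["net", "ads", "join", "-U", admin_user], check=True)

        join_domain("Administrator")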

    FreeIPA - URI Based Access Management

    The goal of this thesis is designing and implementing access management based on the URI of the requested resource. Host Based Access Control (HBAC) in the identity management tool FreeIPA was used as the basis for the implementation. Furthermore, it was necessary to extend the related infrastructure, namely the SSSD tool. An authorization module for the Apache HTTP Server was implemented as an example of an application using URI-based HBAC. The main problems solved were the design of the infrastructure for communicating the necessary parameters and the evaluation strategy for the HBAC rules that define access rights. The complete solution was demonstrated by securing an instance of the WordPress web application.
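
    A toy sketch of evaluating HBAC rules extended with a URI pattern. The rule structure and glob matching are simplified assumptions; in the actual solution the rules live in FreeIPA's directory and are evaluated through SSSD for the Apache module:

        # Hedged sketch: default-deny evaluation of URI-extended HBAC rules.
        from fnmatch import fnmatch

        RULES = [  # hypothetical rules, not the thesis's actual schema
            {"users": {"alice"}, "hosts": {"web01"}, "services": {"httpd"},
             "uri": "/wp-admin/*", "allow": True},
        ]

        def hbac_allows(user: str, host: str, service: str, uri: str) -> bool:
            for rule in RULES:
                if (user in rule["users"] and host in rule["hosts"]
                        and service in rule["services"] and fnmatch(uri, rule["uri"])):
                    return rule["allow"]
            return False  # no matching rule: deny

        print(hbac_allows("alice", "web01", "httpd", "/wp-admin/index.php"))  # True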

    Access control in semantic information systems

    Access control has evolved in file systems. Early access control was limited and did not handle identities. Access control then shifted to develop concepts such as identities. The next progression was the ability to take these identities and use lists to control what those identities can do. At this point, more areas began implementing access control, such as web information systems. Web information systems have in turn raised the profile of semantic information. As semantic information systems expand, new opportunities in access control become available to explore. This dissertation introduces an experimental file system that explores the use of metadata in a file system. The metadata is supported through the use of a database system. The introduction of the database enables features such as views within the file system. Databases also provide a rich query language to use when finding information. The database aids the development of semantic meaning for the stored metadata. This gives the metadata greater meaning and provides a platform for rethinking access control.
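
    A small sketch of the underlying idea: file metadata kept in a database, with a view acting as an access-controlled window onto it. The schema and column names are invented for illustration:

        # Hedged sketch: database-backed file metadata exposed through a view.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE files(path TEXT PRIMARY KEY, owner TEXT,
                               project TEXT, sensitivity TEXT);
            -- A view can hide rows (and columns) from callers that only see it.
            CREATE VIEW public_files AS
                SELECT path, owner, project FROM files WHERE sensitivity = 'public';
        """)
        conn.executemany("INSERT INTO files VALUES(?,?,?,?)", [
            ("/data/report.txt", "alice", "acme", "public"),
            ("/data/payroll.csv", "bob", "acme", "secret"),
        ])
        print(conn.execute("SELECT * FROM public_files").fetchall())
        # Only the 'public' row is visible: [('/data/report.txt', 'alice', 'acme')]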

    Applying root cause analysis to intrusion detection systems

    Master's thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Computer Science. Intrusion Detection Systems are tools specialized in analyzing the behavior of a computer or network in order to detect signs of intrusion. Among the benefits of their use are the possibility of receiving notifications in the form of alerts about intrusions, executing countermeasures in real time, or storing a copy of the packets for later analysis. The exposure of computers on the Internet and the growing frequency and variety of attacks have overloaded the amount of information handled and displayed by Intrusion Detection Systems to system administrators. Hence, the search for new concepts for alert analysis can help in the task of keeping computers, networks, and information free of threats. Root Cause Analysis is a methodology that allows the detailed and progressive investigation of isolated incidents. Widely used in industrial environments, the aerospace sector, and medicine, Root Cause Analysis involves the study of two or more correlated events with the goal of identifying how and why a problem occurred, so that its recurrence can be prevented. This work proposes applying Root Cause Analysis to Intrusion Detection Systems in order to improve the quality of the information presented to the system administrator. This goal is achieved by applying the analysis to the 10 most critical vulnerabilities for UNIX systems according to the SANS Institute, and by adding Root Cause Analysis interpretations to the corresponding rules in the Snort Intrusion Detection System.
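
    A toy sketch of the general idea: attaching a root-cause interpretation to alerts raised by specific Snort rules. The SID and the interpretation text below are invented examples, not the dissertation's actual annotations:

        # Hedged sketch: enrich Snort alerts with root-cause-analysis notes keyed by SID.
        RCA_NOTES = {
            1002: {"how": "request matched a known CGI exploit pattern",
                   "why": "unpatched web server exposes a vulnerable handler",
                   "fix": "apply the vendor patch and restrict CGI execution"},
        }

        def annotate(alert: dict) -> dict:
            note = RCA_NOTES.get(alert.get("sid"))
            return {**alert, "rca": note} if note else alert

        print(annotate({"sid": 1002, "src": "198.51.100.7"}))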

    High performance computing in the cloud

    In recent years, interest in both scientific and business workflows has increased. A workflow is composed of a series of tools that should be executed in a predefined order to perform an analysis. Traditionally, these workflows were executed manually, sending the output of one tool to the next one in the analysis process. Many applications for executing workflows automatically have appeared recently; these applications ease the work of users when running their analyses. In addition, from the computational point of view, some workflows require a significant amount of resources. Consequently, workflow execution has moved from single workstations to distributed environments such as grids or clouds. Data management and task scheduling are required to execute workflows efficiently in such environments.

    In this thesis, we propose a cloud-based HPC environment, focusing on task scheduling, resource auto-scaling, data management, and simplifying access to the resources with software clients. First, the cloud computing infrastructure is devised, which includes the base software (i.e. OpenStack) plus several additional modules aimed at improving authentication (i.e. LDAP) and data management (i.e. GridFTP, Globus Online and CloudFuse). Second, built on top of this infrastructure, the TORQUE distributed resource manager and the Maui scheduler have been configured to schedule and distribute tasks to the cloud-based workers. To reduce the number of idle nodes and the cost incurred by active cloud resources, we also propose a configurable auto-scaling technique, able to scale the execution cluster up or down depending on the workload. Additionally, to simplify task submission to the TORQUE execution cluster, we have interconnected the Galaxy workflow management system with it, so users benefit from a simple way to execute their tasks. Finally, we conducted an experimental evaluation, composed of a number of studies with synthetic and real-world applications, to show the behaviour of the auto-scaled execution cluster managed by TORQUE and Maui. All experiments were performed on an OpenStack cloud computing environment, and the benchmarked applications come from a benchmarking suite specially designed for workflow scheduling in cloud computing environments. CyberShake, LIGO, and Montage were the synthetic applications selected from the benchmarking suite. GECKO and a GWAS pipeline represent the real-world test cases, both having a diverse and heterogeneous set of tasks.

    The numerous technological advances in data acquisition techniques allow the massive production of enormous amounts of data in diverse fields such as astronomy, health, and social networks. Nowadays, only a small part of this data can be analysed because of the lack of computational resources. High Performance Computing (HPC) strategies represent the only choice for analysing such an overwhelming amount of data. However, HPC techniques generally require large and expensive computing and storage infrastructures, usually not affordable or available for most users. Cloud computing, where users pay for the resources they need when they actually need them, appears as an interesting alternative. Besides the savings in hardware infrastructure, cloud computing offers further advantages, such as removing installation, administration, and supply requirements. In addition, it enables users to use better hardware than they could usually afford, to scale resources depending on their needs, and to obtain greater fault tolerance, amongst others. The efficient use of HPC resources becomes a fundamental task, particularly in cloud computing. We need to consider the cost of using HPC resources, especially for cloud-based infrastructures, where users have to pay for storing, transferring, and analysing data. It is therefore very important to use generic task scheduling and auto-scaling techniques to exploit the computational resources efficiently. It is equally important to make these tasks user-friendly through the development of tools/applications (software clients) that act as an interface between the user and the infrastructure.
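
    A rough sketch of the auto-scaling idea: poll the TORQUE queue and grow the worker pool through the OpenStack CLI when jobs are waiting. The thresholds, flavor, image, and network names are assumptions; the thesis's actual policy is configurable:

        # Hedged sketch: scale a TORQUE worker pool based on queued jobs.
        import subprocess, time

        MAX_WORKERS = 8       # assumed cap on cloud workers
        POLL_SECONDS = 60

        def queued_jobs() -> int:
            # 'qstat' prints one line per job; column 5 is the state ('Q' = queued).
            out = subprocess.run(["qstat"], capture_output=True, text=True).stdout
            rows = [line.split() for line in out.splitlines()[2:]]  # skip headers
            return sum(1 for f in rows if len(f) >= 5 and f[4] == "Q")

        def boot_worker(name: str) -> None:
            # Placeholder flavor/image/network names, passed to the OpenStack CLI.
            subprocess.run(["openstack", "server", "create",
                            "--flavor", "m1.medium", "--image", "worker-image",
                            "--network", "private", name], check=True)

        workers = 0
        while True:
            if queued_jobs() > 0 and workers < MAX_WORKERS:
                workers += 1
                boot_worker(f"worker-{workers}")
            time.sleep(POLL_SECONDS)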