
    Performance Evaluation of Distributed Security Protocols Using Discrete Event Simulation

    The Border Gateway Protocol (BGP) that manages inter-domain routing on the Internet lacks security. Protective measures using public key cryptography introduce complexities and costs. To support authentication and other security functionality in large networks, we need public key infrastructures (PKIs). Protocols that distribute and validate certificates introduce additional complexities and costs; the certification path building algorithm that helps users establish trust in certificates in a distributed network environment is particularly complicated. Neither routing security nor PKI comes for free. Prior to this work, research on the performance of these large-scale distributed security systems was minimal. In this thesis, we evaluate the performance of BGP security protocols and PKI systems. We answer the questions of how performance affects protocol behavior and how we can improve the efficiency of these distributed protocols to bring them one step closer to practical deployment. The complexity of the Internet makes an analytical approach difficult, and its scale makes empirical approaches equally unworkable; consequently, we take a simulation approach. We have built simulation frameworks to model a number of BGP security protocols and PKI systems. We have identified performance problems of Secure BGP (S-BGP), a primary BGP security protocol, and proposed and evaluated the Signature Amortization (S-A) and Aggregated Path Authentication (APA) schemes, which significantly improve the efficiency of S-BGP without compromising security. We have also built a simulation framework for general PKI systems and evaluated certification path building algorithms, a critical part of establishing trust in Internet-scale PKI, and used this framework to improve algorithm performance.
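
    The Signature Amortization idea mentioned above can be illustrated concretely: instead of signing each route announcement individually, a speaker signs a single value covering a whole batch, such as the root of a Merkle hash tree, and attaches a short authentication path to each announcement. The sketch below shows that mechanism in miniature; the HMAC stands in for a real public-key signature, and the message formats are illustrative, not S-BGP's.

```python
import hashlib
import hmac

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree; returns the list of levels, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:                      # duplicate last node on odd levels
            lvl = lvl + [lvl[-1]]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def auth_path(levels, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    path = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        path.append((lvl[index ^ 1], index % 2))  # (sibling hash, node is right child?)
        index //= 2
    return path

def root_from(leaf, path):
    node = leaf
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node

# One signature covers a whole batch of announcements.
KEY = b"router-private-key"                   # stand-in for a real signing key
msgs = [f"UPDATE prefix 10.0.{i}.0/24".encode() for i in range(8)]
levels = build_tree([h(m) for m in msgs])
root = levels[-1][0]
signature = hmac.new(KEY, root, hashlib.sha256).digest()  # sign once for all 8

# A receiver checks one announcement with its O(log n) authentication path.
i = 5
assert root_from(h(msgs[i]), auth_path(levels, i)) == root
assert hmac.compare_digest(hmac.new(KEY, root, hashlib.sha256).digest(), signature)
print("announcement", i, "verified against a single amortized signature")
```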

    Machine Learning as a Service for High Energy Physics on heterogeneous computing resources

    Machine Learning (ML) techniques are ubiquitous in the High-Energy Physics (HEP) domain and will also play a significant role in the upcoming High-Luminosity LHC (HL-LHC) upgrade foreseen at CERN: a huge amount of data will be produced by the LHC and collected by the experiments, posing challenges at the exascale. Although ML models are successfully applied in many use cases (online and offline reconstruction, particle identification, detector simulation, Monte Carlo generation, to name a few), there is a constant search for scalable, performant, production-quality operation of ML-enabled workflows. In addition, the scenario is complicated by the gap between HEP physicists and ML experts, caused by the specificity of parts of typical HEP workflows and solutions, and by the difficulty of formulating HEP problems in a way that matches the skills of the Computer Science (CS) and ML community and hence its potential ability to step in and help. Among other factors, one technical obstacle lies in the difference between the data formats used by ML practitioners and physicists: the former mostly use flat data representations, while the latter typically store data in tree-based objects via the ROOT data format. Another obstacle to further adoption of ML techniques in HEP is the difficulty of securing adequate computing resources for training and inference of ML models in a way that is scalable and transparent with respect to CPU vs. GPU vs. TPU vs. other resources, as well as local vs. cloud resources. This creates a technical barrier that prevents a relatively large portion of HEP physicists from fully exploiting the potential of ML-enabled systems for scientific research. To close this gap, a Machine Learning as a Service for HEP (MLaaS4HEP) solution is presented as a product of R&D activities within the CMS experiment. It offers a service capable of directly reading ROOT-based data, using the ML solution provided by the user, and ultimately serving predictions from pre-trained ML models "as a service" accessible via the HTTP protocol. This solution can be used by physicists or by experts outside the HEP domain, and it provides access to local or remote data storage without requiring any modification of, or integration with, the experiment-specific framework. Moreover, MLaaS4HEP is built with a modular design allowing independent resource allocation, which opens up the possibility of training ML models on PB-size datasets remotely accessible from the WLCG sites without physically downloading data to local storage. To prove the feasibility and utility of the MLaaS4HEP service with large datasets, and thus be ready for the near future when an increase in produced data is expected, an exploration of different hardware resources is required. In particular, this work aims to provide the MLaaS4HEP service with transparent access to heterogeneous resources, which opens up the use of more powerful resources without requiring any effort from the user during the access and use phases.
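
    The data-format gap described above is concrete enough to sketch: a tree from a ROOT file is read into flat arrays and handed to a generic model served over HTTP. Below is a minimal illustration using the uproot library (commonly used for exactly this bridge); the file name, branch names, and endpoint URL are invented for the example, and the branches are assumed to hold one value per event.

```python
import json
import urllib.request

import numpy as np
import uproot  # pip install uproot

# Read a tree from a ROOT file into flat NumPy arrays. File and branch
# names below are illustrative, not from the original work.
with uproot.open("events.root") as f:
    tree = f["Events"]
    arrays = tree.arrays(["muon_pt", "muon_eta"], library="np")

# Stack branches into a feature matrix suitable for a generic ML model.
X = np.stack([arrays["muon_pt"], arrays["muon_eta"]], axis=1)

# Ask a hypothetical MLaaS-style HTTP prediction endpoint for inference.
payload = json.dumps({"features": X.tolist()}).encode()
req = urllib.request.Request(
    "http://localhost:8083/predict",          # illustrative URL
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    predictions = json.loads(resp.read())
print(predictions)
```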

    Managing Access Control in Virtual Private Networks

    Virtual Private Network technology allows remote network users to benefit from resources on a private network as if their host machines actually resided on the network. However, each resource on a network may also have its own access control policies, which may be completely unrelated to network access. Thus users' access to a network (even by VPN technology) does not guarantee their access to the sought resources. With the introduction of more complicated access privileges, such as delegated access, it is conceivable for a scenario to arise where a user can access a network remotely (because of direct permissions from the network administrator or by delegated permission) but cannot access any resources on the network. There is, therefore, a need for a network access control mechanism that understands the privileges of each remote network user on one hand, and the access control policies of various network resources on the other, and can thus aid a remote user in accessing these resources based on the user's privileges. This research presents a software solution in the form of a centralized access control framework called an Access Control Service (ACS) that can grant remote users network presence and simultaneously aid them in accessing various network resources with varying access control policies. At the same time, the ACS provides a centralized framework for administrators to manage access to their resources. The ACS achieves these objectives using VPN technology, network address translation, and by proxying various authentication protocols on behalf of remote users.
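
    The core decision the ACS centralizes, that a remote user reaches a resource only when both the network-level privilege (direct or delegated) and the resource's own access policy allow it, can be captured in a toy model. The sketch below is illustrative only; all names and policies are invented, and it omits the VPN, NAT, and authentication-proxying machinery.

```python
from dataclasses import dataclass, field

@dataclass
class Network:
    direct_users: set = field(default_factory=set)
    delegations: dict = field(default_factory=dict)   # delegatee -> delegator

    def has_presence(self, user: str) -> bool:
        """Network presence via direct permission or a delegation chain."""
        if user in self.direct_users:
            return True
        delegator = self.delegations.get(user)
        return delegator is not None and self.has_presence(delegator)

@dataclass
class Resource:
    name: str
    acl: set          # users allowed by the resource's own policy

def acs_allows(net: Network, res: Resource, user: str) -> bool:
    """Grant access only when both layers agree."""
    return net.has_presence(user) and user in res.acl

net = Network(direct_users={"alice"}, delegations={"bob": "alice"})
share = Resource("file-server", acl={"bob"})

print(acs_allows(net, share, "bob"))    # True: delegated presence + ACL entry
print(acs_allows(net, share, "alice"))  # False: presence but no ACL entry
print(acs_allows(net, share, "carol"))  # False: no network presence at all
```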

    Understanding the trust relationships of the web PKI

    TLS and the applications it secures (e.g., email, online banking, social media) rely on the web PKI to provide authentication. Without strong authentication guarantees, a capable attacker can impersonate trusted network entities and undermine both data integrity and confidentiality. At its core, the web PKI succeeds as a global authentication system because of the scalability afforded by trust: instead of requiring every network entity to directly authenticate every other network entity, network entities trust certification authorities (CAs) to perform authentication on their behalf. Prior work has extensively studied the TLS protocol and CA authentication of network entities (i.e., certificate issuance), but few have examined the most foundational aspect of trust management: which CAs are trusted by which TLS user agents, and why. One major reason for this disparity is the opacity of trust management in two regards: difficult data access and poor specifications. It is relatively easy to acquire and test popular TLS client/server software and issued certificates; tracking trust policies and deployments and evaluating CA operations is less straightforward, but just as important for securing the web PKI. This dissertation is one of the first attempts to overcome trust management opacity. By observing new measurement perspectives and developing novel fingerprinting techniques, we discover the CAs that operate trust anchors, the default trust anchors that popular TLS user agents rely on, and a general class of injected trust anchors: TLS interceptors. This research not only facilitates new ecosystem visibility, it also provides an empirical grounding for trust management specification and evaluation. Furthermore, our findings point to many instances of questionable, and sometimes broken, security practices, such as improperly identified CAs, inadvertent and overly permissive trust, and trivially exploitable injected trust. We argue that most of these issues stem from inadequate transparency, and that explicit mechanisms for linking trust anchors and root stores to their origins would help remedy these problems.
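
    One symptom the dissertation surfaces, injected trust anchors such as TLS interceptors, can in principle be flagged by checking whether the root anchoring a presented chain belongs to a known root store. The sketch below shows only that fingerprint-membership idea; the root-store contents are placeholders, and obtaining the actual verified root of a connection requires a fuller TLS client than Python's standard library offers.

```python
import hashlib
import ssl

def fingerprint(pem_cert: str) -> str:
    """SHA-256 fingerprint of a certificate given in PEM form."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha256(der).hexdigest()

# Placeholder root store: a real study harvests these fingerprints per
# user agent, platform, and version.
KNOWN_ROOT_STORE = {
    # "sha256-fingerprint-hex": "CA operator name",
}

def classify_anchor(root_pem: str) -> str:
    """Default anchor if the root is in the known store, else injected."""
    fp = fingerprint(root_pem)
    if fp in KNOWN_ROOT_STORE:
        return f"default trust anchor ({KNOWN_ROOT_STORE[fp]})"
    return "injected trust anchor (possible TLS interceptor)"

# Demo: fingerprint the leaf a server presents. (classify_anchor expects
# the *root* of the verified chain, which needs a fuller TLS client.)
leaf_pem = ssl.get_server_certificate(("example.org", 443))
print(fingerprint(leaf_pem))
```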

    A secure architecture enabling end-user privacy in the context of commercial wide-area location-enhanced web services

    Mobile location-based services have raised privacy concerns amongst mobile phone users, who may need to supply their identity and location information to untrustworthy third parties in order to access these applications. Widespread acceptance of such services may therefore depend on how privacy-sensitive information is handled, in order to restore users’ confidence in what could become the “killer app” of 3G networks. The work reported in this thesis is part of a larger project to provide a secure architecture to enable the delivery of location-based services over the Internet. The security of transactions, and in particular the privacy of the information transmitted, has been the focus of our research. In order to protect mobile users’ identities, we have designed and implemented a proxy-based middleware called the Orient Platform, together with its Orient Protocol, capable of translating their real identity into pseudonyms. In order to protect users’ privacy in terms of location information, we have designed and implemented a Location Blurring algorithm that intentionally downgrades the quality of location information to be used by location-based services. The algorithm takes into account a blurring factor set by the mobile user at her convenience and blurs her location, preventing real-time tracking by unauthorized entities. While it penalizes continuous location tracking, it returns accurate and reliable information in response to sporadic location queries. Finally, in order to protect the transactions and provide end-to-end security between all the entities involved, we have designed and implemented a Public Key Infrastructure based on a Security Mediator (SEM) architecture. The cryptographic algorithms used are identity-based, which makes digital certificate retrieval, path validation and revocation redundant in our environment. In particular, we have designed and implemented a cryptographic scheme based on Hess’ work [108], which represents, to our knowledge, the first identity-based signature scheme in the SEM setting. A special private key generation process has also been developed in order to enable entities to use a single private key in conjunction with multiple pseudonyms, which significantly simplifies key management. We believe our approach satisfies the security requirements of mobile users and can help restore their confidence in location-based services.
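
    The stated behaviour of the Location Blurring algorithm, penalizing continuous tracking while answering sporadic queries with bounded error, can be illustrated with a simplified scheme (not the thesis's exact algorithm): snap positions to a grid sized by the user's blurring factor and pin one reported point per cell, so a continuous poller sees no movement inside a cell while any single answer is off by at most roughly the cell size.

```python
import math
import random

class LocationBlurrer:
    """Toy location blurrer, illustrative only: snap positions to a grid
    sized by the user's blurring factor and return a fixed random point
    per cell, so continuous polling reveals no movement within a cell
    while any single query has error bounded by the cell size."""

    def __init__(self, blur_factor_m: float, seed: int = 0):
        self.cell = blur_factor_m            # cell edge length in metres
        self.rng = random.Random(seed)
        self._pinned = {}                    # cell id -> reported point

    def report(self, x_m: float, y_m: float):
        cell_id = (math.floor(x_m / self.cell), math.floor(y_m / self.cell))
        if cell_id not in self._pinned:
            # Pin one uniformly random point inside the cell.
            px = (cell_id[0] + self.rng.random()) * self.cell
            py = (cell_id[1] + self.rng.random()) * self.cell
            self._pinned[cell_id] = (px, py)
        return self._pinned[cell_id]

blur = LocationBlurrer(blur_factor_m=500)
# A tracker polling every second sees the same point while the user
# moves around inside one 500 m cell...
print(blur.report(120.0, 80.0))
print(blur.report(140.0, 95.0))
# ...but a sporadic query elsewhere is still accurate to ~cell size.
print(blur.report(2300.0, 1900.0))
```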

    Simulated penetration testing and mitigation analysis

    As corporate networks and Internet services become increasingly complex, it is hard to keep an overview of all deployed software, their potential vulnerabilities, and all existing security protocols. Simulated penetration testing was proposed to extend regular penetration testing by transferring gathered information about a network into a formal model and simulating an attacker in this model. Having a formal model of a network enables us to add a defender trying to mitigate the capabilities of the attacker with their own actions. We name this two-player planning task Stackelberg planning. The goal is to help administrators, penetration testing consultants, and the management level find weak spots in large computer infrastructures and to suggest cost-effective mitigations that lower the security risk. In this thesis, we first lay the formal and algorithmic foundations for Stackelberg planning tasks. By building on a classical planning framework, we can benefit from well-studied heuristics, pruning techniques, and other approaches to speed up the search, for example symbolic search. Second, we design a theory for privilege escalation and demonstrate the applicability of our framework to local computer networks. Third, we apply our framework to Internet-wide scenarios by investigating the robustness of both the email infrastructure and the web. Fourth, we make our findings and our toolchain easily accessible via web-based user interfaces.
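
    The leader-follower structure of Stackelberg planning can be shown in miniature: the defender commits to a budgeted set of mitigations, the attacker then plays a best response, and the defender chooses the commitment that minimizes the attacker's achievable reward. The brute-force sketch below uses invented mitigations, attacks, and numbers, and replaces the thesis's heuristic search over classical planning tasks with naive enumeration.

```python
from itertools import combinations

# Toy Stackelberg planning instance: all names and numbers are illustrative.
# Each mitigation has a cost and blocks some attacks; each attack yields a
# reward for the attacker if it remains unblocked.
mitigations = {
    "patch_webserver": {"cost": 2, "blocks": {"sqli", "rce"}},
    "enable_2fa":      {"cost": 3, "blocks": {"phishing"}},
    "segment_network": {"cost": 4, "blocks": {"lateral_move", "rce"}},
}
attacks = {"sqli": 5, "rce": 9, "phishing": 4, "lateral_move": 6}
BUDGET = 6

def follower_reward(blocked: set) -> int:
    """Attacker's best response: collect every unblocked attack."""
    return sum(r for a, r in attacks.items() if a not in blocked)

best = (float("inf"), None)
for k in range(len(mitigations) + 1):
    for combo in combinations(mitigations, k):
        cost = sum(mitigations[m]["cost"] for m in combo)
        if cost > BUDGET:
            continue                          # defender's budget constraint
        blocked = set().union(*(mitigations[m]["blocks"] for m in combo)) if combo else set()
        reward = follower_reward(blocked)
        if reward < best[0]:
            best = (reward, combo)

print(f"defender commits to {best[1]}, attacker reward drops to {best[0]}")
```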

    Practical realisation and elimination of an ECC-related software bug attack

    We analyse and exploit implementation features in OpenSSL version 0.9.8g which permit an attack against ECDH-based functionality. The attack, although more general, can recover the entire (static) private key from an associated SSL server via 633 adaptive queries when the NIST curve P-256 is used. One can view it as a software-oriented analogue of the bug attack concept due to Biham et al. and, consequently, as the first bug attack to be successfully applied against a real-world system. In addition to the attack and a posteriori countermeasures, we show that formal verification, while rarely used at present, is a viable means of detecting the features on which the attack hinges. Based on the security implications of the attack and the extra justification posed by the possibility of intentionally incorrect implementations in collaborative software development, we conclude that applying and extending the coverage of formal verification to augment existing test strategies for OpenSSL-like software should be deemed a worthwhile, long-term challenge. This work has been supported in part by EPSRC via grant EP/H001689/1 and by project SMART, funded by ENIAC Joint Undertaking (GA 120224).
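
    The figure of 633 adaptive queries reflects the attack's structure: each crafted query acts as an oracle answering one question about the private key, and the next query depends on the answer. The toy below reproduces only that adaptive skeleton against an abstract oracle; the real attack crafts ECDH points that trigger OpenSSL's faulty field arithmetic, which is well beyond this sketch.

```python
import secrets

# Toy model of an adaptive key-recovery attack: the oracle answers one
# yes/no question per query (in the real attack: "did the buggy modular
# reduction fire for this crafted ECDH point?"). Everything here is a
# stand-in; no elliptic-curve arithmetic is performed.

KEY_BITS = 16
secret_key = secrets.randbits(KEY_BITS)

def oracle(guess_prefix: int, bits: int) -> bool:
    """Stand-in for the server: reveals whether the top `bits` bits of
    the key equal `guess_prefix` (one adaptive query)."""
    return (secret_key >> (KEY_BITS - bits)) == guess_prefix

recovered = 0
queries = 0
for i in range(1, KEY_BITS + 1):
    queries += 1
    # Adaptively extend the recovered prefix by one bit: try a 1,
    # fall back to 0 if the oracle says no.
    if oracle((recovered << 1) | 1, i):
        recovered = (recovered << 1) | 1
    else:
        recovered = recovered << 1

assert recovered == secret_key
print(f"recovered {KEY_BITS}-bit key with {queries} adaptive queries")
```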

    Exploring the use of blockchain in academic management systems

    Master's project report, Information Security, Universidade de Lisboa, Faculdade de Ciências, 2020. Blockchain technology is a trending topic, with its visibility directly tied to cryptocurrencies, whose great expansion was driven by Bitcoin (BTC). Despite this recent "explosion", the core concepts of blockchain technology are decades old. Hashcash, for example, already used a validation system similar to Bitcoin's to mitigate spam and denial-of-service attacks. Although the terms Bitcoin and blockchain are strongly associated, they are not equivalent. Concretely, a blockchain is a data structure that stores information in a chain of linked blocks (block + chain), whose sequence is validated through cryptography, in particular hash functions. Extending the chain requires consensus among the participants on which block to add next; this consensus is reached through various mechanisms, such as Bitcoin's Nakamoto consensus, based on Proof-of-Work. Once a block is added, that information is propagated to the participants, giving everyone a global, coherent view of the blockchain. An important point is that data cannot be deleted once inserted, which, combined with cryptographic verification and the use of consensus, makes the blockchain resistant to manipulation of its contents. This technology clearly has great disruptive potential, which in turn has caused a digital "gold rush", leading many organizations to create their own blockchains in order to position themselves in the market. The proven use cases exploit the characteristics of blockchain and range from token-based financial applications that take advantage of decentralization to moving onto a blockchain those activities where auditability and tamper resistance are highly valued, such as supply chain management. qubIT, as a creator of software solutions, proposed this project to explore the use of this technology in its main business area, academic management systems. A proof of concept was thus devised with the goal of validating academic documents and sharing data between (and across) the Fenix instances of the institutions of Universidade de Lisboa. Drawing inspiration from one of blockchain's proven use cases, supply chain traceability, the vision of this proof of concept could be described as "academic document traceability". The question to be answered is: how can we assess the validity of an academic document beyond the digital signature affixed to it? One example is a document signed by someone who, at that moment, lacked the authority to do so. The goal is to publish information on a blockchain and thereby establish an immutable chronological record that can be used to lend greater legitimacy and transparency to an academic document. There are two main challenges in using blockchain for academic documents. The first stems from GDPR compliance: for example, given that data remains on the blockchain permanently, how should revocation of consent over already published data be handled? Related to this is the lack of legislative backing for values stored on a blockchain (their evidentiary value is not established), especially when compared with the existing backing for digital signatures.
    The first step was to choose a blockchain for the implementation. We began by deciding between using an existing open network or creating our own network with a permissioned blockchain. Here we analysed the use of Ethereum. With its network and support for several development languages, it was a viable option, with the advantage that we could benefit from already existing infrastructure, incurring only the costs associated with its use. However, due to those costs and the absence of regulation to frame them, we opted to create our own infrastructure through a permissioned solution. To that end, we analysed several implementations until the choice was narrowed down to two candidates, Fabric and Corda, which we compared extensively. The approach taken contrasted them in three areas: governance, i.e., which data is visible and how participants access it; support, i.e., how active the development team is and how incidents are resolved; and architecture, exploring the technical aspects related to programming language support and data storage. In the end, Corda was chosen because it is Java-based and could therefore be quickly integrated with the set of technologies already in use, accelerating the development of the proof of concept. Contributing to that process, a development environment was also set up with the tools needed to build Corda applications (CorDapps). Note, however, that Corda is not strictly a blockchain but rather a distributed ledger: in Corda, there is no shared global state but rather sets of records that are visible to entities according to the transactions in which they participate. Using the open source version of Corda, the Academic CorDapp was built. This application allowed information to be published and shared among several Fenix instances. The published data aggregated information about students, courses, grades, and documents and, once recorded on the ledger, made it possible to establish an immutable chronology of a student's academic progress at an institution. The application ran autonomously, via the command line, with its integration into Fenix conditional on a favourable assessment by the business solutions team. In that assessment, Corda was found to have some weaker technical aspects (e.g., database support) but to be capable of meeting the integration requirements. The main point raised, however, was that the value added by using blockchain was insufficient compared with the qualified digital signature mechanisms that already exist, especially when the LTV standard is used. We then re-evaluated the options previously excluded when choosing Corda. The Ethereum solution remained unviable for the reasons that had determined its initial exclusion: lack of legal backing and the difficulty of estimating operating costs. The possibility of reimplementing the application using Fabric or another permissioned solution was discussed; although this would resolve Corda's technological limitations, it could not overcome the comparison with digital signatures that compromised commercial viability. The team therefore gave an unfavourable assessment and decided that the project would not advance to the production integration phase.
    Notwithstanding this outcome, the project was fruitful in building a knowledge base about existing blockchain implementations that can properly guide future projects, and it left in place tooling that will accelerate development and increase competitiveness in future opportunities in this business space.
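
    The abstract's working definition of a blockchain, records chained and validated by hash functions so that nothing inserted can be silently altered, is easy to make concrete. The sketch below (plain Python, not Corda or a CorDapp) builds an append-only chain of invented academic records and shows the integrity check failing after tampering.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash every field except the hash itself, in a stable order."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    """Append a new block linked to the previous one by its hash."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "record": record,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)

def valid(chain: list) -> bool:
    """Recompute every hash and check each back-link."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append(chain, {"student": "s123", "event": "enrolled", "course": "M.Sc. Security"})
append(chain, {"student": "s123", "event": "diploma-issued", "doc_hash": "ab12"})
print(valid(chain))                       # True
chain[0]["record"]["event"] = "expelled"  # tamper with history...
print(valid(chain))                       # False: the chain detects it
```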