10 research outputs found

    Design and evaluation of novel authentication, authorization and border protection mechanisms for modern information security architectures

    Get PDF
    In recent years, people's real and digital lives have become more intertwined than ever, and users' information has acquired enormous value both for companies and for attackers. Meanwhile, the consequences of misusing that information are increasingly worrying. The number of security breaches keeps growing every day, and information security architectures, when designed correctly, are the safest bet to break this upward trend. This thesis contributes to three of the fundamental pillars of any information security architecture (authentication, authorization, and security of data in transit), improving the security and privacy provided to the information involved. First, authentication aims to verify that users are who they claim to be. As with other tasks that require user interaction, maintaining the balance between security and usability is essential in authentication. For this reason, we have designed an authentication methodology based on the photoplethysmogram (PPG). In the proposed methodology, each user's model contains a set of isolated cycles of their PPG signal, and the Manhattan distance is used to compute the distance between models. This methodology has been evaluated paying special attention to long-term results. The results show that the impressive error values achievable in the short term (EER values below 1%) grow rapidly as the time between model creation and evaluation increases (the EER rises to 20% during the first 24 hours and remains stable thereafter). Although the long-term error values may be too high for PPG to serve as a reliable standalone authentication alternative, it can be used in a complementary fashion (e.g. as a second authentication factor) alongside other authentication alternatives, enhancing them with interesting properties such as liveness detection. After successful authentication, the authorization process determines whether the action requested from the system should be allowed. As the new data protection laws indicate, users are the real owners of their information and should therefore have the means to manage their digital information effectively. The OAuth framework, which allows users to authorize a third-party application to access their protected resources, can be considered the first solution along these lines. In this framework, the user's authorization is embodied in an access token that the third party must present whenever it wants to access one of the user's resources. To unleash its full potential, we have extended this framework from three different perspectives. First, we have proposed a protocol that allows the authorization server to verify that the user is present every time the third-party application requests access to one of their resources. This check is performed through transparent authentication based on the biometric signals acquired by smartwatches and/or fitness trackers, and can mitigate the serious consequences of access token exfiltration in many situations.
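The PPG matching step described above can be illustrated with a minimal sketch. The cycle length, threshold, and function names below are illustrative assumptions, not the thesis' actual implementation.

```python
import numpy as np

# Minimal sketch of the matching step: a user model is a set of isolated,
# resampled PPG cycles, and a probe cycle is accepted when its minimum
# Manhattan (L1) distance to the enrolled cycles falls below a threshold.
# Cycle length and threshold are illustrative assumptions.

CYCLE_LEN = 100          # samples per normalised PPG cycle (assumed)
THRESHOLD = 5.0          # acceptance threshold, tuned on enrolment data (assumed)

def enrol(cycles: list[np.ndarray]) -> np.ndarray:
    """Build a user model as a matrix of isolated, length-normalised cycles."""
    return np.vstack([np.interp(np.linspace(0, 1, CYCLE_LEN),
                                np.linspace(0, 1, len(c)), c) for c in cycles])

def verify(model: np.ndarray, probe: np.ndarray) -> bool:
    """Accept the probe cycle if it is L1-close to any enrolled cycle."""
    probe = np.interp(np.linspace(0, 1, CYCLE_LEN),
                      np.linspace(0, 1, len(probe)), probe)
    distances = np.abs(model - probe).sum(axis=1)   # Manhattan distances
    return distances.min() < THRESHOLD
```

In practice the threshold would be chosen on enrolment data to trade false accepts against false rejects, which is where the reported EER figures come from.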
Second, we have developed a new protocol to authorize third-party applications to access the user's data when these applications are not web applications but are served through messaging platforms. The proposed protocol not only deals with usability aspects (allowing the authorization process to be carried out through the same interface the user was already using to consume the service, i.e. the messaging platform) but also addresses the security issues that arise in this new scenario. Finally, we have presented a protocol in which the user requesting access to the protected resources is not their owner. This new mechanism relies on a new OAuth grant type for the interaction between the authorization server and both users, and on an OPA profile for the definition and evaluation of access policies. On a resource access attempt, the resource owner may be queried interactively to approve the access, thereby enabling user-to-user delegation. After successful authentication and authorization, the user gains access to the protected resource. Security of data in transit protects information while it is transmitted from the user's device to the resource server and back. Encryption, while keeping information safe from prying eyes, also prevents security appliances from doing their job; for example, firewalls cannot inspect encrypted traffic for threats. However, exposing users' information to such devices could pose a privacy problem in certain scenarios. We have therefore proposed a method based on Secure Multi-Party Computation (SMC) that allows network functions to be performed without compromising traffic privacy. This approach exploits the parallelism intrinsic to network scenarios, as well as the adaptive use of different representations of the network function to match execution to the current state of the network. In our tests we have analysed the secure decryption of traffic using the ChaCha20 algorithm, showing that we can evaluate the traffic while introducing very low latencies (below 3 ms) when the network load remains sufficiently low, and that we can process up to 1.89 Gbps at the cost of higher introduced latency. All in all, despite the performance penalty traditionally associated with Secure Computation applications, we have presented an efficient and flexible method that could bring the secure evaluation of network functions to real-world scenarios.
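As a point of reference for the traffic-decryption network function mentioned above, the sketch below shows plain (non-SMC) ChaCha20 decryption of a record using Python's cryptography package. In the thesis this is the operation evaluated obliviously under SMC; the key and nonce values here are placeholders, and the framing is an assumption of this example.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

# Plain-text reference for the network function evaluated under SMC in the
# thesis: ChaCha20 decryption of an intercepted record. Key and nonce layout
# (32-byte key, 16-byte counter||nonce block) follow the cryptography
# package's ChaCha20 API; the values below are placeholders.

def chacha20_decrypt(key: bytes, nonce16: bytes, ciphertext: bytes) -> bytes:
    cipher = Cipher(algorithms.ChaCha20(key, nonce16), mode=None)
    return cipher.decryptor().update(ciphertext)

if __name__ == "__main__":
    key = bytes(32)            # placeholder key
    nonce16 = bytes(16)        # 4-byte counter || 12-byte nonce, all zero here
    ct = chacha20_decrypt(key, nonce16, b"hello")   # stream cipher: enc == dec
    print(chacha20_decrypt(key, nonce16, ct))       # b'hello'
```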

    Software Defined Application Delivery Networking

    Get PDF
    In this thesis we present the architecture, design, and prototype implementation details of AppFabric. AppFabric is a next-generation application delivery platform for easily creating, managing and controlling massively distributed and very dynamic application deployments that may span multiple datacenters. Over the last few years, the need for more flexibility, finer control, and automatic management of large (and messy) datacenters has stimulated technologies for virtualizing the infrastructure components and placing them under software-based management and control; generically called Software-defined Infrastructure (SDI). However, current applications are not designed to leverage the dynamism and flexibility offered by SDI, and they mostly depend on a mix of different techniques including manual configuration, specialized appliances (middleboxes), and (mostly) proprietary middleware solutions, together with a team of extremely conscientious and talented system engineers, to get their applications deployed and running. AppFabric 1) automates the whole control and management stack of application deployment and delivery, 2) allows application architects to define logical workflows consisting of application servers, message-level middleboxes, packet-level middleboxes and network services (both local and wide-area) composed over application-level routing policies, and 3) provides the abstraction of an application cloud that allows the application to dynamically (and automatically) expand and shrink its distributed footprint across multiple geographically distributed datacenters operated by different cloud providers. The architecture consists of a hierarchical control plane system called Lighthouse and a fully distributed data plane design (with no special hardware components such as service orchestrators, load balancers, message brokers, etc.) called OpenADN. The current implementation (under active development) consists of ~10000 lines of Python and C code. AppFabric will allow applications to fully leverage the opportunities provided by modern virtualized Software-Defined Infrastructures. It will serve as the platform for deploying massively distributed and extremely dynamic next-generation application use-cases, including: Internet-of-Things/Cyber-Physical Systems: Through support for managing distributed gather-aggregate topologies common to most Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) use-cases. By their very nature, IoT and CPS use cases are massively distributed and have different levels of computation and storage requirements at different locations. Also, they have variable latency requirements for their different distributed sites. Some services, such as device controllers, in an IoT/CPS application workflow may need to gather, process and forward data under near-real-time constraints and hence need to be as close to the device as possible. Other services may need more computation to process aggregated data to drive long-term business intelligence functions. AppFabric has been designed to provide support for such very dynamic, highly diversified and massively distributed application use-cases. Network Function Virtualization: Through support for heterogeneous workflows, application-aware networking, and network-aware application deployments, AppFabric will enable new partnerships between Application Service Providers (ASPs) and Network Service Providers (NSPs).
An application workflow in AppFabric may comprise application services, packet and message-level middleboxes, and network transport services chained together over an application-level routing substrate. The application-level routing substrate allows policy-based service chaining, where the application may specify policies for routing its traffic over different services based on application-level content or context. Virtual worlds/multiplayer games: Through support for creating, managing and controlling dynamic and distributed application clouds needed by these applications. AppFabric allows the application to easily specify policies to dynamically grow and shrink the application's footprint over different geographical sites, on demand. Mobile Apps: Through support for extremely diversified and very dynamic application contexts typical of such applications. Also, AppFabric provides support for automatically managing massively distributed service deployment and controlling application traffic based on application-level policies. This allows mobile applications to provide the best Quality-of-Experience to their users. This thesis is the first to handle and provide a complete solution for such a complex and relevant architectural problem that is expected to touch each of our lives by enabling exciting new application use-cases that are not possible today. Also, AppFabric is a non-proprietary platform that is expected to spawn lots of innovations, both in the design of the platform itself and in the features it provides to applications. AppFabric still needs many iterations, both in terms of design and implementation maturity. This thesis is not the end of the journey for AppFabric, but rather just the beginning.
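The policy-based service chaining described above might be expressed roughly as follows. This is a hypothetical sketch of an application-level routing policy, not AppFabric's actual configuration format or API; all field names and services are illustrative.

```python
# Hypothetical sketch of an application-level routing policy of the kind
# AppFabric's routing substrate could evaluate: each rule matches on
# message-level content/context and maps the message to a service chain.
# Field names and services are illustrative, not AppFabric's real API.

POLICY = [
    {"match": {"path_prefix": "/checkout", "region": "eu"},
     "chain": ["waf", "message_transcoder", "orders_service_eu"]},
    {"match": {"path_prefix": "/api"},
     "chain": ["rate_limiter", "api_service"]},
    {"match": {},                                  # default rule
     "chain": ["web_frontend"]},
]

def _matches(message: dict, match: dict) -> bool:
    for key, want in match.items():
        if key == "path_prefix":
            if not str(message.get("path", "")).startswith(want):
                return False
        elif message.get(key) != want:
            return False
    return True

def select_chain(message: dict) -> list[str]:
    """Return the first service chain whose match predicate the message satisfies."""
    for rule in POLICY:
        if _matches(message, rule["match"]):
            return rule["chain"]
    return []

print(select_chain({"path": "/checkout/cart", "region": "eu"}))
# ['waf', 'message_transcoder', 'orders_service_eu']
```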

    Design and Verification of Specialised Security Goals for Protocol Families

    Get PDF
    Communication Protocols form a fundamental backbone of our modern information networks. These protocols provide a framework to describe how agents - Computers, Smartphones, RFID Tags and more - should structure their communication. As a result, the security of these protocols is implicitly trusted to protect our personal data. In 1997, Lowe presented ‘A Hierarchy of Authentication Specifications’, formalising a set of security requirements that might be expected of communication protocols. The value of these requirements is that they can be formally tested and verified against a protocol specification. This allows a user to have confidence that their communications are protected in ways that are uniformly defined and universally agreed upon. Since that time, the range of objectives and applications of real-world protocols has grown. Novel requirements - such as checking the physical distance between participants, or evolving trust assumptions of intermediate nodes on the network - mean that new attack vectors are found on a frequent basis. The challenge, then, is to define security goals which will guarantee security, even when the nature of these attacks is not known. In this thesis, a methodology for the design of security goals is created. It is used to define a collection of specialised security goals for protocols in multiple different families, by considering tailor-made models for these specific scenarios. For complex requirements, theorems are proved that simplify analysis, allowing the verification of security goals to be efficiently modelled in automated prover tools.
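To make the idea of a checkable authentication goal concrete, here is a toy, hedged sketch of testing a Lowe-style non-injective agreement property over recorded protocol traces. This is not the thesis' formalism or tooling (which would typically use a symbolic prover over a protocol specification); the event encoding is an assumption of this example.

```python
# Toy illustration of a Lowe-style authentication goal checked over traces.
# Real verification uses symbolic provers over protocol specifications; this
# sketch only shows what "non-injective agreement on d" means operationally.
#
# An event is (claim, agent, peer, data):
#   ("running", A, B, d)  - A believes it is running the protocol with B on data d
#   ("commit",  B, A, d)  - B finishes, believing it ran the protocol with A on d

def non_injective_agreement(trace):
    """Every commit by B with A on d must be preceded by a matching running event."""
    running = set()
    for claim, agent, peer, data in trace:
        if claim == "running":
            running.add((agent, peer, data))
        elif claim == "commit" and (peer, agent, data) not in running:
            return False, (agent, peer, data)
    return True, None

ok_trace = [("running", "A", "B", "n1"), ("commit", "B", "A", "n1")]
bad_trace = [("commit", "B", "A", "n2")]          # B commits with no matching run
print(non_injective_agreement(ok_trace))           # (True, None)
print(non_injective_agreement(bad_trace))          # (False, ('B', 'A', 'n2'))
```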

    Stealth Key Exchange and Confined Access to the Record Protocol Data in TLS 1.3

    Get PDF
    We show how to embed a covert key exchange sub-protocol within a regular TLS 1.3 execution, generating a stealth key in addition to the regular session keys. The idea, which has appeared in the literature before, is to use the exchanged nonces to transport another key value. Our contribution is to give a rigorous model and analysis of the security of such embedded key exchanges, requiring that the stealth key remains secure even if the regular key is under adversarial control. Specifically, for our stealth version of the TLS 1.3 protocol, we show that this extra key is secure in this setting under the common assumptions about the TLS protocol. As an application of stealth key exchange we discuss sanitizable channel protocols, where a designated party can partly access and modify payload data in a channel protocol. This may be, for instance, an intrusion detection system monitoring the incoming traffic for malicious content and putting suspicious parts in quarantine. The noteworthy feature, inherited from the stealth key exchange part, is that the sender and receiver can use the extra key to still communicate securely and covertly within the sanitizable channel, e.g., by pre-encrypting confidential parts and making only dedicated parts available to the sanitizer. We discuss how such sanitizable channels can be implemented with authenticated encryption schemes like GCM or ChaChaPoly. In combination with our stealth key exchange protocol, we thus derive a full-fledged sanitizable connection protocol, including key establishment, which perfectly complies with regular TLS 1.3 traffic on the network level. We also assess the potential effectiveness of the approach for the intrusion detection system Snort.
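The nested-protection idea behind sanitizable channels can be sketched as follows. This is an illustrative construction using AES-GCM from Python's cryptography package, not the paper's exact protocol; key derivation, nonce handling, and record framing are assumptions of this example.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative sketch of the sanitizable-channel idea: confidential parts are
# pre-encrypted under the stealth key (the sanitizer cannot read them), then
# the whole record is protected under the regular channel key (the sanitizer
# holds this one and can inspect the dedicated parts). Framing is assumed.

def sender_encrypt(stealth_key, channel_key, confidential, inspectable):
    n_inner, n_outer = os.urandom(12), os.urandom(12)
    inner = AESGCM(stealth_key).encrypt(n_inner, confidential, None)
    record = len(inner).to_bytes(4, "big") + inner + inspectable
    outer = AESGCM(channel_key).encrypt(n_outer, record, None)
    return n_inner, n_outer, outer

def sanitizer_view(channel_key, n_outer, outer):
    """The sanitizer sees only the inspectable part; the inner blob stays opaque."""
    record = AESGCM(channel_key).decrypt(n_outer, outer, None)
    inner_len = int.from_bytes(record[:4], "big")
    return record[4 + inner_len:]          # inspectable plaintext only

stealth_key, channel_key = AESGCM.generate_key(128), AESGCM.generate_key(128)
n_i, n_o, wire = sender_encrypt(stealth_key, channel_key, b"secret", b"GET /index")
print(sanitizer_view(channel_key, n_o, wire))   # b'GET /index'
```

The receiver, who holds both keys, would additionally decrypt the inner blob under the stealth key to recover the confidential part.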

    Mitigating Botnet-based DDoS Attacks against Web Servers

    Get PDF
    Distributed denial-of-service (DDoS) attacks have become widespread on the Internet. They continuously target retail merchants, financial companies and government institutions, disrupting the availability of their online resources and causing millions of dollars of financial losses. Software vulnerabilities and the proliferation of malware have helped create a class of application-level DDoS attacks using networks of compromised hosts (botnets). In a botnet-based DDoS attack, an attacker orders large numbers of bots to send seemingly regular HTTP and HTTPS requests to a web server, so as to deplete the server's CPU, disk, or memory capacity. Researchers have proposed client authentication mechanisms, such as CAPTCHA puzzles, to distinguish bot traffic from legitimate client activity and discard bot-originated packets. However, CAPTCHA authentication is vulnerable to denial-of-service and artificial intelligence attacks. This dissertation proposes that clients instead use hardware tokens to authenticate in a federated authentication environment. The federated authentication solution must resist both man-in-the-middle and denial-of-service attacks. The proposed system architecture uses the Kerberos protocol to satisfy both requirements. This work proposes novel extensions to Kerberos to make it more suitable for generic web authentication. A server could verify client credentials and blacklist repeated offenders. Traffic from blacklisted clients, however, still traverses the server's network stack and consumes server resources. This work proposes Sentinel, a dedicated front-end network device that intercepts server-bound traffic, verifies authentication credentials and filters blacklisted traffic before it reaches the server. Using a front-end device also allows transparently deploying hardware acceleration using network co-processors. Network co-processors can discard blacklisted traffic at the hardware level before it wastes front-end host resources. We implement the proposed system architecture by integrating existing software applications and libraries. We validate the system implementation by evaluating its performance under DDoS attacks consisting of floods of HTTP and HTTPS requests.
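The filtering role of a front-end device like Sentinel can be illustrated with a minimal, hypothetical sketch. The credential check and blacklist logic below are assumptions for illustration, not the dissertation's implementation; a real deployment would validate an actual Kerberos service ticket.

```python
# Minimal sketch of the front-end filtering idea behind Sentinel: traffic from
# blacklisted clients is dropped before it reaches the web server, and only
# requests carrying a verifiable credential are forwarded. The credential
# check is a stand-in; real code would validate a Kerberos service ticket.

BLACKLIST: set[str] = set()
OFFENCE_LIMIT = 3
offences: dict[str, int] = {}

def credential_is_valid(token: bytes) -> bool:
    return token.startswith(b"KRB_TICKET:")       # placeholder check (assumption)

def handle_packet(src_ip: str, token: bytes) -> str:
    if src_ip in BLACKLIST:
        return "drop"                              # never reaches the server
    if not credential_is_valid(token):
        offences[src_ip] = offences.get(src_ip, 0) + 1
        if offences[src_ip] >= OFFENCE_LIMIT:
            BLACKLIST.add(src_ip)                  # repeated offender
        return "reject"
    return "forward"                               # hand off to the web server

for _ in range(3):
    handle_packet("203.0.113.7", b"garbage")
print(handle_packet("203.0.113.7", b"KRB_TICKET:abc"))   # 'drop' (blacklisted)
```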

    Managing Networked IoT Assets Using Practical and Scalable Traffic Inference

    Full text link
    The Internet has recently witnessed unprecedented growth of a class of connected assets called the Internet of Things (IoT). Due to relatively immature manufacturing processes and limited computing resources, IoTs have inadequate device-level security measures, exposing the Internet to various cyber risks. Therefore, network-level security has been considered a practical and scalable approach for securing IoTs, but this cannot be employed without discovering the connected devices and characterizing their behavior. Prior research leveraged predictable patterns in IoT network traffic to develop inference models. However, they fall short of expectations in addressing practical challenges, preventing them from being deployed in production settings. This thesis identifies four practical challenges and develops techniques to address them, which can help secure businesses and protect user privacy against growing cyber threats. My first contribution balances prediction gains against the computing costs of traffic features for IoT traffic classification and monitoring. I develop a method to find the best set of specialized models for multi-view classification that can reach an average accuracy of 99%, i.e., a similar accuracy compared to existing works but reducing the cost by a factor of 6. I develop a hierarchy of one-class models per asset class, each at a certain granularity, to progressively monitor IoT traffic. My second contribution addresses the challenges of measurement costs and data quality. I develop an inference method that uses stochastic and deterministic modeling to predict IoT devices in home networks from opaque and coarse-grained IPFIX flow data. Evaluations show that false positive rates can be reduced by 75% compared to related work without significantly affecting true positives. My third contribution focuses on the challenge of concept drift by analyzing over six million flow records collected from 12 real home networks. I develop several inference strategies and compare their performance under concept drift, particularly when labeled data is unavailable in the testing phase. Finally, my fourth contribution studies the resilience of machine learning models against adversarial attacks, with a specific focus on decision tree-based models. I develop methods to quantify the vulnerability of a given decision tree-based model against data-driven adversarial attacks and refine vulnerable decision trees, making them robust against 92% of adversarial attacks.
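A hedged sketch of the per-asset-class one-class monitoring idea is shown below, using scikit-learn's IsolationForest on flow features. The features, asset classes, and synthetic data are illustrative assumptions of this example, not the thesis' actual models or dataset.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative sketch of per-asset-class one-class monitoring: one model is
# trained on benign flow features of each IoT class, and traffic that none of
# the class models accepts is flagged. Features (bytes, packets, duration,
# distinct ports) and the classes themselves are assumptions of this example.

rng = np.random.default_rng(0)
benign = {
    "camera":  rng.normal([2e5, 300, 60, 2], [2e4, 30, 5, 1], size=(500, 4)),
    "speaker": rng.normal([5e3,  40, 10, 1], [5e2,  5, 2, 1], size=(500, 4)),
}

models = {cls: IsolationForest(random_state=0).fit(flows)
          for cls, flows in benign.items()}

def monitor(flow: np.ndarray) -> str:
    """Return the first asset class whose one-class model accepts the flow, else 'anomalous'."""
    for cls, model in models.items():
        if model.predict(flow.reshape(1, -1))[0] == 1:   # 1 = inlier
            return cls
    return "anomalous"

print(monitor(np.array([2.1e5, 310, 58, 2])))   # likely 'camera'
print(monitor(np.array([9e6, 9000, 1, 50])))    # likely 'anomalous'
```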

    Software-Defined Networking: A Comprehensive Survey

    Get PDF
    The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms—with a focus on aspects such as resiliency, scalability, performance, security, and dependability—as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
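To illustrate the separation between control logic and forwarding that the survey describes, here is a toy sketch of a controller installing match-action rules into a switch's flow table. This is a schematic abstraction, not OpenFlow's actual message format or any specific controller's API.

```python
# Toy illustration of the SDN split: the control plane decides policy and
# installs match-action rules; the data plane only matches packets against
# its flow table and applies the stored action. The rule fields and actions
# below are a schematic abstraction, not OpenFlow messages.

class Switch:
    def __init__(self):
        self.flow_table = []                       # populated by the controller

    def install_rule(self, match: dict, action: str, priority: int = 0):
        self.flow_table.append((priority, match, action))
        self.flow_table.sort(key=lambda r: -r[0])  # highest priority first

    def forward(self, packet: dict) -> str:
        for _, match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"                # table miss

sw = Switch()
# Control plane: centralised policy pushed down via a southbound interface.
sw.install_rule({"dst_port": 22}, "drop", priority=10)
sw.install_rule({"dst_ip": "10.0.0.5"}, "output:3", priority=5)

print(sw.forward({"dst_ip": "10.0.0.5", "dst_port": 22}))   # 'drop'
print(sw.forward({"dst_ip": "10.0.0.9", "dst_port": 80}))   # 'send_to_controller'
```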

    Protecting applications using trusted execution environments

    Get PDF
    While cloud computing has been broadly adopted, companies that deal with sensitive data are still reluctant to do so due to privacy concerns or legal restrictions. Vulnerabilities in complex cloud infrastructures, resource sharing among tenants, and malicious insiders pose a real threat to the confidentiality and integrity of sensitive customer data. In recent years, trusted execution environments (TEEs), hardware-enforced isolated regions that can protect code and data from the rest of the system, have become available as part of commodity CPUs. However, designing applications for execution within TEEs requires careful consideration of the elevated threats that come with running in a fully untrusted environment. Interaction with the environment should be minimised, but some cooperation with the untrusted host is required, e.g. for disk and network I/O, via a host interface. Implementing this interface while maintaining the security of sensitive application code and data is a fundamental challenge. This thesis addresses this challenge and discusses how TEEs can be leveraged to secure existing applications efficiently and effectively in untrusted environments. We explore this in the context of three systems that deal with the protection of TEE applications and their host interfaces: SGX-LKL is a library operating system that can run full unmodified applications within TEEs with a minimal general-purpose host interface. By providing broad system support inside the TEE, the reliance on the untrusted host can be reduced to a minimal set of low-level operations that cannot be performed inside the enclave. SGX-LKL provides transparent protection of the host interface for both disk and network I/O. Glamdring is a framework for the semi-automated partitioning of TEE applications into an untrusted and a trusted compartment. Based on source-level annotations, it uses either dynamic or static code analysis to identify sensitive parts of an application. Taking into account the objectives of a small TCB size and low host interface complexity, it defines an application-specific host interface and generates partitioned application code. EnclaveDB is a secure database using Intel SGX, based on a partitioned in-memory database engine. The core of EnclaveDB is its logging and recovery protocol for transaction durability. For this, it relies on the database log managed and persisted by the untrusted database server. EnclaveDB protects against advanced host interface attacks and ensures the confidentiality, integrity, and freshness of sensitive data.
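The source-level annotation idea behind Glamdring can be illustrated with a hypothetical Python sketch. Glamdring itself targets C applications and derives the partition through static or dynamic analysis, so the decorator, registry, and function names below are purely illustrative assumptions.

```python
# Hypothetical illustration of annotation-driven partitioning in the spirit of
# Glamdring (which actually works on C code via static/dynamic analysis): the
# developer marks sensitive entry points, and a build step would place those
# functions, plus the code and data they depend on, inside the enclave.

TRUSTED_FUNCTIONS: set[str] = set()     # would drive enclave code generation

def sensitive(func):
    """Annotation marking a function that must run inside the TEE."""
    TRUSTED_FUNCTIONS.add(func.__name__)
    return func

@sensitive
def decrypt_customer_record(blob: bytes, key: bytes) -> bytes:
    # placeholder body; real code and its data dependencies go in the enclave
    return bytes(b ^ k for b, k in zip(blob, key))

def render_dashboard(record: bytes) -> str:
    # unannotated: stays in the untrusted compartment and crosses the host
    # interface only through the generated, narrowed entry points
    return f"{len(record)} bytes rendered"

print(TRUSTED_FUNCTIONS)    # {'decrypt_customer_record'}
```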

    DDoS Capability and Readiness - Evidence from Australian Organisations

    Get PDF
    A common perception of cyber defence is that it should protect systems and data from malicious attacks, ideally keeping attackers outside of secure perimeters and preventing entry. Much of the effort in traditional cyber security defence is focused on removing gaps in security design and preventing those with legitimate permissions from becoming a gateway or resource for those seeking illegitimate access. By contrast, Distributed Denial of Service (DDoS) attacks do not use application backdoors or software vulnerabilities to create their impact. They instead utilise legitimate entry points and knowledge of system processes for illegitimate purposes. DDoS seeks to overwhelm system and infrastructure resources so that legitimate requests are prevented from reaching their intended destination. For this thesis, a literature review was performed using sources from two perspectives. Reviews of both industry literature and academic literature were combined to build a balanced view of knowledge of this area. Industry and academic literature revealed that DDoS is outpacing internet growth, with vandalism, criminal and ideological motivations rising to prominence. From a defence perspective, the human factor remains a weak link in cyber security due to proneness to mistakes, oversights and the variance in approach and methods expressed by differing cultures. How cyber security is perceived, approached, and applied can have a critical effect on the overall outcome achieved, even when similar technologies are implemented. In addition, variance in the technical capabilities of those responsible for the implementation may create further gaps and vulnerabilities. While discussing technical challenges and theoretical concepts, existing literature failed to cover the experiences held by the victim organisations, or the thoughts and feelings of their personnel. This thesis addresses these identified gaps through exploratory research, which used a mix of descriptive and qualitative analysis to develop results and conclusions. The websites of 60 Australian organisations were analysed to uncover the level and quality of cyber security information they were willing to share and the methods and processes they used to engage with their audience. In addition, semi-structured interviews were conducted with 30 employees from around half of the organisations whose websites were analysed. These were analysed using NVivo12 qualitative analysis software. The difficulty experienced with attracting willing participants reflected the comfort that organisations showed with sharing cyber security information and experiences. However, themes found within the results show that, while DDoS is considered a valid threat, without encouragement to collaborate and standardise minimum security levels, firms may be missing out on valuable strategies to improve their cyber security postures. Further, this reluctance to share leads organisations to rely on their own internal skill and expertise, thus failing to realise the benefits of established frameworks and increased diversity in the workforce. Along with the size of the participant pool, other limitations included the diversity of participants and the impact of COVID-19, which may have influenced participants' thoughts and reflections. These limitations, however, present opportunities for future studies using greater participant numbers or a narrower target focus.
Either option would be beneficial to the recommendations of this study, which were made from a practical, social, theoretical and policy perspective. On a practical and social level, organisational capabilities suffer due to the lack of information sharing, and this extends to the community when similar restrictions prevent collaboration. Sharing knowledge and experiences while protecting sensitive information is a worthy goal, and this is something that can lead to improved defence. However, while improved understanding is one way to reduce the impact of cyber-attacks, the introduction of minimum cyber security standards for products could reduce the ease with which devices can be used to facilitate attacks, but only if policy and effective governance ensure product compliance with legislation. One positive side to COVID-19's push to remote working was an increase in digital literacy. As more roles were temporarily removed from their traditional physical workplace, many employees needed to rapidly accelerate their digital competency to continue their employment. To assist this transition, organisations acted to implement technology solutions that eased the ability for these roles to be undertaken remotely and, as a consequence, they opened up these roles to a greater pool of available candidates. Many of these roles are no longer limited to the geographical location of potential employees or traditional hours of availability. Many of these roles could be accessed from almost anywhere, at any time, which had a positive effect on organisational capability and digital sustainability.