
    Information Leakage Attacks and Countermeasures

    The scientific community has been working consistently on the pervasive problem of information leakage, uncovering numerous attack vectors and proposing various countermeasures. Despite these efforts, leakage incidents remain prevalent as the complexity of systems and protocols increases and sophisticated modeling methods become more accessible to adversaries. This work studies how information leakages manifest in, and impact, interconnected systems and their users. We first focus on online communications and investigate leakages in the Transport Layer Security (TLS) protocol. Using modern machine learning models, we show that an eavesdropping adversary can efficiently exploit meta-information (e.g., packet size) not protected by TLS encryption to launch fingerprinting attacks at an unprecedented scale, even under non-ideal conditions. We then turn our attention to ultrasonic communications and discuss their security shortcomings and how adversaries could exploit them to compromise users of anonymity networks, even though such networks aim to offer stronger privacy than TLS. Building on these findings, we delve into physical-layer leakages that concern a wide array of (networked) systems, such as servers, embedded nodes, Tor relays, and hardware cryptocurrency wallets. We revisit location-based side-channel attacks and develop a neural network that exploits them. Our model demonstrates the capabilities of a modern adversary, but also serves as an inexpensive tool for auditors to detect such leakages early in the development cycle. Subsequently, we investigate techniques that further minimize the impact of leakages found in production components. Our proposed system design distributes both the custody of secrets and the execution of cryptographic operations across several components, thus making the exploitation of leaks difficult.
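    As a rough illustration of the kind of metadata-based fingerprinting described above, the sketch below trains a classifier on packet-size sequences. The feature extraction, synthetic traces, and model choice are assumptions made purely for illustration; the thesis's actual models and datasets are not reproduced here.

```python
# Illustrative sketch: traffic fingerprinting from TLS packet-size metadata.
# Assumes flows are already captured as lists of signed packet sizes
# (positive = outgoing, negative = incoming); the traces below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def featurize(flow, n=50):
    """Pad/truncate a flow to its first n packet sizes plus simple aggregates."""
    sizes = np.zeros(n)
    sizes[:min(n, len(flow))] = flow[:n]
    total_out = sum(s for s in flow if s > 0)
    total_in = -sum(s for s in flow if s < 0)
    return np.concatenate([sizes, [total_out, total_in, len(flow)]])

# Synthetic stand-in for labeled traffic traces: (packet-size sequence, site id)
rng = np.random.default_rng(0)
flows = [(rng.integers(-1500, 1500, size=rng.integers(20, 80)).tolist(), i % 5)
         for i in range(500)]

X = np.array([featurize(f) for f, _ in flows])
y = np.array([label for _, label in flows])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("fingerprinting accuracy:", clf.score(X_te, y_te))
```

    On real traces, padding-unprotected size sequences tend to be highly discriminative, which is precisely the leakage the abstract describes; the synthetic data here will only yield chance-level accuracy.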

    Intelligent systems for efficiency and security

    As computing becomes ubiquitous and personalized, resources like energy, storage, and time are becoming increasingly scarce, while computing systems must deliver along multiple dimensions, such as high performance, quality of service, reliability, security, and low power. Building such computers is hard, particularly as operating environments become more dynamic and systems become heterogeneous and distributed. Unfortunately, computers today manage resources with many ad hoc heuristics that are suboptimal, unsafe, and cannot be composed across the computer's subsystems. Continuing this approach has severe consequences: underperforming systems, resource waste, information loss, and even life endangerment. This dissertation develops computing systems that, through intelligent adaptation, deliver efficiency along multiple dimensions. The key idea is to manage computers with principled methods from formal control. With these methods, the multiple subsystems of a computer sense their environment and configure themselves to meet system-wide goals. To achieve this goal, the dissertation makes a series of contributions, each building on the previous. First, it introduces the use of formal MIMO (Multiple-Input Multiple-Output) control for processors, to simultaneously optimize many goals such as performance, power, and temperature. Second, it develops the Yukta control system, which uses coordinated formal controllers in different layers of the stack (hardware and operating system). Third, it uses robust control to develop Tangram, a fast, globally coordinated, and decentralized control framework for heterogeneous computers. Finally, it presents Maya, a defense against power side-channel attacks that uses formal control to reshape the power dissipated by a computer, confusing the attacker. The ideas in this dissertation have been demonstrated successfully in several prototypes, including one built together with AMD (Advanced Micro Devices, Inc.) engineers. These designs significantly outperformed the state of the art. The research has brought formal control closer to computer architecture and has been well received in both domains. It includes the first application of full-fledged MIMO control to processors, the first use of robust control in computer systems, and the first application of formal control to side-channel defense. It makes a significant stride towards intelligent systems that are efficient, secure, and reliable.
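    To make the MIMO idea concrete, here is a toy sketch that drives two processor knobs toward joint performance and power targets using a simple integral control law over an assumed linear plant model. The sensitivity matrix, gain, and targets are invented for illustration; the dissertation's actual controllers are formally designed and far more sophisticated.

```python
# Toy MIMO control sketch: two knobs, two goals, adjusted simultaneously.
# Assumed linear plant: y = B @ u, where u = [core freq, uncore freq] and
# y = [performance, power], all normalized to [0, 1]. Values are hypothetical.
import numpy as np

B = np.array([[0.8, 0.3],    # performance sensitivity to each knob (assumed)
              [0.5, 0.6]])   # power sensitivity to each knob (assumed)
K = 0.5 * np.linalg.inv(B)   # integral gain: invert the plant, damp by 0.5

goal = np.array([0.8, 0.7])  # targets: normalized performance and power
u = np.array([0.5, 0.5])     # initial knob settings

for step in range(20):
    y = B @ u                 # observe both outputs (noiseless toy plant)
    u = u + K @ (goal - y)    # integral update: correct both knobs at once
    u = np.clip(u, 0.0, 1.0)  # respect actuator limits

print("final knobs:", u, "outputs:", B @ u)
```

    The point of the MIMO formulation is visible even in this toy: each knob affects both outputs, so the controller must invert the coupled sensitivities rather than tune each goal with a separate heuristic.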

    Distributed services across the network from edge to core

    The current Internet architecture is evolving from a simple carrier of bits into a platform able to provide multiple complex services running across the entire Network Service Provider (NSP) infrastructure. This calls for increased flexibility in resource management and allocation to provide dedicated, on-demand network services, leveraging a distributed infrastructure of heterogeneous devices. More specifically, NSPs rely on a plethora of low-cost Customer Premise Equipment (CPE) devices, as well as more powerful appliances at the edge of the network and in dedicated data centers. Currently, a great deal of research effort is devoted to providing this flexibility through Fog computing, Network Functions Virtualization (NFV), and data-plane programmability. Fog (or Edge) computing extends compute and storage capabilities to the edge of the network, closer to the rapidly growing number of connected devices and applications that consume cloud services and generate massive amounts of data. A complementary technology is NFV, a network architecture concept targeting the execution of software Network Functions (NFs) in isolated Virtual Machines (VMs), potentially sharing a pool of general-purpose hosts rather than running on dedicated hardware (i.e., appliances). Such a solution enables virtual network appliances (i.e., VMs executing network functions) to be provisioned, allocated different amounts of resources, and possibly moved across data centers in little time, which is key to ensuring that the network can keep up with the flexible provisioning and deployment of virtual hosts in today's virtualized data centers. Moreover, recent advances in networking hardware have introduced new programmable network devices that can efficiently execute complex operations at line rate. As a result, NFs can be (partially or entirely) folded into the network, speeding up the execution of distributed services. The work described in this Ph.D. thesis aims to show how various network services can be deployed throughout the NSP infrastructure, accommodating the different hardware capabilities of various appliances, by applying and extending the above-mentioned solutions. First, we consider a data-center environment and the deployment of (virtualized) NFs. In this scenario, we introduce a novel methodology for modeling different NFs, aimed at estimating their performance on different execution platforms (sketched below). Moreover, we propose to extend the traditional NFV deployment outside the data center to leverage the entire NSP infrastructure. This can be achieved by integrating native NFs, commonly available in low-cost CPEs, with an existing NFV framework, which facilitates the provision of services that require NFs close to the end user (e.g., an IPsec terminator). Resource-hungry virtualized NFs, on the other hand, run in the NSP data center, where they can take advantage of superior computing and storage capabilities. As an application, we also present a novel technique that deploys a distributed service, specifically a web filter, so as to leverage both the low latency of a CPE and the computational power of a data center. We then show that the core network, today dedicated solely to packet routing, can also be exploited to provide useful services. In particular, we propose a novel method to provide distributed network services in core network devices by means of task distribution and seamless coordination among the peers involved. The aim is to transform existing network nodes (e.g., routers, switches, access points) into a highly distributed data acquisition and processing platform, which significantly reduces both the storage requirements at the Network Operations Center and the packet-duplication overhead. Finally, we propose to use new programmable network devices in data-center networks to provide much-needed services to distributed applications. By offloading part of the computation directly to the networking hardware, we show that it is possible to reduce both the network traffic and the overall job completion time.
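    As a hedged illustration of what per-platform NF performance estimation might look like, the sketch below models an NF's throughput as the minimum of a CPU-cycle budget and the NIC line rate. All per-packet costs and platform specifications are invented for illustration; they are not the thesis's measured models.

```python
# Illustrative cost model: estimate an NF's throughput on different platforms
# as min(CPU budget, NIC line rate). All numbers below are hypothetical.

PLATFORMS = {                 # clock (GHz), NIC line rate (Mpps)
    "low-cost CPE": {"ghz": 1.0, "nic_mpps": 1.5},
    "edge server":  {"ghz": 2.4, "nic_mpps": 10.0},
    "DC server":    {"ghz": 3.2, "nic_mpps": 30.0},
}

NF_CYCLES_PER_PKT = {         # hypothetical per-packet processing costs
    "firewall": 300,
    "IPsec terminator": 2500,
    "web filter": 1200,
}

def estimated_throughput_mpps(nf, platform):
    """Throughput is bounded by the CPU budget and by the NIC line rate."""
    spec = PLATFORMS[platform]
    cpu_bound = spec["ghz"] * 1e9 / NF_CYCLES_PER_PKT[nf] / 1e6  # Mpps
    return min(cpu_bound, spec["nic_mpps"])

for nf in NF_CYCLES_PER_PKT:
    for p in PLATFORMS:
        print(f"{nf:18s} on {p:12s}: {estimated_throughput_mpps(nf, p):6.2f} Mpps")
```

    Even this crude model captures the placement intuition from the abstract: a cheap CPE saturates quickly on a heavy NF such as an IPsec terminator, while the same NF leaves headroom on a data-center server.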

    On the malware detection problem: challenges and novel approaches

    Advisor: André Ricardo Abed Grégio. Co-advisor: Paulo Lício de Geus. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática, Curitiba. Includes references. Concentration area: Computer Science. Abstract: Malware is a major threat to most current computer systems, causing reputational damage and financial losses to individuals and corporations, and thus requiring the development of detection solutions that prevent malware from causing harm and allow computers to be used safely. Many initiatives and solutions to detect malware have been proposed over time, from AntiViruses (AVs) to sandboxes, but effective and efficient malware detection remains an open problem. Therefore, in this work, I investigate some of the challenges, pitfalls, and consequences of malware detection research, with the aim of increasing the detection capabilities of security solutions. More specifically, I propose a new approach for conducting malware research experiments in a practical yet still scientific manner, and I leverage this approach to investigate four issues: (i) the need to understand the context of infections to allow proper detection of localized threats; (ii) the need to develop better metrics for evaluating AV solutions; (iii) the feasibility of hardware-software collaboration for efficient AV implementation; and (iv) the need to predict future threats to allow faster incident response.
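    To illustrate issue (ii), the sketch below contrasts a static detection rate with a simple time-aware metric (mean detection delay). The data and the specific metric are illustrative assumptions, not the metrics proposed in the thesis.

```python
# Hedged sketch of a time-aware AV evaluation metric. A static detection rate
# hides how long an AV takes to start flagging new samples; here each sample
# records the day it appeared and the day (if ever) the AV began detecting it.
# All data below is synthetic, purely to illustrate the idea.
from statistics import mean

samples = [  # (day first seen in the wild, day first detected by the AV or None)
    (0, 0), (0, 3), (1, None), (2, 2), (3, 10), (5, 6), (5, None), (7, 8),
]

detected = [(seen, det) for seen, det in samples if det is not None]
detection_rate = len(detected) / len(samples)
mean_delay_days = mean(det - seen for seen, det in detected)

print(f"detection rate: {detection_rate:.0%}")              # what static tests report
print(f"mean detection delay: {mean_delay_days:.1f} days")  # what they hide
```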

    Cyber defensive capacity and capability: A perspective from the financial sector of a small state

    This thesis explores ways in which the financial sectors of small states can defend themselves against ever-growing cyber threats, as well as ways these states can improve their cyber defense capability in order to withstand current and future attacks. To date, the context of small states in general is understudied. This study presents the challenges that financial sectors in small states face in withstanding cyberattacks. It applies a mixed-methods approach comprising surveys, brainstorming sessions with financial-sector focus groups, interviews with critical infrastructure stakeholders, a literature review, a comparative analysis of secondary data, and a theoretical narrative review. The findings suggest that compliance is important for the Aruban financial sector, since precautionary behavior is significant even with minimal drivers. Countermeasures in the form of formal, informal, and technical controls need to be in place. The study suggests that defending a small state such as Aruba is challenging, yet economic indicators suggest it is not outside the realm of possibility. On a theoretical level, this thesis proposes a conceptual "whole-of-cyber" model inspired by military science and the Viable System Model (VSM). In this model, the components of fighting power together with the governance (S4) function form the shield and capability of cyber defensive capacity. The "whole-of-cyber" approach may be a good way to compensate for small states' lack of resources. Collaboration may be the only way out, as the fastest-growing need will be for advanced IT skill sets.