7 research outputs found

    Design and Analysis of a True Random Number Generator Based on GSR Signals for Body Sensor Networks

    This article belongs to the Section Internet of Things.
    Today, medical equipment or general-purpose devices such as smart-watches or smart-textiles can acquire a person's vital signs. Regardless of the type of device and its purpose, they are all equipped with one or more sensors and often have wireless connectivity. Because sensitive data is transmitted over an insecure radio channel and access must be restricted to authorised entities, security mechanisms and cryptographic primitives must be incorporated onboard these devices. Random number generators are one such necessary cryptographic primitive. Motivated by this, we propose a True Random Number Generator (TRNG) that makes use of the GSR (galvanic skin response) signal measured by a sensor on the body. After an exhaustive analysis of both the entropy source and the randomness of the output, we conclude that the output of the proposed TRNG behaves like that of a random variable. Moreover, the performance offered is much higher than that of earlier proposals.
    This work was supported by the Spanish Ministry of Economy and Competitiveness under contract ESP-2015-68245-C4-1-P, by MINECO grant TIN2016-79095-C2-2-R (SMOG-DEV), and by the Comunidad de Madrid (Spain) under project CYNAMON (P2018/TCS-4566), co-financed by European Structural Funds (ESF and FEDER). This research was also supported by the Interdisciplinary Research Funds (HTC, United Arab Emirates) under grant No. 103104.
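One generic way to turn noisy sensor samples into random bits, which the abstract's idea of harvesting entropy from a body signal can be illustrated with, is to keep each sample's least significant bit and debias the stream with a von Neumann extractor. This is a hedged sketch, not the paper's actual TRNG design; the sample values are invented.

```python
def lsb_stream(samples):
    """Yield the least significant bit of each integer sensor sample."""
    for s in samples:
        yield s & 1

def von_neumann(bits):
    """Debias a bit stream: map 01 -> 0, 10 -> 1, discard 00 and 11."""
    it = iter(bits)
    out = []
    for a, b in zip(it, it):  # consume the stream two bits at a time
        if a != b:
            out.append(a)
    return out

# Made-up ADC readings from a skin-conductance sensor.
samples = [512, 517, 509, 514, 511, 516, 510, 515]
print(von_neumann(lsb_stream(samples)))  # [0, 1, 1, 0]
```

The von Neumann step discards roughly half to three quarters of the raw bits, which is one reason real designs then analyze throughput as well as entropy.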

    ANCHOR: logically-centralized security for Software-Defined Networks

    While the centralization of SDN brought advantages such as a faster pace of innovation, it also disrupted some of the natural defenses of traditional architectures against different threats. The literature on SDN has mostly been concerned with the functional side, despite some specific works concerning non-functional properties like 'security' or 'dependability'. Though addressing the latter in an ad-hoc, piecemeal way may work, it will most likely lead to efficiency and effectiveness problems. We claim that the enforcement of non-functional properties as a pillar of SDN robustness calls for a systemic approach. As a general concept, we propose ANCHOR, a subsystem architecture that promotes the logical centralization of non-functional properties. To show the effectiveness of the concept, we focus on 'security' in this paper: we identify the current security gaps in SDNs and populate the architecture middleware with the appropriate security mechanisms, in a global and consistent manner. Essential security mechanisms provided by ANCHOR include reliable entropy and resilient pseudo-random generators, and protocols for secure registration and association of SDN devices. We claim and justify in the paper that centralizing such mechanisms is key to their effectiveness, as it allows us to: define and enforce global policies for those properties; reduce the complexity of controllers and forwarding devices; ensure higher levels of robustness for critical services; foster interoperability of the non-functional property enforcement mechanisms; and promote the security and resilience of the architecture itself. We discuss design and implementation aspects, and we prove and evaluate our algorithms and mechanisms, including the formalisation of the main protocols and the verification of their core security properties using the Tamarin prover.
    Comment: 42 pages, 4 figures, 3 tables, 5 algorithms, 139 references
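The secure registration of SDN devices mentioned in the abstract can be illustrated with a minimal HMAC challenge-response exchange. This is a hedged sketch under invented names (`ANCHOR_KEY`, `switch-01`), not ANCHOR's actual registration protocol: a forwarding device proves possession of a pre-shared key to the anchor.

```python
import hashlib
import hmac
import secrets

# Assumed to be provisioned out of band; not part of the paper.
ANCHOR_KEY = b"pre-shared-registration-key"

def anchor_challenge():
    """Anchor issues a fresh random nonce per registration attempt."""
    return secrets.token_bytes(16)

def device_response(key, challenge, device_id):
    """Device binds the nonce to its identity under the shared key."""
    return hmac.new(key, challenge + device_id, hashlib.sha256).digest()

def anchor_verify(key, challenge, device_id, response):
    """Anchor recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, challenge + device_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

chal = anchor_challenge()
resp = device_response(ANCHOR_KEY, chal, b"switch-01")
print(anchor_verify(ANCHOR_KEY, chal, b"switch-01", resp))  # True
```

The per-attempt nonce prevents replay, and `hmac.compare_digest` avoids the timing side channel of a plain `==` comparison, both standard concerns for the kind of association protocol the paper formalizes.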

    Short Paper: TLS Ecosystems in Networked Devices vs. Web Servers

    Recently, high-speed IPv4 scanners such as ZMap have enabled rapid and timely collection of TLS certificates and other security-sensitive parameters. Such large datasets led to the development of the Censys search interface, facilitating comprehensive analysis of TLS deployments in the wild. Several recent studies analyzed TLS certificates as deployed in web servers. Beyond public web servers, TLS is deployed in many other Internet-connected devices, in home and enterprise environments, and at network backbones. In this paper, we report the results of a preliminary analysis using Censys of TLS deployments in such devices (e.g., routers, modems, NAS, printers, SCADA, and IoT devices in general). From a security perspective, we compare the certificates and TLS connection parameters found in common devices with those of Alexa Top 1M sites. Our results highlight significant weaknesses and may serve as a catalyst to improve TLS security for these devices.
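The kind of per-host data such studies compare can be gathered with the Python standard library alone: the negotiated protocol version, the cipher suite, and basic leaf-certificate fields. This is a hedged sketch, not the paper's measurement pipeline; the hostname is illustrative and the network call is left commented out.

```python
import socket
import ssl

def tls_summary(host, port=443, timeout=5):
    """Connect once and report negotiated TLS parameters for one host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "protocol": tls.version(),   # e.g. "TLSv1.3"
                "cipher": tls.cipher()[0],   # negotiated cipher suite name
                "subject": cert.get("subject"),
                "not_after": cert.get("notAfter"),
            }

def is_legacy(protocol_version):
    """Flag protocol versions widely considered weak at scan time."""
    return protocol_version in {"SSLv2", "SSLv3", "TLSv1", "TLSv1.1"}

# print(tls_summary("example.com"))  # requires network access
print(is_legacy("TLSv1"))  # True
```

A study like the one summarized above would aggregate such records across many hosts and compare the distribution of legacy versions, key sizes, and certificate validity between device populations and top websites.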

    Postcards from the post-HTTP world: Amplification of HTTPS vulnerabilities in the web ecosystem

    HTTPS aims at securing communication over the Web by providing a cryptographic protection layer that ensures the confidentiality and integrity of communication and enables client/server authentication. However, HTTPS is based on the SSL/TLS protocol suites, which have been shown to be vulnerable to various attacks over the years. This has required fixes and mitigations in both servers and browsers, producing a complicated mixture of protocol versions and implementations in the wild, which makes it unclear which attacks are still effective on the modern Web and what their impact is on web application security. In this paper, we present the first systematic quantitative evaluation of web application insecurity due to cryptographic vulnerabilities. We specify attack conditions against TLS using attack trees, and we crawl the Alexa Top 10k to assess the impact of these issues on page integrity, authentication credentials, and web tracking. Our results show that the security of a considerable number of websites is severely harmed by cryptographic weaknesses that, in many cases, are due to external or related-domain hosts. This empirically, yet systematically, demonstrates how a relatively limited number of exploitable HTTPS vulnerabilities is amplified by the complexity of the web ecosystem.
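Attack trees of the kind the abstract mentions reduce to boolean AND/OR formulas over observed host conditions, which makes them easy to evaluate mechanically. The tree and condition names below are invented for illustration; they are not the paper's actual attack trees.

```python
def evaluate(tree, conditions):
    """Evaluate an attack tree against a dict of observed conditions.

    A tree is either a leaf condition name (str) or a tuple
    ("and" | "or", child, child, ...).
    """
    if isinstance(tree, str):
        return conditions.get(tree, False)
    op, *children = tree
    results = (evaluate(c, conditions) for c in children)
    return all(results) if op == "and" else any(results)

# Toy tree: a downgrade is feasible if the server still offers SSLv3
# AND either export ciphers or a known-weak key exchange is enabled.
downgrade = ("and", "offers_sslv3", ("or", "export_ciphers", "weak_kex"))

host = {"offers_sslv3": True, "export_ciphers": False, "weak_kex": True}
print(evaluate(downgrade, host))  # True
```

The amplification effect the paper measures would appear here as a host whose own conditions are safe but that inherits a true leaf (say, a vulnerable related-domain host serving one of its scripts) from elsewhere in its dependency graph.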

    A principled approach to measuring the IoT ecosystem

    Internet of Things (IoT) devices combine network connectivity, cheap hardware, and actuation to provide new ways to interface with the world. In spite of this growth, little work has been done to measure the network properties of IoT devices. Such measurements can help inform systems designers and security researchers about IoT networking behavior in practice and guide future research. Unfortunately, properly measuring the IoT ecosystem is not trivial. Devices may have different capabilities and behaviors, which require both active measurements and passive observation to quantify. Furthermore, the IoT devices connected to the public Internet may differ from those connected inside home networks, requiring both an external and an internal vantage point to draw measurements from. In this thesis, we demonstrate how IoT measurements drawn from a single vantage point or measurement technique lead to a biased view of the network services in the IoT ecosystem. To do this, we conduct several real-world IoT measurements, drawn from both inside and outside home networks using active and passive monitoring. First, we leverage active scanning and passive observation in understanding the Mirai botnet---chiefly, we report on the devices it infected, the command and control infrastructure behind the botnet, and how the malware evolved over time. We then conduct active measurements from inside 16M home networks spanning 83M devices in 11 geographic regions to survey the IoT devices installed around the world. We demonstrate how these measurements can uncover the device types that are most at risk and the vendors who manufacture the weakest devices. We compare our measurements with passive external observation by detecting compromised scanning behavior from smart homes. We find that while passive external observation can drive insight about compromised networks, it offers little by way of concrete device attribution.
    We next compare our results from active external scanning with active internal scanning and show how relying solely on external scanning for IoT measurements under-reports security-important IoT protocols, potentially skewing the services investigated by the security community. Finally, we conduct passive measurements of 275 smart home networks to investigate IoT behavior. We find that IoT device behavior varies by type and that devices regularly communicate over a myriad of bespoke ports, in many cases to speak standard protocols (e.g., HTTP). We also observe that devices regularly offer active services (e.g., Telnet, rpcbind) that are rarely, if ever, used in actual communication, demonstrating the need for both active and passive measurements to properly compare device capabilities and behaviors. Our results highlight the need for a confluence of measurement perspectives to comprehensively understand the IoT ecosystem. We conclude with recommendations for future measurements of IoT devices as well as directions for the systems and security community informed by our work.
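The active internal measurements described above boil down, at their simplest, to TCP connect probes against a small set of security-relevant ports. This hedged sketch shows that primitive only; the port list is illustrative and real studies add rate limiting, consent, and banner grabbing.

```python
import socket

# Illustrative ports: telnet, http(s), MQTT(S), Android Debug Bridge.
IOT_PORTS = [23, 80, 443, 1883, 8883, 5555]

def scan_host(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

# scan_host("192.168.1.10", IOT_PORTS)  # run only against hosts you own
```

An external scan of the same device behind NAT would typically report none of these ports open, which is exactly the internal-versus-external discrepancy the thesis argues biases external-only measurements.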

    Verifying the Behavior of Security-Impacting Programs

    Verifying the behavior of systems can be a critical tool for early identification of security vulnerabilities and attacks. Whether the system is a single program, a set of identical devices running the same program, or a population of heterogeneous devices running different ensembles of programs, analyzing behavior at scale enables early detection of security flaws and potential adversary behavior. In the first case of single-program systems, such as a cryptographic protocol implementation like OpenSSL, behavioral verification can detect attacks such as Heartbleed without pre-existing knowledge of the attack. The second case of multiple devices is typical of manufacturing settings, where thousands of identical devices can be running a critical program such as cryptographic key generation. Verifying statistical properties of the output (keys) across the full set of devices can detect potentially catastrophic entropy flaws before deployment. Finally, in enterprise incident response settings, there can be a large, heterogeneous population of employee laptops running different ensembles of programs. In this case, correct behavior is rarely well-defined and requires evaluation by human analysts, a severely limited resource. Monitoring both the internal and output behavior of laptops at scale enables machine learning to approximate the intuition of a human analyst across a large dataset, and thereby facilitates early detection of malware and host compromise.
    Doctor of Philosophy
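One concrete check of the kind the abstract describes for fleets of key-generating devices is to look for RSA moduli that share a prime factor, a classic symptom of weak entropy at key-generation time. This is a hedged sketch with tiny toy primes; production audits use batch-GCD over millions of real moduli rather than the pairwise loop shown here.

```python
from itertools import combinations
from math import gcd

def shared_factor_pairs(moduli):
    """Return (i, j, p) for every pair of moduli sharing a prime factor p."""
    hits = []
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        g = gcd(n1, n2)
        if 1 < g < n1:  # a nontrivial common factor breaks both keys
            hits.append((i, j, g))
    return hits

# Three toy "moduli": the first two share the prime 101, so both
# corresponding keys would be trivially factorable.
moduli = [101 * 103, 101 * 107, 109 * 113]
print(shared_factor_pairs(moduli))  # [(0, 1, 101)]
```

The point of running such a test across the full device set, rather than per device, is that each modulus looks fine in isolation; the entropy flaw only becomes visible in the population.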