
    Your Smart Home Can't Keep a Secret: Towards Automated Fingerprinting of IoT Traffic with Neural Networks

    Internet of Things (IoT) technology has been widely adopted in recent years and has profoundly changed people's daily lives. At the same time, this fast-growing technology has introduced new privacy issues that need to be better understood and measured. In this work, we look into how private information can leak from network traffic generated in a smart home network. Although researchers have proposed techniques to infer IoT device types or user behaviors under clean experimental setups, the effectiveness of such approaches becomes questionable in complex but realistic network environments, where common techniques like Network Address and Port Translation (NAPT) and Virtual Private Networks (VPNs) are enabled. Traffic analysis using traditional methods (e.g., classical machine-learning models) is much less effective in those settings, as manually selected features are no longer distinctive. We propose a traffic analysis framework based on sequence-learning techniques such as LSTM, leveraging the temporal relations between packets to identify devices. We evaluated it under different environment settings (e.g., a pure-IoT network and a noisy environment with multiple non-IoT devices), and the results show that our framework differentiates device types with high accuracy. This suggests that IoT network communications pose prominent challenges to users' privacy even when traffic is protected by encryption and morphed by the network gateway. New privacy protection methods for IoT traffic therefore need to be developed to mitigate this issue.
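
    As a rough illustration of the kind of model the abstract describes, the sketch below classifies a flow from its packet sequence with an LSTM. It is a minimal PyTorch sketch, not the authors' implementation; the feature set (packet size, direction, inter-arrival time), the class name, and all dimensions are assumptions.

    import torch
    import torch.nn as nn

    # Minimal sketch: each flow is a sequence of per-packet features, and
    # the LSTM's final hidden state is mapped to a device-type label.
    class DeviceFingerprinter(nn.Module):  # hypothetical name
        def __init__(self, n_features=3, hidden=64, n_devices=10):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_devices)

        def forward(self, packets):           # packets: (batch, seq_len, n_features)
            _, (h_n, _) = self.lstm(packets)  # h_n: (num_layers, batch, hidden)
            return self.head(h_n[-1])         # logits over device types

    # Example: classify a batch of 8 flows, each 50 packets long.
    model = DeviceFingerprinter()
    logits = model(torch.randn(8, 50, 3))
    predicted_device = logits.argmax(dim=1)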

    Short Paper: Blockcheck the Typechain

    Recent efforts have sought to design new smart contract programming languages that make writing blockchain programs safer. But programs on the blockchain are beholden only to the safety properties enforced by the blockchain itself: even the strictest language-only properties can be rendered moot on a language-oblivious blockchain due to inter-contract interactions. Consequently, while safer languages are a necessity, fully realizing their benefits necessitates a language-aware redesign of the blockchain itself. To this end, we propose that the blockchain be viewed as a typechain: a chain of typed programs, not arbitrary blocks, that are included iff they typecheck against the existing chain. Reaching consensus, or blockchecking, validates typechecking in a byzantine fault-tolerant manner. Safety properties traditionally enforced by a runtime are instead enforced by a type system with the aim of statically capturing smart contract correctness. To provide a robust level of safety, we contend that a typechain must minimally guarantee (1) asset linearity and liveness, (2) physical resource availability, including CPU and memory, (3) exceptionless execution, or no early termination, (4) protocol conformance, or adherence to some state machine, and (5) inter-contract safety, including reentrancy safety. Despite their exacting nature, typechains are extensible, allowing for rich libraries that extend the set of verified properties. We expand on typechain properties and present examples of real-world bugs they prevent.
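
    To make the inclusion rule concrete, here is a toy sketch of a typechain in Python: a block carries a typed program and is appended iff it typechecks against the chain so far. The checker and the single-node "consensus" are stand-ins invented for illustration; the paper's properties would be enforced by a real type system and byzantine fault-tolerant blockchecking.

    from dataclasses import dataclass, field

    @dataclass
    class Block:
        program: str                     # source of a typed smart contract

    @dataclass
    class Typechain:
        blocks: list = field(default_factory=list)

        def typechecks(self, block: Block) -> bool:
            # Stand-in for a checker enforcing the five minimal guarantees
            # (asset linearity/liveness, resource availability, exceptionless
            # execution, protocol conformance, inter-contract safety).
            return "unsafe" not in block.program

        def blockcheck(self, block: Block) -> bool:
            # In the paper this validation is reached via byzantine
            # fault-tolerant consensus; a single node decides here.
            if self.typechecks(block):
                self.blocks.append(block)
                return True
            return False

    chain = Typechain()
    assert chain.blockcheck(Block("transfer(asset, alice, bob)"))
    assert not chain.blockcheck(Block("unsafe reentrant call"))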

    Advanced Methods for Botnet Intrusion Detection Systems


    Confidential Machine Learning on Untrusted Platforms: a Survey

    With ever-growing data and the need to develop powerful machine learning models, data owners increasingly depend on various untrusted platforms (e.g., public clouds, edges, and machine learning service providers) for scalable processing or collaborative learning. Thus, sensitive data and models are in danger of unauthorized access, misuse, and privacy compromises. A relatively new body of research addresses these concerns by training machine learning models confidentially on protected data. In this survey, we summarize notable studies in this emerging area of research. With a unified framework, we highlight the critical challenges and innovations in outsourcing machine learning confidentially. We focus on the cryptographic approaches for confidential machine learning (CML), primarily on model training, while also covering other directions such as perturbation-based approaches and CML in hardware-assisted computing environments. The discussion takes a holistic view, considering the rich context of related threat models, security assumptions, design principles, and the associated trade-offs among data utility, cost, and confidentiality.
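
    As one concrete example of the cryptographic building blocks such surveys cover, the sketch below shows additive secret sharing: a data owner splits values between two non-colluding servers so that neither sees the plaintext, yet sums can still be computed on the shares. This is a minimal illustration of the general technique, not a scheme taken from the survey.

    import numpy as np

    MOD = 2**31
    rng = np.random.default_rng(0)

    def share(x):
        # Split x into two random-looking shares that sum to x mod MOD.
        r = rng.integers(0, MOD, size=x.shape)
        return (x - r) % MOD, r

    def reconstruct(s0, s1):
        return (s0 + s1) % MOD

    data = np.array([5, 12, 7])
    s0, s1 = share(data)                 # each server stores one share
    t0, t1 = share(np.array([1, 1, 1]))
    # Each server adds its shares locally; only the owner reconstructs.
    total = reconstruct((s0 + t0) % MOD, (s1 + t1) % MOD)
    assert (total == np.array([6, 13, 8])).all()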

    MooseGuard: secure file sharing at scale in untrusted environments

    Shared storage systems provide cheap, scalable, and reliable storage, but secure sharing in these systems requires users either to encrypt their data, which limits efficient sharing, or to trust a service provider to faithfully keep their data private. Current research has explored the use of trusted execution environments (TEEs) to operate on sensitive data and sharing policies in isolated execution. That work enables untrusted shared resources to store and share sensitive data while maintaining stronger security guarantees. However, these solutions have scaling limitations, as they bottleneck both metadata and data operations within the same physical TEE, whereas a scaled file system distributes metadata and data operations across separate devices. This paper explores the use of two TEEs, specialized for metadata and data operations respectively, to provide file sharing at scale with lower overhead in addition to strong security guarantees. This approach achieves scaled metadata handling and concurrent use through a server-side TEE for isolated execution on a master server, and provides data privacy and efficient access revocation through a client-side TEE. MooseGuard is the prototype implementation of this design, utilizing Intel SGX as the TEE and extending the MooseFS distributed file system. MooseGuard's implementation details the modifications needed to provide security and shows how this approach can be applied to a typical distributed file system. An evaluation of MooseGuard demonstrates that TEEs specialized for metadata and data operations allow a secured distributed file system to maintain its scale with only constant overheads. As TEEs and secure hardware become more widely available in public clouds, enterprises, and personal devices, MooseGuard presents a way for users to get the best of both worlds in data privacy and efficient sharing when using scaled, shared storage systems.
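
    The split between a policy-enforcing metadata service and client-side encryption can be sketched as below. This is a simplified illustration under assumed names and an assumed key-rotation revocation strategy, not MooseGuard's code; in the real system these roles run inside Intel SGX enclaves.

    from cryptography.fernet import Fernet

    class MetadataService:               # stands in for the server-side TEE
        def __init__(self):
            self.acl = {}                # path -> set of authorized users
            self.keys = {}               # path -> current per-file key

        def create(self, path, owner):
            self.acl[path] = {owner}
            self.keys[path] = Fernet.generate_key()

        def get_key(self, path, user):
            if user not in self.acl[path]:
                raise PermissionError(user)
            return self.keys[path]

        def revoke(self, path, user):
            self.acl[path].discard(user)
            self.keys[path] = Fernet.generate_key()  # rotate key on revocation

    meta = MetadataService()
    meta.create("/shared/report", owner="alice")
    meta.acl["/shared/report"].add("bob")

    # Client side (stands in for the client-side TEE): encrypt locally, then
    # store only ciphertext on the untrusted data servers.
    key = meta.get_key("/shared/report", "alice")
    ciphertext = Fernet(key).encrypt(b"q3 numbers")
    meta.revoke("/shared/report", "bob")  # bob cannot read future versions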