
    Differential Privacy for Industrial Internet of Things: Opportunities, Applications and Challenges

    The development of the Internet of Things (IoT) brings new changes to various fields. In particular, the industrial Internet of Things (IIoT) is promoting a new round of industrial revolution. As IIoT applications multiply, privacy protection issues are emerging. Specifically, common algorithms in IIoT such as deep models rely strongly on data collection, which creates a risk of privacy disclosure. Recently, differential privacy has been used to protect user-terminal privacy in IIoT, so in-depth research on this topic is necessary. In this paper, we conduct a comprehensive survey on the opportunities, applications and challenges of differential privacy in IIoT. We first review related work on IIoT and on privacy protection, respectively. We then focus on metrics for industrial data privacy and analyze the tension between data utilization for deep models and individual privacy protection. Several valuable open problems are summarized and new research directions are put forward. In conclusion, this survey aims to provide a comprehensive summary and lay a foundation for follow-up research on industrial differential privacy.
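
    As background for the primitive this survey builds on (an illustrative sketch, not the survey's own algorithm), the standard Laplace mechanism for ε-differential privacy can be applied to a hypothetical aggregate sensor query as follows; the readings, bounds, and ε below are assumed for the example.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-DP by adding Laplace(sensitivity / epsilon) noise."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical example: privatize the mean of bounded sensor readings in [0, 10].
readings = np.array([3.2, 2.9, 3.5, 3.1])
sensitivity = 10.0 / len(readings)   # replacing one reading shifts the mean by at most 10/n
private_mean = laplace_mechanism(readings.mean(), sensitivity, epsilon=0.5)
print(private_mean)
```

    The noise scale grows with the query's sensitivity and shrinks as the privacy budget ε is relaxed, which is the utility-versus-privacy trade-off the survey examines for deep models trained on industrial data.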

    Differential Privacy for Class-based Data: A Practical Gaussian Mechanism

    In this paper, we present a notion of differential privacy (DP) for data that comes from different classes, where class membership is private information that needs to be protected. The proposed method is an output perturbation mechanism that adds noise to released query responses such that an analyst is unable to infer the underlying class label. The proposed DP method not only protects the privacy of class-based data but also meets accuracy requirements and is computationally efficient and practical. We illustrate the efficacy of the proposed method empirically, outperforming the baseline additive Gaussian noise mechanism. We also examine a real-world application and apply the proposed DP method to autoregressive moving average (ARMA) forecasting, protecting the privacy of the underlying data source. Case studies on real-world advanced metering infrastructure (AMI) measurements of household power consumption validate the excellent performance of the proposed DP method while preserving the accuracy of the forecasted power consumption. (Comment: under review in IEEE Transactions on Information Forensics & Security.)
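
    For reference, the baseline additive Gaussian noise mechanism that the paper compares against can be sketched as below; the classical (ε, δ) calibration shown is the textbook one (valid for ε < 1), and the query, sensitivity, and parameters are assumed for illustration rather than taken from the paper.

```python
import numpy as np

def gaussian_mechanism(query_value, l2_sensitivity, epsilon, delta, rng=None):
    """Baseline (epsilon, delta)-DP output perturbation with calibrated Gaussian noise."""
    rng = rng or np.random.default_rng()
    # Classical calibration (for epsilon < 1): sigma >= sqrt(2 ln(1.25/delta)) * Delta_2 / epsilon.
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return query_value + rng.normal(0.0, sigma, size=np.shape(query_value))

# Hypothetical example: privatize an hourly household-load average bounded in [0, 5] kW.
loads = np.array([1.2, 0.8, 2.4, 1.1])
noisy_avg = gaussian_mechanism(loads.mean(), l2_sensitivity=5.0 / len(loads),
                               epsilon=0.9, delta=1e-5)
```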

    On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks

    Non-intrusive Load Monitoring (NILM) algorithms, commonly referred to as load disaggregation algorithms, are fundamental tools for effective energy management. Despite the success of deep models in load disaggregation, they face various challenges, particularly concerning privacy and security. This paper investigates the sensitivity of prominent deep NILM baselines to adversarial attacks, which have proven to be a significant threat in domains such as computer vision and speech recognition. Adversarial attacks introduce imperceptible noise into the input data with the aim of misleading the neural network into producing erroneous outputs. We use the Fast Gradient Sign Method (FGSM), a well-known adversarial attack, to perturb the input sequences fed into two commonly employed CNN-based NILM baselines: the Sequence-to-Sequence (S2S) and Sequence-to-Point (S2P) models. Our findings provide compelling evidence of the vulnerability of these models, particularly the S2P model, which exhibits an average decline of 20% in F1-score even with small amounts of noise. Such vulnerability can have profound implications for energy management systems in residential and industrial sectors that rely on NILM models.
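
    A minimal FGSM sketch in PyTorch, assuming a generic trained NILM regression model and loss; the S2S/S2P architectures, data pipeline, and the noise magnitude ε used below are placeholders, not the paper's exact setup.

```python
import torch

def fgsm_perturb(model, x, y, loss_fn, epsilon):
    """One-step FGSM: shift each input by epsilon in the direction of the sign
    of the loss gradient with respect to the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical usage with a trained seq2point-style disaggregator:
# x_adv = fgsm_perturb(nilm_model, mains_windows, appliance_power,
#                      torch.nn.MSELoss(), epsilon=0.01)
```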

    Private Graph Data Release: A Survey

    The application of graph analytics to various domains has yielded tremendous societal and economic benefits in recent years. However, the increasingly widespread adoption of graph analytics comes with a commensurate increase in the need to protect private information in graph databases, especially in light of the many privacy breaches of real-world graph data that was supposed to protect sensitive information. This paper provides a comprehensive survey of private graph data release algorithms that seek to strike a fine balance between privacy and utility, with a specific focus on provably private mechanisms. Many of these mechanisms are natural extensions of the Differential Privacy framework to graph data, but we also investigate more general privacy formulations such as Pufferfish Privacy that can address the limitations of Differential Privacy. A wide-ranging survey of the applications of private graph data release mechanisms to social networks, finance, supply chains, health and energy is also provided. This survey and the taxonomy it provides should benefit practitioners and researchers alike in the increasingly important area of private graph data release and analysis.
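
    As one concrete instance of extending differential privacy to graph data (an illustrative sketch, not a specific mechanism from the survey), an edge-differentially-private degree histogram can be released with Laplace noise; the sensitivity bound is explained in the comment, and the example graph and ε are assumed.

```python
import numpy as np

def private_degree_histogram(degrees, max_degree, epsilon, rng=None):
    """Edge-DP degree histogram via the Laplace mechanism. Adding or removing one
    edge changes the degree of two nodes by one each, so at most four histogram
    counts change by one: the L1 sensitivity is 4."""
    rng = rng or np.random.default_rng()
    degrees = np.clip(np.asarray(degrees, dtype=int), 0, max_degree)
    hist = np.bincount(degrees, minlength=max_degree + 1)
    noisy = hist + rng.laplace(scale=4.0 / epsilon, size=hist.shape)
    return np.maximum(noisy, 0.0)   # post-processing; does not affect the DP guarantee

# Hypothetical example: degrees of a small social graph, epsilon = 1.
print(private_degree_histogram([2, 3, 3, 1, 4, 2], max_degree=10, epsilon=1.0))
```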

    Secure Data Collection and Analysis in Smart Health Monitoring

    Smart health monitoring uses real-time monitored data to support diagnosis, treatment, and health decision-making in modern smart healthcare systems and benefits our daily life. Accurate health monitoring and prompt transmission of health data are facilitated by ever-evolving on-body sensors, wireless communication technologies, and wireless sensing techniques. Although users have witnessed the convenience of smart health monitoring, severe privacy and security concerns over the valuable and sensitive collected data come along with this merit. Data collection, transmission, and analysis are vulnerable to various attacks, e.g., eavesdropping, due to the open nature of wireless media, the resource constraints of sensing devices, and the lack of security protocols. These deficiencies not only make conventional cryptographic methods inapplicable in smart health monitoring but also put many obstacles in the path of designing privacy protection mechanisms. In this dissertation, we design dedicated schemes to achieve secure data collection and analysis in smart health monitoring. The first two works propose two robust and secure authentication schemes based on the electrocardiogram (ECG), which outperform traditional user identity authentication schemes in health monitoring, to restrict access to collected data to legitimate users. To improve the practicality of ECG-based authentication, we address the nonuniformity and sensitivity of ECG signals, as well as noise contamination. The next work investigates an extended authentication goal, denoted wearable-user pair authentication, which simultaneously authenticates the user identity and the device identity to provide further protection. We exploit the uniqueness of the interference between different wireless protocols, which is common in health monitoring due to devices' varying sensing and transmission demands, and design a wearable-user pair authentication scheme based on this interference. However, the harm of this interference is also significant. Thus, in the fourth work, we use wireless human activity recognition in health monitoring as an example and analyze how this interference may jeopardize it. We identify a new attack that can produce false recognition results and discuss potential countermeasures against this attack. Finally, we move to a broader scenario and protect the statistics of distributed data reported in mobile crowd sensing, a common practice used in public health monitoring for data collection. We deploy differential privacy to make workers' locations and sensing data indistinguishable without the help of a trusted entity while meeting the accuracy demands of crowd sensing tasks.
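
    The crowd-sensing part of the dissertation applies differential privacy without a trusted entity; a minimal local-DP sketch of worker-side perturbation is given below, with the value range, ε, and clipping chosen purely for illustration (the dissertation's actual scheme for locations and sensing data is not reproduced here).

```python
import numpy as np

def local_dp_report(value, value_range, epsilon, rng=None):
    """Worker-side Laplace perturbation: each worker randomizes its own reading
    before upload, so no trusted aggregator is needed."""
    rng = rng or np.random.default_rng()
    low, high = value_range
    noisy = value + rng.laplace(scale=(high - low) / epsilon)
    return float(np.clip(noisy, low, high))  # clipping is post-processing, utility only

# Hypothetical example: a worker reports a noise-level reading in [30, 90] dB.
report = local_dp_report(62.0, value_range=(30.0, 90.0), epsilon=1.0)
```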

    Visible Light Communication Cyber Security Vulnerabilities For Indoor And Outdoor Vehicle-To-Vehicle Communication

    Light fidelity (Li-Fi), developed from the approach of visible light communication (VLC), is a strong replacement for or complement to existing radio frequency (RF) networks. Li-Fi is expected to be deployed in various environments where, due to Wi-Fi congestion and health limitations, RF should not be used. Moreover, VLC can provide future fifth-generation (5G) wireless technology with higher data rates for device connectivity, which will alleviate traffic demand. 5G plays a vital role in enabling modern applications. By 2023, cellular network deployments are expected to reach more than 5 billion users globally. As a result, the security and privacy of 5G wireless networks is an essential problem, since these modern applications pervade people's lives. VLC security is regarded as one of the core physical-layer security (PLS) solutions for 5G networks. Because light does not penetrate solid objects or walls, VLC naturally offers higher security and privacy for indoor wireless networks compared to RF networks. However, the broadcast nature of VLC raises concerns, e.g., eavesdropping, which demand serious attention, as addressing them is a crucial step to validate the success of VLC in the wild. The aim of this thesis is to properly address the security issues of VLC and further enhance its inherent security. We analyzed the secrecy performance of a VLC model by studying the characteristics of the transmitter, the receiver, and the visible light channel. Moreover, we mitigated the security threats in the VLC model for the legitimate user by 1) implementing multiple cooperating access points (APs) in a multiuser VLC network, 2) reducing the semi-angle of the LED to improve directivity and secrecy, and 3) using a protected-zone strategy around the AP from which eavesdroppers are excluded. According to the model's parameters, the results showed that the secrecy performance of the proposed indoor VLC model and of the vehicle-to-vehicle (V2V) VLC outdoor model is enhanced by a combination of multiple PLS techniques such as beamforming, secure communication zones, and friendly jamming. The security performance of the proposed model was measured in Matlab simulations with respect to the signal-to-noise ratio (SNR), received optical power, and bit error rate (BER).
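
    For intuition on the secrecy metrics discussed above, the classical Gaussian wiretap-channel secrecy capacity can be computed as follows; this is a generic information-theoretic sketch with assumed SNR values, and exact VLC secrecy rates differ because of intensity-modulation amplitude constraints.

```python
import numpy as np

def secrecy_capacity(snr_legit_db, snr_eaves_db):
    """Gaussian wiretap-channel secrecy capacity (bits/s/Hz): the legitimate
    receiver's rate advantage over the eavesdropper, floored at zero."""
    snr_b = 10 ** (snr_legit_db / 10)
    snr_e = 10 ** (snr_eaves_db / 10)
    return max(0.0, np.log2(1 + snr_b) - np.log2(1 + snr_e))

# Hypothetical SNRs: narrowing the LED semi-angle raises the legitimate user's SNR
# relative to an off-axis eavesdropper, widening the secrecy gap.
print(secrecy_capacity(snr_legit_db=20, snr_eaves_db=10))   # ~3.2 bits/s/Hz
```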