Systematizing Genome Privacy Research: A Privacy-Enhancing Technologies Perspective
Rapid advances in human genomics are enabling researchers to gain a better
understanding of the role of the genome in our health and well-being,
stimulating hope for more effective and cost-efficient healthcare. However,
this also prompts a number of security and privacy concerns stemming from the
distinctive characteristics of genomic data. To address them, a new research
community has emerged and produced a large number of publications and
initiatives.
In this paper, we rely on a structured methodology to contextualize and
provide a critical analysis of the current knowledge on privacy-enhancing
technologies used for testing, storing, and sharing genomic data, using a
representative sample of the work published in the past decade. We identify and
discuss limitations, technical challenges, and issues faced by the community,
focusing in particular on those that are inherently tied to the nature of the
problem and are harder for the community alone to address. Finally, we report
on the importance and difficulty of the identified challenges based on an
online survey of genome data privacy experts.
Comment: To appear in the Proceedings on Privacy Enhancing Technologies
(PoPETs), Vol. 2019, Issue
Crowdfunding Non-fungible Tokens on the Blockchain
Non-fungible tokens (NFTs) have been used as a way of rewarding content creators. Artists publish their works on the blockchain as NFTs, which they can then sell. The buyer of an NFT then holds ownership of a unique digital asset, which can be resold in much the same way that real-world art collectors might trade paintings. However, while a great deal of effort has been spent on selling works of art on the blockchain, very little attention has been paid to using the blockchain as a means of fundraising to help finance the artist's work in the first place. Additionally, while blockchains like Ethereum are well suited to smaller works of art, additional support is needed when the artwork is larger than is feasible to store on the blockchain. In this paper, we propose a fundraising mechanism that helps artists gain financial support for their initiatives, in which backers receive a share of the profits in exchange for their support. We discuss our prototype implementation using the SpartanGold framework. We then discuss how this system could be expanded to support large NFTs with the 0Chain blockchain, and describe how we could provide support for ongoing storage of these NFTs.
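The pro-rata profit-sharing idea in this abstract can be sketched as follows. This is an illustrative toy only, not the paper's SpartanGold implementation; the class and method names (`ArtFundraiser`, `pledge`, `distribute`) are hypothetical, and no blockchain mechanics are modeled.

```python
# Hypothetical sketch of a revenue-sharing fundraiser ledger: backers
# pledge toward a goal, then split any later sale profit in proportion
# to their stake. Names and API are invented for illustration.

class ArtFundraiser:
    def __init__(self, goal):
        self.goal = goal      # amount the artist needs to raise
        self.pledges = {}     # backer -> total amount contributed

    def pledge(self, backer, amount):
        self.pledges[backer] = self.pledges.get(backer, 0) + amount

    @property
    def funded(self):
        return sum(self.pledges.values()) >= self.goal

    def distribute(self, profit):
        """Split sale profit among backers pro rata to their stake."""
        total = sum(self.pledges.values())
        return {b: profit * amt / total for b, amt in self.pledges.items()}

f = ArtFundraiser(goal=100)
f.pledge("alice", 75)
f.pledge("bob", 25)
assert f.funded
shares = f.distribute(200)  # artwork later sells at a 200-unit profit
# shares == {"alice": 150.0, "bob": 50.0}
```

On a real blockchain this ledger would live in a smart contract, with pledges arriving as transactions and the distribution triggered by a resale event.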
Fake Malware Generation Using HMM and GAN
In the past decade, the number of malware attacks has grown considerably and, more importantly, the attacks have evolved. Many researchers have successfully integrated state-of-the-art machine learning techniques to combat this ever-present and rising threat to information security. However, the lack of sufficient data to properly train these machine learning models remains a major challenge. Generative modelling has proven very efficient at producing image-like synthesized data that match the actual data distribution. In this paper, we generate malware samples as opcode sequences and attempt to differentiate them from real ones, with the goal of building fake malware data that can be used to effectively train machine learning models. We use and compare different Generative Adversarial Network (GAN) algorithms and Hidden Markov Models (HMMs) to generate such fake samples, obtaining promising results.
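The generation step this abstract describes can be illustrated with a deliberately simplified stand-in: a first-order Markov chain over opcodes, rather than the paper's HMM or GAN models. The opcode sequences below are made up for the example.

```python
# Simplified stand-in for the paper's HMM/GAN generators: learn opcode
# transition counts from "real" sequences, then sample synthetic ones
# that follow the same transition structure.
import random
from collections import defaultdict

def train(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, start, length, rng):
    seq, cur = [start], start
    for _ in range(length - 1):
        nxt = counts.get(cur)
        if not nxt:           # no outgoing transitions: stop early
            break
        ops, weights = zip(*nxt.items())
        cur = rng.choices(ops, weights=weights)[0]
        seq.append(cur)
    return seq

# Toy "real" opcode sequences, invented for illustration.
real = [["mov", "push", "call", "pop", "ret"],
        ["mov", "call", "pop", "ret"]]
model = train(real)
fake = sample(model, "mov", 5, random.Random(0))
# every adjacent pair in `fake` is a transition seen in the real data
```

A full HMM adds hidden states on top of this, and a GAN instead trains a generator against a discriminator; both aim at the same goal shown here, synthetic sequences that mimic the real distribution.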
Artificial Intelligence Security
Thesis (Ph.D.) -- Seoul National University Graduate School: Interdisciplinary Program in Bioinformatics, College of Natural Sciences, February 2021. Advisor: Sungroh Yoon.
With the development of machine learning (ML), expectations for artificial intelligence (AI) technologies have increased daily. In particular, deep neural networks have demonstrated outstanding performance in many fields. However, if a deep-learning (DL) model causes mispredictions or misclassifications, it can cause serious harm under malicious external influences.
This dissertation discusses DL security and privacy issues and proposes methodologies for security and privacy attacks. First, we reviewed security attacks and defenses from two aspects: evasion attacks use adversarial examples to disrupt the classification process, and poisoning attacks corrupt the training data to compromise the trained model. Next, we reviewed attacks on privacy that exploit exposed training data, along with defenses including differential privacy and encryption.
For adversarial DL, we study the problem of finding adversarial examples against ML-based portable document format (PDF) malware classifiers. We believe that our problem is more challenging than attacking ML models for image processing, owing to the highly complex data structure of PDFs compared with traditional image datasets, and to the requirement that the infected PDF exhibit malicious behavior without being detected. We propose an attack using generative adversarial networks that effectively generates evasive PDFs using a variational autoencoder robust against adversarial examples.
For privacy in DL, we study the problem of preventing sensitive data from being misused and propose a privacy-preserving framework for deep neural networks. Our methods are based on generative models that preserve the privacy of sensitive data while maintaining high prediction performance. Finally, we study the security aspect in biological domains: detecting maliciousness in deoxyribonucleic acid sequences and watermarking to protect intellectual property.
In summary, the proposed DL models for security and privacy embrace a diversity of research by attempting actual attacks and defenses in various fields.
Using AI models requires collecting data from individuals, but if sensitive personal data are leaked, privacy can be violated. The field that applies security techniques to AI models, such as preventing collected data from leaking, anonymization, and encryption, can be classified as Private AI. Threats against the AI model itself, such as the loss of intellectual property rights when a model is exposed, or the malfunctioning of an AI system induced by malicious training data, can be classified as Secure AI.
This dissertation demonstrates flaws of neural networks through attacks based on training data. Prior work on adversarial examples has largely targeted images; we extend this research to more complex, heterogeneous PDF data and propose a generative model that produces attack samples. Next, we propose a DNA steganalysis defense model that can detect samples exhibiting abnormal patterns. Finally, we propose generative-model-based anonymization techniques for privacy protection.
In summary, this dissertation proposes attack and defense algorithms that leverage AI models, together with a series of machine-learning-based methodologies for resolving the privacy issues that arise when neural networks are deployed.
Abstract
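The generative anonymization idea described in the abstract above, synthetic data that hide individual records while keeping a classifier usable, can be sketched with a deliberately crude toy. This is not the dissertation's GAN-based model; the centroid-plus-noise "generator" and all names here are invented for illustration.

```python
# Toy illustration of label-preserving anonymization: replace each
# sensitive record with a synthetic surrogate drawn near its class
# centroid, so downstream training data reveal no real record.
import random

def centroids(data):
    # data: label -> list of feature vectors
    out = {}
    for label, rows in data.items():
        dim = len(rows[0])
        out[label] = [sum(r[i] for r in rows) / len(rows) for i in range(dim)]
    return out

def anonymize(label, cents, rng, noise=0.1):
    # surrogate = class centroid perturbed by bounded random noise
    return [x + rng.uniform(-noise, noise) for x in cents[label]]

# Made-up two-class dataset of 2-D feature vectors.
data = {"healthy": [[0.1, 0.2], [0.2, 0.1]],
        "sick":    [[0.9, 0.8], [0.8, 0.9]]}
cents = centroids(data)
surrogate = anonymize("sick", cents, random.Random(0))
# surrogate lies within 0.1 of the "sick" centroid but equals no real record
```

A trained generative model plays the role of the centroid-plus-noise step here, producing far richer surrogates while still decoupling the released data from any individual.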
List of Figures
List of Tables
1 Introduction
2 Background
2.1 Deep Learning: a brief overview
2.2 Security Attacks on Deep Learning Models
2.2.1 Evasion Attacks
2.2.2 Poisoning Attacks
2.3 Defense Techniques Against Deep Learning Models
2.3.1 Defense Techniques against Evasion Attacks
2.3.2 Defenses against Poisoning Attacks
2.4 Privacy Issues on Deep Learning Models
2.4.1 Attacks on Privacy
2.4.2 Defenses Against Attacks on Privacy
3 Attacks on Deep Learning Models
3.1 Background
3.1.1 Threat Model
3.1.2 Portable Document Format (PDF)
3.1.3 PDF Malware Classifiers
3.1.4 Evasion Attacks
3.2 Methods
3.2.1 Feature Extraction
3.2.2 Feature Selection Process
3.2.3 Seed Selection for Mutation
3.2.4 Evading Model
3.2.5 Model Architecture
3.2.6 PDF Repacking and Verification
3.3 Results
3.3.1 Datasets and Model Training
3.3.2 Target Classifiers
3.3.3 CVEs for Various Types of PDF Malware
3.3.4 Malicious Signature
3.3.5 AntiVirus Engines (VirusTotal)
3.3.6 Feature Mutation Result for Contagio
3.3.7 Feature Mutation Result for CVEs
3.3.8 Malicious Signature Verification
3.3.9 Evasion Speed
3.3.10 AntiVirus Engines (VirusTotal) Result
3.4 Discussion
4 Defense on Deep Learning Models
4.1 Background
4.1.1 Message-Hiding Regions
4.1.2 DNA Steganography
4.1.3 Example of Message Hiding
4.1.4 DNA Steganalysis
4.2 Methods
4.2.1 Notations
4.2.2 Proposed Model Architecture
4.3 Results
4.3.1 Experiment Setup
4.3.2 Environment
4.3.3 Dataset
4.3.4 Model Training
4.3.5 Message Hiding Procedure
4.3.6 Evaluation Procedure
4.3.7 Performance Comparison
4.3.8 Analyzing Malicious Code in DNA Sequences
4.4 Discussion
5 Privacy: Generative Models for Anonymizing Private Data
5.1 Methods
5.1.1 Notations
5.1.2 Anonymization using GANs
5.1.3 Security Principle of Anonymized GANs
5.2 Results
5.2.1 Datasets
5.2.2 Target Classifiers
5.2.3 Model Training
5.2.4 Evaluation Process
5.2.5 Comparison to Differential Privacy
5.2.6 Performance Comparison
5.3 Discussion
6 Privacy: Privacy-preserving Inference for Deep Learning Models
6.1 Methods
6.1.1 Motivation
6.1.2 Scenario
6.1.3 Deep Private Generation Framework
6.1.4 Security Principle
6.1.5 Threat to the Classifier
6.2 Results
6.2.1 Datasets
6.2.2 Experimental Process
6.2.3 Target Classifiers
6.2.4 Model Training
6.2.5 Model Evaluation
6.2.6 Performance Comparison
7 Conclusion
7.0.1 Limitations
7.0.2 Future Work
Bibliography
Abstract in Korean
Security and Privacy for Modern Wireless Communication Systems
This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues in modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretical foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node-edge-cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; and security and privacy design for underwater communication networks.
Revealing the Landscape of Privacy-Enhancing Technologies in the Context of Data Markets for the IoT: A Systematic Literature Review
IoT data markets in public and private institutions have become increasingly
relevant in recent years because of their potential to improve data
availability and unlock new business models. However, exchanging data in
markets bears considerable challenges related to disclosing sensitive
information. Despite considerable research focused on different aspects of
privacy-enhancing data markets for the IoT, none of the solutions proposed so
far seems to find a practical adoption. Thus, this study aims to organize the
state-of-the-art solutions, analyze and scope the technologies that have been
suggested in this context, and structure the remaining challenges to determine
areas where future research is required. To accomplish this goal, we conducted
a systematic literature review on privacy enhancement in data markets for the
IoT, covering 50 publications dated up to July 2020, and provided updates with
24 publications dated up to May 2022. Our results indicate that most research
in this area has emerged only recently, and no IoT data market architecture has
established itself as canonical. Existing solutions frequently lack the
required combination of anonymization and secure computation technologies.
Furthermore, there is no consensus on the appropriate use of blockchain
technology for IoT data markets and a low degree of leveraging existing
libraries or reusing generic data market architectures. We also identified
significant remaining challenges, such as the copy problem and the recursive
enforcement problem, which, while solutions have been suggested to some
extent, are often not sufficiently addressed in proposed designs. We conclude
that privacy-enhancing technologies need further improvements to positively
impact data markets so that, ultimately, the value of data is preserved
through data scarcity, and both users' privacy and business-critical
information are protected.
Comment: 49 pages, 17 figures, 11 tables
Cloud technology options towards Free Flow of Data
This whitepaper collects the technology solutions that the projects in the Data Protection, Security and Privacy Cluster propose to address the challenges raised by the working areas of the Free Flow of Data initiative. It describes the technologies, methodologies, models, and tools researched and developed by the clustered projects, mapped to the ten areas of work of the Free Flow of Data initiative. The aim is to facilitate identification of the state of the art in technology options for solving the data security and privacy challenges posed by the initiative in Europe. The document gives references to the Cluster, the individual projects, and the technologies they have produced.
Internet of Things Strategic Research Roadmap
The Internet of Things (IoT) is an integrated part of the Future Internet, including existing and evolving Internet and network developments. It can be conceptually defined as a dynamic global network infrastructure with self-configuring capabilities, based on standard and interoperable communication protocols, in which physical and virtual "things" have identities, physical attributes, and virtual personalities, use intelligent interfaces, and are seamlessly integrated into the information network.
- …