The medical science DMZ: a network design pattern for data-intensive medical science
Abstract:
Objective
We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations.
Materials and Methods
High-end networking, packet-filter firewalls, and network intrusion-detection systems.
Results
We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs.
Discussion
The exponentially increasing amounts of “omics” data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research “Big Data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows.
Conclusion
By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.
Preserving Both Privacy and Utility in Network Trace Anonymization
As network security monitoring grows more sophisticated, there is an
increasing need for outsourcing such tasks to third-party analysts. However,
organizations are usually reluctant to share their network traces due to
privacy concerns over sensitive information, e.g., network and system
configuration, which may potentially be exploited for attacks. In cases where
data owners are convinced to share their network traces, the data are typically
subjected to certain anonymization techniques, e.g., CryptoPAn, which replaces
real IP addresses with prefix-preserving pseudonyms. However, most such
techniques either are vulnerable to adversaries with prior knowledge about some
network flows in the traces, or require heavy data sanitization or
perturbation, both of which may result in a significant loss of data utility.
In this paper, we aim to preserve both privacy and utility by shifting the
trade-off away from privacy versus utility and toward privacy versus
computational cost. The key idea is for the analysts to generate and analyze multiple
anonymized views of the original network traces; those views are designed to be
sufficiently indistinguishable even to adversaries armed with prior knowledge,
which preserves the privacy, whereas one of the views will yield true analysis
results privately retrieved by the data owner, which preserves the utility. We
present the general approach and instantiate it based on CryptoPAn. We formally
analyze the privacy of our solution and experimentally evaluate it using real
network traces provided by a major ISP. The results show that our approach can
significantly reduce the level of information leakage (e.g., less than 1% of
the information leaked by CryptoPAn) with comparable utility.
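The prefix-preserving property at the heart of CryptoPAn can be sketched in a few lines. The following is an illustrative toy, not the actual AES-based CryptoPAn cipher (it substitutes HMAC as the pseudo-random function, and the function name is hypothetical): for each bit of the address, a key-dependent pseudo-random bit derived from the preceding prefix decides whether to flip it, so two addresses sharing a k-bit prefix map to pseudonyms sharing a k-bit prefix.

```python
import hmac
import hashlib
import ipaddress

def prefix_preserving_anon(ip: str, key: bytes) -> str:
    """Toy prefix-preserving pseudonymization in the spirit of CryptoPAn.

    Two addresses sharing a k-bit prefix yield pseudonyms sharing a
    k-bit prefix, because the flip decision for bit i depends only on
    the key and the first i bits already consumed.
    """
    bits = format(int(ipaddress.IPv4Address(ip)), "032b")
    out = []
    for i in range(32):
        prefix = bits[:i]  # the bits already processed
        # one key-dependent pseudo-random bit derived from the prefix
        digest = hmac.new(key, prefix.encode(), hashlib.sha256).digest()
        flip = digest[0] & 1
        out.append(str(int(bits[i]) ^ flip))
    return str(ipaddress.IPv4Address(int("".join(out), 2)))
```

Such a scheme inherits exactly the weakness the abstract describes: an adversary with prior knowledge of some real flows can partially invert the prefix mapping, which is the gap the multi-view approach is designed to close.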
Dynamic Information Flow Analysis in Ruby
With the rapid increase in use of the internet and online applications, there is a huge demand for applications to handle data privacy and integrity. Applications are already complex with business logic; adding data-safety logic would make them more complicated still. The more complex the code becomes, the more opportunities it opens for security-critical bugs. To resolve this conundrum, we can push data-safety handling to the language level rather than the application level. With a secure language, developers can write their applications without having to worry about data security.
This project introduces dynamic information flow analysis in Ruby. I extend the JRuby implementation, a widely used implementation of Ruby written in Java. Information flow analysis classifies the variables used in a program into different security levels and monitors data flow across those levels. Ruby currently supports data integrity through a tainting mechanism. This project extends that tainting mechanism to handle implicit data flows, enabling it to protect confidentiality as well as integrity. Experimental results based on Ruby benchmarks are presented in this paper, showing that the extension protects confidentiality at the cost of a 1.2-10x slowdown in execution time.
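The taint-propagation idea generalizes beyond Ruby. Below is a minimal Python sketch (all names hypothetical, far simpler than the JRuby extension) of the two kinds of flow involved: explicit flows, where taint travels through operations on data, and implicit flows, where it travels through control dependence via a program-counter label, which is what the extension adds on top of Ruby's built-in tainting.

```python
class Tainted:
    """Value wrapper carrying a confidentiality taint bit (illustrative)."""
    def __init__(self, value, tainted=True):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        # explicit flow: taint propagates through operations on the data
        ov, ot = (other.value, other.tainted) if isinstance(other, Tainted) else (other, False)
        return Tainted(self.value + ov, self.tainted or ot)

pc = [False]  # program-counter label stack, for implicit flows

def branch_on(cond):
    # entering a branch whose condition is tainted taints the context
    pc.append(pc[-1] or (isinstance(cond, Tainted) and cond.tainted))

def end_branch():
    pc.pop()

def assign(value):
    # any write performed under a tainted branch is itself tainted,
    # even though no tainted data touches the value directly
    return Tainted(value, pc[-1])

def to_public_sink(x):
    # confidentiality check: tainted data must not reach public sinks
    if isinstance(x, Tainted) and x.tainted:
        raise PermissionError("tainted value reached a public sink")
    return x.value if isinstance(x, Tainted) else x
```

The `pc` stack is the piece Ruby's plain tainting lacks: without it, a program can launder a secret through an `if` on the secret, which is precisely the implicit-flow leak described above.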
Privacy Preserving Attribute Based Encryption for Multiple Cloud Collaborative Environment
In a Multiple Cloud Collaborative Environment (MCCE), cloud users and cloud providers interact with each other via a brokering service to request and provision cloud services. The brokering service considers several pieces of data to broker the best deal between users and providers, which can subsequently put the privacy and security of the MCCE at risk. In this paper, we propose a Privacy Preserving Attribute-Based Encryption (PP-ABE) scheme which protects the MCCE from a compromised broker. The proposed encryption scheme preserves privacy by enforcing a data access policy over sets of attributes. The identifying attributes are anonymized using pseudonyms. The data access policy is further anonymized so that it remains unknown to unauthorized parties. PP-ABE achieves unlinkability between different data items that flow through the collaborative cloud environment and preserves the privacy of both cloud users and cloud providers.
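As an illustration of the pseudonymization step described above (a keyed-hash stand-in, not the paper's actual PP-ABE construction; all names are hypothetical), a deterministic keyed pseudonym lets a broker test whether a user's attribute set satisfies an access policy without ever seeing cleartext attribute values.

```python
import hmac
import hashlib

def pseudonymize(attribute: str, key: bytes) -> str:
    """Deterministic keyed pseudonym for an attribute string.

    The broker only ever compares these opaque values, so it can match
    policies against attribute sets without learning the attributes.
    """
    return hmac.new(key, attribute.encode(), hashlib.sha256).hexdigest()

def policy_satisfied(policy: set, user_attrs: set, key: bytes) -> bool:
    # both sides pseudonymize under a shared key; satisfaction here is
    # modelled simply as the policy attributes being a subset of the
    # user's attributes
    policy_pseudo = {pseudonymize(a, key) for a in policy}
    user_pseudo = {pseudonymize(a, key) for a in user_attrs}
    return policy_pseudo <= user_pseudo
```

Note that deterministic pseudonyms like these remain linkable across requests; the unlinkability the abstract claims requires the full PP-ABE scheme rather than this sketch.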
Precise Analysis of Purpose Limitation in Data Flow Diagrams
Data Flow Diagrams (DFDs) are primarily used for modelling functional properties of a system. In recent work, it was shown that DFDs can also be used to model non-functional properties, such as security and privacy properties, if they are annotated with appropriate security- and privacy-related information. An important privacy principle one may wish to model in this way is purpose limitation. However, previous work on privacy-aware DFDs (PA-DFDs) considers purpose limitation only superficially, without explaining how the purposes of DFD activators and flows ought to be specified, checked, or inferred. In this paper, we define a rigorous formal framework for (1) annotating DFDs with purpose labels and privacy signatures, (2) checking the consistency of labels and signatures, and (3) inferring labels from signatures. We implement our theoretical framework in a proof-of-concept tool consisting of a domain-specific language (DSL) for specifying privacy signatures and algorithms for checking and inferring purpose labels from such signatures. Finally, we evaluate our framework and tool through a case study based on a DFD from the privacy literature.
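To make the purpose-limitation check concrete, here is a toy model (hypothetical names, far simpler than the paper's DSL): each node's signature lists the purposes it may process data for, each flow carries a purpose label, and a flow is consistent only if its label is permitted at both endpoints.

```python
# Toy purpose-limitation check over a DFD modelled as labelled flows.
# Signatures map each node to the set of purposes it may process data
# for; flows are (source, destination, purpose) triples.

signatures = {
    "patient_form": {"care", "billing"},
    "ehr_store":    {"care", "billing", "research"},
    "ad_service":   {"marketing"},
}

flows = [
    ("patient_form", "ehr_store", "care"),        # consistent
    ("ehr_store", "ad_service", "marketing"),     # violates purpose limitation
]

def check(flows, signatures):
    """Return every flow whose purpose label is not allowed by the
    signature of both its source and its destination node."""
    violations = []
    for src, dst, purpose in flows:
        if purpose not in signatures[src] or purpose not in signatures[dst]:
            violations.append((src, dst, purpose))
    return violations
```

Here the second flow is flagged because `ehr_store` never collected data for the purpose `marketing`, which is the kind of inconsistency the paper's checking algorithm detects (and its inference algorithm avoids by deriving labels from signatures directly).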
Security issues and defences for Internet of Things
The Internet of Things (IoT) aims to link billions of devices over the internet and other heterogeneous networks to share information. However, the issues of security in IoT environments are more challenging than on the ordinary Internet. A vast number of devices are exposed to attackers, and some of those devices hold sensitive personal and confidential data. For example, sensitive data flows such as those from autonomous vehicles, patient life-support devices, and traffic systems in smart cities are of particular concern to security researchers. The IoT architecture needs to handle security and privacy requirements such as authentication, access control, privacy, and confidentiality.
This thesis presents the architecture of IoT and its security issues. Additionally, we introduce the concept of blockchain technology, and the role of blockchain in different security aspects of IoT is discussed through a literature review. In a case study of Mirai, we explain how a Snort- and iptables-based approach can be used to prevent an IoT botnet from finding IoT devices by port scanning.
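The detection idea behind such a defence can be illustrated with a few lines of log analysis (a hypothetical sketch; the thesis itself works with Snort rules and iptables): a source that probes many distinct ports in a window is flagged as a scanner, which is the behaviour Mirai exhibits while hunting for vulnerable telnet and other IoT services.

```python
from collections import defaultdict

def detect_port_scans(events, threshold=10):
    """Flag source IPs that touch many distinct destination ports.

    `events` is an iterable of (src_ip, dst_port) pairs, e.g. parsed
    from connection logs. A source probing `threshold` or more distinct
    ports is treated as a scanner and returned for blocking (in the
    thesis's setting, via an iptables drop rule).
    """
    ports_by_src = defaultdict(set)
    for src, port in events:
        ports_by_src[src].add(port)
    return {src for src, ports in ports_by_src.items() if len(ports) >= threshold}
```

The threshold-on-distinct-ports heuristic is the same signal a Snort port-scan preprocessor keys on; the sketch simply makes it explicit over a batch of log events.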
Data transfers between the EU and US : the impact of schrems I and schrems II for cross-border data flows, privacy, and national security
This dissertation seeks to outline the implications of the CJEU judgment in Case C-311/18
Data Protection Commissioner v. Facebook Ireland Ltd and Maximillian Schrems (Schrems II)
on international data transfers, particularly for data transfers between the European Union and
the United States. The Schrems II judgment has invalidated the Privacy Shield, making it the
second data transfer mechanism between the EU and the US that the CJEU strikes down. It
also leaves Standard Contractual Clauses (SCCs) as one of the only options for data transfers,
creating significant burdens for companies/organizations to assess the laws and practices of
third countries to be able to transfer data. The Schrems II decision, without a doubt, will change
the relationship between global data flows and national security, and we have already started
to see the legal uncertainties brought forward by the case. This dissertation aims to give an
overview of the history of data protection laws in both the EU and the US, including differences
in their approaches to data protection. It then examines the two Schrems cases and the
invalidated transfer mechanisms, as well as the legal landscape for transfers after the CJEU's last
decision. Lastly, it discusses the impact of the decision on cross-border data flows, data
privacy, surveillance, and national security, while trying to chart a path forward by examining
possible solutions for the continuance of data transfers.
GDPR: Governance implications for regimes outside the EU
It is estimated that as of 2017 around 120 nations around the globe had legislation to protect personal data, with at least another 30 in train. Many of the early regimes (dating back to the 1980s and 90s) reflect the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (1980, updated 2013). However, there are increasing concerns that these guidelines may no longer be fit for purpose, given recent breaches of data security and privacy. The EU's General Data Protection Regulation (GDPR) (2016) implements a reformed data privacy regime. Tellingly, some of the new and pending privacy regulations elsewhere reflect the GDPR, a characteristic that suggests much about the impact of international trade. Two questions arise: first, how is the GDPR likely to affect and influence governance of organisations, not only those domiciled in the EU, but also those trading with the Union or having a presence there? Second, compared to the GDPR, what gaps are there in other existing privacy regimes, and what are the implications for the governance of those organisations and their risk management strategies? This paper compares the GDPR with the privacy regimes in place in New Zealand and Australia (the first of which has GDPR "approved country status" for receipt of data) and attempts to answer the questions above, thus providing a focus for empirical research. As such, the paper provides insight into the impact of data privacy and security legislative reform on corporate governance, strategy, and risk management beyond the EU in its reach to far distant regions. © The Authors, 2018. All Rights Reserved. Proceedings of the 14th European Conference on Management, Leadership and Governance, ECMLG 201