    Privacy in the Age of Contact Tracing: An Analysis of Contact Tracing Apps in Different Statutory and Disease Frameworks

    The Covid-19 pandemic has affected the lives of virtually everyone on the globe. One approach to slowing the spread of the disease is contact tracing, facilitated by our internet-connected smartphones. Different nations and states have partnered to develop a variety of contact tracing apps that use different technologies and architectures. This paper investigates how five contact tracing apps—Germany’s Corona-Warn-App, Israel’s HaMagen, North Dakota’s Care19 Diary and Alert apps, and India’s Aarogya Setu—fare under privacy-oriented statutory frameworks, in order to understand the design choices and public health implications shaped by these statutes. The three statutes—the Health Insurance Portability and Accountability Act, the California Consumer Privacy Act, and the European Union’s General Data Protection Regulation—provide different incentives to app developers across eight categories of design choices: notice and consent, consent requirements for medical data disclosed to third parties, location-identifying technologies, data profiles and data collection, minimizing the categories of data collected, data sale and sharing with nonresearch third parties, third-party and researcher access to data, and affirmative user rights. Each framework balances incentives to app developers against the need for governments to respond to pressing emergencies such as public health crises. Some of the incentives in each framework end up favoring less privacy-protective design choices, whereas other provisions make it harder for public health authorities to respond flexibly to crises. Finally, this paper investigates how these frameworks would fare under different disease variables by applying the analysis above to three diseases that could require contact tracing: SARS, Ebola, and HIV. We conclude that the disease variables themselves affect whether the balance tilts toward public health or privacy, and that the statutes give varying levels of flexibility to cater to more pressing emergencies.

    Healthy Data Protection

    Modern medicine is evolving at tremendous speed. On a daily basis, we learn about new treatments, drugs, medical devices, and diagnoses. Both established technology companies and start-ups focus on health-related products and services in competition with traditional healthcare businesses. Telemedicine and electronic health records have the potential to improve the effectiveness of treatments significantly. Progress in the medical field depends above all on data, specifically health information. Physicians, researchers, and developers need health information to help patients by improving diagnoses, customizing treatments, and finding new cures. Yet lawmakers and policymakers are currently more focused on the fact that health information can also be used to harm individuals. Even after the outbreak of the COVID-19 pandemic (which occurred after the manuscript for this article was largely finalized), California Attorney General Becerra made a point of announcing that he would not delay enforcement of the California Consumer Privacy Act (“CCPA”), which his office estimated imposes a $55 billion cost (approximately 1.8% of California Gross State Product) for initial compliance, not including the costs of ongoing compliance, responses to data subject requests, and litigation. Risks resulting from health information processing are very real. Contact tracing and quarantines in response to SARS, MERS, and COVID-19 outbreaks curb civil liberties with effects similar to those of law enforcement investigations, arrests, and imprisonment. Even outside the unusual circumstances of a global pandemic, employers or insurance companies may disfavor individuals with pre-existing health conditions in connection with job offers and promotions as well as coverage and eligibility decisions. Some diseases carry a negative stigma in social circumstances. To reduce the risks of such harms and protect individual dignity, governments around the world regulate the collection, use, and sharing of health information with ever-stricter laws. European countries have generally prohibited the processing of personal data, subject to limited exceptions, which companies have to identify and then document or apply for. The General Data Protection Regulation (“GDPR”) that took effect in 2018 confirms and amplifies a rigid regulatory regime first introduced in the German state of Hessen in 1970, and it demands that organizations minimize the amount of data they collect, use, share, and retain. Healthcare and healthtech organizations have struggled to comply with this regime and have found EU data protection laws fundamentally hostile to data-driven progress in medicine. The United States, on the other hand, has traditionally relied on sector- and harm-specific laws to protect privacy, including data privacy and security rules under the federal Health Insurance Portability and Accountability Act (“HIPAA”) and numerous state laws, including the Confidentiality of Medical Information Act (“CMIA”) in California, which specifically address the collection and use of health information. So long as organizations observe the specific restrictions and prohibitions in sector-specific privacy laws, they may collect, use, and share health information. As a default rule in the United States, businesses are generally permitted to process personal information, including health information.
Yet, recently, extremely broad and complex privacy laws have been proposed or enacted in some states, including the California Consumer Privacy Act of 2018 (“CCPA”), which have the potential to render compliance with data privacy laws impractical for most businesses, including those in the healthcare and healthtech sectors. Meanwhile, the People’s Republic of China is encouraging and incentivizing data-driven research and development by Chinese companies, including in the healthcare sector; its data-related legislation focuses on cybersecurity and on securing government access to data, and much less on individual privacy interests. In Europe and the United States, the political pendulum has swung too far in the direction of ever more rigid data regulation and privacy laws, at the expense of potential benefits through medical progress. This is literally unhealthy. Governments, businesses, and other organizations need to collect, use, and share more personal health information, not less. The potential benefits of health data processing far outweigh privacy risks, which can be better tackled by harm-specific laws. If discrimination by employers and insurance companies is a concern, then lawmakers and law enforcement agencies need to focus on anti-discrimination rules for employers and insurance companies, rather than prohibiting or restricting the processing of personal data, which does not per se harm anyone. Allowing data processing only under specific conditions significantly hinders medical progress by slowing down treatments, referrals, research, and development. It also prevents the use of medical data as a tool for averting dangers to the public. Data “anonymization” and requirements for specific consent based on overly detailed privacy notices do not protect patient privacy effectively, and they unnecessarily complicate the processing of health data for medical purposes. Property rights to personal data offer no solutions. Even if individuals - not the companies creating databases - were originally granted property rights to their own data, this would not ultimately benefit individuals. Given that transfer and exclusion rights are at the core of property regimes, data property rights would threaten information freedom and privacy alike: after an individual sells her data, the buyer and new owner could exercise his data property rights to enjoin her and her friends and family from continued use of her personal data. Physicians, researchers, and developers would not benefit either; they would have to deal with property rights in addition to privacy and medical confidentiality requirements. Instead of overregulating data processing or creating new property rights in data, lawmakers should require and incentivize organizations to earn and maintain the trust of patients and other data subjects, and penalize organizations that use data in specifically prohibited ways to harm individuals. Electronic health records, improved notice and consent mechanisms, and clear legal frameworks will promote medical progress, reduce risks of human error, lower costs, and make data processing and sharing more reliable. We need fewer laws like the GDPR or the CCPA that discourage organizations from collecting, using, retaining, and sharing personal information. Physicians, researchers, developers, drug companies, medical device manufacturers, and governments urgently need better and increased access to personal health information. The future of medicine offers enormous opportunities.
It depends on trust and healthy data protection. Some degree of data regulation is necessary, but the dose makes the poison. Laws that require or intend to promote the minimization of data collection, use, and sharing may end up killing more patients than hospital germs do. In this article, I promote a view that is decidedly different from that supported by the vast majority of privacy scholars, politicians, the media, and the broader zeitgeist in Europe and the United States. I argue for a healthier balance between data access and data protection needs in the interest of patients’ health and privacy, and I strive to identify ways to protect health data privacy without excessively hindering healthcare and medical progress. After an introduction (I), I examine current approaches to data protection regulation, privacy law, and the protection of patient confidentiality (II), risks associated with the processing of health data (III), needs to protect patient confidence (IV), risks for healthcare and medical progress (V), and possible solutions (VI). I conclude with an outlook and a call for healthier approaches to data protection (VII).

    Privacy-Preserving Chaotic Extreme Learning Machine with Fully Homomorphic Encryption

    Machine learning and deep learning models require large amounts of data for training, and in some scenarios this data is sensitive, such as customer information, which organizations may be hesitant to outsource for model building. Privacy-preserving techniques such as differential privacy, homomorphic encryption, and secure multi-party computation can be integrated with machine learning and deep learning algorithms to protect both the data and the model. In this paper, we propose a Chaotic Extreme Learning Machine and its encrypted form using Fully Homomorphic Encryption, in which the weights and biases are generated using a logistic map instead of a uniform distribution. Our proposed method performs better than or comparably to the traditional Extreme Learning Machine on most of the datasets.
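
    The core idea, replacing uniformly sampled hidden-layer parameters with values from a chaotic logistic map, is simple to illustrate. The following Python sketch shows a minimal plaintext chaotic ELM under common assumptions (a sigmoid activation and pseudoinverse-based output weights); the map parameter r, the seed, and the rescaling to (-1, 1) are illustrative choices, and the paper's fully homomorphic encryption layer is omitted.

        # A minimal sketch of a chaotic ELM, assuming the standard ELM
        # formulation; r, seed, and the sigmoid are illustrative assumptions,
        # and the FHE layer from the paper is not reproduced here.
        import numpy as np

        def logistic_map_sequence(n, r=3.99, seed=0.1):
            """Generate n chaotic values in (0, 1) with the logistic map."""
            x, out = seed, np.empty(n)
            for i in range(n):
                x = r * x * (1.0 - x)
                out[i] = x
            return out

        def train_chaotic_elm(X, y, hidden=64):
            """Fit an ELM whose hidden weights and biases are chaotic."""
            n_features = X.shape[1]
            vals = logistic_map_sequence(hidden * n_features + hidden)
            vals = 2.0 * vals - 1.0  # rescale (0, 1) -> (-1, 1), like uniform init
            W = vals[:hidden * n_features].reshape(n_features, hidden)
            b = vals[hidden * n_features:]
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer activations
            beta = np.linalg.pinv(H) @ y            # least-squares output weights
            return W, b, beta

        def predict_elm(X, W, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return H @ beta

        # Usage on synthetic data.
        rng = np.random.default_rng(42)
        X = rng.normal(size=(200, 5))
        y = (X.sum(axis=1) > 0).astype(float)
        W, b, beta = train_chaotic_elm(X, y)
        accuracy = ((predict_elm(X, W, b, beta) > 0.5) == y).mean()

    Because the chaotic sequence is fully determined by r and the seed, the hidden layer is reproducible without storing the weights, while the least-squares output step is unchanged from the traditional ELM.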

    Group-Level Frameworks for Data Ethics, Privacy, Safety and Security in Digital Environments

    In today's digital age, the widespread collection, utilization, and sharing of personal data are challenging our conventional beliefs about privacy and information security. This thesis will explore the boundaries of conventional privacy and security frameworks and investigate new methods to handle online privacy by integrating groups. Additionally, we will examine approaches to monitoring the types of information gathered on individuals to tackle transparency concerns in the data broker and data processor sector. We aim to challenge traditional notions of privacy and security to encourage innovative strategies for safeguarding them in our interconnected, dispersed digital environment. This thesis uses a multi-disciplinary approach to complex systems, drawing from various fields such as data ethics, legal theory, and philosophy. Our methods include complex systems modeling, network analysis, data science, and statistics. As a first step, we investigate the limits of individual consent frameworks in online social media platforms. We develop new security settings, called distributed consent, that can be used in an online social network or coordinated across online platforms. We then model the levels of observability of individuals on the platform(s) to measure the effectiveness of the new security settings against surveillance from third parties. Distributed consent can help to protect individuals online from surveillance, but it requires a high coordination cost on the part of the individual. Users must also decide whether to protect their privacy from third parties and network neighbors by disclosing security settings or taking on the burden of coordinating security on single and multiple platforms. However, the coordination burden may be more appropriate for systems-level regulation. We then explore how groups of individuals can work together to protect themselves from the harms of misinformation on online social networks. Social media users are not equally susceptible to all types of misinformation. Further, diverse groups of social media communities can help protect one another from misinformation by correcting each other's blind spots. We highlight the importance of group diversity in network dynamics and explore how natural diversity within groups can provide protection rather than relying on new technologies such as distributed consent settings. Finally, we investigate methods to interrogate what types of personal data are collected by third parties and measure the risks and harms associated with aggregating personal data. We introduce methods that provide transparency into how modern data collection practices pose risks to data subjects online. We hope that the collection of these results provides a humble step toward revealing gaps in privacy and security frameworks and promoting new solutions for the digital age.
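
    As a rough illustration of the observability modeling, the Python sketch below treats a node as visible to third parties whenever it, or any of its network neighbors, has not adopted protective settings; the example graph and the consent assignments are hypothetical, and the model developed in the thesis is considerably richer.

        # A simplified observability model under distributed consent: a node
        # stays hidden only if it and all of its neighbors opt in. The graph
        # and the `protected` flags are hypothetical.
        import networkx as nx

        def observable_nodes(G, protected):
            """Return the nodes a third party can still observe."""
            visible = set()
            for node in G.nodes:
                if not protected.get(node, False):
                    visible.add(node)   # the node itself leaks its data
                elif any(not protected.get(nb, False) for nb in G.neighbors(node)):
                    visible.add(node)   # an unprotected neighbor leaks it
            return visible

        G = nx.karate_club_graph()
        protected = {n: n % 2 == 0 for n in G.nodes}  # hypothetical settings
        print(len(observable_nodes(G, protected)), "of",
              G.number_of_nodes(), "nodes remain observable")

    Even such a toy model makes the coordination cost concrete: a node's privacy depends on its neighbors' settings as well as its own, so individual opt-in alone typically leaves most nodes observable.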

    Security by envelopment - a novel approach to data-security-oriented configuration of lightweight-automation systems

    Organisations’ increasing adoption of lightweight automation, such as robotic process automation (RPA), raises concerns about the robustness and security of the associated systems, and data-security concerns become further accentuated when tools of this sort are deployed to handle potentially sensitive data. However, literature on designing these tools in a manner that mitigates risks to organisational data security has remained scarce. This paper addresses that gap by presenting a study in which RPA was successfully designed for a process wherein the software robot handles sensitive personal data. Informed by work on the mindlessness of automation, sociotechnical envelopment, and security by design, this empirical study, employing action design research at Wärtsilä Corporation, pointed to three design principles related to envelopment, access rights, and audit trails. By adhering to these, Wärtsilä created envelopes around the robot that afford the automation’s safe operation and processing of the sensitive data. This research advances the theory of sociotechnical envelopment’s design and deployment by introducing security by envelopment as a novel approach to the security-oriented envelopment of mindless automation agents. The paper also discusses the practical utility of the designed artefact, in terms of both design and evaluation.
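
    The three principles can be pictured as a thin wrapper, an "envelope", around each robot step. The Python sketch below is a minimal illustration rather than Wärtsilä's implementation; the action whitelist, role names, and logger are assumptions introduced here for illustration.

        # A hypothetical envelope for an RPA step: the action whitelist
        # enforces envelopment, the role check enforces access rights, and
        # the logger records an audit trail. None of this reflects the
        # actual Wärtsilä artefact.
        import logging
        from datetime import datetime, timezone
        from functools import wraps

        logging.basicConfig(level=logging.INFO)
        audit = logging.getLogger("rpa.audit")

        ALLOWED_ACTIONS = {"read_invoice", "update_record"}  # the envelope

        def enveloped(action, required_role):
            """Wrap a robot step with access-rights checks and audit logging."""
            def decorator(fn):
                @wraps(fn)
                def wrapper(robot_role, *args, **kwargs):
                    if action not in ALLOWED_ACTIONS:
                        raise PermissionError(f"{action} is outside the envelope")
                    if robot_role != required_role:
                        raise PermissionError(f"role {robot_role!r} may not {action}")
                    result = fn(robot_role, *args, **kwargs)
                    audit.info("%s performed %s at %s", robot_role, action,
                               datetime.now(timezone.utc).isoformat())
                    return result
                return wrapper
            return decorator

        @enveloped("read_invoice", required_role="finance_bot")
        def read_invoice(robot_role, invoice_id):
            return {"id": invoice_id}  # placeholder for sensitive-data handling

        read_invoice("finance_bot", "INV-001")

    The design point is that the robot itself stays "mindless": safety comes from the envelope around it, which constrains what it can touch and records what it did.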

    Behavioral authentication for security and safety

    The issues of system security and safety can be analyzed in an integrated way from the perspective of behavioral appropriateness: whether a system is secure or safe can be judged by whether the behavior of certain agent(s) is appropriate. Specifically, an appropriate behavior involves the right agent performing the right actions at the right time under certain conditions. According to different levels of appropriateness and degrees of custody, behavioral authentication can then be graded into three levels, i.e., the authentication of behavioral Identity, Conformity, and Benignity. In a broad sense, for security and safety, behavioral authentication is not only an innovative and promising method, owing to its inherent advantages, but also a critical and fundamental problem, owing to the ubiquity of behavior generation and the necessity of behavior regulation in any system. Based on this classification, this review provides a comprehensive examination of the background and preliminaries of behavioral authentication. It further summarizes existing research according to focus areas and characteristics, analyzes the challenges confronting current behavioral authentication methods, and discusses potential research directions to promote the diversified and integrated development of behavioral authentication.
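
    As a toy illustration of the three grades, the Python sketch below checks identity against a policy table, conformity in terms of permitted actions and time windows, and benignity via a hypothetical harm score; the policy contents and the threshold are illustrative assumptions, not drawn from the review.

        # A rule-based toy: grade a behavior through Identity, Conformity,
        # and Benignity checks in turn. The policy table and the harm-score
        # threshold are made up for illustration.
        from datetime import time

        POLICY = {
            "backup_service": {
                "actions": {"read_db", "write_archive"},
                "window": (time(1, 0), time(4, 0)),  # permitted time of day
            }
        }

        def authenticate_behavior(agent, action, at, harm_score):
            rules = POLICY.get(agent)
            if rules is None:
                return "fail: identity"      # not a known, legitimate agent
            start, end = rules["window"]
            if action not in rules["actions"] or not (start <= at <= end):
                return "fail: conformity"    # wrong action or wrong time
            if harm_score > 0.5:             # e.g. output of an anomaly model
                return "fail: benignity"     # right action, harmful effect
            return "pass"

        print(authenticate_behavior("backup_service", "read_db", time(2, 30), 0.1))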

    cii Student Papers - 2021

    In this collection of papers, we, the Research Group Critical Information Infrastructures (cii) at the Karlsruhe Institute of Technology, present nine selected student research articles contributing to the design, development, and evaluation of critical information infrastructures. During our courses, students mostly work in groups on problems related to sociotechnical challenges in the realm of (critical) information systems. The student papers come from four cii courses, namely Emerging Trends in Digital Health, Emerging Trends in Internet Technologies, Critical Information Infrastructures, and Digital Health, held in the winter term of 2020 and the summer term of 2021.